Changeset 191683 in webkit

Timestamp:
Oct 28, 2015, 11:36:02 AM
Author:
mark.lam@apple.com
Message:

Update FTL to support UntypedUse operands for op_sub.
https://bugs.webkit.org/show_bug.cgi?id=150562

Reviewed by Geoffrey Garen.

Source/JavaScriptCore:

  • assembler/MacroAssemblerARM64.h:
  - Make the dataTempRegister and memoryTempRegister public so that we can move input registers out of them if needed.

  • ftl/FTLCapabilities.cpp:
  (JSC::FTL::canCompile):
  - We can now compile ArithSub.

  • ftl/FTLCompile.cpp:
  - Added BinarySnippetRegisterContext to shuffle registers into the state expected by the baseline snippet generator. This includes:
    1. Making sure that the input and output registers are not in the tag or scratch registers.
    2. Loading the tag registers with the expected values.
    3. Restoring the registers to their original values on return.
  - Added code to implement the ArithSub inline cache.

  • ftl/FTLInlineCacheDescriptor.h:
  (JSC::FTL::ArithSubDescriptor::ArithSubDescriptor):
  (JSC::FTL::ArithSubDescriptor::leftType):
  (JSC::FTL::ArithSubDescriptor::rightType):

  • ftl/FTLInlineCacheSize.cpp:
  (JSC::FTL::sizeOfArithSub):

  • ftl/FTLInlineCacheSize.h:

  • ftl/FTLLowerDFGToLLVM.cpp:
  (JSC::FTL::DFG::LowerDFGToLLVM::compileArithAddOrSub):
  - Added handling of UntypedUse operands for the ArithSub case.

  • ftl/FTLState.h:

  • jit/GPRInfo.h:
  (JSC::GPRInfo::reservedRegisters):

  • jit/JITSubGenerator.h:
  (JSC::JITSubGenerator::generateFastPath):
  - When the result register is the same as one of the input registers, the fast path would corrupt that input even when we subsequently need to take the slow path. We now move the input into the scratch register, operate on that instead, and move the result into the result register only after the fast path has succeeded.

  • tests/stress/op_sub.js:
  (o1.valueOf):
  (runTest):
  - Added some debugging tools: a flag for verbose logging, and a flag to abort eagerly on the first failure.
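The motivation for the change can be illustrated from the JavaScript side. This minimal sketch (operand shapes assumed, loosely modeled on the o1.valueOf pattern in tests/stress/op_sub.js) shows the operand classes the FTL now has to handle for op_sub: int32 operands that stay on the inline cache's fast path, object operands whose valueOf forces the untyped slow path, and an int32 overflow that must bail out to produce a double:

```javascript
// Object operand: subtraction must call valueOf(), so the inline
// cache's fast path fails its int32/number type checks and falls
// through to the operationValueSub slow path.
var o1 = { valueOf: function() { return 10; } };

function sub(x, y) { return x - y; }

sub(42, 2);           // int32 - int32: handled on the fast path
sub(o1, 2);           // untyped operand: slow path calls valueOf(), yields 8
sub(-2147483648, 1);  // int32 overflow: result must be the double -2147483649
```

All three calls must return the same values whether the function runs in the interpreter, the baseline JIT, the DFG, or the FTL; previously the FTL simply could not compile ArithSub with UntypedUse operands at all.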

LayoutTests:

  • js/regress/ftl-sub-expected.txt: Added.
  • js/regress/ftl-sub.html: Added.
  • js/regress/script-tests/ftl-sub.js: Added.

(o1.valueOf):
(o2.valueOf):
(foo):
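The body of the added regression test appears in this changeset only as its symbol list. A hypothetical reconstruction of its shape (the names foo, o1, and o2 come from the symbols above; the return values and iteration count are assumptions) would look like:

```javascript
var o1 = { valueOf: function() { return 10; } };
var o2 = { valueOf: function() { return 2.5; } };

function foo(x, y) { return x - y; }

// Loop enough times that foo tiers up through the baseline JIT and
// DFG into the FTL, exercising the new UntypedUse ArithSub path.
var result;
for (var i = 0; i < 100000; i++)
    result = foo(o1, o2);
if (result !== 7.5)
    throw "FAIL: expected 7.5, got " + result;
```

The point of using objects with valueOf is that the operands are never int32 or double, so the subtraction must take the untyped path on every iteration, at every tier.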

Location:
trunk
Files:
3 added
13 edited

  • trunk/LayoutTests/ChangeLog

    r191681 r191683  
     12015-10-28  Mark Lam  <mark.lam@apple.com>
     2
     3        Update FTL to support UntypedUse operands for op_sub.
     4        https://bugs.webkit.org/show_bug.cgi?id=150562
     5
     6        Reviewed by Geoffrey Garen.
     7
     8        * js/regress/ftl-sub-expected.txt: Added.
     9        * js/regress/ftl-sub.html: Added.
     10        * js/regress/script-tests/ftl-sub.js: Added.
     11        (o1.valueOf):
     12        (o2.valueOf):
     13        (foo):
     14
    1152015-10-28  Hunseop Jeong  <hs85.jeong@samsung.com>
    216
  • trunk/Source/JavaScriptCore/ChangeLog

    r191682 r191683  
     12015-10-28  Mark Lam  <mark.lam@apple.com>
     2
     3        Update FTL to support UntypedUse operands for op_sub.
     4        https://bugs.webkit.org/show_bug.cgi?id=150562
     5
     6        Reviewed by Geoffrey Garen.
     7
     8        * assembler/MacroAssemblerARM64.h:
     9        - make the dataTempRegister and memoryTempRegister public so that we can
     10          move input registers out of them if needed.
     11
     12        * ftl/FTLCapabilities.cpp:
     13        (JSC::FTL::canCompile):
     14        - We can now compile ArithSub.
     15
     16        * ftl/FTLCompile.cpp:
     17        - Added BinaryArithGenerationContext to shuffle registers into a state that is
     18          expected by the baseline snippet generator.  This includes:
     19          1. Making sure that the input and output registers are not in the tag or
     20             scratch registers.
     21          2. Loading the tag registers with expected values.
     22          3. Restoring the registers to their original value on return.
     23        - Added code to implement the ArithSub inline cache.
     24
     25        * ftl/FTLInlineCacheDescriptor.h:
     26        (JSC::FTL::ArithSubDescriptor::ArithSubDescriptor):
     27        (JSC::FTL::ArithSubDescriptor::leftType):
     28        (JSC::FTL::ArithSubDescriptor::rightType):
     29
     30        * ftl/FTLInlineCacheSize.cpp:
     31        (JSC::FTL::sizeOfArithSub):
     32        * ftl/FTLInlineCacheSize.h:
     33
     34        * ftl/FTLLowerDFGToLLVM.cpp:
     35        (JSC::FTL::DFG::LowerDFGToLLVM::compileArithAddOrSub):
     36        - Added handling for UnusedType for the ArithSub case.
     37
     38        * ftl/FTLState.h:
     39        * jit/GPRInfo.h:
     40        (JSC::GPRInfo::reservedRegisters):
     41
     42        * jit/JITSubGenerator.h:
     43        (JSC::JITSubGenerator::generateFastPath):
     44        - When the result is in the same as one of the input registers, we'll end up
     45          corrupting the input in fast path even if we determine that we need to go to
     46          the slow path.  We now move the input into the scratch register and operate
     47          on that instead and only move the result into the result register only after
     48          the fast path has succeeded.
     49
     50        * tests/stress/op_sub.js:
     51        (o1.valueOf):
     52        (runTest):
     53        - Added some debugging tools: flags for verbose logging, and eager abort on fail.
     54
    1552015-10-28  Mark Lam  <mark.lam@apple.com>
    256
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h

    r191130 r191683  
    3636
    3737class MacroAssemblerARM64 : public AbstractMacroAssembler<ARM64Assembler, MacroAssemblerARM64> {
     38public:
    3839    static const RegisterID dataTempRegister = ARM64Registers::ip0;
    3940    static const RegisterID memoryTempRegister = ARM64Registers::ip1;
     41
     42private:
    4043    static const ARM64Registers::FPRegisterID fpTempRegister = ARM64Registers::q31;
    4144    static const ARM64Assembler::SetFlags S = ARM64Assembler::S;
  • trunk/Source/JavaScriptCore/ftl/FTLCapabilities.cpp

    r191621 r191683  
    8585    case ArithAdd:
    8686    case ArithClz32:
     87    case ArithSub:
    8788    case ArithMul:
    8889    case ArithDiv:
     
    212213        // These are OK.
    213214        break;
    214     case ArithSub:
    215         if (node->result() == NodeResultJS)
    216             return CannotCompile;
    217         break;
    218215
    219216    case Identity:
  • trunk/Source/JavaScriptCore/ftl/FTLCompile.cpp

    r191602 r191683  
    3535#include "DFGCommon.h"
    3636#include "DFGGraphSafepoint.h"
     37#include "DFGOperations.h"
    3738#include "DataView.h"
    3839#include "Disassembler.h"
     
    4243#include "FTLThunks.h"
    4344#include "FTLUnwindInfo.h"
     45#include "JITSubGenerator.h"
    4446#include "LLVMAPI.h"
    4547#include "LinkBuffer.h"
     
    4850
    4951using namespace DFG;
     52
     53static RegisterSet usedRegistersFor(const StackMaps::Record&);
    5054
    5155static uint8_t* mmAllocateCodeSection(
     
    302306}
    303307
     308class BinarySnippetRegisterContext {
     309    // The purpose of this class is to shuffle registers to get them into the state
     310    // that baseline code expects so that we can use the baseline snippet generators i.e.
     311    //    1. ensure that the inputs and outputs are not in tag or scratch registers.
     312    //    2. tag registers are loaded with the expected values.
     313    //
     314    // We also need to:
     315    //    1. restore the input and tag registers to the values that LLVM put there originally.
     316    //    2. that is except when one of the input registers is also the result register.
     317    //       In this case, we don't want to trash the result, and hence, should not restore into it.
     318
     319public:
     320    BinarySnippetRegisterContext(ScratchRegisterAllocator& allocator, GPRReg& result, GPRReg& left, GPRReg& right)
     321        : m_allocator(allocator)
     322        , m_result(result)
     323        , m_left(left)
     324        , m_right(right)
     325        , m_origResult(result)
     326        , m_origLeft(left)
     327        , m_origRight(right)
     328    {
     329        m_allocator.lock(m_result);
     330        m_allocator.lock(m_left);
     331        m_allocator.lock(m_right);
     332
     333        RegisterSet inputRegisters = RegisterSet(m_left, m_right);
     334        RegisterSet inputAndOutputRegisters = RegisterSet(inputRegisters, m_result);
     335
     336        RegisterSet reservedRegisters;
     337        for (GPRReg reg : GPRInfo::reservedRegisters())
     338            reservedRegisters.set(reg);
     339
     340        if (reservedRegisters.get(m_left))
     341            m_left = m_allocator.allocateScratchGPR();
     342        if (reservedRegisters.get(m_right))
     343            m_right = m_allocator.allocateScratchGPR();
     344        if (!inputRegisters.get(m_result) && reservedRegisters.get(m_result))
     345            m_result = m_allocator.allocateScratchGPR();
     346       
     347        if (!inputAndOutputRegisters.get(GPRInfo::tagMaskRegister))
     348            m_savedTagMaskRegister = m_allocator.allocateScratchGPR();
     349        if (!inputAndOutputRegisters.get(GPRInfo::tagTypeNumberRegister))
     350            m_savedTagTypeNumberRegister = m_allocator.allocateScratchGPR();
     351    }
     352
     353    void initializeRegisters(CCallHelpers& jit)
     354    {
     355        if (m_left != m_origLeft)
     356            jit.move(m_origLeft, m_left);
     357        if (m_right != m_origRight)
     358            jit.move(m_origRight, m_right);
     359
     360        if (m_savedTagMaskRegister != InvalidGPRReg)
     361            jit.move(GPRInfo::tagMaskRegister, m_savedTagMaskRegister);
     362        if (m_savedTagTypeNumberRegister != InvalidGPRReg)
     363            jit.move(GPRInfo::tagTypeNumberRegister, m_savedTagTypeNumberRegister);
     364
     365        jit.emitMaterializeTagCheckRegisters();
     366    }
     367
     368    void restoreRegisters(CCallHelpers& jit)
     369    {
     370        if (m_origLeft != m_left && m_origLeft != m_origResult)
     371            jit.move(m_left, m_origLeft);
     372        if (m_origRight != m_right && m_origRight != m_origResult)
     373            jit.move(m_right, m_origRight);
     374       
     375        if (m_savedTagMaskRegister != InvalidGPRReg)
     376            jit.move(m_savedTagMaskRegister, GPRInfo::tagMaskRegister);
     377        if (m_savedTagTypeNumberRegister != InvalidGPRReg)
     378            jit.move(m_savedTagTypeNumberRegister, GPRInfo::tagTypeNumberRegister);
     379    }
     380
     381private:
     382    ScratchRegisterAllocator& m_allocator;
     383
     384    GPRReg& m_result;
     385    GPRReg& m_left;
     386    GPRReg& m_right;
     387
     388    GPRReg m_origResult;
     389    GPRReg m_origLeft;
     390    GPRReg m_origRight;
     391
     392    GPRReg m_savedTagMaskRegister { InvalidGPRReg };
     393    GPRReg m_savedTagTypeNumberRegister { InvalidGPRReg };
     394};
     395
     396static void generateArithSubICFastPath(
     397    State& state, CodeBlock* codeBlock, GeneratedFunction generatedFunction,
     398    StackMaps::RecordMap& recordMap, ArithSubDescriptor& ic)
     399{
     400    VM& vm = state.graph.m_vm;
     401    size_t sizeOfIC = sizeOfArithSub();
     402
     403    StackMaps::RecordMap::iterator iter = recordMap.find(ic.stackmapID());
     404    if (iter == recordMap.end())
     405        return; // It was optimized out.
     406
     407    Vector<StackMaps::RecordAndIndex>& records = iter->value;
     408
     409    RELEASE_ASSERT(records.size() == ic.m_slowPathStarts.size());
     410
     411    for (unsigned i = records.size(); i--;) {
     412        StackMaps::Record& record = records[i].record;
     413
     414        CCallHelpers fastPathJIT(&vm, codeBlock);
     415
     416        GPRReg result = record.locations[0].directGPR();
     417        GPRReg left = record.locations[1].directGPR();
     418        GPRReg right = record.locations[2].directGPR();
     419
     420        RegisterSet usedRegisters = usedRegistersFor(record);
     421        ScratchRegisterAllocator allocator(usedRegisters);
     422
     423        BinarySnippetRegisterContext context(allocator, result, left, right);
     424
     425        GPRReg scratchGPR = allocator.allocateScratchGPR();
     426        FPRReg leftFPR = allocator.allocateScratchFPR();
     427        FPRReg rightFPR = allocator.allocateScratchFPR();
     428        FPRReg scratchFPR = InvalidFPRReg;
     429
     430        JITSubGenerator gen(JSValueRegs(result), JSValueRegs(left), JSValueRegs(right), ic.leftType(), ic.rightType(), leftFPR, rightFPR, scratchGPR, scratchFPR);
     431
     432        auto numberOfBytesUsedToPreserveReusedRegisters =
     433            allocator.preserveReusedRegistersByPushing(fastPathJIT, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
     434
     435        context.initializeRegisters(fastPathJIT);
     436        gen.generateFastPath(fastPathJIT);
     437
     438        gen.endJumpList().link(&fastPathJIT);
     439        context.restoreRegisters(fastPathJIT);
     440        allocator.restoreReusedRegistersByPopping(fastPathJIT, numberOfBytesUsedToPreserveReusedRegisters,
     441            ScratchRegisterAllocator::ExtraStackSpace::SpaceForCCall);
     442        CCallHelpers::Jump done = fastPathJIT.jump();
     443
     444        gen.slowPathJumpList().link(&fastPathJIT);
     445        context.restoreRegisters(fastPathJIT);
     446        allocator.restoreReusedRegistersByPopping(fastPathJIT, numberOfBytesUsedToPreserveReusedRegisters,
     447            ScratchRegisterAllocator::ExtraStackSpace::SpaceForCCall);
     448        CCallHelpers::Jump slowPathStart = fastPathJIT.jump();
     449
     450        char* startOfIC = bitwise_cast<char*>(generatedFunction) + record.instructionOffset;
     451        generateInlineIfPossibleOutOfLineIfNot(state, vm, codeBlock, fastPathJIT, startOfIC, sizeOfIC, "ArithSub inline cache fast path", [&] (LinkBuffer& linkBuffer, CCallHelpers&, bool) {
     452            linkBuffer.link(done, CodeLocationLabel(startOfIC + sizeOfIC));
     453            state.finalizer->sideCodeLinkBuffer->link(ic.m_slowPathDone[i], CodeLocationLabel(startOfIC + sizeOfIC));
     454           
     455            linkBuffer.link(slowPathStart, state.finalizer->sideCodeLinkBuffer->locationOf(ic.m_slowPathStarts[i]));
     456        });
     457    }
     458}
    304459
    305460static RegisterSet usedRegistersFor(const StackMaps::Record& record)
     
    461616        || !state.putByIds.isEmpty()
    462617        || !state.checkIns.isEmpty()
     618        || !state.arithSubs.isEmpty()
    463619        || !state.lazySlowPaths.isEmpty()) {
    464620        CCallHelpers slowPathJIT(&vm, codeBlock);
     
    583739        }
    584740
     741        for (size_t i = state.arithSubs.size(); i--;) {
     742            ArithSubDescriptor& arithSub = state.arithSubs[i];
     743           
     744            if (verboseCompilationEnabled())
     745                dataLog("Handling ArithSub stackmap #", arithSub.stackmapID(), "\n");
     746           
     747            auto iter = recordMap.find(arithSub.stackmapID());
     748            if (iter == recordMap.end())
     749                continue; // It was optimized out.
     750           
     751            CodeOrigin codeOrigin = arithSub.codeOrigin();
     752            for (unsigned i = 0; i < iter->value.size(); ++i) {
     753                StackMaps::Record& record = iter->value[i].record;
     754                RegisterSet usedRegisters = usedRegistersFor(record);
     755
     756                GPRReg result = record.locations[0].directGPR();
     757                GPRReg left = record.locations[1].directGPR();
     758                GPRReg right = record.locations[2].directGPR();
     759
     760                arithSub.m_slowPathStarts.append(slowPathJIT.label());
     761
     762                callOperation(state, usedRegisters, slowPathJIT, codeOrigin, &exceptionTarget,
     763                    operationValueSub, result, left, right).call();
     764
     765                arithSub.m_slowPathDone.append(slowPathJIT.jump());
     766            }
     767        }
     768
    585769        for (unsigned i = state.lazySlowPaths.size(); i--;) {
    586770            LazySlowPathDescriptor& descriptor = state.lazySlowPaths[i];
     
    650834                state, codeBlock, generatedFunction, recordMap, state.checkIns[i],
    651835                sizeOfIn());
     836        }
     837        for (unsigned i = state.arithSubs.size(); i--;) {
     838            ArithSubDescriptor& arithSub = state.arithSubs[i];
     839            generateArithSubICFastPath(state, codeBlock, generatedFunction, recordMap, arithSub);
    652840        }
    653841        for (unsigned i = state.lazySlowPaths.size(); i--;) {
  • trunk/Source/JavaScriptCore/ftl/FTLInlineCacheDescriptor.h

    r190885 r191683  
    124124};
    125125
     126class ArithSubDescriptor : public InlineCacheDescriptor {
     127public:
     128    ArithSubDescriptor(unsigned stackmapID, CodeOrigin codeOrigin, ResultType leftType, ResultType rightType)
     129        : InlineCacheDescriptor(stackmapID, codeOrigin, nullptr)
     130        , m_leftType(leftType)
     131        , m_rightType(rightType)
     132    {
     133    }
     134
     135    ResultType leftType() const { return m_leftType; }
     136    ResultType rightType() const { return m_rightType; }
     137   
     138    Vector<MacroAssembler::Label> m_slowPathStarts;
     139
     140private:
     141    ResultType m_leftType;
     142    ResultType m_rightType;
     143};
     144
    126145// You can create a lazy slow path call in lowerDFGToLLVM by doing:
    127146// m_ftlState.lazySlowPaths.append(
  • trunk/Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp

    r190672 r191683  
    129129}
    130130
     131size_t sizeOfArithSub()
     132{
     133#if CPU(ARM64)
     134#ifdef NDEBUG
     135    return 192; // ARM64 release.
     136#else
     137    return 288; // ARM64 debug.
     138#endif
     139#else // CPU(X86_64)
     140#ifdef NDEBUG
     141    return 184; // X86_64 release.
     142#else
     143    return 259; // X86_64 debug.
     144#endif
     145#endif
     146}
     147
    131148size_t sizeOfICFor(Node* node)
    132149{
  • trunk/Source/JavaScriptCore/ftl/FTLInlineCacheSize.h

    r190370 r191683  
    4747size_t sizeOfConstructForwardVarargs();
    4848size_t sizeOfIn();
     49size_t sizeOfArithSub();
    4950
    5051size_t sizeOfICFor(DFG::Node*);
  • trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp

    r191625 r191683  
    14771477            break;
    14781478        }
    1479            
     1479
     1480        case UntypedUse: {
     1481            if (!isSub) {
     1482                DFG_CRASH(m_graph, m_node, "Bad use kind");
     1483                break;
     1484            }
     1485           
     1486            unsigned stackmapID = m_stackmapIDs++;
     1487
     1488            if (Options::verboseCompilation())
     1489                dataLog("    Emitting ArithSub patchpoint with stackmap #", stackmapID, "\n");
     1490
     1491            LValue left = lowJSValue(m_node->child1());
     1492            LValue right = lowJSValue(m_node->child2());
     1493
     1494            // Arguments: id, bytes, target, numArgs, args...
     1495            LValue call = m_out.call(
     1496                m_out.patchpointInt64Intrinsic(),
     1497                m_out.constInt64(stackmapID), m_out.constInt32(sizeOfArithSub()),
     1498                constNull(m_out.ref8), m_out.constInt32(2), left, right);
     1499            setInstructionCallingConvention(call, LLVMAnyRegCallConv);
     1500
     1501            m_ftlState.arithSubs.append(ArithSubDescriptor(stackmapID, m_node->origin.semantic,
     1502                abstractValue(m_node->child1()).resultType(),
     1503                abstractValue(m_node->child2()).resultType()));
     1504
     1505            setJSValue(call);
     1506            break;
     1507        }
     1508
    14801509        default:
    14811510            DFG_CRASH(m_graph, m_node, "Bad use kind");
  • trunk/Source/JavaScriptCore/ftl/FTLState.h

    r191058 r191683  
    7979    SegmentedVector<PutByIdDescriptor> putByIds;
    8080    SegmentedVector<CheckInDescriptor> checkIns;
     81    SegmentedVector<ArithSubDescriptor> arithSubs;
    8182    SegmentedVector<LazySlowPathDescriptor> lazySlowPaths;
    8283    Vector<JSCall> jsCalls;
  • trunk/Source/JavaScriptCore/jit/GPRInfo.h

    r189575 r191683  
    2828
    2929#include "MacroAssembler.h"
     30#include <array>
    3031#include <wtf/PrintStream.h>
    3132
     
    399400    static const GPRReg tagTypeNumberRegister = X86Registers::r14;
    400401    static const GPRReg tagMaskRegister = X86Registers::r15;
     402    static const GPRReg scratchRegister = MacroAssembler::scratchRegister;
     403
    401404    // Temporary registers.
    402405    static const GPRReg regT0 = X86Registers::eax;
     
    501504    }
    502505
     506    static const std::array<GPRReg, 3>& reservedRegisters()
     507    {
     508        static const std::array<GPRReg, 3> reservedRegisters { {
     509            scratchRegister,
     510            tagTypeNumberRegister,
     511            tagMaskRegister,
     512        } };
     513        return reservedRegisters;
     514    }
     515   
    503516    static const unsigned InvalidIndex = 0xffffffff;
    504517};
     
    604617    static const GPRReg tagTypeNumberRegister = ARM64Registers::x27;
    605618    static const GPRReg tagMaskRegister = ARM64Registers::x28;
     619    static const GPRReg dataTempRegister = MacroAssembler::dataTempRegister;
     620    static const GPRReg memoryTempRegister = MacroAssembler::memoryTempRegister;
    606621    // Temporary registers.
    607622    static const GPRReg regT0 = ARM64Registers::x0;
     
    696711    }
    697712
     713    static const std::array<GPRReg, 4>& reservedRegisters()
     714    {
     715        static const std::array<GPRReg, 4> reservedRegisters { {
     716            dataTempRegister,
     717            memoryTempRegister,
     718            tagTypeNumberRegister,
     719            tagMaskRegister,
     720        } };
     721        return reservedRegisters;
     722    }
     723   
    698724    static const unsigned InvalidIndex = 0xffffffff;
    699725};
  • trunk/Source/JavaScriptCore/jit/JITSubGenerator.h

    r191241 r191683  
    2929#include "CCallHelpers.h"
    3030#include "ResultType.h"
     31#include "ScratchRegisterAllocator.h"
    3132
    3233namespace JSC {
    33    
     34
    3435class JITSubGenerator {
    3536public:
     
    6263        CCallHelpers::Jump rightNotInt = jit.branchIfNotInt32(m_right);
    6364
    64         jit.move(m_left.payloadGPR(), m_result.payloadGPR());
     65        jit.move(m_left.payloadGPR(), m_scratchGPR);
    6566        m_slowPathJumpList.append(
    66             jit.branchSub32(CCallHelpers::Overflow, m_right.payloadGPR(), m_result.payloadGPR()));
     67            jit.branchSub32(CCallHelpers::Overflow, m_right.payloadGPR(), m_scratchGPR));
    6768
    68         jit.boxInt32(m_result.payloadGPR(), m_result);
     69        jit.boxInt32(m_scratchGPR, m_result);
    6970
    7071        m_endJumpList.append(jit.jump());
     
    7576            return;
    7677        }
    77        
     78
    7879        leftNotInt.link(&jit);
    7980        if (!m_leftType.definitelyIsNumber())
  • trunk/Source/JavaScriptCore/tests/stress/op_sub.js

    r191290 r191683  
    2020// errors.
    2121
     22var verbose = false;
     23var abortOnFirstFail = false;
     24
    2225var o1 = {
    2326    valueOf: function() { return 10; }
     
    276279            for (var scenarioID = 0; scenarioID < scenarios.length; scenarioID++) {
    277280                var scenario = scenarios[scenarioID];
     281                if (verbose)
     282                    print("Testing " + test.name + ":" + scenario.name + " on iteration " + i + ": expecting " + scenario.expected);
     283
    278284                var result = testFunc(scenario.x, scenario.y);
    279285                if (result == scenario.expected)
     
    283289                if (!failedScenario[scenarioID]) {
    284290                    errorReport += "FAIL: " + test.name + ":" + scenario.name + " started failing on iteration " + i + ": expected " + scenario.expected + ", actual " + result + "\n";
     291                    if (abortOnFirstFail)
     292                        throw errorReport;
    285293                    failedScenario[scenarioID] = scenario;
    286294                }
     
    288296        }
    289297    } catch(e) {
     298        if (abortOnFirstFail)
     299            throw e; // Negate the catch by re-throwing.
    290300        errorReport += "Unexpected exception: " + e + "\n";
    291301    }