Changeset 265036 in webkit


Timestamp: Jul 29, 2020 3:32:27 AM
Author: Paulo Matos
Message:

for..of intrinsics implementation for 32bits
https://bugs.webkit.org/show_bug.cgi?id=214737

Reviewed by Yusuke Suzuki.

Joint work with Caio Lima <ticaiolima@gmail.com>.
JSTests:

Unskip tests previously skipped due to missing for..of intrinsics.

  • stress/for-of-array-different-globals.js:
  • stress/for-of-iterator-open-osr-at-inlined-return-non-object.js:
  • stress/for-of-iterator-open-osr-at-iterator-set-local.js:
  • stress/for-of-iterator-open-return-non-object.js:

Source/JavaScriptCore:

Implements the for..of intrinsics for 32-bit platforms.
Adds an or8 instruction to the ARMv7 and MIPS macro assemblers.
Adds the intrinsic operations to the LLInt and Baseline JIT for 32-bit.
Fixes a DFG OSR Exit bug where the checkpoint temporary value was
incorrectly recreated for Baseline.
Refactors the DFG OSR Exit code to be easier to modify and
maintain by separating the switch cases for 32-bit and 64-bit.
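
In effect, or8 is a byte-wide read-modify-write on an absolute address. A minimal C++ sketch of the intended semantics (an illustration only, with a hypothetical helper name; the real versions below go through each assembler's temp registers):

    // or8Semantics is a hypothetical helper: it shows what or8(imm, address)
    // computes. Load the byte at the absolute address, OR in the low 8 bits
    // of the immediate, and store the result back.
    inline void or8Semantics(uint8_t imm, uint8_t* address)
    {
        *address = static_cast<uint8_t>(*address | imm);
    }

Baseline uses the new instruction to record that the generic iteration path was taken, via or8(TrustedImm32(static_cast<uint8_t>(IterationMode::Generic)), AbsoluteAddress(&metadata.m_iterationMetadata.seenModes)).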

  • assembler/MacroAssemblerARMv7.h:

(JSC::MacroAssemblerARMv7::or8): Adds or8(TrustedImm, AbsoluteAddress)
(JSC::MacroAssemblerARMv7::or32):
(JSC::MacroAssemblerARMv7::store8):

  • assembler/MacroAssemblerMIPS.h:

(JSC::MacroAssemblerMIPS::or8): Adds or8(TrustedImm, AbsoluteAddress)
(JSC::MacroAssemblerMIPS::store8):

  • assembler/testmasm.cpp:

(JSC::testOrImmMem): Tests or8

  • bytecompiler/BytecodeGenerator.cpp:

(JSC::BytecodeGenerator::emitEnumeration):

  • dfg/DFGOSRExit.cpp:

(JSC::DFG::OSRExit::compileExit): Fixes a DFG OSR Exit bug where the checkpoint temporary value
was incorrectly recreated for Baseline, and refactors the code to be easier to modify and
maintain by separating the switch cases for 32-bit and 64-bit (see the sketch after this file list).

  • jit/JIT.h:
  • jit/JITCall32_64.cpp:

(JSC::JIT::emitPutCallResult):
(JSC::JIT::compileSetupFrame):
(JSC::JIT::compileOpCall):
(JSC::JIT::emit_op_iterator_open):
(JSC::JIT::emitSlow_op_iterator_open):
(JSC::JIT::emit_op_iterator_next):
(JSC::JIT::emitSlow_op_iterator_next):

  • jit/JITInlines.h:

(JSC::JIT::emitJumpSlowCaseIfNotJSCell):

  • llint/LowLevelInterpreter32_64.asm:
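
The crux of the OSR Exit fix shows up in the JSVALUE32_64 branch added to compileExit (quoted from the DFGOSRExit.cpp diff below): on 32-bit, a recovered cell carries only the payload bits, so the JSValue has to be rebuilt with an explicit CellTag rather than reread as a full 64-bit JSValue:

    case CellDisplacedInJSStack:
    case UnboxedCellInGPR: {
        // 32-bit: the scratch slot holds an EncodedValueDescriptor; reattach
        // the CellTag to the payload to recreate the JSValue.
        EncodedValueDescriptor* valueDescriptor = bitwise_cast<EncodedValueDescriptor*>(tmpScratch + i + tmpOffset);
        sideState->tmps[i] = JSValue(JSValue::CellTag, valueDescriptor->asBits.payload);
        break;
    }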
Location: trunk
Files: 15 edited

  • trunk/JSTests/ChangeLog (r265034 → r265036)

    +2020-07-29  Paulo Matos  <pmatos@igalia.com>
    +
    +        for..of intrinsics implementation for 32bits
    +        https://bugs.webkit.org/show_bug.cgi?id=214737
    +
    +        Reviewed by Yusuke Suzuki.
    +
    +        Joint work with Caio Lima <ticaiolima@gmail.com>.
    +        Unskip tests previously skipped due to missing for..of intrinsics.
    +
    +        * stress/for-of-array-different-globals.js:
    +        * stress/for-of-iterator-open-osr-at-inlined-return-non-object.js:
    +        * stress/for-of-iterator-open-osr-at-iterator-set-local.js:
    +        * stress/for-of-iterator-open-return-non-object.js:
    +
     2020-07-29  Yusuke Suzuki  <ysuzuki@apple.com>
  • trunk/JSTests/stress/for-of-array-different-globals.js (r260361 → r265036)

    -//@ skip if ["arm", "mips"].include?($architecture)
     let count = 0;
  • trunk/JSTests/stress/for-of-iterator-open-osr-at-inlined-return-non-object.js (r264873 → r265036)

    -//@ skip if $model == "Apple Watch Series 3" # We are not supporting fast for-of for 32bit architecture.
    -//@ skip if ["arm", "mips"].include?($architecture)
     let shouldVendNull = false;
     function vendIterator() {
  • trunk/JSTests/stress/for-of-iterator-open-osr-at-iterator-set-local.js (r264873 → r265036)

    -//@ skip if $model == "Apple Watch Series 3" # We are not supporting fast for-of for 32bit architecture.
    -//@ skip if ["arm", "mips"].include?($architecture)
     let shouldVendNull = false;
     function vendIterator() {
  • trunk/JSTests/stress/for-of-iterator-open-return-non-object.js (r264873 → r265036)

    -//@ skip if $model == "Apple Watch Series 3" # We are not supporting fast for-of for 32bit architecture.
    -//@ skip if ["arm", "mips"].include?($architecture)
     function vendIterator() {
         return 1;
  • trunk/Source/JavaScriptCore/ChangeLog (r265034 → r265036)

    +2020-07-29  Paulo Matos  <pmatos@igalia.com>
    +
    +        for..of intrinsics implementation for 32bits
    +        https://bugs.webkit.org/show_bug.cgi?id=214737
    +
    +        Reviewed by Yusuke Suzuki.
    +
    +        Joint work with Caio Lima <ticaiolima@gmail.com>.
    +
    +        Implements for..of intrinsics for 32bits.
    +        Adds or8 instruction to ARMv7 and MIPS Macro Assembler.
    +        Adds intrinsic operations to LLInt and Baseline for 32bits.
    +        Fixes DFG OSR Exit bug, where checkpoint temporary value is
    +        incorrectly recreated for Baseline.
    +        Refactors code in DFG OSR Exit to be easier to modify and
    +        maintain by separating the switch cases for 32 and 64bits.
    +
    +        * assembler/MacroAssemblerARMv7.h:
    +        (JSC::MacroAssemblerARMv7::or8): Adds or8(TrustedImm, AbsoluteAddress)
    +        (JSC::MacroAssemblerARMv7::or32):
    +        (JSC::MacroAssemblerARMv7::store8):
    +        * assembler/MacroAssemblerMIPS.h:
    +        (JSC::MacroAssemblerMIPS::or8): Adds or8(TrustedImm, AbsoluteAddress)
    +        (JSC::MacroAssemblerMIPS::store8):
    +        * assembler/testmasm.cpp:
    +        (JSC::testOrImmMem): Tests or8
    +        * bytecompiler/BytecodeGenerator.cpp:
    +        (JSC::BytecodeGenerator::emitEnumeration):
    +        * dfg/DFGOSRExit.cpp:
    +        (JSC::DFG::OSRExit::compileExit): Fixes DFG OSR Exit bug, where checkpoint temporary value is
    +        incorrectly recreated for Baseline. Refactors code in DFG OSR Exit to be easier to modify and
    +        maintain by separating the switch cases for 32 and 64bits.
    +        * jit/JIT.h:
    +        * jit/JITCall32_64.cpp:
    +        (JSC::JIT::emitPutCallResult):
    +        (JSC::JIT::compileSetupFrame):
    +        (JSC::JIT::compileOpCall):
    +        (JSC::JIT::emit_op_iterator_open):
    +        (JSC::JIT::emitSlow_op_iterator_open):
    +        (JSC::JIT::emit_op_iterator_next):
    +        (JSC::JIT::emitSlow_op_iterator_next):
    +        * jit/JITInlines.h:
    +        (JSC::JIT::emitJumpSlowCaseIfNotJSCell):
    +        * llint/LowLevelInterpreter32_64.asm:
    +
     2020-07-29  Yusuke Suzuki  <ysuzuki@apple.com>
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h (r264507 → r265036)

         }

    -    void or32(RegisterID src, RegisterID dest)
    -    {
    -        m_assembler.orr(dest, dest, src);
    +    void or8(TrustedImm32 imm, AbsoluteAddress address)
    +    {
    +        ARMThumbImmediate armImm = ARMThumbImmediate::makeEncodedImm(imm.m_value);
    +        if (armImm.isValid()) {
    +            move(TrustedImmPtr(address.m_ptr), addressTempRegister);
    +            load8(addressTempRegister, dataTempRegister);
    +            m_assembler.orr(dataTempRegister, dataTempRegister, armImm);
    +            store8(dataTempRegister, addressTempRegister);
    +        } else {
    +            move(TrustedImmPtr(address.m_ptr), addressTempRegister);
    +            load8(addressTempRegister, dataTempRegister);
    +            move(imm, addressTempRegister);
    +            m_assembler.orr(dataTempRegister, dataTempRegister, addressTempRegister);
    +            move(TrustedImmPtr(address.m_ptr), addressTempRegister);
    +            store8(dataTempRegister, addressTempRegister);
    +        }
         }
    …
             }
         }

    +    void or32(RegisterID src, RegisterID dest)
    +    {
    +        m_assembler.orr(dest, dest, src);
    +    }
    +
         void or32(RegisterID src, AbsoluteAddress dest)
         {
    …
         }

    -    void store8(RegisterID src, void* address)
    +    void store8(RegisterID src, const void *address)
         {
             move(TrustedImmPtr(address), addressTempRegister);
    …
         }

    -    void store8(TrustedImm32 imm, void* address)
    +    void store8(TrustedImm32 imm, const void *address)
         {
             TrustedImm32 imm8(static_cast<int8_t>(imm.m_value));
    …
         }

    +    void store8(RegisterID src, RegisterID addrreg)
    +    {
    +        store8(src, ArmAddress(addrreg, 0));
    +    }
    +
         void store16(RegisterID src, ImplicitAddress address)
         {
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerMIPS.h (r264507 → r265036)

         }

    +
    +    void or8(TrustedImm32 imm, AbsoluteAddress dest)
    +    {
    +        if (!imm.m_value && !m_fixedWidth)
    +            return;
    +
    +        if (m_fixedWidth) {
    +            load8(dest.m_ptr, immTempRegister);
    +            or32(imm, immTempRegister);
    +            store8(immTempRegister, dest.m_ptr);
    +        } else {
    +            uintptr_t adr = reinterpret_cast<uintptr_t>(dest.m_ptr);
    +            m_assembler.lui(addrTempRegister, (adr + 0x8000) >> 16);
    +            m_assembler.lbu(immTempRegister, addrTempRegister, adr & 0xffff);
    +            or32(imm, immTempRegister);
    +            m_assembler.sb(immTempRegister, addrTempRegister, adr & 0xffff);
    +        }
    +    }
    +
         void or16(TrustedImm32 imm, AbsoluteAddress dest)
         {
    …
         }

    -    void store8(RegisterID src, void* address)
    +    void store8(RegisterID src, const void* address)
         {
             if (m_fixedWidth) {
  • trunk/Source/JavaScriptCore/assembler/testmasm.cpp (r264507 → r265036)

     memoryLocation = 0x12341234;
    +auto or8 = compile([&] (CCallHelpers& jit) {
    +    emitFunctionPrologue(jit);
    +    jit.or8(CCallHelpers::TrustedImm32(42), CCallHelpers::AbsoluteAddress(&memoryLocation));
    +    emitFunctionEpilogue(jit);
    +    jit.ret();
    +});
    +invoke<void>(or8);
    +CHECK_EQ(memoryLocation, 0x12341234 | 42);
    +
    +memoryLocation = 0x12341234;
     auto or16InvalidLogicalImmInARM64 = compile([&] (CCallHelpers& jit) {
         emitFunctionPrologue(jit);
  • trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp (r264809 → r265036)

     void BytecodeGenerator::emitEnumeration(ThrowableExpressionData* node, ExpressionNode* subjectNode, const ScopedLambda<void(BytecodeGenerator&, RegisterID*)>& callBack, ForOfNode* forLoopNode, RegisterID* forLoopSymbolTable)
     {
    -    if (!Options::useIterationIntrinsics() || is32Bit() || (forLoopNode && forLoopNode->isForAwait())) {
    +    if (!Options::useIterationIntrinsics() || (forLoopNode && forLoopNode->isForAwait())) {
             emitGenericEnumeration(node, subjectNode, callBack, forLoopNode, forLoopSymbolTable);
             return;
  • trunk/Source/JavaScriptCore/dfg/DFGOSRExit.cpp (r264929 → r265036)

                         // FIXME: We should do what the FTL does and materialize all the JSValues into the scratch buffer.
                         switch (recovery.technique()) {
    +
                         case Constant:
                             sideState->tmps[i] = recovery.constant();
                             break;

    +#if USE(JSVALUE64)
                         case UnboxedInt32InGPR:
                         case Int32DisplacedInJSStack: {
    …
                         }

    -#if USE(JSVALUE32_64)
    -                    case InPair:
    -#endif
    -                    case InGPR:
                         case BooleanDisplacedInJSStack:
                         case CellDisplacedInJSStack:
    +                    case UnboxedCellInGPR:
    +                    case InGPR:
                         case DisplacedInJSStack: {
                             sideState->tmps[i] = reinterpret_cast<JSValue*>(tmpScratch)[i + tmpOffset];
    …
                         }

    -                    case UnboxedCellInGPR: {
    -#if USE(JSVALUE64)
    -                        sideState->tmps[i] = reinterpret_cast<JSValue*>(tmpScratch)[i + tmpOffset];
    -#else
    -                        EncodedValueDescriptor* valueDescriptor = bitwise_cast<EncodedValueDescriptor*>(tmpScratch + i + tmpOffset);
    -                        sideState->tmps[i] = JSValue(JSValue::CellTag, valueDescriptor->asBits.payload);
    -#endif
    -                        break;
    -                    }
    -
                         case UnboxedBooleanInGPR: {
                             sideState->tmps[i] = jsBoolean(static_cast<bool>(tmpScratch[i + tmpOffset]));
                             break;
                         }
    +
    +#else // USE(JSVALUE32_64)
    +                    case UnboxedInt32InGPR:
    +                    case Int32DisplacedInJSStack: {
    +                        sideState->tmps[i] = jsNumber(static_cast<int32_t>(tmpScratch[i + tmpOffset]));
    +                        break;
    +                    }
    +
    +                    case InPair:
    +                    case DisplacedInJSStack: {
    +                        sideState->tmps[i] = reinterpret_cast<JSValue*>(tmpScratch)[i + tmpOffset];
    +                        break;
    +                    }
    +
    +                    case CellDisplacedInJSStack:
    +                    case UnboxedCellInGPR: {
    +                        EncodedValueDescriptor* valueDescriptor = bitwise_cast<EncodedValueDescriptor*>(tmpScratch + i + tmpOffset);
    +                        sideState->tmps[i] = JSValue(JSValue::CellTag, valueDescriptor->asBits.payload);
    +                        break;
    +                    }
    +
    +                    case BooleanDisplacedInJSStack:
    +                    case UnboxedBooleanInGPR: {
    +                        sideState->tmps[i] = jsBoolean(static_cast<bool>(tmpScratch[i + tmpOffset]));
    +                        break;
    +                    }
    +
    +#endif // USE(JSVALUE64)

                         default:
  • trunk/Source/JavaScriptCore/jit/JIT.h (r265000 → r265036)

             void emitJumpSlowCaseIfNotJSCell(VirtualRegister);
             void emitJumpSlowCaseIfNotJSCell(VirtualRegister, RegisterID tag);
    +        void emitJumpSlowCaseIfNotJSCell(RegisterID);

             void compileGetByIdHotPath(const Identifier*);
  • trunk/Source/JavaScriptCore/jit/JITCall32_64.cpp (r260323 → r265036)

     /*
    + * Copyright (C) 2020 Igalia, S.L. All rights reserved.
    + * Copyright (C) 2020 Metrological Group B.V.
      * Copyright (C) 2008-2019 Apple Inc. All rights reserved.
      *
    …
     #include "JIT.h"

    +#include "CacheableIdentifierInlines.h"
     #include "CodeBlock.h"
     #include "Interpreter.h"
    …
     #include "ResultType.h"
     #include "SetupVarargsFrame.h"
    +#include "SlowPathCall.h"
     #include "StackAlignment.h"
     #include "ThunkGenerators.h"
    …
     {
         emitValueProfilingSite(bytecode.metadata(m_codeBlock));
    -    emitStore(bytecode.m_dst, regT1, regT0);
    +    emitStore(destinationFor(bytecode, m_bytecodeIndex.checkpoint()).virtualRegister(), regT1, regT0);
     }
    …
     JIT::compileSetupFrame(const Op& bytecode, CallLinkInfo*)
     {
    +    unsigned checkpoint = m_bytecodeIndex.checkpoint();
         auto& metadata = bytecode.metadata(m_codeBlock);
    -    int argCount = bytecode.m_argc;
    -    int registerOffset = -static_cast<int>(bytecode.m_argv);
    +    int argCount = argumentCountIncludingThisFor(bytecode, checkpoint);
    +    int registerOffset = -static_cast<int>(stackOffsetInRegistersForCall(bytecode, checkpoint));

         if (Op::opcodeID == op_call && shouldEmitProfiling()) {
    …
         OpcodeID opcodeID = Op::opcodeID;
         auto bytecode = instruction->as<Op>();
    -    VirtualRegister callee = bytecode.m_callee;
    +    VirtualRegister callee = calleeFor(bytecode, m_bytecodeIndex.checkpoint());

         /* Caller always:
    …
     }

    -void JIT::emit_op_iterator_open(const Instruction*)
    -{
    -    UNREACHABLE_FOR_PLATFORM();
    -}
    -
    -void JIT::emitSlow_op_iterator_open(const Instruction*, Vector<SlowCaseEntry>::iterator&)
    -{
    -    UNREACHABLE_FOR_PLATFORM();
    -}
    -
    -void JIT::emit_op_iterator_next(const Instruction*)
    -{
    -    UNREACHABLE_FOR_PLATFORM();
    -}
    -
    -void JIT::emitSlow_op_iterator_next(const Instruction*, Vector<SlowCaseEntry>::iterator&)
    -{
    -    UNREACHABLE_FOR_PLATFORM();
    +void JIT::emit_op_iterator_open(const Instruction* instruction)
    +{
    +    auto bytecode = instruction->as<OpIteratorOpen>();
    +    auto* tryFastFunction = ([&] () {
    +        switch (instruction->width()) {
    +        case Narrow: return iterator_open_try_fast_narrow;
    +        case Wide16: return iterator_open_try_fast_wide16;
    +        case Wide32: return iterator_open_try_fast_wide32;
    +        default: RELEASE_ASSERT_NOT_REACHED();
    +        }
    +    })();
    +
    +    JITSlowPathCall slowPathCall(this, instruction, tryFastFunction);
    +    slowPathCall.call();
    +    Jump fastCase = branch32(NotEqual, GPRInfo::returnValueGPR2, TrustedImm32(static_cast<uint32_t>(IterationMode::Generic)));
    +
    +    compileOpCall<OpIteratorOpen>(instruction, m_callLinkInfoIndex++);
    +
    +    VirtualRegister vr = destinationFor(bytecode, m_bytecodeIndex.checkpoint()).virtualRegister();
    +    advanceToNextCheckpoint();
    +
    +    // call result (iterator) is in regT1 (tag)/regT0 (payload)
    +    const Identifier* ident = &vm().propertyNames->next;
    +
    +    emitJumpSlowCaseIfNotJSCell(vr, regT1);
    +
    +    GPRReg tagIteratorGPR = regT1;
    +    GPRReg payloadIteratorGPR = regT0;
    +
    +    GPRReg tagNextGPR = tagIteratorGPR;
    +    GPRReg payloadNextGPR = payloadIteratorGPR;
    +
    +    JITGetByIdGenerator gen(
    +        m_codeBlock,
    +        CodeOrigin(m_bytecodeIndex),
    +        CallSiteIndex(BytecodeIndex(m_bytecodeIndex.offset())),
    +        RegisterSet::stubUnavailableRegisters(),
    +        CacheableIdentifier::createFromImmortalIdentifier(ident->impl()),
    +        JSValueRegs(tagIteratorGPR, payloadIteratorGPR),
    +        JSValueRegs(tagNextGPR, payloadNextGPR),
    +        AccessType::GetById);
    +
    +    gen.generateFastPath(*this);
    +    addSlowCase(gen.slowPathJump());
    +    m_getByIds.append(gen);
    +
    +    emitValueProfilingSite(bytecode.metadata(m_codeBlock));
    +    emitPutVirtualRegister(bytecode.m_next, JSValueRegs(tagNextGPR, payloadNextGPR));
    +
    +    fastCase.link(this);
    +}
    +
    +void JIT::emitSlow_op_iterator_open(const Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
    +{
    +    linkAllSlowCases(iter);
    +    compileOpCallSlowCase<OpIteratorOpen>(instruction, iter, m_callLinkInfoIndex++);
    +    emitJumpSlowToHotForCheckpoint(jump());
    +
    +    linkAllSlowCases(iter);
    +
    +    GPRReg tagIteratorGPR = regT1;
    +    GPRReg payloadIteratorGPR = regT0;
    +
    +    JumpList notObject;
    +    notObject.append(branchIfNotCell(tagIteratorGPR));
    +    notObject.append(branchIfNotObject(payloadIteratorGPR));
    +
    +    auto bytecode = instruction->as<OpIteratorOpen>();
    +    VirtualRegister nextVReg = bytecode.m_next;
    +    UniquedStringImpl* ident = vm().propertyNames->next.impl();
    +
    +    JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
    +
    +    Label coldPathBegin = label();
    +
    +    Call call = callOperationWithProfile(
    +        bytecode.metadata(m_codeBlock), // metadata
    +        operationGetByIdOptimize, // operation
    +        nextVReg, // result
    +        TrustedImmPtr(m_codeBlock->globalObject()), // arg1
    +        gen.stubInfo(), // arg2
    +        JSValueRegs(tagIteratorGPR, payloadIteratorGPR), // arg3
    +        CacheableIdentifier::createFromImmortalIdentifier(ident).rawBits()); // arg4
    +
    +    gen.reportSlowPathCall(coldPathBegin, call);
    +    auto done = jump();
    +
    +    notObject.link(this);
    +    callOperation(operationThrowIteratorResultIsNotObject, TrustedImmPtr(m_codeBlock->globalObject()));
    +
    +    done.link(this);
    +}
    +
    +void JIT::emit_op_iterator_next(const Instruction* instruction)
    +{
    +    auto bytecode = instruction->as<OpIteratorNext>();
    +    auto& metadata = bytecode.metadata(m_codeBlock);
    +    auto* tryFastFunction = ([&] () {
    +        switch (instruction->width()) {
    +        case Narrow: return iterator_next_try_fast_narrow;
    +        case Wide16: return iterator_next_try_fast_wide16;
    +        case Wide32: return iterator_next_try_fast_wide32;
    +        default: RELEASE_ASSERT_NOT_REACHED();
    +        }
    +    })();
    +
    +    JSValueRegs nextRegs(regT1, regT0);
    +    emitGetVirtualRegister(bytecode.m_next, nextRegs);
    +    Jump genericCase = branchIfNotEmpty(nextRegs);
    +
    +    JITSlowPathCall slowPathCall(this, instruction, tryFastFunction);
    +    slowPathCall.call();
    +    Jump fastCase = branch32(NotEqual, GPRInfo::returnValueGPR2, TrustedImm32(static_cast<uint32_t>(IterationMode::Generic)));
    +
    +    genericCase.link(this);
    +    or8(TrustedImm32(static_cast<uint8_t>(IterationMode::Generic)), AbsoluteAddress(&metadata.m_iterationMetadata.seenModes));
    +    compileOpCall<OpIteratorNext>(instruction, m_callLinkInfoIndex++);
    +    advanceToNextCheckpoint();
    +    // call result ({ done, value } JSObject) in regT1, regT0
    +
    +    GPRReg tagValueGPR = regT1;
    +    GPRReg payloadValueGPR = regT0;
    +
    +    GPRReg tagDoneGPR = regT5;
    +    GPRReg payloadDoneGPR = regT4;
    +
    +    {
    +        GPRReg tagIterResultGPR = regT3;
    +        GPRReg payloadIterResultGPR = regT2;
    +
    +        // iterResultGPR will get trashed by the first get by id below.
    +        move(regT1, tagIterResultGPR);
    +        move(regT0, payloadIterResultGPR);
    +
    +        emitJumpSlowCaseIfNotJSCell(tagIterResultGPR);
    +
    +        RegisterSet preservedRegs = RegisterSet::stubUnavailableRegisters();
    +        preservedRegs.add(tagValueGPR);
    +        preservedRegs.add(payloadValueGPR);
    +        JITGetByIdGenerator gen(
    +            m_codeBlock,
    +            CodeOrigin(m_bytecodeIndex),
    +            CallSiteIndex(BytecodeIndex(m_bytecodeIndex.offset())),
    +            preservedRegs,
    +            CacheableIdentifier::createFromImmortalIdentifier(vm().propertyNames->done.impl()),
    +            JSValueRegs(tagIterResultGPR, payloadIterResultGPR),
    +            JSValueRegs(tagDoneGPR, payloadDoneGPR),
    +            AccessType::GetById);
    +        gen.generateFastPath(*this);
    +        addSlowCase(gen.slowPathJump());
    +        m_getByIds.append(gen);
    +
    +        emitValueProfilingSite(metadata);
    +        emitPutVirtualRegister(bytecode.m_done, JSValueRegs(tagDoneGPR, payloadDoneGPR));
    +        advanceToNextCheckpoint();
    +    }
    +
    +    {
    +        GPRReg tagIterResultGPR = regT1;
    +        GPRReg payloadIterResultGPR = regT0;
    +
    +        GPRReg scratch1 = regT6;
    +        GPRReg scratch2 = regT7;
    +        const bool shouldCheckMasqueradesAsUndefined = false;
    +        JumpList iterationDone = branchIfTruthy(vm(), JSValueRegs(tagDoneGPR, payloadDoneGPR), scratch1, scratch2, fpRegT0, fpRegT1, shouldCheckMasqueradesAsUndefined, m_codeBlock->globalObject());
    +
    +        JITGetByIdGenerator gen(
    +            m_codeBlock,
    +            CodeOrigin(m_bytecodeIndex),
    +            CallSiteIndex(BytecodeIndex(m_bytecodeIndex.offset())),
    +            RegisterSet::stubUnavailableRegisters(),
    +            CacheableIdentifier::createFromImmortalIdentifier(vm().propertyNames->value.impl()),
    +            JSValueRegs(tagIterResultGPR, payloadIterResultGPR),
    +            JSValueRegs(tagValueGPR, payloadValueGPR),
    +            AccessType::GetById);
    +        gen.generateFastPath(*this);
    +        addSlowCase(gen.slowPathJump());
    +        m_getByIds.append(gen);
    +
    +        emitValueProfilingSite(metadata);
    +        emitPutVirtualRegister(bytecode.m_value, JSValueRegs(tagValueGPR, payloadValueGPR));
    +
    +        iterationDone.link(this);
    +    }
    +
    +    fastCase.link(this);
    +}
    +
    +void JIT::emitSlow_op_iterator_next(const Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
    +{
    +    linkAllSlowCases(iter);
    +    compileOpCallSlowCase<OpIteratorNext>(instruction, iter, m_callLinkInfoIndex++);
    +    emitJumpSlowToHotForCheckpoint(jump());
    +
    +    auto bytecode = instruction->as<OpIteratorNext>();
    +    {
    +        VirtualRegister doneVReg = bytecode.m_done;
    +        GPRReg tagValueGPR = regT1;
    +        GPRReg payloadValueGPR = regT0;
    +
    +        GPRReg tagIterResultGPR = regT3;
    +        GPRReg payloadIterResultGPR = regT2;
    +
    +        GPRReg tagDoneGPR = regT5;
    +        GPRReg payloadDoneGPR = regT4;
    +
    +        linkAllSlowCases(iter);
    +        JumpList notObject;
    +        notObject.append(branchIfNotCell(tagIterResultGPR));
    +        notObject.append(branchIfNotObject(payloadIterResultGPR));
    +
    +        UniquedStringImpl* ident = vm().propertyNames->done.impl();
    +        JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
    +
    +        Label coldPathBegin = label();
    +
    +        Call call = callOperationWithProfile(
    +            bytecode.metadata(m_codeBlock), // metadata
    +            operationGetByIdOptimize, // operation
    +            doneVReg, // result
    +            TrustedImmPtr(m_codeBlock->globalObject()), // arg1
    +            gen.stubInfo(), // arg2
    +            JSValueRegs(tagIterResultGPR, payloadIterResultGPR), // arg3
    +            CacheableIdentifier::createFromImmortalIdentifier(ident).rawBits()); // arg4
    +
    +        gen.reportSlowPathCall(coldPathBegin, call);
    +        emitGetVirtualRegister(doneVReg, JSValueRegs(tagDoneGPR, payloadDoneGPR));
    +        emitGetVirtualRegister(bytecode.m_value, JSValueRegs(tagValueGPR, payloadValueGPR));
    +        emitJumpSlowToHotForCheckpoint(jump());
    +
    +        notObject.link(this);
    +        callOperation(operationThrowIteratorResultIsNotObject, TrustedImmPtr(m_codeBlock->globalObject()));
    +    }
    +
    +    {
    +        GPRReg tagIterResultGPR = regT1;
    +        GPRReg payloadIterResultGPR = regT0;
    +
    +        linkAllSlowCases(iter);
    +        VirtualRegister valueVReg = bytecode.m_value;
    +
    +        UniquedStringImpl* ident = vm().propertyNames->value.impl();
    +        JITGetByIdGenerator& gen = m_getByIds[m_getByIdIndex++];
    +
    +        Label coldPathBegin = label();
    +
    +        Call call = callOperationWithProfile(
    +            bytecode.metadata(m_codeBlock), // metadata
    +            operationGetByIdOptimize, // operation
    +            valueVReg, // result
    +            TrustedImmPtr(m_codeBlock->globalObject()), // arg1
    +            gen.stubInfo(), // arg2
    +            JSValueRegs(tagIterResultGPR, payloadIterResultGPR), // arg3
    +            CacheableIdentifier::createFromImmortalIdentifier(ident).rawBits()); // arg4
    +
    +        gen.reportSlowPathCall(coldPathBegin, call);
    +    }
     }

  • trunk/Source/JavaScriptCore/jit/JITInlines.h (r262613 → r265036)

     }

    -ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotJSCell(RegisterID reg)
    -{
    -    addSlowCase(branchIfNotCell(reg));
    -}
    -
     ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotJSCell(RegisterID reg, VirtualRegister vReg)
     {
    …

     #endif // USE(JSVALUE32_64)
    +
    +ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotJSCell(RegisterID reg)
    +{
    +    addSlowCase(branchIfNotCell(reg));
    +}

     ALWAYS_INLINE int JIT::jumpTarget(const Instruction* instruction, int target)
  • trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm (r264758 → r265036)

     end

    +# loadVariable loads the value of field fieldName using macro get
    +# into register tagReg and payloadReg
    +# Clobbers: indexReg
     macro loadVariable(get, fieldName, indexReg, tagReg, payloadReg)
         get(fieldName, indexReg)
    …
     macro valueProfile(opcodeStruct, profileName, metadata, tag, payload)
    -    storei tag, %opcodeStruct%::Metadata::m_profile.m_buckets + TagOffset[metadata]
    +    storei tag, %opcodeStruct%::Metadata::%profileName%.m_buckets + TagOffset[metadata]
         storei payload, %opcodeStruct%::Metadata::%profileName%.m_buckets + PayloadOffset[metadata]
     end
    …
     end)

    +# Assumption: The base object is in t3
    +# FIXME: this is very close to the 64bit version. Consider refactoring.
    +macro performGetByIDHelper(opcodeStruct, modeMetadataName, valueProfileName, slowLabel, size, metadata, return)
    +    metadata(t2, t1)
    +    loadb %opcodeStruct%::Metadata::%modeMetadataName%.mode[t2], t1
    +
    +.opGetByIdDefault:
    +    bbneq t1, constexpr GetByIdMode::Default, .opGetByIdProtoLoad
    +    loadi JSCell::m_structureID[t3], t1 # assumes base object in t3
    +    loadi %opcodeStruct%::Metadata::%modeMetadataName%.defaultMode.structureID[t2], t0
    +    bineq t0, t1, slowLabel
    +    loadis %opcodeStruct%::Metadata::%modeMetadataName%.defaultMode.cachedOffset[t2], t1
    +    loadPropertyAtVariableOffset(t1, t3, t0, t1)
    +    valueProfile(opcodeStruct, valueProfileName, t2, t0, t1)
    +    return(t0, t1)
    +
    +.opGetByIdProtoLoad:
    +    bbneq t1, constexpr GetByIdMode::ProtoLoad, .opGetByIdArrayLength
    +    loadi JSCell::m_structureID[t3], t1
    +    loadi %opcodeStruct%::Metadata::%modeMetadataName%.protoLoadMode.structureID[t2], t3
    +    bineq t3, t1, slowLabel
    +    loadis %opcodeStruct%::Metadata::%modeMetadataName%.protoLoadMode.cachedOffset[t2], t1
    +    loadp %opcodeStruct%::Metadata::%modeMetadataName%.protoLoadMode.cachedSlot[t2], t3
    +    loadPropertyAtVariableOffset(t1, t3, t0, t1)
    +    valueProfile(opcodeStruct, valueProfileName, t2, t0, t1)
    +    return(t0, t1)
    +
    +.opGetByIdArrayLength:
    +    bbneq t1, constexpr GetByIdMode::ArrayLength, .opGetByIdUnset
    +    move t3, t0
    +    arrayProfile(%opcodeStruct%::Metadata::%modeMetadataName%.arrayLengthMode.arrayProfile, t0, t2, t5)
    +    btiz t0, IsArray, slowLabel
    +    btiz t0, IndexingShapeMask, slowLabel
    +    loadp JSObject::m_butterfly[t3], t0
    +    loadi -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], t0
    +    bilt t0, 0, slowLabel
    +    valueProfile(opcodeStruct, valueProfileName, t2, Int32Tag, t0)
    +    return(Int32Tag, t0)
    +
    +.opGetByIdUnset:
    +    loadi JSCell::m_structureID[t3], t1
    +    loadi %opcodeStruct%::Metadata::%modeMetadataName%.unsetMode.structureID[t2], t0
    +    bineq t0, t1, slowLabel
    +    valueProfile(opcodeStruct, valueProfileName, t2, UndefinedTag, 0)
    +    return(UndefinedTag, 0)
    +
    +end

     llintOpWithMetadata(op_get_by_id, OpGetById, macro (size, get, dispatch, metadata, return)
    …
     end

    +macro callHelper(opcodeName, slowPath, opcodeStruct, valueProfileName, dstVirtualRegister, prepareCall, size, dispatch, metadata, getCallee, getArgumentStart, getArgumentCountIncludingThis)
    +    metadata(t5, t0)
    +    getCallee(t0)
    +
    +    loadp %opcodeStruct%::Metadata::m_callLinkInfo.m_calleeOrLastSeenCalleeWithLinkBit[t5], t2
    +    loadConstantOrVariablePayload(size, t0, CellTag, t3, .opCallSlow)
    +    bineq t3, t2, .opCallSlow
    +    getArgumentStart(t3)
    +    lshifti 3, t3
    +    negi t3
    +    addp cfr, t3  # t3 contains the new value of cfr
    +    storei t2, Callee + PayloadOffset[t3]
    +    getArgumentCountIncludingThis(t2)
    +    storei PC, ArgumentCountIncludingThis + TagOffset[cfr]
    +    storei t2, ArgumentCountIncludingThis + PayloadOffset[t3]
    +    storei CellTag, Callee + TagOffset[t3]
    +    move t3, sp
    +    prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
    +    callTargetFunction(opcodeName, size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
    +
    +.opCallSlow:
    +    slowPathForCall(opcodeName, size, opcodeStruct, valueProfileName, dstVirtualRegister, dispatch, slowPath, prepareCall)
    +end
    +
     macro commonCallOp(opcodeName, slowPath, opcodeStruct, prepareCall, prologue)
         llintOpWithMetadata(opcodeName, opcodeStruct, macro (size, get, dispatch, metadata, return)
    …
             end, metadata)

    -        get(m_callee, t0)
    -        loadp %opcodeStruct%::Metadata::m_callLinkInfo.m_calleeOrLastSeenCalleeWithLinkBit[t5], t2
    -        loadConstantOrVariablePayload(size, t0, CellTag, t3, .opCallSlow)
    -        bineq t3, t2, .opCallSlow
    -        getu(size, opcodeStruct, m_argv, t3)
    -        lshifti 3, t3
    -        negi t3
    -        addp cfr, t3  # t3 contains the new value of cfr
    -        storei t2, Callee + PayloadOffset[t3]
    -        getu(size, opcodeStruct, m_argc, t2)
    -        storei PC, ArgumentCountIncludingThis + TagOffset[cfr]
    -        storei t2, ArgumentCountIncludingThis + PayloadOffset[t3]
    -        storei CellTag, Callee + TagOffset[t3]
    -        move t3, sp
    -        prepareCall(%opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], t2, t3, t4, JSEntryPtrTag)
    -        callTargetFunction(opcodeName, size, opcodeStruct, m_profile, m_dst, dispatch, %opcodeStruct%::Metadata::m_callLinkInfo.m_machineCodeTarget[t5], JSEntryPtrTag)
    -
    -    .opCallSlow:
    -        slowPathForCall(opcodeName, size, opcodeStruct, m_profile, m_dst, dispatch, slowPath, prepareCall)
    +        macro getCallee(dst)
    +           get(m_callee, t0)
    +        end
    +
    +        macro getArgumentStart(dst)
    +            getu(size, opcodeStruct, m_argv, dst)
    +        end
    +
    +        macro getArgumentCount(dst)
    +            getu(size, opcodeStruct, m_argc, dst)
    +        end
    +
    +        callHelper(opcodeName, slowPath, opcodeStruct, m_profile, m_dst, prepareCall, size, dispatch, metadata, getCallee, getArgumentStart, getArgumentCount)
         end)
     end
    …
     end)

    -llintOp(op_iterator_open, OpIteratorOpen, macro (size, get, dispatch)
    -    defineOSRExitReturnLabel(op_iterator_open, size)
    -    break
    -    if C_LOOP or C_LOOP_WIN
    -        # Insert superflous call return labels for Cloop.
    -        cloopCallJSFunction a0 # symbolIterator
    -        cloopCallJSFunction a0 # get next
    -    end
    -end)
    -
    -llintOp(op_iterator_next, OpIteratorNext, macro (size, get, dispatch)
    -    defineOSRExitReturnLabel(op_iterator_next, size)
    -    break
    -    if C_LOOP or C_LOOP_WIN
    -        # Insert superflous call return labels for Cloop.
    -        # FIXME: Not sure why two are only needed...
    -        cloopCallJSFunction a0 # next
    -        cloopCallJSFunction a0 # get done
    -    end
    +llintOpWithMetadata(op_iterator_open, OpIteratorOpen, macro (size, get, dispatch, metadata, return)
    +    macro fastNarrow()
    +        callSlowPath(_iterator_open_try_fast_narrow)
    +    end
    +    macro fastWide16()
    +        callSlowPath(_iterator_open_try_fast_wide16)
    +    end
    +    macro fastWide32()
    +        callSlowPath(_iterator_open_try_fast_wide32)
    +    end
    +    size(fastNarrow, fastWide16, fastWide32, macro (callOp) callOp() end)
    +
    +    # FIXME: We should do this with inline assembly since it's the "fast" case.
    +    bbeq r1, constexpr IterationMode::Generic, .iteratorOpenGeneric
    +    dispatch()
    +
    +.iteratorOpenGeneric:
    +    macro gotoGetByIdCheckpoint()
    +        jmp .getByIdStart
    +    end
    +
    +    macro getCallee(dst)
    +        get(m_symbolIterator, dst)
    +    end
    +
    +    macro getArgumentIncludingThisStart(dst)
    +        getu(size, OpIteratorOpen, m_stackOffset, dst)
    +    end
    +
    +    macro getArgumentIncludingThisCount(dst)
    +        move 1, dst
    +    end
    +
    +    callHelper(op_iterator_open, _llint_slow_path_iterator_open_call, OpIteratorOpen, m_iteratorProfile, m_iterator, prepareForRegularCall, size, gotoGetByIdCheckpoint, metadata, getCallee, getArgumentIncludingThisStart, getArgumentIncludingThisCount)
    +
    +.getByIdStart:
    +    macro storeNextAndDispatch(valueTag, valuePayload)
    +        move valueTag, t2
    +        move valuePayload, t3
    +        get(m_next, t1)
    +        storei t2, TagOffset[cfr, t1, 8]
    +        storei t3, PayloadOffset[cfr, t1, 8]
    +        dispatch()
    +    end
    +
    +    # We need to load m_iterator into t3 because that's where
    +    # performGetByIDHelper expects the base object
    +    loadVariable(get, m_iterator, t3, t0, t3)
    +    bineq t0, CellTag, .iteratorOpenGenericGetNextSlow
    +    performGetByIDHelper(OpIteratorOpen, m_modeMetadata, m_nextProfile, .iteratorOpenGenericGetNextSlow, size, metadata, storeNextAndDispatch)
    +
    +.iteratorOpenGenericGetNextSlow:
    +    callSlowPath(_llint_slow_path_iterator_open_get_next)
    +    dispatch()
    +end)
    +
    +llintOpWithMetadata(op_iterator_next, OpIteratorNext, macro (size, get, dispatch, metadata, return)
    +
    +    loadVariable(get, m_next, t0, t1, t0)
    +    bieq t1, UndefinedTag, .iteratorNextGeneric
    +    btinz t0, .iteratorNextGeneric
    +
    +    macro fastNarrow()
    +        callSlowPath(_iterator_next_try_fast_narrow)
    +    end
    +    macro fastWide16()
    +        callSlowPath(_iterator_next_try_fast_wide16)
    +    end
    +    macro fastWide32()
    +        callSlowPath(_iterator_next_try_fast_wide32)
    +    end
    +    size(fastNarrow, fastWide16, fastWide32, macro (callOp) callOp() end)
    +
    +    # FIXME: We should do this with inline assembly since it's the "fast" case.
    +    bbeq r1, constexpr IterationMode::Generic, .iteratorNextGeneric
    +    dispatch()
    +
    +.iteratorNextGeneric:
    +    macro gotoGetDoneCheckpoint()
    +        jmp .getDoneStart
    +    end
    +
    +    macro getCallee(dst)
    +        get(m_next, dst)
    +    end
    +
    +    macro getArgumentIncludingThisStart(dst)
    +        getu(size, OpIteratorNext, m_stackOffset, dst)
    +    end
    +
    +    macro getArgumentIncludingThisCount(dst)
    +        move 1, dst
    +    end
    +
    +    # Use m_value slot as a tmp since we are going to write to it later.
    +    callHelper(op_iterator_next, _llint_slow_path_iterator_next_call, OpIteratorNext, m_nextResultProfile, m_value, prepareForRegularCall, size, gotoGetDoneCheckpoint, metadata, getCallee, getArgumentIncludingThisStart, getArgumentIncludingThisCount)
    +
    +.getDoneStart:
    +    macro storeDoneAndJmpToGetValue(doneValueTag, doneValuePayload)
    +        move doneValueTag, t0
    +        move doneValuePayload, t1
    +        get(m_done, t2)
    +        storei t0, TagOffset[cfr, t2, 8]
    +        storei t1, PayloadOffset[cfr, t2, 8]
    +        jmp .getValueStart
    +    end
    +
    +    loadVariable(get, m_value, t3, t0, t3)
    +    bineq t0, CellTag, .getDoneSlow
    +    performGetByIDHelper(OpIteratorNext, m_doneModeMetadata, m_doneProfile, .getDoneSlow, size, metadata, storeDoneAndJmpToGetValue)
    +
    +.getDoneSlow:
    +    callSlowPath(_llint_slow_path_iterator_next_get_done)
    +    branchIfException(_llint_throw_from_slow_path_trampoline)
    +    loadVariable(get, m_done, t1, t0, t1)
    +
    +    # storeDoneAndJmpToGetValue puts the doneValue into t0
    +.getValueStart:
    +    # In 32 bits, following slow path if tags is not null, undefined, int32, boolean.
    +    # These satisfy the mask ~0x3.
    +    # Therefore if the mask is _not_ satisfied, we branch to the slow case.
    +    btinz t0, ~0x3, .getValueSlow
    +    btiz t1, 0x1, .notDone
    +    dispatch()
    +
    +.notDone:
    +    macro storeValueAndDispatch(vTag, vPayload)
    +        move vTag, t2
    +        move vPayload, t3
    +        get(m_value, t1)
    +        storei t2, TagOffset[cfr, t1, 8]
    +        storei t3, PayloadOffset[cfr, t1, 8]
    +        checkStackPointerAlignment(t0, 0xbaddb01e)
    +        dispatch()
    +    end
    +
    +    # Reload the next result tmp since the get_by_id above may have clobbered t3.
    +    loadVariable(get, m_value, t3, t0, t3)
    +    # We don't need to check if the iterator result is a cell here since we will have thrown an error before.
    +    performGetByIDHelper(OpIteratorNext, m_valueModeMetadata, m_valueProfile, .getValueSlow, size, metadata, storeValueAndDispatch)
    +
    +.getValueSlow:
    +    callSlowPath(_llint_slow_path_iterator_next_get_value)
    +    dispatch()
     end)
    26652858