Changeset 270208 in webkit


Timestamp:
Nov 27, 2020 4:02:55 PM
Author:
ysuzuki@apple.com
Message:

[JSC] Add wasm atomics instructions
https://bugs.webkit.org/show_bug.cgi?id=218954

Reviewed by Filip Pizlo.

JSTests:

  • wasm.yaml:
  • wasm/Builder.js:

(const._importMemoryContinuation):
(export.default.Builder.prototype._registerSectionBuilders.const.section.in.WASM.description.section.switch.section.case.string_appeared_here.this.section):

  • wasm/Builder_WebAssemblyBinary.js:

(const.putResizableLimits):
(const.emitters.Import):
(const.emitters.Memory):

  • wasm/function-tests/trap-load-shared.js:
  • wasm/function-tests/trap-store-shared.js:
  • wasm/stress/atomic-decrement.js: Added.

(i.agent.start.import.string_appeared_here.then):
(i.async error):

  • wasm/stress/atomic-increment.js: Added.

(i.agent.start.import.string_appeared_here.then):
(i.async error):

  • wasm/stress/memory-fence.js: Added.

(async try):
(catch):

  • wasm/threads-spec-tests/atomic-signed.wast.js: Added.
  • wasm/threads-spec-tests/atomic.wast.js: Added.
  • wasm/threads-spec-tests/memory.wast.js: Added.
  • wasm/threads-spec-tests/resources/atomic-signed.wast: Added.
  • wasm/wasm.json:

Source/JavaScriptCore:

This patch implements the wasm threads proposal's atomic operations[1] on X86_64 and ARM64. Currently, all ARM64 atomic operations are implemented using LL/SC.
Later, we will use ARM64 CAS instructions where possible, at least on ARM64E.

To make this easy to test, we also extend the jsc shell's worker support to allow transferring shared WebAssembly.Memory, so that wasm atomic operations can be
exercised across multiple workers in the jsc shell.

[1]: https://github.com/WebAssembly/threads

  • assembler/MacroAssemblerX86Common.h:

(JSC::MacroAssemblerX86Common::atomicXchg8):
(JSC::MacroAssemblerX86Common::atomicXchg16):
(JSC::MacroAssemblerX86Common::atomicXchg32):

  • b3/B3Kind.h:

(JSC::B3::Kind::hasTraps const):

  • b3/B3LowerToAir.cpp:
  • b3/B3Width.h:

(JSC::B3::bytesForWidth):

  • b3/testb3_8.cpp:

(testAtomicXchg):

  • bytecode/BytecodeList.rb:
  • interpreter/Register.h:

(JSC::Register::unboxedInt64 const):
(JSC::Register::asanUnsafeUnboxedInt64 const):

  • jsc.cpp:

(Message::releaseContents):
(Message::Message):
(JSC_DEFINE_HOST_FUNCTION):

  • llint/WebAssembly.asm:
  • offlineasm/arm64.rb:
  • offlineasm/instructions.rb:
  • offlineasm/x86.rb:
  • runtime/OptionsList.h:
  • wasm/WasmAirIRGenerator.cpp:

(JSC::Wasm::AirIRGenerator::appendEffectful):
(JSC::Wasm::accessWidth):
(JSC::Wasm::sizeOfAtomicOpMemoryAccess):
(JSC::Wasm::AirIRGenerator::fixupPointerPlusOffsetForAtomicOps):
(JSC::Wasm::AirIRGenerator::sanitizeAtomicResult):
(JSC::Wasm::AirIRGenerator::appendGeneralAtomic):
(JSC::Wasm::AirIRGenerator::appendStrongCAS):
(JSC::Wasm::AirIRGenerator::emitAtomicLoadOp):
(JSC::Wasm::AirIRGenerator::atomicLoad):
(JSC::Wasm::AirIRGenerator::emitAtomicStoreOp):
(JSC::Wasm::AirIRGenerator::atomicStore):
(JSC::Wasm::AirIRGenerator::emitAtomicBinaryRMWOp):
(JSC::Wasm::AirIRGenerator::atomicBinaryRMW):
(JSC::Wasm::AirIRGenerator::emitAtomicCompareExchange):
(JSC::Wasm::AirIRGenerator::atomicCompareExchange):
(JSC::Wasm::AirIRGenerator::atomicWait):
(JSC::Wasm::AirIRGenerator::atomicNotify):
(JSC::Wasm::AirIRGenerator::atomicFence):
(JSC::Wasm::AirIRGenerator::addCall):

  • wasm/WasmB3IRGenerator.cpp:

(JSC::Wasm::B3IRGenerator::emitCheckAndPreparePointer):
(JSC::Wasm::B3IRGenerator::memoryKind):
(JSC::Wasm::accessWidth):
(JSC::Wasm::sizeOfAtomicOpMemoryAccess):
(JSC::Wasm::B3IRGenerator::sanitizeAtomicResult):
(JSC::Wasm::B3IRGenerator::fixupPointerPlusOffsetForAtomicOps):
(JSC::Wasm::B3IRGenerator::emitAtomicLoadOp):
(JSC::Wasm::B3IRGenerator::atomicLoad):
(JSC::Wasm::B3IRGenerator::emitAtomicStoreOp):
(JSC::Wasm::B3IRGenerator::atomicStore):
(JSC::Wasm::B3IRGenerator::emitAtomicBinaryRMWOp):
(JSC::Wasm::B3IRGenerator::atomicBinaryRMW):
(JSC::Wasm::B3IRGenerator::emitAtomicCompareExchange):
(JSC::Wasm::B3IRGenerator::atomicCompareExchange):
(JSC::Wasm::B3IRGenerator::atomicWait):
(JSC::Wasm::B3IRGenerator::atomicNotify):
(JSC::Wasm::B3IRGenerator::atomicFence):
(JSC::Wasm::B3IRGenerator::addCall):

  • wasm/WasmFunctionParser.h:

(JSC::Wasm::FunctionParser<Context>::atomicLoad):
(JSC::Wasm::FunctionParser<Context>::atomicStore):
(JSC::Wasm::FunctionParser<Context>::atomicBinaryRMW):
(JSC::Wasm::FunctionParser<Context>::atomicCompareExchange):
(JSC::Wasm::FunctionParser<Context>::atomicWait):
(JSC::Wasm::FunctionParser<Context>::atomicNotify):
(JSC::Wasm::FunctionParser<Context>::atomicFence):
(JSC::Wasm::FunctionParser<Context>::parseExpression):
(JSC::Wasm::FunctionParser<Context>::parseUnreachableExpression):

  • wasm/WasmLLIntGenerator.cpp:

(JSC::Wasm::LLIntGenerator::atomicLoad):
(JSC::Wasm::LLIntGenerator::atomicStore):
(JSC::Wasm::LLIntGenerator::atomicBinaryRMW):
(JSC::Wasm::LLIntGenerator::atomicCompareExchange):
(JSC::Wasm::LLIntGenerator::atomicWait):
(JSC::Wasm::LLIntGenerator::atomicNotify):
(JSC::Wasm::LLIntGenerator::atomicFence):

  • wasm/WasmMemory.h:
  • wasm/WasmMemoryInformation.cpp:

(JSC::Wasm::MemoryInformation::MemoryInformation):

  • wasm/WasmMemoryInformation.h:

(JSC::Wasm::MemoryInformation::isShared const):

  • wasm/WasmOperations.cpp:

(JSC::Wasm::wait):
(JSC::Wasm::JSC_DEFINE_JIT_OPERATION):

  • wasm/WasmOperations.h:
  • wasm/WasmSectionParser.cpp:

(JSC::Wasm::SectionParser::parseResizableLimits):
(JSC::Wasm::SectionParser::parseTableHelper):
(JSC::Wasm::SectionParser::parseMemoryHelper):

  • wasm/WasmSectionParser.h:
  • wasm/WasmSlowPaths.cpp:

(JSC::LLInt::WASM_SLOW_PATH_DECL):

  • wasm/WasmSlowPaths.h:
  • wasm/generateWasm.py:

(isAtomic):
(isAtomicLoad):
(isAtomicStore):
(isAtomicBinaryRMW):
(memoryLog2Alignment):

  • wasm/generateWasmOpsHeader.py:

(atomicMemoryLoadMacroizer):
(atomicMemoryLoadMacroizer.modifier):
(atomicMemoryStoreMacroizer):
(atomicMemoryStoreMacroizer.modifier):
(atomicBinaryRMWMacroizer):
(atomicBinaryRMWMacroizer.modifier):
(memoryLog2AlignmentGenerator):
(atomicMemoryLog2AlignmentGenerator):
(ExtAtomicOpType):

  • wasm/js/JSWebAssemblyInstance.cpp:

(JSC::JSWebAssemblyInstance::tryCreate):

  • wasm/wasm.json:
Location:
trunk
Files:
9 added
38 edited

  • trunk/JSTests/ChangeLog

    r270043 r270208  
     12020-11-26  Yusuke Suzuki  <ysuzuki@apple.com>
     2
     3        [JSC] Add wasm atomics instructions
     4        https://bugs.webkit.org/show_bug.cgi?id=218954
     5
     6        Reviewed by Filip Pizlo.
     7
     8        * wasm.yaml:
     9        * wasm/Builder.js:
     10        (const._importMemoryContinuation):
     11        (export.default.Builder.prototype._registerSectionBuilders.const.section.in.WASM.description.section.switch.section.case.string_appeared_here.this.section):
     12        * wasm/Builder_WebAssemblyBinary.js:
     13        (const.putResizableLimits):
     14        (const.emitters.Import):
     15        (const.emitters.Memory):
     16        * wasm/function-tests/trap-load-shared.js:
     17        * wasm/function-tests/trap-store-shared.js:
     18        * wasm/stress/atomic-decrement.js: Added.
     19        (i.agent.start.import.string_appeared_here.then):
     20        (i.async error):
     21        * wasm/stress/atomic-increment.js: Added.
     22        (i.agent.start.import.string_appeared_here.then):
     23        (i.async error):
     24        * wasm/stress/memory-fence.js: Added.
     25        (async try):
     26        (catch):
     27        * wasm/threads-spec-tests/atomic-signed.wast.js: Added.
     28        * wasm/threads-spec-tests/atomic.wast.js: Added.
     29        * wasm/threads-spec-tests/memory.wast.js: Added.
     30        * wasm/threads-spec-tests/resources/atomic-signed.wast: Added.
     31        * wasm/wasm.json:
     32
    1332020-11-19  Xan López  <xan@igalia.com>
    234
  • trunk/JSTests/wasm.yaml

    r269831 r270208  
    6666  cmd: runWebAssemblySpecTest :normal
    6767
     68- path: wasm/threads-spec-tests/atomic.wast.js
     69  cmd: runWebAssemblySpecTest :normal
     70- path: wasm/threads-spec-tests/memory.wast.js
     71  cmd: runWebAssemblySpecTest :normal
     72- path: wasm/threads-spec-tests/atomic-signed.wast.js
     73  cmd: runWebAssemblySpecTest :normal
     74
    6875- path: wasm/spec-tests/address.wast.js
    6976  cmd: runWebAssemblySpecTest :normal
  • trunk/JSTests/wasm/Builder.js

    r269729 r270208  
    112112
    113113const _importMemoryContinuation = (builder, section, nextBuilder) => {
    114     return (module, field, {initial, maximum}) => {
     114    return (module, field, {initial, maximum, shared = false}) => {
    115115        assert.isString(module, `Import Memory module should be a string, got "${module}"`);
    116116        assert.isString(field, `Import Memory field should be a string, got "${field}"`);
    117         section.data.push({module, field, kind: "Memory", memoryDescription: {initial, maximum}});
     117        section.data.push({module, field, kind: "Memory", memoryDescription: {initial, maximum, shared}});
    118118        return _errorHandlingProxyFor(nextBuilder);
    119119    };
     
    523523                    const memoryBuilder = {
    524524                        End: () => this,
    525                         InitialMaxPages: (initial, maximum) => {
    526                             s.data.push({ initial, maximum });
     525                        InitialMaxPages: (initial, maximum, shared = false) => {
     526                            s.data.push({ initial, maximum, shared });
    527527                            return _errorHandlingProxyFor(memoryBuilder);
    528528                        }
  • trunk/JSTests/wasm/Builder_WebAssemblyBinary.js

    r269998 r270208  
    3030const put = (bin, type, value) => bin[type](value);
    3131
    32 const putResizableLimits = (bin, initial, maximum) => {
     32const putResizableLimits = (bin, initial, maximum, shared = false) => {
    3333    assert.truthy(typeof initial === "number", "We expect 'initial' to be an integer");
    34     let hasMaximum = 0;
     34    let flag = 0;
    3535    if (typeof maximum === "number") {
    36         hasMaximum = 1;
     36        flag |= 0b01;
    3737    } else {
    3838        assert.truthy(typeof maximum === "undefined", "We expect 'maximum' to be an integer if it's defined");
    3939    }
    40 
    41     put(bin, "uint8", hasMaximum);
     40    if (shared) {
     41        assert.truthy(typeof maximum === "number", "We expect 'maximum' to be an integer if shared is true");
     42        flag |= 0b10;
     43    }
     44
     45    put(bin, "uint8", flag);
    4246    put(bin, "varuint32", initial);
    43     if (hasMaximum)
     47    if (flag & 0b01)
    4448        put(bin, "varuint32", maximum);
    4549};
     
    133137            }
    134138            case "Memory": {
    135                 let {initial, maximum} = entry.memoryDescription;
    136                 putResizableLimits(bin, initial, maximum);
     139                let {initial, maximum, shared} = entry.memoryDescription;
     140                putResizableLimits(bin, initial, maximum, shared);
    137141                break;
    138142            };
     
    161165        put(bin, "varuint1", section.data.length);
    162166        for (const memory of section.data)
    163             putResizableLimits(bin, memory.initial, memory.maximum);
     167            putResizableLimits(bin, memory.initial, memory.maximum, memory.shared);
    164168    },
    165169
  • trunk/JSTests/wasm/function-tests/trap-load-shared.js

    r269974 r270208  
    88    .Type().End()
    99    .Import()
    10         .Memory("a", "b", {initial: numPages})
     10        .Memory("a", "b", {initial: numPages, maximum: numPages * 2, shared: true})
    1111    .End()
    1212    .Function().End()
  • trunk/JSTests/wasm/function-tests/trap-store-shared.js

    r269974 r270208  
    99        .Type().End()
    1010        .Import()
    11             .Memory("a", "b", {initial: numPages})
     11            .Memory("a", "b", {initial: numPages, maximum: numPages * 2, shared: true})
    1212        .End()
    1313        .Function().End()
  • trunk/JSTests/wasm/wasm.json

    r269998 r270208  
    224224        "i64.reinterpret/f64": { "category": "conversion", "value": 189, "return": ["i64"],                          "parameter": ["f64"],                        "immediate": [], "b3op": "BitwiseCast"  },
    225225        "i32.extend8_s":       { "category": "conversion", "value": 192, "return": ["i32"],                          "parameter": ["i32"],                        "immediate": [], "b3op": "SExt8"        },
    226         "i32.extend16_s":      { "category": "conversion", "value": 193, "return": ["i32"],                          "parameter": ["i32"],                        "immediate": [], "b3op": "SExt16"       }
     226        "i32.extend16_s":      { "category": "conversion", "value": 193, "return": ["i32"],                          "parameter": ["i32"],                        "immediate": [], "b3op": "SExt16"       },
     227
     228        "memory.atomic.notify":       { "category": "atomic",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp":  0 },
     229        "memory.atomic.wait32":       { "category": "atomic",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp":  1 },
     230        "memory.atomic.wait64":       { "category": "atomic",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp":  2 },
     231        "atomic.fence":               { "category": "atomic",     "value": 254, "return": [],           "parameter": [],                       "immediate": [{"name": "flags",          "type": "uint8"    }], "extendedOp":  3 },
     232        "i32.atomic.load":            { "category": "atomic.load",     "value": 254, "return": ["i32"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 16 },
     233        "i64.atomic.load":            { "category": "atomic.load",     "value": 254, "return": ["i64"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 17 },
     234        "i32.atomic.load8_u":         { "category": "atomic.load",     "value": 254, "return": ["i32"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 18 },
     235        "i32.atomic.load16_u":        { "category": "atomic.load",     "value": 254, "return": ["i32"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 19 },
     236        "i64.atomic.load8_u":         { "category": "atomic.load",     "value": 254, "return": ["i64"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 20 },
     237        "i64.atomic.load16_u":        { "category": "atomic.load",     "value": 254, "return": ["i64"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 21 },
     238        "i64.atomic.load32_u":        { "category": "atomic.load",     "value": 254, "return": ["i64"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 22 },
     239        "i32.atomic.store":           { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 23 },
     240        "i64.atomic.store":           { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 24 },
     241        "i32.atomic.store8_u":        { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 25 },
     242        "i32.atomic.store16_u":       { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 26 },
     243        "i64.atomic.store8_u":        { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 27 },
     244        "i64.atomic.store16_u":       { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 28 },
     245        "i64.atomic.store32_u":       { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 29 },
     246        "i32.atomic.rmw.add":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 30 },
     247        "i64.atomic.rmw.add":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 31 },
     248        "i32.atomic.rmw8.add_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 32 },
     249        "i32.atomic.rmw16.add_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 33 },
     250        "i64.atomic.rmw8.add_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 34 },
     251        "i64.atomic.rmw16.add_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 35 },
     252        "i64.atomic.rmw32.add_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 36 },
     253        "i32.atomic.rmw.sub":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 37 },
     254        "i64.atomic.rmw.sub":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 38 },
     255        "i32.atomic.rmw8.sub_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 39 },
     256        "i32.atomic.rmw16.sub_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 40 },
     257        "i64.atomic.rmw8.sub_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 41 },
     258        "i64.atomic.rmw16.sub_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 42 },
     259        "i64.atomic.rmw32.sub_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 43 },
     260        "i32.atomic.rmw.and":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 44 },
     261        "i64.atomic.rmw.and":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 45 },
     262        "i32.atomic.rmw8.and_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 46 },
     263        "i32.atomic.rmw16.and_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 47 },
     264        "i64.atomic.rmw8.and_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 48 },
     265        "i64.atomic.rmw16.and_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 49 },
     266        "i64.atomic.rmw32.and_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 50 },
     267        "i32.atomic.rmw.or":          { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 51 },
     268        "i64.atomic.rmw.or":          { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 52 },
     269        "i32.atomic.rmw8.or_u":       { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 53 },
     270        "i32.atomic.rmw16.or_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 54 },
     271        "i64.atomic.rmw8.or_u":       { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 55 },
     272        "i64.atomic.rmw16.or_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 56 },
     273        "i64.atomic.rmw32.or_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 57 },
     274        "i32.atomic.rmw.xor":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 58 },
     275        "i64.atomic.rmw.xor":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 59 },
     276        "i32.atomic.rmw8.xor_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 60 },
     277        "i32.atomic.rmw16.xor_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 61 },
     278        "i64.atomic.rmw8.xor_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 62 },
     279        "i64.atomic.rmw16.xor_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 63 },
     280        "i64.atomic.rmw32.xor_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 64 },
     281        "i32.atomic.rmw.xchg":        { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 65 },
     282        "i64.atomic.rmw.xchg":        { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 66 },
     283        "i32.atomic.rmw8.xchg_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 67 },
     284        "i32.atomic.rmw16.xchg_u":    { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 68 },
     285        "i64.atomic.rmw8.xchg_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 69 },
     286        "i64.atomic.rmw16.xchg_u":    { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 70 },
     287        "i64.atomic.rmw32.xchg_u":    { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 71 },
     288        "i32.atomic.rmw.cmpxchg":     { "category": "atomic.rmw",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32", "i32"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 72 },
     289        "i64.atomic.rmw.cmpxchg":     { "category": "atomic.rmw",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 73 },
     290        "i32.atomic.rmw8.cmpxchg_u":  { "category": "atomic.rmw",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32", "i32"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 74 },
     291        "i32.atomic.rmw16.cmpxchg_u": { "category": "atomic.rmw",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32", "i32"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 75 },
     292        "i64.atomic.rmw8.cmpxchg_u":  { "category": "atomic.rmw",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 76 },
     293        "i64.atomic.rmw16.cmpxchg_u": { "category": "atomic.rmw",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 77 },
     294        "i64.atomic.rmw32.cmpxchg_u": { "category": "atomic.rmw",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 78 }
    227295    }
    228296}
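All of the wasm.json entries above share `"value": 254` — the threads proposal's 0xFE prefix byte — with the actual operation selected by `extendedOp`, followed by two varuint32 immediates (alignment flags, then offset). A minimal sketch of how such an instruction would be serialized; `encodeLEB128`/`encodeAtomicOp` are illustrative helpers, not the Builder's real API:

```javascript
// Sketch: serialize an atomic instruction per the wasm.json entries above.
// 0xFE is the threads-proposal prefix ("value": 254); extendedOp picks the
// operation, then two varuint32 (LEB128) immediates follow: flags, offset.
function encodeLEB128(value) {
    const bytes = [];
    do {
        let byte = value & 0x7f;
        value >>>= 7;
        if (value !== 0)
            byte |= 0x80; // continuation bit
        bytes.push(byte);
    } while (value !== 0);
    return bytes;
}

function encodeAtomicOp(extendedOp, alignmentFlags, offset) {
    return [0xfe, ...encodeLEB128(extendedOp), ...encodeLEB128(alignmentFlags), ...encodeLEB128(offset)];
}

// i32.atomic.rmw.cmpxchg has extendedOp 72 (see the table above);
// align flag 2 means natural 4-byte alignment, with a memory offset of 16.
const bytes = encodeAtomicOp(72, 2, 16);
```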
  • trunk/Source/JavaScriptCore/ChangeLog

    r270066 r270208  
     12020-11-26  Yusuke Suzuki  <ysuzuki@apple.com>
     2
     3        [JSC] Add wasm atomics instructions
     4        https://bugs.webkit.org/show_bug.cgi?id=218954
     5
     6        Reviewed by Filip Pizlo.
     7
      8        This patch implements the wasm threads proposal's atomic operations[1] on X86_64 and ARM64. Currently, all ARM64 atomic operations are implemented using LL/SC.
      9        Later, we will switch to ARM64 CAS instructions where possible, at least on ARM64E.
     10
      11        To make this easy to test, we also extend the jsc shell's worker support so that a shared WebAssembly.Memory can be transferred between workers, letting us exercise
      12        wasm atomic operations across several workers in the jsc shell.
     13
     14        [1]: https://github.com/WebAssembly/threads
     15
     16        * assembler/MacroAssemblerX86Common.h:
     17        (JSC::MacroAssemblerX86Common::atomicXchg8):
     18        (JSC::MacroAssemblerX86Common::atomicXchg16):
     19        (JSC::MacroAssemblerX86Common::atomicXchg32):
     20        * b3/B3Kind.h:
     21        (JSC::B3::Kind::hasTraps const):
     22        * b3/B3LowerToAir.cpp:
     23        * b3/B3Width.h:
     24        (JSC::B3::bytesForWidth):
     25        * b3/testb3_8.cpp:
     26        (testAtomicXchg):
     27        * bytecode/BytecodeList.rb:
     28        * interpreter/Register.h:
     29        (JSC::Register::unboxedInt64 const):
     30        (JSC::Register::asanUnsafeUnboxedInt64 const):
     31        * jsc.cpp:
     32        (Message::releaseContents):
     33        (Message::Message):
     34        (JSC_DEFINE_HOST_FUNCTION):
     35        * llint/WebAssembly.asm:
     36        * offlineasm/arm64.rb:
     37        * offlineasm/instructions.rb:
     38        * offlineasm/x86.rb:
     39        * runtime/OptionsList.h:
     40        * wasm/WasmAirIRGenerator.cpp:
     41        (JSC::Wasm::AirIRGenerator::appendEffectful):
     42        (JSC::Wasm::accessWidth):
     43        (JSC::Wasm::sizeOfAtomicOpMemoryAccess):
     44        (JSC::Wasm::AirIRGenerator::fixupPointerPlusOffsetForAtomicOps):
     45        (JSC::Wasm::AirIRGenerator::sanitizeAtomicResult):
     46        (JSC::Wasm::AirIRGenerator::appendGeneralAtomic):
     47        (JSC::Wasm::AirIRGenerator::appendStrongCAS):
     48        (JSC::Wasm::AirIRGenerator::emitAtomicLoadOp):
     49        (JSC::Wasm::AirIRGenerator::atomicLoad):
     50        (JSC::Wasm::AirIRGenerator::emitAtomicStoreOp):
     51        (JSC::Wasm::AirIRGenerator::atomicStore):
     52        (JSC::Wasm::AirIRGenerator::emitAtomicBinaryRMWOp):
     53        (JSC::Wasm::AirIRGenerator::atomicBinaryRMW):
     54        (JSC::Wasm::AirIRGenerator::emitAtomicCompareExchange):
     55        (JSC::Wasm::AirIRGenerator::atomicCompareExchange):
     56        (JSC::Wasm::AirIRGenerator::atomicWait):
     57        (JSC::Wasm::AirIRGenerator::atomicNotify):
     58        (JSC::Wasm::AirIRGenerator::atomicFence):
     59        (JSC::Wasm::AirIRGenerator::addCall):
     60        * wasm/WasmB3IRGenerator.cpp:
     61        (JSC::Wasm::B3IRGenerator::emitCheckAndPreparePointer):
     62        (JSC::Wasm::B3IRGenerator::memoryKind):
     63        (JSC::Wasm::accessWidth):
     64        (JSC::Wasm::sizeOfAtomicOpMemoryAccess):
     65        (JSC::Wasm::B3IRGenerator::sanitizeAtomicResult):
     66        (JSC::Wasm::B3IRGenerator::fixupPointerPlusOffsetForAtomicOps):
     67        (JSC::Wasm::B3IRGenerator::emitAtomicLoadOp):
     68        (JSC::Wasm::B3IRGenerator::atomicLoad):
     69        (JSC::Wasm::B3IRGenerator::emitAtomicStoreOp):
     70        (JSC::Wasm::B3IRGenerator::atomicStore):
     71        (JSC::Wasm::B3IRGenerator::emitAtomicBinaryRMWOp):
     72        (JSC::Wasm::B3IRGenerator::atomicBinaryRMW):
     73        (JSC::Wasm::B3IRGenerator::emitAtomicCompareExchange):
     74        (JSC::Wasm::B3IRGenerator::atomicCompareExchange):
     75        (JSC::Wasm::B3IRGenerator::atomicWait):
     76        (JSC::Wasm::B3IRGenerator::atomicNotify):
     77        (JSC::Wasm::B3IRGenerator::atomicFence):
     78        (JSC::Wasm::B3IRGenerator::addCall):
     79        * wasm/WasmFunctionParser.h:
     80        (JSC::Wasm::FunctionParser<Context>::atomicLoad):
     81        (JSC::Wasm::FunctionParser<Context>::atomicStore):
     82        (JSC::Wasm::FunctionParser<Context>::atomicBinaryRMW):
     83        (JSC::Wasm::FunctionParser<Context>::atomicCompareExchange):
     84        (JSC::Wasm::FunctionParser<Context>::atomicWait):
     85        (JSC::Wasm::FunctionParser<Context>::atomicNotify):
     86        (JSC::Wasm::FunctionParser<Context>::atomicFence):
     87        (JSC::Wasm::FunctionParser<Context>::parseExpression):
     88        (JSC::Wasm::FunctionParser<Context>::parseUnreachableExpression):
     89        * wasm/WasmLLIntGenerator.cpp:
     90        (JSC::Wasm::LLIntGenerator::atomicLoad):
     91        (JSC::Wasm::LLIntGenerator::atomicStore):
     92        (JSC::Wasm::LLIntGenerator::atomicBinaryRMW):
     93        (JSC::Wasm::LLIntGenerator::atomicCompareExchange):
     94        (JSC::Wasm::LLIntGenerator::atomicWait):
     95        (JSC::Wasm::LLIntGenerator::atomicNotify):
     96        (JSC::Wasm::LLIntGenerator::atomicFence):
     97        * wasm/WasmMemory.h:
     98        * wasm/WasmMemoryInformation.cpp:
     99        (JSC::Wasm::MemoryInformation::MemoryInformation):
     100        * wasm/WasmMemoryInformation.h:
     101        (JSC::Wasm::MemoryInformation::isShared const):
     102        * wasm/WasmOperations.cpp:
     103        (JSC::Wasm::wait):
     104        (JSC::Wasm::JSC_DEFINE_JIT_OPERATION):
     105        * wasm/WasmOperations.h:
     106        * wasm/WasmSectionParser.cpp:
     107        (JSC::Wasm::SectionParser::parseResizableLimits):
     108        (JSC::Wasm::SectionParser::parseTableHelper):
     109        (JSC::Wasm::SectionParser::parseMemoryHelper):
     110        * wasm/WasmSectionParser.h:
     111        * wasm/WasmSlowPaths.cpp:
     112        (JSC::LLInt::WASM_SLOW_PATH_DECL):
     113        * wasm/WasmSlowPaths.h:
     114        * wasm/generateWasm.py:
     115        (isAtomic):
     116        (isAtomicLoad):
     117        (isAtomicStore):
     118        (isAtomicBinaryRMW):
     119        (memoryLog2Alignment):
     120        * wasm/generateWasmOpsHeader.py:
     121        (atomicMemoryLoadMacroizer):
     122        (atomicMemoryLoadMacroizer.modifier):
     123        (atomicMemoryStoreMacroizer):
     124        (atomicMemoryStoreMacroizer.modifier):
     125        (atomicBinaryRMWMacroizer):
     126        (atomicBinaryRMWMacroizer.modifier):
     127        (memoryLog2AlignmentGenerator):
     128        (atomicMemoryLog2AlignmentGenerator):
     129        (ExtAtomicOpType):
     130        * wasm/js/JSWebAssemblyInstance.cpp:
     131        (JSC::JSWebAssemblyInstance::tryCreate):
     132        * wasm/wasm.json:
     133
    11342020-11-19  Yusuke Suzuki  <ysuzuki@apple.com>
    2135
  • trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h

    r269349 r270208  
    38463846    void atomicXchg8(RegisterID reg, Address address)
    38473847    {
    3848         m_assembler.lock();
    38493848        m_assembler.xchgb_rm(reg, address.offset, address.base);
    38503849    }
     
    38523851    void atomicXchg8(RegisterID reg, BaseIndex address)
    38533852    {
    3854         m_assembler.lock();
    38553853        m_assembler.xchgb_rm(reg, address.offset, address.base, address.index, address.scale);
    38563854    }
     
    38583856    void atomicXchg16(RegisterID reg, Address address)
    38593857    {
    3860         m_assembler.lock();
    38613858        m_assembler.xchgw_rm(reg, address.offset, address.base);
    38623859    }
     
    38643861    void atomicXchg16(RegisterID reg, BaseIndex address)
    38653862    {
    3866         m_assembler.lock();
    38673863        m_assembler.xchgw_rm(reg, address.offset, address.base, address.index, address.scale);
    38683864    }
     
    38703866    void atomicXchg32(RegisterID reg, Address address)
    38713867    {
    3872         m_assembler.lock();
    38733868        m_assembler.xchgl_rm(reg, address.offset, address.base);
    38743869    }
     
    38763871    void atomicXchg32(RegisterID reg, BaseIndex address)
    38773872    {
    3878         m_assembler.lock();
    38793873        m_assembler.xchgl_rm(reg, address.offset, address.base, address.index, address.scale);
    38803874    }
  • trunk/Source/JavaScriptCore/b3/B3Kind.h

    r264488 r270208  
    127127        case Store16:
    128128        case Store:
     129        case AtomicWeakCAS:
     130        case AtomicStrongCAS:
     131        case AtomicXchgAdd:
     132        case AtomicXchgAnd:
     133        case AtomicXchgOr:
     134        case AtomicXchgSub:
     135        case AtomicXchgXor:
     136        case AtomicXchg:
    129137            return true;
    130138        default:
  • trunk/Source/JavaScriptCore/b3/B3LowerToAir.cpp

    r265074 r270208  
    23532353       
    23542354        if (isValidForm(atomicOpcode, Arg::Imm, address.kind()) && imm(m_value->child(0))) {
    2355             append(atomicOpcode, imm(m_value->child(0)), address);
     2355            appendTrapping(atomicOpcode, imm(m_value->child(0)), address);
    23562356            return true;
    23572357        }
    23582358       
    23592359        if (isValidForm(atomicOpcode, Arg::Tmp, address.kind())) {
    2360             append(atomicOpcode, tmp(m_value->child(0)), address);
     2360            appendTrapping(atomicOpcode, tmp(m_value->child(0)), address);
    23612361            return true;
    23622362        }
     
    35953595            if (isValidForm(opcode, Arg::Tmp, address.kind())) {
    35963596                append(relaxedMoveForType(atomic->type()), tmp(atomic->child(0)), tmp(atomic));
    3597                 append(opcode, tmp(atomic), address);
     3597                appendTrapping(opcode, tmp(atomic), address);
    35983598                return;
    35993599            }
     
    36463646            if (isValidForm(opcode, Arg::Tmp, address.kind())) {
    36473647                append(relaxedMoveForType(atomic->type()), tmp(atomic->child(0)), tmp(atomic));
    3648                 append(opcode, tmp(atomic), address);
     3648                appendTrapping(opcode, tmp(atomic), address);
    36493649                return;
    36503650            }
  • trunk/Source/JavaScriptCore/b3/B3Width.h

    r262746 r270208  
    3838
    3939enum Width : int8_t {
    40     Width8,
     40    Width8 = 0,
    4141    Width16,
    4242    Width32,
     
    112112}
    113113
     114inline unsigned bytesForWidth(Width width)
     115{
     116    switch (width) {
     117    case Width8:
     118        return 1;
     119    case Width16:
     120        return 2;
     121    case Width32:
     122        return 4;
     123    case Width64:
     124        return 8;
     125    }
     126    return 1;
     127}
     128
    114129inline uint64_t mask(Width width)
    115130{
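With Width8 now pinned to 0, the enum values and the new bytesForWidth() are related by a power of two: bytesForWidth(w) == 1 &lt;&lt; w. The C++ switch in the patch spells the mapping out case by case; a small sketch of the same invariant (illustrative only):

```javascript
// Mirrors B3's Width enum now that Width8 = 0: Width8..Width64 are 0..3,
// and bytesForWidth returns 1, 2, 4, 8 respectively — i.e. 1 << width.
const Width8 = 0, Width16 = 1, Width32 = 2, Width64 = 3;

function bytesForWidth(width) {
    return 1 << width;
}
```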
  • trunk/Source/JavaScriptCore/b3/testb3_8.cpp

    r249914 r270208  
    590590
    591591    auto checkMyDisassembly = [&] (Compilation& compilation, bool fenced) {
    592         if (isX86())
    593             checkUsesInstruction(compilation, "lock");
    594         else {
     592        if (isX86()) {
      593            // AtomicXchg can be lowered to "xchg" without an explicit "lock" prefix; this is OK because the LOCK signal is asserted automatically when "xchg" references memory.
     594            if (AtomicXchg != opcode)
     595                checkUsesInstruction(compilation, "lock");
     596        } else {
    595597            if (fenced) {
    596598                checkUsesInstruction(compilation, "ldax");
  • trunk/Source/JavaScriptCore/bytecode/BytecodeList.rb

    r269694 r270208  
    16921692    }
    16931693
     1694op_group :AtomicBinaryRMW,
     1695    [
     1696        "add",
     1697        "sub",
     1698        "and",
     1699        "or",
     1700        "xor",
     1701        "xchg",
     1702    ].flat_map {|op|
     1703        [
     1704            "i64_atomic_rmw_#{op}",
     1705            "i64_atomic_rmw8_#{op}_u",
     1706            "i64_atomic_rmw16_#{op}_u",
     1707            "i64_atomic_rmw32_#{op}_u",
     1708        ]
     1709    }.map {|op| op.to_sym },
     1710    args: {
     1711        dst: VirtualRegister,
     1712        pointer: VirtualRegister,
     1713        offset: unsigned,
     1714        value: VirtualRegister,
     1715    }
     1716
     1717op_group :AtomicCompareExchange,
     1718    [
     1719        :i64_atomic_rmw_cmpxchg,
     1720        :i64_atomic_rmw8_cmpxchg_u,
     1721        :i64_atomic_rmw16_cmpxchg_u,
     1722        :i64_atomic_rmw32_cmpxchg_u,
     1723    ],
     1724    args: {
     1725        dst: VirtualRegister,
     1726        pointer: VirtualRegister,
     1727        offset: unsigned,
     1728        expected: VirtualRegister,
     1729        value: VirtualRegister,
     1730    }
     1731
     1732op_group :AtomicWait,
     1733    [
     1734        :memory_atomic_wait32,
     1735        :memory_atomic_wait64,
     1736    ],
     1737    args: {
     1738        dst: VirtualRegister,
     1739        pointer: VirtualRegister,
     1740        offset: unsigned,
     1741        value: VirtualRegister,
     1742        timeout: VirtualRegister,
     1743    }
     1744
     1745op :memory_atomic_notify,
     1746    args: {
     1747        dst: VirtualRegister,
     1748        pointer: VirtualRegister,
     1749        offset: unsigned,
     1750        count: VirtualRegister,
     1751    }
     1752
     1753op :atomic_fence,
     1754    args: {
     1755    }
     1756
    16941757end_section :Wasm
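The AtomicBinaryRMW op_group above uses flat_map to expand 6 RMW operations across 4 access widths into 24 LLInt opcodes. The same name expansion, sketched in JavaScript (names only; the generated bytecode structs are unaffected):

```javascript
// Sketch of the flat_map expansion in the op_group above:
// 6 RMW operations x 4 access widths = 24 opcode names.
const ops = ["add", "sub", "and", "or", "xor", "xchg"];
const names = ops.flatMap(op => [
    `i64_atomic_rmw_${op}`,
    `i64_atomic_rmw8_${op}_u`,
    `i64_atomic_rmw16_${op}_u`,
    `i64_atomic_rmw32_${op}_u`,
]);
```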
  • trunk/Source/JavaScriptCore/interpreter/Register.h

    r251886 r270208  
    6969        int64_t unboxedStrictInt52() const;
    7070        int64_t asanUnsafeUnboxedStrictInt52() const;
     71        int64_t unboxedInt64() const;
     72        int64_t asanUnsafeUnboxedInt64() const;
    7173        bool unboxedBoolean() const;
    7274        double unboxedDouble() const;
     
    166168    }
    167169
     170    ALWAYS_INLINE int64_t Register::unboxedInt64() const
     171    {
     172        return u.integer;
     173    }
     174
     175    SUPPRESS_ASAN ALWAYS_INLINE int64_t Register::asanUnsafeUnboxedInt64() const
     176    {
     177        return u.integer;
     178    }
     179
    168180    ALWAYS_INLINE bool Register::unboxedBoolean() const
    169181    {
  • trunk/Source/JavaScriptCore/jsc.cpp

    r269531 r270208  
    202202class Message : public ThreadSafeRefCounted<Message> {
    203203public:
    204     Message(ArrayBufferContents&&, int32_t);
     204#if ENABLE(WEBASSEMBLY)
     205    using Content = Variant<ArrayBufferContents, Ref<Wasm::MemoryHandle>>;
     206#else
     207    using Content = Variant<ArrayBufferContents>;
     208#endif
     209    Message(Content&&, int32_t);
    205210    ~Message();
    206211   
    207     ArrayBufferContents&& releaseContents() { return WTFMove(m_contents); }
     212    Content&& releaseContents() { return WTFMove(m_contents); }
    208213    int32_t index() const { return m_index; }
    209214
    210215private:
    211     ArrayBufferContents m_contents;
     216    Content m_contents;
    212217    int32_t m_index { 0 };
    213218};
     
    17851790}
    17861791
    1787 Message::Message(ArrayBufferContents&& contents, int32_t index)
     1792Message::Message(Content&& contents, int32_t index)
    17881793    : m_contents(WTFMove(contents))
    17891794    , m_index(index)
     
    19471952    if (isGigacageMemoryExhausted(Gigacage::JSValue) || isGigacageMemoryExhausted(Gigacage::Primitive))
    19481953        return JSValue::encode(throwOutOfMemoryError(globalObject, scope, "Gigacage is exhausted"_s));
     1954
     1955    String workerPath = "worker"_s;
     1956    if (!callFrame->argument(1).isUndefined()) {
     1957        workerPath = callFrame->argument(1).toWTFString(globalObject);
     1958        RETURN_IF_EXCEPTION(scope, encodedJSValue());
     1959    }
    19491960   
    19501961    Thread::create(
    19511962        "JSC Agent",
    1952         [sourceCode = sourceCode.isolatedCopy(), &didStartLock, &didStartCondition, &didStart] () {
     1963        [sourceCode = sourceCode.isolatedCopy(), workerPath = workerPath.isolatedCopy(), &didStartLock, &didStartCondition, &didStart] () {
    19531964            CommandLine commandLine(CommandLine::CommandLineForWorkers);
    19541965            commandLine.m_interactive = false;
     
    19651976                    NakedPtr<Exception> evaluationException;
    19661977                    JSValue result;
    1967                     result = evaluate(globalObject, jscSource(sourceCode, SourceOrigin(URL({ }, "worker"_s))), JSValue(), evaluationException);
     1978                    result = evaluate(globalObject, jscSource(sourceCode, SourceOrigin(URL({ }, workerPath))), JSValue(), evaluationException);
    19681979                    if (evaluationException)
    19691980                        result = evaluationException->value();
     
    19982009        message = Worker::current().dequeue();
    19992010    }
    2000    
    2001     auto nativeBuffer = ArrayBuffer::create(message->releaseContents());
    2002     ArrayBufferSharingMode sharingMode = nativeBuffer->sharingMode();
    2003     JSArrayBuffer* jsBuffer = JSArrayBuffer::create(vm, globalObject->arrayBufferStructure(sharingMode), WTFMove(nativeBuffer));
    2004    
     2011
     2012    auto content = message->releaseContents();
     2013    JSValue result = ([&]() -> JSValue {
     2014        if (WTF::holds_alternative<ArrayBufferContents>(content)) {
     2015            auto nativeBuffer = ArrayBuffer::create(WTF::get<ArrayBufferContents>(WTFMove(content)));
     2016            ArrayBufferSharingMode sharingMode = nativeBuffer->sharingMode();
     2017            return JSArrayBuffer::create(vm, globalObject->arrayBufferStructure(sharingMode), WTFMove(nativeBuffer));
     2018        }
     2019#if ENABLE(WEBASSEMBLY)
     2020        if (WTF::holds_alternative<Ref<Wasm::MemoryHandle>>(content)) {
     2021            JSWebAssemblyMemory* jsMemory = JSC::JSWebAssemblyMemory::tryCreate(globalObject, vm, globalObject->webAssemblyMemoryStructure());
     2022            scope.releaseAssertNoException();
     2023            Ref<Wasm::Memory> memory = Wasm::Memory::create(WTF::get<Ref<Wasm::MemoryHandle>>(WTFMove(content)),
     2024                [&vm] (Wasm::Memory::NotifyPressure) { vm.heap.collectAsync(CollectionScope::Full); },
     2025                [&vm] (Wasm::Memory::SyncTryToReclaim) { vm.heap.collectSync(CollectionScope::Full); },
     2026                [&vm, jsMemory] (Wasm::Memory::GrowSuccess, Wasm::PageCount oldPageCount, Wasm::PageCount newPageCount) { jsMemory->growSuccessCallback(vm, oldPageCount, newPageCount); });
     2027            jsMemory->adopt(WTFMove(memory));
     2028            return jsMemory;
     2029        }
     2030#endif
     2031        return jsUndefined();
     2032    })();
     2033
    20052034    MarkedArgumentBuffer args;
    2006     args.append(jsBuffer);
     2035    args.append(result);
    20072036    args.append(jsNumber(message->index()));
    20082037    if (UNLIKELY(args.hasOverflowed()))
     
    20422071    auto scope = DECLARE_THROW_SCOPE(vm);
    20432072
    2044     JSArrayBuffer* jsBuffer = jsDynamicCast<JSArrayBuffer*>(vm, callFrame->argument(0));
    2045     if (!jsBuffer || !jsBuffer->isShared())
    2046         return JSValue::encode(throwException(globalObject, scope, createError(globalObject, "Expected SharedArrayBuffer"_s)));
    2047    
    20482073    int32_t index = callFrame->argument(1).toInt32(globalObject);
    20492074    RETURN_IF_EXCEPTION(scope, encodedJSValue());
    2050    
    2051     Workers::singleton().broadcast(
    2052         [&] (const AbstractLocker& locker, Worker& worker) {
    2053             ArrayBuffer* nativeBuffer = jsBuffer->impl();
    2054             ArrayBufferContents contents;
    2055             nativeBuffer->transferTo(vm, contents); // "transferTo" means "share" if the buffer is shared.
    2056             RefPtr<Message> message = adoptRef(new Message(WTFMove(contents), index));
    2057             worker.enqueue(locker, message);
    2058         });
    2059    
    2060     return JSValue::encode(jsUndefined());
     2075
     2076    JSArrayBuffer* jsBuffer = jsDynamicCast<JSArrayBuffer*>(vm, callFrame->argument(0));
     2077    if (jsBuffer && jsBuffer->isShared()) {
     2078        Workers::singleton().broadcast(
     2079            [&] (const AbstractLocker& locker, Worker& worker) {
     2080                ArrayBuffer* nativeBuffer = jsBuffer->impl();
     2081                ArrayBufferContents contents;
     2082                nativeBuffer->transferTo(vm, contents); // "transferTo" means "share" if the buffer is shared.
     2083                RefPtr<Message> message = adoptRef(new Message(WTFMove(contents), index));
     2084                worker.enqueue(locker, message);
     2085            });
     2086        return JSValue::encode(jsUndefined());
     2087    }
     2088
     2089#if ENABLE(WEBASSEMBLY)
     2090    JSWebAssemblyMemory* memory = jsDynamicCast<JSWebAssemblyMemory*>(vm, callFrame->argument(0));
     2091    if (memory && memory->memory().sharingMode() == Wasm::MemorySharingMode::Shared) {
     2092        Workers::singleton().broadcast(
     2093            [&] (const AbstractLocker& locker, Worker& worker) {
     2094                Ref<Wasm::MemoryHandle> handle { memory->memory().handle() };
     2095                RefPtr<Message> message = adoptRef(new Message(WTFMove(handle), index));
     2096                worker.enqueue(locker, message);
     2097            });
     2098        return JSValue::encode(jsUndefined());
     2099    }
     2100#endif
     2101
     2102    return JSValue::encode(throwException(globalObject, scope, createError(globalObject, "Not supported object"_s)));
    20612103}
    20622104
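The jsc.cpp changes above let the shell's broadcast accept a shared WebAssembly.Memory in addition to a SharedArrayBuffer: the sender grabs the underlying Wasm::MemoryHandle, and each receiving worker re-wraps it in a fresh JSWebAssemblyMemory. A hedged sketch of the kind of object that becomes transferable (the $.agent API is assumed from the new stress tests; the memory construction itself is standard WebAssembly):

```javascript
// A shared WebAssembly.Memory is backed by a SharedArrayBuffer; this is
// the kind of object the extended broadcast path can now hand to workers,
// so wasm atomics in different workers operate on the same memory.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
const isShared = memory.buffer instanceof SharedArrayBuffer;
```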
  • trunk/Source/JavaScriptCore/llint/WebAssembly.asm

    r269974 r270208  
    567567slowWasmOp(set_global_ref)
    568568slowWasmOp(set_global_ref_portable_binding)
     569slowWasmOp(memory_atomic_wait32)
     570slowWasmOp(memory_atomic_wait64)
     571slowWasmOp(memory_atomic_notify)
    569572
    570573wasmOp(grow_memory, WasmGrowMemory, macro(ctx)
     
    883886end
    884887
     888macro emitCheckAndPreparePointerAddingOffset(ctx, pointer, offset, size)
     889    leap size - 1[pointer, offset], t5
     890    bpb t5, boundsCheckingSize, .continuation
     891.throw:
     892    throwException(OutOfBoundsMemoryAccess)
     893.continuation:
     894    addp memoryBase, pointer
     895    addp offset, pointer
     896end
     897
     898macro emitCheckAndPreparePointerAddingOffsetWithAlignmentCheck(ctx, pointer, offset, size)
     899    leap size - 1[pointer, offset], t5
     900    bpb t5, boundsCheckingSize, .continuation
     901.throw:
     902    throwException(OutOfBoundsMemoryAccess)
     903.continuation:
     904    addp memoryBase, pointer
     905    addp offset, pointer
     906    btpnz pointer, (size - 1), .throw
     907end
     908
    885909macro wasmLoadOp(name, struct, size, fn)
    886910    wasmOp(name, struct, macro(ctx)
     
    21432167    dispatch(ctx)
    21442168end)
     2169
     2170macro wasmAtomicBinaryRMWOps(lowerCaseOpcode, upperCaseOpcode, fnb, fnh, fni, fnq)
     2171    wasmOp(i64_atomic_rmw8%lowerCaseOpcode%_u, WasmI64AtomicRmw8%upperCaseOpcode%U, macro(ctx)
     2172        mloadi(ctx, m_pointer, t3)
     2173        wgetu(ctx, m_offset, t1)
     2174        mloadq(ctx, m_value, t0)
     2175        emitCheckAndPreparePointerAddingOffset(ctx, t3, t1, 1)
     2176        fnb(t0, [t3], t2, t5, t1)
     2177        andq 0xff, t0 # FIXME: ZeroExtend8To64
     2178        assert(macro(ok) bqbeq t0, 0xff, .ok end)
     2179        returnq(ctx, t0)
     2180    end)
     2181    wasmOp(i64_atomic_rmw16%lowerCaseOpcode%_u, WasmI64AtomicRmw16%upperCaseOpcode%U, macro(ctx)
     2182        mloadi(ctx, m_pointer, t3)
     2183        wgetu(ctx, m_offset, t1)
     2184        mloadq(ctx, m_value, t0)
     2185        emitCheckAndPreparePointerAddingOffsetWithAlignmentCheck(ctx, t3, t1, 2)
     2186        fnh(t0, [t3], t2, t5, t1)
     2187        andq 0xffff, t0 # FIXME: ZeroExtend16To64
     2188        assert(macro(ok) bqbeq t0, 0xffff, .ok end)
     2189        returnq(ctx, t0)
     2190    end)
     2191    wasmOp(i64_atomic_rmw32%lowerCaseOpcode%_u, WasmI64AtomicRmw32%upperCaseOpcode%U, macro(ctx)
     2192        mloadi(ctx, m_pointer, t3)
     2193        wgetu(ctx, m_offset, t1)
     2194        mloadq(ctx, m_value, t0)
     2195        emitCheckAndPreparePointerAddingOffsetWithAlignmentCheck(ctx, t3, t1, 4)
     2196        fni(t0, [t3], t2, t5, t1)
     2197        zxi2q t0, t0
     2198        assert(macro(ok) bqbeq t0, 0xffffffff, .ok end)
     2199        returnq(ctx, t0)
     2200    end)
     2201    wasmOp(i64_atomic_rmw%lowerCaseOpcode%, WasmI64AtomicRmw%upperCaseOpcode%, macro(ctx)
     2202        mloadi(ctx, m_pointer, t3)
     2203        wgetu(ctx, m_offset, t1)
     2204        mloadq(ctx, m_value, t0)
     2205        emitCheckAndPreparePointerAddingOffsetWithAlignmentCheck(ctx, t3, t1, 8)
     2206        fnq(t0, [t3], t2, t5, t1)
     2207        returnq(ctx, t0)
     2208    end)
     2209end
     2210
     2211macro wasmAtomicBinaryRMWOpsWithWeakCAS(lowerCaseOpcode, upperCaseOpcode, fni, fnq)
     2212    wasmAtomicBinaryRMWOps(lowerCaseOpcode, upperCaseOpcode,
     2213        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR)
     2214            if X86_64
     2215                move t0GPR, t5GPR
     2216                loadb mem, t0GPR
     2217            .loop:
     2218                move t0GPR, t2GPR
     2219                fni(t5GPR, t2GPR)
     2220                batomicweakcasb t0GPR, t2GPR, mem, .loop
     2221            else
     2222            .loop:
     2223                loadlinkacqb mem, t1GPR
     2224                fni(t0GPR, t1GPR, t2GPR)
     2225                storecondrelb t5GPR, t2GPR, mem
     2226                bineq t5GPR, 0, .loop
     2227                move t1GPR, t0GPR
     2228            end
     2229        end,
     2230        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR)
     2231            if X86_64
     2232                move t0GPR, t5GPR
     2233                loadh mem, t0GPR
     2234            .loop:
     2235                move t0GPR, t2GPR
     2236                fni(t5GPR, t2GPR)
     2237                batomicweakcash t0GPR, t2GPR, mem, .loop
     2238            else
     2239            .loop:
     2240                loadlinkacqh mem, t1GPR
     2241                fni(t0GPR, t1GPR, t2GPR)
     2242                storecondrelh t5GPR, t2GPR, mem
     2243                bineq t5GPR, 0, .loop
     2244                move t1GPR, t0GPR
     2245            end
     2246        end,
     2247        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR)
     2248            if X86_64
     2249                move t0GPR, t5GPR
     2250                loadi mem, t0GPR
     2251            .loop:
     2252                move t0GPR, t2GPR
     2253                fni(t5GPR, t2GPR)
     2254                batomicweakcasi t0GPR, t2GPR, mem, .loop
     2255            else
     2256            .loop:
     2257                loadlinkacqi mem, t1GPR
     2258                fni(t0GPR, t1GPR, t2GPR)
     2259                storecondreli t5GPR, t2GPR, mem
     2260                bineq t5GPR, 0, .loop
     2261                move t1GPR, t0GPR
     2262            end
     2263        end,
     2264        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR)
     2265            if X86_64
     2266                move t0GPR, t5GPR
     2267                loadq mem, t0GPR
     2268            .loop:
     2269                move t0GPR, t2GPR
     2270                fnq(t5GPR, t2GPR)
     2271                batomicweakcasq t0GPR, t2GPR, mem, .loop
     2272            else
     2273            .loop:
     2274                loadlinkacqq mem, t1GPR
     2275                fnq(t0GPR, t1GPR, t2GPR)
     2276                storecondrelq t5GPR, t2GPR, mem
     2277                bineq t5GPR, 0, .loop
     2278                move t1GPR, t0GPR
     2279            end
     2280        end)
     2281end
     2282
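wasmAtomicBinaryRMWOpsWithWeakCAS builds each RMW out of a retry loop: load the old value, apply the operation, and attempt a weak compare-and-swap, looping until no other thread intervened (the LL/SC pair loadlinkacq/storecondrel plays the same role on ARM64). The same loop shape, sketched with JS Atomics on a shared Int32Array — illustrative of the structure, not the offlineasm code itself:

```javascript
// CAS-loop shape used by wasmAtomicBinaryRMWOpsWithWeakCAS: retry
// fn(old) -> new with compareExchange until the swap succeeds.
function atomicRMW(i32, index, fn) {
    for (;;) {
        const oldValue = Atomics.load(i32, index);
        const newValue = fn(oldValue);
        if (Atomics.compareExchange(i32, index, oldValue, newValue) === oldValue)
            return oldValue; // RMW ops return the old value.
    }
}

const i32 = new Int32Array(new SharedArrayBuffer(4));
i32[0] = 40;
const old = atomicRMW(i32, 0, x => x & 0x0f); // e.g. an rmw "and"
```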
     2283if X86_64
     2284    wasmAtomicBinaryRMWOps(_add, Add,
     2285        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgaddb t0GPR, mem end,
     2286        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgaddh t0GPR, mem end,
     2287        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgaddi t0GPR, mem end,
     2288        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgaddq t0GPR, mem end)
     2289    wasmAtomicBinaryRMWOps(_sub, Sub,
     2290        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgsubb t0GPR, mem end,
     2291        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgsubh t0GPR, mem end,
     2292        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgsubi t0GPR, mem end,
     2293        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgsubq t0GPR, mem end)
     2294    wasmAtomicBinaryRMWOps(_xchg, Xchg,
     2295        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgb t0GPR, mem end,
     2296        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgh t0GPR, mem end,
     2297        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgi t0GPR, mem end,
     2298        macro(t0GPR, mem, t2GPR, t5GPR, t1GPR) atomicxchgq t0GPR, mem end)
     2299    wasmAtomicBinaryRMWOpsWithWeakCAS(_and, And,
     2300        macro(t5GPR, t2GPR)
     2301            andi t5GPR, t2GPR
     2302        end,
     2303        macro(t5GPR, t2GPR)
     2304            andq t5GPR, t2GPR
     2305        end)
     2306    wasmAtomicBinaryRMWOpsWithWeakCAS(_or, Or,
     2307        macro(t5GPR, t2GPR)
     2308            ori t5GPR, t2GPR
     2309        end,
     2310        macro(t5GPR, t2GPR)
     2311            orq t5GPR, t2GPR
     2312        end)
     2313    wasmAtomicBinaryRMWOpsWithWeakCAS(_xor, Xor,
     2314        macro(t5GPR, t2GPR)
     2315            xori t5GPR, t2GPR
     2316        end,
     2317        macro(t5GPR, t2GPR)
     2318            xorq t5GPR, t2GPR
     2319        end)
     2320else
     2321    wasmAtomicBinaryRMWOpsWithWeakCAS(_add, Add,
     2322        macro(t0GPR, t1GPR, t2GPR)
     2323            addi t0GPR, t1GPR, t2GPR
     2324        end,
     2325        macro(t0GPR, t1GPR, t2GPR)
     2326            addq t0GPR, t1GPR, t2GPR
     2327        end)
     2328    wasmAtomicBinaryRMWOpsWithWeakCAS(_sub, Sub,
     2329        macro(t0GPR, t1GPR, t2GPR)
     2330            subi t1GPR, t0GPR, t2GPR
     2331        end,
     2332        macro(t0GPR, t1GPR, t2GPR)
     2333            subq t1GPR, t0GPR, t2GPR
     2334        end)
     2335    wasmAtomicBinaryRMWOpsWithWeakCAS(_xchg, Xchg,
     2336        macro(t0GPR, t1GPR, t2GPR)
     2337            move t0GPR, t2GPR
     2338        end,
     2339        macro(t0GPR, t1GPR, t2GPR)
     2340            move t0GPR, t2GPR
     2341        end)
     2342    wasmAtomicBinaryRMWOpsWithWeakCAS(_and, And,
     2343        macro(t0GPR, t1GPR, t2GPR)
     2344            andi t0GPR, t1GPR, t2GPR
     2345        end,
     2346        macro(t0GPR, t1GPR, t2GPR)
     2347            andq t0GPR, t1GPR, t2GPR
     2348        end)
     2349    wasmAtomicBinaryRMWOpsWithWeakCAS(_or, Or,
     2350        macro(t0GPR, t1GPR, t2GPR)
     2351            ori t0GPR, t1GPR, t2GPR
     2352        end,
     2353        macro(t0GPR, t1GPR, t2GPR)
     2354            orq t0GPR, t1GPR, t2GPR
     2355        end)
     2356    wasmAtomicBinaryRMWOpsWithWeakCAS(_xor, Xor,
     2357        macro(t0GPR, t1GPR, t2GPR)
     2358            xori t0GPR, t1GPR, t2GPR
     2359        end,
     2360        macro(t0GPR, t1GPR, t2GPR)
     2361            xorq t0GPR, t1GPR, t2GPR
     2362        end)
     2363end
     2364
     2365macro wasmAtomicCompareExchangeOps(lowerCaseOpcode, upperCaseOpcode, fnb, fnh, fni, fnq)
     2366    wasmOp(i64_atomic_rmw8%lowerCaseOpcode%_u, WasmI64AtomicRmw8%upperCaseOpcode%U, macro(ctx)
     2367        mloadi(ctx, m_pointer, t3)
     2368        wgetu(ctx, m_offset, t1)
     2369        mloadq(ctx, m_expected, t0)
     2370        mloadq(ctx, m_value, t2)
     2371        emitCheckAndPreparePointerAddingOffset(ctx, t3, t1, 1)
     2372        fnb(t0, t2, [t3], t5, t1)
     2373        andq 0xff, t0 # FIXME: ZeroExtend8To64
     2374        assert(macro(ok) bqbeq t0, 0xff, .ok end)
     2375        returnq(ctx, t0)
     2376    end)
     2377    wasmOp(i64_atomic_rmw16%lowerCaseOpcode%_u, WasmI64AtomicRmw16%upperCaseOpcode%U, macro(ctx)
     2378        mloadi(ctx, m_pointer, t3)
     2379        wgetu(ctx, m_offset, t1)
     2380        mloadq(ctx, m_expected, t0)
     2381        mloadq(ctx, m_value, t2)
     2382        emitCheckAndPreparePointerAddingOffsetWithAlignmentCheck(ctx, t3, t1, 2)
     2383        fnh(t0, t2, [t3], t5, t1)
     2384        andq 0xffff, t0 # FIXME: ZeroExtend16To64
     2385        assert(macro(ok) bqbeq t0, 0xffff, .ok end)
     2386        returnq(ctx, t0)
     2387    end)
     2388    wasmOp(i64_atomic_rmw32%lowerCaseOpcode%_u, WasmI64AtomicRmw32%upperCaseOpcode%U, macro(ctx)
     2389        mloadi(ctx, m_pointer, t3)
     2390        wgetu(ctx, m_offset, t1)
     2391        mloadq(ctx, m_expected, t0)
     2392        mloadq(ctx, m_value, t2)
     2393        emitCheckAndPreparePointerAddingOffsetWithAlignmentCheck(ctx, t3, t1, 4)
     2394        fni(t0, t2, [t3], t5, t1)
     2395        zxi2q t0, t0
     2396        assert(macro(ok) bqbeq t0, 0xffffffff, .ok end)
     2397        returnq(ctx, t0)
     2398    end)
     2399    wasmOp(i64_atomic_rmw%lowerCaseOpcode%, WasmI64AtomicRmw%upperCaseOpcode%, macro(ctx)
     2400        mloadi(ctx, m_pointer, t3)
     2401        wgetu(ctx, m_offset, t1)
     2402        mloadq(ctx, m_expected, t0)
     2403        mloadq(ctx, m_value, t2)
     2404        emitCheckAndPreparePointerAddingOffsetWithAlignmentCheck(ctx, t3, t1, 8)
     2405        fnq(t0, t2, [t3], t5, t1)
     2406        returnq(ctx, t0)
     2407    end)
     2408end
     2409
      2410# t0GPR => expected (returns the old value), t2GPR => value, mem => memory reference, t5GPR => store-conditional status scratch, t1GPR => loaded-value scratch
     2411wasmAtomicCompareExchangeOps(_cmpxchg, Cmpxchg,
     2412    macro(t0GPR, t2GPR, mem, t5GPR, t1GPR)
     2413        if X86_64
     2414            atomicweakcasb t0GPR, t2GPR, mem
     2415        else
     2416        .loop:
     2417            loadlinkacqb mem, t1GPR
     2418            bineq t0GPR, t1GPR, .fail
     2419            storecondrelb t5GPR, t2GPR, mem
     2420            bieq t5GPR, 0, .done
     2421            jmp .loop
     2422        .fail:
     2423            storecondrelb t5GPR, t1GPR, mem
     2424            bieq t5GPR, 0, .done
     2425            jmp .loop
     2426        .done:
     2427            move t1GPR, t0GPR
     2428        end
     2429    end,
     2430    macro(t0GPR, t2GPR, mem, t5GPR, t1GPR)
     2431        if X86_64
     2432            atomicweakcash t0GPR, t2GPR, mem
     2433        else
     2434        .loop:
     2435            loadlinkacqh mem, t1GPR
     2436            bineq t0GPR, t1GPR, .fail
     2437            storecondrelh t5GPR, t2GPR, mem
     2438            bieq t5GPR, 0, .done
     2439            jmp .loop
     2440        .fail:
     2441            storecondrelh t5GPR, t1GPR, mem
     2442            bieq t5GPR, 0, .done
     2443            jmp .loop
     2444        .done:
     2445            move t1GPR, t0GPR
     2446        end
     2447    end,
     2448    macro(t0GPR, t2GPR, mem, t5GPR, t1GPR)
     2449        if X86_64
     2450            atomicweakcasi t0GPR, t2GPR, mem
     2451        else
     2452        .loop:
     2453            loadlinkacqi mem, t1GPR
     2454            bineq t0GPR, t1GPR, .fail
     2455            storecondreli t5GPR, t2GPR, mem
     2456            bieq t5GPR, 0, .done
     2457            jmp .loop
     2458        .fail:
     2459            storecondreli t5GPR, t1GPR, mem
     2460            bieq t5GPR, 0, .done
     2461            jmp .loop
     2462        .done:
     2463            move t1GPR, t0GPR
     2464        end
     2465    end,
     2466    macro(t0GPR, t2GPR, mem, t5GPR, t1GPR)
     2467        if X86_64
     2468            atomicweakcasq t0GPR, t2GPR, mem
     2469        else
     2470        .loop:
     2471            loadlinkacqq mem, t1GPR
     2472            bqneq t0GPR, t1GPR, .fail
     2473            storecondrelq t5GPR, t2GPR, mem
     2474            bieq t5GPR, 0, .done
     2475            jmp .loop
     2476        .fail:
     2477            storecondrelq t5GPR, t1GPR, mem
     2478            bieq t5GPR, 0, .done
     2479            jmp .loop
     2480        .done:
     2481            move t1GPR, t0GPR
     2482        end
     2483    end)
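The LL/SC fallback above loops on `loadlinkacq*`/`storecondrel*` until the store-conditional reports success; on a compare failure it store-conditionals the just-loaded value back, so the old value it returns was still observed atomically. A rough Python model of the loop's semantics (illustrative only — `strong_cas` and `spurious_failures` are not names from the patch; the latter stands in for a lost exclusive reservation):

```python
def strong_cas(mem, addr, expected, value, width_bytes, spurious_failures=0):
    """Model of the offlineasm LL/SC compare-exchange loop above.

    Returns the old value; stores `value` only when old == expected.
    `spurious_failures` models storecondrel* failing (lost reservation).
    """
    mask = (1 << (8 * width_bytes)) - 1
    fails = spurious_failures
    while True:                                   # .loop:
        old = mem[addr] & mask                    #   loadlinkacq* -> t1GPR
        # On mismatch the .fail path store-conditionals `old` back;
        # either way the value to publish is decided before the attempt.
        target = value if old == (expected & mask) else old
        if fails:                                 # storecondrel* failed:
            fails -= 1
            continue                              #   jmp .loop and retry
        mem[addr] = target & mask                 # store-conditional succeeded
        return old                                # .done: move t1GPR, t0GPR
```

The fail-path store of the loaded value is what makes the returned "old" value a value that was actually present at the linked address, rather than a stale read.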
     2484
      2485wasmOp(atomic_fence, WasmAtomicFence, macro(ctx)
     2486    fence
     2487    dispatch(ctx)
     2488end)
  • trunk/Source/JavaScriptCore/offlineasm/arm64.rb

    r269929 r270208  
    373373            | node, address |
    374374            case node.opcode
    375             when "loadb", "loadbsi", "loadbsq", "storeb", /^bb/, /^btb/, /^cb/, /^tb/
     375            when "loadb", "loadbsi", "loadbsq", "storeb", /^bb/, /^btb/, /^cb/, /^tb/, "loadlinkacqb", "storecondrelb"
    376376                size = 1
    377             when "loadh", "loadhsi", "loadhsq", "orh", "storeh"
     377            when "loadh", "loadhsi", "loadhsq", "orh", "storeh", "loadlinkacqh", "storecondrelh"
    378378                size = 2
    379379            when "loadi", "loadis", "storei", "addi", "andi", "lshifti", "muli", "negi",
    380380                "noti", "ori", "rshifti", "urshifti", "subi", "xori", /^bi/, /^bti/,
    381                 /^ci/, /^ti/, "addis", "subis", "mulis", "smulli", "leai", "loadf", "storef"
     381                /^ci/, /^ti/, "addis", "subis", "mulis", "smulli", "leai", "loadf", "storef", "loadlinkacqi", "storecondreli"
    382382                size = 4
    383383            when "loadp", "storep", "loadq", "storeq", "loadd", "stored", "lshiftp", "lshiftq", "negp", "negq", "rshiftp", "rshiftq",
    384384                "urshiftp", "urshiftq", "addp", "addq", "mulp", "mulq", "andp", "andq", "orp", "orq", "subp", "subq", "xorp", "xorq", "addd",
    385385                "divd", "subd", "muld", "sqrtd", /^bp/, /^bq/, /^btp/, /^btq/, /^cp/, /^cq/, /^tp/, /^tq/, /^bd/,
    386                 "jmp", "call", "leap", "leaq"
     386                "jmp", "call", "leap", "leaq", "loadlinkacqq", "storecondrelq"
    387387                size = $currentSettings["ADDRESS64"] ? 8 : 4
    388388            else
     
    10961096        when "memfence"
    10971097            $asm.puts "dmb sy"
     1098        when "fence"
     1099            $asm.puts "dmb ish"
    10981100        when "bfiq"
    10991101            $asm.puts "bfi #{operands[3].arm64Operand(:quad)}, #{operands[0].arm64Operand(:quad)}, #{operands[1].value}, #{operands[2].value}"
     
    13101312            $asm.puts "bic #{tmp}, #{tmp}, #7"
    13111313            $asm.puts "str #{operands[0].arm64Operand(:ptr)}, [#{tmp}, #{offset}]"
     1314        when "loadlinkacqb"
     1315            $asm.puts "ldaxrb #{operands[1].arm64Operand(:word)}, #{operands[0].arm64Operand(:word)}"
     1316        when "loadlinkacqh"
     1317            $asm.puts "ldaxrh #{operands[1].arm64Operand(:word)}, #{operands[0].arm64Operand(:word)}"
     1318        when "loadlinkacqi"
     1319            $asm.puts "ldaxr #{operands[1].arm64Operand(:word)}, #{operands[0].arm64Operand(:word)}"
     1320        when "loadlinkacqq"
     1321            $asm.puts "ldaxr #{operands[1].arm64Operand(:quad)}, #{operands[0].arm64Operand(:quad)}"
     1322        when "storecondrelb"
     1323            $asm.puts "stlxrb #{operands[0].arm64Operand(:word)}, #{operands[1].arm64Operand(:word)}, #{operands[2].arm64Operand(:word)}"
     1324        when "storecondrelh"
     1325            $asm.puts "stlxrh #{operands[0].arm64Operand(:word)}, #{operands[1].arm64Operand(:word)}, #{operands[2].arm64Operand(:word)}"
     1326        when "storecondreli"
     1327            $asm.puts "stlxr #{operands[0].arm64Operand(:word)}, #{operands[1].arm64Operand(:word)}, #{operands[2].arm64Operand(:word)}"
     1328        when "storecondrelq"
     1329            $asm.puts "stlxr #{operands[0].arm64Operand(:word)}, #{operands[1].arm64Operand(:quad)}, #{operands[2].arm64Operand(:quad)}"
    13121330        else
    13131331            lowerDefault
  • trunk/Source/JavaScriptCore/offlineasm/instructions.rb

    r269929 r270208  
    345345     "idivq",
    346346     "udivq",
     347     "atomicxchgaddb",
     348     "atomicxchgaddh",
     349     "atomicxchgaddi",
     350     "atomicxchgaddq",
     351     "atomicxchgsubb",
     352     "atomicxchgsubh",
     353     "atomicxchgsubi",
     354     "atomicxchgsubq",
     355     "atomicxchgb",
     356     "atomicxchgh",
     357     "atomicxchgi",
     358     "atomicxchgq",
     359     "batomicweakcasb",
     360     "batomicweakcash",
     361     "batomicweakcasi",
     362     "batomicweakcasq",
     363     "atomicweakcasb",
     364     "atomicweakcash",
     365     "atomicweakcasi",
     366     "atomicweakcasq",
     367     "fence",
    347368    ]
    348369
     
    363384     "divq",
    364385     "divqs",
     386     "loadlinkacqb",
     387     "loadlinkacqh",
     388     "loadlinkacqi",
     389     "loadlinkacqq",
     390     "storecondrelb",
     391     "storecondrelh",
     392     "storecondreli",
     393     "storecondrelq",
     394     "fence",
    365395    ]
    366396
  • trunk/Source/JavaScriptCore/offlineasm/x86.rb

    r269929 r270208  
    17931793        when "leap"
    17941794            $asm.puts "lea#{x86Suffix(:ptr)} #{orderOperands(operands[0].x86AddressOperand(:ptr), operands[1].x86Operand(:ptr))}"
    1795         when "memfence"
     1795        when "memfence", "fence"
    17961796            sp = RegisterID.new(nil, "sp")
    17971797            if isIntelSyntax
     
    18401840            end
    18411841            $asm.puts "mov#{x86Suffix(:ptr)} #{orderOperands(operands[0].x86Operand(:ptr), mem)}"
     1842        when "atomicxchgaddb"
     1843            $asm.puts "lock"
     1844            $asm.puts "xadd#{x86Suffix(:byte)} #{x86Operands(:byte, :byte)}"
     1845        when "atomicxchgaddh"
     1846            $asm.puts "lock"
     1847            $asm.puts "xadd#{x86Suffix(:half)} #{x86Operands(:half, :half)}"
     1848        when "atomicxchgaddi"
     1849            $asm.puts "lock"
     1850            $asm.puts "xadd#{x86Suffix(:int)} #{x86Operands(:int, :int)}"
     1851        when "atomicxchgaddq"
     1852            $asm.puts "lock"
     1853            $asm.puts "xadd#{x86Suffix(:quad)} #{x86Operands(:quad, :quad)}"
     1854        when "atomicxchgsubb"
     1855            $asm.puts "neg#{x86Suffix(:byte)} #{operands[0].x86Operand(:byte)}"
     1856            $asm.puts "lock"
     1857            $asm.puts "xadd#{x86Suffix(:byte)} #{x86Operands(:byte, :byte)}"
     1858        when "atomicxchgsubh"
     1859            $asm.puts "neg#{x86Suffix(:half)} #{operands[0].x86Operand(:half)}"
     1860            $asm.puts "lock"
     1861            $asm.puts "xadd#{x86Suffix(:half)} #{x86Operands(:half, :half)}"
     1862        when "atomicxchgsubi"
     1863            $asm.puts "neg#{x86Suffix(:int)} #{operands[0].x86Operand(:int)}"
     1864            $asm.puts "lock"
     1865            $asm.puts "xadd#{x86Suffix(:int)} #{x86Operands(:int, :int)}"
     1866        when "atomicxchgsubq"
     1867            $asm.puts "neg#{x86Suffix(:quad)} #{operands[0].x86Operand(:quad)}"
     1868            $asm.puts "lock"
     1869            $asm.puts "xadd#{x86Suffix(:quad)} #{x86Operands(:quad, :quad)}"
     1870        when "atomicxchgb"
     1871            $asm.puts "xchg#{x86Suffix(:byte)} #{x86Operands(:byte, :byte)}"
     1872        when "atomicxchgh"
     1873            $asm.puts "xchg#{x86Suffix(:half)} #{x86Operands(:half, :half)}"
     1874        when "atomicxchgi"
     1875            $asm.puts "xchg#{x86Suffix(:int)} #{x86Operands(:int, :int)}"
     1876        when "atomicxchgq"
     1877            $asm.puts "xchg#{x86Suffix(:quad)} #{x86Operands(:quad, :quad)}"
     1878        when "batomicweakcasb"
     1879            raise "first operand must be t0" unless operands[0].is_a? RegisterID and operands[0].name == 't0'
     1880            $asm.puts "lock"
     1881            $asm.puts "cmpxchg#{x86Suffix(:byte)} #{orderOperands(operands[1].x86Operand(:byte), operands[2].x86Operand(:byte))}"
     1882            $asm.puts "jne #{operands.last.asmLabel}"
     1883        when "batomicweakcash"
     1884            raise "first operand must be t0" unless operands[0].is_a? RegisterID and operands[0].name == 't0'
     1885            $asm.puts "lock"
     1886            $asm.puts "cmpxchg#{x86Suffix(:half)} #{orderOperands(operands[1].x86Operand(:half), operands[2].x86Operand(:half))}"
     1887            $asm.puts "jne #{operands.last.asmLabel}"
     1888        when "batomicweakcasi"
     1889            raise "first operand must be t0" unless operands[0].is_a? RegisterID and operands[0].name == 't0'
     1890            $asm.puts "lock"
     1891            $asm.puts "cmpxchg#{x86Suffix(:int)} #{orderOperands(operands[1].x86Operand(:int), operands[2].x86Operand(:int))}"
     1892            $asm.puts "jne #{operands.last.asmLabel}"
     1893        when "batomicweakcasq"
     1894            raise "first operand must be t0" unless operands[0].is_a? RegisterID and operands[0].name == 't0'
     1895            $asm.puts "lock"
     1896            $asm.puts "cmpxchg#{x86Suffix(:quad)} #{orderOperands(operands[1].x86Operand(:quad), operands[2].x86Operand(:quad))}"
     1897            $asm.puts "jne #{operands.last.asmLabel}"
     1898        when "atomicweakcasb"
     1899            raise "first operand must be t0" unless operands[0].is_a? RegisterID and operands[0].name == 't0'
     1900            $asm.puts "lock"
     1901            $asm.puts "cmpxchg#{x86Suffix(:byte)} #{orderOperands(operands[1].x86Operand(:byte), operands[2].x86Operand(:byte))}"
     1902        when "atomicweakcash"
     1903            raise "first operand must be t0" unless operands[0].is_a? RegisterID and operands[0].name == 't0'
     1904            $asm.puts "lock"
     1905            $asm.puts "cmpxchg#{x86Suffix(:half)} #{orderOperands(operands[1].x86Operand(:half), operands[2].x86Operand(:half))}"
     1906        when "atomicweakcasi"
     1907            raise "first operand must be t0" unless operands[0].is_a? RegisterID and operands[0].name == 't0'
     1908            $asm.puts "lock"
     1909            $asm.puts "cmpxchg#{x86Suffix(:int)} #{orderOperands(operands[1].x86Operand(:int), operands[2].x86Operand(:int))}"
     1910        when "atomicweakcasq"
     1911            raise "first operand must be t0" unless operands[0].is_a? RegisterID and operands[0].name == 't0'
     1912            $asm.puts "lock"
     1913            $asm.puts "cmpxchg#{x86Suffix(:quad)} #{orderOperands(operands[1].x86Operand(:quad), operands[2].x86Operand(:quad))}"
    18421914        else
    18431915            lowerDefault
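x86 provides `lock xadd` but no fetch-and-subtract, which is why `atomicxchgsub*` above first negates the operand register in the access width: by two's-complement wraparound, `old + (-operand)` equals `old - operand` modulo the width. A small Python model of that lowering (the function name is illustrative, not from the patch):

```python
def fetch_sub_via_xadd(mem, addr, operand, width_bytes):
    """Model of atomicxchgsub*: neg + lock xadd in the access width."""
    mask = (1 << (8 * width_bytes)) - 1
    negated = (-operand) & mask         # neg{b,w,l,q} on the source register
    old = mem[addr] & mask              # lock xadd returns the previous value...
    mem[addr] = (old + negated) & mask  # ...and stores old + (-operand) = old - operand
    return old
```

The same trick is what lets the IR layers treat subtraction as an exchange-add with a negated input instead of needing a dedicated atomic-sub instruction.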
  • trunk/Source/JavaScriptCore/runtime/OptionsList.h

    r270066 r270208  
    492492    v(Bool, useWebAssemblyReferences, false, Normal, "Allow types from the wasm references spec.") \
    493493    v(Bool, useWebAssemblyMultiValues, true, Normal, "Allow types from the wasm multi-values spec.") \
     494    v(Bool, useWebAssemblyThreading, true, Normal, "Allow instructions from the wasm threading spec.") \
    494495    v(Bool, useWeakRefs, true, Normal, "Expose the WeakRef constructor.") \
    495496    v(Bool, useIntlDateTimeFormatDayPeriod, true, Normal, "Expose the Intl.DateTimeFormat dayPeriod feature.") \
  • trunk/Source/JavaScriptCore/wasm/WasmAirIRGenerator.cpp

    r270015 r270208  
    3636#include "B3CheckSpecial.h"
    3737#include "B3CheckValue.h"
     38#include "B3Commutativity.h"
    3839#include "B3PatchpointSpecial.h"
    3940#include "B3Procedure.h"
     
    278279    PartialResult WARN_UNUSED_RETURN addCurrentMemory(ExpressionType& result);
    279280
     281    // Atomics
     282    PartialResult WARN_UNUSED_RETURN atomicLoad(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType& result, uint32_t offset);
     283    PartialResult WARN_UNUSED_RETURN atomicStore(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, uint32_t offset);
     284    PartialResult WARN_UNUSED_RETURN atomicBinaryRMW(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset);
     285    PartialResult WARN_UNUSED_RETURN atomicCompareExchange(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType expected, ExpressionType value, ExpressionType& result, uint32_t offset);
     286    PartialResult WARN_UNUSED_RETURN atomicWait(ExtAtomicOpType, ExpressionType pointer, ExpressionType value, ExpressionType timeout, ExpressionType& result, uint32_t offset);
     287    PartialResult WARN_UNUSED_RETURN atomicNotify(ExtAtomicOpType, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset);
     288    PartialResult WARN_UNUSED_RETURN atomicFence(ExtAtomicOpType, uint8_t flags);
     289
    280290    // Basic operators
    281291    template<OpType>
     
    361371        kind.effects = true;
    362372        append(m_currentBlock, kind, std::forward<Arguments>(arguments)...);
     373    }
     374
     375    template<typename... Arguments>
     376    void appendEffectful(BasicBlock* block, B3::Air::Opcode op, Arguments&&... arguments)
     377    {
     378        Kind kind = op;
     379        kind.effects = true;
     380        append(block, kind, std::forward<Arguments>(arguments)...);
    363381    }
    364382
     
    638656    ExpressionType emitLoadOp(LoadOpType, ExpressionType pointer, uint32_t offset);
    639657    void emitStoreOp(StoreOpType, ExpressionType pointer, ExpressionType value, uint32_t offset);
     658    ExpressionType emitAtomicLoadOp(ExtAtomicOpType, Type, ExpressionType pointer, uint32_t offset);
     659    void emitAtomicStoreOp(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, uint32_t offset);
     660    ExpressionType emitAtomicBinaryRMWOp(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, uint32_t offset);
     661    ExpressionType emitAtomicCompareExchange(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType expected, ExpressionType value, uint32_t offset);
     662
     663    TypedTmp sanitizeAtomicResult(ExtAtomicOpType, Type, TypedTmp result);
     664    TypedTmp appendGeneralAtomic(ExtAtomicOpType, Type, B3::Air::Opcode nonAtomicOpcode, B3::Commutativity, Arg input, Arg addrArg);
     665    TypedTmp appendStrongCAS(ExtAtomicOpType, Type, TypedTmp expected, TypedTmp value, Arg addrArg);
    640666
    641667    void unify(const ExpressionType dst, const ExpressionType source);
     
    653679
    654680    int32_t WARN_UNUSED_RETURN fixupPointerPlusOffset(ExpressionType&, uint32_t);
     681    ExpressionType WARN_UNUSED_RETURN fixupPointerPlusOffsetForAtomicOps(ExtAtomicOpType, ExpressionType, uint32_t);
    655682
    656683    void restoreWasmContextInstance(BasicBlock*, TypedTmp);
     
    16841711}
    16851712
     1713#define OPCODE_FOR_WIDTH(opcode, width) ( \
     1714    (width) == B3::Width8 ? B3::Air::opcode ## 8 : \
     1715    (width) == B3::Width16 ? B3::Air::opcode ## 16 :    \
     1716    (width) == B3::Width32 ? B3::Air::opcode ## 32 :    \
     1717    B3::Air::opcode ## 64)
     1718#define OPCODE_FOR_CANONICAL_WIDTH(opcode, width) ( \
     1719    (width) == B3::Width64 ? B3::Air::opcode ## 64 : B3::Air::opcode ## 32)
     1720
     1721inline B3::Width accessWidth(ExtAtomicOpType op)
     1722{
     1723    return static_cast<B3::Width>(memoryLog2Alignment(op));
     1724}
     1725
     1726inline uint32_t sizeOfAtomicOpMemoryAccess(ExtAtomicOpType op)
     1727{
     1728    return bytesForWidth(accessWidth(op));
     1729}
     1730
     1731auto AirIRGenerator::fixupPointerPlusOffsetForAtomicOps(ExtAtomicOpType op, ExpressionType pointer, uint32_t uoffset) -> ExpressionType
     1732{
     1733    uint32_t offset = fixupPointerPlusOffset(pointer, uoffset);
     1734    if (Arg::isValidAddrForm(offset, B3::widthForBytes(sizeOfAtomicOpMemoryAccess(op)))) {
     1735        if (offset == 0)
     1736            return pointer;
     1737        TypedTmp newPtr = g64();
     1738        append(Lea64, Arg::addr(pointer, offset), newPtr);
     1739        return newPtr;
     1740    }
     1741    TypedTmp newPtr = g64();
     1742    append(Move, Arg::bigImm(offset), newPtr);
     1743    append(Add64, pointer, newPtr);
     1744    return newPtr;
     1745}
     1746
     1747TypedTmp AirIRGenerator::sanitizeAtomicResult(ExtAtomicOpType op, Type valueType, TypedTmp result)
     1748{
     1749    switch (valueType) {
     1750    case Type::I64: {
     1751        switch (accessWidth(op)) {
     1752        case B3::Width8:
     1753            append(ZeroExtend8To32, result, result);
     1754            return result;
     1755        case B3::Width16:
     1756            append(ZeroExtend16To32, result, result);
     1757            return result;
     1758        case B3::Width32:
     1759            append(Move32, result, result);
     1760            return result;
     1761        case B3::Width64:
     1762            return result;
     1763        }
     1764        break;
     1765    }
     1766    case Type::I32:
     1767        switch (accessWidth(op)) {
     1768        case B3::Width8:
     1769            append(ZeroExtend8To32, result, result);
     1770            return result;
     1771        case B3::Width16:
     1772            append(ZeroExtend16To32, result, result);
     1773            return result;
     1774        case B3::Width32:
     1775        case B3::Width64:
     1776            return result;
     1777        }
     1778        break;
     1779    default:
     1780        RELEASE_ASSERT_NOT_REACHED();
     1781        break;
     1782    }
     1783
     1784    return result;
     1785}
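All of the `_u` narrow atomics are zero-extending: `sanitizeAtomicResult` picks `ZeroExtend8To32`, `ZeroExtend16To32`, or `Move32` by access width so that, for example, an `i64.atomic.rmw8.add_u` result carries no stale high bits. The observable effect is simply a mask of the accessed width — a semantic sketch, not JSC code:

```python
def sanitize_atomic_result(raw, access_bits):
    """Zero-extend a narrow atomic access result, as the wasm threads
    spec requires for the i32/i64.atomic.rmw{8,16,32}.*_u operations."""
    return raw & ((1 << access_bits) - 1)
```

(`Move32` covers the Width32-into-I64 case because a 32-bit move zero-extends to 64 bits on both ARM64 and x86_64; the Width64 and I32/Width32 cases need no fixup at all.)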
     1786
     1787TypedTmp AirIRGenerator::appendGeneralAtomic(ExtAtomicOpType op, Type valueType, B3::Air::Opcode opcode, B3::Commutativity commutativity, Arg input, Arg address)
     1788{
     1789    B3::Width accessWidth = Wasm::accessWidth(op);
     1790
     1791    auto newTmp = [&]() {
     1792        if (accessWidth == B3::Width64)
     1793            return g64();
     1794        return g32();
     1795    };
     1796
     1797    auto tmp = [&](Arg arg) -> TypedTmp {
     1798        if (arg.isTmp())
     1799            return TypedTmp(arg.tmp(), accessWidth == B3::Width64 ? Type::I64 : Type::I32);
     1800        TypedTmp result = newTmp();
     1801        append(Move, arg, result);
     1802        return result;
     1803    };
     1804
     1805    auto imm = [&](Arg arg) {
     1806        if (arg.isImm())
     1807            return arg;
     1808        return Arg();
     1809    };
     1810
     1811    auto bitImm = [&](Arg arg) {
     1812        if (arg.isBitImm())
     1813            return arg;
     1814        return Arg();
     1815    };
     1816
     1817    TypedTmp oldValue = valueType == Type::I64 ? g64() : g32();
     1818    Tmp newValue = opcode == B3::Air::Nop ? tmp(input) : newTmp();
     1819
     1820    // We need a CAS loop or a LL/SC loop. Using prepare/attempt jargon, we want:
     1821    //
     1822    // Block #reloop:
     1823    //     Prepare
     1824    //     opcode
     1825    //     Attempt
     1826    //   Successors: Then:#done, Else:#reloop
     1827    // Block #done:
     1828    //     Move oldValue, result
     1829
     1830    auto* beginBlock = m_currentBlock;
     1831    auto* reloopBlock = m_code.addBlock();
     1832    auto* doneBlock = m_code.addBlock();
     1833
     1834    append(B3::Air::Jump);
     1835    beginBlock->setSuccessors(reloopBlock);
     1836    m_currentBlock = reloopBlock;
     1837
     1838    B3::Air::Opcode prepareOpcode;
     1839    if (isX86()) {
     1840        switch (accessWidth) {
     1841        case B3::Width8:
     1842            prepareOpcode = Load8SignedExtendTo32;
     1843            break;
     1844        case B3::Width16:
     1845            prepareOpcode = Load16SignedExtendTo32;
     1846            break;
     1847        case B3::Width32:
     1848            prepareOpcode = Move32;
     1849            break;
     1850        case B3::Width64:
     1851            prepareOpcode = Move;
     1852            break;
     1853        }
     1854    } else {
     1855        RELEASE_ASSERT(isARM64());
     1856        prepareOpcode = OPCODE_FOR_WIDTH(LoadLinkAcq, accessWidth);
     1857    }
     1858    appendEffectful(prepareOpcode, address, oldValue);
     1859
     1860    if (opcode != B3::Air::Nop) {
     1861        // FIXME: If we ever have to write this again, we need to find a way to share the code with
     1862        // appendBinOp.
     1863        // https://bugs.webkit.org/show_bug.cgi?id=169249
     1864        if (commutativity == B3::Commutative && imm(input) && isValidForm(opcode, Arg::Imm, Arg::Tmp, Arg::Tmp))
     1865            append(opcode, imm(input), oldValue, newValue);
     1866        else if (imm(input) && isValidForm(opcode, Arg::Tmp, Arg::Imm, Arg::Tmp))
     1867            append(opcode, oldValue, imm(input), newValue);
     1868        else if (commutativity == B3::Commutative && bitImm(input) && isValidForm(opcode, Arg::BitImm, Arg::Tmp, Arg::Tmp))
     1869            append(opcode, bitImm(input), oldValue, newValue);
     1870        else if (isValidForm(opcode, Arg::Tmp, Arg::Tmp, Arg::Tmp))
     1871            append(opcode, oldValue, tmp(input), newValue);
     1872        else {
     1873            append(Move, oldValue, newValue);
     1874            if (imm(input) && isValidForm(opcode, Arg::Imm, Arg::Tmp))
     1875                append(opcode, imm(input), newValue);
     1876            else
     1877                append(opcode, tmp(input), newValue);
     1878        }
     1879    }
     1880
     1881    if (isX86()) {
     1882#if CPU(X86) || CPU(X86_64)
     1883        Tmp eax(X86Registers::eax);
     1884        B3::Air::Opcode casOpcode = OPCODE_FOR_WIDTH(BranchAtomicStrongCAS, accessWidth);
     1885        append(Move, oldValue, eax);
     1886        appendEffectful(casOpcode, Arg::statusCond(MacroAssembler::Success), eax, newValue, address);
     1887#endif
     1888    } else {
     1889        RELEASE_ASSERT(isARM64());
     1890        TypedTmp boolResult = newTmp();
     1891        appendEffectful(OPCODE_FOR_WIDTH(StoreCondRel, accessWidth), newValue, address, boolResult);
     1892        append(BranchTest32, Arg::resCond(MacroAssembler::Zero), boolResult, boolResult);
     1893    }
     1894    reloopBlock->setSuccessors(doneBlock, reloopBlock);
     1895    m_currentBlock = doneBlock;
     1896    return oldValue;
     1897}
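The prepare/attempt structure described in the comment above amounts to: load the old value (load-linked on ARM64, a plain load on x86), apply the non-atomic opcode, then attempt to publish (store-conditional on ARM64, `lock cmpxchg` against the old value on x86), looping back to `#reloop` on failure. A Python sketch of that loop under illustrative names (`attempt_fails` stands in for both a lost reservation and a concurrent writer invalidating the cmpxchg):

```python
def general_atomic_rmw(mem, addr, op, operand, width_bytes, attempt_fails=0):
    """Model of appendGeneralAtomic's prepare / opcode / attempt loop."""
    mask = (1 << (8 * width_bytes)) - 1
    fails = attempt_fails
    while True:                             # Block #reloop:
        old = mem[addr] & mask              #   Prepare (load-linked / load)
        new = op(old, operand) & mask       #   the non-atomic opcode
        if fails:                           #   Attempt failed: Else successor
            fails -= 1                      #   goes back to #reloop
            continue
        mem[addr] = new                     #   Attempt succeeded
        return old                          # Block #done: result is oldValue
```

This is why the opcode in the middle can be any ordinary ALU instruction: atomicity comes entirely from the prepare/attempt pair bracketing it.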
     1898
     1899TypedTmp AirIRGenerator::appendStrongCAS(ExtAtomicOpType op, Type valueType, TypedTmp expected, TypedTmp value, Arg address)
     1900{
     1901    B3::Width accessWidth = Wasm::accessWidth(op);
     1902
     1903    auto newTmp = [&]() {
     1904        if (accessWidth == B3::Width64)
     1905            return g64();
     1906        return g32();
     1907    };
     1908
     1909    auto tmp = [&](Arg arg) -> TypedTmp {
     1910        if (arg.isTmp())
     1911            return TypedTmp(arg.tmp(), accessWidth == B3::Width64 ? Type::I64 : Type::I32);
     1912        TypedTmp result = newTmp();
     1913        append(Move, arg, result);
     1914        return result;
     1915    };
     1916
     1917    TypedTmp valueResultTmp = valueType == Type::I64 ? g64() : g32();
     1918    Tmp successBoolResultTmp = newTmp();
     1919
     1920    Tmp expectedValueTmp = tmp(expected);
     1921    Tmp newValueTmp = tmp(value);
     1922
     1923    if (isX86()) {
     1924#if CPU(X86) || CPU(X86_64)
     1925        Tmp eax(X86Registers::eax);
     1926        append(Move, expectedValueTmp, eax);
     1927        appendEffectful(OPCODE_FOR_WIDTH(AtomicStrongCAS, accessWidth), eax, newValueTmp, address);
     1928        append(Move, eax, valueResultTmp);
     1929#endif
     1930        return valueResultTmp;
     1931    }
     1932
     1933    RELEASE_ASSERT(isARM64());
     1934    // We wish to emit:
     1935    //
     1936    // Block #reloop:
     1937    //     LoadLink
     1938    //     Branch NotEqual
     1939    //   Successors: Then:#fail, Else: #store
     1940    // Block #store:
     1941    //     StoreCond
     1942    //     Xor $1, %result    <--- only if !invert
     1943    //     Jump
     1944    //   Successors: #done
     1945    // Block #fail:
     1946    //     Move $invert, %result
     1947    //     Jump
     1948    //   Successors: #done
     1949    // Block #done:
     1950
     1951    auto* reloopBlock = m_code.addBlock();
     1952    auto* storeBlock = m_code.addBlock();
     1953    auto* strongFailBlock = m_code.addBlock();
     1954    auto* doneBlock = m_code.addBlock();
     1955    auto* beginBlock = m_currentBlock;
     1956
     1957    append(B3::Air::Jump);
     1958    beginBlock->setSuccessors(reloopBlock);
     1959
     1960    m_currentBlock = reloopBlock;
     1961    appendEffectful(OPCODE_FOR_WIDTH(LoadLinkAcq, accessWidth), address, valueResultTmp);
     1962    append(OPCODE_FOR_CANONICAL_WIDTH(Branch, accessWidth), Arg::relCond(MacroAssembler::NotEqual), valueResultTmp, expectedValueTmp);
     1963    reloopBlock->setSuccessors(B3::Air::FrequentedBlock(strongFailBlock), storeBlock);
     1964
     1965    m_currentBlock = storeBlock;
     1966    appendEffectful(OPCODE_FOR_WIDTH(StoreCondRel, accessWidth), newValueTmp, address, successBoolResultTmp);
     1967    append(BranchTest32, Arg::resCond(MacroAssembler::Zero), successBoolResultTmp, successBoolResultTmp);
     1968    storeBlock->setSuccessors(doneBlock, reloopBlock);
     1969
     1970    m_currentBlock = strongFailBlock;
     1971    {
     1972        TypedTmp tmp = newTmp();
     1973        appendEffectful(OPCODE_FOR_WIDTH(StoreCondRel, accessWidth), valueResultTmp, address, tmp);
     1974        append(BranchTest32, Arg::resCond(MacroAssembler::Zero), tmp, tmp);
     1975    }
     1976    strongFailBlock->setSuccessors(B3::Air::FrequentedBlock(doneBlock), reloopBlock);
     1977
     1978    m_currentBlock = doneBlock;
     1979    return valueResultTmp;
     1980}
     1981
     1982inline TypedTmp AirIRGenerator::emitAtomicLoadOp(ExtAtomicOpType op, Type valueType, ExpressionType pointer, uint32_t uoffset)
     1983{
     1984    TypedTmp newPtr = fixupPointerPlusOffsetForAtomicOps(op, pointer, uoffset);
     1985    Arg addrArg = isX86() ? Arg::addr(newPtr) : Arg::simpleAddr(newPtr);
     1986
     1987    if (accessWidth(op) != B3::Width8) {
     1988        emitCheck([&] {
     1989            return Inst(BranchTest64, nullptr, Arg::resCond(MacroAssembler::NonZero), newPtr, isX86() ? Arg::bitImm(sizeOfAtomicOpMemoryAccess(op) - 1) : Arg::bitImm64(sizeOfAtomicOpMemoryAccess(op) - 1));
     1990        }, [=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     1991            this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     1992        });
     1993    }
     1994
     1995    Optional<B3::Air::Opcode> opcode;
     1996    if (isX86())
     1997        opcode = OPCODE_FOR_WIDTH(AtomicXchgAdd, accessWidth(op));
     1998    B3::Air::Opcode nonAtomicOpcode = OPCODE_FOR_CANONICAL_WIDTH(Add, accessWidth(op));
     1999
     2000    if (opcode && isValidForm(opcode.value(), Arg::Tmp, addrArg.kind())) {
     2001        TypedTmp result = valueType == Type::I64 ? g64() : g32();
     2002        append(Move, Arg::imm(0), result);
     2003        appendEffectful(opcode.value(), result, addrArg);
     2004        return sanitizeAtomicResult(op, valueType, result);
     2005    }
     2006
     2007    return sanitizeAtomicResult(op, valueType, appendGeneralAtomic(op, valueType, nonAtomicOpcode, B3::Commutative, Arg::imm(0), addrArg));
     2008}
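On x86, `emitAtomicLoadOp` realizes the atomic load as an `AtomicXchgAdd` of zero: fetch-adding 0 returns the current memory value without modifying it. A small illustration of the same trick (helper name is hypothetical):

```cpp
#include <atomic>
#include <cstdint>

// Atomic load expressed as an add of zero, as emitAtomicLoadOp does on x86:
// fetch_add(0) returns the current value and leaves memory unchanged.
inline uint64_t atomicLoadViaAddZero(std::atomic<uint64_t>& slot)
{
    return slot.fetch_add(0); // returns the old (== current) value
}
```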
     2009
     2010auto AirIRGenerator::atomicLoad(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType& result, uint32_t offset) -> PartialResult
     2011{
     2012    ASSERT(pointer.tmp().isGP());
     2013
     2014    if (UNLIKELY(sumOverflows<uint32_t>(offset, sizeOfAtomicOpMemoryAccess(op)))) {
     2015        // FIXME: Even though this is provably out of bounds, it's not a validation error, so we have to handle it
     2016        // as a runtime exception. However, this may change: https://bugs.webkit.org/show_bug.cgi?id=166435
     2017        auto* patch = addPatchpoint(B3::Void);
     2018        patch->setGenerator([this] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2019            this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2020        });
     2021        emitPatchpoint(patch, Tmp());
     2022
      2023        // This code is unreachable, so we just pick an arbitrary register.
     2024        switch (valueType) {
     2025        case Type::I32:
     2026            result = g32();
     2027            break;
     2028        case Type::I64:
     2029            result = g64();
     2030            break;
     2031        default:
     2032            RELEASE_ASSERT_NOT_REACHED();
     2033            break;
     2034        }
     2035    } else
     2036        result = emitAtomicLoadOp(op, valueType, emitCheckAndPreparePointer(pointer, offset, sizeOfAtomicOpMemoryAccess(op)), offset);
     2037
     2038    return { };
     2039}
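The `sumOverflows` guard catches the case where `offset + access size` wraps around `uint32_t`, which is provably out of bounds for any 32-bit wasm memory and therefore must trap at runtime. A sketch of that predicate under the assumption that it reduces to unsigned wraparound detection:

```cpp
#include <cstdint>

// Mirrors sumOverflows<uint32_t>: detect wraparound of offset + accessSize.
// Unsigned arithmetic wraps, so the sum overflowed iff it is smaller
// than either operand.
inline bool sumOverflowsU32(uint32_t offset, uint32_t accessSize)
{
    return offset + accessSize < offset;
}
```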
     2040
     2041inline void AirIRGenerator::emitAtomicStoreOp(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType value, uint32_t uoffset)
     2042{
     2043    TypedTmp newPtr = fixupPointerPlusOffsetForAtomicOps(op, pointer, uoffset);
     2044    Arg addrArg = isX86() ? Arg::addr(newPtr) : Arg::simpleAddr(newPtr);
     2045
     2046    if (accessWidth(op) != B3::Width8) {
     2047        emitCheck([&] {
     2048            return Inst(BranchTest64, nullptr, Arg::resCond(MacroAssembler::NonZero), newPtr, isX86() ? Arg::bitImm(sizeOfAtomicOpMemoryAccess(op) - 1) : Arg::bitImm64(sizeOfAtomicOpMemoryAccess(op) - 1));
     2049        }, [=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2050            this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2051        });
     2052    }
     2053
     2054    Optional<B3::Air::Opcode> opcode;
     2055    if (isX86())
     2056        opcode = OPCODE_FOR_WIDTH(AtomicXchg, accessWidth(op));
     2057    B3::Air::Opcode nonAtomicOpcode = B3::Air::Nop;
     2058
     2059    if (opcode && isValidForm(opcode.value(), Arg::Tmp, addrArg.kind())) {
     2060        TypedTmp result = valueType == Type::I64 ? g64() : g32();
     2061        append(Move, value, result);
     2062        appendEffectful(opcode.value(), result, addrArg);
     2063        return;
     2064    }
     2065
     2066    appendGeneralAtomic(op, valueType, nonAtomicOpcode, B3::Commutative, value, addrArg);
     2067}
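`emitAtomicStoreOp` realizes the store as an atomic exchange whose result is simply discarded, which is why no result tmp escapes this function. The equivalent in C++ (helper name is illustrative):

```cpp
#include <atomic>
#include <cstdint>

// Atomic store expressed as an exchange with the result ignored,
// mirroring emitAtomicStoreOp's use of AtomicXchg on x86.
inline void atomicStoreViaXchg(std::atomic<uint32_t>& slot, uint32_t value)
{
    (void)slot.exchange(value); // old value is discarded
}
```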
     2068
     2069auto AirIRGenerator::atomicStore(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType value, uint32_t offset) -> PartialResult
     2070{
     2071    ASSERT(pointer.tmp().isGP());
     2072
     2073    if (UNLIKELY(sumOverflows<uint32_t>(offset, sizeOfAtomicOpMemoryAccess(op)))) {
     2074        // FIXME: Even though this is provably out of bounds, it's not a validation error, so we have to handle it
     2075        // as a runtime exception. However, this may change: https://bugs.webkit.org/show_bug.cgi?id=166435
     2076        auto* throwException = addPatchpoint(B3::Void);
     2077        throwException->setGenerator([this] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2078            this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2079        });
     2080        emitPatchpoint(throwException, Tmp());
     2081    } else
     2082        emitAtomicStoreOp(op, valueType, emitCheckAndPreparePointer(pointer, offset, sizeOfAtomicOpMemoryAccess(op)), value, offset);
     2083
     2084    return { };
     2085}
     2086
     2087TypedTmp AirIRGenerator::emitAtomicBinaryRMWOp(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType value, uint32_t uoffset)
     2088{
     2089    TypedTmp newPtr = fixupPointerPlusOffsetForAtomicOps(op, pointer, uoffset);
     2090    Arg addrArg = isX86() ? Arg::addr(newPtr) : Arg::simpleAddr(newPtr);
     2091
     2092    if (accessWidth(op) != B3::Width8) {
     2093        emitCheck([&] {
     2094            return Inst(BranchTest64, nullptr, Arg::resCond(MacroAssembler::NonZero), newPtr, isX86() ? Arg::bitImm(sizeOfAtomicOpMemoryAccess(op) - 1) : Arg::bitImm64(sizeOfAtomicOpMemoryAccess(op) - 1));
     2095        }, [=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2096            this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2097        });
     2098    }
     2099
     2100    Optional<B3::Air::Opcode> opcode;
     2101    B3::Air::Opcode nonAtomicOpcode = B3::Air::Nop;
     2102    B3::Commutativity commutativity = B3::NotCommutative;
     2103    switch (op) {
     2104    case ExtAtomicOpType::I32AtomicRmw8AddU:
     2105    case ExtAtomicOpType::I32AtomicRmw16AddU:
     2106    case ExtAtomicOpType::I32AtomicRmwAdd:
     2107    case ExtAtomicOpType::I64AtomicRmw8AddU:
     2108    case ExtAtomicOpType::I64AtomicRmw16AddU:
     2109    case ExtAtomicOpType::I64AtomicRmw32AddU:
     2110    case ExtAtomicOpType::I64AtomicRmwAdd:
     2111        if (isX86())
     2112            opcode = OPCODE_FOR_WIDTH(AtomicXchgAdd, accessWidth(op));
     2113        nonAtomicOpcode = OPCODE_FOR_CANONICAL_WIDTH(Add, accessWidth(op));
     2114        commutativity = B3::Commutative;
     2115        break;
     2116    case ExtAtomicOpType::I32AtomicRmw8SubU:
     2117    case ExtAtomicOpType::I32AtomicRmw16SubU:
     2118    case ExtAtomicOpType::I32AtomicRmwSub:
     2119    case ExtAtomicOpType::I64AtomicRmw8SubU:
     2120    case ExtAtomicOpType::I64AtomicRmw16SubU:
     2121    case ExtAtomicOpType::I64AtomicRmw32SubU:
     2122    case ExtAtomicOpType::I64AtomicRmwSub:
     2123        if (isX86()) {
     2124            TypedTmp newValue;
     2125            if (valueType == Type::I64) {
     2126                newValue = g64();
     2127                append(Move, value, newValue);
     2128                append(Neg64, newValue);
     2129            } else {
     2130                newValue = g32();
     2131                append(Move, value, newValue);
     2132                append(Neg32, newValue);
     2133            }
     2134            value = newValue;
     2135            opcode = OPCODE_FOR_WIDTH(AtomicXchgAdd, accessWidth(op));
     2136            nonAtomicOpcode = OPCODE_FOR_CANONICAL_WIDTH(Add, accessWidth(op));
     2137            commutativity = B3::Commutative;
     2138        } else {
     2139            nonAtomicOpcode = OPCODE_FOR_CANONICAL_WIDTH(Sub, accessWidth(op));
     2140            commutativity = B3::NotCommutative;
     2141        }
     2142        break;
     2143    case ExtAtomicOpType::I32AtomicRmw8AndU:
     2144    case ExtAtomicOpType::I32AtomicRmw16AndU:
     2145    case ExtAtomicOpType::I32AtomicRmwAnd:
     2146    case ExtAtomicOpType::I64AtomicRmw8AndU:
     2147    case ExtAtomicOpType::I64AtomicRmw16AndU:
     2148    case ExtAtomicOpType::I64AtomicRmw32AndU:
     2149    case ExtAtomicOpType::I64AtomicRmwAnd:
     2150        nonAtomicOpcode = OPCODE_FOR_CANONICAL_WIDTH(And, accessWidth(op));
     2151        commutativity = B3::Commutative;
     2152        break;
     2153    case ExtAtomicOpType::I32AtomicRmw8OrU:
     2154    case ExtAtomicOpType::I32AtomicRmw16OrU:
     2155    case ExtAtomicOpType::I32AtomicRmwOr:
     2156    case ExtAtomicOpType::I64AtomicRmw8OrU:
     2157    case ExtAtomicOpType::I64AtomicRmw16OrU:
     2158    case ExtAtomicOpType::I64AtomicRmw32OrU:
     2159    case ExtAtomicOpType::I64AtomicRmwOr:
     2160        nonAtomicOpcode = OPCODE_FOR_CANONICAL_WIDTH(Or, accessWidth(op));
     2161        commutativity = B3::Commutative;
     2162        break;
     2163    case ExtAtomicOpType::I32AtomicRmw8XorU:
     2164    case ExtAtomicOpType::I32AtomicRmw16XorU:
     2165    case ExtAtomicOpType::I32AtomicRmwXor:
     2166    case ExtAtomicOpType::I64AtomicRmw8XorU:
     2167    case ExtAtomicOpType::I64AtomicRmw16XorU:
     2168    case ExtAtomicOpType::I64AtomicRmw32XorU:
     2169    case ExtAtomicOpType::I64AtomicRmwXor:
     2170        nonAtomicOpcode = OPCODE_FOR_CANONICAL_WIDTH(Xor, accessWidth(op));
     2171        commutativity = B3::Commutative;
     2172        break;
     2173    case ExtAtomicOpType::I32AtomicRmw8XchgU:
     2174    case ExtAtomicOpType::I32AtomicRmw16XchgU:
     2175    case ExtAtomicOpType::I32AtomicRmwXchg:
     2176    case ExtAtomicOpType::I64AtomicRmw8XchgU:
     2177    case ExtAtomicOpType::I64AtomicRmw16XchgU:
     2178    case ExtAtomicOpType::I64AtomicRmw32XchgU:
     2179    case ExtAtomicOpType::I64AtomicRmwXchg:
     2180        if (isX86())
     2181            opcode = OPCODE_FOR_WIDTH(AtomicXchg, accessWidth(op));
     2182        nonAtomicOpcode = B3::Air::Nop;
     2183        break;
     2184    default:
     2185        RELEASE_ASSERT_NOT_REACHED();
     2186        break;
     2187    }
     2188
     2189    if (opcode && isValidForm(opcode.value(), Arg::Tmp, addrArg.kind())) {
     2190        TypedTmp result = valueType == Type::I64 ? g64() : g32();
     2191        append(Move, value, result);
     2192        appendEffectful(opcode.value(), result, addrArg);
     2193        return sanitizeAtomicResult(op, valueType, result);
     2194    }
     2195
     2196    return sanitizeAtomicResult(op, valueType, appendGeneralAtomic(op, valueType, nonAtomicOpcode, commutativity, value, addrArg));
     2197}
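Note how the Sub cases are lowered on x86: there is no fetch-and-subtract instruction, so the operand is negated and the commutative `AtomicXchgAdd` form is reused. The identity is fetch_sub(v) == fetch_add(-v), sketched below (helper name is hypothetical):

```cpp
#include <atomic>
#include <cstdint>

// x86 has LOCK XADD but no fetch-and-subtract, so the generator negates
// the operand and reuses AtomicXchgAdd: fetch_sub(v) == fetch_add(-v)
// in wrapping unsigned arithmetic.
inline uint32_t rmwSubViaAdd(std::atomic<uint32_t>& slot, uint32_t value)
{
    return slot.fetch_add(0u - value); // returns the old value
}
```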
     2198
     2199auto AirIRGenerator::atomicBinaryRMW(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset) -> PartialResult
     2200{
     2201    ASSERT(pointer.tmp().isGP());
     2202
     2203    if (UNLIKELY(sumOverflows<uint32_t>(offset, sizeOfAtomicOpMemoryAccess(op)))) {
     2204        // FIXME: Even though this is provably out of bounds, it's not a validation error, so we have to handle it
     2205        // as a runtime exception. However, this may change: https://bugs.webkit.org/show_bug.cgi?id=166435
     2206        auto* patch = addPatchpoint(B3::Void);
     2207        patch->setGenerator([this] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2208            this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2209        });
     2210        emitPatchpoint(patch, Tmp());
     2211
     2212        switch (valueType) {
     2213        case Type::I32:
     2214            result = g32();
     2215            break;
     2216        case Type::I64:
     2217            result = g64();
     2218            break;
     2219        default:
     2220            RELEASE_ASSERT_NOT_REACHED();
     2221            break;
     2222        }
     2223    } else
     2224        result = emitAtomicBinaryRMWOp(op, valueType, emitCheckAndPreparePointer(pointer, offset, sizeOfAtomicOpMemoryAccess(op)), value, offset);
     2225
     2226    return { };
     2227}
     2228
     2229TypedTmp AirIRGenerator::emitAtomicCompareExchange(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType expected, ExpressionType value, uint32_t uoffset)
     2230{
     2231    TypedTmp newPtr = fixupPointerPlusOffsetForAtomicOps(op, pointer, uoffset);
     2232    Arg addrArg = isX86() ? Arg::addr(newPtr) : Arg::simpleAddr(newPtr);
     2233
     2234    if (accessWidth(op) != B3::Width8) {
     2235        emitCheck([&] {
     2236            return Inst(BranchTest64, nullptr, Arg::resCond(MacroAssembler::NonZero), newPtr, isX86() ? Arg::bitImm(sizeOfAtomicOpMemoryAccess(op) - 1) : Arg::bitImm64(sizeOfAtomicOpMemoryAccess(op) - 1));
     2237        }, [=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2238            this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2239        });
     2240    }
     2241
     2242    return sanitizeAtomicResult(op, valueType, appendStrongCAS(op, valueType, expected, value, addrArg));
     2243}
     2244
     2245auto AirIRGenerator::atomicCompareExchange(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType expected, ExpressionType value, ExpressionType& result, uint32_t offset) -> PartialResult
     2246{
     2247    ASSERT(pointer.tmp().isGP());
     2248
     2249    if (UNLIKELY(sumOverflows<uint32_t>(offset, sizeOfAtomicOpMemoryAccess(op)))) {
     2250        // FIXME: Even though this is provably out of bounds, it's not a validation error, so we have to handle it
     2251        // as a runtime exception. However, this may change: https://bugs.webkit.org/show_bug.cgi?id=166435
     2252        auto* patch = addPatchpoint(B3::Void);
     2253        patch->setGenerator([this] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2254            this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2255        });
     2256        emitPatchpoint(patch, Tmp());
     2257
      2258        // This code is unreachable, so we just pick an arbitrary register.
     2259        switch (valueType) {
     2260        case Type::I32:
     2261            result = g32();
     2262            break;
     2263        case Type::I64:
     2264            result = g64();
     2265            break;
     2266        default:
     2267            RELEASE_ASSERT_NOT_REACHED();
     2268        }
     2269    } else
     2270        result = emitAtomicCompareExchange(op, valueType, emitCheckAndPreparePointer(pointer, offset, sizeOfAtomicOpMemoryAccess(op)), expected, value, offset);
     2271
     2272    return { };
     2273}
     2274
     2275auto AirIRGenerator::atomicWait(ExtAtomicOpType op, ExpressionType pointer, ExpressionType value, ExpressionType timeout, ExpressionType& result, uint32_t offset) -> PartialResult
     2276{
     2277    result = g32();
     2278
     2279    if (op == ExtAtomicOpType::MemoryAtomicWait32)
     2280        emitCCall(&operationMemoryAtomicWait32, result, instanceValue(), pointer, addConstant(Type::I32, offset), value, timeout);
     2281    else
     2282        emitCCall(&operationMemoryAtomicWait64, result, instanceValue(), pointer, addConstant(Type::I32, offset), value, timeout);
     2283    emitCheck([&] {
     2284        return Inst(Branch32, nullptr, Arg::relCond(MacroAssembler::LessThan), result, Arg::imm(0));
     2285    }, [=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2286        this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2287    });
     2288
     2289    return { };
     2290}
     2291
     2292auto AirIRGenerator::atomicNotify(ExtAtomicOpType, ExpressionType pointer, ExpressionType count, ExpressionType& result, uint32_t offset) -> PartialResult
     2293{
     2294    result = g32();
     2295
     2296    emitCCall(&operationMemoryAtomicNotify, result, instanceValue(), pointer, addConstant(Type::I32, offset), count);
     2297    emitCheck([&] {
     2298        return Inst(Branch32, nullptr, Arg::relCond(MacroAssembler::LessThan), result, Arg::imm(0));
     2299    }, [=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     2300        this->emitThrowException(jit, ExceptionType::OutOfBoundsMemoryAccess);
     2301    });
     2302
     2303    return { };
     2304}
     2305
     2306auto AirIRGenerator::atomicFence(ExtAtomicOpType, uint8_t) -> PartialResult
     2307{
     2308    append(MemoryFence);
     2309    return { };
     2310}
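`atomic.fence` lowers to a single full `MemoryFence` regardless of its flags byte. The closest C++ analogue is a sequentially consistent thread fence:

```cpp
#include <atomic>

// wasm atomic.fence maps to a full two-way barrier, comparable to
// std::atomic_thread_fence(std::memory_order_seq_cst).
inline void wasmAtomicFence()
{
    std::atomic_thread_fence(std::memory_order_seq_cst);
}
```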
     2311
     1686 2312    auto AirIRGenerator::addSelect(ExpressionType condition, ExpressionType nonZero, ExpressionType zero, ExpressionType& result) -> PartialResult
     1687 2313    {
     
     2159 2785        auto* patchpoint = emitCallPatchpoint(m_currentBlock, signature, results, args);
     2160 2786        // We need to clobber the size register since the LLInt always bounds checks
     2161             if (m_mode == MemoryMode::Signaling)
          2787        if (m_mode == MemoryMode::Signaling || m_info.memory.isShared())
     2162 2788            patchpoint->clobberLate(RegisterSet { PinnedRegisterInfo::get().boundsCheckingSizeRegister });
     2163 2789        patchpoint->setGenerator([unlinkedWasmToWasmCalls, functionIndex] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
  • trunk/Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp

    r270015 r270208  
     233 233    PartialResult WARN_UNUSED_RETURN addCurrentMemory(ExpressionType& result);
     234 234
     235    // Atomics
     236    PartialResult WARN_UNUSED_RETURN atomicLoad(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType& result, uint32_t offset);
     237    PartialResult WARN_UNUSED_RETURN atomicStore(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, uint32_t offset);
     238    PartialResult WARN_UNUSED_RETURN atomicBinaryRMW(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset);
     239    PartialResult WARN_UNUSED_RETURN atomicCompareExchange(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType expected, ExpressionType value, ExpressionType& result, uint32_t offset);
     240
     241    PartialResult WARN_UNUSED_RETURN atomicWait(ExtAtomicOpType, ExpressionType pointer, ExpressionType value, ExpressionType timeout, ExpressionType& result, uint32_t offset);
     242    PartialResult WARN_UNUSED_RETURN atomicNotify(ExtAtomicOpType, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset);
     243    PartialResult WARN_UNUSED_RETURN atomicFence(ExtAtomicOpType, uint8_t flags);
     244
     235 245    // Basic operators
     236 246    template<OpType>
     
     286 296    void emitStoreOp(StoreOpType, ExpressionType pointer, ExpressionType value, uint32_t offset);
     287 297
     298    ExpressionType sanitizeAtomicResult(ExtAtomicOpType, Type, ExpressionType result);
     299    ExpressionType emitAtomicLoadOp(ExtAtomicOpType, Type, ExpressionType pointer, uint32_t offset);
     300    void emitAtomicStoreOp(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, uint32_t offset);
     301    ExpressionType emitAtomicBinaryRMWOp(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, uint32_t offset);
     302    ExpressionType emitAtomicCompareExchange(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType expected, ExpressionType value, uint32_t offset);
     303
     288 304    void unify(const ExpressionType phi, const ExpressionType source);
     289 305    void unifyValuesWithBlock(const Stack& resultStack, const ResultList& stack);
     
     292 308
     293 309    int32_t WARN_UNUSED_RETURN fixupPointerPlusOffset(ExpressionType&, uint32_t);
     310    ExpressionType WARN_UNUSED_RETURN fixupPointerPlusOffsetForAtomicOps(ExtAtomicOpType, ExpressionType, uint32_t);
     294 311
     295 312    void restoreWasmContextInstance(Procedure&, BasicBlock*, Value*);
     
     956 973    switch (m_mode) {
     957 974    case MemoryMode::BoundsChecking: {
    958         // We're not using signal handling at all, we must therefore check that no memory access exceeds the current memory size.
      975        // In BoundsChecking mode, signal handling is used only when the memory is shared.
     976        // Regardless of signaling, we must check that no memory access exceeds the current memory size.
     959 977        ASSERT(m_boundsCheckingSizeGPR);
     960 978        ASSERT(sizeOfOperation + offset > offset);
     
     1013 1031    inline B3::Kind B3IRGenerator::memoryKind(B3::Opcode memoryOp)
     1014 1032    {
     1015             if (m_mode == MemoryMode::Signaling)
          1033        if (m_mode == MemoryMode::Signaling || m_info.memory.isShared())
     1016 1034            return trapping(memoryOp);
     1017 1035        return memoryOp;
     
     1204 1222    }
     1205 1223
     1224inline B3::Width accessWidth(ExtAtomicOpType op)
     1225{
     1226    return static_cast<B3::Width>(memoryLog2Alignment(op));
     1227}
     1228
     1229inline uint32_t sizeOfAtomicOpMemoryAccess(ExtAtomicOpType op)
     1230{
     1231    return bytesForWidth(accessWidth(op));
     1232}
     1233
     1234inline Value* B3IRGenerator::sanitizeAtomicResult(ExtAtomicOpType op, Type valueType, ExpressionType result)
     1235{
     1236    auto sanitize32 = [&](ExpressionType result) {
     1237        switch (accessWidth(op)) {
     1238        case B3::Width8:
     1239            return m_currentBlock->appendNew<Value>(m_proc, BitAnd, origin(), result, constant(Int32, 0xff));
     1240        case B3::Width16:
     1241            return m_currentBlock->appendNew<Value>(m_proc, BitAnd, origin(), result, constant(Int32, 0xffff));
     1242        default:
     1243            return result;
     1244        }
     1245    };
     1246
     1247    switch (valueType) {
     1248    case Type::I64: {
     1249        if (accessWidth(op) == B3::Width64)
     1250            return result;
     1251        return m_currentBlock->appendNew<Value>(m_proc, ZExt32, origin(), sanitize32(result));
     1252    }
     1253    case Type::I32:
     1254        return sanitize32(result);
     1255    default:
     1256        RELEASE_ASSERT_NOT_REACHED();
     1257        return nullptr;
     1258    }
     1259}
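`sanitizeAtomicResult` masks the raw result down to the access width and, for I64 results of narrower accesses, zero-extends, because narrow wasm atomics are unsigned. A sketch of the same normalization over a 64-bit carrier (helper name is illustrative):

```cpp
#include <cstdint>

// Narrow atomic accesses are unsigned in wasm: an 8-, 16-, or 32-bit
// result is masked to its access width and zero-extended to the value
// type, matching sanitizeAtomicResult above.
inline uint64_t sanitizeResult(uint64_t raw, unsigned accessBytes)
{
    switch (accessBytes) {
    case 1: return raw & 0xffu;
    case 2: return raw & 0xffffu;
    case 4: return raw & 0xffffffffu; // equivalent to ZExt32 of the low half
    default: return raw; // 8-byte access needs no masking
    }
}
```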
     1260
     1261Value* B3IRGenerator::fixupPointerPlusOffsetForAtomicOps(ExtAtomicOpType op, ExpressionType ptr, uint32_t offset)
     1262{
     1263    auto pointer = m_currentBlock->appendNew<Value>(m_proc, Add, origin(), ptr, m_currentBlock->appendNew<Const64Value>(m_proc, origin(), offset));
     1264    if (accessWidth(op) != B3::Width8) {
     1265        CheckValue* check = m_currentBlock->appendNew<CheckValue>(m_proc, Check, origin(),
     1266            m_currentBlock->appendNew<Value>(m_proc, BitAnd, origin(), pointer, constant(pointerType(), sizeOfAtomicOpMemoryAccess(op) - 1)));
     1267        check->setGenerator([=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     1268            this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
     1269        });
     1270    }
     1271    return pointer;
     1272}
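`fixupPointerPlusOffsetForAtomicOps` adds the static offset up front and traps unless the effective address is naturally aligned; since the access sizes are powers of two, the check reduces to masking against `size - 1`. Sketch (helper name is hypothetical):

```cpp
#include <cstdint>

// Natural-alignment check used for wasm atomics: an address is aligned
// for a power-of-two access size iff (addr & (size - 1)) == 0.
inline bool isNaturallyAligned(uint64_t pointer, uint32_t offset, uint32_t accessSize)
{
    uint64_t effective = pointer + offset;
    return (effective & (accessSize - 1)) == 0;
}
```

The Width8 case skips the check entirely, since every address is 1-byte aligned.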
     1273
     1274inline Value* B3IRGenerator::emitAtomicLoadOp(ExtAtomicOpType op, Type valueType, ExpressionType pointer, uint32_t uoffset)
     1275{
     1276    pointer = fixupPointerPlusOffsetForAtomicOps(op, pointer, uoffset);
     1277
     1278    ExpressionType value = nullptr;
     1279    switch (accessWidth(op)) {
     1280    case B3::Width8:
     1281    case B3::Width16:
     1282    case B3::Width32:
     1283        value = constant(Int32, 0);
     1284        break;
     1285    case B3::Width64:
     1286        value = constant(Int64, 0);
     1287        break;
     1288    }
     1289
     1290    return sanitizeAtomicResult(op, valueType, m_currentBlock->appendNew<AtomicValue>(m_proc, memoryKind(AtomicXchgAdd), origin(), accessWidth(op), value, pointer));
     1291}
     1292
     1293auto B3IRGenerator::atomicLoad(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType& result, uint32_t offset) -> PartialResult
     1294{
     1295    ASSERT(pointer->type() == Int32);
     1296
     1297    if (UNLIKELY(sumOverflows<uint32_t>(offset, sizeOfAtomicOpMemoryAccess(op)))) {
     1298        // FIXME: Even though this is provably out of bounds, it's not a validation error, so we have to handle it
     1299        // as a runtime exception. However, this may change: https://bugs.webkit.org/show_bug.cgi?id=166435
     1300        B3::PatchpointValue* throwException = m_currentBlock->appendNew<B3::PatchpointValue>(m_proc, B3::Void, origin());
     1301        throwException->setGenerator([this] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     1302            this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
     1303        });
     1304
     1305        switch (valueType) {
     1306        case Type::I32:
     1307            result = constant(Int32, 0);
     1308            break;
     1309        case Type::I64:
     1310            result = constant(Int64, 0);
     1311            break;
     1312        default:
     1313            RELEASE_ASSERT_NOT_REACHED();
     1314            break;
     1315        }
     1316    } else
     1317        result = emitAtomicLoadOp(op, valueType, emitCheckAndPreparePointer(pointer, offset, sizeOfAtomicOpMemoryAccess(op)), offset);
     1318
     1319    return { };
     1320}
     1321
     1322inline void B3IRGenerator::emitAtomicStoreOp(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType value, uint32_t uoffset)
     1323{
     1324    pointer = fixupPointerPlusOffsetForAtomicOps(op, pointer, uoffset);
     1325
     1326    if (valueType == Type::I64 && accessWidth(op) != B3::Width64)
     1327        value = m_currentBlock->appendNew<B3::Value>(m_proc, B3::Trunc, Origin(), value);
     1328    m_currentBlock->appendNew<AtomicValue>(m_proc, memoryKind(AtomicXchg), origin(), accessWidth(op), value, pointer);
     1329}
     1330
     1331auto B3IRGenerator::atomicStore(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType value, uint32_t offset) -> PartialResult
     1332{
     1333    ASSERT(pointer->type() == Int32);
     1334
     1335    if (UNLIKELY(sumOverflows<uint32_t>(offset, sizeOfAtomicOpMemoryAccess(op)))) {
     1336        // FIXME: Even though this is provably out of bounds, it's not a validation error, so we have to handle it
     1337        // as a runtime exception. However, this may change: https://bugs.webkit.org/show_bug.cgi?id=166435
     1338        B3::PatchpointValue* throwException = m_currentBlock->appendNew<B3::PatchpointValue>(m_proc, B3::Void, origin());
     1339        throwException->setGenerator([this] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     1340            this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
     1341        });
     1342    } else
     1343        emitAtomicStoreOp(op, valueType, emitCheckAndPreparePointer(pointer, offset, sizeOfAtomicOpMemoryAccess(op)), value, offset);
     1344
     1345    return { };
     1346}
     1347
     1348inline Value* B3IRGenerator::emitAtomicBinaryRMWOp(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType value, uint32_t uoffset)
     1349{
     1350    pointer = fixupPointerPlusOffsetForAtomicOps(op, pointer, uoffset);
     1351
     1352    B3::Opcode opcode = B3::Nop;
     1353    switch (op) {
     1354    case ExtAtomicOpType::I32AtomicRmw8AddU:
     1355    case ExtAtomicOpType::I32AtomicRmw16AddU:
     1356    case ExtAtomicOpType::I32AtomicRmwAdd:
     1357    case ExtAtomicOpType::I64AtomicRmw8AddU:
     1358    case ExtAtomicOpType::I64AtomicRmw16AddU:
     1359    case ExtAtomicOpType::I64AtomicRmw32AddU:
     1360    case ExtAtomicOpType::I64AtomicRmwAdd:
     1361        opcode = AtomicXchgAdd;
     1362        break;
     1363    case ExtAtomicOpType::I32AtomicRmw8SubU:
     1364    case ExtAtomicOpType::I32AtomicRmw16SubU:
     1365    case ExtAtomicOpType::I32AtomicRmwSub:
     1366    case ExtAtomicOpType::I64AtomicRmw8SubU:
     1367    case ExtAtomicOpType::I64AtomicRmw16SubU:
     1368    case ExtAtomicOpType::I64AtomicRmw32SubU:
     1369    case ExtAtomicOpType::I64AtomicRmwSub:
     1370        opcode = AtomicXchgSub;
     1371        break;
     1372    case ExtAtomicOpType::I32AtomicRmw8AndU:
     1373    case ExtAtomicOpType::I32AtomicRmw16AndU:
     1374    case ExtAtomicOpType::I32AtomicRmwAnd:
     1375    case ExtAtomicOpType::I64AtomicRmw8AndU:
     1376    case ExtAtomicOpType::I64AtomicRmw16AndU:
     1377    case ExtAtomicOpType::I64AtomicRmw32AndU:
     1378    case ExtAtomicOpType::I64AtomicRmwAnd:
     1379        opcode = AtomicXchgAnd;
     1380        break;
     1381    case ExtAtomicOpType::I32AtomicRmw8OrU:
     1382    case ExtAtomicOpType::I32AtomicRmw16OrU:
     1383    case ExtAtomicOpType::I32AtomicRmwOr:
     1384    case ExtAtomicOpType::I64AtomicRmw8OrU:
     1385    case ExtAtomicOpType::I64AtomicRmw16OrU:
     1386    case ExtAtomicOpType::I64AtomicRmw32OrU:
     1387    case ExtAtomicOpType::I64AtomicRmwOr:
     1388        opcode = AtomicXchgOr;
     1389        break;
     1390    case ExtAtomicOpType::I32AtomicRmw8XorU:
     1391    case ExtAtomicOpType::I32AtomicRmw16XorU:
     1392    case ExtAtomicOpType::I32AtomicRmwXor:
     1393    case ExtAtomicOpType::I64AtomicRmw8XorU:
     1394    case ExtAtomicOpType::I64AtomicRmw16XorU:
     1395    case ExtAtomicOpType::I64AtomicRmw32XorU:
     1396    case ExtAtomicOpType::I64AtomicRmwXor:
     1397        opcode = AtomicXchgXor;
     1398        break;
     1399    case ExtAtomicOpType::I32AtomicRmw8XchgU:
     1400    case ExtAtomicOpType::I32AtomicRmw16XchgU:
     1401    case ExtAtomicOpType::I32AtomicRmwXchg:
     1402    case ExtAtomicOpType::I64AtomicRmw8XchgU:
     1403    case ExtAtomicOpType::I64AtomicRmw16XchgU:
     1404    case ExtAtomicOpType::I64AtomicRmw32XchgU:
     1405    case ExtAtomicOpType::I64AtomicRmwXchg:
     1406        opcode = AtomicXchg;
     1407        break;
     1408    default:
     1409        RELEASE_ASSERT_NOT_REACHED();
     1410        break;
     1411    }
     1412
     1413    if (valueType == Type::I64 && accessWidth(op) != B3::Width64)
     1414        value = m_currentBlock->appendNew<B3::Value>(m_proc, B3::Trunc, Origin(), value);
     1415
     1416    return sanitizeAtomicResult(op, valueType, m_currentBlock->appendNew<AtomicValue>(m_proc, memoryKind(opcode), origin(), accessWidth(op), value, pointer));
     1417}
     1418
     1419auto B3IRGenerator::atomicBinaryRMW(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset) -> PartialResult
     1420{
     1421    ASSERT(pointer->type() == Int32);
     1422
     1423    if (UNLIKELY(sumOverflows<uint32_t>(offset, sizeOfAtomicOpMemoryAccess(op)))) {
     1424        // FIXME: Even though this is provably out of bounds, it's not a validation error, so we have to handle it
     1425        // as a runtime exception. However, this may change: https://bugs.webkit.org/show_bug.cgi?id=166435
     1426        B3::PatchpointValue* throwException = m_currentBlock->appendNew<B3::PatchpointValue>(m_proc, B3::Void, origin());
     1427        throwException->setGenerator([this] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     1428            this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
     1429        });
     1430
     1431        switch (valueType) {
     1432        case Type::I32:
     1433            result = constant(Int32, 0);
     1434            break;
     1435        case Type::I64:
     1436            result = constant(Int64, 0);
     1437            break;
     1438        default:
     1439            RELEASE_ASSERT_NOT_REACHED();
     1440            break;
     1441        }
     1442    } else
     1443        result = emitAtomicBinaryRMWOp(op, valueType, emitCheckAndPreparePointer(pointer, offset, sizeOfAtomicOpMemoryAccess(op)), value, offset);
     1444
     1445    return { };
     1446}
     1447
     1448Value* B3IRGenerator::emitAtomicCompareExchange(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType expected, ExpressionType value, uint32_t uoffset)
     1449{
     1450    pointer = fixupPointerPlusOffsetForAtomicOps(op, pointer, uoffset);
     1451
     1452    if (valueType == Type::I64 && accessWidth(op) != B3::Width64) {
     1453        expected = m_currentBlock->appendNew<B3::Value>(m_proc, B3::Trunc, Origin(), expected);
     1454        value = m_currentBlock->appendNew<B3::Value>(m_proc, B3::Trunc, Origin(), value);
     1455    }
     1456
     1457    return sanitizeAtomicResult(op, valueType, m_currentBlock->appendNew<AtomicValue>(m_proc, memoryKind(AtomicStrongCAS), origin(), accessWidth(op), expected, value, pointer));
     1458}
     1459
     1460auto B3IRGenerator::atomicCompareExchange(ExtAtomicOpType op, Type valueType, ExpressionType pointer, ExpressionType expected, ExpressionType value, ExpressionType& result, uint32_t offset) -> PartialResult
     1461{
     1462    ASSERT(pointer->type() == Int32);
     1463
     1464    if (UNLIKELY(sumOverflows<uint32_t>(offset, sizeOfAtomicOpMemoryAccess(op)))) {
     1465        // FIXME: Even though this is provably out of bounds, it's not a validation error, so we have to handle it
     1466        // as a runtime exception. However, this may change: https://bugs.webkit.org/show_bug.cgi?id=166435
     1467        B3::PatchpointValue* throwException = m_currentBlock->appendNew<B3::PatchpointValue>(m_proc, B3::Void, origin());
     1468        throwException->setGenerator([this] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     1469            this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
     1470        });
     1471
     1472        switch (valueType) {
     1473        case Type::I32:
     1474            result = constant(Int32, 0);
     1475            break;
     1476        case Type::I64:
     1477            result = constant(Int64, 0);
     1478            break;
     1479        default:
     1480            RELEASE_ASSERT_NOT_REACHED();
     1481            break;
     1482        }
     1483    } else
     1484        result = emitAtomicCompareExchange(op, valueType, emitCheckAndPreparePointer(pointer, offset, sizeOfAtomicOpMemoryAccess(op)), expected, value, offset);
     1485
     1486    return { };
     1487}
     1488
     1489auto B3IRGenerator::atomicWait(ExtAtomicOpType op, ExpressionType pointer, ExpressionType value, ExpressionType timeout, ExpressionType& result, uint32_t offset) -> PartialResult
     1490{
     1491    if (op == ExtAtomicOpType::MemoryAtomicWait32) {
     1492        result = m_currentBlock->appendNew<CCallValue>(m_proc, Int32, origin(),
     1493            m_currentBlock->appendNew<ConstPtrValue>(m_proc, origin(), tagCFunction<OperationPtrTag>(operationMemoryAtomicWait32)),
     1494            instanceValue(), pointer, m_currentBlock->appendNew<Const32Value>(m_proc, origin(), offset), value, timeout);
     1495    } else {
     1496        result = m_currentBlock->appendNew<CCallValue>(m_proc, Int32, origin(),
     1497            m_currentBlock->appendNew<ConstPtrValue>(m_proc, origin(), tagCFunction<OperationPtrTag>(operationMemoryAtomicWait64)),
     1498            instanceValue(), pointer, m_currentBlock->appendNew<Const32Value>(m_proc, origin(), offset), value, timeout);
     1499    }
     1500
     1501    {
     1502        CheckValue* check = m_currentBlock->appendNew<CheckValue>(m_proc, Check, origin(),
     1503            m_currentBlock->appendNew<Value>(m_proc, LessThan, origin(), result, m_currentBlock->appendNew<Const32Value>(m_proc, origin(), 0)));
     1504
     1505        check->setGenerator([=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     1506            this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
     1507        });
     1508    }
     1509
     1510    return { };
     1511}
     1512
     1513auto B3IRGenerator::atomicNotify(ExtAtomicOpType, ExpressionType pointer, ExpressionType count, ExpressionType& result, uint32_t offset) -> PartialResult
     1514{
     1515    result = m_currentBlock->appendNew<CCallValue>(m_proc, Int32, origin(),
     1516        m_currentBlock->appendNew<ConstPtrValue>(m_proc, origin(), tagCFunction<OperationPtrTag>(operationMemoryAtomicNotify)),
     1517        instanceValue(), pointer, m_currentBlock->appendNew<Const32Value>(m_proc, origin(), offset), count);
     1518
     1519    {
     1520        CheckValue* check = m_currentBlock->appendNew<CheckValue>(m_proc, Check, origin(),
     1521            m_currentBlock->appendNew<Value>(m_proc, LessThan, origin(), result, m_currentBlock->appendNew<Const32Value>(m_proc, origin(), 0)));
     1522
     1523        check->setGenerator([=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
     1524            this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
     1525        });
     1526    }
     1527
     1528    return { };
     1529}
     1530
     1531auto B3IRGenerator::atomicFence(ExtAtomicOpType, uint8_t) -> PartialResult
     1532{
     1533    m_currentBlock->appendNew<FenceValue>(m_proc, origin());
     1534    return { };
     1535}
     1536
     1206 1537    auto B3IRGenerator::addSelect(ExpressionType condition, ExpressionType nonZero, ExpressionType zero, ExpressionType& result) -> PartialResult
     1207 1538    {
     
     1712 2043
     1713 2044                // We need to clobber the size register since the LLInt always bounds checks
    1714                 if (m_mode == MemoryMode::Signaling)
     2045                if (m_mode == MemoryMode::Signaling || m_info.memory.isShared())
     1715 2046                    patchpoint->clobberLate(RegisterSet { PinnedRegisterInfo::get().boundsCheckingSizeRegister });
     1716 2047                patchpoint->setGenerator([unlinkedWasmToWasmCalls, functionIndex] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
  • trunk/Source/JavaScriptCore/wasm/WasmFunctionParser.h

    r270015 r270208  
     128 128    PartialResult WARN_UNUSED_RETURN load(Type memoryType);
     129 129
     130    PartialResult WARN_UNUSED_RETURN atomicLoad(ExtAtomicOpType, Type memoryType);
     131    PartialResult WARN_UNUSED_RETURN atomicStore(ExtAtomicOpType, Type memoryType);
     132    PartialResult WARN_UNUSED_RETURN atomicBinaryRMW(ExtAtomicOpType, Type memoryType);
     133    PartialResult WARN_UNUSED_RETURN atomicCompareExchange(ExtAtomicOpType, Type memoryType);
     134    PartialResult WARN_UNUSED_RETURN atomicWait(ExtAtomicOpType, Type memoryType);
     135    PartialResult WARN_UNUSED_RETURN atomicNotify(ExtAtomicOpType);
     136    PartialResult WARN_UNUSED_RETURN atomicFence(ExtAtomicOpType);
     137
     130 138    #define WASM_TRY_ADD_TO_CONTEXT(add_expression) WASM_FAIL_IF_HELPER_FAILS(m_context.add_expression)
    131139
     
     314 322
     315 323    WASM_TRY_ADD_TO_CONTEXT(store(static_cast<StoreOpType>(m_currentOpcode), pointer, value, offset));
     324    return { };
     325}
     326
     327template<typename Context>
     328auto FunctionParser<Context>::atomicLoad(ExtAtomicOpType op, Type memoryType) -> PartialResult
     329{
     330    WASM_VALIDATOR_FAIL_IF(!m_info.memory, "atomic instruction without memory");
     331
     332    uint32_t alignment;
     333    uint32_t offset;
     334    TypedExpression pointer;
     335    WASM_PARSER_FAIL_IF(!parseVarUInt32(alignment), "can't get load alignment");
     336    WASM_PARSER_FAIL_IF(alignment > memoryLog2Alignment(op), "byte alignment ", 1ull << alignment, " exceeds load's natural alignment ", 1ull << memoryLog2Alignment(op));
     337    WASM_PARSER_FAIL_IF(!parseVarUInt32(offset), "can't get load offset");
     338    WASM_TRY_POP_EXPRESSION_STACK_INTO(pointer, "load pointer");
     339
     340    WASM_VALIDATOR_FAIL_IF(pointer.type() != I32, static_cast<unsigned>(op), " pointer type mismatch");
     341
     342    ExpressionType result;
     343    WASM_TRY_ADD_TO_CONTEXT(atomicLoad(op, memoryType, pointer, result, offset));
     344    m_expressionStack.constructAndAppend(memoryType, result);
     345    return { };
     346}
     347
     348template<typename Context>
     349auto FunctionParser<Context>::atomicStore(ExtAtomicOpType op, Type memoryType) -> PartialResult
     350{
     351    WASM_VALIDATOR_FAIL_IF(!m_info.memory, "atomic instruction without memory");
     352
     353    uint32_t alignment;
     354    uint32_t offset;
     355    TypedExpression value;
     356    TypedExpression pointer;
     357    WASM_PARSER_FAIL_IF(!parseVarUInt32(alignment), "can't get store alignment");
     358    WASM_PARSER_FAIL_IF(alignment > memoryLog2Alignment(op), "byte alignment ", 1ull << alignment, " exceeds store's natural alignment ", 1ull << memoryLog2Alignment(op));
     359    WASM_PARSER_FAIL_IF(!parseVarUInt32(offset), "can't get store offset");
     360    WASM_TRY_POP_EXPRESSION_STACK_INTO(value, "store value");
     361    WASM_TRY_POP_EXPRESSION_STACK_INTO(pointer, "store pointer");
     362
     363    WASM_VALIDATOR_FAIL_IF(pointer.type() != I32, m_currentOpcode, " pointer type mismatch");
     364    WASM_VALIDATOR_FAIL_IF(value.type() != memoryType, m_currentOpcode, " value type mismatch");
     365
     366    WASM_TRY_ADD_TO_CONTEXT(atomicStore(op, memoryType, pointer, value, offset));
     367    return { };
     368}
     369
     370template<typename Context>
     371auto FunctionParser<Context>::atomicBinaryRMW(ExtAtomicOpType op, Type memoryType) -> PartialResult
     372{
     373    WASM_VALIDATOR_FAIL_IF(!m_info.memory, "atomic instruction without memory");
     374
     375    uint32_t alignment;
     376    uint32_t offset;
     377    TypedExpression pointer;
     378    TypedExpression value;
     379    WASM_PARSER_FAIL_IF(!parseVarUInt32(alignment), "can't get load alignment");
     380    WASM_PARSER_FAIL_IF(alignment > memoryLog2Alignment(op), "byte alignment ", 1ull << alignment, " exceeds load's natural alignment ", 1ull << memoryLog2Alignment(op));
     381    WASM_PARSER_FAIL_IF(!parseVarUInt32(offset), "can't get load offset");
     382    WASM_TRY_POP_EXPRESSION_STACK_INTO(value, "value");
     383    WASM_TRY_POP_EXPRESSION_STACK_INTO(pointer, "pointer");
     384
     385    WASM_VALIDATOR_FAIL_IF(pointer.type() != I32, static_cast<unsigned>(op), " pointer type mismatch");
     386    WASM_VALIDATOR_FAIL_IF(value.type() != memoryType, static_cast<unsigned>(op), " value type mismatch");
     387
     388    ExpressionType result;
     389    WASM_TRY_ADD_TO_CONTEXT(atomicBinaryRMW(op, memoryType, pointer, value, result, offset));
     390    m_expressionStack.constructAndAppend(memoryType, result);
     391    return { };
     392}
     393
     394template<typename Context>
     395auto FunctionParser<Context>::atomicCompareExchange(ExtAtomicOpType op, Type memoryType) -> PartialResult
     396{
     397    WASM_VALIDATOR_FAIL_IF(!m_info.memory, "atomic instruction without memory");
     398
     399    uint32_t alignment;
     400    uint32_t offset;
     401    TypedExpression pointer;
     402    TypedExpression expected;
     403    TypedExpression value;
     404    WASM_PARSER_FAIL_IF(!parseVarUInt32(alignment), "can't get load alignment");
     405    WASM_PARSER_FAIL_IF(alignment > memoryLog2Alignment(op), "byte alignment ", 1ull << alignment, " exceeds load's natural alignment ", 1ull << memoryLog2Alignment(op));
     406    WASM_PARSER_FAIL_IF(!parseVarUInt32(offset), "can't get load offset");
     407    WASM_TRY_POP_EXPRESSION_STACK_INTO(value, "value");
     408    WASM_TRY_POP_EXPRESSION_STACK_INTO(expected, "expected");
     409    WASM_TRY_POP_EXPRESSION_STACK_INTO(pointer, "pointer");
     410
     411    WASM_VALIDATOR_FAIL_IF(pointer.type() != I32, static_cast<unsigned>(op), " pointer type mismatch");
     412    WASM_VALIDATOR_FAIL_IF(expected.type() != memoryType, static_cast<unsigned>(op), " expected type mismatch");
     413    WASM_VALIDATOR_FAIL_IF(value.type() != memoryType, static_cast<unsigned>(op), " value type mismatch");
     414
     415    ExpressionType result;
     416    WASM_TRY_ADD_TO_CONTEXT(atomicCompareExchange(op, memoryType, pointer, expected, value, result, offset));
     417    m_expressionStack.constructAndAppend(memoryType, result);
     418    return { };
     419}
     420
     421template<typename Context>
     422auto FunctionParser<Context>::atomicWait(ExtAtomicOpType op, Type memoryType) -> PartialResult
     423{
     424    WASM_VALIDATOR_FAIL_IF(!m_info.memory, "atomic instruction without memory");
     425
     426    uint32_t alignment;
     427    uint32_t offset;
     428    TypedExpression pointer;
     429    TypedExpression value;
     430    TypedExpression timeout;
     431    WASM_PARSER_FAIL_IF(!parseVarUInt32(alignment), "can't get load alignment");
     432    WASM_PARSER_FAIL_IF(alignment > memoryLog2Alignment(op), "byte alignment ", 1ull << alignment, " exceeds load's natural alignment ", 1ull << memoryLog2Alignment(op));
     433    WASM_PARSER_FAIL_IF(!parseVarUInt32(offset), "can't get load offset");
     434    WASM_TRY_POP_EXPRESSION_STACK_INTO(timeout, "timeout");
     435    WASM_TRY_POP_EXPRESSION_STACK_INTO(value, "value");
     436    WASM_TRY_POP_EXPRESSION_STACK_INTO(pointer, "pointer");
     437
     438    WASM_VALIDATOR_FAIL_IF(pointer.type() != I32, static_cast<unsigned>(op), " pointer type mismatch");
     439    WASM_VALIDATOR_FAIL_IF(value.type() != memoryType, static_cast<unsigned>(op), " value type mismatch");
     440    WASM_VALIDATOR_FAIL_IF(timeout.type() != I64, static_cast<unsigned>(op), " timeout type mismatch");
     441
     442    ExpressionType result;
     443    WASM_TRY_ADD_TO_CONTEXT(atomicWait(op, pointer, value, timeout, result, offset));
     444    m_expressionStack.constructAndAppend(I32, result);
     445    return { };
     446}
     447
     448template<typename Context>
     449auto FunctionParser<Context>::atomicNotify(ExtAtomicOpType op) -> PartialResult
     450{
     451    WASM_VALIDATOR_FAIL_IF(!m_info.memory, "atomic instruction without memory");
     452
     453    uint32_t alignment;
     454    uint32_t offset;
     455    TypedExpression pointer;
     456    TypedExpression count;
     457    WASM_PARSER_FAIL_IF(!parseVarUInt32(alignment), "can't get load alignment");
     458    WASM_PARSER_FAIL_IF(alignment > memoryLog2Alignment(op), "byte alignment ", 1ull << alignment, " exceeds load's natural alignment ", 1ull << memoryLog2Alignment(op));
     459    WASM_PARSER_FAIL_IF(!parseVarUInt32(offset), "can't get load offset");
     460    WASM_TRY_POP_EXPRESSION_STACK_INTO(count, "count");
     461    WASM_TRY_POP_EXPRESSION_STACK_INTO(pointer, "pointer");
     462
     463    WASM_VALIDATOR_FAIL_IF(pointer.type() != I32, static_cast<unsigned>(op), " pointer type mismatch");
     464    WASM_VALIDATOR_FAIL_IF(count.type() != I32, static_cast<unsigned>(op), " count type mismatch"); // The spec says i64, but all implementations (including tests) use i32, so the spec appears to be wrong.

     465
     466    ExpressionType result;
     467    WASM_TRY_ADD_TO_CONTEXT(atomicNotify(op, pointer, count, result, offset));
     468    m_expressionStack.constructAndAppend(I32, result);
     469    return { };
     470}
     471
     472template<typename Context>
     473auto FunctionParser<Context>::atomicFence(ExtAtomicOpType op) -> PartialResult
     474{
     475    uint8_t flags;
     476    WASM_PARSER_FAIL_IF(!parseUInt8(flags), "can't get flags");
     477    WASM_PARSER_FAIL_IF(flags != 0x0, "flags should be 0x0 but got ", flags);
     478    WASM_TRY_ADD_TO_CONTEXT(atomicFence(op, flags));
     316 479    return { };
     317 480}
     
     492 655    }
     493 656
     657    case ExtAtomic: {
     658        WASM_PARSER_FAIL_IF(!Options::useWebAssemblyThreading(), "wasm-threading is not enabled");
     659        uint8_t extOp;
     660        WASM_PARSER_FAIL_IF(!parseUInt8(extOp), "can't parse atomic extended opcode");
     661
     662        ExtAtomicOpType op = static_cast<ExtAtomicOpType>(extOp);
     663        switch (op) {
     664#define CREATE_CASE(name, id, b3op, inc, memoryType) case ExtAtomicOpType::name: return atomicLoad(op, memoryType);
     665        FOR_EACH_WASM_EXT_ATOMIC_LOAD_OP(CREATE_CASE)
     666#undef CREATE_CASE
     667#define CREATE_CASE(name, id, b3op, inc, memoryType) case ExtAtomicOpType::name: return atomicStore(op, memoryType);
     668        FOR_EACH_WASM_EXT_ATOMIC_STORE_OP(CREATE_CASE)
     669#undef CREATE_CASE
     670#define CREATE_CASE(name, id, b3op, inc, memoryType) case ExtAtomicOpType::name: return atomicBinaryRMW(op, memoryType);
     671        FOR_EACH_WASM_EXT_ATOMIC_BINARY_RMW_OP(CREATE_CASE)
     672#undef CREATE_CASE
     673        case ExtAtomicOpType::MemoryAtomicWait64:
     674            return atomicWait(op, I64);
     675        case ExtAtomicOpType::MemoryAtomicWait32:
     676            return atomicWait(op, I32);
     677        case ExtAtomicOpType::MemoryAtomicNotify:
     678            return atomicNotify(op);
     679        case ExtAtomicOpType::AtomicFence:
     680            return atomicFence(op);
     681        case ExtAtomicOpType::I32AtomicRmw8CmpxchgU:
     682        case ExtAtomicOpType::I32AtomicRmw16CmpxchgU:
     683        case ExtAtomicOpType::I32AtomicRmwCmpxchg:
     684            return atomicCompareExchange(op, I32);
     685        case ExtAtomicOpType::I64AtomicRmw8CmpxchgU:
     686        case ExtAtomicOpType::I64AtomicRmw16CmpxchgU:
     687        case ExtAtomicOpType::I64AtomicRmw32CmpxchgU:
     688        case ExtAtomicOpType::I64AtomicRmwCmpxchg:
     689            return atomicCompareExchange(op, I64);
     690        default:
     691            WASM_PARSER_FAIL_IF(true, "invalid extended atomic op ", extOp);
     692            break;
     693        }
     694        return { };
     695    }
     696
     494 697    case RefNull: {
     495 698        WASM_PARSER_FAIL_IF(!Options::useWebAssemblyReferences(), "references are not enabled");
     
     1013 1216    }
     1014 1217
     1218#define CREATE_ATOMIC_CASE(name, ...) case ExtAtomicOpType::name:
     1219    case ExtAtomic: {
     1220        WASM_PARSER_FAIL_IF(!Options::useWebAssemblyThreading(), "wasm-threading is not enabled");
     1221        uint8_t extOp;
     1222        WASM_PARSER_FAIL_IF(!parseUInt8(extOp), "can't parse atomic extended opcode");
     1223        ExtAtomicOpType op = static_cast<ExtAtomicOpType>(extOp);
     1224        switch (op) {
     1225        FOR_EACH_WASM_EXT_ATOMIC_LOAD_OP(CREATE_ATOMIC_CASE)
     1226        FOR_EACH_WASM_EXT_ATOMIC_STORE_OP(CREATE_ATOMIC_CASE)
     1227        FOR_EACH_WASM_EXT_ATOMIC_BINARY_RMW_OP(CREATE_ATOMIC_CASE)
     1228        case ExtAtomicOpType::MemoryAtomicWait64:
     1229        case ExtAtomicOpType::MemoryAtomicWait32:
     1230        case ExtAtomicOpType::MemoryAtomicNotify:
     1231        case ExtAtomicOpType::I32AtomicRmw8CmpxchgU:
     1232        case ExtAtomicOpType::I32AtomicRmw16CmpxchgU:
     1233        case ExtAtomicOpType::I32AtomicRmwCmpxchg:
     1234        case ExtAtomicOpType::I64AtomicRmw8CmpxchgU:
     1235        case ExtAtomicOpType::I64AtomicRmw16CmpxchgU:
     1236        case ExtAtomicOpType::I64AtomicRmw32CmpxchgU:
     1237        case ExtAtomicOpType::I64AtomicRmwCmpxchg:
     1238        {
     1239            WASM_VALIDATOR_FAIL_IF(!m_info.memory, "atomic instruction without memory");
     1240            uint32_t alignment;
     1241            uint32_t unused;
     1242            WASM_PARSER_FAIL_IF(!parseVarUInt32(alignment), "can't get load alignment");
     1243            WASM_PARSER_FAIL_IF(alignment > memoryLog2Alignment(op), "byte alignment ", 1ull << alignment, " exceeds load's natural alignment ", 1ull << memoryLog2Alignment(op));
     1244            WASM_PARSER_FAIL_IF(!parseVarUInt32(unused), "can't get first immediate for atomic ", static_cast<unsigned>(op), " in unreachable context");
     1245            break;
     1246        }
     1247        case ExtAtomicOpType::AtomicFence: {
     1248            uint8_t flags;
     1249            WASM_PARSER_FAIL_IF(!parseUInt8(flags), "can't get flags");
     1250            WASM_PARSER_FAIL_IF(flags != 0x0, "flags should be 0x0 but got ", flags);
     1251            break;
     1252        }
     1253        default:
     1254            WASM_PARSER_FAIL_IF(true, "invalid extended atomic op ", extOp);
     1255            break;
     1256        }
     1257
     1258        return { };
     1259    }
     1260#undef CREATE_ATOMIC_CASE
     1261
     1015 1262    // no immediate cases
     1016 1263    FOR_EACH_WASM_BINARY_OP(CREATE_CASE)
  • trunk/Source/JavaScriptCore/wasm/WasmLLIntGenerator.cpp

    r270015 r270208  
     210 210    PartialResult WARN_UNUSED_RETURN addCurrentMemory(ExpressionType& result);
     211 211
     212    // Atomics
     213    PartialResult WARN_UNUSED_RETURN atomicLoad(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType& result, uint32_t offset);
     214    PartialResult WARN_UNUSED_RETURN atomicStore(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, uint32_t offset);
     215    PartialResult WARN_UNUSED_RETURN atomicBinaryRMW(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset);
     216    PartialResult WARN_UNUSED_RETURN atomicCompareExchange(ExtAtomicOpType, Type, ExpressionType pointer, ExpressionType expected, ExpressionType value, ExpressionType& result, uint32_t offset);
     217    PartialResult WARN_UNUSED_RETURN atomicWait(ExtAtomicOpType, ExpressionType pointer, ExpressionType value, ExpressionType timeout, ExpressionType& result, uint32_t offset);
     218    PartialResult WARN_UNUSED_RETURN atomicNotify(ExtAtomicOpType, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset);
     219    PartialResult WARN_UNUSED_RETURN atomicFence(ExtAtomicOpType, uint8_t flags);
     220
     212 221    // Basic operators
     213 222    template<OpType>
     
     1241 1250}
     1242 1251
     1252auto LLIntGenerator::atomicLoad(ExtAtomicOpType op, Type, ExpressionType pointer, ExpressionType& result, uint32_t offset) -> PartialResult
     1253{
     1254    result = push();
     1255    switch (op) {
     1256    case ExtAtomicOpType::I32AtomicLoad8U:
     1257    case ExtAtomicOpType::I64AtomicLoad8U:
     1258        WasmI64AtomicRmw8AddU::emit(this, result, pointer, offset, zeroConstant());
     1259        break;
     1260    case ExtAtomicOpType::I32AtomicLoad16U:
     1261    case ExtAtomicOpType::I64AtomicLoad16U:
     1262        WasmI64AtomicRmw16AddU::emit(this, result, pointer, offset, zeroConstant());
     1263        break;
     1264    case ExtAtomicOpType::I32AtomicLoad:
     1265    case ExtAtomicOpType::I64AtomicLoad32U:
     1266        WasmI64AtomicRmw32AddU::emit(this, result, pointer, offset, zeroConstant());
     1267        break;
     1268    case ExtAtomicOpType::I64AtomicLoad:
     1269        WasmI64AtomicRmwAdd::emit(this, result, pointer, offset, zeroConstant());
     1270        break;
     1271    default:
     1272        RELEASE_ASSERT_NOT_REACHED();
     1273    }
     1274
     1275    return { };
     1276}
     1277
     1278auto LLIntGenerator::atomicStore(ExtAtomicOpType op, Type, ExpressionType pointer, ExpressionType value, uint32_t offset) -> PartialResult
     1279{
     1280    auto result = push();
     1281    switch (op) {
     1282    case ExtAtomicOpType::I32AtomicStore8U:
     1283    case ExtAtomicOpType::I64AtomicStore8U:
     1284        WasmI64AtomicRmw8XchgU::emit(this, result, pointer, offset, value);
     1285        break;
     1286    case ExtAtomicOpType::I32AtomicStore16U:
     1287    case ExtAtomicOpType::I64AtomicStore16U:
     1288        WasmI64AtomicRmw16XchgU::emit(this, result, pointer, offset, value);
     1289        break;
     1290    case ExtAtomicOpType::I32AtomicStore:
     1291    case ExtAtomicOpType::I64AtomicStore32U:
     1292        WasmI64AtomicRmw32XchgU::emit(this, result, pointer, offset, value);
     1293        break;
     1294    case ExtAtomicOpType::I64AtomicStore:
     1295        WasmI64AtomicRmwXchg::emit(this, result, pointer, offset, value);
     1296        break;
     1297    default:
     1298        RELEASE_ASSERT_NOT_REACHED();
     1299    }
     1300
     1301    didPopValueFromStack(); // Ignore the result.
     1302    return { };
     1303}
     1304
     1305auto LLIntGenerator::atomicBinaryRMW(ExtAtomicOpType op, Type, ExpressionType pointer, ExpressionType value, ExpressionType& result, uint32_t offset) -> PartialResult
     1306{
     1307    result = push();
     1308    switch (op) {
     1309    case ExtAtomicOpType::I32AtomicRmw8AddU:
     1310    case ExtAtomicOpType::I64AtomicRmw8AddU:
     1311        WasmI64AtomicRmw8AddU::emit(this, result, pointer, offset, value);
     1312        break;
     1313    case ExtAtomicOpType::I32AtomicRmw16AddU:
     1314    case ExtAtomicOpType::I64AtomicRmw16AddU:
     1315        WasmI64AtomicRmw16AddU::emit(this, result, pointer, offset, value);
     1316        break;
     1317    case ExtAtomicOpType::I32AtomicRmwAdd:
     1318    case ExtAtomicOpType::I64AtomicRmw32AddU:
     1319        WasmI64AtomicRmw32AddU::emit(this, result, pointer, offset, value);
     1320        break;
     1321    case ExtAtomicOpType::I64AtomicRmwAdd:
     1322        WasmI64AtomicRmwAdd::emit(this, result, pointer, offset, value);
     1323        break;
     1324    case ExtAtomicOpType::I32AtomicRmw8SubU:
     1325    case ExtAtomicOpType::I64AtomicRmw8SubU:
     1326        WasmI64AtomicRmw8SubU::emit(this, result, pointer, offset, value);
     1327        break;
     1328    case ExtAtomicOpType::I32AtomicRmw16SubU:
     1329    case ExtAtomicOpType::I64AtomicRmw16SubU:
     1330        WasmI64AtomicRmw16SubU::emit(this, result, pointer, offset, value);
     1331        break;
     1332    case ExtAtomicOpType::I32AtomicRmwSub:
     1333    case ExtAtomicOpType::I64AtomicRmw32SubU:
     1334        WasmI64AtomicRmw32SubU::emit(this, result, pointer, offset, value);
     1335        break;
     1336    case ExtAtomicOpType::I64AtomicRmwSub:
     1337        WasmI64AtomicRmwSub::emit(this, result, pointer, offset, value);
     1338        break;
     1339    case ExtAtomicOpType::I32AtomicRmw8AndU:
     1340    case ExtAtomicOpType::I64AtomicRmw8AndU:
     1341        WasmI64AtomicRmw8AndU::emit(this, result, pointer, offset, value);
     1342        break;
     1343    case ExtAtomicOpType::I32AtomicRmw16AndU:
     1344    case ExtAtomicOpType::I64AtomicRmw16AndU:
     1345        WasmI64AtomicRmw16AndU::emit(this, result, pointer, offset, value);
     1346        break;
     1347    case ExtAtomicOpType::I32AtomicRmwAnd:
     1348    case ExtAtomicOpType::I64AtomicRmw32AndU:
     1349        WasmI64AtomicRmw32AndU::emit(this, result, pointer, offset, value);
     1350        break;
     1351    case ExtAtomicOpType::I64AtomicRmwAnd:
     1352        WasmI64AtomicRmwAnd::emit(this, result, pointer, offset, value);
     1353        break;
     1354    case ExtAtomicOpType::I32AtomicRmw8OrU:
     1355    case ExtAtomicOpType::I64AtomicRmw8OrU:
     1356        WasmI64AtomicRmw8OrU::emit(this, result, pointer, offset, value);
     1357        break;
     1358    case ExtAtomicOpType::I32AtomicRmw16OrU:
     1359    case ExtAtomicOpType::I64AtomicRmw16OrU:
     1360        WasmI64AtomicRmw16OrU::emit(this, result, pointer, offset, value);
     1361        break;
     1362    case ExtAtomicOpType::I32AtomicRmwOr:
     1363    case ExtAtomicOpType::I64AtomicRmw32OrU:
     1364        WasmI64AtomicRmw32OrU::emit(this, result, pointer, offset, value);
     1365        break;
     1366    case ExtAtomicOpType::I64AtomicRmwOr:
     1367        WasmI64AtomicRmwOr::emit(this, result, pointer, offset, value);
     1368        break;
     1369    case ExtAtomicOpType::I32AtomicRmw8XorU:
     1370    case ExtAtomicOpType::I64AtomicRmw8XorU:
     1371        WasmI64AtomicRmw8XorU::emit(this, result, pointer, offset, value);
     1372        break;
     1373    case ExtAtomicOpType::I32AtomicRmw16XorU:
     1374    case ExtAtomicOpType::I64AtomicRmw16XorU:
     1375        WasmI64AtomicRmw16XorU::emit(this, result, pointer, offset, value);
     1376        break;
     1377    case ExtAtomicOpType::I32AtomicRmwXor:
     1378    case ExtAtomicOpType::I64AtomicRmw32XorU:
     1379        WasmI64AtomicRmw32XorU::emit(this, result, pointer, offset, value);
     1380        break;
     1381    case ExtAtomicOpType::I64AtomicRmwXor:
     1382        WasmI64AtomicRmwXor::emit(this, result, pointer, offset, value);
     1383        break;
     1384    case ExtAtomicOpType::I32AtomicRmw8XchgU:
     1385    case ExtAtomicOpType::I64AtomicRmw8XchgU:
     1386        WasmI64AtomicRmw8XchgU::emit(this, result, pointer, offset, value);
     1387        break;
     1388    case ExtAtomicOpType::I32AtomicRmw16XchgU:
     1389    case ExtAtomicOpType::I64AtomicRmw16XchgU:
     1390        WasmI64AtomicRmw16XchgU::emit(this, result, pointer, offset, value);
     1391        break;
     1392    case ExtAtomicOpType::I32AtomicRmwXchg:
     1393    case ExtAtomicOpType::I64AtomicRmw32XchgU:
     1394        WasmI64AtomicRmw32XchgU::emit(this, result, pointer, offset, value);
     1395        break;
     1396    case ExtAtomicOpType::I64AtomicRmwXchg:
     1397        WasmI64AtomicRmwXchg::emit(this, result, pointer, offset, value);
     1398        break;
     1399    default:
     1400        RELEASE_ASSERT_NOT_REACHED();
     1401        break;
     1402    }
     1403
     1404    return { };
     1405}
     1406
     1407auto LLIntGenerator::atomicCompareExchange(ExtAtomicOpType op, Type, ExpressionType pointer, ExpressionType expected, ExpressionType value, ExpressionType& result, uint32_t offset) -> PartialResult
     1408{
     1409    result = push();
     1410    switch (op) {
     1411    case ExtAtomicOpType::I32AtomicRmw8CmpxchgU:
     1412    case ExtAtomicOpType::I64AtomicRmw8CmpxchgU:
     1413        WasmI64AtomicRmw8CmpxchgU::emit(this, result, pointer, offset, expected, value);
     1414        break;
     1415    case ExtAtomicOpType::I32AtomicRmw16CmpxchgU:
     1416    case ExtAtomicOpType::I64AtomicRmw16CmpxchgU:
     1417        WasmI64AtomicRmw16CmpxchgU::emit(this, result, pointer, offset, expected, value);
     1418        break;
     1419    case ExtAtomicOpType::I32AtomicRmwCmpxchg:
     1420    case ExtAtomicOpType::I64AtomicRmw32CmpxchgU:
     1421        WasmI64AtomicRmw32CmpxchgU::emit(this, result, pointer, offset, expected, value);
     1422        break;
     1423    case ExtAtomicOpType::I64AtomicRmwCmpxchg:
     1424        WasmI64AtomicRmwCmpxchg::emit(this, result, pointer, offset, expected, value);
     1425        break;
     1426    default:
     1427        RELEASE_ASSERT_NOT_REACHED();
     1428        break;
     1429    }
     1430
     1431    return { };
     1432}
     1433
     1434auto LLIntGenerator::atomicWait(ExtAtomicOpType op, ExpressionType pointer, ExpressionType value, ExpressionType timeout, ExpressionType& result, uint32_t offset) -> PartialResult
     1435{
     1436    result = push();
     1437    switch (op) {
     1438    case ExtAtomicOpType::MemoryAtomicWait32:
     1439        WasmMemoryAtomicWait32::emit(this, result, pointer, offset, value, timeout);
     1440        break;
     1441    case ExtAtomicOpType::MemoryAtomicWait64:
     1442        WasmMemoryAtomicWait64::emit(this, result, pointer, offset, value, timeout);
     1443        break;
     1444    default:
     1445        RELEASE_ASSERT_NOT_REACHED();
     1446        break;
     1447    }
     1448    return { };
     1449}
     1450
     1451auto LLIntGenerator::atomicNotify(ExtAtomicOpType op, ExpressionType pointer, ExpressionType count, ExpressionType& result, uint32_t offset) -> PartialResult
     1452{
     1453    result = push();
     1454    RELEASE_ASSERT(op == ExtAtomicOpType::MemoryAtomicNotify);
     1455    WasmMemoryAtomicNotify::emit(this, result, pointer, offset, count);
     1456    return { };
     1457}
     1458
     1459auto LLIntGenerator::atomicFence(ExtAtomicOpType, uint8_t) -> PartialResult
     1460{
     1461    WasmAtomicFence::emit(this);
     1462    return { };
     1463}
     1464
    12431465void LLIntGenerator::linkSwitchTargets(Label& label, unsigned location)
    12441466{
  • trunk/Source/JavaScriptCore/wasm/WasmMemory.h

    r269974 r270208  
    6262    size_t size() const { return m_size; }
    6363    size_t mappedCapacity() const { return m_mappedCapacity; }
     64    size_t boundsCheckingSize() const
     65    {
     66        if (m_mode == MemoryMode::BoundsChecking)
     67            return m_mappedCapacity;
     68        return UINT32_MAX;
     69    }
    6470    PageCount initial() const { return m_initial; }
    6571    PageCount maximum() const { return m_maximum; }
     
    112118    size_t size() const { return m_handle->size(); }
    113119    PageCount sizeInPages() const { return PageCount::fromBytes(size()); }
    114     size_t boundsCheckingSize() const { return m_handle->mappedCapacity(); }
     120    size_t boundsCheckingSize() const { return m_handle->boundsCheckingSize(); }
    115121    PageCount initial() const { return m_handle->initial(); }
    116122    PageCount maximum() const { return m_handle->maximum(); }
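The new `boundsCheckingSize()` hunk above distinguishes the two memory modes: bounds-checked memories compare against their mapped capacity, while signaling memories (which back the full 4GiB wasm address range with guard pages) can use a constant limit. A minimal Python sketch of that logic — the string mode names are illustrative stand-ins for JSC's `MemoryMode` enum, not real API:

```python
UINT32_MAX = 2**32 - 1

def bounds_checking_size(mode, mapped_capacity):
    # Mirrors MemoryHandle::boundsCheckingSize() from the hunk above:
    # BoundsChecking-mode memories are checked against their mapped capacity,
    # while Signaling-mode memories rely on guard pages covering the whole
    # 4GiB reservation, so their static bounds-check limit is UINT32_MAX.
    if mode == "BoundsChecking":
        return mapped_capacity
    return UINT32_MAX

# Example: a 64KiB bounds-checked memory vs. a signaling memory.
print(bounds_checking_size("BoundsChecking", 64 * 1024))
print(bounds_checking_size("Signaling", 64 * 1024))
```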
  • trunk/Source/JavaScriptCore/wasm/WasmMemoryInformation.cpp

    r269974 r270208  
    6161}
    6262
    63 MemoryInformation::MemoryInformation(PageCount initial, PageCount maximum, bool isImport)
     63MemoryInformation::MemoryInformation(PageCount initial, PageCount maximum, bool isShared, bool isImport)
    6464    : m_initial(initial)
    6565    , m_maximum(maximum)
     66    , m_isShared(isShared)
    6667    , m_isImport(isImport)
    6768{
  • trunk/Source/JavaScriptCore/wasm/WasmMemoryInformation.h

    r269974 r270208  
    7373    }
    7474
    75     MemoryInformation(PageCount initial, PageCount maximum, bool isImport);
     75    MemoryInformation(PageCount initial, PageCount maximum, bool isShared, bool isImport);
    7676
    7777    PageCount initial() const { return m_initial; }
    7878    PageCount maximum() const { return m_maximum; }
     79    bool isShared() const { return m_isShared; }
    7980    bool isImport() const { return m_isImport; }
    8081
     
    8485    PageCount m_initial { };
    8586    PageCount m_maximum { };
     87    bool m_isShared { false };
    8688    bool m_isImport { false };
    8789};
  • trunk/Source/JavaScriptCore/wasm/WasmOperations.cpp

    r269552 r270208  
    3939#include "JSWebAssemblyRuntimeError.h"
    4040#include "ProbeContext.h"
     41#include "ReleaseHeapAccessScope.h"
     42#include "TypedArrayController.h"
    4143#include "WasmCallee.h"
    4244#include "WasmCallingConvention.h"
     
    733735}
    734736
     737template<typename ValueType>
     738static int32_t wait(VM& vm, ValueType* pointer, ValueType expectedValue, int64_t timeoutInNanoseconds)
     739{
     740    Seconds timeout = Seconds::infinity();
     741    if (timeoutInNanoseconds >= 0) {
     742        int64_t timeoutInMilliseconds = timeoutInNanoseconds / 1000;
     743        timeout = Seconds::fromMilliseconds(timeoutInMilliseconds);
     744    }
     745    bool didPassValidation = false;
     746    ParkingLot::ParkResult result;
     747    {
     748        ReleaseHeapAccessScope releaseHeapAccessScope(vm.heap);
     749        result = ParkingLot::parkConditionally(
     750            pointer,
     751            [&] () -> bool {
     752                didPassValidation = WTF::atomicLoad(pointer) == expectedValue;
     753                return didPassValidation;
     754            },
     755            [] () { },
     756            MonotonicTime::now() + timeout);
     757    }
     758    if (!didPassValidation)
     759        return 1;
     760    if (!result.wasUnparked)
     761        return 2;
     762    return 0;
     763}
     764
     765JSC_DEFINE_JIT_OPERATION(operationMemoryAtomicWait32, int32_t, (Instance* instance, unsigned base, unsigned offset, uint32_t value, int64_t timeoutInNanoseconds))
     766{
     767    VM& vm = instance->owner<JSWebAssemblyInstance>()->vm();
     768    uint64_t offsetInMemory = static_cast<uint64_t>(base) + offset;
     769    if (offsetInMemory & (0x4 - 1))
     770        return -1;
     771    if (!instance->memory())
     772        return -1;
     773    if (offsetInMemory >= instance->memory()->size())
     774        return -1;
     775    if (instance->memory()->sharingMode() != MemorySharingMode::Shared)
     776        return -1;
     777    if (!vm.m_typedArrayController->isAtomicsWaitAllowedOnCurrentThread())
     778        return -1;
     779    uint32_t* pointer = bitwise_cast<uint32_t*>(instance->memory()->memory()) + offsetInMemory;
     780    return wait<uint32_t>(vm, pointer, value, timeoutInNanoseconds);
     781}
     782
     783JSC_DEFINE_JIT_OPERATION(operationMemoryAtomicWait64, int32_t, (Instance* instance, unsigned base, unsigned offset, uint64_t value, int64_t timeoutInNanoseconds))
     784{
     785    VM& vm = instance->owner<JSWebAssemblyInstance>()->vm();
     786    uint64_t offsetInMemory = static_cast<uint64_t>(base) + offset;
     787    if (offsetInMemory & (0x8 - 1))
     788        return -1;
     789    if (!instance->memory())
     790        return -1;
     791    if (offsetInMemory >= instance->memory()->size())
     792        return -1;
     793    if (instance->memory()->sharingMode() != MemorySharingMode::Shared)
     794        return -1;
     795    if (!vm.m_typedArrayController->isAtomicsWaitAllowedOnCurrentThread())
     796        return -1;
     797    uint64_t* pointer = bitwise_cast<uint64_t*>(instance->memory()->memory()) + offsetInMemory;
     798    return wait<uint64_t>(vm, pointer, value, timeoutInNanoseconds);
     799}
     800
     801JSC_DEFINE_JIT_OPERATION(operationMemoryAtomicNotify, int32_t, (Instance* instance, unsigned base, unsigned offset, int32_t countValue))
     802{
     803    uint64_t offsetInMemory = static_cast<uint64_t>(base) + offset;
     804    if (offsetInMemory & (0x4 - 1))
     805        return -1;
     806    if (!instance->memory())
     807        return -1;
     808    if (offsetInMemory >= instance->memory()->size())
     809        return -1;
     810    if (instance->memory()->sharingMode() != MemorySharingMode::Shared)
     811        return 0;
     812    uint8_t* pointer = bitwise_cast<uint8_t*>(instance->memory()->memory()) + offsetInMemory;
     813    unsigned count = UINT_MAX;
     814    if (countValue >= 0)
     815        count = static_cast<unsigned>(countValue);
     816    return ParkingLot::unparkCount(pointer, count);
     817}
     818
    735819JSC_DEFINE_JIT_OPERATION(operationWasmToJSException, void*, (CallFrame* callFrame, Wasm::ExceptionType type, Instance* wasmInstance))
    736820{
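The wait/notify operations above share one validation sequence — alignment, bounds, and sharedness — where -1 tells the slow path to throw `OutOfBoundsMemoryAccess`, and the `wait` helper maps the park outcome onto the wasm result codes. A hedged Python sketch of both (function names are mine, not JSC's):

```python
def atomic_wait_check(base, offset, memory_size, access_bytes, is_shared):
    # Mirrors the guard sequence in operationMemoryAtomicWait32/64 above;
    # a -1 return makes the caller throw OutOfBoundsMemoryAccess.
    address = base + offset              # computed in 64 bits, cannot wrap
    if address & (access_bytes - 1):     # natural alignment is required
        return -1
    if address >= memory_size:           # bounds check against current size
        return -1
    if not is_shared:                    # wait traps on non-shared memory
        return -1                        # (notify instead returns 0 waiters)
    return 0

def wait_result(did_pass_validation, was_unparked):
    # Return-code convention of the wait() template helper above:
    # 1 = loaded value != expected ("not-equal"), 2 = timed out,
    # 0 = woken by memory.atomic.notify.
    if not did_pass_validation:
        return 1
    if not was_unparked:
        return 2
    return 0
```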
  • trunk/Source/JavaScriptCore/wasm/WasmOperations.h

    r267727 r270208  
    7070JSC_DECLARE_JIT_OPERATION(operationGetWasmTableSize, int32_t, (Instance*, unsigned));
    7171
     72JSC_DECLARE_JIT_OPERATION(operationMemoryAtomicWait32, int32_t, (Instance* instance, unsigned base, unsigned offset, uint32_t value, int64_t timeout));
     73JSC_DECLARE_JIT_OPERATION(operationMemoryAtomicWait64, int32_t, (Instance* instance, unsigned base, unsigned offset, uint64_t value, int64_t timeout));
     74JSC_DECLARE_JIT_OPERATION(operationMemoryAtomicNotify, int32_t, (Instance*, unsigned, unsigned, int32_t));
     75
    7276JSC_DECLARE_JIT_OPERATION(operationWasmToJSException, void*, (CallFrame*, Wasm::ExceptionType, Instance*));
    7377
  • trunk/Source/JavaScriptCore/wasm/WasmSectionParser.cpp

    r270015 r270208  
    178178}
    179179
    180 auto SectionParser::parseResizableLimits(uint32_t& initial, Optional<uint32_t>& maximum) -> PartialResult
     180auto SectionParser::parseResizableLimits(uint32_t& initial, Optional<uint32_t>& maximum, bool& isShared, LimitsType limitsType) -> PartialResult
    181181{
    182182    ASSERT(!maximum);
     
    184184    uint8_t flags;
    185185    WASM_PARSER_FAIL_IF(!parseUInt8(flags), "can't parse resizable limits flags");
    186     WASM_PARSER_FAIL_IF(flags != 0x0 && flags != 0x1, "resizable limits flag should be 0x00 or 0x01 but 0x", hex(flags, 2, Lowercase));
     186    WASM_PARSER_FAIL_IF(flags != 0x0 && flags != 0x1 && flags != 0x3, "resizable limits flag should be 0x00, 0x01, or 0x03 but 0x", hex(flags, 2, Lowercase));
     187    WASM_PARSER_FAIL_IF(flags == 0x3 && limitsType != LimitsType::Memory, "can't use shared limits for non memory");
    187188    WASM_PARSER_FAIL_IF(!parseVarUInt32(initial), "can't parse resizable limits initial page count");
     189
     190    isShared = flags == 0x3;
     191    WASM_PARSER_FAIL_IF(isShared && !Options::useSharedArrayBuffer(), "shared memory is not enabled");
    188192
    189193    if (flags) {
     
    207211    uint32_t initial;
    208212    Optional<uint32_t> maximum;
    209     PartialResult limits = parseResizableLimits(initial, maximum);
     213    bool isShared = false;
     214    PartialResult limits = parseResizableLimits(initial, maximum, isShared, LimitsType::Table);
    210215    if (UNLIKELY(!limits))
    211216        return makeUnexpected(WTFMove(limits.error()));
     
    241246    PageCount initialPageCount;
    242247    PageCount maximumPageCount;
     248    bool isShared = false;
    243249    {
    244250        uint32_t initial;
    245251        Optional<uint32_t> maximum;
    246         PartialResult limits = parseResizableLimits(initial, maximum);
     252        PartialResult limits = parseResizableLimits(initial, maximum, isShared, LimitsType::Memory);
    247253        if (UNLIKELY(!limits))
    248254            return makeUnexpected(WTFMove(limits.error()));
     
    260266    ASSERT(!maximumPageCount || maximumPageCount >= initialPageCount);
    261267
    262     m_info->memory = MemoryInformation(initialPageCount, maximumPageCount, isImport);
     268    m_info->memory = MemoryInformation(initialPageCount, maximumPageCount, isShared, isImport);
    263269    return { };
    264270}
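The extended `parseResizableLimits` above now accepts three flag encodings: 0x00 (initial only), 0x01 (initial plus maximum), and the threads proposal's 0x03 (shared, which also carries a maximum). A small Python sketch of the flag validation, assuming hypothetical names for the option and errors:

```python
def parse_limits_flags(flags, is_memory, shared_array_buffer_enabled=True):
    # Mirrors the flag checks added to SectionParser::parseResizableLimits:
    # 0x00 = initial only, 0x01 = initial + maximum, 0x03 = shared.
    # Shared limits are only legal on memories, and only when the
    # useSharedArrayBuffer option is on.
    if flags not in (0x0, 0x1, 0x3):
        raise ValueError("resizable limits flag should be 0x00, 0x01, or 0x03")
    is_shared = flags == 0x3
    if is_shared and not is_memory:
        raise ValueError("can't use shared limits for non memory")
    if is_shared and not shared_array_buffer_enabled:
        raise ValueError("shared memory is not enabled")
    has_maximum = bool(flags)   # both 0x1 and 0x3 carry a maximum page count
    return is_shared, has_maximum
```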
  • trunk/Source/JavaScriptCore/wasm/WasmSectionParser.h

    r254087 r270208  
    6464    PartialResult WARN_UNUSED_RETURN parseMemoryHelper(bool isImport);
    6565    PartialResult WARN_UNUSED_RETURN parseTableHelper(bool isImport);
    66     PartialResult WARN_UNUSED_RETURN parseResizableLimits(uint32_t& initial, Optional<uint32_t>& maximum);
     66    enum class LimitsType { Memory, Table };
     67    PartialResult WARN_UNUSED_RETURN parseResizableLimits(uint32_t& initial, Optional<uint32_t>& maximum, bool& isShared, LimitsType);
    6768    PartialResult WARN_UNUSED_RETURN parseInitExpr(uint8_t&, uint64_t&, Type& initExprType);
    6869
  • trunk/Source/JavaScriptCore/wasm/WasmSlowPaths.cpp

    r269349 r270208  
    439439}
    440440
     441WASM_SLOW_PATH_DECL(memory_atomic_wait32)
     442{
     443    auto instruction = pc->as<WasmMemoryAtomicWait32, WasmOpcodeTraits>();
     444    unsigned base = READ(instruction.m_pointer).unboxedInt32();
     445    unsigned offset = instruction.m_offset;
     446    uint32_t value = READ(instruction.m_value).unboxedInt32();
     447    int64_t timeout = READ(instruction.m_timeout).unboxedInt64();
     448    int32_t result = Wasm::operationMemoryAtomicWait32(instance, base, offset, value, timeout);
     449    if (result < 0)
     450        WASM_THROW(Wasm::ExceptionType::OutOfBoundsMemoryAccess);
     451    WASM_RETURN(result);
     452}
     453
     454WASM_SLOW_PATH_DECL(memory_atomic_wait64)
     455{
     456    auto instruction = pc->as<WasmMemoryAtomicWait64, WasmOpcodeTraits>();
     457    unsigned base = READ(instruction.m_pointer).unboxedInt32();
     458    unsigned offset = instruction.m_offset;
     459    uint64_t value = READ(instruction.m_value).unboxedInt64();
     460    int64_t timeout = READ(instruction.m_timeout).unboxedInt64();
     461    int32_t result = Wasm::operationMemoryAtomicWait64(instance, base, offset, value, timeout);
     462    if (result < 0)
     463        WASM_THROW(Wasm::ExceptionType::OutOfBoundsMemoryAccess);
     464    WASM_RETURN(result);
     465}
     466
     467WASM_SLOW_PATH_DECL(memory_atomic_notify)
     468{
     469    auto instruction = pc->as<WasmMemoryAtomicNotify, WasmOpcodeTraits>();
     470    unsigned base = READ(instruction.m_pointer).unboxedInt32();
     471    unsigned offset = instruction.m_offset;
     472    int32_t count = READ(instruction.m_count).unboxedInt32();
     473    int32_t result = Wasm::operationMemoryAtomicNotify(instance, base, offset, count);
     474    if (result < 0)
     475        WASM_THROW(Wasm::ExceptionType::OutOfBoundsMemoryAccess);
     476    WASM_RETURN(result);
     477}
     478
    441479extern "C" SlowPathReturnType slow_path_wasm_throw_exception(CallFrame* callFrame, const Instruction* pc, Wasm::Instance* instance, Wasm::ExceptionType exceptionType)
    442480{
  • trunk/Source/JavaScriptCore/wasm/WasmSlowPaths.h

    r253074 r270208  
    7070WASM_SLOW_PATH_HIDDEN_DECL(set_global_ref);
    7171WASM_SLOW_PATH_HIDDEN_DECL(set_global_ref_portable_binding);
     72WASM_SLOW_PATH_HIDDEN_DECL(memory_atomic_wait32);
     73WASM_SLOW_PATH_HIDDEN_DECL(memory_atomic_wait64);
     74WASM_SLOW_PATH_HIDDEN_DECL(memory_atomic_notify);
    7275
    7376extern "C" SlowPathReturnType slow_path_wasm_throw_exception(CallFrame*, const Instruction*, Wasm::Instance* instance, Wasm::ExceptionType) WTF_INTERNAL;
  • trunk/Source/JavaScriptCore/wasm/generateWasm.py

    r268993 r270208  
    9494
    9595
     96def isAtomic(op):
     97    return op["category"].startswith("atomic")
     98
     99
     100def isAtomicLoad(op):
     101    return op["category"] == "atomic.load"
     102
     103
     104def isAtomicStore(op):
     105    return op["category"] == "atomic.store"
     106
     107
     108def isAtomicBinaryRMW(op):
     109    return op["category"] == "atomic.rmw.binary"
     110
     111
    96112def isSimple(op):
    97113    return "b3op" in op
     
    99115
    100116def memoryLog2Alignment(op):
    101     assert op["opcode"]["category"] == "memory"
    102     match = re.match(r'^[if]([36][24])\.[^0-9]+([0-9]+)?_?[us]?$', op["name"])
    103     memoryBits = int(match.group(2) if match.group(2) else match.group(1))
     117    assert op["opcode"]["category"] == "memory" or op["opcode"]["category"] == "atomic.load" or op["opcode"]["category"] == "atomic.store" or op["opcode"]["category"] == "atomic.rmw" or op["opcode"]["category"] == "atomic" or op["opcode"]["category"] == "atomic.rmw.binary"
     118    if op["opcode"]["category"] == "atomic.rmw.binary" or op["opcode"]["category"] == "atomic.rmw":
     119        match = re.match(r'^[if]([36][24])\.[^0-9]+([0-9]+)?\.[^0-9]+_?[us]?$', op["name"])
     120        if not match:
     121            print(op["name"])
     122        memoryBits = int(match.group(2) if match.group(2) else match.group(1))
     123    elif op["opcode"]["category"] == "atomic":
     124        if op["name"] == "atomic.fence":
     125            memoryBits = 8
     126        elif op["name"] == "memory.atomic.notify":
     127            memoryBits = 32
     128        else:
     129            match = re.match(r'^memory\.atomic\.wait([0-9]+)$', op["name"])
     130            memoryBits = int(match.group(1))
     131    else:
     132        match = re.match(r'^[if]([36][24])\.[^0-9]+([0-9]+)?_?[us]?$', op["name"])
     133        if not match:
     134            print(op["name"])
     135        memoryBits = int(match.group(2) if match.group(2) else match.group(1))
    104136    assert 2 ** math.log(memoryBits, 2) == memoryBits
    105137    return str(int(math.log(memoryBits / 8, 2)))
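The `memoryLog2Alignment` changes above derive each atomic op's natural alignment from its name: the narrow-width infix (`rmw8`, `load16_u`, `wait32`, ...) wins when present, otherwise the value type's width is used. A simplified, self-contained sketch of that dispatch (using `math.log2`, which is exact for these powers of two):

```python
import math
import re

def memory_log2_alignment(name, category):
    # Simplified mirror of memoryLog2Alignment above for the atomic categories:
    # rmw ops have an extra dot-separated operation suffix, the misc "atomic"
    # category is special-cased, and loads/stores match the plain memory shape.
    if category in ("atomic.rmw", "atomic.rmw.binary"):
        m = re.match(r'^[if]([36][24])\.[^0-9]+([0-9]+)?\.[^0-9]+_?[us]?$', name)
        memory_bits = int(m.group(2) or m.group(1))
    elif category == "atomic":
        if name == "atomic.fence":
            memory_bits = 8
        elif name == "memory.atomic.notify":
            memory_bits = 32
        else:
            memory_bits = int(re.match(r'^memory\.atomic\.wait([0-9]+)$', name).group(1))
    else:  # atomic.load / atomic.store, same shape as plain memory ops
        m = re.match(r'^[if]([36][24])\.[^0-9]+([0-9]+)?_?[us]?$', name)
        memory_bits = int(m.group(2) or m.group(1))
    return int(math.log2(memory_bits // 8))  # log2 of the access width in bytes
```

For example, `i64.atomic.rmw8.add_u` is a byte access (alignment 0) while `memory.atomic.wait64` is an 8-byte access (alignment 3).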
  • trunk/Source/JavaScriptCore/wasm/generateWasmOpsHeader.py

    r253140 r270208  
    9393
    9494
     95def atomicMemoryLoadMacroizer():
     96    def modifier(op):
     97        return [cppType(op["return"][0])]
     98    return opcodeMacroizer(lambda op: isAtomicLoad(op), modifier=modifier, opcodeField="extendedOp")
     99
     100
     101def atomicMemoryStoreMacroizer():
     102    def modifier(op):
     103        return [cppType(op["parameter"][1])]
     104    return opcodeMacroizer(lambda op: isAtomicStore(op), modifier=modifier, opcodeField="extendedOp")
     105
     106
     107def atomicBinaryRMWMacroizer():
     108    def modifier(op):
     109        return [cppType(op["parameter"][1])]
     110    return opcodeMacroizer(lambda op: isAtomicBinaryRMW(op), modifier=modifier, opcodeField="extendedOp")
     111
     112
    95113defines = ["#define FOR_EACH_WASM_SPECIAL_OP(macro)"]
    96 defines.extend([op for op in opcodeMacroizer(lambda op: not (isUnary(op) or isBinary(op) or op["category"] == "control" or op["category"] == "memory" or op["category"] == "exttable"))])
 114defines.extend([op for op in opcodeMacroizer(lambda op: not (isUnary(op) or isBinary(op) or op["category"] == "control" or op["category"] == "memory" or op["category"] == "exttable" or isAtomic(op)))])
    97115defines.append("\n\n#define FOR_EACH_WASM_CONTROL_FLOW_OP(macro)")
    98116defines.extend([op for op in opcodeMacroizer(lambda op: op["category"] == "control")])
     
    111129defines.append("\n\n#define FOR_EACH_WASM_EXT_TABLE_OP(macro)")
    112130defines.extend([op for op in opcodeMacroizer(lambda op: (op["category"] == "exttable"), opcodeField="extendedOp")])
     131defines.append("\n\n#define FOR_EACH_WASM_EXT_ATOMIC_LOAD_OP(macro)")
     132defines.extend([op for op in atomicMemoryLoadMacroizer()])
     133defines.append("\n\n#define FOR_EACH_WASM_EXT_ATOMIC_STORE_OP(macro)")
     134defines.extend([op for op in atomicMemoryStoreMacroizer()])
     135defines.append("\n\n#define FOR_EACH_WASM_EXT_ATOMIC_BINARY_RMW_OP(macro)")
     136defines.extend([op for op in atomicBinaryRMWMacroizer()])
     137defines.append("\n\n#define FOR_EACH_WASM_EXT_ATOMIC_OTHER_OP(macro)")
     138defines.extend([op for op in opcodeMacroizer(lambda op: isAtomic(op) and (not isAtomicLoad(op) and not isAtomicStore(op) and not isAtomicBinaryRMW(op)), opcodeField="extendedOp")])
    113139defines.append("\n\n")
    114140
     
    143169    return "\n".join(result)
    144170
     171
     172def atomicMemoryLog2AlignmentGenerator(filter):
     173    result = []
     174    for op in wasm.opcodeIterator(filter):
     175        result.append("    case ExtAtomicOpType::" + wasm.toCpp(op["name"]) + ": return " + memoryLog2Alignment(op) + ";")
     176    return "\n".join(result)
     177
     178
    145179memoryLog2AlignmentLoads = memoryLog2AlignmentGenerator(lambda op: (op["category"] == "memory" and len(op["return"]) == 1))
    146180memoryLog2AlignmentStores = memoryLog2AlignmentGenerator(lambda op: (op["category"] == "memory" and len(op["return"]) == 0))
    147 
     181memoryLog2AlignmentAtomic = atomicMemoryLog2AlignmentGenerator(lambda op: (isAtomic(op)))
    148182
    149183contents = wasm.header + """
     
    234268    FOR_EACH_WASM_MEMORY_LOAD_OP(macro) \\
    235269    FOR_EACH_WASM_MEMORY_STORE_OP(macro) \\
    236     macro(ExtTable, 0xFC, Oops, 0)
     270    macro(ExtTable,  0xFC, Oops, 0) \\
     271    macro(ExtAtomic, 0xFE, Oops, 0)
    237272
    238273#define CREATE_ENUM_VALUE(name, id, ...) name = id,
     
    268303enum class ExtTableOpType : uint8_t {
    269304    FOR_EACH_WASM_EXT_TABLE_OP(CREATE_ENUM_VALUE)
     305};
     306
     307enum class ExtAtomicOpType : uint8_t {
     308    FOR_EACH_WASM_EXT_ATOMIC_LOAD_OP(CREATE_ENUM_VALUE)
     309    FOR_EACH_WASM_EXT_ATOMIC_STORE_OP(CREATE_ENUM_VALUE)
     310    FOR_EACH_WASM_EXT_ATOMIC_BINARY_RMW_OP(CREATE_ENUM_VALUE)
     311    FOR_EACH_WASM_EXT_ATOMIC_OTHER_OP(CREATE_ENUM_VALUE)
    270312};
    271313
     
    323365}
    324366
     367inline uint32_t memoryLog2Alignment(ExtAtomicOpType op)
     368{
     369    switch (op) {
     370""" + memoryLog2AlignmentAtomic + """
     371    default:
     372        break;
     373    }
     374    RELEASE_ASSERT_NOT_REACHED();
     375    return 0;
     376}
     377
    325378#define CREATE_CASE(name, ...) case name: return #name;
    326379inline const char* makeString(OpType op)
  • trunk/Source/JavaScriptCore/wasm/js/JSWebAssemblyInstance.cpp

    r269974 r270208  
    269269            }
    270270
     271            if ((memory->memory().sharingMode() == Wasm::MemorySharingMode::Shared) != moduleInformation.memory.isShared())
      272                return exception(createJSWebAssemblyLinkError(globalObject, vm, importFailMessage(import, "Memory import", "provided a 'shared' that is different from the module's declared 'shared' import memory attribute")));
     273
    271274            // ii. Append v to memories.
    272275            // iii. Append v.[[Memory]] to imports.
     
    292295            RETURN_IF_EXCEPTION(throwScope, nullptr);
    293296
    294             RefPtr<Wasm::Memory> memory = Wasm::Memory::tryCreate(moduleInformation.memory.initial(), moduleInformation.memory.maximum(), Wasm::MemorySharingMode::Default,
      297            RefPtr<Wasm::Memory> memory = Wasm::Memory::tryCreate(moduleInformation.memory.initial(), moduleInformation.memory.maximum(), moduleInformation.memory.isShared() ? Wasm::MemorySharingMode::Shared : Wasm::MemorySharingMode::Default,
    295298                [&vm] (Wasm::Memory::NotifyPressure) { vm.heap.collectAsync(CollectionScope::Full); },
    296299                [&vm] (Wasm::Memory::SyncTryToReclaim) { vm.heap.collectSync(CollectionScope::Full); },
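The `JSWebAssemblyInstance` hunk above adds a link-time requirement: an imported memory's sharing mode must agree with the module's declared `shared` attribute. A tiny Python sketch of that check (raising `TypeError` stands in for JSC's `createJSWebAssemblyLinkError`):

```python
def check_shared_attribute(import_is_shared, declared_is_shared):
    # Mirrors the new link-time check in JSWebAssemblyInstance.cpp above:
    # the imported memory's sharing mode must match the module's declared
    # 'shared' import memory attribute, otherwise linking fails.
    if import_is_shared != declared_is_shared:
        raise TypeError("Memory import: provided a 'shared' that is different "
                        "from the module's declared 'shared' import memory attribute")
```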
  • trunk/Source/JavaScriptCore/wasm/wasm.json

    r269998 r270208  
    224224        "i64.reinterpret/f64": { "category": "conversion", "value": 189, "return": ["i64"],                          "parameter": ["f64"],                        "immediate": [], "b3op": "BitwiseCast"  },
    225225        "i32.extend8_s":       { "category": "conversion", "value": 192, "return": ["i32"],                          "parameter": ["i32"],                        "immediate": [], "b3op": "SExt8"        },
    226         "i32.extend16_s":      { "category": "conversion", "value": 193, "return": ["i32"],                          "parameter": ["i32"],                        "immediate": [], "b3op": "SExt16"       }
     226        "i32.extend16_s":      { "category": "conversion", "value": 193, "return": ["i32"],                          "parameter": ["i32"],                        "immediate": [], "b3op": "SExt16"       },
     227
     228        "memory.atomic.notify":       { "category": "atomic",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp":  0 },
     229        "memory.atomic.wait32":       { "category": "atomic",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp":  1 },
     230        "memory.atomic.wait64":       { "category": "atomic",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp":  2 },
     231        "atomic.fence":               { "category": "atomic",     "value": 254, "return": [],           "parameter": [],                       "immediate": [{"name": "flags",          "type": "uint8"    }], "extendedOp":  3 },
     232        "i32.atomic.load":            { "category": "atomic.load",     "value": 254, "return": ["i32"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 16 },
     233        "i64.atomic.load":            { "category": "atomic.load",     "value": 254, "return": ["i64"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 17 },
     234        "i32.atomic.load8_u":         { "category": "atomic.load",     "value": 254, "return": ["i32"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 18 },
     235        "i32.atomic.load16_u":        { "category": "atomic.load",     "value": 254, "return": ["i32"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 19 },
     236        "i64.atomic.load8_u":         { "category": "atomic.load",     "value": 254, "return": ["i64"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 20 },
     237        "i64.atomic.load16_u":        { "category": "atomic.load",     "value": 254, "return": ["i64"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 21 },
     238        "i64.atomic.load32_u":        { "category": "atomic.load",     "value": 254, "return": ["i64"],      "parameter": ["addr"],                 "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 22 },
     239        "i32.atomic.store":           { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 23 },
     240        "i64.atomic.store":           { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 24 },
     241        "i32.atomic.store8_u":        { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 25 },
     242        "i32.atomic.store16_u":       { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 26 },
     243        "i64.atomic.store8_u":        { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 27 },
     244        "i64.atomic.store16_u":       { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 28 },
     245        "i64.atomic.store32_u":       { "category": "atomic.store",     "value": 254, "return": [],           "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 29 },
     246        "i32.atomic.rmw.add":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 30 },
     247        "i64.atomic.rmw.add":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 31 },
        "i32.atomic.rmw8.add_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 32 },
        "i32.atomic.rmw16.add_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 33 },
        "i64.atomic.rmw8.add_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 34 },
        "i64.atomic.rmw16.add_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 35 },
        "i64.atomic.rmw32.add_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 36 },
        "i32.atomic.rmw.sub":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 37 },
        "i64.atomic.rmw.sub":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 38 },
        "i32.atomic.rmw8.sub_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 39 },
        "i32.atomic.rmw16.sub_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 40 },
        "i64.atomic.rmw8.sub_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 41 },
        "i64.atomic.rmw16.sub_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 42 },
        "i64.atomic.rmw32.sub_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 43 },
        "i32.atomic.rmw.and":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 44 },
        "i64.atomic.rmw.and":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 45 },
        "i32.atomic.rmw8.and_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 46 },
        "i32.atomic.rmw16.and_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 47 },
        "i64.atomic.rmw8.and_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 48 },
        "i64.atomic.rmw16.and_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 49 },
        "i64.atomic.rmw32.and_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 50 },
        "i32.atomic.rmw.or":          { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 51 },
        "i64.atomic.rmw.or":          { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 52 },
        "i32.atomic.rmw8.or_u":       { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 53 },
        "i32.atomic.rmw16.or_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 54 },
        "i64.atomic.rmw8.or_u":       { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 55 },
        "i64.atomic.rmw16.or_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 56 },
        "i64.atomic.rmw32.or_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 57 },
        "i32.atomic.rmw.xor":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 58 },
        "i64.atomic.rmw.xor":         { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 59 },
        "i32.atomic.rmw8.xor_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 60 },
        "i32.atomic.rmw16.xor_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 61 },
        "i64.atomic.rmw8.xor_u":      { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 62 },
        "i64.atomic.rmw16.xor_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 63 },
        "i64.atomic.rmw32.xor_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 64 },
        "i32.atomic.rmw.xchg":        { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 65 },
        "i64.atomic.rmw.xchg":        { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 66 },
        "i32.atomic.rmw8.xchg_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 67 },
        "i32.atomic.rmw16.xchg_u":    { "category": "atomic.rmw.binary",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 68 },
        "i64.atomic.rmw8.xchg_u":     { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 69 },
        "i64.atomic.rmw16.xchg_u":    { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 70 },
        "i64.atomic.rmw32.xchg_u":    { "category": "atomic.rmw.binary",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64"],          "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 71 },
        "i32.atomic.rmw.cmpxchg":     { "category": "atomic.rmw",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32", "i32"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 72 },
        "i64.atomic.rmw.cmpxchg":     { "category": "atomic.rmw",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 73 },
        "i32.atomic.rmw8.cmpxchg_u":  { "category": "atomic.rmw",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32", "i32"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 74 },
        "i32.atomic.rmw16.cmpxchg_u": { "category": "atomic.rmw",     "value": 254, "return": ["i32"],      "parameter": ["addr", "i32", "i32"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 75 },
        "i64.atomic.rmw8.cmpxchg_u":  { "category": "atomic.rmw",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 76 },
        "i64.atomic.rmw16.cmpxchg_u": { "category": "atomic.rmw",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 77 },
        "i64.atomic.rmw32.cmpxchg_u": { "category": "atomic.rmw",     "value": 254, "return": ["i64"],      "parameter": ["addr", "i64", "i64"],   "immediate": [{"name": "flags",          "type": "varuint32"}, {"name": "offset",   "type": "varuint32"}], "extendedOp": 78 }
    }
}
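Every entry above shares one binary layout: the one-byte prefix "value" 254 (0xFE) introduced by the threads proposal, followed by the "extendedOp" number and then the two immediates ("flags", i.e. the alignment hint, and "offset"), each encoded as a varuint32 (unsigned LEB128). A minimal sketch of that encoding, using the opcode numbers from the table (the helper names `leb128_u` and `encode_atomic_op` are illustrative, not part of WebKit):

```python
# Sketch: serialize one of the 0xFE-prefixed atomic instructions listed above.
# Assumes the standard wasm binary conventions: varuint32 immediates are
# unsigned LEB128, and "flags" carries log2(natural alignment).

def leb128_u(value: int) -> bytes:
    """Encode a non-negative integer as unsigned LEB128 (varuint32)."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def encode_atomic_op(extended_op: int, flags: int, offset: int) -> bytes:
    """Prefix byte 0xFE, then extendedOp, flags, and offset as varuint32."""
    return bytes([0xFE]) + leb128_u(extended_op) + leb128_u(flags) + leb128_u(offset)

# "i32.atomic.rmw.cmpxchg" has extendedOp 72 (0x48); flags = 2 means the
# natural 4-byte alignment for a 32-bit access.
print(encode_atomic_op(72, 2, 0).hex())  # fe480200
```

The decoder direction is symmetric: after reading the 0xFE prefix, the extendedOp selects the row in this table, which in turn fixes the parameter and return types the validator must check.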