Changeset 230129 in webkit
- Timestamp: Mar 31, 2018, 12:04:00 AM (7 years ago)
- Location: trunk/Source
- Files: 58 edited
trunk/Source/JavaScriptCore/ChangeLog
Diff from r230127 to r230129:

2018-03-30  Mark Lam  <mark.lam@apple.com>

        Add pointer profiling support in baseline JIT and supporting files.
        https://bugs.webkit.org/show_bug.cgi?id=184200
        <rdar://problem/39057300>

        Reviewed by Filip Pizlo.

        1. To simplify pointer profiling support, vmEntryToJavaScript() now always enters
           the code via the arity check entry.
        2. To accommodate (1), all JITCode must now populate their arity check entry code
           pointers as well.  For native code, programs, evals, and modules that don't
           do arity check, we set the normal entry as the arity check entry (though with
           the CodeEntryWithArityCheckPtrTag profile instead).

        * assembler/AbstractMacroAssembler.h:
        * assembler/LinkBuffer.h:
        (JSC::LinkBuffer::locationOfNearCall):
        * assembler/MacroAssemblerARM64.h:
        (JSC::MacroAssemblerARM64::readCallTarget):
        (JSC::MacroAssemblerARM64::linkCall):
        * bytecode/AccessCase.cpp:
        (JSC::AccessCase::generateImpl):
        * bytecode/AccessCaseSnippetParams.cpp:
        (JSC::SlowPathCallGeneratorWithArguments::generateImpl):
        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::addJITAddIC):
        (JSC::CodeBlock::addJITMulIC):
        (JSC::CodeBlock::addJITSubIC):
        (JSC::CodeBlock::addJITNegIC):
        * bytecode/CodeBlock.h:
        (JSC::CodeBlock::addMathIC):
        * bytecode/InlineAccess.cpp:
        (JSC::InlineAccess::rewireStubAsJump):
        * bytecode/LLIntCallLinkInfo.h:
        (JSC::LLIntCallLinkInfo::unlink):
        (): Deleted.
        * bytecode/PolymorphicAccess.cpp:
        (JSC::AccessGenerationState::emitExplicitExceptionHandler):
        (JSC::PolymorphicAccess::regenerate):
        * dfg/DFGJITFinalizer.cpp:
        (JSC::DFG::JITFinalizer::finalize):
        (JSC::DFG::JITFinalizer::finalizeFunction):
        * dfg/DFGSpeculativeJIT.cpp:
        (JSC::DFG::SpeculativeJIT::compileValueAdd):
        (JSC::DFG::SpeculativeJIT::compileArithSub):
        (JSC::DFG::SpeculativeJIT::compileArithNegate):
        (JSC::DFG::SpeculativeJIT::compileArithMul):
        (JSC::DFG::SpeculativeJIT::emitSwitchIntJump):
        (JSC::DFG::SpeculativeJIT::emitSwitchImm):
        (JSC::DFG::SpeculativeJIT::emitSwitchStringOnString):
        * disassembler/ARM64Disassembler.cpp:
        (JSC::tryToDisassemble):
        * ftl/FTLJITFinalizer.cpp:
        (JSC::FTL::JITFinalizer::finalizeCommon):
        * ftl/FTLLowerDFGToB3.cpp:
        (JSC::FTL::DFG::LowerDFGToB3::compileValueAdd):
        (JSC::FTL::DFG::LowerDFGToB3::compileUnaryMathIC):
        (JSC::FTL::DFG::LowerDFGToB3::compileBinaryMathIC):
        (JSC::FTL::DFG::LowerDFGToB3::compileArithAddOrSub):
        (JSC::FTL::DFG::LowerDFGToB3::compileArithMul):
        (JSC::FTL::DFG::LowerDFGToB3::compileArithNegate):
        * heap/JITStubRoutineSet.h:
        (JSC::JITStubRoutineSet::mark):
        * jit/AssemblyHelpers.cpp:
        (JSC::AssemblyHelpers::callExceptionFuzz):
        (JSC::AssemblyHelpers::debugCall):
        * jit/AssemblyHelpers.h:
        (JSC::AssemblyHelpers::emitFunctionPrologue):
        * jit/CCallHelpers.cpp:
        (JSC::CCallHelpers::ensureShadowChickenPacket):
        * jit/CCallHelpers.h:
        (JSC::CCallHelpers::prepareForTailCallSlow):
        * jit/CallFrameShuffler.cpp:
        (JSC::CallFrameShuffler::prepareForTailCall):
        * jit/ExecutableAllocator.cpp:
        (JSC::FixedVMPoolExecutableAllocator::jitWriteThunkGenerator):
        * jit/ExecutableAllocator.h:
        (JSC::performJITMemcpy):
        * jit/JIT.cpp:
        (JSC::JIT::compileWithoutLinking):
        (JSC::JIT::link):
        * jit/JITArithmetic.cpp:
        (JSC::JIT::emit_op_negate):
        (JSC::JIT::emit_op_add):
        (JSC::JIT::emitMathICFast):
        (JSC::JIT::emitMathICSlow):
        (JSC::JIT::emit_op_mul):
        (JSC::JIT::emit_op_sub):
        * jit/JITCode.cpp:
        (JSC::JITCode::execute):
        (JSC::JITCodeWithCodeRef::executableAddressAtOffset):
        (JSC::DirectJITCode::DirectJITCode):
        (JSC::DirectJITCode::initializeCodeRef):
        (JSC::NativeJITCode::addressForCall):
        * jit/JITExceptions.cpp:
        (JSC::genericUnwind):
        * jit/JITMathIC.h:
        (JSC::isProfileEmpty):
        (JSC::JITBinaryMathIC::JITBinaryMathIC):
        (JSC::JITUnaryMathIC::JITUnaryMathIC):
        * jit/JITOpcodes.cpp:
        (JSC::JIT::emit_op_switch_imm):
        (JSC::JIT::emit_op_switch_char):
        (JSC::JIT::emit_op_switch_string):
        (JSC::JIT::privateCompileHasIndexedProperty):
        (JSC::JIT::emitSlow_op_has_indexed_property):
        * jit/JITOpcodes32_64.cpp:
        (JSC::JIT::privateCompileHasIndexedProperty):
        * jit/JITOperations.cpp:
        (JSC::getByVal):
        (JSC::tryGetByValOptimize):
        * jit/JITPropertyAccess.cpp:
        (JSC::JIT::stringGetByValStubGenerator):
        (JSC::JIT::emitGetByValWithCachedId):
        (JSC::JIT::emitSlow_op_get_by_val):
        (JSC::JIT::emitPutByValWithCachedId):
        (JSC::JIT::emitSlow_op_put_by_val):
        (JSC::JIT::emitSlow_op_try_get_by_id):
        (JSC::JIT::emitSlow_op_get_by_id):
        (JSC::JIT::emitSlow_op_get_by_id_with_this):
        (JSC::JIT::emitSlow_op_put_by_id):
        (JSC::JIT::privateCompileGetByVal):
        (JSC::JIT::privateCompileGetByValWithCachedId):
        (JSC::JIT::privateCompilePutByVal):
        (JSC::JIT::privateCompilePutByValWithCachedId):
        * jit/JITThunks.cpp:
        (JSC::JITThunks::hostFunctionStub):
        * jit/Repatch.cpp:
        (JSC::tryCacheGetByID):
        (JSC::repatchGetByID):
        (JSC::appropriateOptimizingPutByIdFunction):
        (JSC::tryCachePutByID):
        (JSC::repatchPutByID):
        (JSC::linkFor):
        (JSC::revertCall):
        (JSC::linkPolymorphicCall):
        (JSC::resetGetByID):
        (JSC::resetPutByID):
        * jit/Repatch.h:
        * jit/SpecializedThunkJIT.h:
        (JSC::SpecializedThunkJIT::finalize):
        (JSC::SpecializedThunkJIT::callDoubleToDouble):
        * jit/ThunkGenerators.cpp:
        (JSC::emitPointerValidation):
        (JSC::throwExceptionFromCallSlowPathGenerator):
        (JSC::slowPathFor):
        (JSC::linkCallThunkGenerator): Deleted.
        (JSC::linkPolymorphicCallThunkGenerator): Deleted.
        (JSC::virtualThunkFor): Deleted.
        (JSC::nativeForGenerator): Deleted.
        (JSC::nativeCallGenerator): Deleted.
        (JSC::nativeTailCallGenerator): Deleted.
        (JSC::nativeTailCallWithoutSavedTagsGenerator): Deleted.
        (JSC::nativeConstructGenerator): Deleted.
        (JSC::internalFunctionCallGenerator): Deleted.
        (JSC::internalFunctionConstructGenerator): Deleted.
        (JSC::arityFixupGenerator): Deleted.
        (JSC::unreachableGenerator): Deleted.
        (JSC::stringCharLoad): Deleted.
        (JSC::charToString): Deleted.
        (JSC::charCodeAtThunkGenerator): Deleted.
        (JSC::charAtThunkGenerator): Deleted.
        (JSC::fromCharCodeThunkGenerator): Deleted.
        (JSC::clz32ThunkGenerator): Deleted.
        (JSC::sqrtThunkGenerator): Deleted.
        (JSC::floorThunkGenerator): Deleted.
        (JSC::ceilThunkGenerator): Deleted.
        (JSC::truncThunkGenerator): Deleted.
        (JSC::roundThunkGenerator): Deleted.
        (JSC::expThunkGenerator): Deleted.
        (JSC::logThunkGenerator): Deleted.
        (JSC::absThunkGenerator): Deleted.
        (JSC::imulThunkGenerator): Deleted.
        (JSC::randomThunkGenerator): Deleted.
        (JSC::boundThisNoArgsFunctionCallGenerator): Deleted.
        * llint/LLIntData.cpp:
        (JSC::LLInt::initialize):
        * llint/LLIntData.h:
        (JSC::LLInt::getCodePtr):
        * llint/LLIntEntrypoint.cpp:
        (JSC::LLInt::setEvalEntrypoint):
        (JSC::LLInt::setProgramEntrypoint):
        (JSC::LLInt::setModuleProgramEntrypoint):
        * llint/LLIntSlowPaths.cpp:
        (JSC::LLInt::setUpCall):
        * llint/LLIntThunks.cpp:
        (JSC::LLInt::generateThunkWithJumpTo):
        * llint/LowLevelInterpreter.asm:
        * llint/LowLevelInterpreter32_64.asm:
        * llint/LowLevelInterpreter64.asm:
        * runtime/ExecutableBase.h:
        * runtime/NativeExecutable.cpp:
        (JSC::NativeExecutable::finishCreation):
        * runtime/NativeFunction.h:
        (JSC::TaggedNativeFunction::TaggedNativeFunction):
        (JSC::TaggedNativeFunction::operator NativeFunction):
        * runtime/PropertySlot.h:
        (JSC::PropertySlot::setCustom):
        (JSC::PropertySlot::setCacheableCustom):
        * runtime/PtrTag.h:
        * runtime/PutPropertySlot.h:
        (JSC::PutPropertySlot::setCustomValue):
        (JSC::PutPropertySlot::setCustomAccessor):
        * runtime/SamplingProfiler.cpp:
        (JSC::SamplingProfiler::takeSample):
        * runtime/VMTraps.cpp:
        (JSC::SignalContext::SignalContext):
        (JSC::VMTraps::tryInstallTrapBreakpoints):
        * tools/SigillCrashAnalyzer.cpp:
        (JSC::installCrashHandler):
        * yarr/YarrJIT.cpp:
        (JSC::Yarr::YarrGenerator::generateTryReadUnicodeCharacterHelper):
        (JSC::Yarr::YarrGenerator::generateEnter):

2018-03-30  Devin Rousso  <webkit@devinrousso.com>
trunk/Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
Diff from r229911 to r230129:

     }

-    // FIXME: remove the default PtrTag value once we've tagged all the clients.
     static void* getLinkerAddress(void* code, AssemblerLabel label, PtrTag tag = NoPtrTag)
     {
trunk/Source/JavaScriptCore/assembler/LinkBuffer.h
Diff from r229609 to r230129:

         ASSERT(call.isFlagSet(Call::Linkable));
         ASSERT(call.isFlagSet(Call::Near));
-        return CodeLocationNearCall(MacroAssembler::getLinkerAddress(code(), applyOffset(call.m_label), NearCallPtrTag),
+        return CodeLocationNearCall(MacroAssembler::getLinkerAddress(code(), applyOffset(call.m_label)),
             call.isFlagSet(Call::Tail) ? NearCallMode::Tail : NearCallMode::Regular);
     }
…
     }

-    // FIXME: remove the default PtrTag value once we've tagged all the clients.
     CodeLocationLabel locationOf(Label label, PtrTag tag = NoPtrTag)
     {
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
Diff from r229988 to r230129:

     static FunctionPtr readCallTarget(CodeLocationCall call)
     {
-        return FunctionPtr(reinterpret_cast<void(*)()>(Assembler::readCallTarget(call.dataLocation())), CodeEntryPtrTag);
+        return FunctionPtr(MacroAssemblerCodePtr(Assembler::readCallTarget(call.dataLocation())));
     }
…
         if (!call.isFlagSet(Call::Near))
             Assembler::linkPointer(code, call.m_label.labelAtOffset(REPATCH_OFFSET_CALL_TO_POINTER), function.executableAddress());
-        else if (call.isFlagSet(Call::Tail))
+        else if (call.isFlagSet(Call::Tail)) {
+            assertIsNotTagged(function.executableAddress());
             Assembler::linkJump(code, call.m_label, function.executableAddress());
-        else
+        } else {
+            assertIsNotTagged(function.executableAddress());
             Assembler::linkCall(code, call.m_label, function.executableAddress());
+        }
     }
trunk/Source/JavaScriptCore/bytecode/AccessCase.cpp
Diff from r229842 to r230129:

             jit.storePtr(GPRInfo::callFrameRegister, &vm.topCallFrame);

-            PtrTag callTag = ptrTag(JITOperationPtrTag, nextPtrTagID());
+            PtrTag callTag = ptrTag(GetterSetterPtrTag, nextPtrTagID());
             operationCall = jit.call(callTag);
             jit.addLinkTask([=] (LinkBuffer& linkBuffer) {
trunk/Source/JavaScriptCore/bytecode/AccessCaseSnippetParams.cpp
Diff from r229609 to r230129:

 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
         jit.setupArguments<FunctionType>(std::get<ArgumentsIndex>(m_arguments)...);

-        CCallHelpers::Call operationCall = jit.call(NoPtrTag);
+        PtrTag tag = ptrTag(JITOperationPtrTag, nextPtrTagID());
+        CCallHelpers::Call operationCall = jit.call(tag);
         auto function = m_function;
         jit.addLinkTask([=] (LinkBuffer& linkBuffer) {
-            linkBuffer.link(operationCall, FunctionPtr(function));
+            linkBuffer.link(operationCall, FunctionPtr(function, tag));
         });
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
Diff from r229815 to r230129:

 }

-JITAddIC* CodeBlock::addJITAddIC(ArithProfile* arithProfile)
-{
-    return m_addICs.add(arithProfile);
-}
-
-JITMulIC* CodeBlock::addJITMulIC(ArithProfile* arithProfile)
-{
-    return m_mulICs.add(arithProfile);
-}
-
-JITSubIC* CodeBlock::addJITSubIC(ArithProfile* arithProfile)
-{
-    return m_subICs.add(arithProfile);
-}
-
-JITNegIC* CodeBlock::addJITNegIC(ArithProfile* arithProfile)
-{
-    return m_negICs.add(arithProfile);
+JITAddIC* CodeBlock::addJITAddIC(ArithProfile* arithProfile, Instruction* instruction)
+{
+    return m_addICs.add(arithProfile, instruction);
+}
+
+JITMulIC* CodeBlock::addJITMulIC(ArithProfile* arithProfile, Instruction* instruction)
+{
+    return m_mulICs.add(arithProfile, instruction);
+}
+
+JITSubIC* CodeBlock::addJITSubIC(ArithProfile* arithProfile, Instruction* instruction)
+{
+    return m_subICs.add(arithProfile, instruction);
+}
+
+JITNegIC* CodeBlock::addJITNegIC(ArithProfile* arithProfile, Instruction* instruction)
+{
+    return m_negICs.add(arithProfile, instruction);
 }
trunk/Source/JavaScriptCore/bytecode/CodeBlock.h
Diff from r228500 to r230129:

 #if ENABLE(JIT)
-    JITAddIC* addJITAddIC(ArithProfile*);
-    JITMulIC* addJITMulIC(ArithProfile*);
-    JITNegIC* addJITNegIC(ArithProfile*);
-    JITSubIC* addJITSubIC(ArithProfile*);
+    JITAddIC* addJITAddIC(ArithProfile*, Instruction*);
+    JITMulIC* addJITMulIC(ArithProfile*, Instruction*);
+    JITNegIC* addJITNegIC(ArithProfile*, Instruction*);
+    JITSubIC* addJITSubIC(ArithProfile*, Instruction*);

     template <typename Generator, typename = typename std::enable_if<std::is_same<Generator, JITAddGenerator>::value>::type>
-    JITAddIC* addMathIC(ArithProfile* profile) { return addJITAddIC(profile); }
+    JITAddIC* addMathIC(ArithProfile* profile, Instruction* instruction) { return addJITAddIC(profile, instruction); }

     template <typename Generator, typename = typename std::enable_if<std::is_same<Generator, JITMulGenerator>::value>::type>
-    JITMulIC* addMathIC(ArithProfile* profile) { return addJITMulIC(profile); }
+    JITMulIC* addMathIC(ArithProfile* profile, Instruction* instruction) { return addJITMulIC(profile, instruction); }

     template <typename Generator, typename = typename std::enable_if<std::is_same<Generator, JITNegGenerator>::value>::type>
-    JITNegIC* addMathIC(ArithProfile* profile) { return addJITNegIC(profile); }
+    JITNegIC* addMathIC(ArithProfile* profile, Instruction* instruction) { return addJITNegIC(profile, instruction); }

     template <typename Generator, typename = typename std::enable_if<std::is_same<Generator, JITSubGenerator>::value>::type>
-    JITSubIC* addMathIC(ArithProfile* profile) { return addJITSubIC(profile); }
+    JITSubIC* addMathIC(ArithProfile* profile, Instruction* instruction) { return addJITSubIC(profile, instruction); }

     StructureStubInfo* addStubInfo(AccessType);
…
     }

+    typedef JSC::Instruction Instruction;
+    typedef PoisonedRefCountedArray<CodeBlockPoison, Instruction>& UnpackedInstructions;
+
     static void clearLLIntGetByIdCache(Instruction*);
…
         return static_cast<Instruction*>(returnAddress) - instructions().begin();
     }
-
-    typedef JSC::Instruction Instruction;
-    typedef PoisonedRefCountedArray<CodeBlockPoison, Instruction>& UnpackedInstructions;

     unsigned numberOfInstructions() const { return m_instructions.size(); }
trunk/Source/JavaScriptCore/bytecode/InlineAccess.cpp
Diff from r229609 to r230129:

     linkBuffer.link(jump, target);

-    FINALIZE_CODE(linkBuffer, JITCodePtrTag, "InlineAccess: linking constant jump");
+    FINALIZE_CODE(linkBuffer, NearJumpPtrTag, "InlineAccess: linking constant jump");
 }
trunk/Source/JavaScriptCore/bytecode/LLIntCallLinkInfo.h
Diff from r229481 to r230129:

         callee.clear();
         machineCodeTarget = MacroAssemblerCodePtr();
-        callPtrTag = NoPtrTag;
         if (isOnList())
             remove();
…
     WriteBarrier<JSObject> lastSeenCallee;
     MacroAssemblerCodePtr machineCodeTarget;
-    PtrTag callPtrTag { NoPtrTag };
 };
trunk/Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
Diff from r229609 to r230129:

     } else {
         jit->setupArguments<decltype(lookupExceptionHandler)>(CCallHelpers::TrustedImmPtr(&m_vm), GPRInfo::callFrameRegister);
-        CCallHelpers::Call lookupExceptionHandlerCall = jit->call(NoPtrTag);
+        PtrTag tag = ptrTag(JITOperationPtrTag, nextPtrTagID());
+        CCallHelpers::Call lookupExceptionHandlerCall = jit->call(tag);
         jit->addLinkTask(
             [=] (LinkBuffer& linkBuffer) {
-                linkBuffer.link(lookupExceptionHandlerCall, lookupExceptionHandler, NoPtrTag);
+                linkBuffer.link(lookupExceptionHandlerCall, FunctionPtr(lookupExceptionHandler, tag));
             });
         jit->jumpToExceptionHandler(m_vm);
…
         HandlerInfo handlerToRegister = oldHandler;
-        handlerToRegister.nativeCode = linkBuffer.locationOf(makeshiftCatchHandler, NoPtrTag);
+        handlerToRegister.nativeCode = linkBuffer.locationOf(makeshiftCatchHandler, NearJumpPtrTag);
         handlerToRegister.start = newExceptionHandlingCallSite.bits();
         handlerToRegister.end = newExceptionHandlingCallSite.bits() + 1;
…
     MacroAssemblerCodeRef code = FINALIZE_CODE_FOR(
-        codeBlock, linkBuffer, NoPtrTag,
+        codeBlock, linkBuffer, NearJumpPtrTag,
         "%s", toCString("Access stub for ", *codeBlock, " ", stubInfo.codeOrigin, " with return point ", successLabel, ": ", listDump(cases)).data());
trunk/Source/JavaScriptCore/dfg/DFGJITFinalizer.cpp
Diff from r229609 to r230129:

 bool JITFinalizer::finalize()
 {
-    m_jitCode->initializeCodeRef(
-        FINALIZE_DFG_CODE(*m_linkBuffer, NoPtrTag, "DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()),
-        MacroAssemblerCodePtr());
+    MacroAssemblerCodeRef codeRef = FINALIZE_DFG_CODE(*m_linkBuffer, CodeEntryPtrTag, "DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data());
+    m_jitCode->initializeCodeRef(codeRef, codeRef.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag));

     m_plan.codeBlock->setJITCode(m_jitCode.copyRef());
…
     RELEASE_ASSERT(!m_withArityCheck.isEmptyValue());
     m_jitCode->initializeCodeRef(
-        FINALIZE_DFG_CODE(*m_linkBuffer, NoPtrTag, "DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()),
+        FINALIZE_DFG_CODE(*m_linkBuffer, CodeEntryPtrTag, "DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()),
         m_withArityCheck);
     m_plan.codeBlock->setJITCode(m_jitCode.copyRef());
trunk/Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Diff from r230022 to r230129:

 #endif

-    ArithProfile* arithProfile = m_jit.graph().baselineCodeBlockFor(node->origin.semantic)->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
-    JITAddIC* addIC = m_jit.codeBlock()->addJITAddIC(arithProfile);
+    CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
+    ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
+    Instruction* instruction = &baselineCodeBlock->instructions()[node->origin.semantic.bytecodeIndex];
+    JITAddIC* addIC = m_jit.codeBlock()->addJITAddIC(arithProfile, instruction);
     auto repatchingFunction = operationValueAddOptimize;
     auto nonRepatchingFunction = operationValueAdd;
…
 #endif

-    ArithProfile* arithProfile = m_jit.graph().baselineCodeBlockFor(node->origin.semantic)->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
-    JITSubIC* subIC = m_jit.codeBlock()->addJITSubIC(arithProfile);
+    CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
+    ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
+    Instruction* instruction = &baselineCodeBlock->instructions()[node->origin.semantic.bytecodeIndex];
+    JITSubIC* subIC = m_jit.codeBlock()->addJITSubIC(arithProfile, instruction);
     auto repatchingFunction = operationValueSubOptimize;
     auto nonRepatchingFunction = operationValueSub;
…
     default: {
-        ArithProfile* arithProfile = m_jit.graph().baselineCodeBlockFor(node->origin.semantic)->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
-        JITNegIC* negIC = m_jit.codeBlock()->addJITNegIC(arithProfile);
+        CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
+        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
+        Instruction* instruction = &baselineCodeBlock->instructions()[node->origin.semantic.bytecodeIndex];
+        JITNegIC* negIC = m_jit.codeBlock()->addJITNegIC(arithProfile, instruction);
         auto repatchingFunction = operationArithNegateOptimize;
         auto nonRepatchingFunction = operationArithNegate;
…
 #endif

-    ArithProfile* arithProfile = m_jit.graph().baselineCodeBlockFor(node->origin.semantic)->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
-    JITMulIC* mulIC = m_jit.codeBlock()->addJITMulIC(arithProfile);
+    CodeBlock* baselineCodeBlock = m_jit.graph().baselineCodeBlockFor(node->origin.semantic);
+    ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
+    Instruction* instruction = &baselineCodeBlock->instructions()[node->origin.semantic.bytecodeIndex];
+    JITMulIC* mulIC = m_jit.codeBlock()->addJITMulIC(arithProfile, instruction);
     auto repatchingFunction = operationValueMulOptimize;
     auto nonRepatchingFunction = operationValueMul;
…
     m_jit.xor64(poisonScratch, scratch);
 #endif
-    m_jit.jump(scratch, NoPtrTag);
+    PtrTag tag = ptrTag(SwitchTablePtrTag, &table);
+    m_jit.jump(scratch, tag);
     data->didUseJumpTable = true;
…
     value.use();

+    SimpleJumpTable& table = m_jit.codeBlock()->switchJumpTable(data->switchTableIndex);
+    PtrTag tag = ptrTag(SwitchTablePtrTag, &table);
     auto notInt32 = m_jit.branchIfNotInt32(valueRegs);
     emitSwitchIntJump(data, valueRegs.payloadGPR(), scratch, scratch2);
…
     silentFillAllRegisters();

-    m_jit.jump(scratch, NoPtrTag);
+    m_jit.jump(scratch, tag);
     noResult(node, UseChildrenCalledExplicitly);
     break;
…
     }

+    auto* codeBlock = m_jit.codeBlock();
     if (!canDoBinarySwitch || totalLength > Options::maximumBinaryStringSwitchTotalLength()) {
+        StringJumpTable& table = codeBlock->stringSwitchJumpTable(data->switchTableIndex);
+        PtrTag tag = ptrTag(SwitchTablePtrTag, &table);
         flushRegisters();
         callOperation(
             operationSwitchString, string, static_cast<size_t>(data->switchTableIndex), string);
         m_jit.exceptionCheck();
-        m_jit.jump(string, NoPtrTag);
+        m_jit.jump(string, tag);
         return;
     }
…
         data, cases, 0, 0, cases.size(), string, lengthGPR, tempGPR, 0, false);

+    StringJumpTable& table = codeBlock->stringSwitchJumpTable(data->switchTableIndex);
+    PtrTag tag = ptrTag(SwitchTablePtrTag, &table);
+
     slowCases.link(&m_jit);
     silentSpillAllRegisters(string);
…
     silentFillAllRegisters();
     m_jit.exceptionCheck();
-    m_jit.jump(string, NoPtrTag);
+    m_jit.jump(string, tag);
 }
trunk/Source/JavaScriptCore/disassembler/ARM64Disassembler.cpp
Diff from r228105 to r230129:

 /*
- * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
     A64DOpcode arm64Opcode;

-    uint32_t* currentPC = reinterpret_cast<uint32_t*>(codePtr.executableAddress());
+    uint32_t* currentPC = removeCodePtrTag<uint32_t*>(codePtr.executableAddress());
     size_t byteCount = size;
trunk/Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
Diff from r229609 to r230129:

     bool dumpDisassembly = shouldDumpDisassembly() || Options::asyncDisassembly();

-    jitCode->initializeB3Code(
-        FINALIZE_CODE_IF(
-            dumpDisassembly, *b3CodeLinkBuffer, CodeEntryPtrTag,
-            "FTL B3 code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data()));
+    MacroAssemblerCodeRef b3CodeRef =
+        FINALIZE_CODE_IF(dumpDisassembly, *b3CodeLinkBuffer, CodeEntryPtrTag,
+            "FTL B3 code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data());

-    if (entrypointLinkBuffer) {
-        jitCode->initializeArityCheckEntrypoint(
-            FINALIZE_CODE_IF(
-                dumpDisassembly, *entrypointLinkBuffer, CodeEntryWithArityCheckPtrTag,
-                "FTL entrypoint thunk for %s with B3 generated code at %p", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data(), function));
-    }
+    MacroAssemblerCodeRef arityCheckCodeRef = entrypointLinkBuffer
+        ? FINALIZE_CODE_IF(dumpDisassembly, *entrypointLinkBuffer, CodeEntryWithArityCheckPtrTag,
+            "FTL entrypoint thunk for %s with B3 generated code at %p", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data(), function)
+        : MacroAssemblerCodeRef::createSelfManagedCodeRef(b3CodeRef.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag));
+
+    jitCode->initializeB3Code(b3CodeRef);
+    jitCode->initializeArityCheckEntrypoint(arityCheckCodeRef);

     m_plan.codeBlock->setJITCode(*jitCode);
trunk/Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Diff from r230098 to r230129:

     void compileValueAdd()
     {
-        ArithProfile* arithProfile = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic)->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
+        CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
+        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
+        Instruction* instruction = &baselineCodeBlock->instructions()[m_node->origin.semantic.bytecodeIndex];
         auto repatchingFunction = operationValueAddOptimize;
         auto nonRepatchingFunction = operationValueAdd;
-        compileBinaryMathIC<JITAddGenerator>(arithProfile, repatchingFunction, nonRepatchingFunction);
-    }
-
-    template <typename Generator>
-    void compileUnaryMathIC(ArithProfile* arithProfile, FunctionPtr repatchingFunction, FunctionPtr nonRepatchingFunction)
+        compileBinaryMathIC<JITAddGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
+    }
+
+    template <typename Generator, typename Func1, typename Func2,
+        typename = std::enable_if_t<std::is_function<typename std::remove_pointer<Func1>::type>::value && std::is_function<typename std::remove_pointer<Func2>::type>::value>>
+    void compileUnaryMathIC(ArithProfile* arithProfile, Instruction* instruction, Func1 repatchingFunction, Func2 nonRepatchingFunction)
     {
         Node* node = m_node;
…
         Box<MathICGenerationState> mathICGenerationState = Box<MathICGenerationState>::create();
-        JITUnaryMathIC<Generator>* mathIC = jit.codeBlock()->addMathIC<Generator>(arithProfile);
+        JITUnaryMathIC<Generator>* mathIC = jit.codeBlock()->addMathIC<Generator>(arithProfile, instruction);
         mathIC->m_generator = Generator(JSValueRegs(params[0].gpr()), JSValueRegs(params[1].gpr()), params.gpScratch(0));
…
     }

-    template <typename Generator>
-    void compileBinaryMathIC(ArithProfile* arithProfile, FunctionPtr repatchingFunction, FunctionPtr nonRepatchingFunction)
+    template <typename Generator, typename Func1, typename Func2,
+        typename = std::enable_if_t<std::is_function<typename std::remove_pointer<Func1>::type>::value && std::is_function<typename std::remove_pointer<Func2>::type>::value>>
+    void compileBinaryMathIC(ArithProfile* arithProfile, Instruction* instruction, Func1 repatchingFunction, Func2 nonRepatchingFunction)
     {
         Node* node = m_node;
…
         Box<MathICGenerationState> mathICGenerationState = Box<MathICGenerationState>::create();
-        JITBinaryMathIC<Generator>* mathIC = jit.codeBlock()->addMathIC<Generator>(arithProfile);
+        JITBinaryMathIC<Generator>* mathIC = jit.codeBlock()->addMathIC<Generator>(arithProfile, instruction);
         mathIC->m_generator = Generator(leftOperand, rightOperand, JSValueRegs(params[0].gpr()),
             JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()), params.fpScratch(0),
…
     }

-    ArithProfile* arithProfile = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic)->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
+    CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
+    ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
+    Instruction* instruction = &baselineCodeBlock->instructions()[m_node->origin.semantic.bytecodeIndex];
     auto repatchingFunction = operationValueSubOptimize;
     auto nonRepatchingFunction = operationValueSub;
-    compileBinaryMathIC<JITSubGenerator>(arithProfile, repatchingFunction, nonRepatchingFunction);
+    compileBinaryMathIC<JITSubGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
     break;
 }
…
     case UntypedUse: {
-        ArithProfile* arithProfile = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic)->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
+        CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
+        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
+        Instruction* instruction = &baselineCodeBlock->instructions()[m_node->origin.semantic.bytecodeIndex];
         auto repatchingFunction = operationValueMulOptimize;
         auto nonRepatchingFunction = operationValueMul;
-        compileBinaryMathIC<JITMulGenerator>(arithProfile, repatchingFunction, nonRepatchingFunction);
+        compileBinaryMathIC<JITMulGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
         break;
     }
…
     default:
         DFG_ASSERT(m_graph, m_node, m_node->child1().useKind() == UntypedUse, m_node->child1().useKind());
-        ArithProfile* arithProfile = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic)->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
+        CodeBlock* baselineCodeBlock = m_ftlState.graph.baselineCodeBlockFor(m_node->origin.semantic);
+        ArithProfile* arithProfile = baselineCodeBlock->arithProfileForBytecodeOffset(m_node->origin.semantic.bytecodeIndex);
+        Instruction* instruction = &baselineCodeBlock->instructions()[m_node->origin.semantic.bytecodeIndex];
         auto repatchingFunction = operationArithNegateOptimize;
         auto nonRepatchingFunction = operationArithNegate;
-        compileUnaryMathIC<JITNegGenerator>(arithProfile, repatchingFunction, nonRepatchingFunction);
+        compileUnaryMathIC<JITNegGenerator>(arithProfile, instruction, repatchingFunction, nonRepatchingFunction);
         break;
     }
trunk/Source/JavaScriptCore/heap/JITStubRoutineSet.h
Diff from r206525 to r230129:

 /*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
…
     void mark(void* candidateAddress)
     {
-        uintptr_t address = reinterpret_cast<uintptr_t>(candidateAddress);
+        uintptr_t address = removeCodePtrTag<uintptr_t>(candidateAddress);
         if (!JITStubRoutine::passesFilter(address))
             return;
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.cpp
r229609 r230129
347 347 move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
348 348 #endif
349 - move(TrustedImmPtr(tagCFunctionPtr(operationExceptionFuzz, SlowPathPtrTag)), GPRInfo::nonPreservedNonReturnGPR);
350 - call(GPRInfo::nonPreservedNonReturnGPR, SlowPathPtrTag);
349 + PtrTag tag = ptrTag(JITOperationPtrTag, nextPtrTagID());
350 + move(TrustedImmPtr(tagCFunctionPtr(operationExceptionFuzz, tag)), GPRInfo::nonPreservedNonReturnGPR);
351 + call(GPRInfo::nonPreservedNonReturnGPR, tag);
351 352
352 353 for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
… …
941 942 #error "JIT not supported on this platform."
942 943 #endif
943 - move(TrustedImmPtr(tagCFunctionPtr(function, SlowPathPtrTag)), scratch);
944 - call(scratch, SlowPathPtrTag);
944 + PtrTag tag = ptrTag(JITOperationPtrTag, nextPtrTagID());
945 + move(TrustedImmPtr(tagCFunctionPtr(function, tag)), scratch);
946 + call(scratch, tag);
945 947
946 948 move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-
trunk/Source/JavaScriptCore/jit/AssemblyHelpers.h
r229444 r230129
530 530 void emitFunctionPrologue()
531 531 {
532 + tagReturnAddress();
532 533 pushPair(framePointerRegister, linkRegister);
533 534 move(stackPointerRegister, framePointerRegister);
-
trunk/Source/JavaScriptCore/jit/CCallHelpers.cpp
r229609 r230129
60 60 Jump ok = branchPtr(Below, shadowPacket, TrustedImmPtr(vm.shadowChicken().logEnd()));
61 61 setupArguments<decltype(operationProcessShadowChickenLog)>();
62 - move(TrustedImmPtr(tagCFunctionPtr(operationProcessShadowChickenLog, SlowPathPtrTag)), scratch1NonArgGPR);
63 - call(scratch1NonArgGPR, SlowPathPtrTag);
62 + PtrTag tag = ptrTag(JITOperationPtrTag, nextPtrTagID());
63 + move(TrustedImmPtr(tagCFunctionPtr(operationProcessShadowChickenLog, tag)), scratch1NonArgGPR);
64 + call(scratch1NonArgGPR, tag);
64 65 move(TrustedImmPtr(vm.shadowChicken().addressOfLogCursor()), scratch1NonArgGPR);
65 66 loadPtr(Address(scratch1NonArgGPR), shadowPacket);
-
trunk/Source/JavaScriptCore/jit/CCallHelpers.h
r230091 r230129
587 587 // caller.
588 588 #if CPU(ARM) || CPU(ARM64)
589 - loadPtr(Address(framePointerRegister, sizeof(void*)), linkRegister);
589 + loadPtr(Address(framePointerRegister, CallFrame::returnPCOffset()), linkRegister);
590 590 subPtr(TrustedImm32(2 * sizeof(void*)), newFrameSizeGPR);
591 + #if USE(POINTER_PROFILING)
592 + addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), MacroAssembler::framePointerRegister, tempGPR);
593 + untagPtr(linkRegister, tempGPR);
594 + #endif
591 595 #elif CPU(MIPS)
592 596 loadPtr(Address(framePointerRegister, sizeof(void*)), returnAddressRegister);
-
trunk/Source/JavaScriptCore/jit/CallFrameShuffler.cpp
r223047 r230129
1 1 /*
2 - * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
2 + * Copyright (C) 2015-2018 Apple Inc. All rights reserved.
3 3 *
4 4 * Redistribution and use in source and binary forms, with or without
… …
447 447 // We load the link register manually for architectures that have one
448 448 #if CPU(ARM) || CPU(ARM64)
449 - m_jit.loadPtr(MacroAssembler::Address(MacroAssembler::framePointerRegister, sizeof(void*)),
449 + m_jit.loadPtr(MacroAssembler::Address(MacroAssembler::framePointerRegister, CallFrame::returnPCOffset()),
450 450 MacroAssembler::linkRegister);
451 + #if USE(POINTER_PROFILING)
452 + m_jit.addPtr(MacroAssembler::TrustedImm32(sizeof(CallerFrameAndPC)), MacroAssembler::framePointerRegister);
453 + m_jit.untagPtr(MacroAssembler::linkRegister, MacroAssembler::framePointerRegister);
454 + m_jit.subPtr(MacroAssembler::TrustedImm32(sizeof(CallerFrameAndPC)), MacroAssembler::framePointerRegister);
455 + #endif
456 +
451 457 #elif CPU(MIPS)
452 458 m_jit.loadPtr(MacroAssembler::Address(MacroAssembler::framePointerRegister, sizeof(void*)),
-
trunk/Source/JavaScriptCore/jit/ExecutableAllocator.cpp
r230092 r230129
239 239 MacroAssembler jit;
240 240
241 + jit.tagReturnAddress();
241 242 jit.move(MacroAssembler::TrustedImmPtr(writableAddr), x7);
242 243 jit.addPtr(x7, x0);
… …
299 300 // The second is we can't guarantee that the code is readable when using the
300 301 // asyncDisassembly option as our caller will set our pages execute only.
301 - return linkBuffer.finalizeCodeWithoutDisassembly(NoPtrTag);
302 + PtrTag tag = ptrTag(JITWriteThunkPtrTag, &jitWriteSeparateHeapsFunction);
303 + return linkBuffer.finalizeCodeWithoutDisassembly(tag);
302 304 }
303 305 #else // CPU(ARM64) && USE(EXECUTE_ONLY_JIT_WRITE_FUNCTION)
-
trunk/Source/JavaScriptCore/jit/ExecutableAllocator.h
r230092 r230129
1 1 /*
2 - * Copyright (C) 2008, 2017 Apple Inc. All rights reserved.
2 + * Copyright (C) 2008-2018 Apple Inc. All rights reserved.
3 3 *
4 4 * Redistribution and use in source and binary forms, with or without
… …
27 27
28 28 #include "JITCompilationEffort.h"
29 + #include "PtrTag.h"
29 30 #include <stddef.h> // for ptrdiff_t
30 31 #include <limits>
… …
91 92 // memcpy that takes an offset into the JIT region as its destination (first) parameter.
92 93 off_t offset = (off_t)((uintptr_t)dst - startOfFixedExecutableMemoryPool);
93 - jitWriteSeparateHeapsFunction(offset, src, n);
94 + PtrTag tag = ptrTag(JITWriteThunkPtrTag, &jitWriteSeparateHeapsFunction);
95 + retagCodePtr(jitWriteSeparateHeapsFunction, tag, CFunctionPtrTag)(offset, src, n);
94 96 return dst;
95 97 }
-
trunk/Source/JavaScriptCore/jit/JIT.cpp
r229957 r230129
650 650 m_pcToCodeOriginMapBuilder.appendItem(label(), CodeOrigin(0, nullptr));
651 651
652 + Label entryLabel(this);
652 653 if (m_disassembler)
653 - m_disassembler->setStartOfCode(label());
654 + m_disassembler->setStartOfCode(entryLabel);
654 655
655 656 // Just add a little bit of randomness to the codegen
… …
736 737 addPtr(TrustedImm32(maxFrameExtentForSlowPathCall), stackPointerRegister);
737 738 branchTest32(Zero, returnValueGPR).linkTo(beginLabel, this);
739 + #if CPU(ARM64) && USE(POINTER_PROFILING)
740 + loadPtr(Address(callFrameRegister, CallFrame::returnPCOffset()), linkRegister);
741 + addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), callFrameRegister, regT1);
742 + untagPtr(linkRegister, regT1);
743 + PtrTag tempTag = ptrTag(JITThunkPtrTag, nextPtrTagID());
744 + move(TrustedImmPtr(tempTag), regT1);
745 + tagPtr(linkRegister, regT1);
746 + storePtr(linkRegister, Address(callFrameRegister, CallFrame::returnPCOffset()));
747 + #endif
738 748 move(returnValueGPR, GPRInfo::argumentGPR0);
739 - emitNakedCall(m_vm->getCTIStub(arityFixupGenerator).code());
749 + emitNakedCall(m_vm->getCTIStub(arityFixupGenerator).retaggedCode(ptrTag(JITThunkPtrTag, m_vm), NearCallPtrTag));
750 + #if CPU(ARM64) && USE(POINTER_PROFILING)
751 + loadPtr(Address(callFrameRegister, CallFrame::returnPCOffset()), linkRegister);
752 + move(TrustedImmPtr(tempTag), regT1);
753 + untagPtr(linkRegister, regT1);
754 + addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), callFrameRegister, regT1);
755 + tagPtr(linkRegister, regT1);
756 + storePtr(linkRegister, Address(callFrameRegister, CallFrame::returnPCOffset()));
757 + #endif
740 758
741 759 #if !ASSERT_DISABLED
… …
744 762
745 763 jump(beginLabel);
746 - }
764 + } else
765 + m_arityCheck = entryLabel; // Not a function.
747 766
748 767 ASSERT(m_jmpTable.isEmpty());
… …
785 804 ASSERT(record.jumpTable.simpleJumpTable->branchOffsets.size() == record.jumpTable.simpleJumpTable->ctiOffsets.size());
786 805
787 - record.jumpTable.simpleJumpTable->ctiDefault = patchBuffer.locationOf(m_labels[bytecodeOffset + record.defaultOffset], NoPtrTag);
806 + auto* simpleJumpTable = record.jumpTable.simpleJumpTable;
807 + PtrTag tag = ptrTag(SwitchTablePtrTag, simpleJumpTable);
808 + simpleJumpTable->ctiDefault = patchBuffer.locationOf(m_labels[bytecodeOffset + record.defaultOffset], tag);
788 809
789 810 for (unsigned j = 0; j < record.jumpTable.simpleJumpTable->branchOffsets.size(); ++j) {
790 811 unsigned offset = record.jumpTable.simpleJumpTable->branchOffsets[j];
791 - record.jumpTable.simpleJumpTable->ctiOffsets[j] = offset ? patchBuffer.locationOf(m_labels[bytecodeOffset + offset], NoPtrTag) : record.jumpTable.simpleJumpTable->ctiDefault;
812 + simpleJumpTable->ctiOffsets[j] = offset
813 + ? patchBuffer.locationOf(m_labels[bytecodeOffset + offset], tag)
814 + : simpleJumpTable->ctiDefault;
792 815 }
793 816 } else {
… …
795 818
796 819 auto* stringJumpTable = record.jumpTable.stringJumpTable;
820 + PtrTag tag = ptrTag(SwitchTablePtrTag, stringJumpTable);
797 821 stringJumpTable->ctiDefault =
798 - patchBuffer.locationOf(m_labels[bytecodeOffset + record.defaultOffset], NoPtrTag);
822 + patchBuffer.locationOf(m_labels[bytecodeOffset + record.defaultOffset], tag);
799 823
800 824 for (auto& location : stringJumpTable->offsetTable.values()) {
801 825 unsigned offset = location.branchOffset;
802 826 location.ctiOffset = offset
803 - ? patchBuffer.locationOf(m_labels[bytecodeOffset + offset], NoPtrTag)
827 + ? patchBuffer.locationOf(m_labels[bytecodeOffset + offset], tag)
804 828 : stringJumpTable->ctiDefault;
805 829 }
… …
833 857 notIndexJump = CodeLocationJump(patchBuffer.locationOf(patchableNotIndexJump));
834 858 CodeLocationJump badTypeJump = CodeLocationJump(patchBuffer.locationOf(byValCompilationInfo.badTypeJump));
835 - CodeLocationLabel doneTarget = patchBuffer.locationOf(byValCompilationInfo.doneTarget, NoPtrTag);
836 - CodeLocationLabel nextHotPathTarget = patchBuffer.locationOf(byValCompilationInfo.nextHotPathTarget, NoPtrTag);
837 - CodeLocationLabel slowPathTarget = patchBuffer.locationOf(byValCompilationInfo.slowPathTarget, NoPtrTag);
859 + CodeLocationLabel doneTarget = patchBuffer.locationOf(byValCompilationInfo.doneTarget);
860 + CodeLocationLabel nextHotPathTarget = patchBuffer.locationOf(byValCompilationInfo.nextHotPathTarget);
861 + CodeLocationLabel slowPathTarget = patchBuffer.locationOf(byValCompilationInfo.slowPathTarget);
838 862 CodeLocationCall returnAddress = patchBuffer.locationOf(byValCompilationInfo.returnAddress);
… …
866 890 m_codeBlock->setJITCodeMap(jitCodeMapEncoder.finish());
867 891
868 - MacroAssemblerCodePtr withArityCheck;
869 - if (m_codeBlock->codeType() == FunctionCode)
870 - withArityCheck = patchBuffer.locationOf(m_arityCheck, CodeEntryWithArityCheckPtrTag);
892 + MacroAssemblerCodePtr withArityCheck = patchBuffer.locationOf(m_arityCheck, CodeEntryWithArityCheckPtrTag);
871 893
872 894 if (Options::dumpDisassembly()) {
-
trunk/Source/JavaScriptCore/jit/JITArithmetic.cpp
r229767 r230129
1 1 /*
2 - * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
2 + * Copyright (C) 2008-2018 Apple Inc. All rights reserved.
3 3 *
4 4 * Redistribution and use in source and binary forms, with or without
… …
515 515 {
516 516 ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction);
517 - JITNegIC* negateIC = m_codeBlock->addJITNegIC(arithProfile);
517 + JITNegIC* negateIC = m_codeBlock->addJITNegIC(arithProfile, currentInstruction);
518 518 m_instructionToMathIC.add(currentInstruction, negateIC);
519 519 emitMathICFast(negateIC, currentInstruction, operationArithNegateProfiled, operationArithNegate);
… …
663 663 {
664 664 ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction);
665 - JITAddIC* addIC = m_codeBlock->addJITAddIC(arithProfile);
665 + JITAddIC* addIC = m_codeBlock->addJITAddIC(arithProfile, currentInstruction);
666 666 m_instructionToMathIC.add(currentInstruction, addIC);
667 667 emitMathICFast(addIC, currentInstruction, operationValueAddProfiled, operationValueAdd);
… …
706 706 bool generatedInlineCode = mathIC->generateInline(*this, mathICGenerationState);
707 707 if (!generatedInlineCode) {
708 + PtrTag tag = ptrTag(MathICPtrTag, currentInstruction);
708 709 ArithProfile* arithProfile = mathIC->arithProfile();
709 710 if (arithProfile && shouldEmitProfiling())
710 - callOperationWithResult(profiledFunction, NoPtrTag, resultRegs, srcRegs, arithProfile);
711 + callOperationWithResult(profiledFunction, tag, resultRegs, srcRegs, arithProfile);
711 712 else
712 - callOperationWithResult(nonProfiledFunction, NoPtrTag, resultRegs, srcRegs);
713 + callOperationWithResult(nonProfiledFunction, tag, resultRegs, srcRegs);
713 714 } else
714 715 addSlowCase(mathICGenerationState.slowPathJumps);
… …
779 780 else if (rightOperand.isConst())
780 781 emitGetVirtualRegister(op2, rightRegs);
782 + PtrTag tag = ptrTag(MathICPtrTag, currentInstruction);
781 783 ArithProfile* arithProfile = mathIC->arithProfile();
782 784 if (arithProfile && shouldEmitProfiling())
783 - callOperationWithResult(profiledFunction, NoPtrTag, resultRegs, leftRegs, rightRegs, arithProfile);
785 + callOperationWithResult(profiledFunction, tag, resultRegs, leftRegs, rightRegs, arithProfile);
784 786 else
785 - callOperationWithResult(nonProfiledFunction, NoPtrTag, resultRegs, leftRegs, rightRegs);
787 + callOperationWithResult(nonProfiledFunction, tag, resultRegs, leftRegs, rightRegs);
786 788 } else
787 789 addSlowCase(mathICGenerationState.slowPathJumps);
… …
818 820 #endif
819 821
822 + PtrTag tag = ptrTag(MathICPtrTag, currentInstruction);
820 823 ArithProfile* arithProfile = mathIC->arithProfile();
821 824 if (arithProfile && shouldEmitProfiling()) {
822 825 if (mathICGenerationState.shouldSlowPathRepatch)
823 - mathICGenerationState.slowPathCall = callOperationWithResult(reinterpret_cast<J_JITOperation_EJMic>(profiledRepatchFunction), NoPtrTag, resultRegs, srcRegs, TrustedImmPtr(mathIC));
826 + mathICGenerationState.slowPathCall = callOperationWithResult(reinterpret_cast<J_JITOperation_EJMic>(profiledRepatchFunction), tag, resultRegs, srcRegs, TrustedImmPtr(mathIC));
824 827 else
825 - mathICGenerationState.slowPathCall = callOperationWithResult(profiledFunction, NoPtrTag, resultRegs, srcRegs, arithProfile);
828 + mathICGenerationState.slowPathCall = callOperationWithResult(profiledFunction, tag, resultRegs, srcRegs, arithProfile);
826 829 } else
827 - mathICGenerationState.slowPathCall = callOperationWithResult(reinterpret_cast<J_JITOperation_EJMic>(repatchFunction), NoPtrTag, resultRegs, srcRegs, TrustedImmPtr(mathIC));
830 + mathICGenerationState.slowPathCall = callOperationWithResult(reinterpret_cast<J_JITOperation_EJMic>(repatchFunction), tag, resultRegs, srcRegs, TrustedImmPtr(mathIC));
828 831
829 832 #if ENABLE(MATH_IC_STATS)
… …
884 887 #endif
885 888
889 + PtrTag callTag = ptrTag(MathICPtrTag, currentInstruction);
886 890 ArithProfile* arithProfile = mathIC->arithProfile();
887 891 if (arithProfile && shouldEmitProfiling()) {
888 892 if (mathICGenerationState.shouldSlowPathRepatch)
889 - mathICGenerationState.slowPathCall = callOperationWithResult(bitwise_cast<J_JITOperation_EJJMic>(profiledRepatchFunction), NoPtrTag, resultRegs, leftRegs, rightRegs, TrustedImmPtr(mathIC));
893 + mathICGenerationState.slowPathCall = callOperationWithResult(bitwise_cast<J_JITOperation_EJJMic>(profiledRepatchFunction), callTag, resultRegs, leftRegs, rightRegs, TrustedImmPtr(mathIC));
890 894 else
891 - mathICGenerationState.slowPathCall = callOperationWithResult(profiledFunction, NoPtrTag, resultRegs, leftRegs, rightRegs, arithProfile);
895 + mathICGenerationState.slowPathCall = callOperationWithResult(profiledFunction, callTag, resultRegs, leftRegs, rightRegs, arithProfile);
892 896 } else
893 - mathICGenerationState.slowPathCall = callOperationWithResult(bitwise_cast<J_JITOperation_EJJMic>(repatchFunction), NoPtrTag, resultRegs, leftRegs, rightRegs, TrustedImmPtr(mathIC));
897 + mathICGenerationState.slowPathCall = callOperationWithResult(bitwise_cast<J_JITOperation_EJJMic>(repatchFunction), callTag, resultRegs, leftRegs, rightRegs, TrustedImmPtr(mathIC));
894 898
895 899 #if ENABLE(MATH_IC_STATS)
… …
978 982 {
979 983 ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction);
980 - JITMulIC* mulIC = m_codeBlock->addJITMulIC(arithProfile);
984 + JITMulIC* mulIC = m_codeBlock->addJITMulIC(arithProfile, currentInstruction);
981 985 m_instructionToMathIC.add(currentInstruction, mulIC);
982 986 emitMathICFast(mulIC, currentInstruction, operationValueMulProfiled, operationValueMul);
… …
994 998 {
995 999 ArithProfile* arithProfile = m_codeBlock->arithProfileForPC(currentInstruction);
996 - JITSubIC* subIC = m_codeBlock->addJITSubIC(arithProfile);
1000 + JITSubIC* subIC = m_codeBlock->addJITSubIC(arithProfile, currentInstruction);
997 1001 m_instructionToMathIC.add(currentInstruction, subIC);
998 1002 emitMathICFast(subIC, currentInstruction, operationValueSubProfiled, operationValueSub);
-
trunk/Source/JavaScriptCore/jit/JITCode.cpp
r225363 r230129
1 1 /*
2 - * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
2 + * Copyright (C) 2008-2018 Apple Inc. All rights reserved.
3 3 *
4 4 * Redistribution and use in source and binary forms, with or without
… …
72 72 auto scope = DECLARE_THROW_SCOPE(*vm);
73 73 void* entryAddress;
74 - JSFunction* function = jsDynamicCast<JSFunction*>(*vm, protoCallFrame->callee());
75 -
76 - if (!function || !protoCallFrame->needArityCheck()) {
77 - ASSERT(!protoCallFrame->needArityCheck());
78 - entryAddress = executableAddress();
79 - } else
80 - entryAddress = addressForCall(MustCheckArity).executableAddress();
74 + entryAddress = addressForCall(MustCheckArity).executableAddress();
81 75 JSValue result = JSValue::decode(vmEntryToJavaScript(entryAddress, vm, protoCallFrame));
82 76 return scope.exception() ? jsNull() : result;
… …
128 122 {
129 123 RELEASE_ASSERT(m_ref);
130 - return m_ref.code().executableAddress<char*>() + offset;
124 + assertIsTaggedWith(m_ref.code().executableAddress(), CodeEntryPtrTag);
125 + if (!offset)
126 + return m_ref.code().executableAddress();
127 +
128 + char* executableAddress = untagCodePtr<char*>(m_ref.code().executableAddress(), CodeEntryPtrTag);
129 + return tagCodePtr(executableAddress + offset, CodeEntryPtrTag);
131 130 }
… …
167 166 , m_withArityCheck(withArityCheck)
168 167 {
168 + ASSERT(m_ref);
169 + ASSERT(m_withArityCheck);
169 170 }
… …
178 179 m_ref = ref;
179 180 m_withArityCheck = withArityCheck;
181 + ASSERT(m_ref);
182 + ASSERT(m_withArityCheck);
180 183 }
… …
214 217 }
215 218
216 - JITCode::CodePtr NativeJITCode::addressForCall(ArityCheckMode)
217 - {
218 - RELEASE_ASSERT(!!m_ref);
219 - return m_ref.code();
219 + JITCode::CodePtr NativeJITCode::addressForCall(ArityCheckMode arity)
220 + {
221 + RELEASE_ASSERT(m_ref);
222 + switch (arity) {
223 + case ArityCheckNotRequired:
224 + return m_ref.code();
225 + case MustCheckArity:
226 + return m_ref.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag);
227 + }
228 + RELEASE_ASSERT_NOT_REACHED();
229 + return CodePtr();
220 230 }
221 231
-
trunk/Source/JavaScriptCore/jit/JITExceptions.cpp
r223738 r230129
1 1 /*
2 - * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
2 + * Copyright (C) 2012-2018 Apple Inc. All rights reserved.
3 3 *
4 4 * Redistribution and use in source and binary forms, with or without
… …
87 87 ASSERT(bitwise_cast<uintptr_t>(callFrame) < bitwise_cast<uintptr_t>(vm->topEntryFrame));
88 88
89 + assertIsTaggedWith(catchRoutine, ExceptionHandlerPtrTag);
89 90 vm->callFrameForCatch = callFrame;
90 91 vm->targetMachinePCForThrow = catchRoutine;
-
trunk/Source/JavaScriptCore/jit/JITMathIC.h
r229609 r230129
57 57 WTF_MAKE_FAST_ALLOCATED;
58 58 public:
59 - JITMathIC(ArithProfile* arithProfile)
59 + JITMathIC(ArithProfile* arithProfile, Instruction* instruction)
60 60 : m_arithProfile(arithProfile)
61 + , m_instruction(instruction)
61 62 {
62 63 }
… …
140 141 RELEASE_ASSERT(linkBuffer.isValid());
141 142 linkBuffer.link(jump, CodeLocationLabel(m_code.code()));
142 - FINALIZE_CODE(linkBuffer, NoPtrTag, "JITMathIC: linking constant jump to out of line stub");
143 + FINALIZE_CODE(linkBuffer, NearJumpPtrTag, "JITMathIC: linking constant jump to out of line stub");
143 144 };
144 145
145 146 auto replaceCall = [&] () {
146 - ftlThunkAwareRepatchCall(codeBlock, slowPathCallLocation(), callReplacement);
147 + PtrTag tag = ptrTag(MathICPtrTag, m_instruction);
148 + ftlThunkAwareRepatchCall(codeBlock, slowPathCallLocation(), FunctionPtr(callReplacement, tag));
147 149 };
… …
167 169
168 170 m_code = FINALIZE_CODE_FOR(
169 - codeBlock, linkBuffer, NoPtrTag, "JITMathIC: generating out of line fast IC snippet");
171 + codeBlock, linkBuffer, NearJumpPtrTag, "JITMathIC: generating out of line fast IC snippet");
170 172
171 173 if (!generationState.shouldSlowPathRepatch) {
… …
209 211
210 212 m_code = FINALIZE_CODE_FOR(
211 - codeBlock, linkBuffer, NoPtrTag, "JITMathIC: generating out of line IC snippet");
213 + codeBlock, linkBuffer, NearJumpPtrTag, "JITMathIC: generating out of line IC snippet");
212 214 }
… …
217 219 void finalizeInlineCode(const MathICGenerationState& state, LinkBuffer& linkBuffer)
218 220 {
219 - CodeLocationLabel start = linkBuffer.locationOf(state.fastPathStart, NoPtrTag);
221 + CodeLocationLabel start = linkBuffer.locationOf(state.fastPathStart, NearJumpPtrTag);
220 222 m_inlineStart = start;
… …
227 229 start, linkBuffer.locationOf(state.slowPathCall));
228 230 m_deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
229 - start, linkBuffer.locationOf(state.slowPathStart, SlowPathPtrTag));
231 + start, linkBuffer.locationOf(state.slowPathStart, NoPtrTag));
230 232
… …
244 246
245 247 ArithProfile* m_arithProfile;
248 + Instruction* m_instruction;
246 249 MacroAssemblerCodeRef m_code;
247 250 CodeLocationLabel m_inlineStart;
… …
260 263 class JITBinaryMathIC : public JITMathIC<GeneratorType, isBinaryProfileEmpty> {
261 264 public:
262 - JITBinaryMathIC(ArithProfile* arithProfile)
263 - : JITMathIC<GeneratorType, isBinaryProfileEmpty>(arithProfile)
265 + JITBinaryMathIC(ArithProfile* arithProfile, Instruction* instruction)
266 + : JITMathIC<GeneratorType, isBinaryProfileEmpty>(arithProfile, instruction)
264 267 {
265 268 }
… …
278 281 class JITUnaryMathIC : public JITMathIC<GeneratorType, isUnaryProfileEmpty> {
279 282 public:
280 - JITUnaryMathIC(ArithProfile* arithProfile)
281 - : JITMathIC<GeneratorType, isUnaryProfileEmpty>(arithProfile)
283 + JITUnaryMathIC(ArithProfile* arithProfile, Instruction* instruction)
284 + : JITMathIC<GeneratorType, isUnaryProfileEmpty>(arithProfile, instruction)
282 285 {
283 286 }
-
trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp
r229957 r230129
682 682 emitGetVirtualRegister(scrutinee, regT0);
683 683 callOperation(operationSwitchImmWithUnknownKeyType, regT0, tableIndex);
684 - jump(returnValueGPR, NoPtrTag);
684 + jump(returnValueGPR, ptrTag(SwitchTablePtrTag, jumpTable));
685 685 }
… …
698 698 emitGetVirtualRegister(scrutinee, regT0);
699 699 callOperation(operationSwitchCharWithUnknownKeyType, regT0, tableIndex);
700 - jump(returnValueGPR, NoPtrTag);
700 + jump(returnValueGPR, ptrTag(SwitchTablePtrTag, jumpTable));
701 701 }
… …
713 713 emitGetVirtualRegister(scrutinee, regT0);
714 714 callOperation(operationSwitchStringWithUnknownKeyType, regT0, tableIndex);
715 - jump(returnValueGPR, NoPtrTag);
715 + jump(returnValueGPR, ptrTag(SwitchTablePtrTag, jumpTable));
716 716 }
… …
1171 1171
1172 1172 byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB(
1173 - m_codeBlock, patchBuffer, NoPtrTag,
1173 + m_codeBlock, patchBuffer, NearJumpPtrTag,
1174 1174 "Baseline has_indexed_property stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value());
1175 1175
1176 1176 MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel(byValInfo->stubRoutine->code().code()));
1177 - MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(operationHasIndexedPropertyGeneric, SlowPathPtrTag));
1177 + MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(operationHasIndexedPropertyGeneric, HasPropertyPtrTag));
1178 1178 }
… …
1234 1234 emitGetVirtualRegister(base, regT0);
1235 1235 emitGetVirtualRegister(property, regT1);
1236 - Call call = callOperation(operationHasIndexedPropertyDefault, dst, regT0, regT1, byValInfo);
1236 + Call call = callOperation(operationHasIndexedPropertyDefault, HasPropertyPtrTag, dst, regT0, regT1, byValInfo);
1237 1237
1238 1238 m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath;
-
trunk/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
r229957 r230129
1044 1044
1045 1045 patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone));
1046 1046
1047 1047 byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB(
1048 1048 m_codeBlock, patchBuffer, NoPtrTag,
… …
1050 1050
1051 1051 MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel(byValInfo->stubRoutine->code().code()));
1052 - MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(operationHasIndexedPropertyGeneric, SlowPathPtrTag));
1052 + MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(operationHasIndexedPropertyGeneric, NoPtrTag));
1053 1053 }
1054 1054
-
trunk/Source/JavaScriptCore/jit/JITOperations.cpp
r229957 r230129
1 1 /*
2 - * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
2 + * Copyright (C) 2013-2018 Apple Inc. All rights reserved.
3 3 *
4 4 * Redistribution and use in source and binary forms, with or without
… …
65 65 #include "ProgramCodeBlock.h"
66 66 #include "PropertyName.h"
67 + #include "PtrTag.h"
67 68 #include "RegExpObject.h"
68 69 #include "Repatch.h"
… …
731 732 // Don't ever try to optimize.
732 733 byValInfo->tookSlowPath = true;
733 - ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationPutByValGeneric));
734 + ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationPutByValGeneric, PutPropertyPtrTag));
734 735 }
735 736 putByVal(exec, baseValue, subscript, value, byValInfo);
… …
815 816 // Don't ever try to optimize.
816 817 byValInfo->tookSlowPath = true;
817 - ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationDirectPutByValGeneric));
818 + ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationDirectPutByValGeneric, PutPropertyPtrTag));
818 819 }
… …
862 863 }
863 864
864 - static SlowPathReturnType handleHostCall(ExecState* execCallee, JSValue callee, CallLinkInfo* callLinkInfo)
865 + static SlowPathReturnType handleHostCall(ExecState* execCallee, JSValue callee, CallLinkInfo* callLinkInfo, PtrTag resultTag)
865 866 {
866 867 ExecState* exec = execCallee->callerFrame();
… …
881 882 vm->hostCallReturnValue = JSValue::decode(callData.native.function(execCallee));
882 883 if (UNLIKELY(scope.exception())) {
884 + PtrTag thunkTag = ptrTag(JITThunkPtrTag, vm, throwExceptionFromCallSlowPathGenerator);
883 885 return encodeResult(
884 - vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(),
886 + vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).retaggedCode(thunkTag, resultTag).executableAddress(),
885 887 reinterpret_cast<void*>(KeepTheFrame));
886 888 }
887 889
888 890 return encodeResult(
889 - bitwise_cast<void*>(getHostCallReturnValue),
891 + tagCFunctionPtr<void*>(getHostCallReturnValue, resultTag),
890 892 reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
… …
893 895 ASSERT(callType == CallType::None);
894 896 throwException(exec, scope, createNotAFunctionError(exec, callee));
897 + PtrTag thunkTag = ptrTag(JITThunkPtrTag, vm, throwExceptionFromCallSlowPathGenerator);
895 898 return encodeResult(
896 - vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(),
899 + vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).retaggedCode(thunkTag, resultTag).executableAddress(),
897 900 reinterpret_cast<void*>(KeepTheFrame));
898 901 }
… …
910 913 vm->hostCallReturnValue = JSValue::decode(constructData.native.function(execCallee));
911 914 if (UNLIKELY(scope.exception())) {
915 + PtrTag thunkTag = ptrTag(JITThunkPtrTag, vm, throwExceptionFromCallSlowPathGenerator);
912 916 return encodeResult(
913 - vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(),
917 + vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).retaggedCode(thunkTag, resultTag).executableAddress(),
914 918 reinterpret_cast<void*>(KeepTheFrame));
915 919 }
916 920
917 - return encodeResult(bitwise_cast<void*>(getHostCallReturnValue), reinterpret_cast<void*>(KeepTheFrame));
921 + return encodeResult(tagCFunctionPtr<void*>(getHostCallReturnValue, resultTag), reinterpret_cast<void*>(KeepTheFrame));
918 922
919 923
920 924 ASSERT(constructType == ConstructType::None);
921 925 throwException(exec, scope, createNotAConstructorError(exec, callee));
926 + PtrTag thunkTag = ptrTag(JITThunkPtrTag, vm, throwExceptionFromCallSlowPathGenerator);
922 927 return encodeResult(
923 - vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(),
928 + vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).retaggedCode(thunkTag, resultTag).executableAddress(),
924 929 reinterpret_cast<void*>(KeepTheFrame));
925 930 }
… …
931 936 auto throwScope = DECLARE_THROW_SCOPE(*vm);
932 937
938 + PtrTag linkedTargetTag = ptrTag(OperationLinkCallPtrTag, vm);
933 939 CodeSpecializationKind kind = callLinkInfo->specializationKind();
934 940 NativeCallFrameTracer tracer(vm, exec);
… …
946 952 callLinkInfo->setSeen();
947 953 else
948 - linkFor(execCallee, *callLinkInfo, nullptr, internalFunction, codePtr);
949 -
950 - return encodeResult(codePtr.executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
954 + linkFor(execCallee, *callLinkInfo, nullptr, internalFunction, codePtr, CodeEntryPtrTag);
955 +
956 + void* linkedTarget = retagCodePtr(codePtr.executableAddress(), CodeEntryPtrTag, linkedTargetTag);
957 + return encodeResult(linkedTarget, reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
951 958 }
952 959 throwScope.release();
953 - return handleHostCall(execCallee, calleeAsValue, callLinkInfo);
960 + return handleHostCall(execCallee, calleeAsValue, callLinkInfo, linkedTargetTag);
954 961 }
… …
960 967 MacroAssemblerCodePtr codePtr;
961 968 CodeBlock* codeBlock = nullptr;
969 + PtrTag codeTag = NoPtrTag;
962 970 if (executable->isHostFunction()) {
963 - codePtr = executable->entrypointFor(kind, MustCheckArity);
971 + codePtr = executable->entrypointFor(kind, MustCheckArity);
972 + codeTag = CodeEntryWithArityCheckPtrTag;
964 973 } else {
965 974 FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
966 975
976 + auto handleThrowException = [&] () {
977 + PtrTag thunkTag = ptrTag(JITThunkPtrTag, vm, throwExceptionFromCallSlowPathGenerator);
978 + void* throwTarget = retagCodePtr(vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(), thunkTag, linkedTargetTag);
979 + return encodeResult(throwTarget, reinterpret_cast<void*>(KeepTheFrame));
980 + };
981 +
967 982 if (!isCall(kind) && functionExecutable->constructAbility() == ConstructAbility::CannotConstruct) {
968 983 throwException(exec, throwScope, createNotAConstructorError(exec, callee));
969 - return encodeResult(
970 - vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(),
971 - reinterpret_cast<void*>(KeepTheFrame));
984 + return handleThrowException();
972 985 }
… …
975 988 JSObject* error = functionExecutable->prepareForExecution<FunctionExecutable>(*vm, callee, scope, kind, *codeBlockSlot);
976 989 EXCEPTION_ASSERT(throwScope.exception() == reinterpret_cast<Exception*>(error));
977 - if (error) {
978 - return encodeResult(
979 - vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(),
980 - reinterpret_cast<void*>(KeepTheFrame));
981 - }
990 + if (error)
991 + return handleThrowException();
982 992 codeBlock = *codeBlockSlot;
983 993 ArityCheckMode arity;
984 - if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo->isVarargs())
994 + if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo->isVarargs()) {
985 995 arity = MustCheckArity;
986 - else
996 + codeTag = CodeEntryWithArityCheckPtrTag;
997 + } else {
987 998 arity = ArityCheckNotRequired;
999 + codeTag = CodeEntryPtrTag;
1000 + }
988 1001 codePtr = functionExecutable->entrypointFor(kind, arity);
989 1002 }
… …
991 1004 callLinkInfo->setSeen();
992 1005 else
993 - linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtr);
994 -
995 - return encodeResult(codePtr.executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
1006 + linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtr, codeTag);
1007 +
1008 + return encodeResult(codePtr.retagged(codeTag, linkedTargetTag).executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
996 1009 }
… …
1026 1039 CodeBlock* codeBlock = nullptr;
1027 1040 if (executable->isHostFunction())
1028 - codePtr = executable->entrypointFor(kind, MustCheckArity);
1041 + codePtr = executable->entrypointFor(kind, MustCheckArity).retagged(CodeEntryWithArityCheckPtrTag, NearCallPtrTag);
1029 1042 else {
1030 1043 FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
… …
1036 1049 if (error)
1037 1050 return;
1038 - ArityCheckMode arity;
1039 1051 unsigned argumentStackSlots = callLinkInfo->maxNumArguments();
1040 1052 if (argumentStackSlots < static_cast<size_t>(codeBlock->numParameters()))
1041 - arity = MustCheckArity;
1053 + codePtr = functionExecutable->entrypointFor(kind, MustCheckArity).retagged(CodeEntryWithArityCheckPtrTag, NearCallPtrTag);
1042 1054 else
1043 - arity = ArityCheckNotRequired;
1044 - codePtr = functionExecutable->entrypointFor(kind, arity);
1055 + codePtr = functionExecutable->entrypointFor(kind, ArityCheckNotRequired).retagged(CodeEntryPtrTag, NearCallPtrTag);
1045 1056 }
… …
1049 1060
1050 1061 inline SlowPathReturnType virtualForWithFunction(
1051 - ExecState* execCallee, CallLinkInfo* callLinkInfo, JSCell*& calleeAsFunctionCell)
1062 + ExecState* execCallee, CallLinkInfo* callLinkInfo, JSCell*& calleeAsFunctionCell, PtrTag resultTag)
1052 1063 {
1053 1064 ExecState* exec = execCallee->callerFrame();
… …
1064 1075 MacroAssemblerCodePtr codePtr = vm->getCTIInternalFunctionTrampolineFor(kind);
1065 1076 ASSERT(!!codePtr);
1066 - return encodeResult(codePtr.executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
1077 + return encodeResult(codePtr.retagged(CodeEntryPtrTag, resultTag).executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
1067 1078 }
1068 1079 throwScope.release();
1069 - return handleHostCall(execCallee, calleeAsValue, callLinkInfo);
1080 + return handleHostCall(execCallee, calleeAsValue, callLinkInfo, resultTag);
1070 1081 }
… …
1078 1089 if (!isCall(kind) && functionExecutable->constructAbility() == ConstructAbility::CannotConstruct) {
1079 1090 throwException(exec, throwScope, createNotAConstructorError(exec, function));
1091 + PtrTag thunkTag = ptrTag(JITThunkPtrTag, vm, throwExceptionFromCallSlowPathGenerator);
1080 1092 return encodeResult(
1081 - vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).code().executableAddress(),
1093 + vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).retaggedCode(thunkTag, resultTag).executableAddress(),
1082 1094 reinterpret_cast<void*>(KeepTheFrame));
1083 1095 }
… …
1087 1099 EXCEPTION_ASSERT(throwScope.exception() == reinterpret_cast<Exception*>(error));
1088 1100 if (error) {
1101 + PtrTag thunkTag = ptrTag(JITThunkPtrTag, vm, throwExceptionFromCallSlowPathGenerator);
1089 1102 return encodeResult(
1090 - vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).
code().executableAddress(),1103 vm->getCTIStub(throwExceptionFromCallSlowPathGenerator).retaggedCode(thunkTag, resultTag).executableAddress(), 1091 1104 reinterpret_cast<void*>(KeepTheFrame)); 1092 1105 } 1093 1106 } 1094 1107 return encodeResult(executable->entrypointFor( 1095 kind, MustCheckArity). executableAddress(),1108 kind, MustCheckArity).retagged(CodeEntryWithArityCheckPtrTag, resultTag).executableAddress(), 1096 1109 reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame)); 1097 1110 } … … 1099 1112 SlowPathReturnType JIT_OPERATION operationLinkPolymorphicCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 1100 1113 { 1114 ExecState* exec = execCallee->callerFrame(); 1115 VM* vm = &exec->vm(); 1101 1116 ASSERT(callLinkInfo->specializationKind() == CodeForCall); 1102 1117 JSCell* calleeAsFunctionCell; 1103 SlowPathReturnType result = virtualForWithFunction(execCallee, callLinkInfo, calleeAsFunctionCell); 1118 PtrTag resultTag = ptrTag(OperationLinkPolymorphicCallPtrTag, vm); 1119 SlowPathReturnType result = virtualForWithFunction(execCallee, callLinkInfo, calleeAsFunctionCell, resultTag); 1104 1120 1105 1121 linkPolymorphicCall(execCallee, *callLinkInfo, CallVariant(calleeAsFunctionCell)); … … 1110 1126 SlowPathReturnType JIT_OPERATION operationVirtualCall(ExecState* execCallee, CallLinkInfo* callLinkInfo) 1111 1127 { 1128 ExecState* exec = execCallee->callerFrame(); 1129 VM* vm = &exec->vm(); 1112 1130 JSCell* calleeAsFunctionCellIgnored; 1113 return virtualForWithFunction(execCallee, callLinkInfo, calleeAsFunctionCellIgnored); 1131 PtrTag resultTag = ptrTag(OperationVirtualCallPtrTag, vm); 1132 return virtualForWithFunction(execCallee, callLinkInfo, calleeAsFunctionCellIgnored, resultTag); 1114 1133 } 1115 1134 … … 1762 1781 if (isJSString(baseValue)) { 1763 1782 if (asString(baseValue)->canGetIndex(i)) { 1764 ctiPatchCallByReturnAddress(returnAddress, FunctionPtr(operationGetByValString ));1783 
ctiPatchCallByReturnAddress(returnAddress, FunctionPtr(operationGetByValString, GetPropertyPtrTag)); 1765 1784 scope.release(); 1766 1785 return asString(baseValue)->getIndex(exec, i); … … 1830 1849 byValInfo->arrayProfile->computeUpdatedPrediction(locker, codeBlock, structure); 1831 1850 1832 JIT::compileGetByVal(&vm, exec->codeBlock(), byValInfo, returnAddress, arrayMode);1851 JIT::compileGetByVal(&vm, codeBlock, byValInfo, returnAddress, arrayMode); 1833 1852 optimizationResult = OptimizationResult::Optimized; 1834 1853 } … … 1902 1921 // Don't ever try to optimize. 1903 1922 byValInfo->tookSlowPath = true; 1904 ctiPatchCallByReturnAddress(returnAddress, FunctionPtr(operationGetByValGeneric ));1923 ctiPatchCallByReturnAddress(returnAddress, FunctionPtr(operationGetByValGeneric, GetPropertyPtrTag)); 1905 1924 } 1906 1925 … … 1942 1961 || object->structure(vm)->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero()) { 1943 1962 // Don't ever try to optimize. 1944 ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationHasIndexedPropertyGeneric ));1963 ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(operationHasIndexedPropertyGeneric, HasPropertyPtrTag)); 1945 1964 } 1946 1965 } … … 2002 2021 if (!isJSString(baseValue)) { 2003 2022 ASSERT(exec->bytecodeOffset()); 2004 ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(byValInfo->stubRoutine ? operationGetByValGeneric : operationGetByValOptimize)); 2023 auto getByValFunction = byValInfo->stubRoutine ? 
operationGetByValGeneric : operationGetByValOptimize; 2024 ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), FunctionPtr(getByValFunction, GetPropertyPtrTag)); 2005 2025 } 2006 2026 } else { … … 2151 2171 } 2152 2172 2173 assertIsTaggedWith(result, ptrTag(SwitchTablePtrTag, &jumpTable)); 2153 2174 return reinterpret_cast<char*>(result); 2154 2175 } … … 2169 2190 else 2170 2191 result = jumpTable.ctiDefault.executableAddress(); 2192 assertIsTaggedWith(result, ptrTag(SwitchTablePtrTag, &jumpTable)); 2171 2193 return reinterpret_cast<char*>(result); 2172 2194 } … … 2188 2210 result = jumpTable.ctiDefault.executableAddress(); 2189 2211 2212 assertIsTaggedWith(result, ptrTag(SwitchTablePtrTag, &jumpTable)); 2190 2213 return reinterpret_cast<char*>(result); 2191 2214 } -
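The hunks above thread a `PtrTag` through the call slow paths so that a code pointer produced under one tag (for example `CodeEntryWithArityCheckPtrTag`) is explicitly retagged for whatever tag its consumer expects, as in `codePtr.retagged(codeTag, linkedTargetTag)`. A minimal sketch of that tag/retag discipline, using a hypothetical XOR-into-high-bits encoding rather than JSC's real `PtrTag` implementation:

```cpp
#include <cassert>
#include <cstdint>

// Toy stand-in for JSC's PtrTag discipline (hypothetical encoding and tag
// values, not the real ones): a tag is XORed into the unused high bits of a
// code pointer, so a raw pointer is never used directly and a consumer must
// know the producer's tag to recover it.
enum PtrTag : uint64_t {
    NoPtrTag = 0,
    CodeEntryPtrTag = 0xA1,
    CodeEntryWithArityCheckPtrTag = 0xB2,
};

inline void* tagCodePtr(void* ptr, PtrTag tag)
{
    return reinterpret_cast<void*>(reinterpret_cast<uint64_t>(ptr) ^ (static_cast<uint64_t>(tag) << 48));
}

inline void* untagCodePtr(void* ptr, PtrTag tag)
{
    return tagCodePtr(ptr, tag); // XOR is its own inverse.
}

// Analogue of codePtr.retagged(oldTag, newTag) in the hunks above: strip the
// tag the pointer was produced under, then apply the consumer's tag.
inline void* retagCodePtr(void* ptr, PtrTag oldTag, PtrTag newTag)
{
    return tagCodePtr(untagCodePtr(ptr, oldTag), newTag);
}
```

Under this scheme a pointer tagged for one consumer is unusable by another until retagged, which is the property the changeset relies on when handing entry points between linking operations.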
trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
r229852 r230129 55 55 JSInterfaceJIT jit(vm); 56 56 JumpList failures; 57 jit.tagReturnAddress(); 57 58 failures.append(jit.branchStructure( 58 59 NotEqual, … … 91 92 92 93 LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID); 93 return FINALIZE_CODE(patchBuffer, N oPtrTag, "String get_by_val stub");94 return FINALIZE_CODE(patchBuffer, NearCallPtrTag, "String get_by_val stub"); 94 95 } 95 96 … … 238 239 gen.slowPathJump().link(this); 239 240 240 Call call = callOperationWithProfile(operationGetByIdOptimize, dst, gen.stubInfo(), regT0, propertyName.impl());241 Call call = callOperationWithProfile(operationGetByIdOptimize, GetPropertyPtrTag, dst, gen.stubInfo(), regT0, propertyName.impl()); 241 242 gen.reportSlowPathCall(coldPathBegin, call); 242 243 slowDoneCase = jump(); … … 276 277 emitGetVirtualRegister(base, regT0); 277 278 emitGetVirtualRegister(property, regT1); 278 Call call = callOperation(operationGetByValOptimize, NoPtrTag, dst, regT0, regT1, byValInfo);279 Call call = callOperation(operationGetByValOptimize, GetPropertyPtrTag, dst, regT0, regT1, byValInfo); 279 280 280 281 m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; … … 456 457 gen.slowPathJump().link(this); 457 458 458 Call call = callOperation(gen.slowPathFunction(), gen.stubInfo(), regT1, regT0, propertyName.impl());459 Call call = callOperation(gen.slowPathFunction(), PutPropertyPtrTag, gen.stubInfo(), regT1, regT0, propertyName.impl()); 459 460 gen.reportSlowPathCall(coldPathBegin, call); 460 461 doneCases.append(jump()); … … 493 494 emitGetVirtualRegister(value, regT2); 494 495 bool isDirect = Interpreter::getOpcodeID(currentInstruction->u.opcode) == op_put_by_val_direct; 495 Call call = callOperation(isDirect ? operationDirectPutByValOptimize : operationPutByValOptimize, NoPtrTag, regT0, regT1, regT2, byValInfo);496 Call call = callOperation(isDirect ? 
operationDirectPutByValOptimize : operationPutByValOptimize, PutPropertyPtrTag, regT0, regT1, regT2, byValInfo); 496 497 497 498 m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath; … … 594 595 Label coldPathBegin = label(); 595 596 596 Call call = callOperation(operationTryGetByIdOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl());597 Call call = callOperation(operationTryGetByIdOptimize, GetPropertyPtrTag, resultVReg, gen.stubInfo(), regT0, ident->impl()); 597 598 598 599 gen.reportSlowPathCall(coldPathBegin, call); … … 656 657 657 658 Label coldPathBegin = label(); 658 659 Call call = callOperationWithProfile(operationGetByIdOptimize, resultVReg, gen.stubInfo(), regT0, ident->impl());659 660 Call call = callOperationWithProfile(operationGetByIdOptimize, GetPropertyPtrTag, resultVReg, gen.stubInfo(), regT0, ident->impl()); 660 661 661 662 gen.reportSlowPathCall(coldPathBegin, call); … … 672 673 673 674 Label coldPathBegin = label(); 674 675 Call call = callOperationWithProfile(operationGetByIdWithThisOptimize, resultVReg, gen.stubInfo(), regT0, regT1, ident->impl());675 676 Call call = callOperationWithProfile(operationGetByIdWithThisOptimize, GetPropertyPtrTag, resultVReg, gen.stubInfo(), regT0, regT1, ident->impl()); 676 677 677 678 gen.reportSlowPathCall(coldPathBegin, call); … … 715 716 JITPutByIdGenerator& gen = m_putByIds[m_putByIdIndex++]; 716 717 717 Call call = callOperation( 718 gen.slowPathFunction(), gen.stubInfo(), regT1, regT0, ident->impl()); 718 Call call = callOperation(gen.slowPathFunction(), PutPropertyPtrTag, gen.stubInfo(), regT1, regT0, ident->impl()); 719 719 720 720 gen.reportSlowPathCall(coldPathBegin, call); … … 1259 1259 1260 1260 byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( 1261 m_codeBlock, patchBuffer, N oPtrTag,1261 m_codeBlock, patchBuffer, NearJumpPtrTag, 1262 1262 "Baseline get_by_val stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); 1263 1263 1264 1264 
MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel(byValInfo->stubRoutine->code().code())); 1265 MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(operationGetByValGeneric, SlowPathPtrTag));1265 MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(operationGetByValGeneric, GetPropertyPtrTag)); 1266 1266 } 1267 1267 … … 1291 1291 1292 1292 byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( 1293 m_codeBlock, patchBuffer, N oPtrTag,1293 m_codeBlock, patchBuffer, NearJumpPtrTag, 1294 1294 "Baseline get_by_val with cached property name '%s' stub for %s, return point %p", propertyName.impl()->utf8().data(), toCString(*m_codeBlock).data(), returnAddress.value()); 1295 1295 byValInfo->stubInfo = gen.stubInfo(); 1296 1296 1297 1297 MacroAssembler::repatchJump(byValInfo->notIndexJump, CodeLocationLabel(byValInfo->stubRoutine->code().code())); 1298 MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(operationGetByValGeneric, SlowPathPtrTag));1298 MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(operationGetByValGeneric, GetPropertyPtrTag)); 1299 1299 } 1300 1300 … … 1339 1339 patchBuffer.link(done, byValInfo->badTypeJump.labelAtOffset(byValInfo->badTypeJumpToDone)); 1340 1340 if (needsLinkForWriteBarrier) { 1341 ASSERT( m_calls.last().callee.executableAddress() == operationWriteBarrierSlowPath);1342 patchBuffer.link(m_calls.last().from, FunctionPtr(operationWriteBarrierSlowPath, SlowPathPtrTag));1341 ASSERT(removeCodePtrTag(m_calls.last().callee.executableAddress()) == removeCodePtrTag(operationWriteBarrierSlowPath)); 1342 patchBuffer.link(m_calls.last().from, m_calls.last().callee); 1343 1343 } 1344 1344 … … 1346 1346 if (!isDirect) { 1347 1347 byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( 1348 m_codeBlock, patchBuffer, N oPtrTag,1348 m_codeBlock, patchBuffer, NearJumpPtrTag, 
1349 1349 "Baseline put_by_val stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); 1350 1350 1351 1351 } else { 1352 1352 byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( 1353 m_codeBlock, patchBuffer, N oPtrTag,1353 m_codeBlock, patchBuffer, NearJumpPtrTag, 1354 1354 "Baseline put_by_val_direct stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value()); 1355 1355 } 1356 1356 MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel(byValInfo->stubRoutine->code().code())); 1357 MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(isDirect ? operationDirectPutByValGeneric : operationPutByValGeneric, SlowPathPtrTag));1357 MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(isDirect ? operationDirectPutByValGeneric : operationPutByValGeneric, PutPropertyPtrTag)); 1358 1358 } 1359 1359 … … 1381 1381 1382 1382 byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB( 1383 m_codeBlock, patchBuffer, N oPtrTag,1383 m_codeBlock, patchBuffer, NearJumpPtrTag, 1384 1384 "Baseline put_by_val%s with cached property name '%s' stub for %s, return point %p", (putKind == Direct) ? "_direct" : "", propertyName.impl()->utf8().data(), toCString(*m_codeBlock).data(), returnAddress.value()); 1385 1385 byValInfo->stubInfo = gen.stubInfo(); 1386 1386 1387 1387 MacroAssembler::repatchJump(byValInfo->notIndexJump, CodeLocationLabel(byValInfo->stubRoutine->code().code())); 1388 MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(putKind == Direct ? operationDirectPutByValGeneric : operationPutByValGeneric, SlowPathPtrTag));1388 MacroAssembler::repatchCall(CodeLocationCall(MacroAssemblerCodePtr(returnAddress)), FunctionPtr(putKind == Direct ? operationDirectPutByValGeneric : operationPutByValGeneric, PutPropertyPtrTag)); 1389 1389 } 1390 1390 -
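Throughout this file, bare `FunctionPtr(op)` call targets become `FunctionPtr(op, GetPropertyPtrTag)` or `FunctionPtr(op, PutPropertyPtrTag)`, so a repatched call site only works when patcher and call site agree on the tag. A sketch of that idea under the same hypothetical XOR encoding as before (toy code; the real JSC `FunctionPtr` and tag values differ):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical tag values and encoding, for illustration only.
enum PtrTag : uint64_t { GetPropertyPtrTag = 0x11, PutPropertyPtrTag = 0x22 };

using Operation = void (*)();

// Minimal FunctionPtr analogue: constructing it with a tag means the stored
// bits are useless to anyone who does not untag with that same tag.
struct FunctionPtr {
    uint64_t bits;
    FunctionPtr(Operation fn, PtrTag tag)
        : bits(reinterpret_cast<uint64_t>(fn) ^ (static_cast<uint64_t>(tag) << 48)) { }

    Operation untagged(PtrTag tag) const
    {
        return reinterpret_cast<Operation>(bits ^ (static_cast<uint64_t>(tag) << 48));
    }
};

static int stubCalls = 0;
static void operationStub() { ++stubCalls; } // stand-in for a real slow-path operation
```

A call site patched with such a pointer can only reach the operation by untagging with the tag it was patched under, which is what makes a tag mismatch fail loudly instead of silently jumping to the wrong target.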
trunk/Source/JavaScriptCore/jit/JITThunks.cpp
r229547 r230129
125 125	    if (generator) {
126 126	        MacroAssemblerCodeRef entry = generator(vm);
127    	        forCall = adoptRef(new DirectJITCode(entry, entry.code(), JITCode::HostCallThunk));
    127	        forCall = adoptRef(new DirectJITCode(entry, entry.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag), JITCode::HostCallThunk));
128 128	    } else
129 129	        forCall = adoptRef(new NativeJITCode(MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeCall(vm)), JITCode::HostCallThunk));
-
trunk/Source/JavaScriptCore/jit/Repatch.cpp
r229856 r230129 194 194 bool generatedCodeInline = InlineAccess::generateArrayLength(stubInfo, jsCast<JSArray*>(baseCell)); 195 195 if (generatedCodeInline) { 196 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));196 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), FunctionPtr(appropriateOptimizingGetByIdFunction(kind), GetPropertyPtrTag)); 197 197 stubInfo.initArrayLength(); 198 198 return RetryCacheLater; … … 251 251 LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName)); 252 252 structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset()); 253 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));253 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), FunctionPtr(appropriateOptimizingGetByIdFunction(kind), GetPropertyPtrTag)); 254 254 stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset()); 255 255 return RetryCacheLater; … … 369 369 SuperSamplerScope superSamplerScope(false); 370 370 371 if (tryCacheGetByID(exec, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache) 372 ftlThunkAwareRepatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericGetByIdFunction(kind)); 371 if (tryCacheGetByID(exec, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache) { 372 CodeBlock* codeBlock = exec->codeBlock(); 373 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), FunctionPtr(appropriateGenericGetByIdFunction(kind), GetPropertyPtrTag)); 374 } 373 375 } 374 376 … … 387 389 static V_JITOperation_ESsiJJI appropriateOptimizingPutByIdFunction(const PutPropertySlot &slot, PutKind putKind) 388 390 { 389 auto pickSlowPath = [&] () -> V_JITOperation_ESsiJJI { 390 if (slot.isStrictMode()) { 391 if (putKind == Direct) 392 return operationPutByIdDirectStrictOptimize; 393 return operationPutByIdStrictOptimize; 394 } 391 if 
(slot.isStrictMode()) { 395 392 if (putKind == Direct) 396 return operationPutByIdDirectNonStrictOptimize; 397 return operationPutByIdNonStrictOptimize; 398 }; 399 return tagCFunctionPtr(pickSlowPath(), SlowPathPtrTag); 393 return operationPutByIdDirectStrictOptimize; 394 return operationPutByIdStrictOptimize; 395 } 396 if (putKind == Direct) 397 return operationPutByIdDirectNonStrictOptimize; 398 return operationPutByIdNonStrictOptimize; 400 399 } 401 400 … … 444 443 if (generatedCodeInline) { 445 444 LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident)); 446 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingPutByIdFunction(slot, putKind));445 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), FunctionPtr(appropriateOptimizingPutByIdFunction(slot, putKind), PutPropertyPtrTag)); 447 446 stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset()); 448 447 return RetryCacheLater; … … 576 575 SuperSamplerScope superSamplerScope(false); 577 576 578 if (tryCachePutByID(exec, baseValue, structure, propertyName, slot, stubInfo, putKind) == GiveUpOnCache) 579 ftlThunkAwareRepatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericPutByIdFunction(slot, putKind)); 577 if (tryCachePutByID(exec, baseValue, structure, propertyName, slot, stubInfo, putKind) == GiveUpOnCache) { 578 CodeBlock* codeBlock = exec->codeBlock(); 579 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), FunctionPtr(appropriateGenericPutByIdFunction(slot, putKind), PutPropertyPtrTag)); 580 } 580 581 } 581 582 … … 699 700 void linkFor( 700 701 ExecState* exec, CallLinkInfo& callLinkInfo, CodeBlock* calleeCodeBlock, 701 JSObject* callee, MacroAssemblerCodePtr codePtr )702 JSObject* callee, MacroAssemblerCodePtr codePtr, PtrTag codeTag) 702 703 { 703 704 ASSERT(!callLinkInfo.stub()); … … 721 722 dataLog("Linking call in ", FullCodeOrigin(callerCodeBlock, callLinkInfo.codeOrigin()), " to ", 
pointerDump(calleeCodeBlock), ", entrypoint at ", codePtr, "\n"); 722 723 723 MacroAssembler::repatchNearCall(callLinkInfo.hotPathOther(), CodeLocationLabel(codePtr ));724 MacroAssembler::repatchNearCall(callLinkInfo.hotPathOther(), CodeLocationLabel(codePtr.retagged(codeTag, NearCallPtrTag))); 724 725 725 726 if (calleeCodeBlock) … … 768 769 static void revertCall(VM* vm, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef) 769 770 { 771 assertIsTaggedWith(codeRef.code().executableAddress(), NearCallPtrTag); 770 772 if (callLinkInfo.isDirect()) { 771 773 callLinkInfo.clearCodeBlock(); … … 1086 1088 auto stubRoutine = adoptRef(*new PolymorphicCallStubRoutine( 1087 1089 FINALIZE_CODE_FOR( 1088 callerCodeBlock, patchBuffer, N oPtrTag,1090 callerCodeBlock, patchBuffer, NearJumpPtrTag, 1089 1091 "Polymorphic call stub for %s, return point %p, targets %s", 1090 1092 isWebAssembly ? "WebAssembly" : toCString(*callerCodeBlock).data(), callLinkInfo.callReturnLocation().labelAtOffset(0).executableAddress(), … … 1113 1115 void resetGetByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo, GetByIDKind kind) 1114 1116 { 1115 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));1117 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), FunctionPtr(appropriateOptimizingGetByIdFunction(kind), GetPropertyPtrTag)); 1116 1118 InlineAccess::rewireStubAsJump(stubInfo, stubInfo.slowPathStartLocation()); 1117 1119 } … … 1119 1121 void resetPutByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo) 1120 1122 { 1121 V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.slowPathCallLocation()).executableAddress());1123 V_JITOperation_ESsiJJI unoptimizedFunction = untagCFunctionPtr<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.slowPathCallLocation()).executableAddress(), PutPropertyPtrTag); 1122 1124 V_JITOperation_ESsiJJI 
optimizedFunction; 1123 1125 if (unoptimizedFunction == operationPutByIdStrict || unoptimizedFunction == operationPutByIdStrictOptimize) … … 1132 1134 } 1133 1135 1134 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), tagCFunctionPtr(optimizedFunction, SlowPathPtrTag));1136 ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), FunctionPtr(optimizedFunction, PutPropertyPtrTag)); 1135 1137 InlineAccess::rewireStubAsJump(stubInfo, stubInfo.slowPathStartLocation()); 1136 1138 } -
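In the `resetPutByID` hunk above, the raw call target can no longer be compared against `operationPutByIdStrict` and friends, because the installed pointer now carries `PutPropertyPtrTag`; it is first stripped with `untagCFunctionPtr`, and the write-barrier assert likewise compares both sides through `removeCodePtrTag`. A toy illustration of why tag-stripping must precede pointer comparison (hypothetical layout, not JSC's):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical layout: tags occupy the otherwise-unused top 16 bits.
static const uint64_t kTagMask = 0xFFFFull << 48;

inline void* tagPtr(void* ptr, uint64_t tag)
{
    return reinterpret_cast<void*>(reinterpret_cast<uint64_t>(ptr) | (tag << 48));
}

// Analogue of removeCodePtrTag: without this, a tagged call target never
// compares equal to the plain address of the operation it points at.
inline void* removeCodePtrTag(void* ptr)
{
    return reinterpret_cast<void*>(reinterpret_cast<uint64_t>(ptr) & ~kTagMask);
}
```
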
trunk/Source/JavaScriptCore/jit/Repatch.h
r224487 r230129
1 1	/*
2  	 * Copyright (C) 2011, 2015 Apple Inc. All rights reserved.
  2	 * Copyright (C) 2011-2018 Apple Inc. All rights reserved.
3 3	 *
4 4	 * Redistribution and use in source and binary forms, with or without
…
45 45	void buildPutByIdList(ExecState*, JSValue, Structure*, const Identifier&, const PutPropertySlot&, StructureStubInfo&, PutKind);
46 46	void repatchIn(ExecState*, JSCell*, const Identifier&, bool wasFound, const PropertySlot&, StructureStubInfo&);
47  	void linkFor(ExecState*, CallLinkInfo&, CodeBlock*, JSObject* callee, MacroAssemblerCodePtr);
  47	void linkFor(ExecState*, CallLinkInfo&, CodeBlock*, JSObject* callee, MacroAssemblerCodePtr, PtrTag);
48 48	void linkDirectFor(ExecState*, CallLinkInfo&, CodeBlock*, MacroAssemblerCodePtr);
49 49	void linkSlowFor(ExecState*, CallLinkInfo&);
-
trunk/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
r229609 r230129
171 171	        for (unsigned i = 0; i < m_calls.size(); i++)
172 172	            patchBuffer.link(m_calls[i].first, m_calls[i].second);
173    	        return FINALIZE_CODE(patchBuffer, NoPtrTag, "Specialized thunk for %s", thunkKind);
    173	        return FINALIZE_CODE(patchBuffer, CodeEntryPtrTag, "Specialized thunk for %s", thunkKind);
174 174	    }
175 175	
…
178 178	    void callDoubleToDouble(FunctionPtr function)
179 179	    {
180    	        m_calls.append(std::make_pair(call(SlowPathPtrTag), function));
    180	        assertIsCFunctionPtr(function.executableAddress());
    181	        PtrTag tag = ptrTag(SpecializedThunkPtrTag, nextPtrTagID());
    182	        m_calls.append(std::make_pair(call(tag), FunctionPtr(function, tag)));
181 183	    }
182 184	
-
trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp
r229767 r230129 45 45 namespace JSC { 46 46 47 inline void emitPointerValidation(CCallHelpers& jit, GPRReg pointerGPR) 47 template<typename TagType> 48 inline void emitPointerValidation(CCallHelpers& jit, GPRReg pointerGPR, TagType tag) 48 49 { 49 50 if (ASSERT_DISABLED) … … 53 54 isNonZero.link(&jit); 54 55 jit.pushToSave(pointerGPR); 56 jit.untagPtr(pointerGPR, tag); 55 57 jit.load8(pointerGPR, pointerGPR); 56 58 jit.popToRestore(pointerGPR); … … 69 71 jit.copyCalleeSavesToEntryFrameCalleeSavesBuffer(vm->topEntryFrame); 70 72 73 PtrTag callTag = ptrTag(JITThunkPtrTag, nextPtrTagID()); 71 74 jit.setupArguments<decltype(lookupExceptionHandler)>(CCallHelpers::TrustedImmPtr(vm), GPRInfo::callFrameRegister); 72 jit.move(CCallHelpers::TrustedImmPtr( bitwise_cast<void*>(lookupExceptionHandler)), GPRInfo::nonArgGPR0);73 emitPointerValidation(jit, GPRInfo::nonArgGPR0 );74 jit.call(GPRInfo::nonArgGPR0, NoPtrTag);75 jit.move(CCallHelpers::TrustedImmPtr(tagCFunctionPtr(lookupExceptionHandler, callTag)), GPRInfo::nonArgGPR0); 76 emitPointerValidation(jit, GPRInfo::nonArgGPR0, callTag); 77 jit.call(GPRInfo::nonArgGPR0, callTag); 75 78 jit.jumpToExceptionHandler(*vm); 76 79 80 PtrTag thunkTag = ptrTag(JITThunkPtrTag, vm, throwExceptionFromCallSlowPathGenerator); 77 81 LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID); 78 return FINALIZE_CODE(patchBuffer, NoPtrTag, "Throw exception from call slow path thunk");82 return FINALIZE_CODE(patchBuffer, thunkTag, "Throw exception from call slow path thunk"); 79 83 } 80 84 81 85 static void slowPathFor( 82 CCallHelpers& jit, VM* vm, Sprt_JITOperation_ECli slowPathFunction) 83 { 86 CCallHelpers& jit, VM* vm, Sprt_JITOperation_ECli slowPathFunction, PtrTag expectedLinkedTargetTag) 87 { 88 PtrTag callTag = ptrTag(JITThunkPtrTag, nextPtrTagID()); 89 84 90 jit.sanitizeStackInline(*vm, GPRInfo::nonArgGPR0); 85 91 jit.emitFunctionPrologue(); … … 94 100 jit.addPtr(CCallHelpers::TrustedImm32(32), CCallHelpers::stackPointerRegister, 
GPRInfo::argumentGPR0); 95 101 jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1); 96 jit.move(CCallHelpers::TrustedImmPtr( bitwise_cast<void*>(slowPathFunction)), GPRInfo::nonArgGPR0);97 emitPointerValidation(jit, GPRInfo::nonArgGPR0 );98 jit.call(GPRInfo::nonArgGPR0, NoPtrTag);102 jit.move(CCallHelpers::TrustedImmPtr(tagCFunctionPtr(slowPathFunction, callTag)), GPRInfo::nonArgGPR0); 103 emitPointerValidation(jit, GPRInfo::nonArgGPR0, callTag); 104 jit.call(GPRInfo::nonArgGPR0, callTag); 99 105 jit.loadPtr(CCallHelpers::Address(GPRInfo::returnValueGPR, 8), GPRInfo::returnValueGPR2); 100 106 jit.loadPtr(CCallHelpers::Address(GPRInfo::returnValueGPR), GPRInfo::returnValueGPR); … … 104 110 jit.addPtr(CCallHelpers::TrustedImm32(-maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); 105 111 jit.setupArguments<decltype(slowPathFunction)>(GPRInfo::regT2); 106 jit.move(CCallHelpers::TrustedImmPtr( bitwise_cast<void*>(slowPathFunction)), GPRInfo::nonArgGPR0);107 emitPointerValidation(jit, GPRInfo::nonArgGPR0 );108 jit.call(GPRInfo::nonArgGPR0, NoPtrTag);112 jit.move(CCallHelpers::TrustedImmPtr(tagCFunctionPtr(slowPathFunction, callTag)), GPRInfo::nonArgGPR0); 113 emitPointerValidation(jit, GPRInfo::nonArgGPR0, callTag); 114 jit.call(GPRInfo::nonArgGPR0, callTag); 109 115 if (maxFrameExtentForSlowPathCall) 110 116 jit.addPtr(CCallHelpers::TrustedImm32(maxFrameExtentForSlowPathCall), CCallHelpers::stackPointerRegister); … … 117 123 // The second return value GPR will hold a non-zero value for tail calls. 
118 124 119 emitPointerValidation(jit, GPRInfo::returnValueGPR );125 emitPointerValidation(jit, GPRInfo::returnValueGPR, expectedLinkedTargetTag); 120 126 jit.emitFunctionEpilogue(); 127 jit.untagReturnAddress(); 121 128 122 129 RELEASE_ASSERT(reinterpret_cast<void*>(KeepTheFrame) == reinterpret_cast<void*>(0)); … … 127 134 128 135 doNotTrash.link(&jit); 129 jit.jump(GPRInfo::returnValueGPR, NoPtrTag);136 jit.jump(GPRInfo::returnValueGPR, expectedLinkedTargetTag); 130 137 } 131 138 … … 138 145 // been adjusted, and all other registers to be available for use. 139 146 CCallHelpers jit; 140 141 slowPathFor(jit, vm, operationLinkCall); 142 147 148 PtrTag expectedLinkedTargetTag = ptrTag(OperationLinkCallPtrTag, vm); 149 slowPathFor(jit, vm, operationLinkCall, expectedLinkedTargetTag); 150 143 151 LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID); 144 return FINALIZE_CODE(patchBuffer, N oPtrTag, "Link call slow path thunk");152 return FINALIZE_CODE(patchBuffer, NearCallPtrTag, "Link call slow path thunk"); 145 153 } 146 154 … … 150 158 { 151 159 CCallHelpers jit; 152 153 slowPathFor(jit, vm, operationLinkPolymorphicCall); 154 160 161 PtrTag expectedLinkedTargetTag = ptrTag(OperationLinkPolymorphicCallPtrTag, vm); 162 slowPathFor(jit, vm, operationLinkPolymorphicCall, expectedLinkedTargetTag); 163 155 164 LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID); 156 return FINALIZE_CODE(patchBuffer, N oPtrTag, "Link polymorphic call slow path thunk");165 return FINALIZE_CODE(patchBuffer, NearCallPtrTag, "Link polymorphic call slow path thunk"); 157 166 } 158 167 … … 201 210 202 211 // Now we know we have a JSFunction. 203 212 204 213 jit.loadPtr( 205 214 CCallHelpers::Address(GPRInfo::regT0, JSFunction::offsetOfExecutable()), … … 222 231 // Make a tail call. This will return back to JIT code. 
223 232 JSInterfaceJIT::Label callCode(jit.label()); 224 emitPointerValidation(jit, GPRInfo::regT4 );233 emitPointerValidation(jit, GPRInfo::regT4, CodeEntryWithArityCheckPtrTag); 225 234 if (callLinkInfo.isTailCall()) { 226 235 jit.preserveReturnAddressAfterCall(GPRInfo::regT0); 227 236 jit.prepareForTailCallSlow(GPRInfo::regT4); 228 237 } 229 jit.jump(GPRInfo::regT4, NoPtrTag);238 jit.jump(GPRInfo::regT4, CodeEntryWithArityCheckPtrTag); 230 239 231 240 notJSFunction.link(&jit); 232 241 slowCase.append(jit.branchIfNotType(GPRInfo::regT0, InternalFunctionType)); 233 jit.move(CCallHelpers::TrustedImmPtr(vm->getCTIInternalFunctionTrampolineFor(callLinkInfo.specializationKind()).executableAddress()), GPRInfo::regT4); 242 void* executableAddress = vm->getCTIInternalFunctionTrampolineFor(callLinkInfo.specializationKind()).retagged(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag).executableAddress(); 243 jit.move(CCallHelpers::TrustedImmPtr(executableAddress), GPRInfo::regT4); 234 244 jit.jump().linkTo(callCode, &jit); 235 245 … … 237 247 238 248 // Here we don't know anything, so revert to the full slow path. 239 slowPathFor(jit, vm, operationVirtualCall); 249 PtrTag expectedLinkedTargetTag = ptrTag(OperationVirtualCallPtrTag, vm); 250 slowPathFor(jit, vm, operationVirtualCall, expectedLinkedTargetTag); 240 251 241 252 LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID); 242 253 return FINALIZE_CODE( 243 patchBuffer, N oPtrTag,254 patchBuffer, NearCallPtrTag, 244 255 "Virtual %s slow path thunk", 245 256 callLinkInfo.callMode() == CallMode::Regular ? "call" : callLinkInfo.callMode() == CallMode::Tail ? 
        "tail call" : "construct");
…
    jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::regT1, JSFunction::offsetOfExecutable()), JSInterfaceJIT::regT1);
    jit.xorPtr(JSInterfaceJIT::TrustedImmPtr(JSFunctionPoison::key()), JSInterfaceJIT::regT1);
-   jit.call(JSInterfaceJIT::Address(JSInterfaceJIT::regT1, executableOffsetToFunction), NoPtrTag);
+   jit.call(JSInterfaceJIT::Address(JSInterfaceJIT::regT1, executableOffsetToFunction), CodeEntryPtrTag);
    } else
-   jit.call(JSInterfaceJIT::Address(JSInterfaceJIT::regT1, InternalFunction::offsetOfNativeFunctionFor(kind)), NoPtrTag);
+   jit.call(JSInterfaceJIT::Address(JSInterfaceJIT::regT1, InternalFunction::offsetOfNativeFunctionFor(kind)), CodeEntryPtrTag);
    jit.addPtr(JSInterfaceJIT::TrustedImm32(8), JSInterfaceJIT::stackPointerRegister);
…
    jit.move(JSInterfaceJIT::TrustedImm64(NativeCodePoison::key()), X86Registers::esi);
    jit.xor64(X86Registers::esi, X86Registers::r9);
-   jit.call(X86Registers::r9, NoPtrTag);
+   jit.call(X86Registers::r9, CodeEntryPtrTag);
#else
…
    jit.loadPtr(JSInterfaceJIT::Address(X86Registers::edx, JSFunction::offsetOfExecutable()), X86Registers::r9);
    jit.xorPtr(JSInterfaceJIT::TrustedImmPtr(JSFunctionPoison::key()), X86Registers::r9);
-   jit.call(JSInterfaceJIT::Address(X86Registers::r9, executableOffsetToFunction), NoPtrTag);
+   jit.call(JSInterfaceJIT::Address(X86Registers::r9, executableOffsetToFunction), CodeEntryPtrTag);
    } else
-   jit.call(JSInterfaceJIT::Address(X86Registers::edx, InternalFunction::offsetOfNativeFunctionFor(kind)), NoPtrTag);
+   jit.call(JSInterfaceJIT::Address(X86Registers::edx, InternalFunction::offsetOfNativeFunctionFor(kind)), CodeEntryPtrTag);
    jit.addPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);
…
    jit.move(JSInterfaceJIT::TrustedImm64(NativeCodePoison::key()), ARM64Registers::x1);
    jit.xor64(ARM64Registers::x1, ARM64Registers::x2);
-   jit.call(ARM64Registers::x2, NoPtrTag);
+   jit.call(ARM64Registers::x2, CodeEntryPtrTag);
#elif CPU(ARM) || CPU(MIPS)
…
    jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::argumentGPR1, JSFunction::offsetOfExecutable()), JSInterfaceJIT::regT2);
    jit.xorPtr(JSInterfaceJIT::TrustedImmPtr(JSFunctionPoison::key()), JSInterfaceJIT::regT2);
-   jit.call(JSInterfaceJIT::Address(JSInterfaceJIT::regT2, executableOffsetToFunction), NoPtrTag);
+   jit.call(JSInterfaceJIT::Address(JSInterfaceJIT::regT2, executableOffsetToFunction), CodeEntryPtrTag);
    } else
-   jit.call(JSInterfaceJIT::Address(JSInterfaceJIT::argumentGPR1, InternalFunction::offsetOfNativeFunctionFor(kind)), NoPtrTag);
+   jit.call(JSInterfaceJIT::Address(JSInterfaceJIT::argumentGPR1, InternalFunction::offsetOfNativeFunctionFor(kind)), CodeEntryPtrTag);
#if CPU(MIPS)
…
    LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
-   return FINALIZE_CODE(patchBuffer, NoPtrTag, "%s %s%s trampoline", thunkFunctionType == ThunkFunctionType::JSFunction ? "native" : "internal", entryType == EnterViaJumpWithSavedTags ? "Tail With Saved Tags " : entryType == EnterViaJumpWithoutSavedTags ? "Tail Without Saved Tags " : "", toCString(kind).data());
+   return FINALIZE_CODE(patchBuffer, CodeEntryPtrTag, "%s %s%s trampoline", thunkFunctionType == ThunkFunctionType::JSFunction ? "native" : "internal", entryType == EnterViaJumpWithSavedTags ? "Tail With Saved Tags " : entryType == EnterViaJumpWithoutSavedTags ? "Tail Without Saved Tags " : "", toCString(kind).data());
}
…
    jit.pop(JSInterfaceJIT::regT4);
# endif
+   jit.tagReturnAddress();
    jit.move(JSInterfaceJIT::callFrameRegister, JSInterfaceJIT::regT3);
    jit.load32(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister, CallFrameSlot::argumentCount * sizeof(Register)), JSInterfaceJIT::argumentGPR2);
…
    jit.lshift64(JSInterfaceJIT::TrustedImm32(3), extraTemp);
    jit.addPtr(extraTemp, JSInterfaceJIT::callFrameRegister);
+   jit.untagReturnAddress();
    jit.addPtr(extraTemp, JSInterfaceJIT::stackPointerRegister);
+   jit.tagReturnAddress();

    // Move current frame down argumentGPR0 number of slots
…
# endif
    jit.ret();
-#else
+#else // USE(JSVALUE64) section above, USE(JSVALUE32_64) section below.
# if CPU(X86)
    jit.pop(JSInterfaceJIT::regT4);
…
# endif
    jit.ret();
-#endif
+#endif // End of USE(JSVALUE32_64) section.

    LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
-   return FINALIZE_CODE(patchBuffer, NoPtrTag, "fixup arity");
+   PtrTag tag = ptrTag(JITThunkPtrTag, vm);
+   return FINALIZE_CODE(patchBuffer, tag, "fixup arity");
}
…
    LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
-   return FINALIZE_CODE(patchBuffer, NoPtrTag, "unreachable thunk");
+   return FINALIZE_CODE(patchBuffer, NearCallPtrTag, "unreachable thunk");
}
…
    jit.xor64(GPRInfo::regT1, GPRInfo::regT0);
#endif
-   emitPointerValidation(jit, GPRInfo::regT0);
-   jit.call(GPRInfo::regT0, NoPtrTag);
+   emitPointerValidation(jit, GPRInfo::regT0, CodeEntryWithArityCheckPtrTag);
+   jit.call(GPRInfo::regT0, CodeEntryWithArityCheckPtrTag);

    jit.emitFunctionEpilogue();
    jit.ret();
…
    linkBuffer.link(noCode, CodeLocationLabel(vm->jitStubs->ctiNativeTailCallWithoutSavedTags(vm)));
    return FINALIZE_CODE(
-       linkBuffer, NoPtrTag, "Specialized thunk for bound function calls with no arguments");
+       linkBuffer, CodeEntryPtrTag, "Specialized thunk for bound function calls with no arguments");
}
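The retag churn above (`NoPtrTag` replaced by `CodeEntryPtrTag` or `CodeEntryWithArityCheckPtrTag` at each call site) is easier to follow with a toy model of tagged code pointers. This is an illustrative sketch only: the tag values and the XOR-into-high-bits encoding are invented stand-ins, not JSC's implementation (on ARM64E the real scheme signs pointers with PAC instructions; elsewhere tagging can compile to a no-op):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical tag values standing in for JSC's PtrTag enumerators.
enum PtrTag : uint64_t {
    NoPtrTag = 0,
    CodeEntryPtrTag = 0x11,
    CodeEntryWithArityCheckPtrTag = 0x22,
};

// Model tagging as an XOR into the (normally unused) high pointer bits.
static inline void* tagCodePtr(void* ptr, PtrTag tag)
{
    return reinterpret_cast<void*>(reinterpret_cast<uint64_t>(ptr) ^ (static_cast<uint64_t>(tag) << 48));
}

// XOR is its own inverse, so untagging with the same tag restores the pointer.
static inline void* untagCodePtr(void* ptr, PtrTag tag)
{
    return tagCodePtr(ptr, tag);
}

// A pointer untagged with the correct tag has clean high bits; the wrong tag
// leaves residue, which is how a tag mismatch can be detected at the call site.
static inline bool isTaggedWith(void* ptr, PtrTag tag)
{
    return !(reinterpret_cast<uint64_t>(untagCodePtr(ptr, tag)) >> 48);
}
```

With this model, calling through a pointer with the wrong expected tag fails validation instead of silently jumping to the wrong kind of entry point, which is the property the thunk changes above rely on.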
trunk/Source/JavaScriptCore/llint/LLIntData.cpp (r229547 r230129)
    }

-   if (VM::canUseJIT()) {
-       for (int i = NUMBER_OF_BYTECODE_IDS; i < NUMBER_OF_BYTECODE_IDS + NUMBER_OF_BYTECODE_HELPER_IDS; ++i)
-           Data::s_opcodeMap[i] = tagCodePtr(Data::s_opcodeMap[i], ptrTag(BytecodeHelperPtrTag, i));
-   } else {
-       static const PtrTag tagsForOpcode[] = {
-           CodeEntryPtrTag, // llint_program_prologue
-           CodeEntryPtrTag, // llint_eval_prologue
-           CodeEntryPtrTag, // llint_module_program_prologue
-           CodeEntryPtrTag, // llint_function_for_call_prologue
-           CodeEntryPtrTag, // llint_function_for_construct_prologue
-           CodeEntryWithArityCheckPtrTag, // llint_function_for_call_arity_check
-           CodeEntryWithArityCheckPtrTag, // llint_function_for_construct_arity_check
-           CodeEntryPtrTag, // llint_generic_return_point
-           BytecodePtrTag, // llint_throw_from_slow_path_trampoline
-           ExceptionHandlerPtrTag, // llint_throw_during_call_trampoline
-           NativeCodePtrTag, // llint_native_call_trampoline
-           NativeCodePtrTag, // llint_native_construct_trampoline
-           NativeCodePtrTag, // llint_internal_function_call_trampoline
-           NativeCodePtrTag, // llint_internal_function_construct_trampoline
-           ExceptionHandlerPtrTag, // handleUncaughtException
-       };
-
-       static_assert(sizeof(tagsForOpcode) / sizeof(tagsForOpcode[0]) == NUMBER_OF_BYTECODE_HELPER_IDS, "");
-       static_assert(static_cast<uintptr_t>(llint_program_prologue) == NUMBER_OF_BYTECODE_IDS, "");
-
-       for (int i = 0; i < NUMBER_OF_BYTECODE_HELPER_IDS; ++i) {
-           int opcodeID = i + NUMBER_OF_BYTECODE_IDS;
-           Data::s_opcodeMap[opcodeID] = tagCodePtr(Data::s_opcodeMap[opcodeID], tagsForOpcode[i]);
-       }
-   }
+   static const PtrTag tagsForOpcode[] = {
+       CodeEntryPtrTag, // llint_program_prologue
+       CodeEntryPtrTag, // llint_eval_prologue
+       CodeEntryPtrTag, // llint_module_program_prologue
+       CodeEntryPtrTag, // llint_function_for_call_prologue
+       CodeEntryPtrTag, // llint_function_for_construct_prologue
+       CodeEntryWithArityCheckPtrTag, // llint_function_for_call_arity_check
+       CodeEntryWithArityCheckPtrTag, // llint_function_for_construct_arity_check
+       CodeEntryPtrTag, // llint_generic_return_point
+       BytecodePtrTag, // llint_throw_from_slow_path_trampoline
+       ExceptionHandlerPtrTag, // llint_throw_during_call_trampoline
+       CodeEntryPtrTag, // llint_native_call_trampoline
+       CodeEntryPtrTag, // llint_native_construct_trampoline
+       CodeEntryPtrTag, // llint_internal_function_call_trampoline
+       CodeEntryPtrTag, // llint_internal_function_construct_trampoline
+       ExceptionHandlerPtrTag, // handleUncaughtException
+   };
+
+   static_assert(sizeof(tagsForOpcode) / sizeof(tagsForOpcode[0]) == NUMBER_OF_BYTECODE_HELPER_IDS, "");
+   static_assert(static_cast<uintptr_t>(llint_program_prologue) == NUMBER_OF_BYTECODE_IDS, "");
+
+   for (int i = 0; i < NUMBER_OF_BYTECODE_HELPER_IDS; ++i) {
+       int opcodeID = i + NUMBER_OF_BYTECODE_IDS;
+       Data::s_opcodeMap[opcodeID] = tagCodePtr(Data::s_opcodeMap[opcodeID], tagsForOpcode[i]);
    }
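The `tagsForOpcode[]` loop above tags each LLInt helper entry according to a parallel tag table. A self-contained sketch of the same table-driven pattern, using an invented XOR tag encoding and toy tag values rather than JSC's real `tagCodePtr`:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Toy tags; the real table maps each LLInt helper to its PtrTag.
enum Tag : uint64_t { CodeEntry = 1, CodeEntryWithArityCheck = 2, ExceptionHandler = 3 };

static constexpr Tag tagsForOpcode[] = { CodeEntry, CodeEntryWithArityCheck, ExceptionHandler };
static constexpr size_t kNumHelpers = sizeof(tagsForOpcode) / sizeof(tagsForOpcode[0]);

// Invented stand-in for tagCodePtr: XOR the tag into the high pointer bits.
static inline void* tagCodePtr(void* p, Tag t)
{
    return reinterpret_cast<void*>(reinterpret_cast<uint64_t>(p) ^ (static_cast<uint64_t>(t) << 48));
}

// Tag each opcode-map slot with the tag its table row prescribes, mirroring the
// LLIntData.cpp loop above.
static void tagOpcodeMap(void* (&map)[kNumHelpers])
{
    for (size_t i = 0; i < kNumHelpers; ++i)
        map[i] = tagCodePtr(map[i], tagsForOpcode[i]);
}
```

Because XOR is self-inverse, "untagging" with the matching tag round-trips to the raw pointer, while a mismatched tag does not, which is what the `assertIsTaggedWith` checks elsewhere in this changeset exploit.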
trunk/Source/JavaScriptCore/llint/LLIntData.h (r229481 r230129)

#if ENABLE(JIT)
+
    ALWAYS_INLINE LLIntCode getCodeFunctionPtr(OpcodeID codeId)
    {
…
    ALWAYS_INLINE void* getCodePtr(JSC::EncodedJSValue glueHelper())
    {
-       return WTF_PREPARE_FUNCTION_POINTER_FOR_EXECUTION(glueHelper);
+       return bitwise_cast<void*>(glueHelper);
    }
trunk/Source/JavaScriptCore/llint/LLIntEntrypoint.cpp (r226011 r230129)
/*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2018 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
…
#if ENABLE(JIT)
    if (VM::canUseJIT()) {
+       MacroAssemblerCodeRef codeRef = vm.getCTIStub(evalEntryThunkGenerator);
        codeBlock->setJITCode(
-           adoptRef(*new DirectJITCode(vm.getCTIStub(evalEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+           adoptRef(*new DirectJITCode(codeRef, codeRef.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag), JITCode::InterpreterThunk)));
        return;
    }
…
    UNUSED_PARAM(vm);
+   MacroAssemblerCodeRef codeRef = MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue);
    codeBlock->setJITCode(
-       adoptRef(*new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_eval_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+       adoptRef(*new DirectJITCode(codeRef, codeRef.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag), JITCode::InterpreterThunk)));
}
…
#if ENABLE(JIT)
    if (VM::canUseJIT()) {
+       MacroAssemblerCodeRef codeRef = vm.getCTIStub(programEntryThunkGenerator);
        codeBlock->setJITCode(
-           adoptRef(*new DirectJITCode(vm.getCTIStub(programEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+           adoptRef(*new DirectJITCode(codeRef, codeRef.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag), JITCode::InterpreterThunk)));
        return;
    }
…
    UNUSED_PARAM(vm);
+   MacroAssemblerCodeRef codeRef = MacroAssemblerCodeRef::createLLIntCodeRef(llint_program_prologue);
    codeBlock->setJITCode(
-       adoptRef(*new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_program_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+       adoptRef(*new DirectJITCode(codeRef, codeRef.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag), JITCode::InterpreterThunk)));
}
…
#if ENABLE(JIT)
    if (VM::canUseJIT()) {
+       MacroAssemblerCodeRef codeRef = vm.getCTIStub(moduleProgramEntryThunkGenerator);
        codeBlock->setJITCode(
-           adoptRef(*new DirectJITCode(vm.getCTIStub(moduleProgramEntryThunkGenerator), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+           adoptRef(*new DirectJITCode(codeRef, codeRef.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag), JITCode::InterpreterThunk)));
        return;
    }
…
    UNUSED_PARAM(vm);
+   MacroAssemblerCodeRef codeRef = MacroAssemblerCodeRef::createLLIntCodeRef(llint_module_program_prologue);
    codeBlock->setJITCode(
-       adoptRef(*new DirectJITCode(MacroAssemblerCodeRef::createLLIntCodeRef(llint_module_program_prologue), MacroAssemblerCodePtr(), JITCode::InterpreterThunk)));
+       adoptRef(*new DirectJITCode(codeRef, codeRef.retaggedCode(CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag), JITCode::InterpreterThunk)));
}
trunk/Source/JavaScriptCore/llint/LLIntSlowPaths.cpp (r229957 r230129)
        callLinkInfo->callee.set(vm, callerCodeBlock, internalFunction);
        callLinkInfo->lastSeenCallee.set(vm, callerCodeBlock, internalFunction);
-       callLinkInfo->machineCodeTarget = codePtr;
-       callLinkInfo->callPtrTag = NativeCodePtrTag;
+       callLinkInfo->machineCodeTarget = codePtr.retagged(CodeEntryPtrTag, LLIntCallICPtrTag);
    }

+   assertIsTaggedWith(codePtr.executableAddress(), CodeEntryPtrTag);
    PoisonedMasmPtr::assertIsNotPoisoned(codePtr.executableAddress());
-   LLINT_CALL_RETURN(exec, execCallee, codePtr.executableAddress(), NativeCodePtrTag);
+   LLINT_CALL_RETURN(exec, execCallee, codePtr.executableAddress(), CodeEntryPtrTag);
    }
    throwScope.release();
…
    if (executable->isHostFunction()) {
        codePtr = executable->entrypointFor(kind, MustCheckArity);
-       callPtrTag = NativeCodePtrTag;
+       callPtrTag = CodeEntryWithArityCheckPtrTag;
    } else {
        FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
…
        codePtr = functionExecutable->entrypointFor(kind, arity);
    }
+   assertIsTaggedWith(codePtr.executableAddress(), callPtrTag);

    ASSERT(!!codePtr);
…
        callLinkInfo->callee.set(vm, callerCodeBlock, callee);
        callLinkInfo->lastSeenCallee.set(vm, callerCodeBlock, callee);
-       callLinkInfo->machineCodeTarget = codePtr;
+       callLinkInfo->machineCodeTarget = codePtr.retagged(callPtrTag, LLIntCallICPtrTag);
        RELEASE_ASSERT(callPtrTag != NoPtrTag);
-       callLinkInfo->callPtrTag = callPtrTag;
        if (codeBlock)
            codeBlock->linkIncomingCall(exec, callLinkInfo);
    }

+   assertIsTaggedWith(codePtr.executableAddress(), callPtrTag);
    PoisonedMasmPtr::assertIsNotPoisoned(codePtr.executableAddress());
    LLINT_CALL_RETURN(exec, execCallee, codePtr.executableAddress(), callPtrTag);
trunk/Source/JavaScriptCore/llint/LLIntThunks.cpp (r229609 r230129)
    LLIntCode target = LLInt::getCodeFunctionPtr(opcodeID);
    jit.move(JSInterfaceJIT::TrustedImmPtr(bitwise_cast<void*>(target)), JSInterfaceJIT::regT0);
-   jit.jump(JSInterfaceJIT::regT0, ptrTag(BytecodeHelperPtrTag, opcodeID));
+   jit.jump(JSInterfaceJIT::regT0, thunkTag);

    LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter.asm (r229957 r230129)
const CodeEntryWithArityCheckPtrTag = constexpr CodeEntryWithArityCheckPtrTag
const ExceptionHandlerPtrTag = constexpr ExceptionHandlerPtrTag
-const NativeCodePtrTag = constexpr NativeCodePtrTag
+const LLIntCallICPtrTag = constexpr LLIntCallICPtrTag
const NoPtrTag = constexpr NoPtrTag
const SlowPathPtrTag = constexpr SlowPathPtrTag
…
end

-macro prepareForRegularCall(callee, temp1, temp2, temp3, prepareCallPtrTag)
+macro prepareForRegularCall(callee, temp1, temp2, temp3, callPtrTag)
    addp CallerFrameAndPCSize, sp
end

# sp points to the new frame
-macro prepareForTailCall(callee, temp1, temp2, temp3, prepareCallPtrTag)
+macro prepareForTailCall(callee, temp1, temp2, temp3, callPtrTag)
    restoreCalleeSavesUsedByLLInt()
…
    btinz temp2, .copyLoop

-   prepareCallPtrTag(temp2)
    move temp1, sp
-   jmp callee, temp2
+   jmp callee, callPtrTag
end
…
    btpz calleeFramePtr, .dontUpdateSP
    move calleeFramePtr, sp
-   prepareCall(callee, t2, t3, t4, macro (callPtrTagReg)
-       if POINTER_PROFILING
-           move SlowPathPtrTag, callPtrTagReg
-       end
-   end)
+   prepareCall(callee, t2, t3, t4, SlowPathPtrTag)
.dontUpdateSP:
    callTargetFunction(callee, SlowPathPtrTag)
…
# Do the bare minimum required to execute code. Sets up the PC, leave the CodeBlock*
# in t1. May also trigger prologue entry OSR.
-macro prologue(codeBlockGetter, codeBlockSetter, osrSlowPath, traceSlowPath)
+macro prologue(codeBlockGetter, codeBlockSetter, osrSlowPath, traceSlowPath, targetPtrTag)
    # Set up the call frame and check if we should OSR.
    tagReturnAddress sp
…
    if ARM64 or ARM64E
        pop lr, cfr
+       untagReturnAddress sp
    elsif ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS
        pop cfr
…
        pop cfr
    end
-   jmp r0, CodeEntryPtrTag
+   jmp r0, targetPtrTag
.recover:
    codeBlockGetter(t1, t2)
…
_vmEntryToJavaScript:
end
-doVMEntry(makeJavaScriptCall, CodeEntryPtrTag, CodeEntryWithArityCheckPtrTag)
+doVMEntry(makeJavaScriptCall)
…
_vmEntryToNative:
end
-doVMEntry(makeHostFunctionCall, NativeCodePtrTag, NativeCodePtrTag)
+doVMEntry(makeHostFunctionCall)
…
_llint_program_prologue:
-    prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue)
+    prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue, CodeEntryPtrTag)
    dispatch(0)

_llint_module_program_prologue:
-    prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue)
+    prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue, CodeEntryPtrTag)
    dispatch(0)

_llint_eval_prologue:
-    prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue)
+    prologue(notFunctionCodeBlockGetter, notFunctionCodeBlockSetter, _llint_entry_osr, _llint_trace_prologue, CodeEntryPtrTag)
    dispatch(0)

_llint_function_for_call_prologue:
-    prologue(functionForCallCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_call, _llint_trace_prologue_function_for_call)
+    prologue(functionForCallCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_call, _llint_trace_prologue_function_for_call, CodeEntryPtrTag)
    functionInitialization(0)
    dispatch(0)

_llint_function_for_construct_prologue:
-    prologue(functionForConstructCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_construct, _llint_trace_prologue_function_for_construct)
+    prologue(functionForConstructCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_construct, _llint_trace_prologue_function_for_construct, CodeEntryPtrTag)
    functionInitialization(1)
    dispatch(0)

_llint_function_for_call_arity_check:
-    prologue(functionForCallCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_call_arityCheck, _llint_trace_arityCheck_for_call)
+    prologue(functionForCallCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_call_arityCheck, _llint_trace_arityCheck_for_call, CodeEntryWithArityCheckPtrTag)
    functionArityCheck(.functionForCallBegin, _slow_path_call_arityCheck)
.functionForCallBegin:
…
_llint_function_for_construct_arity_check:
-    prologue(functionForConstructCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_construct_arityCheck, _llint_trace_arityCheck_for_construct)
+    prologue(functionForConstructCodeBlockGetter, functionCodeBlockSetter, _llint_entry_osr_function_for_construct_arityCheck, _llint_trace_arityCheck_for_construct, CodeEntryWithArityCheckPtrTag)
    functionArityCheck(.functionForConstructBegin, _slow_path_construct_arityCheck)
.functionForConstructBegin:
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm (r229957 r230129)
end

-macro doVMEntry(makeCall, unused1, unused2)
+macro doVMEntry(makeCall)
    functionPrologue()
    pushCalleeSaves()
…
    storei CellTag, Callee + TagOffset[t3]
    move t3, sp
-   prepareCall(LLIntCallLinkInfo::machineCodeTarget[t1], t2, t3, t4, macro (callPtrTag) end)
-   callTargetFunction(LLIntCallLinkInfo::machineCodeTarget[t1], NoPtrTag)
+   prepareCall(LLIntCallLinkInfo::machineCodeTarget[t1], t2, t3, t4, LLIntCallICPtrTag)
+   callTargetFunction(LLIntCallLinkInfo::machineCodeTarget[t1], LLIntCallICPtrTag)

.opCallSlow:
trunk/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm (r229957 r230129)
end

-macro doVMEntry(makeCall, callTag, callWithArityCheckTag)
+macro doVMEntry(makeCall)
    functionPrologue()
    pushCalleeSaves()
…
    checkStackPointerAlignment(extraTempReg, 0xbad0dc02)

-   if POINTER_PROFILING
-       btbnz ProtoCallFrame::hasArityMismatch[protoCallFrame], .doCallWithArityCheck
-       move callTag, t2
-       jmp .readyToCall
-   .doCallWithArityCheck:
-       move callWithArityCheckTag, t2
-   .readyToCall:
-   end
-
-   makeCall(entry, t3, t2)
+   makeCall(entry, t3)

    # We may have just made a call into a JS function, so we can't rely on sp
…

-macro makeJavaScriptCall(entry, temp, callTag)
+macro makeJavaScriptCall(entry, temp)
    addp 16, sp
    if C_LOOP
        cloopCallJSFunction entry
    else
-       call entry, callTag
+       call entry, CodeEntryWithArityCheckPtrTag
    end
    subp 16, sp
end

-macro makeHostFunctionCall(entry, temp, callTag)
+macro makeHostFunctionCall(entry, temp)
    move entry, temp
    storep cfr, [sp]
…
    # We need to allocate 32 bytes on the stack for the shadow space.
    subp 32, sp
-   call temp, callTag
+   call temp, CodeEntryPtrTag
    addp 32, sp
    else
-   call temp, callTag
+   call temp, CodeEntryPtrTag
    end
end
…
    loadp _g_JITCodePoison, t2
    xorp LLIntCallLinkInfo::machineCodeTarget[t1], t2
-   prepareCall(t2, t1, t3, t4, macro (callPtrTag)
-       if POINTER_PROFILING
-           loadp LLIntCallLinkInfo::callPtrTag[t5], callPtrTag
-       end
-   end)
-   if POINTER_PROFILING
-       loadp LLIntCallLinkInfo::callPtrTag[t5], t3
-   end
-   callTargetFunction(t2, t3)
+   prepareCall(t2, t1, t3, t4, LLIntCallICPtrTag)
+   callTargetFunction(t2, LLIntCallICPtrTag)
    else
-   prepareCall(LLIntCallLinkInfo::machineCodeTarget[t1], t2, t3, t4, macro (callPtrTag)
-       if POINTER_PROFILING
-           loadp LLIntCallLinkInfo::callPtrTag[t5], callPtrTag
-       end
-   end)
-   if POINTER_PROFILING
-       loadp LLIntCallLinkInfo::callPtrTag[t5], t3
-   end
-   callTargetFunction(LLIntCallLinkInfo::machineCodeTarget[t1], t3)
+   prepareCall(LLIntCallLinkInfo::machineCodeTarget[t1], t2, t3, t4, LLIntCallICPtrTag)
+   callTargetFunction(LLIntCallLinkInfo::machineCodeTarget[t1], LLIntCallICPtrTag)
    end
…
    if X86_64_WIN
        subp 32, sp
-       call executableOffsetToFunction[t1], NativeCodePtrTag
+       call executableOffsetToFunction[t1], CodeEntryPtrTag
        addp 32, sp
    else
        loadp _g_NativeCodePoison, t2
        xorp executableOffsetToFunction[t1], t2
-       call t2, NativeCodePtrTag
+       call t2, CodeEntryPtrTag
    end
…
    if X86_64_WIN
        subp 32, sp
-       call offsetOfFunction[t1], NativeCodePtrTag
+       call offsetOfFunction[t1], CodeEntryPtrTag
        addp 32, sp
    else
        loadp _g_NativeCodePoison, t2
        xorp offsetOfFunction[t1], t2
-       call t2, NativeCodePtrTag
+       call t2, CodeEntryPtrTag
    end
trunk/Source/JavaScriptCore/runtime/ExecutableBase.h (r225314 r230129)
/*
- * Copyright (C) 2009, 2010, 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2009-2018 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
…
class ExecutableBase : public JSCell {
    friend class JIT;
+   friend MacroAssemblerCodeRef boundThisNoArgsFunctionCallGenerator(VM*);

protected:
trunk/Source/JavaScriptCore/runtime/NativeExecutable.cpp (r229547 r230129)
    m_jitCodeForConstructWithArityCheck = m_jitCodeForConstruct->addressForCall(MustCheckArity);
    m_name = name;
+
+   assertIsTaggedWith(m_jitCodeForCall->addressForCall(ArityCheckNotRequired).executableAddress(), CodeEntryPtrTag);
+   assertIsTaggedWith(m_jitCodeForConstruct->addressForCall(ArityCheckNotRequired).executableAddress(), CodeEntryPtrTag);
+   assertIsTaggedWith(m_jitCodeForCallWithArityCheck.executableAddress(), CodeEntryWithArityCheckPtrTag);
+   assertIsTaggedWith(m_jitCodeForConstructWithArityCheck.executableAddress(), CodeEntryWithArityCheckPtrTag);
}
trunk/Source/JavaScriptCore/runtime/NativeFunction.h (r229574 r230129)

    TaggedNativeFunction(NativeFunction func)
-       : m_ptr(tagCFunctionPtr<void*>(func.m_ptr, NativeCodePtrTag))
+       : m_ptr(tagCFunctionPtr<void*>(func.m_ptr, CodeEntryPtrTag))
    { }
    TaggedNativeFunction(RawNativeFunction func)
-       : m_ptr(tagCFunctionPtr<void*>(func, NativeCodePtrTag))
+       : m_ptr(tagCFunctionPtr<void*>(func, CodeEntryPtrTag))
    { }
…
    {
        ASSERT(m_ptr);
-       return untagCFunctionPtr<NativeFunction>(m_ptr, NativeCodePtrTag);
+       return untagCFunctionPtr<NativeFunction>(m_ptr, CodeEntryPtrTag);
    }
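`TaggedNativeFunction` above stores a tagged C function pointer and untags it on every use. The same store-tagged/untag-on-call shape can be sketched with an invented XOR encoding (the tag value, the encoding, and `TaggedFn` itself are illustrative stand-ins for WebKit's `tagCFunctionPtr`/`untagCFunctionPtr`, which use pointer authentication where available):

```cpp
#include <cassert>
#include <cstdint>

using RawFn = int (*)(int);

// Hypothetical tag value standing in for CodeEntryPtrTag.
static constexpr uint64_t kCodeEntryPtrTag = 0x5a;

struct TaggedFn {
    void* m_ptr;

    // Tag on construction: the raw function pointer never sits in memory unencoded.
    explicit TaggedFn(RawFn fn)
        : m_ptr(reinterpret_cast<void*>(reinterpret_cast<uint64_t>(fn) ^ (kCodeEntryPtrTag << 48)))
    { }

    // Untag (XOR is self-inverse) to recover the callable pointer.
    RawFn get() const
    {
        return reinterpret_cast<RawFn>(reinterpret_cast<uint64_t>(m_ptr) ^ (kCodeEntryPtrTag << 48));
    }

    // Untag immediately before every call, mirroring TaggedNativeFunction::operator().
    int operator()(int x) const { return get()(x); }
};

static int addOne(int x) { return x + 1; }
```

The design point is that a stolen `m_ptr` is useless as a jump target until it is untagged with the matching tag, narrowing what a memory-corruption attacker can do with it.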
trunk/Source/JavaScriptCore/runtime/PropertySlot.h (r222671 r230129)
/*
- * Copyright (C) 2005, 2007, 2008, 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2005-2018 Apple Inc. All rights reserved.
 *
 * This library is free software; you can redistribute it and/or
…
    ASSERT(getValue);
+   assertIsCFunctionPtr(getValue);
    m_data.custom.getValue = getValue;
    m_attributes = attributes;
…
    ASSERT(getValue);
+   assertIsCFunctionPtr(getValue);
    m_data.custom.getValue = getValue;
    m_attributes = attributes;
trunk/Source/JavaScriptCore/runtime/PtrTag.h (r230106 r230129)
    v(CodeEntryWithArityCheckPtrTag) \
    v(ExceptionHandlerPtrTag) \
+   v(GetPropertyPtrTag) \
+   v(GetterSetterPtrTag) \
+   v(HasPropertyPtrTag) \
    v(JITCodePtrTag) \
    v(JITOperationPtrTag) \
    v(JITThunkPtrTag) \
+   v(JITWriteThunkPtrTag) \
+   v(LLIntCallICPtrTag) \
+   v(MathICPtrTag) \
    v(NativeCodePtrTag) \
+   v(OperationLinkCallPtrTag) \
+   v(OperationLinkPolymorphicCallPtrTag) \
+   v(OperationVirtualCallPtrTag) \
+   v(PutPropertyPtrTag) \
    v(SlowPathPtrTag) \
+   v(SpecializedThunkPtrTag) \
    v(SwitchTablePtrTag) \
    \
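The growing tag list above exists so that each class of jump target gets its own diversifier, and call sites like `ptrTag(JITThunkPtrTag, vm)` in this changeset further salt a base tag with a per-instance value (here, the VM). A minimal sketch of such salting; the mixing function and constants are assumptions for illustration, not JSC's actual `ptrTag` scheme:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical base tag standing in for JITThunkPtrTag.
static constexpr uint64_t kJITThunkPtrTag = 0x7001;

// Combine a base tag with a per-instance salt (e.g. a VM pointer) so two thunks
// sharing a base tag still get distinct effective tags. Each step below is
// invertible (xorshift and odd multiply mod 2^64), so the whole mix is a
// bijection: distinct salts are guaranteed to produce distinct tags. Real PAC
// diversifiers are only 16 bits wide; we keep all 64 to preserve that property.
static inline uint64_t ptrTag(uint64_t baseTag, const void* salt)
{
    uint64_t h = baseTag ^ reinterpret_cast<uint64_t>(salt);
    h ^= h >> 33;
    h *= 0xff51afd7ed558ccdULL; // splitmix64-style finalizer constant
    h ^= h >> 33;
    return h;
}
```

Salting means that even if an attacker learns a signed pointer from one VM instance, replaying it under another instance's tag fails validation.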
trunk/Source/JavaScriptCore/runtime/PutPropertySlot.h (r206525 r230129)
/*
- * Copyright (C) 2008-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2008-2018 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
…
    void setCustomValue(JSObject* base, PutValueFunc function)
    {
+       assertIsNullOrCFunctionPtr(function);
        m_type = CustomValue;
        m_base = base;
…
    void setCustomAccessor(JSObject* base, PutValueFunc function)
    {
+       assertIsNullOrCFunctionPtr(function);
        m_type = CustomAccessor;
        m_base = base;
trunk/Source/JavaScriptCore/runtime/SamplingProfiler.cpp (r229815 r230129)
/*
- * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2018 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
…
    callFrame = static_cast<ExecState*>(machineFrame);
    machinePC = MachineContext::instructionPointer(registers);
-   llintPC = MachineContext::llintInstructionPointer(registers);
+   llintPC = removeCodePtrTag(MachineContext::llintInstructionPointer(registers));
+   assertIsNotTagged(machinePC);
    }
    // FIXME: Lets have a way of detecting when we're parsing code.
trunk/Source/JavaScriptCore/runtime/VMTraps.cpp (r228488 r230129)
/*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
…
    , framePointer(MachineContext::framePointer(registers))
    {
+       assertIsNotTagged(trapPC);
    }
…
    // mallocs, and we must prove the JS thread isn't holding the malloc lock
    // to be able to do that without risking a deadlock.
+   assertIsNotTagged(trapPC);
    if (!isJITPC(trapPC) && !LLInt::isLLIntPC(trapPC))
        return;
…
    SignalContext context(registers);

+   assertIsNotTagged(context.trapPC);
    if (!isJITPC(context.trapPC))
        return SignalAction::NotHandled;
trunk/Source/JavaScriptCore/tools/SigillCrashAnalyzer.cpp (r217669 r230129)
/*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
…
    SignalContext context(registers);

+   assertIsNotTagged(context.machinePC);
    if (!isJITPC(context.machinePC))
        return SignalAction::NotHandled;
trunk/Source/JavaScriptCore/yarr/YarrJIT.cpp (r229609 r230129)
    m_tryReadUnicodeCharacterEntry = label();

-#if CPU(ARM64)
-   tagPtr(linkRegister, stackPointerRegister);
-#endif
+   tagReturnAddress();

    tryReadUnicodeCharImpl(regT0);
…
#endif
#elif CPU(ARM64)
-   tagPtr(linkRegister, stackPointerRegister);
+   tagReturnAddress();
    if (m_decodeSurrogatePairs) {
        pushPair(framePointerRegister, linkRegister);
trunk/Source/WTF/ChangeLog (r230116 r230129)
+2018-03-30  Mark Lam  <mark.lam@apple.com>
+
+        Add pointer profiling support in baseline JIT and supporting files.
+        https://bugs.webkit.org/show_bug.cgi?id=184200
+        <rdar://problem/39057300>
+
+        Reviewed by Filip Pizlo.
+
+        * wtf/PointerPreparations.h:
+        - Remove WTF_PREPARE_FUNCTION_POINTER_FOR_EXECUTION. It is no longer needed.
+
2018-03-30  JF Bastien  <jfbastien@apple.com>
trunk/Source/WTF/wtf/PointerPreparations.h (r228397 r230129)
#endif

-#ifndef WTF_PREPARE_FUNCTION_POINTER_FOR_EXECUTION
-#define WTF_PREPARE_FUNCTION_POINTER_FOR_EXECUTION(vtblPtr) (reinterpret_cast<void*>(vtblPtr))
-#endif
-
#ifndef WTF_PREPARE_VTBL_POINTER_FOR_INSPECTION
#define WTF_PREPARE_VTBL_POINTER_FOR_INSPECTION(vtblPtr) (reinterpret_cast<void*>(vtblPtr))