Changeset 213652 in webkit
- Timestamp: Mar 9, 2017 11:08:46 AM
- Location: trunk/Source
- Files: 47 edited
trunk/Source/JavaScriptCore/ChangeLog
(compared with r213648)

2017-03-09  Mark Lam  <mark.lam@apple.com>

        Make the VM Traps mechanism non-polling for the DFG and FTL.
        https://bugs.webkit.org/show_bug.cgi?id=168920
        <rdar://problem/30738588>

        Reviewed by Filip Pizlo.

        1. Added an ENABLE(SIGNAL_BASED_VM_TRAPS) configuration in Platform.h.
           This is currently only enabled for OS(DARWIN) and ENABLE(JIT).
        2. Added assembler functions for overwriting an instruction with a breakpoint.
        3. Added a new JettisonDueToVMTraps jettison reason.
        4. Added CodeBlock and DFG::CommonData utility functions for overwriting
           invalidation points with breakpoint instructions.
        5. The BytecodeGenerator now emits the op_check_traps bytecode unconditionally.
        6. Removed the JSC_alwaysCheckTraps option because of (5) above.
           For ports that don't ENABLE(SIGNAL_BASED_VM_TRAPS), we force
           Options::usePollingTraps() to always be true. This makes the VMTraps
           implementation fall back to using polling-based traps only.
        7. Made VMTraps support signal-based traps.

        Some design and implementation details of signal-based VM traps:

        - The implementation makes use of 2 signal handlers, for SIGUSR1 and SIGTRAP.

        - VMTraps::fireTrap() will set the flag for the requested trap and instantiate
          a SignalSender. The SignalSender will send SIGUSR1 to the mutator thread that
          we want to trap, and check for the occurrence of one of the following events:

          a. VMTraps::handleTraps() has been called for the requested trap, or

          b. the VM is inactive and is no longer executing any JS code. We determine
             this to be the case if the thread no longer owns the JSLock and the VM's
             entryScope is null.

          Note: the thread can relinquish the JSLock while the VM's entryScope is not
          null. This happens when the thread calls JSLock::dropAllLocks() before
          calling a host function that may block on IO (or whatever).
          For our purpose, this counts as the VM still running JS code, and
          VM::fireTrap() will still be waiting.

          If the SignalSender does not see either of these events, it will sleep for a
          while and then re-send SIGUSR1 and check for the events again. When it sees
          one of these events, it will consider the mutator to have received the trap
          request.

        - The SIGUSR1 handler will try to insert breakpoints at the invalidation points
          in the DFG/FTL codeBlock at the top of the stack. This allows the mutator
          thread to break (with a SIGTRAP) exactly at an invalidation point, where it's
          safe to jettison the codeBlock.

          Note: we cannot have the requester thread (that called VMTraps::fireTrap())
          insert the breakpoint instructions itself. This is because we need the
          register state of the mutator thread (that we want to trap in) in order to
          find the codeBlocks that we wish to insert the breakpoints in. Currently,
          we don't have a generic way for the requester thread to get the register
          state of another thread.

        - The SIGTRAP handler will check to see if it is trapping on a breakpoint at an
          invalidation point. If so, it will jettison the codeBlock and adjust the PC
          to re-execute the invalidation OSR exit off-ramp. After the OSR exit, the
          baseline JIT code will eventually reach an op_check_traps and call
          VMTraps::handleTraps().

          If the handler is not trapping at an invalidation point, then it must be
          observing an assertion failure (which also uses the breakpoint instruction).
          In this case, the handler will defer to the default SIGTRAP handler and crash.

        - The reason we need the SignalSender is because SignalSender::send() is called
          from another thread in a loop, so that VMTraps::fireTrap() can return sooner.
          send() needs to make use of the VM pointer, and it is not guaranteed that the
          VM will outlive the thread.
          SignalSender provides the mechanism by which we can nullify the VM pointer
          when the VM dies so that the thread does not continue to use it.

        * assembler/ARM64Assembler.h:
        (JSC::ARM64Assembler::replaceWithBrk):
        * assembler/ARMAssembler.h:
        (JSC::ARMAssembler::replaceWithBrk):
        * assembler/ARMv7Assembler.h:
        (JSC::ARMv7Assembler::replaceWithBkpt):
        * assembler/MIPSAssembler.h:
        (JSC::MIPSAssembler::replaceWithBkpt):
        * assembler/MacroAssemblerARM.h:
        (JSC::MacroAssemblerARM::replaceWithJump):
        * assembler/MacroAssemblerARM64.h:
        (JSC::MacroAssemblerARM64::replaceWithBreakpoint):
        * assembler/MacroAssemblerARMv7.h:
        (JSC::MacroAssemblerARMv7::replaceWithBreakpoint):
        * assembler/MacroAssemblerMIPS.h:
        (JSC::MacroAssemblerMIPS::replaceWithJump):
        * assembler/MacroAssemblerX86Common.h:
        (JSC::MacroAssemblerX86Common::replaceWithBreakpoint):
        * assembler/X86Assembler.h:
        (JSC::X86Assembler::replaceWithInt3):
        * bytecode/CodeBlock.cpp:
        (JSC::CodeBlock::jettison):
        (JSC::CodeBlock::hasInstalledVMTrapBreakpoints):
        (JSC::CodeBlock::installVMTrapBreakpoints):
        * bytecode/CodeBlock.h:
        * bytecompiler/BytecodeGenerator.cpp:
        (JSC::BytecodeGenerator::emitCheckTraps):
        * dfg/DFGCommonData.cpp:
        (JSC::DFG::CommonData::installVMTrapBreakpoints):
        (JSC::DFG::CommonData::isVMTrapBreakpoint):
        * dfg/DFGCommonData.h:
        (JSC::DFG::CommonData::hasInstalledVMTrapsBreakpoints):
        * dfg/DFGJumpReplacement.cpp:
        (JSC::DFG::JumpReplacement::installVMTrapBreakpoint):
        * dfg/DFGJumpReplacement.h:
        (JSC::DFG::JumpReplacement::dataLocation):
        * dfg/DFGNodeType.h:
        * heap/CodeBlockSet.cpp:
        (JSC::CodeBlockSet::contains):
        * heap/CodeBlockSet.h:
        * heap/CodeBlockSetInlines.h:
        (JSC::CodeBlockSet::iterate):
        * heap/Heap.cpp:
        (JSC::Heap::forEachCodeBlockIgnoringJITPlansImpl):
        * heap/Heap.h:
        * heap/HeapInlines.h:
        (JSC::Heap::forEachCodeBlockIgnoringJITPlans):
        * heap/MachineStackMarker.h:
        (JSC::MachineThreads::threadsListHead):
        * jit/ExecutableAllocator.cpp:
        (JSC::ExecutableAllocator::isValidExecutableMemory):
        * jit/ExecutableAllocator.h:
        * profiler/ProfilerJettisonReason.cpp:
        (WTF::printInternal):
        * profiler/ProfilerJettisonReason.h:
        * runtime/JSLock.cpp:
        (JSC::JSLock::didAcquireLock):
        * runtime/Options.cpp:
        (JSC::overrideDefaults):
        * runtime/Options.h:
        * runtime/PlatformThread.h:
        (JSC::platformThreadSignal):
        * runtime/VM.cpp:
        (JSC::VM::~VM):
        (JSC::VM::ensureWatchdog):
        (JSC::VM::handleTraps): Deleted.
        (JSC::VM::setNeedAsynchronousTerminationSupport): Deleted.
        * runtime/VM.h:
        (JSC::VM::ownerThread):
        (JSC::VM::traps):
        (JSC::VM::handleTraps):
        (JSC::VM::needTrapHandling):
        (JSC::VM::needAsynchronousTerminationSupport): Deleted.
        * runtime/VMTraps.cpp:
        (JSC::VMTraps::vm):
        (JSC::SignalContext::SignalContext):
        (JSC::SignalContext::adjustPCToPointToTrappingInstruction):
        (JSC::vmIsInactive):
        (JSC::findActiveVMAndStackBounds):
        (JSC::handleSigusr1):
        (JSC::handleSigtrap):
        (JSC::installSignalHandlers):
        (JSC::sanitizedTopCallFrame):
        (JSC::isSaneFrame):
        (JSC::VMTraps::tryInstallTrapBreakpoints):
        (JSC::VMTraps::invalidateCodeBlocksOnStack):
        (JSC::VMTraps::VMTraps):
        (JSC::VMTraps::willDestroyVM):
        (JSC::VMTraps::addSignalSender):
        (JSC::VMTraps::removeSignalSender):
        (JSC::VMTraps::SignalSender::willDestroyVM):
        (JSC::VMTraps::SignalSender::send):
        (JSC::VMTraps::fireTrap):
        (JSC::VMTraps::handleTraps):
        * runtime/VMTraps.h:
        (JSC::VMTraps::~VMTraps):
        (JSC::VMTraps::needTrapHandling):
        (JSC::VMTraps::notifyGrabAllLocks):
        (JSC::VMTraps::SignalSender::SignalSender):
        (JSC::VMTraps::invalidateCodeBlocksOnStack):
        * tools/VMInspector.cpp:
        * tools/VMInspector.h:
        (JSC::VMInspector::getLock):
        (JSC::VMInspector::iterate):
2017-03-09  Filip Pizlo  <fpizlo@apple.com>
trunk/Source/JavaScriptCore/assembler/ARM64Assembler.h
(compared with r213381)

      }

+     static void replaceWithBrk(void* where)
+     {
+         int insn = excepnGeneration(ExcepnOp_BREAKPOINT, 0, 0);
+         performJITMemcpy(where, &insn, sizeof(int));
+         cacheFlush(where, sizeof(int));
+     }
+
      static void replaceWithJump(void* where, void* to)
      {
trunk/Source/JavaScriptCore/assembler/ARMAssembler.h
(compared with r213425)

      }

+     static void replaceWithBrk(void* instructionStart)
+     {
+         ARMWord* instruction = reinterpret_cast<ARMWord*>(instructionStart);
+         instruction[0] = BKPT;
+         cacheFlush(instruction, sizeof(ARMWord));
+     }
+
      static void replaceWithJump(void* instructionStart, void* to)
      {
trunk/Source/JavaScriptCore/assembler/ARMv7Assembler.h
(compared with r213376)

          return reinterpret_cast<void*>(readInt32(where));
      }

+     static void replaceWithBkpt(void* instructionStart)
+     {
+         ASSERT(!(bitwise_cast<uintptr_t>(instructionStart) & 1));
+
+         uint16_t* ptr = reinterpret_cast<uint16_t*>(instructionStart);
+         uint16_t instructions = OP_BKPT;
+         performJITMemcpy(ptr, &instructions, sizeof(uint16_t));
+         cacheFlush(ptr, sizeof(uint16_t));
+     }
+
      static void replaceWithJump(void* instructionStart, void* to)
      {
trunk/Source/JavaScriptCore/assembler/MIPSAssembler.h
(compared with r213376)

      }

+     static void replaceWithBkpt(void* instructionStart)
+     {
+         ASSERT(!(bitwise_cast<uintptr_t>(instructionStart) & 3));
+         MIPSWord* insn = reinterpret_cast<MIPSWord*>(instructionStart);
+         int value = 512; /* BRK_BUG */
+         insn[0] = (0x0000000d | ((value & 0x3ff) << OP_SH_CODE));
+         cacheFlush(instructionStart, sizeof(MIPSWord));
+     }
+
      static void replaceWithJump(void* instructionStart, void* to)
      {
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM.h
(compared with r213376)

      }

+     static void replaceWithJump(CodeLocationLabel instructionStart)
+     {
+         ARMAssembler::replaceWithBkpt(instructionStart.executableAddress());
+     }
+
      static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
      {
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
(compared with r213376)

      }

+     static void replaceWithBreakpoint(CodeLocationLabel instructionStart)
+     {
+         ARM64Assembler::replaceWithBrk(instructionStart.executableAddress());
+     }
+
      static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
      {
trunk/Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
(compared with r213376)

      }

+     static void replaceWithBreakpoint(CodeLocationLabel instructionStart)
+     {
+         ARMv7Assembler::replaceWithBkpt(instructionStart.dataLocation());
+     }
+
      static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
      {
trunk/Source/JavaScriptCore/assembler/MacroAssemblerMIPS.h
(compared with r213376)

      }

+     static void replaceWithJump(CodeLocationLabel instructionStart)
+     {
+         MIPSAssembler::replaceWithBkpt(instructionStart.executableAddress());
+     }
+
      static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
      {
trunk/Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h
(compared with r213376)

      void loadFence()
      {
      }
+
+     static void replaceWithBreakpoint(CodeLocationLabel instructionStart)
+     {
+         X86Assembler::replaceWithInt3(instructionStart.executableAddress());
+     }
trunk/Source/JavaScriptCore/assembler/X86Assembler.h
(compared with r213376)

      }

+     static void replaceWithInt3(void* instructionStart)
+     {
+         uint8_t* ptr = reinterpret_cast<uint8_t*>(instructionStart);
+         ptr[0] = static_cast<uint8_t>(OP_INT3);
+     }
+
      static void replaceWithJump(void* instructionStart, void* to)
      {
trunk/Source/JavaScriptCore/bytecode/CodeBlock.cpp
(compared with r213209)

  /*
-  * Copyright (C) 2008-2010, 2012-2017 Apple Inc. All rights reserved.
+  * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
   * Copyright (C) 2008 Cameron Zwarich <cwzwarich@uwaterloo.ca>
   *
…
      alternative()->optimizeAfterWarmUp();

-     if (reason != Profiler::JettisonDueToOldAge)
+     if (reason != Profiler::JettisonDueToOldAge && reason != Profiler::JettisonDueToVMTraps)
          tallyFrequentExitSites();
  #endif // ENABLE(DFG_JIT)
…
  }

+ bool CodeBlock::hasInstalledVMTrapBreakpoints() const
+ {
+ #if ENABLE(SIGNAL_BASED_VM_TRAPS)
+     // This function may be called from a signal handler. We need to be
+     // careful to not call anything that is not signal handler safe, e.g.
+     // we should not perturb the refCount of m_jitCode.
+     if (!JITCode::isOptimizingJIT(jitType()))
+         return false;
+     return m_jitCode->dfgCommon()->hasInstalledVMTrapsBreakpoints();
+ #else
+     return false;
+ #endif
+ }
+
+ bool CodeBlock::installVMTrapBreakpoints()
+ {
+ #if ENABLE(SIGNAL_BASED_VM_TRAPS)
+     // This function may be called from a signal handler. We need to be
+     // careful to not call anything that is not signal handler safe, e.g.
+     // we should not perturb the refCount of m_jitCode.
+     if (!JITCode::isOptimizingJIT(jitType()))
+         return false;
+     m_jitCode->dfgCommon()->installVMTrapBreakpoints();
+     return true;
+ #else
+     return false;
+ #endif
+ }
+
  void CodeBlock::dumpMathICStats()
trunk/Source/JavaScriptCore/bytecode/CodeBlock.h
(compared with r213209)

  /*
-  * Copyright (C) 2008-2016 Apple Inc. All rights reserved.
+  * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
   * Copyright (C) 2008 Cameron Zwarich <cwzwarich@uwaterloo.ca>
   *
…
      ECMAMode ecmaMode() const { return isStrictMode() ? StrictMode : NotStrictMode; }

+     bool hasInstalledVMTrapBreakpoints() const;
+     bool installVMTrapBreakpoints();
+
      inline bool isKnownNotImmediate(int index)
trunk/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
(compared with r213302)

  void BytecodeGenerator::emitCheckTraps()
  {
-     if (Options::alwaysCheckTraps() || vm()->watchdog() || vm()->needAsynchronousTerminationSupport())
-         emitOpcode(op_check_traps);
+     emitOpcode(op_check_traps);
  }
trunk/Source/JavaScriptCore/dfg/DFGCommonData.cpp
(compared with r211237)

  /*
-  * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
+  * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
   *
…
  }

+ void CommonData::installVMTrapBreakpoints()
+ {
+     if (!isStillValid || hasVMTrapsBreakpointsInstalled)
+         return;
+     hasVMTrapsBreakpointsInstalled = true;
+     for (unsigned i = jumpReplacements.size(); i--;)
+         jumpReplacements[i].installVMTrapBreakpoint();
+ }
+
+ bool CommonData::isVMTrapBreakpoint(void* address)
+ {
+     if (!isStillValid)
+         return false;
+     for (unsigned i = jumpReplacements.size(); i--;) {
+         if (address == jumpReplacements[i].dataLocation())
+             return true;
+     }
+     return false;
+ }
+
  void CommonData::validateReferences(const TrackedReferences& trackedReferences)
trunk/Source/JavaScriptCore/dfg/DFGCommonData.h
(compared with r206525)

  /*
-  * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
+  * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
   *
…
      bool invalidate(); // Returns true if we did invalidate, or false if the code block was already invalidated.
+     bool hasInstalledVMTrapsBreakpoints() const { return isStillValid && hasVMTrapsBreakpointsInstalled; }
+     void installVMTrapBreakpoints();
+     bool isVMTrapBreakpoint(void* address);
+
      unsigned requiredRegisterCountForExecutionAndExit() const
      {
…
      bool allTransitionsHaveBeenMarked; // Initialized and used on every GC.
      bool isStillValid;
+     bool hasVMTrapsBreakpointsInstalled { false };

  #if USE(JSVALUE32_64)
trunk/Source/JavaScriptCore/dfg/DFGJumpReplacement.cpp
(compared with r191058)

  /*
-  * Copyright (C) 2013 Apple Inc. All rights reserved.
+  * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
   *
…
  }

+ void JumpReplacement::installVMTrapBreakpoint()
+ {
+     MacroAssembler::replaceWithBreakpoint(m_source);
+ }
+
  } } // namespace JSC::DFG
trunk/Source/JavaScriptCore/dfg/DFGJumpReplacement.h
(compared with r206525)

  /*
-  * Copyright (C) 2013 Apple Inc. All rights reserved.
+  * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
   *
…
      void fire();
+     void installVMTrapBreakpoint();
+     void* dataLocation() const { return m_source.dataLocation(); }

  private:
trunk/Source/JavaScriptCore/dfg/DFGNodeType.h
(compared with r213107)

      macro(BottomValue, NodeResultJS) \
      \
-     /* Checks for VM traps. If there is a trap, we call operation operationHandleTraps */ \
+     /* Checks for VM traps. If there is a trap, we'll jettison or call operation operationHandleTraps. */ \
      macro(CheckTraps, NodeMustGenerate) \
      /* Write barriers */ \
trunk/Source/JavaScriptCore/heap/CodeBlockSet.cpp
(compared with r211247)

  }

- bool CodeBlockSet::contains(const LockHolder&, void* candidateCodeBlock)
+ bool CodeBlockSet::contains(const AbstractLocker&, void* candidateCodeBlock)
  {
      RELEASE_ASSERT(m_lock.isLocked());
trunk/Source/JavaScriptCore/heap/CodeBlockSet.h
(compared with r211247)

      void clearCurrentlyExecuting();

-     bool contains(const LockHolder&, void* candidateCodeBlock);
+     bool contains(const AbstractLocker&, void* candidateCodeBlock);
      Lock& getLock() { return m_lock; }
…
      // visited.
      template<typename Functor> void iterate(const Functor&);
+     template<typename Functor> void iterate(const AbstractLocker&, const Functor&);

      template<typename Functor> void iterateCurrentlyExecuting(const Functor&);
trunk/Source/JavaScriptCore/heap/CodeBlockSetInlines.h
(compared with r210521)

  void CodeBlockSet::iterate(const Functor& functor)
  {
-     LockHolder locker(m_lock);
+     auto locker = holdLock(m_lock);
+     iterate(locker, functor);
+ }
+
+ template<typename Functor>
+ void CodeBlockSet::iterate(const AbstractLocker&, const Functor& functor)
+ {
      for (auto& codeBlock : m_oldCodeBlocks) {
          bool done = functor(codeBlock);
trunk/Source/JavaScriptCore/heap/Heap.cpp
(compared with r213645)

  }

- void Heap::forEachCodeBlockIgnoringJITPlansImpl(const ScopedLambda<bool(CodeBlock*)>& func)
- {
-     return m_codeBlocks->iterate(func);
+ void Heap::forEachCodeBlockIgnoringJITPlansImpl(const AbstractLocker& locker, const ScopedLambda<bool(CodeBlock*)>& func)
+ {
+     return m_codeBlocks->iterate(locker, func);
  }
trunk/Source/JavaScriptCore/heap/Heap.h
(compared with r212778)

      template<typename Functor> void forEachProtectedCell(const Functor&);
      template<typename Functor> void forEachCodeBlock(const Functor&);
-     template<typename Functor> void forEachCodeBlockIgnoringJITPlans(const Functor&);
+     template<typename Functor> void forEachCodeBlockIgnoringJITPlans(const AbstractLocker& codeBlockSetLocker, const Functor&);

      HandleSet* handleSet() { return &m_handleSet; }
…
      void forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>&);
-     void forEachCodeBlockIgnoringJITPlansImpl(const ScopedLambda<bool(CodeBlock*)>&);
+     void forEachCodeBlockIgnoringJITPlansImpl(const AbstractLocker& codeBlockSetLocker, const ScopedLambda<bool(CodeBlock*)>&);

      void setMutatorShouldBeFenced(bool value);
trunk/Source/JavaScriptCore/heap/HeapInlines.h
(compared with r213645)

  }

- template<typename Functor> inline void Heap::forEachCodeBlockIgnoringJITPlans(const Functor& func)
- {
-     forEachCodeBlockIgnoringJITPlansImpl(scopedLambdaRef<bool(CodeBlock*)>(func));
+ template<typename Functor> inline void Heap::forEachCodeBlockIgnoringJITPlans(const AbstractLocker& codeBlockSetLocker, const Functor& func)
+ {
+     forEachCodeBlockIgnoringJITPlansImpl(codeBlockSetLocker, scopedLambdaRef<bool(CodeBlock*)>(func));
  }
trunk/Source/JavaScriptCore/heap/MachineStackMarker.h
(compared with r213238)

      Lock& getLock() { return m_registeredThreadsMutex; }
-     Thread* threadsListHead(const LockHolder&) const { ASSERT(m_registeredThreadsMutex.isLocked()); return m_registeredThreads; }
+     Thread* threadsListHead(const AbstractLocker&) const { ASSERT(m_registeredThreadsMutex.isLocked()); return m_registeredThreads; }
      Thread* machineThreadForCurrentThread();
trunk/Source/JavaScriptCore/jit/ExecutableAllocator.cpp
(compared with r213483)

  }

- bool ExecutableAllocator::isValidExecutableMemory(const LockHolder& locker, void* address)
+ bool ExecutableAllocator::isValidExecutableMemory(const AbstractLocker& locker, void* address)
  {
      return allocator->isInAllocatedMemory(locker, address);
trunk/Source/JavaScriptCore/jit/ExecutableAllocator.h
(compared with r213483)

      RefPtr<ExecutableMemoryHandle> allocate(VM&, size_t sizeInBytes, void* ownerUID, JITCompilationEffort);

-     bool isValidExecutableMemory(const LockHolder&, void* address);
+     bool isValidExecutableMemory(const AbstractLocker&, void* address);

      static size_t committedByteCount();
trunk/Source/JavaScriptCore/profiler/ProfilerJettisonReason.cpp
(compared with r208588)

  /*
-  * Copyright (C) 2014 Apple Inc. All rights reserved.
+  * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
   *
…
      case JettisonDueToOldAge:
          out.print("JettisonDueToOldAge");
          return;
+     case JettisonDueToVMTraps:
+         out.print("JettisonDueToVMTraps");
+         return;
      }
      RELEASE_ASSERT_NOT_REACHED();
trunk/Source/JavaScriptCore/profiler/ProfilerJettisonReason.h
(compared with r208588)

  /*
-  * Copyright (C) 2014 Apple Inc. All rights reserved.
+  * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
   *
…
      JettisonDueToProfiledWatchpoint,
      JettisonDueToUnprofiledWatchpoint,
-     JettisonDueToOldAge
+     JettisonDueToOldAge,
+     JettisonDueToVMTraps
  };
trunk/Source/JavaScriptCore/runtime/JSLock.cpp
(compared with r213238)

      m_vm->heap.machineThreads().addCurrentThread();

+     m_vm->traps().notifyGrabAllLocks();
+
  #if ENABLE(SAMPLING_PROFILER)
      // Note: this must come after addCurrentThread().
trunk/Source/JavaScriptCore/runtime/Options.cpp
(compared with r213242)

      Options::useSigillCrashAnalyzer() = true;
  #endif
+
+ #if !ENABLE(SIGNAL_BASED_VM_TRAPS)
+     Options::usePollingTraps() = true;
+ #endif
  }
trunk/Source/JavaScriptCore/runtime/Options.h
(compared with r213386)

      \
      v(unsigned, watchdog, 0, Normal, "watchdog timeout (0 = Disabled, N = a timeout period of N milliseconds)") \
-     v(bool, alwaysCheckTraps, false, Normal, "always emit op_check_traps bytecode") \
      v(bool, usePollingTraps, false, Normal, "use polling (instead of signalling) VM traps") \
      \
trunk/Source/JavaScriptCore/runtime/PlatformThread.h
(compared with r213238)

  }

+ #if OS(DARWIN)
+ inline bool platformThreadSignal(PlatformThread platformThread, int signalNumber)
+ {
+     pthread_t pthreadID = pthread_from_mach_thread_np(platformThread);
+     int errNo = pthread_kill(pthreadID, signalNumber);
+     return !errNo; // A 0 errNo means success.
+ }
+ #endif
+
  } // namespace JSC
trunk/Source/JavaScriptCore/runtime/VM.cpp
(compared with r213386)

  #include "StrongInlines.h"
  #include "StructureInlines.h"
- #include "ThrowScope.h"
  #include "TypeProfiler.h"
  #include "TypeProfilerLog.h"
…
      if (UNLIKELY(m_watchdog))
          m_watchdog->willDestroyVM(this);
+     m_traps.willDestroyVM();
      VMInspector::instance().remove(this);
…
  Watchdog& VM::ensureWatchdog()
  {
-     if (!m_watchdog) {
-         Options::usePollingTraps() = true; // Force polling traps on until we have support for signal based traps.
-
+     if (!m_watchdog)
          m_watchdog = adoptRef(new Watchdog(this));
-
-         // The LLINT peeks into the Watchdog object directly. In order to do that,
-         // the LLINT assumes that the internal shape of a std::unique_ptr is the
-         // same as a plain C++ pointer, and loads the address of Watchdog from it.
-         RELEASE_ASSERT(*reinterpret_cast<Watchdog**>(&m_watchdog) == m_watchdog.get());
-
-         // And if we've previously compiled any functions, we need to revert
-         // them because they don't have the needed polling checks for the watchdog
-         // yet.
-         deleteAllCode(PreventCollectionAndDeleteAllCode);
-     }
      return *m_watchdog;
  }
…
  #endif

- void VM::handleTraps(ExecState* exec, VMTraps::Mask mask)
- {
-     auto scope = DECLARE_THROW_SCOPE(*this);
-
-     ASSERT(needTrapHandling(mask));
-     while (needTrapHandling(mask)) {
-         auto trapEventType = m_traps.takeTopPriorityTrap(mask);
-         switch (trapEventType) {
-         case VMTraps::NeedDebuggerBreak:
-             if (Options::alwaysCheckTraps())
-                 dataLog("VM ", RawPointer(this), " on pid ", getCurrentProcessID(), " received NeedDebuggerBreak trap\n");
-             return;
-
-         case VMTraps::NeedWatchdogCheck:
-             ASSERT(m_watchdog);
-             if (LIKELY(!m_watchdog->shouldTerminate(exec)))
-                 continue;
-             FALLTHROUGH;
-
-         case VMTraps::NeedTermination:
-             JSC::throwException(exec, scope, createTerminatedExecutionException(this));
-             return;
-
-         default:
-             RELEASE_ASSERT_NOT_REACHED();
-         }
-     }
- }
-
- void VM::setNeedAsynchronousTerminationSupport()
- {
-     Options::usePollingTraps() = true; // Force polling traps on until we have support for signal based traps.
-     m_needAsynchronousTerminationSupport = true;
- }
-
  } // namespace JSC
trunk/Source/JavaScriptCore/runtime/VM.h
(compared with r213386)

      JS_EXPORT_PRIVATE ~VM();

-     JS_EXPORT_PRIVATE Watchdog& ensureWatchdog();
+     Watchdog& ensureWatchdog();
      Watchdog* watchdog() { return m_watchdog.get(); }
…
      // FIXME: This should be a void*, because it might not point to a CallFrame.
      // https://bugs.webkit.org/show_bug.cgi?id=160441
-     ExecState* topCallFrame;
+     ExecState* topCallFrame { nullptr };
      JSWebAssemblyInstance* topJSWebAssemblyInstance;
      Strong<Structure> structureStructure;
…
      void logEvent(CodeBlock*, const char* summary, const Func& func);

-     void handleTraps(ExecState*, VMTraps::Mask = VMTraps::Mask::allEventTypes());
-
-     bool needTrapHandling(VMTraps::Mask mask = VMTraps::Mask::allEventTypes()) { return m_traps.needTrapHandling(mask); }
+     std::optional<PlatformThread> ownerThread() const { return m_apiLock->ownerThread(); }
+
+     VMTraps& traps() { return m_traps; }
+
+     void handleTraps(ExecState* exec, VMTraps::Mask mask = VMTraps::Mask::allEventTypes()) { m_traps.handleTraps(exec, mask); }
+
+     bool needTrapHandling() { return m_traps.needTrapHandling(); }
+     bool needTrapHandling(VMTraps::Mask mask) { return m_traps.needTrapHandling(mask); }
      void* needTrapHandlingAddress() { return m_traps.needTrapHandlingAddress(); }
…
      void notifyNeedTermination() { m_traps.fireTrap(VMTraps::NeedTermination); }
      void notifyNeedWatchdogCheck() { m_traps.fireTrap(VMTraps::NeedWatchdogCheck); }
-
-     bool needAsynchronousTerminationSupport() const { return m_needAsynchronousTerminationSupport; }
-     JS_EXPORT_PRIVATE void setNeedAsynchronousTerminationSupport();

  private:
…
      bool isSafeToRecurseSoftCLoop() const;
  #endif // !ENABLE(JIT)
-
-     std::optional<PlatformThread> ownerThread() const { return m_apiLock->ownerThread(); }

      JS_EXPORT_PRIVATE void throwException(ExecState*, Exception*);
…
      bool m_globalConstRedeclarationShouldThrow { true };
      bool m_shouldBuildPCToCodeOriginMapping { false };
-     bool m_needAsynchronousTerminationSupport { false };
      std::unique_ptr<CodeCache> m_codeCache;
      std::unique_ptr<BuiltinExecutables> m_builtinExecutables;
…
      friend class ExceptionScope;
      friend class ThrowScope;
+     friend class VMTraps;
      friend class WTF::DoublyLinkedListNode<VM>;
  };
trunk/Source/JavaScriptCore/runtime/VMTraps.cpp
r213295 r213652 27 27 #include "VMTraps.h" 28 28 29 #include "CallFrame.h" 30 #include "CodeBlock.h" 31 #include "CodeBlockSet.h" 32 #include "DFGCommonData.h" 33 #include "ExceptionHelpers.h" 34 #include "HeapInlines.h" 35 #include "LLIntPCRanges.h" 36 #include "MachineStackMarker.h" 37 #include "MacroAssembler.h" 38 #include "VM.h" 39 #include "VMInspector.h" 40 #include "Watchdog.h" 41 #include <wtf/ProcessID.h> 42 43 #if OS(DARWIN) 44 #include <signal.h> 45 #endif 46 29 47 namespace JSC { 30 48 49 ALWAYS_INLINE VM& VMTraps::vm() const 50 { 51 return *bitwise_cast<VM*>(bitwise_cast<uintptr_t>(this) - OBJECT_OFFSETOF(VM, m_traps)); 52 } 53 54 #if ENABLE(SIGNAL_BASED_VM_TRAPS) 55 56 struct sigaction originalSigusr1Action; 57 struct sigaction originalSigtrapAction; 58 59 #if CPU(X86_64) 60 61 struct SignalContext { 62 SignalContext(mcontext_t& mcontext) 63 : mcontext(mcontext) 64 , trapPC(reinterpret_cast<void*>(mcontext->__ss.__rip)) 65 , stackPointer(reinterpret_cast<void*>(mcontext->__ss.__rsp)) 66 , framePointer(reinterpret_cast<void*>(mcontext->__ss.__rbp)) 67 { 68 // On X86_64, SIGTRAP reports the address after the trapping PC. So, dec by 1. 69 trapPC = reinterpret_cast<uint8_t*>(trapPC) - 1; 70 } 71 72 void adjustPCToPointToTrappingInstruction() 73 { 74 mcontext->__ss.__rip = reinterpret_cast<uintptr_t>(trapPC); 75 } 76 77 mcontext_t& mcontext; 78 void* trapPC; 79 void* stackPointer; 80 void* framePointer; 81 }; 82 83 #elif CPU(X86) 84 85 struct SignalContext { 86 SignalContext(mcontext_t& mcontext) 87 : mcontext(mcontext) 88 , trapPC(reinterpret_cast<void*>(mcontext->__ss.__eip)) 89 , stackPointer(reinterpret_cast<void*>(mcontext->__ss.__esp)) 90 , framePointer(reinterpret_cast<void*>(mcontext->__ss.__ebp)) 91 { 92 // On X86, SIGTRAP reports the address after the trapping PC. So, dec by 1. 
93 trapPC = reinterpret_cast<uint8_t*>(trapPC) - 1; 94 } 95 96 void adjustPCToPointToTrappingInstruction() 97 { 98 mcontext->__ss.__eip = reinterpret_cast<uintptr_t>(trapPC); 99 } 100 101 mcontext_t& mcontext; 102 void* trapPC; 103 void* stackPointer; 104 void* framePointer; 105 }; 106 107 #elif CPU(ARM64) || CPU(ARM_THUMB2) || CPU(ARM) 108 109 struct SignalContext { 110 SignalContext(mcontext_t& mcontext) 111 : mcontext(mcontext) 112 , trapPC(reinterpret_cast<void*>(mcontext->__ss.__pc)) 113 , stackPointer(reinterpret_cast<void*>(mcontext->__ss.__sp)) 114 #if CPU(ARM64) 115 , framePointer(reinterpret_cast<void*>(mcontext->__ss.__fp)) 116 #elif CPU(ARM_THUMB2) 117 , framePointer(reinterpret_cast<void*>(mcontext->__ss.__r[7])) 118 #elif CPU(ARM) 119 , framePointer(reinterpret_cast<void*>(mcontext->__ss.__r[11])) 120 #endif 121 { } 122 123 void adjustPCToPointToTrappingInstruction() { } 124 125 mcontext_t& mcontext; 126 void* trapPC; 127 void* stackPointer; 128 void* framePointer; 129 }; 130 131 #endif 132 133 inline static bool vmIsInactive(VM& vm) 134 { 135 return !vm.entryScope && !vm.ownerThread(); 136 } 137 138 static Expected<std::pair<VM*, StackBounds>, VMTraps::Error> findActiveVMAndStackBounds(SignalContext& context) 139 { 140 VMInspector& inspector = VMInspector::instance(); 141 auto locker = tryHoldLock(inspector.getLock()); 142 if (UNLIKELY(!locker)) 143 return makeUnexpected(VMTraps::Error::LockUnavailable); 144 145 VM* activeVM = nullptr; 146 StackBounds stackBounds = StackBounds::emptyBounds(); 147 void* stackPointer = context.stackPointer; 148 bool unableToAcquireMachineThreadsLock = false; 149 inspector.iterate(locker, [&] (VM& vm) { 150 if (vmIsInactive(vm)) 151 return VMInspector::FunctorStatus::Continue; 152 153 auto& machineThreads = vm.heap.machineThreads(); 154 auto machineThreadsLocker = tryHoldLock(machineThreads.getLock()); 155 if (UNLIKELY(!machineThreadsLocker)) { 156 unableToAcquireMachineThreadsLock = true; 157 return 
VMInspector::FunctorStatus::Continue; // Try next VM. 158 } 159 160 for (MachineThreads::Thread* thread = machineThreads.threadsListHead(machineThreadsLocker); thread; thread = thread->next) { 161 RELEASE_ASSERT(thread->stackBase); 162 RELEASE_ASSERT(thread->stackEnd); 163 if (stackPointer <= thread->stackBase && stackPointer >= thread->stackEnd) { 164 activeVM = &vm; 165 stackBounds = StackBounds(thread->stackBase, thread->stackEnd); 166 return VMInspector::FunctorStatus::Done; 167 } 168 } 169 return VMInspector::FunctorStatus::Continue; 170 }); 171 172 if (!activeVM && unableToAcquireMachineThreadsLock) 173 return makeUnexpected(VMTraps::Error::LockUnavailable); 174 return std::make_pair(activeVM, stackBounds); 175 } 176 177 static void handleSigusr1(int signalNumber, siginfo_t* info, void* uap) 178 { 179 SignalContext context(static_cast<ucontext_t*>(uap)->uc_mcontext); 180 auto activeVMAndStackBounds = findActiveVMAndStackBounds(context); 181 if (activeVMAndStackBounds) { 182 VM* vm = activeVMAndStackBounds.value().first; 183 if (vm) { 184 StackBounds stackBounds = activeVMAndStackBounds.value().second; 185 VMTraps& traps = vm->traps(); 186 if (traps.needTrapHandling()) 187 traps.tryInstallTrapBreakpoints(context, stackBounds); 188 } 189 } 190 191 auto originalAction = originalSigusr1Action.sa_sigaction; 192 if (originalAction) 193 originalAction(signalNumber, info, uap); 194 } 195 196 static void handleSigtrap(int signalNumber, siginfo_t* info, void* uap) 197 { 198 SignalContext context(static_cast<ucontext_t*>(uap)->uc_mcontext); 199 auto activeVMAndStackBounds = findActiveVMAndStackBounds(context); 200 if (!activeVMAndStackBounds) 201 return; // Let the SignalSender try again later. 202 203 VM* vm = activeVMAndStackBounds.value().first; 204 if (vm) { 205 VMTraps& traps = vm->traps(); 206 if (!traps.needTrapHandling()) 207 return; // The polling code beat us to handling the trap already. 
208 209 auto expectedSuccess = traps.tryJettisonCodeBlocksOnStack(context); 210 if (!expectedSuccess) 211 return; // Let the SignalSender try again later. 212 if (expectedSuccess.value()) 213 return; // We've successfully jettisoned the codeBlocks. 214 } 215 216 // If we get here, then this SIGTRAP is not due to a VMTrap. Let's do the default action. 217 auto originalAction = originalSigtrapAction.sa_sigaction; 218 if (originalAction) { 219 // It is always safe to just invoke the original handler using the sa_sigaction form 220 // without checking for the SA_SIGINFO flag. If the original handler is of the 221 // sa_handler form, it will just ignore the 2nd and 3rd arguments since sa_handler is a 222 // subset of sa_sigaction. This is what the man pages say the OS does anyway. 223 originalAction(signalNumber, info, uap); 224 } 225 226 // Pre-emptively restore the default handler, but we may roll it back below. 227 struct sigaction currentAction; 228 struct sigaction defaultAction; 229 defaultAction.sa_handler = SIG_DFL; 230 sigfillset(&defaultAction.sa_mask); 231 defaultAction.sa_flags = 0; 232 sigaction(SIGTRAP, &defaultAction, &currentAction); 233 234 if (currentAction.sa_sigaction != handleSigtrap) { 235 // This means that there's a client handler installed after us. This also means 236 // that the client handler thinks it was able to recover from the SIGTRAP, and 237 // did not uninstall itself. We can't argue with this because the signal isn't 238 // known to be from VMTraps. Hence, restore the client handler 239 // and keep going.
240 sigaction(SIGTRAP, &currentAction, nullptr); 241 } 242 } 243 244 static void installSignalHandlers() 245 { 246 typedef void (* SigactionHandler)(int, siginfo_t *, void *); 247 struct sigaction action; 248 249 action.sa_sigaction = reinterpret_cast<SigactionHandler>(handleSigusr1); 250 sigfillset(&action.sa_mask); 251 action.sa_flags = SA_SIGINFO; 252 sigaction(SIGUSR1, &action, &originalSigusr1Action); 253 254 action.sa_sigaction = reinterpret_cast<SigactionHandler>(handleSigtrap); 255 sigfillset(&action.sa_mask); 256 action.sa_flags = SA_SIGINFO; 257 sigaction(SIGTRAP, &action, &originalSigtrapAction); 258 } 259 260 ALWAYS_INLINE static CallFrame* sanitizedTopCallFrame(CallFrame* topCallFrame) 261 { 262 #if !defined(NDEBUG) && !CPU(ARM) && !CPU(MIPS) 263 // prepareForExternalCall() in DFGSpeculativeJIT.h may set topCallFrame to a bad word 264 // before calling native functions, but tryInstallTrapBreakpoints() below expects 265 // topCallFrame to be null if not set. 266 #if USE(JSVALUE64) 267 const uintptr_t badBeefWord = 0xbadbeef0badbeef; 268 #else 269 const uintptr_t badBeefWord = 0xbadbeef; 270 #endif 271 if (topCallFrame == reinterpret_cast<CallFrame*>(badBeefWord)) 272 topCallFrame = nullptr; 273 #endif 274 return topCallFrame; 275 } 276 277 static bool isSaneFrame(CallFrame* frame, CallFrame* calleeFrame, VMEntryFrame* entryFrame, StackBounds stackBounds) 278 { 279 if (reinterpret_cast<void*>(frame) >= reinterpret_cast<void*>(entryFrame)) 280 return false; 281 if (calleeFrame >= frame) 282 return false; 283 return stackBounds.contains(frame); 284 } 285 286 void VMTraps::tryInstallTrapBreakpoints(SignalContext& context, StackBounds stackBounds) 287 { 288 // This must be the initial signal to get the mutator thread's attention. 289 // Let's get the thread to break at invalidation points if needed.
290 VM& vm = this->vm(); 291 void* trapPC = context.trapPC; 292 293 CallFrame* callFrame = reinterpret_cast<CallFrame*>(context.framePointer); 294 295 auto codeBlockSetLocker = tryHoldLock(vm.heap.codeBlockSet().getLock()); 296 if (!codeBlockSetLocker) 297 return; // Let the SignalSender try again later. 298 299 { 300 auto allocator = vm.executableAllocator; 301 auto allocatorLocker = tryHoldLock(allocator.getLock()); 302 if (!allocatorLocker) 303 return; // Let the SignalSender try again later. 304 305 if (allocator.isValidExecutableMemory(allocatorLocker, trapPC)) { 306 if (vm.isExecutingInRegExpJIT) { 307 // We need to do this because a regExpJIT frame isn't a JS frame. 308 callFrame = sanitizedTopCallFrame(vm.topCallFrame); 309 } 310 } else if (LLInt::isLLIntPC(trapPC)) { 311 // The framePointer probably has the callFrame. We're good to go. 312 } else { 313 // We resort to topCallFrame to see if we can get anything 314 // useful. We usually get here when we're executing C code. 315 callFrame = sanitizedTopCallFrame(vm.topCallFrame); 316 } 317 } 318 319 CodeBlock* foundCodeBlock = nullptr; 320 VMEntryFrame* vmEntryFrame = vm.topVMEntryFrame; 321 322 // We don't have a callee to start with. So, use the end of the stack to keep the 323 // isSaneFrame() checker below happy for the first iteration. It will still check 324 // to ensure that the address is in the stackBounds. 325 CallFrame* calleeFrame = reinterpret_cast<CallFrame*>(stackBounds.end()); 326 327 if (!vmEntryFrame || !callFrame) 328 return; // Not running JS code. Let the SignalSender try again later. 329 330 do { 331 if (!isSaneFrame(callFrame, calleeFrame, vmEntryFrame, stackBounds)) 332 return; // Let the SignalSender try again later. 
333 334 CodeBlock* candidateCodeBlock = callFrame->codeBlock(); 335 if (candidateCodeBlock && vm.heap.codeBlockSet().contains(codeBlockSetLocker, candidateCodeBlock)) { 336 foundCodeBlock = candidateCodeBlock; 337 break; 338 } 339 340 calleeFrame = callFrame; 341 callFrame = callFrame->callerFrame(vmEntryFrame); 342 343 } while (callFrame && vmEntryFrame); 344 345 if (!foundCodeBlock) { 346 // We may have just entered the frame and the codeBlock pointer is not 347 // initialized yet. Just bail and let the SignalSender try again later. 348 return; 349 } 350 351 if (JITCode::isOptimizingJIT(foundCodeBlock->jitType())) { 352 auto locker = tryHoldLock(m_lock); 353 if (!locker) 354 return; // Let the SignalSender try again later. 355 356 if (!foundCodeBlock->hasInstalledVMTrapBreakpoints()) 357 foundCodeBlock->installVMTrapBreakpoints(); 358 return; 359 } 360 } 361 362 auto VMTraps::tryJettisonCodeBlocksOnStack(SignalContext& context) -> Expected<bool, Error> 363 { 364 VM& vm = this->vm(); 365 auto codeBlockSetLocker = tryHoldLock(vm.heap.codeBlockSet().getLock()); 366 if (!codeBlockSetLocker) 367 return makeUnexpected(Error::LockUnavailable); 368 369 CallFrame* topCallFrame = reinterpret_cast<CallFrame*>(context.framePointer); 370 void* trapPC = context.trapPC; 371 bool trapPCIsVMTrap = false; 372 373 vm.heap.forEachCodeBlockIgnoringJITPlans(codeBlockSetLocker, [&] (CodeBlock* codeBlock) { 374 if (!codeBlock->hasInstalledVMTrapBreakpoints()) 375 return false; // Not found yet. 376 377 JITCode* jitCode = codeBlock->jitCode().get(); 378 ASSERT(JITCode::isOptimizingJIT(jitCode->jitType())); 379 if (jitCode->dfgCommon()->isVMTrapBreakpoint(trapPC)) { 380 trapPCIsVMTrap = true; 381 // At the codeBlock trap point, we're guaranteed that: 382 // 1. the pc is not in the middle of any range of JIT code which invalidation points 383 // may write over. Hence, it's now safe to patch those invalidation points and 384 // jettison the codeBlocks. 385 // 2. 
The top frame must be an optimized JS frame. 386 ASSERT(codeBlock == topCallFrame->codeBlock()); 387 codeBlock->jettison(Profiler::JettisonDueToVMTraps); 388 return true; 389 } 390 391 return false; // Not found yet. 392 }); 393 394 if (!trapPCIsVMTrap) 395 return false; 396 397 invalidateCodeBlocksOnStack(codeBlockSetLocker, topCallFrame); 398 399 // Re-run the trapping instruction now that we've patched it with the invalidation 400 // OSR exit off-ramp. 401 context.adjustPCToPointToTrappingInstruction(); 402 return true; 403 } 404 405 void VMTraps::invalidateCodeBlocksOnStack() 406 { 407 invalidateCodeBlocksOnStack(vm().topCallFrame); 408 } 409 410 void VMTraps::invalidateCodeBlocksOnStack(ExecState* topCallFrame) 411 { 412 auto codeBlockSetLocker = holdLock(vm().heap.codeBlockSet().getLock()); 413 invalidateCodeBlocksOnStack(codeBlockSetLocker, topCallFrame); 414 } 415 416 void VMTraps::invalidateCodeBlocksOnStack(Locker<Lock>&, ExecState* topCallFrame) 417 { 418 if (!m_needToInvalidatedCodeBlocks) 419 return; 420 421 m_needToInvalidatedCodeBlocks = false; 422 423 VMEntryFrame* vmEntryFrame = vm().topVMEntryFrame; 424 CallFrame* callFrame = topCallFrame; 425 426 if (!vmEntryFrame) 427 return; // Not running JS code. Nothing to invalidate. 
428 429 while (callFrame) { 430 CodeBlock* codeBlock = callFrame->codeBlock(); 431 if (codeBlock && JITCode::isOptimizingJIT(codeBlock->jitType())) 432 codeBlock->jettison(Profiler::JettisonDueToVMTraps); 433 callFrame = callFrame->callerFrame(vmEntryFrame); 434 } 435 } 436 437 #endif // ENABLE(SIGNAL_BASED_VM_TRAPS) 438 439 VMTraps::VMTraps() 440 { 441 #if ENABLE(SIGNAL_BASED_VM_TRAPS) 442 if (!Options::usePollingTraps()) { 443 static std::once_flag once; 444 std::call_once(once, [] { 445 installSignalHandlers(); 446 }); 447 } 448 #endif 449 } 450 451 void VMTraps::willDestroyVM() 452 { 453 #if ENABLE(SIGNAL_BASED_VM_TRAPS) 454 while (!m_signalSenders.isEmpty()) { 455 RefPtr<SignalSender> sender; 456 { 457 // We don't want to be holding the VMTraps lock when calling 458 // SignalSender::willDestroyVM() because SignalSender::willDestroyVM() 459 // will acquire the SignalSender lock, and SignalSender::send() needs 460 // to acquire these locks in the opposite order. 461 auto locker = holdLock(m_lock); 462 sender = m_signalSenders.takeAny(); 463 } 464 sender->willDestroyVM(); 465 } 466 #endif 467 } 468 469 #if ENABLE(SIGNAL_BASED_VM_TRAPS) 470 void VMTraps::addSignalSender(VMTraps::SignalSender* sender) 471 { 472 auto locker = holdLock(m_lock); 473 m_signalSenders.add(sender); 474 } 475 476 void VMTraps::removeSignalSender(VMTraps::SignalSender* sender) 477 { 478 auto locker = holdLock(m_lock); 479 m_signalSenders.remove(sender); 480 } 481 482 void VMTraps::SignalSender::willDestroyVM() 483 { 484 auto locker = holdLock(m_lock); 485 m_vm = nullptr; 486 } 487 488 void VMTraps::SignalSender::send() 489 { 490 while (true) { 491 // We need a nested scope so that we'll release the lock before we sleep below. 
492 { 493 auto locker = holdLock(m_lock); 494 if (!m_vm) 495 break; 496 497 VM& vm = *m_vm; 498 auto optionalOwnerThread = vm.ownerThread(); 499 if (optionalOwnerThread) { 500 platformThreadSignal(optionalOwnerThread.value(), SIGUSR1); 501 break; 502 } 503 504 if (vmIsInactive(vm)) 505 break; 506 507 VMTraps::Mask mask(m_eventType); 508 if (!vm.needTrapHandling(mask)) 509 break; 510 } 511 512 sleepMS(1); 513 } 514 515 auto locker = holdLock(m_lock); 516 if (m_vm) 517 m_vm->traps().removeSignalSender(this); 518 } 519 #endif // ENABLE(SIGNAL_BASED_VM_TRAPS) 520 31 521 void VMTraps::fireTrap(VMTraps::EventType eventType) 32 522 { 33 auto locker = holdLock(m_lock); 34 setTrapForEvent(locker, eventType); 523 ASSERT(!vm().currentThreadIsHoldingAPILock()); 524 { 525 auto locker = holdLock(m_lock); 526 setTrapForEvent(locker, eventType); 527 m_needToInvalidatedCodeBlocks = true; 528 } 529 530 #if ENABLE(SIGNAL_BASED_VM_TRAPS) 531 if (!Options::usePollingTraps()) { 532 // sendSignal() can loop until it has confirmation that the mutator thread 533 // has received the trap request. We'll call it from another thread so that 534 // fireTrap() does not block.
535 RefPtr<SignalSender> sender = adoptRef(new SignalSender(vm(), eventType)); 536 addSignalSender(sender.get()); 537 createThread("jsc.vmtraps.signalling.thread", [sender] { 538 sender->send(); 539 }); 540 } 541 #endif 542 } 543 544 void VMTraps::handleTraps(ExecState* exec, VMTraps::Mask mask) 545 { 546 VM& vm = this->vm(); 547 auto scope = DECLARE_THROW_SCOPE(vm); 548 549 ASSERT(needTrapHandling(mask)); 550 while (needTrapHandling(mask)) { 551 auto eventType = takeTopPriorityTrap(mask); 552 switch (eventType) { 553 case NeedDebuggerBreak: 554 dataLog("VM ", RawPointer(&vm), " on pid ", getCurrentProcessID(), " received NeedDebuggerBreak trap\n"); 555 invalidateCodeBlocksOnStack(exec); 556 break; 557 558 case NeedWatchdogCheck: 559 ASSERT(vm.watchdog()); 560 if (LIKELY(!vm.watchdog()->shouldTerminate(exec))) 561 continue; 562 FALLTHROUGH; 563 564 case NeedTermination: 565 invalidateCodeBlocksOnStack(exec); 566 throwException(exec, scope, createTerminatedExecutionException(&vm)); 567 return; 568 569 default: 570 RELEASE_ASSERT_NOT_REACHED(); 571 } 572 } 35 573 } 36 574 -
trunk/Source/JavaScriptCore/runtime/VMTraps.h
r213295 r213652 26 26 #pragma once 27 27 28 #include <wtf/Expected.h> 29 #include <wtf/HashSet.h> 28 30 #include <wtf/Lock.h> 29 31 #include <wtf/Locker.h> 32 #include <wtf/RefPtr.h> 33 #include <wtf/StackBounds.h> 30 34 31 35 namespace JSC { 32 36 37 class ExecState; 33 38 class VM; 34 39 … … 36 41 typedef uint8_t BitField; 37 42 public: 43 enum class Error { 44 None, 45 LockUnavailable 46 }; 47 38 48 enum EventType { 39 49 // Sorted in servicing priority order from highest to lowest. … … 76 86 }; 77 87 88 VMTraps(); 89 ~VMTraps() 90 { 91 #if ENABLE(SIGNAL_BASED_VM_TRAPS) 92 ASSERT(m_signalSenders.isEmpty()); 93 #endif 94 } 95 96 void willDestroyVM(); 97 98 bool needTrapHandling() { return m_needTrapHandling; } 78 99 bool needTrapHandling(Mask mask) { return m_needTrapHandling & mask.bits(); } 79 100 void* needTrapHandlingAddress() { return &m_needTrapHandling; } 80 101 102 void notifyGrabAllLocks() 103 { 104 if (needTrapHandling()) 105 invalidateCodeBlocksOnStack(); 106 } 107 81 108 JS_EXPORT_PRIVATE void fireTrap(EventType); 82 109 83 EventType takeTopPriorityTrap(Mask); 110 void handleTraps(ExecState*, VMTraps::Mask); 111 112 void tryInstallTrapBreakpoints(struct SignalContext&, StackBounds); 113 Expected<bool, Error> tryJettisonCodeBlocksOnStack(struct SignalContext&); 84 114 85 115 private: … … 102 132 } 103 133 134 EventType takeTopPriorityTrap(Mask); 135 136 #if ENABLE(SIGNAL_BASED_VM_TRAPS) 137 class SignalSender : public ThreadSafeRefCounted<SignalSender> { 138 public: 139 SignalSender(VM& vm, EventType eventType) 140 : m_vm(&vm) 141 , m_eventType(eventType) 142 { } 143 144 void willDestroyVM(); 145 void send(); 146 147 private: 148 Lock m_lock; 149 VM* m_vm; 150 EventType m_eventType; 151 }; 152 153 void invalidateCodeBlocksOnStack(); 154 void invalidateCodeBlocksOnStack(ExecState* topCallFrame); 155 void invalidateCodeBlocksOnStack(Locker<Lock>& codeBlockSetLocker, ExecState* topCallFrame); 156 157 void addSignalSender(SignalSender*); 158 void 
removeSignalSender(SignalSender*); 159 #else 160 void invalidateCodeBlocksOnStack() { } 161 void invalidateCodeBlocksOnStack(ExecState*) { } 162 #endif 163 104 164 Lock m_lock; 105 165 union { … … 107 167 BitField m_trapsBitField; 108 168 }; 169 bool m_needToInvalidatedCodeBlocks { false }; 170 171 #if ENABLE(SIGNAL_BASED_VM_TRAPS) 172 HashSet<RefPtr<SignalSender>> m_signalSenders; 173 #endif 109 174 110 175 friend class LLIntOffsetsExtractor; 176 friend class SignalSender; 111 177 }; 112 178 -
trunk/Source/JavaScriptCore/tools/VMInspector.cpp
r213238 r213652 147 147 // they are handed to the JIT plans. Those codeBlocks will have a null jitCode, 148 148 // but we check for that in our lambda functor. 149 // 2. CodeBlockSet::iterate() will acquire the CodeBlockSet lock before iterating. 149 // 2. We will acquire the CodeBlockSet lock before iterating. 150 150 // This ensures that a CodeBlock won't be GCed while we're iterating. 151 151 // 3. We do a tryLock on the CodeBlockSet's lock first to ensure that it is … 154 154 // re-entering the lock and deadlocking on it. 155 155 156 auto& lock = vm.heap.codeBlockSet().getLock(); 157 bool isSafeToLock = ensureIsSafeToLock(lock); 156 auto& codeBlockSetLock = vm.heap.codeBlockSet().getLock(); 157 bool isSafeToLock = ensureIsSafeToLock(codeBlockSetLock); 158 158 if (!isSafeToLock) { 159 159 hasTimeout = true; … 161 161 } 162 162 163 vm.heap.forEachCodeBlockIgnoringJITPlans([&] (CodeBlock* cb) { 163 auto locker = holdLock(codeBlockSetLock); 164 vm.heap.forEachCodeBlockIgnoringJITPlans(locker, [&] (CodeBlock* cb) { 164 165 JITCode* jitCode = cb->jitCode().get(); 165 166 if (!jitCode) { -
trunk/Source/JavaScriptCore/tools/VMInspector.h
r211684 r213652 47 47 void remove(VM*); 48 48 49 Lock& getLock() { return m_lock; } 50 51 enum class FunctorStatus { 52 Continue, 53 Done 54 }; 55 56 template <typename Functor> 57 void iterate(const Locker&, const Functor& functor) { iterate(functor); } 58 49 59 Expected<Locker, Error> lock(Seconds timeout = Seconds::infinity()); 50 60 … … 53 63 54 64 private: 55 enum class FunctorStatus {56 Continue,57 Done58 };59 65 template <typename Functor> void iterate(const Functor& functor) 60 66 { -
trunk/Source/WTF/ChangeLog
r213645 r213652 1 2017-03-09 Mark Lam <mark.lam@apple.com> 2 3 Make the VM Traps mechanism non-polling for the DFG and FTL. 4 https://bugs.webkit.org/show_bug.cgi?id=168920 5 <rdar://problem/30738588> 6 7 Reviewed by Filip Pizlo. 8 9 Make StackBounds more useful for checking if a pointer is within stack bounds. 10 11 * wtf/MetaAllocator.cpp: 12 (WTF::MetaAllocator::isInAllocatedMemory): 13 * wtf/MetaAllocator.h: 14 * wtf/Platform.h: 15 * wtf/StackBounds.h: 16 (WTF::StackBounds::emptyBounds): 17 (WTF::StackBounds::StackBounds): 18 (WTF::StackBounds::isEmpty): 19 (WTF::StackBounds::contains): 20 1 21 2017-03-07 Filip Pizlo <fpizlo@apple.com> 2 22 -
trunk/Source/WTF/wtf/MetaAllocator.cpp
r201782 r213652 427 427 } 428 428 429 bool MetaAllocator::isInAllocatedMemory(const LockHolder&, void* address) 429 bool MetaAllocator::isInAllocatedMemory(const AbstractLocker&, void* address) 430 430 { 431 431 ASSERT(m_lock.isLocked()); -
trunk/Source/WTF/wtf/MetaAllocator.h
r205989 r213652 99 99 100 100 Lock& getLock() { return m_lock; } 101 WTF_EXPORT_PRIVATE bool isInAllocatedMemory(const LockHolder&, void* address); 101 WTF_EXPORT_PRIVATE bool isInAllocatedMemory(const AbstractLocker&, void* address); 102 102 103 103 #if ENABLE(META_ALLOCATOR_PROFILE) -
trunk/Source/WTF/wtf/Platform.h
r213645 r213652 1 1 /* 2 * Copyright (C) 2006-2009, 2013-2015 Apple Inc. All rights reserved. 2 * Copyright (C) 2006-2017 Apple Inc. All rights reserved. 3 3 * Copyright (C) 2007-2009 Torch Mobile, Inc. 4 4 * Copyright (C) 2010, 2011 Research In Motion Limited. All rights reserved. … 914 914 #endif 915 915 916 #if OS(DARWIN) && ENABLE(JIT) 917 #define ENABLE_SIGNAL_BASED_VM_TRAPS 1 918 #endif 919 916 920 /* CSS Selector JIT Compiler */ 917 921 #if !defined(ENABLE_CSS_SELECTOR_JIT) -
trunk/Source/WTF/wtf/StackBounds.h
r189517 r213652 1 1 /* 2 * Copyright (C) 2010, 2013 Apple Inc. All Rights Reserved. 2 * Copyright (C) 2010-2017 Apple Inc. All Rights Reserved. 3 3 * 4 4 * Redistribution and use in source and binary forms, with or without … 41 41 42 42 public: 43 static StackBounds emptyBounds() { return StackBounds(); } 44 43 45 static StackBounds currentThreadStackBounds() 44 46 { … 47 49 bounds.checkConsistency(); 48 50 return bounds; 51 } 52 53 StackBounds(void* origin, void* end) 54 : m_origin(origin) 55 , m_bound(end) 56 { 57 checkConsistency(); 49 58 } 50 59 … 66 75 return static_cast<char*>(m_origin) - static_cast<char*>(m_bound); 67 76 return static_cast<char*>(m_bound) - static_cast<char*>(m_origin); 77 } 78 79 bool isEmpty() const { return !m_origin; } 80 81 bool contains(void* p) const 82 { 83 if (isEmpty()) 84 return false; 85 if (isGrowingDownward()) 86 return (m_origin >= p) && (p > m_bound); 87 return (m_bound > p) && (p >= m_origin); 68 88 } 69 89 -
trunk/Source/WebCore/ChangeLog
r213650 r213652 1 2017-03-09 Mark Lam <mark.lam@apple.com> 2 3 Make the VM Traps mechanism non-polling for the DFG and FTL. 4 https://bugs.webkit.org/show_bug.cgi?id=168920 5 <rdar://problem/30738588> 6 7 Reviewed by Filip Pizlo. 8 9 No new tests needed. This is covered by existing tests. 10 11 * bindings/js/WorkerScriptController.cpp: 12 (WebCore::WorkerScriptController::WorkerScriptController): 13 (WebCore::WorkerScriptController::scheduleExecutionTermination): 14 1 15 2017-03-08 Dean Jackson <dino@apple.com> 2 16 -
trunk/Source/WebCore/bindings/js/WorkerScriptController.cpp
r213107 r213652 52 52 { 53 53 m_vm->heap.acquireAccess(); // It's not clear that we have good discipline for heap access, so turn it on permanently. 54 m_vm->setNeedAsynchronousTerminationSupport(); 55 54 JSVMClientData::initNormalWorld(m_vm.get()); 56 55 } … 152 151 void WorkerScriptController::scheduleExecutionTermination() 153 152 { 154 // The mutex provides a memory barrier to ensure that once 155 // termination is scheduled, isTerminatingExecution() will 156 // accurately reflect that state when called from another thread. 157 LockHolder locker(m_scheduledTerminationMutex); 158 m_isTerminatingExecution = true; 153 { 154 // The mutex provides a memory barrier to ensure that once 155 // termination is scheduled, isTerminatingExecution() will 156 // accurately reflect that state when called from another thread. 157 LockHolder locker(m_scheduledTerminationMutex); 158 m_isTerminatingExecution = true; 159 } 160 m_vm->notifyNeedTermination(); 160 161 }