Changeset 222058 in webkit

- Timestamp: Sep 14, 2017, 4:08:54 PM
- Location: trunk/Source
- Files: 7 edited
trunk/Source/JavaScriptCore/ChangeLog
(diff from r222035 to r222058)

2017-09-14  Mark Lam  <mark.lam@apple.com>

        AddressSanitizer: stack-buffer-underflow in JSC::Probe::Page::Page
        https://bugs.webkit.org/show_bug.cgi?id=176874
        <rdar://problem/34436415>

        Reviewed by Saam Barati.

        1. Make Probe::Stack play nice with ASan by:

           a. using a local memcpy implementation that suppresses ASan on ASan builds.
              We don't want to use std::memcpy(), which validates stack memory, because
              we are intentionally copying stack memory beyond the current frame.

           b. changing Stack::s_chunkSize to equal sizeof(uintptr_t) on ASan builds.
              This ensures that Page::flushWrites() only writes stack memory that was
              modified by a probe. The probes should only modify stack memory that
              belongs to JSC stack data structures. We don't want to inadvertently
              modify adjacent words that may belong to ASan (which may happen if
              s_chunkSize is larger than sizeof(uintptr_t)).

           c. fixing a bug in Page dirtyBits management for when the size of the value to
              write is greater than s_chunkSize. The fix is generic, but in practice,
              this currently only manifests on 32-bit ASan builds because
              sizeof(uintptr_t) and s_chunkSize are 32-bit, and we may write 64-bit
              values.

           d. making Page::m_dirtyBits 64 bits always. This maximizes the number of
              s_chunksPerPage we can have even on ASan builds.

        2. Fixed the bottommost Probe::Context and Probe::Stack get/set methods to use
           std::memcpy to avoid strict aliasing issues.

        3. Optimized the implementation of Page::physicalAddressFor().

        4. Optimized the implementation of Stack::set() in the recording of the low
           watermark. We just record the lowest raw pointer now, and only compute the
           alignment to its chunk boundary later when the low watermark is requested.

        5. Changed a value in testmasm to make the test less vulnerable to rounding issues.

        No new test needed because this is already covered by testmasm with ASan enabled.

        * assembler/ProbeContext.h:
        (JSC::Probe::CPUState::gpr const):
        (JSC::Probe::CPUState::spr const):
        (JSC::Probe::Context::gpr):
        (JSC::Probe::Context::spr):
        (JSC::Probe::Context::fpr):
        (JSC::Probe::Context::gprName):
        (JSC::Probe::Context::sprName):
        (JSC::Probe::Context::fprName):
        (JSC::Probe::Context::gpr const):
        (JSC::Probe::Context::spr const):
        (JSC::Probe::Context::fpr const):
        (JSC::Probe::Context::pc):
        (JSC::Probe::Context::fp):
        (JSC::Probe::Context::sp):
        (JSC::Probe:: const): Deleted.
        * assembler/ProbeStack.cpp:
        (JSC::Probe::copyStackPage):
        (JSC::Probe::Page::Page):
        (JSC::Probe::Page::flushWrites):
        * assembler/ProbeStack.h:
        (JSC::Probe::Page::get):
        (JSC::Probe::Page::set):
        (JSC::Probe::Page::dirtyBitFor):
        (JSC::Probe::Page::physicalAddressFor):
        (JSC::Probe::Stack::lowWatermark):
        (JSC::Probe::Stack::get):
        (JSC::Probe::Stack::set):
        * assembler/testmasm.cpp:
        (JSC::testProbeModifiesStackValues):
trunk/Source/JavaScriptCore/assembler/ProbeContext.h
(diff from r222009 to r222058)

     inline double& fpr(FPRegisterID);

-    template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
-    T gpr(RegisterID) const;
-    template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
-    T gpr(RegisterID) const;
-    template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
-    T spr(SPRegisterID) const;
-    template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
-    T spr(SPRegisterID) const;
+    template<typename T> T gpr(RegisterID) const;
+    template<typename T> T spr(SPRegisterID) const;
     template<typename T> T fpr(FPRegisterID) const;
…
-template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
+template<typename T>
 T CPUState::gpr(RegisterID id) const
 {
     CPUState* cpu = const_cast<CPUState*>(this);
-    return static_cast<T>(cpu->gpr(id));
-}
-
-template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
-T CPUState::gpr(RegisterID id) const
-{
-    CPUState* cpu = const_cast<CPUState*>(this);
-    return reinterpret_cast<T>(cpu->gpr(id));
-}
-
-template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
+    auto& from = cpu->gpr(id);
+    typename std::remove_const<T>::type to { };
+    std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+    return to;
+}
+
+template<typename T>
 T CPUState::spr(SPRegisterID id) const
 {
     CPUState* cpu = const_cast<CPUState*>(this);
-    return static_cast<T>(cpu->spr(id));
-}
-
-template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
-T CPUState::spr(SPRegisterID id) const
-{
-    CPUState* cpu = const_cast<CPUState*>(this);
-    return reinterpret_cast<T>(cpu->spr(id));
+    auto& from = cpu->spr(id);
+    typename std::remove_const<T>::type to { };
+    std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+    return to;
 }
…
     { }

-    uintptr_t& gpr(RegisterID id) { return m_state->cpu.gpr(id); }
-    uintptr_t& spr(SPRegisterID id) { return m_state->cpu.spr(id); }
-    double& fpr(FPRegisterID id) { return m_state->cpu.fpr(id); }
-    const char* gprName(RegisterID id) { return m_state->cpu.gprName(id); }
-    const char* sprName(SPRegisterID id) { return m_state->cpu.sprName(id); }
-    const char* fprName(FPRegisterID id) { return m_state->cpu.fprName(id); }
-
-    void*& pc() { return m_state->cpu.pc(); }
-    void*& fp() { return m_state->cpu.fp(); }
-    void*& sp() { return m_state->cpu.sp(); }
-
-    template<typename T> T pc() { return m_state->cpu.pc<T>(); }
-    template<typename T> T fp() { return m_state->cpu.fp<T>(); }
-    template<typename T> T sp() { return m_state->cpu.sp<T>(); }
+    uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
+    uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
+    double& fpr(FPRegisterID id) { return cpu.fpr(id); }
+    const char* gprName(RegisterID id) { return cpu.gprName(id); }
+    const char* sprName(SPRegisterID id) { return cpu.sprName(id); }
+    const char* fprName(FPRegisterID id) { return cpu.fprName(id); }
+
+    template<typename T> T gpr(RegisterID id) const { return cpu.gpr<T>(id); }
+    template<typename T> T spr(SPRegisterID id) const { return cpu.spr<T>(id); }
+    template<typename T> T fpr(FPRegisterID id) const { return cpu.fpr<T>(id); }
+
+    void*& pc() { return cpu.pc(); }
+    void*& fp() { return cpu.fp(); }
+    void*& sp() { return cpu.sp(); }
+
+    template<typename T> T pc() { return cpu.pc<T>(); }
+    template<typename T> T fp() { return cpu.fp<T>(); }
+    template<typename T> T sp() { return cpu.sp<T>(); }

     Stack& stack()
trunk/Source/JavaScriptCore/assembler/ProbeStack.cpp
(diff from r222009 to r222058)

 #include <memory>
+#include <wtf/StdLibExtras.h>

 #if ENABLE(MASM_PROBE)
…
 namespace Probe {

+#if ASAN_ENABLED
+// FIXME: we should consider using the copy function for both ASan and non-ASan builds.
+// https://bugs.webkit.org/show_bug.cgi?id=176961
+SUPPRESS_ASAN
+static void copyStackPage(void* dst, void* src, size_t size)
+{
+    ASSERT(roundUpToMultipleOf<sizeof(uintptr_t)>(dst) == dst);
+    ASSERT(roundUpToMultipleOf<sizeof(uintptr_t)>(src) == src);
+
+    uintptr_t* dstPointer = reinterpret_cast<uintptr_t*>(dst);
+    uintptr_t* srcPointer = reinterpret_cast<uintptr_t*>(src);
+    for (; size; size -= sizeof(uintptr_t))
+        *dstPointer++ = *srcPointer++;
+}
+#else
+#define copyStackPage(dst, src, size) std::memcpy(dst, src, size);
+#endif
+
 Page::Page(void* baseAddress)
     : m_baseLogicalAddress(baseAddress)
+    , m_physicalAddressOffset(reinterpret_cast<uint8_t*>(&m_buffer) - reinterpret_cast<uint8_t*>(baseAddress))
 {
-    memcpy(&m_buffer, baseAddress, s_pageSize);
+    copyStackPage(&m_buffer, baseAddress, s_pageSize);
 }

 void Page::flushWrites()
 {
-    uintptr_t dirtyBits = m_dirtyBits;
+    uint64_t dirtyBits = m_dirtyBits;
     size_t offset = 0;
     while (dirtyBits) {
…
             uint8_t* src = reinterpret_cast<uint8_t*>(&m_buffer) + startOffset;
             uint8_t* dst = reinterpret_cast<uint8_t*>(m_baseLogicalAddress) + startOffset;
-            memcpy(dst, src, size);
+            copyStackPage(dst, src, size);
         }
         dirtyBits = dirtyBits >> 1;
trunk/Source/JavaScriptCore/assembler/ProbeStack.h
(diff from r222009 to r222058)

 #pragma once

+#include "CPU.h"
 #include <wtf/HashMap.h>
 #include <wtf/StdLibExtras.h>
…
     T get(void* logicalAddress)
     {
-        return *physicalAddressFor<T*>(logicalAddress);
+        void* from = physicalAddressFor(logicalAddress);
+        typename std::remove_const<T>::type to { };
+        std::memcpy(&to, from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+        return to;
+    }
+    template<typename T>
+    T get(void* logicalBaseAddress, ptrdiff_t offset)
+    {
+        return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
     }
…
     void set(void* logicalAddress, T value)
     {
-        m_dirtyBits |= dirtyBitFor(logicalAddress);
-        *physicalAddressFor<T*>(logicalAddress) = value;
+        if (sizeof(T) <= s_chunkSize)
+            m_dirtyBits |= dirtyBitFor(logicalAddress);
+        else {
+            size_t numberOfChunks = roundUpToMultipleOf<sizeof(T)>(s_chunkSize) / s_chunkSize;
+            uint8_t* dirtyAddress = reinterpret_cast<uint8_t*>(logicalAddress);
+            for (size_t i = 0; i < numberOfChunks; ++i, dirtyAddress += s_chunkSize)
+                m_dirtyBits |= dirtyBitFor(dirtyAddress);
+        }
+        void* to = physicalAddressFor(logicalAddress);
+        std::memcpy(to, &value, sizeof(T)); // Use std::memcpy to avoid strict aliasing issues.
+    }
+    template<typename T>
+    void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
+    {
+        set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
     }
…
 private:
-    uintptr_t dirtyBitFor(void* logicalAddress)
+    uint64_t dirtyBitFor(void* logicalAddress)
     {
         uintptr_t offset = reinterpret_cast<uintptr_t>(logicalAddress) & s_pageMask;
-        return static_cast<uintptr_t>(1) << (offset >> s_chunkSizeShift);
-    }
-
-    template<typename T, typename = typename std::enable_if<std::is_pointer<T>::value>::type>
-    T physicalAddressFor(void* logicalAddress)
-    {
-        uintptr_t offset = reinterpret_cast<uintptr_t>(logicalAddress) & s_pageMask;
-        void* physicalAddress = reinterpret_cast<uint8_t*>(&m_buffer) + offset;
-        return reinterpret_cast<T>(physicalAddress);
+        return static_cast<uint64_t>(1) << (offset >> s_chunkSizeShift);
+    }
+
+    void* physicalAddressFor(void* logicalAddress)
+    {
+        return reinterpret_cast<uint8_t*>(logicalAddress) + m_physicalAddressOffset;
     }
…
     void* m_baseLogicalAddress { nullptr };
-    uintptr_t m_dirtyBits { 0 };
-
+    ptrdiff_t m_physicalAddressOffset;
+    uint64_t m_dirtyBits { 0 };
+
+#if ASAN_ENABLED
+    // The ASan stack may contain poisoned words that may be manipulated at ASan's discretion.
+    // We would never touch those words anyway, but let's ensure that the page size is set
+    // such that the chunk size is guaranteed to be exactly sizeof(uintptr_t) so that we won't
+    // inadvertently overwrite one of ASan's words on the stack when we copy back the dirty
+    // chunks.
+    // FIXME: we should consider using the same page size for both ASan and non-ASan builds.
+    // https://bugs.webkit.org/show_bug.cgi?id=176961
+    static constexpr size_t s_pageSize = 64 * sizeof(uintptr_t); // because there are 64 bits in m_dirtyBits.
+#else // not ASAN_ENABLED
     static constexpr size_t s_pageSize = 1024;
+#endif // ASAN_ENABLED
     static constexpr uintptr_t s_pageMask = s_pageSize - 1;
-    static constexpr size_t s_chunksPerPage = sizeof(uintptr_t) * 8; // sizeof(m_dirtyBits) in bits.
+    static constexpr size_t s_chunksPerPage = sizeof(uint64_t) * 8; // number of bits in m_dirtyBits.
     static constexpr size_t s_chunkSize = s_pageSize / s_chunksPerPage;
     static constexpr uintptr_t s_chunkMask = s_chunkSize - 1;
-#if USE(JSVALUE64)
+#if ASAN_ENABLED
+    static_assert(s_chunkSize == sizeof(uintptr_t), "bad chunkSizeShift");
+    static constexpr size_t s_chunkSizeShift = is64Bit() ? 3 : 2;
+#else // no ASAN_ENABLED
     static constexpr size_t s_chunkSizeShift = 4;
-#else
-    static constexpr size_t s_chunkSizeShift = 5;
-#endif
+#endif // ASAN_ENABLED
     static_assert(s_pageSize > s_chunkSize, "bad pageSize or chunkSize");
     static_assert(s_chunkSize == (1 << s_chunkSizeShift), "bad chunkSizeShift");

     typedef typename std::aligned_storage<s_pageSize, std::alignment_of<uintptr_t>::value>::type Buffer;
…
     Stack(Stack&& other);

-    void* lowWatermark() { return m_lowWatermark; }
-
-    template<typename T>
-    typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
-    {
-        Page* page = pageFor(address);
-        return page->get<T>(address);
-    }
-
-    template<typename T, typename = typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
-    void set(void* address, T value)
-    {
-        Page* page = pageFor(address);
-        page->set<T>(address, value);
-
+    void* lowWatermark()
+    {
         // We use the chunkAddress for the low watermark because we'll be doing write backs
         // to the stack in increments of chunks. Hence, we'll treat the lowest address of
         // the chunk as the low watermark of any given set address.
-        void* chunkAddress = Page::chunkAddressFor(address);
-        if (chunkAddress < m_lowWatermark)
-            m_lowWatermark = chunkAddress;
-    }
-
-    template<typename T>
-    typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
+        return Page::chunkAddressFor(m_lowWatermark);
+    }
+
+    template<typename T>
+    T get(void* address)
     {
         Page* page = pageFor(address);
-        return bitwise_cast<double>(page->get<uint64_t>(address));
-    }
-
-    template<typename T, typename = typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
-    void set(void* address, double value)
-    {
-        set<uint64_t>(address, bitwise_cast<uint64_t>(value));
+        return page->get<T>(address);
+    }
+    template<typename T>
+    T get(void* logicalBaseAddress, ptrdiff_t offset)
+    {
+        return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
+    }
+
+    template<typename T>
+    void set(void* address, T value)
+    {
+        Page* page = pageFor(address);
+        page->set<T>(address, value);
+
+        if (address < m_lowWatermark)
+            m_lowWatermark = address;
+    }
+
+    template<typename T>
+    void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
+    {
+        set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
     }
trunk/Source/JavaScriptCore/assembler/testmasm.cpp
(diff from r221012 to r222058)

     uintptr_t* p = reinterpret_cast<uintptr_t*>(newSP);
     int count = 0;
-    stack.set<double>(p++, 1.23456789);
+    stack.set<double>(p++, 1.234567);
     if (is32Bit())
         p++; // On 32-bit targets, a double takes up 2 uintptr_t.
…
     uintptr_t* p = reinterpret_cast<uintptr_t*>(newSP);
     int count = 0;
-    CHECK_EQ(stack.get<double>(p++), 1.23456789);
+    CHECK_EQ(stack.get<double>(p++), 1.234567);
     if (is32Bit())
         p++; // On 32-bit targets, a double takes up 2 uintptr_t.
trunk/Source/WTF/ChangeLog
(diff from r222039 to r222058)

2017-09-14  Mark Lam  <mark.lam@apple.com>

        AddressSanitizer: stack-buffer-underflow in JSC::Probe::Page::Page
        https://bugs.webkit.org/show_bug.cgi?id=176874
        <rdar://problem/34436415>

        Reviewed by Saam Barati.

        Added a convenience version of roundUpToMultipleOf() so that it can be applied to
        pointers without the client having to cast explicitly.

        * wtf/StdLibExtras.h:
        (WTF::roundUpToMultipleOf):
trunk/Source/WTF/wtf/StdLibExtras.h
(diff from r217724 to r222058)

 /*
- * Copyright (C) 2008, 2016 Apple Inc. All Rights Reserved.
+ * Copyright (C) 2008-2017 Apple Inc. All Rights Reserved.
  * Copyright (C) 2013 Patrick Gansterer <paroga@paroga.com>
  *
…
 }

+template<size_t divisor, typename T> inline T* roundUpToMultipleOf(T* x)
+{
+    static_assert(sizeof(T*) == sizeof(size_t), "");
+    return reinterpret_cast<T*>(roundUpToMultipleOf<divisor>(reinterpret_cast<size_t>(x)));
+}
+
 enum BinarySearchMode {
     KeyMustBePresentInArray,