Changeset 94920 in WebKit

- Timestamp: Sep 10, 2011 10:49:36 PM
- Location: trunk
- Files: 8 added, 41 edited
trunk/Source/JavaScriptCore/ChangeLog
r94919 → r94920

2011-09-08  Filip Pizlo  <fpizlo@apple.com>

        The executable allocator makes it difficult to free individual
        chunks of executable memory
        https://bugs.webkit.org/show_bug.cgi?id=66363

        Reviewed by Oliver Hunt.

        Introduced a best-fit, balanced-tree based allocator. The allocator
        required a balanced tree that does not allocate memory and that
        permits the removal of individual nodes directly (as opposed to by
        key); neither AVLTree nor WebCore's PODRedBlackTree supported this.
        Changed all references to executable code to use a reference counted
        handle.

        * GNUmakefile.list.am:
        * JavaScriptCore.exp:
        * JavaScriptCore.vcproj/WTF/WTF.vcproj:
        * JavaScriptCore.xcodeproj/project.pbxproj:
        * assembler/AssemblerBuffer.h:
        (JSC::AssemblerBuffer::executableCopy):
        * assembler/LinkBuffer.h:
        (JSC::LinkBuffer::LinkBuffer):
        (JSC::LinkBuffer::finalizeCode):
        (JSC::LinkBuffer::linkCode):
        * assembler/MacroAssemblerCodeRef.h:
        (JSC::MacroAssemblerCodeRef::MacroAssemblerCodeRef):
        (JSC::MacroAssemblerCodeRef::createSelfManagedCodeRef):
        (JSC::MacroAssemblerCodeRef::executableMemory):
        (JSC::MacroAssemblerCodeRef::code):
        (JSC::MacroAssemblerCodeRef::size):
        (JSC::MacroAssemblerCodeRef::operator!):
        * assembler/X86Assembler.h:
        (JSC::X86Assembler::executableCopy):
        (JSC::X86Assembler::X86InstructionFormatter::executableCopy):
        * bytecode/CodeBlock.h:
        * bytecode/Instruction.h:
        * bytecode/StructureStubInfo.h:
        * dfg/DFGJITCompiler.cpp:
        (JSC::DFG::JITCompiler::compile):
        (JSC::DFG::JITCompiler::compileFunction):
        * dfg/DFGRepatch.cpp:
        (JSC::DFG::generateProtoChainAccessStub):
        (JSC::DFG::tryCacheGetByID):
        (JSC::DFG::tryBuildGetByIDList):
        (JSC::DFG::tryBuildGetByIDProtoList):
        (JSC::DFG::tryCachePutByID):
        * jit/ExecutableAllocator.cpp:
        (JSC::ExecutableAllocator::initializeAllocator):
        (JSC::ExecutableAllocator::ExecutableAllocator):
        (JSC::ExecutableAllocator::allocate):
        (JSC::ExecutableAllocator::committedByteCount):
        (JSC::ExecutableAllocator::dumpProfile):
        * jit/ExecutableAllocator.h:
        (JSC::ExecutableAllocator::dumpProfile):
        * jit/ExecutableAllocatorFixedVMPool.cpp:
        (JSC::ExecutableAllocator::initializeAllocator):
        (JSC::ExecutableAllocator::ExecutableAllocator):
        (JSC::ExecutableAllocator::isValid):
        (JSC::ExecutableAllocator::underMemoryPressure):
        (JSC::ExecutableAllocator::allocate):
        (JSC::ExecutableAllocator::committedByteCount):
        (JSC::ExecutableAllocator::dumpProfile):
        * jit/JIT.cpp:
        (JSC::JIT::privateCompile):
        * jit/JIT.h:
        (JSC::JIT::compileCTIMachineTrampolines):
        (JSC::JIT::compileCTINativeCall):
        * jit/JITCode.h:
        (JSC::JITCode::operator !):
        (JSC::JITCode::addressForCall):
        (JSC::JITCode::offsetOf):
        (JSC::JITCode::execute):
        (JSC::JITCode::start):
        (JSC::JITCode::size):
        (JSC::JITCode::getExecutableMemory):
        (JSC::JITCode::HostFunction):
        (JSC::JITCode::JITCode):
        * jit/JITOpcodes.cpp:
        (JSC::JIT::privateCompileCTIMachineTrampolines):
        (JSC::JIT::privateCompileCTINativeCall):
        * jit/JITOpcodes32_64.cpp:
        (JSC::JIT::privateCompileCTIMachineTrampolines):
        (JSC::JIT::privateCompileCTINativeCall):
        * jit/JITPropertyAccess.cpp:
        (JSC::JIT::stringGetByValStubGenerator):
        (JSC::JIT::emitSlow_op_get_by_val):
        (JSC::JIT::privateCompilePutByIdTransition):
        (JSC::JIT::privateCompilePatchGetArrayLength):
        (JSC::JIT::privateCompileGetByIdProto):
        (JSC::JIT::privateCompileGetByIdSelfList):
        (JSC::JIT::privateCompileGetByIdProtoList):
        (JSC::JIT::privateCompileGetByIdChainList):
        (JSC::JIT::privateCompileGetByIdChain):
        * jit/JITPropertyAccess32_64.cpp:
        (JSC::JIT::stringGetByValStubGenerator):
        (JSC::JIT::emitSlow_op_get_by_val):
        (JSC::JIT::privateCompilePutByIdTransition):
        (JSC::JIT::privateCompilePatchGetArrayLength):
        (JSC::JIT::privateCompileGetByIdProto):
        (JSC::JIT::privateCompileGetByIdSelfList):
        (JSC::JIT::privateCompileGetByIdProtoList):
        (JSC::JIT::privateCompileGetByIdChainList):
        (JSC::JIT::privateCompileGetByIdChain):
        * jit/JITStubs.cpp:
        (JSC::JITThunks::JITThunks):
        (JSC::DEFINE_STUB_FUNCTION):
        (JSC::getPolymorphicAccessStructureListSlot):
        (JSC::JITThunks::ctiStub):
        (JSC::JITThunks::hostFunctionStub):
        * jit/JITStubs.h:
        * jit/SpecializedThunkJIT.h:
        (JSC::SpecializedThunkJIT::SpecializedThunkJIT):
        (JSC::SpecializedThunkJIT::finalize):
        * jit/ThunkGenerators.cpp:
        (JSC::charCodeAtThunkGenerator):
        (JSC::charAtThunkGenerator):
        (JSC::fromCharCodeThunkGenerator):
        (JSC::sqrtThunkGenerator):
        (JSC::floorThunkGenerator):
        (JSC::ceilThunkGenerator):
        (JSC::roundThunkGenerator):
        (JSC::expThunkGenerator):
        (JSC::logThunkGenerator):
        (JSC::absThunkGenerator):
        (JSC::powThunkGenerator):
        * jit/ThunkGenerators.h:
        * runtime/Executable.h:
        (JSC::NativeExecutable::create):
        * runtime/InitializeThreading.cpp:
        (JSC::initializeThreadingOnce):
        * runtime/JSGlobalData.cpp:
        (JSC::JSGlobalData::JSGlobalData):
        (JSC::JSGlobalData::dumpSampleData):
        * runtime/JSGlobalData.h:
        (JSC::JSGlobalData::getCTIStub):
        * wtf/CMakeLists.txt:
        * wtf/MetaAllocator.cpp: Added.
        (WTF::MetaAllocatorHandle::MetaAllocatorHandle):
        (WTF::MetaAllocatorHandle::~MetaAllocatorHandle):
        (WTF::MetaAllocatorHandle::shrink):
        (WTF::MetaAllocator::MetaAllocator):
        (WTF::MetaAllocator::allocate):
        (WTF::MetaAllocator::currentStatistics):
        (WTF::MetaAllocator::findAndRemoveFreeSpace):
        (WTF::MetaAllocator::addFreeSpaceFromReleasedHandle):
        (WTF::MetaAllocator::addFreshFreeSpace):
        (WTF::MetaAllocator::debugFreeSpaceSize):
        (WTF::MetaAllocator::addFreeSpace):
        (WTF::MetaAllocator::incrementPageOccupancy):
        (WTF::MetaAllocator::decrementPageOccupancy):
        (WTF::MetaAllocator::roundUp):
        (WTF::MetaAllocator::allocFreeSpaceNode):
        (WTF::MetaAllocator::freeFreeSpaceNode):
        (WTF::MetaAllocator::dumpProfile):
        * wtf/MetaAllocator.h: Added.
        (WTF::MetaAllocator::bytesAllocated):
        (WTF::MetaAllocator::bytesReserved):
        (WTF::MetaAllocator::bytesCommitted):
        (WTF::MetaAllocator::dumpProfile):
        (WTF::MetaAllocator::~MetaAllocator):
        * wtf/MetaAllocatorHandle.h: Added.
        * wtf/RedBlackTree.h: Added.
        (WTF::RedBlackTree::Node::Node):
        (WTF::RedBlackTree::Node::successor):
        (WTF::RedBlackTree::Node::predecessor):
        (WTF::RedBlackTree::Node::reset):
        (WTF::RedBlackTree::Node::parent):
        (WTF::RedBlackTree::Node::setParent):
        (WTF::RedBlackTree::Node::left):
        (WTF::RedBlackTree::Node::setLeft):
        (WTF::RedBlackTree::Node::right):
        (WTF::RedBlackTree::Node::setRight):
        (WTF::RedBlackTree::Node::color):
        (WTF::RedBlackTree::Node::setColor):
        (WTF::RedBlackTree::RedBlackTree):
        (WTF::RedBlackTree::insert):
        (WTF::RedBlackTree::remove):
        (WTF::RedBlackTree::findExact):
        (WTF::RedBlackTree::findLeastGreaterThanOrEqual):
        (WTF::RedBlackTree::findGreatestLessThanOrEqual):
        (WTF::RedBlackTree::first):
        (WTF::RedBlackTree::last):
        (WTF::RedBlackTree::size):
        (WTF::RedBlackTree::isEmpty):
        (WTF::RedBlackTree::treeMinimum):
        (WTF::RedBlackTree::treeMaximum):
        (WTF::RedBlackTree::treeInsert):
        (WTF::RedBlackTree::leftRotate):
        (WTF::RedBlackTree::rightRotate):
        (WTF::RedBlackTree::removeFixup):
        * wtf/wtf.pri:
        * yarr/YarrJIT.cpp:
        (JSC::Yarr::YarrGenerator::compile):
        * yarr/YarrJIT.h:
        (JSC::Yarr::YarrCodeBlock::execute):
        (JSC::Yarr::YarrCodeBlock::getAddr):

2011-09-10  Sam Weinig  <sam@webkit.org>
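The ChangeLog above describes a best-fit allocator built on a balanced tree that supports removing a node directly once it has been found. A minimal sketch of the best-fit policy, using std::multimap as a stand-in for WTF::RedBlackTree (the class name, layout, and sizes here are illustrative, not WebKit's actual code):

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// Best-fit free-space management: free chunks are kept in a tree keyed by
// size; an allocation takes the smallest chunk that is large enough, and
// any unused tail goes back into the tree.
class BestFitAllocator {
public:
    // Register a fresh span of address space [start, start + size).
    void addFreeSpace(char* start, size_t size) { m_free.insert({size, start}); }

    char* allocate(size_t size)
    {
        // Best fit: smallest chunk >= size (like findLeastGreaterThanOrEqual).
        auto it = m_free.lower_bound(size);
        if (it == m_free.end())
            return nullptr; // no chunk is big enough
        size_t chunkSize = it->first;
        char* chunkStart = it->second;
        m_free.erase(it); // direct removal of the node we found
        if (chunkSize > size) // return the unused tail to the free tree
            m_free.insert({chunkSize - size, chunkStart + size});
        return chunkStart;
    }

    // Freeing an individual chunk just makes it free space again.
    void free(char* start, size_t size) { m_free.insert({size, start}); }

private:
    std::multimap<size_t, char*> m_free;
};
```

The real MetaAllocator additionally coalesces adjacent free chunks and tracks page occupancy so that wholly free pages can be decommitted; the sketch shows only the best-fit lookup and the direct-removal requirement that motivated writing a custom red-black tree.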
trunk/Source/JavaScriptCore/GNUmakefile.list.am
r94814 → r94920

 	Source/JavaScriptCore/wtf/MD5.cpp \
 	Source/JavaScriptCore/wtf/MD5.h \
+	Source/JavaScriptCore/wtf/MetaAllocator.cpp \
+	Source/JavaScriptCore/wtf/MetaAllocator.h \
+	Source/JavaScriptCore/wtf/MetaAllocatorHandle.h \
 	Source/JavaScriptCore/wtf/MessageQueue.h \
 	Source/JavaScriptCore/wtf/NonCopyingSort.h \
…
 	Source/JavaScriptCore/wtf/RandomNumber.h \
 	Source/JavaScriptCore/wtf/RandomNumberSeed.h \
+	Source/JavaScriptCore/wtf/RedBlackTree.h \
 	Source/JavaScriptCore/wtf/RefCounted.h \
 	Source/JavaScriptCore/wtf/RefCountedLeakCounter.cpp \
trunk/Source/JavaScriptCore/JavaScriptCore.exp
r94919 → r94920

 __ZN3WTF12isMainThreadEv
 __ZN3WTF12randomNumberEv
+__ZN3WTF13MetaAllocator17addFreshFreeSpaceEPvm
+__ZN3WTF13MetaAllocator17freeFreeSpaceNodeEPNS_12RedBlackTreeImPvE4NodeE
+__ZN3WTF13MetaAllocator18debugFreeSpaceSizeEv
+__ZN3WTF13MetaAllocator8allocateEm
+__ZN3WTF13MetaAllocatorC2Em
 __ZN3WTF13StringBuilder11reifyStringEv
 __ZN3WTF13StringBuilder11shrinkToFitEv
…
 __ZN3WTF18dateToDaysFrom1970Eiii
 __ZN3WTF18monthFromDayInYearEib
+__ZN3WTF19MetaAllocatorHandle6shrinkEm
+__ZN3WTF19MetaAllocatorHandleD1Ev
 __ZN3WTF19initializeThreadingEv
 __ZN3WTF20equalIgnoringNullityEPNS_10StringImplES1_
…
 __ZN3WTF8msToYearEd
 __ZN3WTF8nullAtomE
+__ZN3WTF8pageSizeEv
 __ZN3WTF8starAtomE
 __ZN3WTF8textAtomE
trunk/Source/JavaScriptCore/JavaScriptCore.vcproj/WTF/WTF.vcproj
r94890 → r94920

 		</File>
 		<File
+			RelativePath="..\..\wtf\MetaAllocator.cpp"
+			>
+		</File>
+		<File
+			RelativePath="..\..\wtf\MetaAllocator.h"
+			>
+		</File>
+		<File
+			RelativePath="..\..\wtf\MetaAllocatorHandle.h"
+			>
+		</File>
+		<File
 			RelativePath="..\..\wtf\MD5.cpp"
 			>
…
 		<File
 			RelativePath="..\..\wtf\RandomNumberSeed.h"
+			>
+		</File>
+		<File
+			RelativePath="..\..\wtf\RedBlackTree.h"
 			>
 		</File>
trunk/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
r94814 → r94920

 		0F242DA713F3B1E8007ADD4C /* WeakReferenceHarvester.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F242DA513F3B1BB007ADD4C /* WeakReferenceHarvester.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		0F7700921402FF3C0078EB39 /* SamplingCounter.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7700911402FF280078EB39 /* SamplingCounter.cpp */; };
+		0F963B2713F753BB0002D9B2 /* RedBlackTree.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F963B2613F753990002D9B2 /* RedBlackTree.h */; settings = {ATTRIBUTES = (Private, ); }; };
+		0F963B2C13F853EC0002D9B2 /* MetaAllocator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F963B2B13F853C70002D9B2 /* MetaAllocator.cpp */; settings = {COMPILER_FLAGS = "-fno-strict-aliasing"; }; };
+		0F963B2D13F854020002D9B2 /* MetaAllocator.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F963B2A13F853BD0002D9B2 /* MetaAllocator.h */; settings = {ATTRIBUTES = (Private, ); }; };
+		0F963B2F13FC66BB0002D9B2 /* MetaAllocatorHandle.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F963B2E13FC66AE0002D9B2 /* MetaAllocatorHandle.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		0F963B3813FC6FE90002D9B2 /* ValueProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F963B3613FC6FDE0002D9B2 /* ValueProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		0FC8150A14043BF500CFA603 /* WriteBarrierSupport.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FC8150914043BD200CFA603 /* WriteBarrierSupport.h */; settings = {ATTRIBUTES = (Private, ); }; };
…
 		BC18C46C0E16F5CD00B34460 /* TCPackedCache.h in Headers */ = {isa = PBXBuildFile; fileRef = 5DA479650CFBCF56009328A0 /* TCPackedCache.h */; };
 		BC18C46D0E16F5CD00B34460 /* TCPageMap.h in Headers */ = {isa = PBXBuildFile; fileRef = 6541BD6E08E80A17002CBEE7 /* TCPageMap.h */; };
-		BC18C46E0E16F5CD00B34460 /* TCSpinLock.h in Headers */ = {isa = PBXBuildFile; fileRef = 6541BD6F08E80A17002CBEE7 /* TCSpinLock.h */; };
+		BC18C46E0E16F5CD00B34460 /* TCSpinLock.h in Headers */ = {isa = PBXBuildFile; fileRef = 6541BD6F08E80A17002CBEE7 /* TCSpinLock.h */; settings = {ATTRIBUTES = (Private, ); }; };
 		BC18C46F0E16F5CD00B34460 /* TCSystemAlloc.h in Headers */ = {isa = PBXBuildFile; fileRef = 6541BD7108E80A17002CBEE7 /* TCSystemAlloc.h */; };
 		BC18C4700E16F5CD00B34460 /* Threading.h in Headers */ = {isa = PBXBuildFile; fileRef = E1EE79220D6C95CD00FEA3BA /* Threading.h */; settings = {ATTRIBUTES = (Private, ); }; };
…
 		0F77008E1402FDD60078EB39 /* SamplingCounter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SamplingCounter.h; sourceTree = "<group>"; };
 		0F7700911402FF280078EB39 /* SamplingCounter.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SamplingCounter.cpp; sourceTree = "<group>"; };
+		0F963B2613F753990002D9B2 /* RedBlackTree.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RedBlackTree.h; sourceTree = "<group>"; };
+		0F963B2A13F853BD0002D9B2 /* MetaAllocator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MetaAllocator.h; sourceTree = "<group>"; };
+		0F963B2B13F853C70002D9B2 /* MetaAllocator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MetaAllocator.cpp; sourceTree = "<group>"; };
+		0F963B2E13FC66AE0002D9B2 /* MetaAllocatorHandle.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MetaAllocatorHandle.h; sourceTree = "<group>"; };
 		0F963B3613FC6FDE0002D9B2 /* ValueProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ValueProfile.h; sourceTree = "<group>"; };
 		0FC8150814043BCA00CFA603 /* WriteBarrierSupport.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WriteBarrierSupport.cpp; sourceTree = "<group>"; };
…
 			isa = PBXGroup;
 			children = (
+				0F963B2E13FC66AE0002D9B2 /* MetaAllocatorHandle.h */,
+				0F963B2B13F853C70002D9B2 /* MetaAllocator.cpp */,
+				0F963B2A13F853BD0002D9B2 /* MetaAllocator.h */,
+				0F963B2613F753990002D9B2 /* RedBlackTree.h */,
 				C2EE599D13FC972A009CEAFE /* DecimalNumber.cpp */,
 				C2EE599E13FC972A009CEAFE /* DecimalNumber.h */,
…
 				86ADD1450FDDEA980006EEC2 /* ARMv7Assembler.h in Headers */,
 				BC18C3E60E16F5CD00B34460 /* ArrayConstructor.h in Headers */,
+				BC18C46E0E16F5CD00B34460 /* TCSpinLock.h in Headers */,
+				0F963B2F13FC66BB0002D9B2 /* MetaAllocatorHandle.h in Headers */,
+				0F963B2D13F854020002D9B2 /* MetaAllocator.h in Headers */,
+				0F963B2713F753BB0002D9B2 /* RedBlackTree.h in Headers */,
 				0FC815151405119B00CFA603 /* VTableSpectrum.h in Headers */,
 				C22B31B9140577D700DB475A /* SamplingCounter.h in Headers */,
…
 				BC18C46C0E16F5CD00B34460 /* TCPackedCache.h in Headers */,
 				BC18C46D0E16F5CD00B34460 /* TCPageMap.h in Headers */,
-				BC18C46E0E16F5CD00B34460 /* TCSpinLock.h in Headers */,
 				BC18C46F0E16F5CD00B34460 /* TCSystemAlloc.h in Headers */,
 				971EDEA61169E0D3005E4262 /* Terminator.h in Headers */,
…
 			buildActionMask = 2147483647;
 			files = (
+				0F963B2C13F853EC0002D9B2 /* MetaAllocator.cpp in Sources */,
 				0FD82E2114172CE300179C94 /* DFGCapabilities.cpp in Sources */,
 				0FD3C82514115D4000FD81CB /* DFGPropagator.cpp in Sources */,
trunk/Source/JavaScriptCore/assembler/AssemblerBuffer.h
r87527 → r94920

 #if ENABLE(ASSEMBLER)
 
+#include "JSGlobalData.h"
 #include "stdint.h"
 #include <string.h>
…
     }
 
-    void* executableCopy(JSGlobalData& globalData, ExecutablePool* allocator)
+    PassRefPtr<ExecutableMemoryHandle> executableCopy(JSGlobalData& globalData)
     {
         if (!m_index)
             return 0;
 
-        void* result = allocator->alloc(globalData, m_index);
+        RefPtr<ExecutableMemoryHandle> result = globalData.executableAllocator.allocate(globalData, m_index);
 
         if (!result)
             return 0;
 
-        ExecutableAllocator::makeWritable(result, m_index);
+        ExecutableAllocator::makeWritable(result->start(), result->sizeInBytes());
 
-        return memcpy(result, m_buffer, m_index);
+        memcpy(result->start(), m_buffer, m_index);
+
+        return result.release();
     }
trunk/Source/JavaScriptCore/assembler/LinkBuffer.h
r93238 → r94920

 public:
-    LinkBuffer(JSGlobalData& globalData, MacroAssembler* masm, PassRefPtr<ExecutablePool> executablePool)
-        : m_executablePool(executablePool)
-        , m_size(0)
+    LinkBuffer(JSGlobalData& globalData, MacroAssembler* masm)
+        : m_size(0)
         , m_code(0)
         , m_assembler(masm)
…
     }
 
-    LinkBuffer(JSGlobalData& globalData, MacroAssembler* masm, ExecutableAllocator& allocator)
-        : m_executablePool(allocator.poolForSize(globalData, masm->m_assembler.codeSize()))
-        , m_size(0)
-        , m_code(0)
-        , m_assembler(masm)
-        , m_globalData(&globalData)
-#ifndef NDEBUG
-        , m_completed(false)
-#endif
-    {
-        linkCode();
-    }
-
     ~LinkBuffer()
     {
…
         performFinalization();
 
-        return CodeRef(m_code, m_executablePool, m_size);
-    }
-
-    CodeLocationLabel finalizeCodeAddendum()
-    {
-        performFinalization();
-
-        return CodeLocationLabel(code());
+        return CodeRef(m_executableMemory);
     }
…
         ASSERT(!m_code);
 #if !ENABLE(BRANCH_COMPACTION)
-        m_code = m_assembler->m_assembler.executableCopy(*m_globalData, m_executablePool.get());
+        m_executableMemory = m_assembler->m_assembler.executableCopy(*m_globalData);
+        if (!m_executableMemory)
+            return;
+        m_code = m_executableMemory->start();
         m_size = m_assembler->m_assembler.codeSize();
         ASSERT(m_code);
 #else
         size_t initialSize = m_assembler->m_assembler.codeSize();
-        m_code = (uint8_t*)m_executablePool->alloc(*m_globalData, initialSize);
-        if (!m_code)
+        m_executableMemory = m_globalData->executableAllocator.allocate(*m_globalData, initialSize);
+        if (!m_executableMemory)
             return;
+        m_code = (uint8_t*)m_executableMemory->start();
+        ASSERT(m_code);
         ExecutableAllocator::makeWritable(m_code, initialSize);
         uint8_t* inData = (uint8_t*)m_assembler->unlinkedCode();
…
         jumpsToLink.clear();
         m_size = writePtr + initialSize - readPtr;
-        m_executablePool->tryShrink(m_code, initialSize, m_size);
+        m_executableMemory->shrink(m_size);
 
 #if DUMP_LINK_STATISTICS
…
 #endif
 
-    RefPtr<ExecutablePool> m_executablePool;
+    RefPtr<ExecutableMemoryHandle> m_executableMemory;
     size_t m_size;
     void* m_code;
trunk/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
r89630 → r94920

     // was allocated.
     class MacroAssemblerCodeRef {
+    private:
+        // This is private because it's dangerous enough that we want uses of it
+        // to be easy to find - hence the static create method below.
+        explicit MacroAssemblerCodeRef(MacroAssemblerCodePtr codePtr)
+            : m_codePtr(codePtr)
+        {
+            ASSERT(m_codePtr);
+        }
+
     public:
         MacroAssemblerCodeRef()
-            : m_size(0)
-        {
-        }
-
-        MacroAssemblerCodeRef(void* code, PassRefPtr<ExecutablePool> executablePool, size_t size)
-            : m_code(code)
-            , m_executablePool(executablePool)
-            , m_size(size)
-        {
-        }
-
-        MacroAssemblerCodePtr m_code;
-        RefPtr<ExecutablePool> m_executablePool;
-        size_t m_size;
+        {
+        }
+
+        MacroAssemblerCodeRef(PassRefPtr<ExecutableMemoryHandle> executableMemory)
+            : m_codePtr(executableMemory->start())
+            , m_executableMemory(executableMemory)
+        {
+            ASSERT(m_executableMemory->isManaged());
+            ASSERT(m_executableMemory->start());
+            ASSERT(m_codePtr);
+        }
+
+        // Use this only when you know that the codePtr refers to code that is
+        // already being kept alive through some other means. Typically this means
+        // that codePtr is immortal.
+        static MacroAssemblerCodeRef createSelfManagedCodeRef(MacroAssemblerCodePtr codePtr)
+        {
+            return MacroAssemblerCodeRef(codePtr);
+        }
+
+        ExecutableMemoryHandle* executableMemory() const
+        {
+            return m_executableMemory.get();
+        }
+
+        MacroAssemblerCodePtr code() const
+        {
+            return m_codePtr;
+        }
+
+        size_t size() const
+        {
+            if (!m_executableMemory)
+                return 0;
+            return m_executableMemory->sizeInBytes();
+        }
+
+        bool operator!() const { return !m_codePtr; }
+
+    private:
+        MacroAssemblerCodePtr m_codePtr;
+        RefPtr<ExecutableMemoryHandle> m_executableMemory;
     };
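The reworked MacroAssemblerCodeRef holds a RefPtr to the memory handle, so a chunk of JIT code stays alive exactly as long as some code ref (or other handle owner) does, and is freed individually when the last reference drops. A toy model of that ownership scheme, with std::shared_ptr standing in for WTF's RefPtr (all class names below are illustrative stand-ins, not the real JSC types):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>

// Tracks how many bytes of "executable memory" are currently live.
struct Allocator {
    size_t bytesAllocated = 0;
};

// Analogue of ExecutableMemoryHandle: owns one chunk; destroying the handle
// returns exactly that chunk to the allocator.
class MemoryHandle {
public:
    MemoryHandle(Allocator& a, size_t size) : m_allocator(a), m_size(size)
    {
        m_allocator.bytesAllocated += m_size;
    }
    ~MemoryHandle() { m_allocator.bytesAllocated -= m_size; }
    size_t sizeInBytes() const { return m_size; }

private:
    Allocator& m_allocator;
    size_t m_size;
};

// Analogue of MacroAssemblerCodeRef: copyable, shares ownership of the handle.
class CodeRef {
public:
    CodeRef() = default;
    explicit CodeRef(std::shared_ptr<MemoryHandle> handle) : m_handle(std::move(handle)) {}
    size_t size() const { return m_handle ? m_handle->sizeInBytes() : 0; }
    bool operator!() const { return !m_handle; }

private:
    std::shared_ptr<MemoryHandle> m_handle;
};
```

This is the property the bug title asks for: freeing is per-chunk and automatic, instead of waiting for a whole ExecutablePool to die.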
trunk/Source/JavaScriptCore/assembler/X86Assembler.h
r90237 → r94920

     }
 
-    void* executableCopy(JSGlobalData& globalData, ExecutablePool* allocator)
-    {
-        return m_formatter.executableCopy(globalData, allocator);
+    PassRefPtr<ExecutableMemoryHandle> executableCopy(JSGlobalData& globalData)
+    {
+        return m_formatter.executableCopy(globalData);
     }
…
         void* data() const { return m_buffer.data(); }
 
-        void* executableCopy(JSGlobalData& globalData, ExecutablePool* allocator)
-        {
-            return m_buffer.executableCopy(globalData, allocator);
+        PassRefPtr<ExecutableMemoryHandle> executableCopy(JSGlobalData& globalData)
+        {
+            return m_buffer.executableCopy(globalData);
         }
trunk/Source/JavaScriptCore/bytecode/CodeBlock.h
r94802 → r94920

     JITCode& getJITCode() { return m_jitCode; }
     JITCode::JITType getJITType() { return m_jitCode.jitType(); }
-    ExecutablePool* executablePool() { return getJITCode().getExecutablePool(); }
+    ExecutableMemoryHandle* executableMemory() { return getJITCode().getExecutableMemory(); }
     virtual JSObject* compileOptimized(ExecState*, ScopeChainNode*) = 0;
     virtual CodeBlock* replacement() = 0;
trunk/Source/JavaScriptCore/bytecode/Instruction.h
r91952 → r94920

 #if ENABLE(JIT)
-    typedef CodeLocationLabel PolymorphicAccessStructureListStubRoutineType;
+    typedef MacroAssemblerCodeRef PolymorphicAccessStructureListStubRoutineType;
 
     // Structure used by op_get_by_id_self_list and op_get_by_id_proto_list instruction to hold data off the main opcode stream.
trunk/Source/JavaScriptCore/bytecode/StructureStubInfo.h
r92911 → r94920

     } u;
 
-    CodeLocationLabel stubRoutine;
+    MacroAssemblerCodeRef stubRoutine;
     CodeLocationCall callReturnLocation;
     CodeLocationLabel hotPathBegin;
trunk/Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
r94914 → r94920

     compileBody();
     // Link
-    LinkBuffer linkBuffer(*m_globalData, this, m_globalData->executableAllocator);
+    LinkBuffer linkBuffer(*m_globalData, this);
     link(linkBuffer);
     entry = JITCode(linkBuffer.finalizeCode(), JITCode::DFGJIT);
…
     // === Link ===
-    LinkBuffer linkBuffer(*m_globalData, this, m_globalData->executableAllocator);
+    LinkBuffer linkBuffer(*m_globalData, this);
     link(linkBuffer);
trunk/Source/JavaScriptCore/dfg/DFGRepatch.cpp
r94701 → r94920

 }
 
-static void generateProtoChainAccessStub(ExecState* exec, StructureStubInfo& stubInfo, StructureChain* chain, size_t count, size_t offset, Structure* structure, CodeLocationLabel successLabel, CodeLocationLabel slowCaseLabel, CodeLocationLabel& stubRoutine)
+static void generateProtoChainAccessStub(ExecState* exec, StructureStubInfo& stubInfo, StructureChain* chain, size_t count, size_t offset, Structure* structure, CodeLocationLabel successLabel, CodeLocationLabel slowCaseLabel, MacroAssemblerCodeRef& stubRoutine)
 {
     JSGlobalData* globalData = &exec->globalData();
…
     emitRestoreScratch(stubJit, needToRestoreScratch, scratchGPR, success, fail, failureCases);
 
-    LinkBuffer patchBuffer(*globalData, &stubJit, exec->codeBlock()->executablePool());
+    LinkBuffer patchBuffer(*globalData, &stubJit);
 
     linkRestoreScratch(patchBuffer, needToRestoreScratch, success, fail, failureCases, successLabel, slowCaseLabel);
 
-    stubRoutine = patchBuffer.finalizeCodeAddendum();
+    stubRoutine = patchBuffer.finalizeCode();
 }
…
     emitRestoreScratch(stubJit, needToRestoreScratch, scratchGPR, success, fail, failureCases);
 
-    LinkBuffer patchBuffer(*globalData, &stubJit, codeBlock->executablePool());
+    LinkBuffer patchBuffer(*globalData, &stubJit);
 
     linkRestoreScratch(patchBuffer, needToRestoreScratch, stubInfo, success, fail, failureCases);
 
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo.stubRoutine = entryLabel;
+    stubInfo.stubRoutine = patchBuffer.finalizeCode();
 
     RepatchBuffer repatchBuffer(codeBlock);
-    repatchBuffer.relink(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.deltaCallToStructCheck), entryLabel);
+    repatchBuffer.relink(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.deltaCallToStructCheck), CodeLocationLabel(stubInfo.stubRoutine.code()));
     repatchBuffer.relink(stubInfo.callReturnLocation, operationGetById);
…
     RepatchBuffer repatchBuffer(codeBlock);
-    repatchBuffer.relink(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.deltaCallToStructCheck), stubInfo.stubRoutine);
+    repatchBuffer.relink(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.deltaCallToStructCheck), CodeLocationLabel(stubInfo.stubRoutine.code()));
     repatchBuffer.relink(stubInfo.callReturnLocation, operationGetByIdProtoBuildList);
…
     if (stubInfo.accessType == access_get_by_id_self) {
         ASSERT(!stubInfo.stubRoutine);
-        polymorphicStructureList = new PolymorphicAccessStructureList(*globalData, codeBlock->ownerExecutable(), stubInfo.callReturnLocation.labelAtOffset(stubInfo.deltaCallToSlowCase), stubInfo.u.getByIdSelf.baseObjectStructure.get());
+        polymorphicStructureList = new PolymorphicAccessStructureList(*globalData, codeBlock->ownerExecutable(), MacroAssemblerCodeRef::createSelfManagedCodeRef(stubInfo.callReturnLocation.labelAtOffset(stubInfo.deltaCallToSlowCase)), stubInfo.u.getByIdSelf.baseObjectStructure.get());
         stubInfo.initGetByIdSelfList(polymorphicStructureList, 1);
     } else {
…
     MacroAssembler::Jump success = stubJit.jump();
 
-    LinkBuffer patchBuffer(*globalData, &stubJit, codeBlock->executablePool());
-
-    CodeLocationLabel lastProtoBegin = polymorphicStructureList->list[listIndex - 1].stubRoutine;
+    LinkBuffer patchBuffer(*globalData, &stubJit);
+
+    CodeLocationLabel lastProtoBegin = CodeLocationLabel(polymorphicStructureList->list[listIndex - 1].stubRoutine.code());
     ASSERT(!!lastProtoBegin);
 
…
     patchBuffer.link(success, stubInfo.callReturnLocation.labelAtOffset(stubInfo.deltaCallToDone));
 
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-
-    polymorphicStructureList->list[listIndex].set(*globalData, codeBlock->ownerExecutable(), entryLabel, structure);
+    MacroAssemblerCodeRef stubRoutine = patchBuffer.finalizeCode();
+
+    polymorphicStructureList->list[listIndex].set(*globalData, codeBlock->ownerExecutable(), stubRoutine, structure);
 
     CodeLocationJump jumpLocation = stubInfo.callReturnLocation.jumpAtOffset(stubInfo.deltaCallToStructCheck);
     RepatchBuffer repatchBuffer(codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubRoutine.code()));
 
     if (listIndex < (POLYMORPHIC_LIST_CACHE_SIZE - 1))
…
         ASSERT(!!stubInfo.stubRoutine);
         polymorphicStructureList = new PolymorphicAccessStructureList(*globalData, codeBlock->ownerExecutable(), stubInfo.stubRoutine, stubInfo.u.getByIdChain.baseObjectStructure.get(), stubInfo.u.getByIdChain.chain.get());
-        stubInfo.stubRoutine = CodeLocationLabel();
+        stubInfo.stubRoutine = MacroAssemblerCodeRef();
         stubInfo.initGetByIdProtoList(polymorphicStructureList, 1);
     } else {
…
     stubInfo.u.getByIdProtoList.listSize++;
 
-    CodeLocationLabel lastProtoBegin = polymorphicStructureList->list[listIndex - 1].stubRoutine;
+    CodeLocationLabel lastProtoBegin = CodeLocationLabel(polymorphicStructureList->list[listIndex - 1].stubRoutine.code());
     ASSERT(!!lastProtoBegin);
 
-    CodeLocationLabel entryLabel;
-
-    generateProtoChainAccessStub(exec, stubInfo, prototypeChain, count, offset, structure, stubInfo.callReturnLocation.labelAtOffset(stubInfo.deltaCallToDone), lastProtoBegin, entryLabel);
-
-    polymorphicStructureList->list[listIndex].set(*globalData, codeBlock->ownerExecutable(), entryLabel, structure);
+    MacroAssemblerCodeRef stubRoutine;
+
+    generateProtoChainAccessStub(exec, stubInfo, prototypeChain, count, offset, structure, stubInfo.callReturnLocation.labelAtOffset(stubInfo.deltaCallToDone), lastProtoBegin, stubRoutine);
+
+    polymorphicStructureList->list[listIndex].set(*globalData, codeBlock->ownerExecutable(), stubRoutine, structure);
 
     CodeLocationJump jumpLocation = stubInfo.callReturnLocation.jumpAtOffset(stubInfo.deltaCallToStructCheck);
     RepatchBuffer repatchBuffer(codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubRoutine.code()));
 
     if (listIndex < (POLYMORPHIC_LIST_CACHE_SIZE - 1))
…
     success = stubJit.jump();
 
-    LinkBuffer patchBuffer(*globalData, &stubJit, codeBlock->executablePool());
+    LinkBuffer patchBuffer(*globalData, &stubJit);
     patchBuffer.link(success, stubInfo.callReturnLocation.labelAtOffset(stubInfo.deltaCallToDone));
     if (needToRestoreScratch)
…
     patchBuffer.link(failureCases, stubInfo.callReturnLocation.labelAtOffset(stubInfo.deltaCallToSlowCase));
 
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo.stubRoutine = entryLabel;
+    stubInfo.stubRoutine = patchBuffer.finalizeCode();
 
     RepatchBuffer repatchBuffer(codeBlock);
-    repatchBuffer.relink(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.deltaCallToStructCheck), entryLabel);
+    repatchBuffer.relink(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.deltaCallToStructCheck), CodeLocationLabel(stubInfo.stubRoutine.code()));
     repatchBuffer.relink(stubInfo.callReturnLocation, appropriatePutByIdFunction(slot, putKind));
trunk/Source/JavaScriptCore/jit/ExecutableAllocator.cpp
r87198 r94920 28 28 #include "ExecutableAllocator.h" 29 29 30 #if ENABLE(EXECUTABLE_ALLOCATOR_DEMAND) 31 #include <wtf/MetaAllocator.h> 32 #include <wtf/PageReservation.h> 33 #include <wtf/VMTags.h> 34 #endif 35 30 36 #if ENABLE(ASSEMBLER) 37 38 using namespace WTF; 31 39 32 40 namespace JSC { … … 34 42 #if ENABLE(EXECUTABLE_ALLOCATOR_DEMAND) 35 43 36 ExecutablePool::Allocation ExecutablePool::systemAlloc(size_t size) 44 class DemandExecutableAllocator: public MetaAllocator { 45 public: 46 DemandExecutableAllocator() 47 : MetaAllocator(32) // round up all allocations to 32 bytes 48 { 49 // Don't preallocate any memory here. 50 } 51 52 virtual ~DemandExecutableAllocator() 53 { 54 for (unsigned i = 0; i < reservations.size(); ++i) 55 reservations.at(i).deallocate(); 56 } 57 58 protected: 59 virtual void* allocateNewSpace(size_t& numPages) 60 { 61 size_t newNumPages = (((numPages * pageSize() + JIT_ALLOCATOR_LARGE_ALLOC_SIZE - 1) / JIT_ALLOCATOR_LARGE_ALLOC_SIZE * JIT_ALLOCATOR_LARGE_ALLOC_SIZE) + pageSize() - 1) / pageSize(); 62 63 ASSERT(newNumPages >= numPages); 64 65 numPages = newNumPages; 66 67 PageReservation reservation = PageReservation::reserve(numPages * pageSize(), OSAllocator::JSJITCodePages, EXECUTABLE_POOL_WRITABLE, true); 68 if (!reservation) 69 CRASH(); 70 71 reservations.append(reservation); 72 73 return reservation.base(); 74 } 75 76 virtual void notifyNeedPage(void* page) 77 { 78 OSAllocator::commit(page, pageSize(), EXECUTABLE_POOL_WRITABLE, true); 79 } 80 81 virtual void notifyPageIsFree(void* page) 82 { 83 OSAllocator::decommit(page, pageSize()); 84 } 85 86 private: 87 Vector<PageReservation, 16> reservations; 88 }; 89 90 static DemandExecutableAllocator* allocator; 91 92 void ExecutableAllocator::initializeAllocator() 37 93 { 38 PageAllocation allocation = PageAllocation::allocate(size, OSAllocator::JSJITCodePages, EXECUTABLE_POOL_WRITABLE, true); 39 if (!allocation) 40 CRASH(); 41 return allocation; 94 ASSERT(!allocator); 95 allocator = new 
DemandExecutableAllocator(); 42 96 } 43 97 44 void ExecutablePool::systemRelease(ExecutablePool::Allocation& allocation)98 ExecutableAllocator::ExecutableAllocator(JSGlobalData&) 45 99 { 46 allocation.deallocate();100 ASSERT(allocator); 47 101 } 48 102 … … 51 105 return true; 52 106 } 53 107 54 108 bool ExecutableAllocator::underMemoryPressure() 55 109 { 56 110 return false; 57 111 } 58 112 113 PassRefPtr<ExecutableMemoryHandle> ExecutableAllocator::allocate(JSGlobalData&, size_t sizeInBytes) 114 { 115 RefPtr<ExecutableMemoryHandle> result = allocator->allocate(sizeInBytes); 116 if (!result) 117 CRASH(); 118 return result.release(); 119 } 120 59 121 size_t ExecutableAllocator::committedByteCount() 60 122 { 61 return 0;62 } 123 return allocator->bytesCommitted(); 124 } 63 125 126 #if ENABLE(META_ALLOCATOR_PROFILE) 127 void ExecutableAllocator::dumpProfile() 128 { 129 allocator->dumpProfile(); 130 } 64 131 #endif 132 133 #endif // ENABLE(EXECUTABLE_ALLOCATOR_DEMAND) 65 134 66 135 #if ENABLE(ASSEMBLER_WX_EXCLUSIVE) -
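The new DemandExecutableAllocator above grows in coarse steps: allocateNewSpace rounds the requested byte size up to a whole multiple of JIT_ALLOCATOR_LARGE_ALLOC_SIZE, then converts back to a page count. A minimal sketch of that arithmetic, with illustrative constants (4 KB pages and a hypothetical 64 KB large-allocation size, not the real values):

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-ins; the real values come from the OS page size and
// JIT_ALLOCATOR_LARGE_ALLOC_SIZE in ExecutableAllocator.h.
static const size_t kPageSize = 4096;
static const size_t kLargeAllocSize = 65536;

// Mirrors the rounding in DemandExecutableAllocator::allocateNewSpace:
// round the request up to a multiple of the large-allocation size, then
// express the result as a whole number of pages.
static size_t roundUpPageCount(size_t numPages)
{
    size_t bytes = numPages * kPageSize;
    size_t roundedBytes = (bytes + kLargeAllocSize - 1) / kLargeAllocSize * kLargeAllocSize;
    return (roundedBytes + kPageSize - 1) / kPageSize;
}
```

With these constants, any request up to 16 pages reserves exactly 16 pages, keeping the number of distinct PageReservations small.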
trunk/Source/JavaScriptCore/jit/ExecutableAllocator.h
r93462 r94920 29 29 #include <limits> 30 30 #include <wtf/Assertions.h> 31 #include <wtf/MetaAllocatorHandle.h> 31 32 #include <wtf/PageAllocation.h> 32 33 #include <wtf/PassRefPtr.h> … … 103 104 namespace JSC { 104 105 105 class ExecutablePool : public RefCounted<ExecutablePool> { 106 public: 107 #if ENABLE(EXECUTABLE_ALLOCATOR_DEMAND) 108 typedef PageAllocation Allocation; 109 #else 110 class Allocation { 111 public: 112 Allocation(void* base, size_t size) 113 : m_base(base) 114 , m_size(size) 115 { 116 } 117 void* base() { return m_base; } 118 size_t size() { return m_size; } 119 bool operator!() const { return !m_base; } 120 121 private: 122 void* m_base; 123 size_t m_size; 124 }; 125 #endif 126 typedef Vector<Allocation, 2> AllocationList; 127 128 static PassRefPtr<ExecutablePool> create(JSGlobalData& globalData, size_t n) 129 { 130 return adoptRef(new ExecutablePool(globalData, n)); 131 } 132 133 void* alloc(JSGlobalData& globalData, size_t n) 134 { 135 ASSERT(m_freePtr <= m_end); 136 137 // Round 'n' up to a multiple of word size; if all allocations are of 138 // word sized quantities, then all subsequent allocations will be aligned. 
139 n = roundUpAllocationSize(n, sizeof(void*)); 140 141 if (static_cast<ptrdiff_t>(n) < (m_end - m_freePtr)) { 142 void* result = m_freePtr; 143 m_freePtr += n; 144 return result; 145 } 146 147 // Insufficient space to allocate in the existing pool 148 // so we need allocate into a new pool 149 return poolAllocate(globalData, n); 150 } 151 152 void tryShrink(void* allocation, size_t oldSize, size_t newSize) 153 { 154 if (static_cast<char*>(allocation) + oldSize != m_freePtr) 155 return; 156 m_freePtr = static_cast<char*>(allocation) + roundUpAllocationSize(newSize, sizeof(void*)); 157 } 158 159 ~ExecutablePool() 160 { 161 AllocationList::iterator end = m_pools.end(); 162 for (AllocationList::iterator ptr = m_pools.begin(); ptr != end; ++ptr) 163 ExecutablePool::systemRelease(*ptr); 164 } 165 166 size_t available() const { return (m_pools.size() > 1) ? 0 : m_end - m_freePtr; } 167 168 private: 169 static Allocation systemAlloc(size_t n); 170 static void systemRelease(Allocation& alloc); 171 172 ExecutablePool(JSGlobalData&, size_t n); 173 174 void* poolAllocate(JSGlobalData&, size_t n); 175 176 char* m_freePtr; 177 char* m_end; 178 AllocationList m_pools; 179 }; 106 typedef WTF::MetaAllocatorHandle ExecutableMemoryHandle; 180 107 181 108 class ExecutableAllocator { … … 183 110 184 111 public: 185 ExecutableAllocator(JSGlobalData& globalData) 186 { 187 if (isValid()) 188 m_smallAllocationPool = ExecutablePool::create(globalData, JIT_ALLOCATOR_LARGE_ALLOC_SIZE); 189 #if !ENABLE(INTERPRETER) 190 else 191 CRASH(); 192 #endif 193 } 112 ExecutableAllocator(JSGlobalData&); 113 114 static void initializeAllocator(); 194 115 195 116 bool isValid() const; 196 117 197 118 static bool underMemoryPressure(); 198 199 PassRefPtr<ExecutablePool> poolForSize(JSGlobalData& globalData, size_t n) 200 { 201 // Try to fit in the existing small allocator 202 ASSERT(m_smallAllocationPool); 203 if (n < m_smallAllocationPool->available()) 204 return m_smallAllocationPool; 205 206 // If the 
request is large, we just provide a unshared allocator 207 if (n > JIT_ALLOCATOR_LARGE_ALLOC_SIZE) 208 return ExecutablePool::create(globalData, n); 209 210 // Create a new allocator 211 RefPtr<ExecutablePool> pool = ExecutablePool::create(globalData, JIT_ALLOCATOR_LARGE_ALLOC_SIZE); 212 213 // If the new allocator will result in more free space than in 214 // the current small allocator, then we will use it instead 215 if ((pool->available() - n) > m_smallAllocationPool->available()) 216 m_smallAllocationPool = pool; 217 return pool.release(); 218 } 119 120 #if ENABLE(META_ALLOCATOR_PROFILE) 121 static void dumpProfile(); 122 #else 123 static void dumpProfile() { } 124 #endif 125 126 PassRefPtr<ExecutableMemoryHandle> allocate(JSGlobalData&, size_t sizeInBytes); 219 127 220 128 #if ENABLE(ASSEMBLER_WX_EXCLUSIVE) … … 349 257 static void reprotectRegion(void*, size_t, ProtectionSetting); 350 258 #endif 351 352 RefPtr<ExecutablePool> m_smallAllocationPool;353 259 }; 354 260 355 inline ExecutablePool::ExecutablePool(JSGlobalData& globalData, size_t n) 356 { 357 size_t allocSize = roundUpAllocationSize(n, pageSize()); 358 Allocation mem = systemAlloc(allocSize); 359 if (!mem.base()) { 360 releaseExecutableMemory(globalData); 361 mem = systemAlloc(allocSize); 362 } 363 m_pools.append(mem); 364 m_freePtr = static_cast<char*>(mem.base()); 365 if (!m_freePtr) 366 CRASH(); // Failed to allocate 367 m_end = m_freePtr + allocSize; 368 deprecatedTurnOffVerifier(); 369 } 370 371 inline void* ExecutablePool::poolAllocate(JSGlobalData& globalData, size_t n) 372 { 373 size_t allocSize = roundUpAllocationSize(n, pageSize()); 374 375 Allocation result = systemAlloc(allocSize); 376 if (!result.base()) { 377 releaseExecutableMemory(globalData); 378 result = systemAlloc(allocSize); 379 if (!result.base()) 380 CRASH(); // Failed to allocate 381 } 382 383 ASSERT(m_end >= m_freePtr); 384 if ((allocSize - n) > static_cast<size_t>(m_end - m_freePtr)) { 385 // Replace allocation pool 386 
m_freePtr = static_cast<char*>(result.base()) + n; 387 m_end = static_cast<char*>(result.base()) + allocSize; 388 } 389 390 m_pools.append(result); 391 return result.base(); 392 } 393 394 } 261 } // namespace JSC 395 262 396 263 #endif // ENABLE(JIT) && ENABLE(ASSEMBLER) -
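The ExecutableMemoryHandle typedef above hands allocation over to WTF::MetaAllocator, the best-fit, balanced-tree allocator this changeset introduces. As a rough illustration of best-fit allocation only (not the actual MetaAllocator, which uses an intrusive tree, never allocates memory of its own, and supports removing nodes directly), std::multimap, itself a red-black tree, can stand in for the free-chunk tree:

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// Best-fit sketch: free chunks are kept in a tree keyed by size; an
// allocation takes the smallest chunk that fits and returns any unused
// tail to the tree as a new free chunk.
class BestFitSketch {
public:
    static constexpr size_t kNotFound = static_cast<size_t>(-1);

    void addFreeChunk(size_t offset, size_t size)
    {
        m_free.insert(std::make_pair(size, offset));
    }

    // Returns the offset of the allocation, or kNotFound on failure.
    size_t allocate(size_t size)
    {
        auto it = m_free.lower_bound(size); // smallest free chunk >= size
        if (it == m_free.end())
            return kNotFound;
        size_t chunkSize = it->first;
        size_t offset = it->second;
        m_free.erase(it);
        if (chunkSize > size)
            addFreeChunk(offset + size, chunkSize - size); // keep the tail free
        return offset;
    }

private:
    std::multimap<size_t, size_t> m_free; // size -> offset
};
```

Best fit is what lets individual chunks be freed and coalesced cheaply, which is exactly the capability the old pool-based scheme lacked.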
trunk/Source/JavaScriptCore/jit/ExecutableAllocatorFixedVMPool.cpp
r87527 r94920 32 32 #include <errno.h> 33 33 34 #include "TCSpinLock.h"35 34 #include <sys/mman.h> 36 35 #include <unistd.h> 37 #include <wtf/ AVLTree.h>36 #include <wtf/MetaAllocator.h> 38 37 #include <wtf/PageReservation.h> 39 38 #include <wtf/VMTags.h> … … 47 46 namespace JSC { 48 47 49 #define TwoPow(n) (1ull << n) 48 #if CPU(ARM) 49 static const size_t fixedPoolSize = 16 * 1024 * 1024; 50 #elif CPU(X86_64) 51 static const size_t fixedPoolSize = 1024 * 1024 * 1024; 52 #else 53 static const size_t fixedPoolSize = 32 * 1024 * 1024; 54 #endif 50 55 51 class AllocationTableSizeClass{56 class FixedVMPoolExecutableAllocator: public MetaAllocator { 52 57 public: 53 AllocationTableSizeClass(size_t size, size_t blockSize, unsigned log2BlockSize)54 : m_blockSize(blockSize)58 FixedVMPoolExecutableAllocator() 59 : MetaAllocator(32) // round up all allocations to 32 bytes 55 60 { 56 ASSERT(blockSize == TwoPow(log2BlockSize)); 57 58 // Calculate the number of blocks needed to hold size. 59 size_t blockMask = blockSize - 1; 60 m_blockCount = (size + blockMask) >> log2BlockSize; 61 62 // Align to the smallest power of two >= m_blockCount. 63 m_blockAlignment = 1; 64 while (m_blockAlignment < m_blockCount) 65 m_blockAlignment += m_blockAlignment; 61 m_reservation = PageReservation::reserveWithGuardPages(fixedPoolSize, OSAllocator::JSJITCodePages, EXECUTABLE_POOL_WRITABLE, true); 62 #if !ENABLE(INTERPRETER) 63 if (!m_reservation) 64 CRASH(); 65 #endif 66 if (m_reservation) { 67 ASSERT(m_reservation.size() == fixedPoolSize); 68 addFreshFreeSpace(m_reservation.base(), m_reservation.size()); 69 } 66 70 } 67 68 size_t blockSize() const { return m_blockSize; } 69 size_t blockCount() const { return m_blockCount; } 70 size_t blockAlignment() const { return m_blockAlignment; } 71 72 size_t size() 71 72 protected: 73 virtual void* allocateNewSpace(size_t&) 73 74 { 74 return m_blockSize * m_blockCount; 75 // We're operating in a fixed pool, so new allocation is always prohibited. 
76 return 0; 77 } 78 79 virtual void notifyNeedPage(void* page) 80 { 81 m_reservation.commit(page, pageSize()); 82 } 83 84 virtual void notifyPageIsFree(void* page) 85 { 86 m_reservation.decommit(page, pageSize()); 75 87 } 76 88 77 89 private: 78 size_t m_blockSize; 79 size_t m_blockCount; 80 size_t m_blockAlignment; 90 PageReservation m_reservation; 81 91 }; 82 92 83 template<unsigned log2Entries> 84 class AllocationTableLeaf { 85 typedef uint64_t BitField; 93 static FixedVMPoolExecutableAllocator* allocator; 86 94 87 public: 88 static const unsigned log2SubregionSize = 12; // 2^12 == pagesize 89 static const unsigned log2RegionSize = log2SubregionSize + log2Entries; 95 void ExecutableAllocator::initializeAllocator() 96 { 97 ASSERT(!allocator); 98 allocator = new FixedVMPoolExecutableAllocator(); 99 } 90 100 91 static const size_t subregionSize = TwoPow(log2SubregionSize); 92 static const size_t regionSize = TwoPow(log2RegionSize); 93 static const unsigned entries = TwoPow(log2Entries); 94 COMPILE_ASSERT(entries <= (sizeof(BitField) * 8), AllocationTableLeaf_entries_fit_in_BitField); 95 96 AllocationTableLeaf() 97 : m_allocated(0) 98 { 99 } 100 101 ~AllocationTableLeaf() 102 { 103 ASSERT(isEmpty()); 104 } 105 106 size_t allocate(AllocationTableSizeClass& sizeClass) 107 { 108 ASSERT(sizeClass.blockSize() == subregionSize); 109 ASSERT(!isFull()); 110 111 size_t alignment = sizeClass.blockAlignment(); 112 size_t count = sizeClass.blockCount(); 113 // Use this mask to check for spans of free blocks. 114 BitField mask = ((1ull << count) - 1) << (alignment - count); 115 116 // Step in units of alignment size. 
117 for (unsigned i = 0; i < entries; i += alignment) { 118 if (!(m_allocated & mask)) { 119 m_allocated |= mask; 120 return (i + (alignment - count)) << log2SubregionSize; 121 } 122 mask <<= alignment; 123 } 124 return notFound; 125 } 126 127 void free(size_t location, AllocationTableSizeClass& sizeClass) 128 { 129 ASSERT(sizeClass.blockSize() == subregionSize); 130 131 size_t entry = location >> log2SubregionSize; 132 size_t count = sizeClass.blockCount(); 133 BitField mask = ((1ull << count) - 1) << entry; 134 135 ASSERT((m_allocated & mask) == mask); 136 m_allocated &= ~mask; 137 } 138 139 bool isEmpty() 140 { 141 return !m_allocated; 142 } 143 144 bool isFull() 145 { 146 return !~m_allocated; 147 } 148 149 static size_t size() 150 { 151 return regionSize; 152 } 153 154 static AllocationTableSizeClass classForSize(size_t size) 155 { 156 return AllocationTableSizeClass(size, subregionSize, log2SubregionSize); 157 } 158 159 #ifndef NDEBUG 160 void dump(size_t parentOffset = 0, unsigned indent = 0) 161 { 162 for (unsigned i = 0; i < indent; ++i) 163 fprintf(stderr, " "); 164 fprintf(stderr, "%08x: [%016llx]\n", (int)parentOffset, m_allocated); 165 } 166 #endif 167 168 private: 169 BitField m_allocated; 170 }; 171 172 173 template<class NextLevel> 174 class LazyAllocationTable { 175 public: 176 static const unsigned log2RegionSize = NextLevel::log2RegionSize; 177 static const unsigned entries = NextLevel::entries; 178 179 LazyAllocationTable() 180 : m_ptr(0) 181 { 182 } 183 184 ~LazyAllocationTable() 185 { 186 ASSERT(isEmpty()); 187 } 188 189 size_t allocate(AllocationTableSizeClass& sizeClass) 190 { 191 if (!m_ptr) 192 m_ptr = new NextLevel(); 193 return m_ptr->allocate(sizeClass); 194 } 195 196 void free(size_t location, AllocationTableSizeClass& sizeClass) 197 { 198 ASSERT(m_ptr); 199 m_ptr->free(location, sizeClass); 200 if (m_ptr->isEmpty()) { 201 delete m_ptr; 202 m_ptr = 0; 203 } 204 } 205 206 bool isEmpty() 207 { 208 return !m_ptr; 209 } 210 211 bool 
isFull() 212 { 213 return m_ptr && m_ptr->isFull(); 214 } 215 216 static size_t size() 217 { 218 return NextLevel::size(); 219 } 220 221 #ifndef NDEBUG 222 void dump(size_t parentOffset = 0, unsigned indent = 0) 223 { 224 ASSERT(m_ptr); 225 m_ptr->dump(parentOffset, indent); 226 } 227 #endif 228 229 static AllocationTableSizeClass classForSize(size_t size) 230 { 231 return NextLevel::classForSize(size); 232 } 233 234 private: 235 NextLevel* m_ptr; 236 }; 237 238 template<class NextLevel, unsigned log2Entries> 239 class AllocationTableDirectory { 240 typedef uint64_t BitField; 241 242 public: 243 static const unsigned log2SubregionSize = NextLevel::log2RegionSize; 244 static const unsigned log2RegionSize = log2SubregionSize + log2Entries; 245 246 static const size_t subregionSize = TwoPow(log2SubregionSize); 247 static const size_t regionSize = TwoPow(log2RegionSize); 248 static const unsigned entries = TwoPow(log2Entries); 249 COMPILE_ASSERT(entries <= (sizeof(BitField) * 8), AllocationTableDirectory_entries_fit_in_BitField); 250 251 AllocationTableDirectory() 252 : m_full(0) 253 , m_hasSuballocation(0) 254 { 255 } 256 257 ~AllocationTableDirectory() 258 { 259 ASSERT(isEmpty()); 260 } 261 262 size_t allocate(AllocationTableSizeClass& sizeClass) 263 { 264 ASSERT(sizeClass.blockSize() <= subregionSize); 265 ASSERT(!isFull()); 266 267 if (sizeClass.blockSize() < subregionSize) { 268 BitField bit = 1; 269 for (unsigned i = 0; i < entries; ++i, bit += bit) { 270 if (m_full & bit) 271 continue; 272 size_t location = m_suballocations[i].allocate(sizeClass); 273 if (location != notFound) { 274 // If this didn't already have a subregion, it does now! 275 m_hasSuballocation |= bit; 276 // Mirror the suballocation's full bit. 277 if (m_suballocations[i].isFull()) 278 m_full |= bit; 279 return (i * subregionSize) | location; 280 } 281 } 282 return notFound; 283 } 284 285 // A block is allocated if either it is fully allocated or contains suballocations. 
286 BitField allocated = m_full | m_hasSuballocation; 287 288 size_t alignment = sizeClass.blockAlignment(); 289 size_t count = sizeClass.blockCount(); 290 // Use this mask to check for spans of free blocks. 291 BitField mask = ((1ull << count) - 1) << (alignment - count); 292 293 // Step in units of alignment size. 294 for (unsigned i = 0; i < entries; i += alignment) { 295 if (!(allocated & mask)) { 296 m_full |= mask; 297 return (i + (alignment - count)) << log2SubregionSize; 298 } 299 mask <<= alignment; 300 } 301 return notFound; 302 } 303 304 void free(size_t location, AllocationTableSizeClass& sizeClass) 305 { 306 ASSERT(sizeClass.blockSize() <= subregionSize); 307 308 size_t entry = location >> log2SubregionSize; 309 310 if (sizeClass.blockSize() < subregionSize) { 311 BitField bit = 1ull << entry; 312 m_suballocations[entry].free(location & (subregionSize - 1), sizeClass); 313 // Check if the suballocation is now empty. 314 if (m_suballocations[entry].isEmpty()) 315 m_hasSuballocation &= ~bit; 316 // No need to check, it clearly isn't full any more! 
317 m_full &= ~bit; 318 } else { 319 size_t count = sizeClass.blockCount(); 320 BitField mask = ((1ull << count) - 1) << entry; 321 ASSERT((m_full & mask) == mask); 322 ASSERT(!(m_hasSuballocation & mask)); 323 m_full &= ~mask; 324 } 325 } 326 327 bool isEmpty() 328 { 329 return !(m_full | m_hasSuballocation); 330 } 331 332 bool isFull() 333 { 334 return !~m_full; 335 } 336 337 static size_t size() 338 { 339 return regionSize; 340 } 341 342 static AllocationTableSizeClass classForSize(size_t size) 343 { 344 if (size < subregionSize) { 345 AllocationTableSizeClass sizeClass = NextLevel::classForSize(size); 346 if (sizeClass.size() < NextLevel::size()) 347 return sizeClass; 348 } 349 return AllocationTableSizeClass(size, subregionSize, log2SubregionSize); 350 } 351 352 #ifndef NDEBUG 353 void dump(size_t parentOffset = 0, unsigned indent = 0) 354 { 355 for (unsigned i = 0; i < indent; ++i) 356 fprintf(stderr, " "); 357 fprintf(stderr, "%08x: [", (int)parentOffset); 358 for (unsigned i = 0; i < entries; ++i) { 359 BitField bit = 1ull << i; 360 char c = m_hasSuballocation & bit 361 ? (m_full & bit ? 'N' : 'n') 362 : (m_full & bit ? 
'F' : '-'); 363 fprintf(stderr, "%c", c); 364 } 365 fprintf(stderr, "]\n"); 366 367 for (unsigned i = 0; i < entries; ++i) { 368 BitField bit = 1ull << i; 369 size_t offset = parentOffset | (subregionSize * i); 370 if (m_hasSuballocation & bit) 371 m_suballocations[i].dump(offset, indent + 1); 372 } 373 } 374 #endif 375 376 private: 377 NextLevel m_suballocations[entries]; 378 // Subregions exist in one of four states: 379 // (1) empty (both bits clear) 380 // (2) fully allocated as a single allocation (m_full set) 381 // (3) partially allocated through suballocations (m_hasSuballocation set) 382 // (4) fully allocated through suballocations (both bits set) 383 BitField m_full; 384 BitField m_hasSuballocation; 385 }; 386 387 388 typedef AllocationTableLeaf<6> PageTables256KB; 389 typedef AllocationTableDirectory<PageTables256KB, 6> PageTables16MB; 390 typedef AllocationTableDirectory<LazyAllocationTable<PageTables16MB>, 1> PageTables32MB; 391 typedef AllocationTableDirectory<LazyAllocationTable<PageTables16MB>, 6> PageTables1GB; 392 393 #if CPU(ARM) 394 typedef PageTables16MB FixedVMPoolPageTables; 395 #elif CPU(X86_64) 396 typedef PageTables1GB FixedVMPoolPageTables; 397 #else 398 typedef PageTables32MB FixedVMPoolPageTables; 399 #endif 400 401 402 class FixedVMPoolAllocator 101 ExecutableAllocator::ExecutableAllocator(JSGlobalData&) 403 102 { 404 public: 405 FixedVMPoolAllocator() 406 { 407 ASSERT(PageTables256KB::size() == 256 * 1024); 408 ASSERT(PageTables16MB::size() == 16 * 1024 * 1024); 409 ASSERT(PageTables32MB::size() == 32 * 1024 * 1024); 410 ASSERT(PageTables1GB::size() == 1024 * 1024 * 1024); 411 412 m_reservation = PageReservation::reserveWithGuardPages(FixedVMPoolPageTables::size(), OSAllocator::JSJITCodePages, EXECUTABLE_POOL_WRITABLE, true); 413 #if !ENABLE(INTERPRETER) 414 if (!isValid()) 415 CRASH(); 416 #endif 417 } 418 419 ExecutablePool::Allocation alloc(size_t requestedSize) 420 { 421 ASSERT(requestedSize); 422 AllocationTableSizeClass 
sizeClass = classForSize(requestedSize); 423 size_t size = sizeClass.size(); 424 ASSERT(size); 425 426 if (size >= FixedVMPoolPageTables::size()) 427 return ExecutablePool::Allocation(0, 0); 428 if (m_pages.isFull()) 429 return ExecutablePool::Allocation(0, 0); 430 431 size_t offset = m_pages.allocate(sizeClass); 432 if (offset == notFound) 433 return ExecutablePool::Allocation(0, 0); 434 435 void* pointer = offsetToPointer(offset); 436 m_reservation.commit(pointer, size); 437 return ExecutablePool::Allocation(pointer, size); 438 } 439 440 void free(ExecutablePool::Allocation allocation) 441 { 442 void* pointer = allocation.base(); 443 size_t size = allocation.size(); 444 ASSERT(size); 445 446 m_reservation.decommit(pointer, size); 447 448 AllocationTableSizeClass sizeClass = classForSize(size); 449 ASSERT(sizeClass.size() == size); 450 m_pages.free(pointerToOffset(pointer), sizeClass); 451 } 452 453 size_t allocated() 454 { 455 return m_reservation.committed(); 456 } 457 458 bool isValid() const 459 { 460 return !!m_reservation; 461 } 462 463 private: 464 AllocationTableSizeClass classForSize(size_t size) 465 { 466 return FixedVMPoolPageTables::classForSize(size); 467 } 468 469 void* offsetToPointer(size_t offset) 470 { 471 return reinterpret_cast<void*>(reinterpret_cast<intptr_t>(m_reservation.base()) + offset); 472 } 473 474 size_t pointerToOffset(void* pointer) 475 { 476 return reinterpret_cast<intptr_t>(pointer) - reinterpret_cast<intptr_t>(m_reservation.base()); 477 } 478 479 PageReservation m_reservation; 480 FixedVMPoolPageTables m_pages; 481 }; 482 483 484 static SpinLock spinlock = SPINLOCK_INITIALIZER; 485 static FixedVMPoolAllocator* allocator = 0; 486 487 488 size_t ExecutableAllocator::committedByteCount() 489 { 490 SpinLockHolder lockHolder(&spinlock); 491 return allocator ? 
allocator->allocated() : 0; 492 } 103 ASSERT(allocator); 104 } 493 105 494 106 bool ExecutableAllocator::isValid() const 495 107 { 496 SpinLockHolder lock_holder(&spinlock); 497 if (!allocator) 498 allocator = new FixedVMPoolAllocator(); 499 return allocator->isValid(); 108 return !!allocator->bytesReserved(); 500 109 } 501 110 502 111 bool ExecutableAllocator::underMemoryPressure() 503 112 { 504 // Technically we should take the spin lock here, but we don't care if we get stale data. 505 // This is only really a heuristic anyway. 506 return allocator && (allocator->allocated() > (FixedVMPoolPageTables::size() / 2)); 113 MetaAllocator::Statistics statistics = allocator->currentStatistics(); 114 return statistics.bytesAllocated > statistics.bytesReserved / 2; 507 115 } 508 116 509 ExecutablePool::Allocation ExecutablePool::systemAlloc(size_t size)117 PassRefPtr<ExecutableMemoryHandle> ExecutableAllocator::allocate(JSGlobalData& globalData, size_t sizeInBytes) 510 118 { 511 SpinLockHolder lock_holder(&spinlock); 512 ASSERT(allocator); 513 return allocator->alloc(size); 119 RefPtr<ExecutableMemoryHandle> result = allocator->allocate(sizeInBytes); 120 if (!result) { 121 releaseExecutableMemory(globalData); 122 result = allocator->allocate(sizeInBytes); 123 if (!result) 124 CRASH(); 125 } 126 return result.release(); 514 127 } 515 128 516 void ExecutablePool::systemRelease(ExecutablePool::Allocation& allocation) 129 size_t ExecutableAllocator::committedByteCount() 517 130 { 518 SpinLockHolder lock_holder(&spinlock); 519 ASSERT(allocator); 520 allocator->free(allocation); 131 return allocator->bytesCommitted(); 521 132 } 133 134 #if ENABLE(META_ALLOCATOR_PROFILE) 135 void ExecutableAllocator::dumpProfile() 136 { 137 allocator->dumpProfile(); 138 } 139 #endif 522 140 523 141 } 524 142 525 143 526 #endif // HAVE(ASSEMBLER)144 #endif // ENABLE(EXECUTABLE_ALLOCATOR_FIXED) -
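Most of the code removed above is the old two-level page-table allocator. Its core trick was a bitmap span search: build a mask of `count` bits placed at the top of an `alignment`-sized slot (alignment a power of two >= count), then slide the mask across a 64-bit occupancy word in steps of `alignment`. A self-contained sketch of that search (a hypothetical standalone function distilled from the removed AllocationTableLeaf, returning a block index rather than a shifted byte offset):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

static const size_t kNotFound = static_cast<size_t>(-1);

// Search a 64-bit bitmap for a free run of `count` blocks, stepping in
// units of `alignment`; on success, mark the run allocated and return the
// index of its first block.
static size_t allocateSpan(uint64_t& allocated, unsigned count, unsigned alignment, unsigned entries = 64)
{
    // Mask for `count` blocks at the top of an alignment-sized slot.
    uint64_t mask = ((1ull << count) - 1) << (alignment - count);
    for (unsigned i = 0; i < entries; i += alignment) {
        if (!(allocated & mask)) {
            allocated |= mask;
            return i + (alignment - count);
        }
        mask <<= alignment;
    }
    return kNotFound;
}
```

Because the run sits at a fixed position inside each aligned slot, one shifted AND per step tests the whole span at once, at the cost of some internal fragmentation, one of the trade-offs the MetaAllocator replacement avoids.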
trunk/Source/JavaScriptCore/jit/JIT.cpp
r94802 r94920 571 571 ASSERT(m_jmpTable.isEmpty()); 572 572 573 LinkBuffer patchBuffer(*m_globalData, this , m_globalData->executableAllocator);573 LinkBuffer patchBuffer(*m_globalData, this); 574 574 575 575 // Translate vPC offsets into addresses in JIT generated code, for switch tables. -
trunk/Source/JavaScriptCore/jit/JIT.h
r94802 r94920 222 222 } 223 223 224 static void compileCTIMachineTrampolines(JSGlobalData* globalData, RefPtr<ExecutablePool>* executablePool, TrampolineStructure *trampolines)224 static PassRefPtr<ExecutableMemoryHandle> compileCTIMachineTrampolines(JSGlobalData* globalData, TrampolineStructure *trampolines) 225 225 { 226 226 if (!globalData->canUseJIT()) 227 return ;227 return 0; 228 228 JIT jit(globalData, 0); 229 jit.privateCompileCTIMachineTrampolines(executablePool,globalData, trampolines);230 } 231 232 static Code Ptr compileCTINativeCall(JSGlobalData* globalData, PassRefPtr<ExecutablePool> executablePool, NativeFunction func)229 return jit.privateCompileCTIMachineTrampolines(globalData, trampolines); 230 } 231 232 static CodeRef compileCTINativeCall(JSGlobalData* globalData, NativeFunction func) 233 233 { 234 234 if (!globalData->canUseJIT()) 235 return Code Ptr();235 return CodeRef(); 236 236 JIT jit(globalData, 0); 237 return jit.privateCompileCTINativeCall( executablePool,globalData, func);237 return jit.privateCompileCTINativeCall(globalData, func); 238 238 } 239 239 … … 275 275 void privateCompilePutByIdTransition(StructureStubInfo*, Structure*, Structure*, size_t cachedOffset, StructureChain*, ReturnAddressPtr returnAddress, bool direct); 276 276 277 void privateCompileCTIMachineTrampolines(RefPtr<ExecutablePool>* executablePool, JSGlobalData* data, TrampolineStructure *trampolines);277 PassRefPtr<ExecutableMemoryHandle> privateCompileCTIMachineTrampolines(JSGlobalData*, TrampolineStructure*); 278 278 Label privateCompileCTINativeCall(JSGlobalData*, bool isConstruct = false); 279 Code Ptr privateCompileCTINativeCall(PassRefPtr<ExecutablePool> executablePool, JSGlobalData* data, NativeFunction func);279 CodeRef privateCompileCTINativeCall(JSGlobalData*, NativeFunction); 280 280 void privateCompilePatchGetArrayLength(ReturnAddressPtr returnAddress); 281 281 … … 1051 1051 #endif 1052 1052 WeakRandom m_randomGenerator; 1053 static Code Ptr 
stringGetByValStubGenerator(JSGlobalData* globalData, ExecutablePool* pool);1053 static CodeRef stringGetByValStubGenerator(JSGlobalData*); 1054 1054 1055 1055 #if ENABLE(TIERED_COMPILATION) -
trunk/Source/JavaScriptCore/jit/JITCode.h
r94559 r94920 84 84 bool operator !() const 85 85 { 86 return !m_ref .m_code.executableAddress();86 return !m_ref; 87 87 } 88 88 89 89 CodePtr addressForCall() 90 90 { 91 return m_ref. m_code;91 return m_ref.code(); 92 92 } 93 93 … … 97 97 unsigned offsetOf(void* pointerIntoCode) 98 98 { 99 intptr_t result = reinterpret_cast<intptr_t>(pointerIntoCode) - reinterpret_cast<intptr_t>(m_ref. m_code.executableAddress());99 intptr_t result = reinterpret_cast<intptr_t>(pointerIntoCode) - reinterpret_cast<intptr_t>(m_ref.code().executableAddress()); 100 100 ASSERT(static_cast<intptr_t>(static_cast<unsigned>(result)) == result); 101 101 return static_cast<unsigned>(result); … … 105 105 inline JSValue execute(RegisterFile* registerFile, CallFrame* callFrame, JSGlobalData* globalData) 106 106 { 107 JSValue result = JSValue::decode(ctiTrampoline(m_ref. m_code.executableAddress(), registerFile, callFrame, 0, Profiler::enabledProfilerReference(), globalData));107 JSValue result = JSValue::decode(ctiTrampoline(m_ref.code().executableAddress(), registerFile, callFrame, 0, Profiler::enabledProfilerReference(), globalData)); 108 108 return globalData->exception ? jsNull() : result; 109 109 } … … 111 111 void* start() 112 112 { 113 return m_ref. m_code.dataLocation();113 return m_ref.code().dataLocation(); 114 114 } 115 115 116 116 size_t size() 117 117 { 118 ASSERT(m_ref. m_code.executableAddress());119 return m_ref. m_size;118 ASSERT(m_ref.code().executableAddress()); 119 return m_ref.size(); 120 120 } 121 121 122 Executable Pool* getExecutablePool()122 ExecutableMemoryHandle* getExecutableMemory() 123 123 { 124 return m_ref. m_executablePool.get();124 return m_ref.executableMemory(); 125 125 } 126 126 … … 132 132 // Host functions are a bit special; they have a m_code pointer but they 133 133 // do not individully ref the executable pool containing the trampoline. 
134 static JITCode HostFunction(Code Ptrcode)134 static JITCode HostFunction(CodeRef code) 135 135 { 136 return JITCode(code .dataLocation(), 0, 0, HostCallThunk);136 return JITCode(code, HostCallThunk); 137 137 } 138 138 … … 144 144 145 145 private: 146 JITCode( void* code, PassRefPtr<ExecutablePool> executablePool, size_t size, JITType jitType)147 : m_ref( code, executablePool, size)146 JITCode(PassRefPtr<ExecutableMemoryHandle> executableMemory, JITType jitType) 147 : m_ref(executableMemory) 148 148 , m_jitType(jitType) 149 149 { -
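The JITCode changes above all flow from the same idea: a MacroAssemblerCodeRef now shares ownership of its ExecutableMemoryHandle, so each piece of JIT code is freed individually when its last reference dies, instead of pinning a whole ExecutablePool. The ownership pattern, sketched with std::shared_ptr (names are illustrative; the real handle is WTF reference-counted and returns its chunk to the MetaAllocator's free tree on destruction):

```cpp
#include <cassert>
#include <memory>

// Stand-in for ExecutableMemoryHandle: when the last reference drops, the
// chunk would go back to the allocator's free tree; here we just set a flag.
struct ChunkHandle {
    explicit ChunkHandle(bool* freedFlag) : m_freedFlag(freedFlag) {}
    ~ChunkHandle() { *m_freedFlag = true; }
    bool* m_freedFlag;
};

// Stand-in for MacroAssemblerCodeRef: copying the ref shares ownership of
// one executable chunk rather than tracking the pool that contains it.
struct CodeRefSketch {
    std::shared_ptr<ChunkHandle> executableMemory;
};
```

Host-function code refs are the exception noted in the diff: they wrap a trampoline they do not own, which is what createSelfManagedCodeRef expresses.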
trunk/Source/JavaScriptCore/jit/JITOpcodes.cpp
r94688 r94920 43 43 #if USE(JSVALUE64) 44 44 45 void JIT::privateCompileCTIMachineTrampolines(RefPtr<ExecutablePool>* executablePool,JSGlobalData* globalData, TrampolineStructure *trampolines)45 PassRefPtr<ExecutableMemoryHandle> JIT::privateCompileCTIMachineTrampolines(JSGlobalData* globalData, TrampolineStructure *trampolines) 46 46 { 47 47 // (2) The second function provides fast property access for string length … … 154 154 155 155 // All trampolines constructed! copy the code, link up calls, and set the pointers on the Machine object. 156 LinkBuffer patchBuffer(*m_globalData, this , m_globalData->executableAllocator);156 LinkBuffer patchBuffer(*m_globalData, this); 157 157 158 158 patchBuffer.link(string_failureCases1Call, FunctionPtr(cti_op_get_by_id_string_fail)); … … 165 165 166 166 CodeRef finalCode = patchBuffer.finalizeCode(); 167 *executablePool = finalCode.m_executablePool;167 RefPtr<ExecutableMemoryHandle> executableMemory = finalCode.executableMemory(); 168 168 169 169 trampolines->ctiVirtualCallLink = patchBuffer.trampolineAt(virtualCallLinkBegin); … … 174 174 trampolines->ctiNativeConstruct = patchBuffer.trampolineAt(nativeConstructThunk); 175 175 trampolines->ctiStringLengthTrampoline = patchBuffer.trampolineAt(stringLengthBegin); 176 177 return executableMemory.release(); 176 178 } 177 179 … … 292 294 } 293 295 294 JIT::Code Ptr JIT::privateCompileCTINativeCall(PassRefPtr<ExecutablePool>,JSGlobalData* globalData, NativeFunction)295 { 296 return globalData->jitStubs->ctiNativeCall();296 JIT::CodeRef JIT::privateCompileCTINativeCall(JSGlobalData* globalData, NativeFunction) 297 { 298 return CodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 297 299 } 298 300 -
trunk/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
r94688 r94920 41 41 namespace JSC { 42 42 43 void JIT::privateCompileCTIMachineTrampolines(RefPtr<ExecutablePool>* executablePool,JSGlobalData* globalData, TrampolineStructure *trampolines)43 PassRefPtr<ExecutableMemoryHandle> JIT::privateCompileCTIMachineTrampolines(JSGlobalData* globalData, TrampolineStructure *trampolines) 44 44 { 45 45 #if ENABLE(JIT_USE_SOFT_MODULO) … … 153 153 154 154 // All trampolines constructed! copy the code, link up calls, and set the pointers on the Machine object. 155 LinkBuffer patchBuffer(*m_globalData, this , m_globalData->executableAllocator);155 LinkBuffer patchBuffer(*m_globalData, this); 156 156 157 157 patchBuffer.link(string_failureCases1Call, FunctionPtr(cti_op_get_by_id_string_fail)); … … 164 164 165 165 CodeRef finalCode = patchBuffer.finalizeCode(); 166 *executablePool = finalCode.m_executablePool;166 RefPtr<ExecutableMemoryHandle> executableMemory = finalCode.executableMemory(); 167 167 168 168 trampolines->ctiVirtualCall = patchBuffer.trampolineAt(virtualCallBegin); … … 176 176 trampolines->ctiSoftModulo = patchBuffer.trampolineAt(softModBegin); 177 177 #endif 178 179 return executableMemory.release(); 178 180 } 179 181 … … 313 315 } 314 316 315 JIT::Code Ptr JIT::privateCompileCTINativeCall(PassRefPtr<ExecutablePool> executablePool,JSGlobalData* globalData, NativeFunction func)317 JIT::CodeRef JIT::privateCompileCTINativeCall(JSGlobalData* globalData, NativeFunction func) 316 318 { 317 319 Call nativeCall; 318 Label nativeCallThunk = align();319 320 320 321 emitPutImmediateToCallFrameHeader(0, RegisterFile::CodeBlock); … … 448 449 449 450 // All trampolines constructed! copy the code, link up calls, and set the pointers on the Machine object. 
450 LinkBuffer patchBuffer(*m_globalData, this , executablePool);451 LinkBuffer patchBuffer(*m_globalData, this); 451 452 452 453 patchBuffer.link(nativeCall, FunctionPtr(func)); 453 patchBuffer.finalizeCode(); 454 455 return patchBuffer.trampolineAt(nativeCallThunk); 454 return patchBuffer.finalizeCode(); 456 455 } 457 456 -
trunk/Source/JavaScriptCore/jit/JITPropertyAccess.cpp
r94701 r94920
 #if USE(JSVALUE64)

-JIT::CodePtr JIT::stringGetByValStubGenerator(JSGlobalData* globalData, ExecutablePool* pool)
+JIT::CodeRef JIT::stringGetByValStubGenerator(JSGlobalData* globalData)
 {
     JSInterfaceJIT jit;
…
     jit.ret();

-    LinkBuffer patchBuffer(*globalData, &jit, pool);
-    return patchBuffer.finalizeCode().m_code;
+    LinkBuffer patchBuffer(*globalData, &jit);
+    return patchBuffer.finalizeCode();
 }
…
     linkSlowCase(iter); // base array check
     Jump notString = branchPtr(NotEqual, Address(regT0), TrustedImmPtr(m_globalData->jsStringVPtr));
-    emitNakedCall(m_globalData->getCTIStub(stringGetByValStubGenerator));
+    emitNakedCall(CodeLocationLabel(m_globalData->getCTIStub(stringGetByValStubGenerator).code()));
     Jump failed = branchTestPtr(Zero, regT0);
     emitPutVirtualRegister(dst, regT0);
…
     Call failureCall = tailRecursiveCall();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     patchBuffer.link(failureCall, FunctionPtr(direct ? cti_op_put_by_id_direct_fail : cti_op_put_by_id_fail));
…
     }

-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo->stubRoutine = entryLabel;
+    stubInfo->stubRoutine = patchBuffer.finalizeCode();
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relinkCallerToTrampoline(returnAddress, entryLabel);
+    repatchBuffer.relinkCallerToTrampoline(returnAddress, CodeLocationLabel(stubInfo->stubRoutine.code()));
 }
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     // Use the patch information to link the failure cases back to the original slow case routine.
…

     // Track the stub we have created so that it will be deleted later.
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo->stubRoutine = entryLabel;
+    stubInfo->stubRoutine = patchBuffer.finalizeCode();

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubInfo->stubRoutine.code()));

     // We don't want to patch more than once - in future go to cti_op_put_by_id_generic.
…
     compileGetDirectOffset(protoObject, regT0, cachedOffset);
     Jump success = jump();
-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     // Use the patch information to link the failure cases back to the original slow case routine.
…
     }
     // Track the stub we have created so that it will be deleted later.
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo->stubRoutine = entryLabel;
+    stubInfo->stubRoutine = patchBuffer.finalizeCode();

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubInfo->stubRoutine.code()));

     // We don't want to patch more than once - in future go to cti_op_put_by_id_generic.
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     if (needsStubLink) {
…

     // Use the patch information to link the failure cases back to the original slow case routine.
-    CodeLocationLabel lastProtoBegin = polymorphicStructures->list[currentIndex - 1].stubRoutine;
+    CodeLocationLabel lastProtoBegin = CodeLocationLabel(polymorphicStructures->list[currentIndex - 1].stubRoutine.code());
     if (!lastProtoBegin)
         lastProtoBegin = stubInfo->callReturnLocation.labelAtOffset(-patchOffsetGetByIdSlowCaseCall);
…
     patchBuffer.link(success, stubInfo->hotPathBegin.labelAtOffset(patchOffsetGetByIdPutResult));

-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-
-    polymorphicStructures->list[currentIndex].set(*m_globalData, m_codeBlock->ownerExecutable(), entryLabel, structure);
+    MacroAssemblerCodeRef stubCode = patchBuffer.finalizeCode();
+
+    polymorphicStructures->list[currentIndex].set(*m_globalData, m_codeBlock->ownerExecutable(), stubCode, structure);

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubCode.code()));
 }
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     if (needsStubLink) {
…

     // Use the patch information to link the failure cases back to the original slow case routine.
-    CodeLocationLabel lastProtoBegin = prototypeStructures->list[currentIndex - 1].stubRoutine;
+    CodeLocationLabel lastProtoBegin = CodeLocationLabel(prototypeStructures->list[currentIndex - 1].stubRoutine.code());
     patchBuffer.link(failureCases1, lastProtoBegin);
     patchBuffer.link(failureCases2, lastProtoBegin);
…
     patchBuffer.link(success, stubInfo->hotPathBegin.labelAtOffset(patchOffsetGetByIdPutResult));

-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    prototypeStructures->list[currentIndex].set(*m_globalData, m_codeBlock->ownerExecutable(), entryLabel, structure, prototypeStructure);
+    MacroAssemblerCodeRef stubCode = patchBuffer.finalizeCode();
+    prototypeStructures->list[currentIndex].set(*m_globalData, m_codeBlock->ownerExecutable(), stubCode, structure, prototypeStructure);

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubCode.code()));
 }
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     if (needsStubLink) {
…

     // Use the patch information to link the failure cases back to the original slow case routine.
-    CodeLocationLabel lastProtoBegin = prototypeStructures->list[currentIndex - 1].stubRoutine;
+    CodeLocationLabel lastProtoBegin = CodeLocationLabel(prototypeStructures->list[currentIndex - 1].stubRoutine.code());

     patchBuffer.link(bucketsOfFail, lastProtoBegin);
…
     patchBuffer.link(success, stubInfo->hotPathBegin.labelAtOffset(patchOffsetGetByIdPutResult));

-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
+    CodeRef stubRoutine = patchBuffer.finalizeCode();

     // Track the stub we have created so that it will be deleted later.
-    prototypeStructures->list[currentIndex].set(callFrame->globalData(), m_codeBlock->ownerExecutable(), entryLabel, structure, chain);
+    prototypeStructures->list[currentIndex].set(callFrame->globalData(), m_codeBlock->ownerExecutable(), stubRoutine, structure, chain);

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubRoutine.code()));
 }
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     if (needsStubLink) {
…

     // Track the stub we have created so that it will be deleted later.
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo->stubRoutine = entryLabel;
+    CodeRef stubRoutine = patchBuffer.finalizeCode();
+    stubInfo->stubRoutine = stubRoutine;

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubRoutine.code()));

     // We don't want to patch more than once - in future go to cti_op_put_by_id_generic.
trunk/Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
r93698 r94920
 }

-JIT::CodePtr JIT::stringGetByValStubGenerator(JSGlobalData* globalData, ExecutablePool* pool)
+JIT::CodeRef JIT::stringGetByValStubGenerator(JSGlobalData* globalData)
 {
     JSInterfaceJIT jit;
…
     jit.ret();

-    LinkBuffer patchBuffer(*globalData, &jit, pool);
-    return patchBuffer.finalizeCode().m_code;
+    LinkBuffer patchBuffer(*globalData, &jit);
+    return patchBuffer.finalizeCode();
 }
…
     linkSlowCase(iter); // base array check
     Jump notString = branchPtr(NotEqual, Address(regT0), TrustedImmPtr(m_globalData->jsStringVPtr));
-    emitNakedCall(m_globalData->getCTIStub(stringGetByValStubGenerator));
+    emitNakedCall(m_globalData->getCTIStub(stringGetByValStubGenerator).code());
     Jump failed = branchTestPtr(Zero, regT0);
     emitStore(dst, regT1, regT0);
…
     Call failureCall = tailRecursiveCall();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     patchBuffer.link(failureCall, FunctionPtr(direct ? cti_op_put_by_id_direct_fail : cti_op_put_by_id_fail));
…
     }

-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo->stubRoutine = entryLabel;
+    stubInfo->stubRoutine = patchBuffer.finalizeCode();
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relinkCallerToTrampoline(returnAddress, entryLabel);
+    repatchBuffer.relinkCallerToTrampoline(returnAddress, CodeLocationLabel(stubInfo->stubRoutine.code()));
 }
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     // Use the patch information to link the failure cases back to the original slow case routine.
…

     // Track the stub we have created so that it will be deleted later.
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo->stubRoutine = entryLabel;
+    stubInfo->stubRoutine = patchBuffer.finalizeCode();

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubInfo->stubRoutine.code()));

     // We don't want to patch more than once - in future go to cti_op_put_by_id_generic.
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);

     // Use the patch information to link the failure cases back to the original slow case routine.
…

     // Track the stub we have created so that it will be deleted later.
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo->stubRoutine = entryLabel;
+    stubInfo->stubRoutine = patchBuffer.finalizeCode();

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubInfo->stubRoutine.code()));

     // We don't want to patch more than once - in future go to cti_op_put_by_id_generic.
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);
     if (needsStubLink) {
         for (Vector<CallRecord>::iterator iter = m_calls.begin(); iter != m_calls.end(); ++iter) {
…
     }
     // Use the patch information to link the failure cases back to the original slow case routine.
-    CodeLocationLabel lastProtoBegin = polymorphicStructures->list[currentIndex - 1].stubRoutine;
+    CodeLocationLabel lastProtoBegin = CodeLocationLabel(polymorphicStructures->list[currentIndex - 1].stubRoutine.code());
     if (!lastProtoBegin)
         lastProtoBegin = stubInfo->callReturnLocation.labelAtOffset(-patchOffsetGetByIdSlowCaseCall);
…
     patchBuffer.link(success, stubInfo->hotPathBegin.labelAtOffset(patchOffsetGetByIdPutResult));

-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-
-    polymorphicStructures->list[currentIndex].set(*m_globalData, m_codeBlock->ownerExecutable(), entryLabel, structure);
+    CodeRef stubRoutine = patchBuffer.finalizeCode();
+
+    polymorphicStructures->list[currentIndex].set(*m_globalData, m_codeBlock->ownerExecutable(), stubRoutine, structure);

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubRoutine.code()));
 }
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);
     if (needsStubLink) {
         for (Vector<CallRecord>::iterator iter = m_calls.begin(); iter != m_calls.end(); ++iter) {
…
     }
     // Use the patch information to link the failure cases back to the original slow case routine.
-    CodeLocationLabel lastProtoBegin = prototypeStructures->list[currentIndex - 1].stubRoutine;
+    CodeLocationLabel lastProtoBegin = CodeLocationLabel(prototypeStructures->list[currentIndex - 1].stubRoutine.code());
     patchBuffer.link(failureCases1, lastProtoBegin);
     patchBuffer.link(failureCases2, lastProtoBegin);
…
     patchBuffer.link(success, stubInfo->hotPathBegin.labelAtOffset(patchOffsetGetByIdPutResult));

-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-
-    prototypeStructures->list[currentIndex].set(callFrame->globalData(), m_codeBlock->ownerExecutable(), entryLabel, structure, prototypeStructure);
+    CodeRef stubRoutine = patchBuffer.finalizeCode();
+
+    prototypeStructures->list[currentIndex].set(callFrame->globalData(), m_codeBlock->ownerExecutable(), stubRoutine, structure, prototypeStructure);

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubRoutine.code()));
 }
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);
     if (needsStubLink) {
         for (Vector<CallRecord>::iterator iter = m_calls.begin(); iter != m_calls.end(); ++iter) {
…
     }
     // Use the patch information to link the failure cases back to the original slow case routine.
-    CodeLocationLabel lastProtoBegin = prototypeStructures->list[currentIndex - 1].stubRoutine;
+    CodeLocationLabel lastProtoBegin = CodeLocationLabel(prototypeStructures->list[currentIndex - 1].stubRoutine.code());

     patchBuffer.link(bucketsOfFail, lastProtoBegin);
…
     patchBuffer.link(success, stubInfo->hotPathBegin.labelAtOffset(patchOffsetGetByIdPutResult));

-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
+    CodeRef stubRoutine = patchBuffer.finalizeCode();

     // Track the stub we have created so that it will be deleted later.
-    prototypeStructures->list[currentIndex].set(callFrame->globalData(), m_codeBlock->ownerExecutable(), entryLabel, structure, chain);
+    prototypeStructures->list[currentIndex].set(callFrame->globalData(), m_codeBlock->ownerExecutable(), stubRoutine, structure, chain);

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubRoutine.code()));
 }
…
     Jump success = jump();

-    LinkBuffer patchBuffer(*m_globalData, this, m_codeBlock->executablePool());
+    LinkBuffer patchBuffer(*m_globalData, this);
     if (needsStubLink) {
         for (Vector<CallRecord>::iterator iter = m_calls.begin(); iter != m_calls.end(); ++iter) {
…

     // Track the stub we have created so that it will be deleted later.
-    CodeLocationLabel entryLabel = patchBuffer.finalizeCodeAddendum();
-    stubInfo->stubRoutine = entryLabel;
+    CodeRef stubRoutine = patchBuffer.finalizeCode();
+    stubInfo->stubRoutine = stubRoutine;

     // Finally patch the jump to slow case back in the hot path to jump here instead.
     CodeLocationJump jumpLocation = stubInfo->hotPathBegin.jumpAtOffset(patchOffsetGetByIdBranchToSlowCase);
     RepatchBuffer repatchBuffer(m_codeBlock);
-    repatchBuffer.relink(jumpLocation, entryLabel);
+    repatchBuffer.relink(jumpLocation, CodeLocationLabel(stubRoutine.code()));

     // We don't want to patch more than once - in future go to cti_op_put_by_id_generic.
trunk/Source/JavaScriptCore/jit/JITStubs.cpp
r94814 r94920
         return;

-    JIT::compileCTIMachineTrampolines(globalData, &m_executablePool, &m_trampolineStructure);
-    ASSERT(m_executablePool);
+    m_executableMemory = JIT::compileCTIMachineTrampolines(globalData, &m_trampolineStructure);
+    ASSERT(!!m_executableMemory);
 #if CPU(ARM_THUMB2)
     // Unfortunate the arm compiler does not like the use of offsetof on JITStackFrame (since it contains non POD types),
…
     if (stubInfo->accessType == access_get_by_id_self) {
         ASSERT(!stubInfo->stubRoutine);
-        polymorphicStructureList = new PolymorphicAccessStructureList(callFrame->globalData(), codeBlock->ownerExecutable(), CodeLocationLabel(), stubInfo->u.getByIdSelf.baseObjectStructure.get());
+        polymorphicStructureList = new PolymorphicAccessStructureList(callFrame->globalData(), codeBlock->ownerExecutable(), MacroAssemblerCodeRef(), stubInfo->u.getByIdSelf.baseObjectStructure.get());
         stubInfo->initGetByIdSelfList(polymorphicStructureList, 1);
     } else {
…
     case access_get_by_id_proto:
         prototypeStructureList = new PolymorphicAccessStructureList(globalData, owner, stubInfo->stubRoutine, stubInfo->u.getByIdProto.baseObjectStructure.get(), stubInfo->u.getByIdProto.prototypeStructure.get());
-        stubInfo->stubRoutine = CodeLocationLabel();
+        stubInfo->stubRoutine = MacroAssemblerCodeRef();
         stubInfo->initGetByIdProtoList(prototypeStructureList, 2);
         break;
     case access_get_by_id_chain:
         prototypeStructureList = new PolymorphicAccessStructureList(globalData, owner, stubInfo->stubRoutine, stubInfo->u.getByIdChain.baseObjectStructure.get(), stubInfo->u.getByIdChain.chain.get());
-        stubInfo->stubRoutine = CodeLocationLabel();
+        stubInfo->stubRoutine = MacroAssemblerCodeRef();
         stubInfo->initGetByIdProtoList(prototypeStructureList, 2);
         break;
…
 }

-MacroAssemblerCodePtr JITThunks::ctiStub(JSGlobalData* globalData, ThunkGenerator generator)
-{
-    std::pair<CTIStubMap::iterator, bool> entry = m_ctiStubMap.add(generator, MacroAssemblerCodePtr());
+MacroAssemblerCodeRef JITThunks::ctiStub(JSGlobalData* globalData, ThunkGenerator generator)
+{
+    std::pair<CTIStubMap::iterator, bool> entry = m_ctiStubMap.add(generator, MacroAssemblerCodeRef());
     if (entry.second)
-        entry.first->second = generator(globalData, m_executablePool.get());
+        entry.first->second = generator(globalData);
     return entry.first->second;
 }
…
     std::pair<HostFunctionStubMap::iterator, bool> entry = m_hostFunctionStubMap->add(function, Weak<NativeExecutable>());
     if (!*entry.first->second)
-        entry.first->second.set(*globalData, NativeExecutable::create(*globalData, JIT::compileCTINativeCall(globalData, m_executablePool, function), function, ctiNativeConstruct(), callHostFunctionAsConstructor));
+        entry.first->second.set(*globalData, NativeExecutable::create(*globalData, JIT::compileCTINativeCall(globalData, function), function, MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct()), callHostFunctionAsConstructor));
     return entry.first->second.get();
 }
…
     std::pair<HostFunctionStubMap::iterator, bool> entry = m_hostFunctionStubMap->add(function, Weak<NativeExecutable>());
     if (!*entry.first->second) {
-        MacroAssemblerCodePtr code = globalData->canUseJIT() ? generator(globalData, m_executablePool.get()) : MacroAssemblerCodePtr();
-        entry.first->second.set(*globalData, NativeExecutable::create(*globalData, code, function, ctiNativeConstruct(), callHostFunctionAsConstructor));
+        MacroAssemblerCodeRef code = globalData->canUseJIT() ? generator(globalData) : MacroAssemblerCodeRef();
+        entry.first->second.set(*globalData, NativeExecutable::create(*globalData, code, function, MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct()), callHostFunctionAsConstructor));
     }
     return entry.first->second.get();
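The `ctiStub()` hunk above shows the thunk cache: the generator function pointer is the map key, and the generator runs at most once. A standalone sketch of that memoization shape, with `std::map` and `std::string` standing in for WTF's `HashMap` and `MacroAssemblerCodeRef` (all names here are invented for illustration):

```cpp
#include <cassert>
#include <map>
#include <string>

// Each thunk generator keys the cache; its output is produced on first
// request and reused thereafter, mirroring JITThunks::ctiStub().
using ThunkGenerator = std::string (*)();

static int generatorRuns = 0;

std::string makeThunk()
{
    ++generatorRuns; // visible side effect so callers can count invocations
    return "thunk-code";
}

std::string ctiStub(std::map<ThunkGenerator, std::string>& cache, ThunkGenerator generator)
{
    // insert() reports whether the key was new, exactly like HashMap::add():
    // only a fresh entry triggers code generation.
    auto entry = cache.insert({generator, std::string()});
    if (entry.second)
        entry.first->second = generator();
    return entry.first->second;
}
```

The same add-then-fill pattern appears twice more in the patch, in both `hostFunctionStub` overloads.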
trunk/Source/JavaScriptCore/jit/JITStubs.h
r94559 r94920
     MacroAssemblerCodePtr ctiSoftModulo() { return m_trampolineStructure.ctiSoftModulo; }

-    MacroAssemblerCodePtr ctiStub(JSGlobalData* globalData, ThunkGenerator generator);
+    MacroAssemblerCodeRef ctiStub(JSGlobalData*, ThunkGenerator);

     NativeExecutable* hostFunctionStub(JSGlobalData*, NativeFunction);
…

 private:
-    typedef HashMap<ThunkGenerator, MacroAssemblerCodePtr> CTIStubMap;
+    typedef HashMap<ThunkGenerator, MacroAssemblerCodeRef> CTIStubMap;
     CTIStubMap m_ctiStubMap;
     typedef HashMap<NativeFunction, Weak<NativeExecutable> > HostFunctionStubMap;
     OwnPtr<HostFunctionStubMap> m_hostFunctionStubMap;
-    RefPtr<ExecutablePool> m_executablePool;
+    RefPtr<ExecutableMemoryHandle> m_executableMemory;

     TrampolineStructure m_trampolineStructure;
trunk/Source/JavaScriptCore/jit/SpecializedThunkJIT.h
r94500 r94920
 public:
     static const int ThisArgument = -1;
-    SpecializedThunkJIT(int expectedArgCount, JSGlobalData* globalData, ExecutablePool* pool)
+    SpecializedThunkJIT(int expectedArgCount, JSGlobalData* globalData)
         : m_expectedArgCount(expectedArgCount)
         , m_globalData(globalData)
-        , m_pool(pool)
     {
         // Check that we have the expected number of arguments
…
     }

-    MacroAssemblerCodePtr finalize(JSGlobalData& globalData, MacroAssemblerCodePtr fallback)
+    MacroAssemblerCodeRef finalize(JSGlobalData& globalData, MacroAssemblerCodePtr fallback)
     {
-        LinkBuffer patchBuffer(globalData, this, m_pool.get());
+        LinkBuffer patchBuffer(globalData, this);
         patchBuffer.link(m_failures, CodeLocationLabel(fallback));
         for (unsigned i = 0; i < m_calls.size(); i++)
             patchBuffer.link(m_calls[i].first, m_calls[i].second);
-        return patchBuffer.finalizeCode().m_code;
+        return patchBuffer.finalizeCode();
     }
…
     int m_expectedArgCount;
     JSGlobalData* m_globalData;
-    RefPtr<ExecutablePool> m_pool;
     MacroAssembler::JumpList m_failures;
     Vector<std::pair<Call, FunctionPtr> > m_calls;
trunk/Source/JavaScriptCore/jit/ThunkGenerators.cpp
r90425 r94920 64 64 } 65 65 66 MacroAssemblerCode Ptr charCodeAtThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)67 { 68 SpecializedThunkJIT jit(1, globalData , pool);66 MacroAssemblerCodeRef charCodeAtThunkGenerator(JSGlobalData* globalData) 67 { 68 SpecializedThunkJIT jit(1, globalData); 69 69 stringCharLoad(jit); 70 70 jit.returnInt32(SpecializedThunkJIT::regT0); … … 72 72 } 73 73 74 MacroAssemblerCode Ptr charAtThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)75 { 76 SpecializedThunkJIT jit(1, globalData , pool);74 MacroAssemblerCodeRef charAtThunkGenerator(JSGlobalData* globalData) 75 { 76 SpecializedThunkJIT jit(1, globalData); 77 77 stringCharLoad(jit); 78 78 charToString(jit, globalData, SpecializedThunkJIT::regT0, SpecializedThunkJIT::regT0, SpecializedThunkJIT::regT1); … … 81 81 } 82 82 83 MacroAssemblerCode Ptr fromCharCodeThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)84 { 85 SpecializedThunkJIT jit(1, globalData , pool);83 MacroAssemblerCodeRef fromCharCodeThunkGenerator(JSGlobalData* globalData) 84 { 85 SpecializedThunkJIT jit(1, globalData); 86 86 // load char code 87 87 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0); … … 91 91 } 92 92 93 MacroAssemblerCode Ptr sqrtThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)94 { 95 SpecializedThunkJIT jit(1, globalData , pool);93 MacroAssemblerCodeRef sqrtThunkGenerator(JSGlobalData* globalData) 94 { 95 SpecializedThunkJIT jit(1, globalData); 96 96 if (!jit.supportsFloatingPointSqrt()) 97 return globalData->jitStubs->ctiNativeCall();97 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 98 98 99 99 jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0); … … 179 179 defineUnaryDoubleOpWrapper(ceil); 180 180 181 MacroAssemblerCode Ptr floorThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)182 { 183 SpecializedThunkJIT jit(1, globalData , pool);181 MacroAssemblerCodeRef 
floorThunkGenerator(JSGlobalData* globalData) 182 { 183 SpecializedThunkJIT jit(1, globalData); 184 184 MacroAssembler::Jump nonIntJump; 185 185 if (!UnaryDoubleOpWrapper(floor) || !jit.supportsFloatingPoint()) 186 return globalData->jitStubs->ctiNativeCall();186 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 187 187 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump); 188 188 jit.returnInt32(SpecializedThunkJIT::regT0); … … 198 198 } 199 199 200 MacroAssemblerCode Ptr ceilThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)201 { 202 SpecializedThunkJIT jit(1, globalData , pool);200 MacroAssemblerCodeRef ceilThunkGenerator(JSGlobalData* globalData) 201 { 202 SpecializedThunkJIT jit(1, globalData); 203 203 if (!UnaryDoubleOpWrapper(ceil) || !jit.supportsFloatingPoint()) 204 return globalData->jitStubs->ctiNativeCall();204 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 205 205 MacroAssembler::Jump nonIntJump; 206 206 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump); … … 221 221 static const double negativeHalfConstant = -0.5; 222 222 223 MacroAssemblerCode Ptr roundThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)224 { 225 SpecializedThunkJIT jit(1, globalData , pool);223 MacroAssemblerCodeRef roundThunkGenerator(JSGlobalData* globalData) 224 { 225 SpecializedThunkJIT jit(1, globalData); 226 226 if (!UnaryDoubleOpWrapper(jsRound) || !jit.supportsFloatingPoint()) 227 return globalData->jitStubs->ctiNativeCall();227 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 228 228 MacroAssembler::Jump nonIntJump; 229 229 jit.loadInt32Argument(0, SpecializedThunkJIT::regT0, nonIntJump); … … 240 240 } 241 241 242 MacroAssemblerCode Ptr expThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)242 MacroAssemblerCodeRef expThunkGenerator(JSGlobalData* globalData) 243 243 { 244 244 if 
(!UnaryDoubleOpWrapper(exp)) 245 return globalData->jitStubs->ctiNativeCall();246 SpecializedThunkJIT jit(1, globalData , pool);245 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 246 SpecializedThunkJIT jit(1, globalData); 247 247 if (!jit.supportsFloatingPoint()) 248 return globalData->jitStubs->ctiNativeCall();248 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 249 249 jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0); 250 250 jit.callDoubleToDouble(UnaryDoubleOpWrapper(exp)); … … 253 253 } 254 254 255 MacroAssemblerCode Ptr logThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)255 MacroAssemblerCodeRef logThunkGenerator(JSGlobalData* globalData) 256 256 { 257 257 if (!UnaryDoubleOpWrapper(log)) 258 return globalData->jitStubs->ctiNativeCall();259 SpecializedThunkJIT jit(1, globalData , pool);258 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 259 SpecializedThunkJIT jit(1, globalData); 260 260 if (!jit.supportsFloatingPoint()) 261 return globalData->jitStubs->ctiNativeCall();261 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 262 262 jit.loadDoubleArgument(0, SpecializedThunkJIT::fpRegT0, SpecializedThunkJIT::regT0); 263 263 jit.callDoubleToDouble(UnaryDoubleOpWrapper(log)); … … 266 266 } 267 267 268 MacroAssemblerCode Ptr absThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)269 { 270 SpecializedThunkJIT jit(1, globalData , pool);268 MacroAssemblerCodeRef absThunkGenerator(JSGlobalData* globalData) 269 { 270 SpecializedThunkJIT jit(1, globalData); 271 271 if (!jit.supportsDoubleBitops()) 272 return globalData->jitStubs->ctiNativeCall();272 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 273 273 MacroAssembler::Jump nonIntJump; 274 274 jit.loadInt32Argument(0, 
SpecializedThunkJIT::regT0, nonIntJump); … … 287 287 } 288 288 289 MacroAssemblerCode Ptr powThunkGenerator(JSGlobalData* globalData, ExecutablePool* pool)290 { 291 SpecializedThunkJIT jit(2, globalData , pool);289 MacroAssemblerCodeRef powThunkGenerator(JSGlobalData* globalData) 290 { 291 SpecializedThunkJIT jit(2, globalData); 292 292 if (!jit.supportsFloatingPoint()) 293 return globalData->jitStubs->ctiNativeCall();293 return MacroAssemblerCodeRef::createSelfManagedCodeRef(globalData->jitStubs->ctiNativeCall()); 294 294 295 295 jit.loadDouble(&oneConstant, SpecializedThunkJIT::fpRegT1); -
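Every generator above follows one shape: if the CPU cannot run the specialised path, return the generic native-call trampoline wrapped as a self-managed (never-freed) code ref; otherwise emit and return fresh, reference-counted code. A hypothetical sketch of that capability-fallback shape, with strings standing in for code refs and an invented `Caps` struct standing in for the `supportsFloatingPoint()`-style queries:

```cpp
#include <cassert>
#include <string>

// Hypothetical capability record; the real checks live on the assembler
// (supportsFloatingPoint(), supportsFloatingPointSqrt(), ...).
struct Caps { bool floatingPoint; };

// Mirrors the generators' control flow: shared fallback when the fast path
// is unavailable, freshly generated code otherwise.
std::string sqrtThunk(const Caps& caps)
{
    if (!caps.floatingPoint)
        return "ctiNativeCall"; // self-managed fallback, never freed
    return "fast-sqrt";         // freshly generated, reference counted
}
```

The `createSelfManagedCodeRef` wrapper exists precisely so the caller cannot tell the two cases apart: both come back as the same handle type.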
trunk/Source/JavaScriptCore/jit/ThunkGenerators.h
r90237 r94920
 class JSGlobalData;
 class NativeExecutable;
-class MacroAssemblerCodePtr;
+class MacroAssemblerCodeRef;

-typedef MacroAssemblerCodePtr (*ThunkGenerator)(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr charCodeAtThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr charAtThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr fromCharCodeThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr absThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr ceilThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr expThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr floorThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr logThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr roundThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr sqrtThunkGenerator(JSGlobalData*, ExecutablePool*);
-MacroAssemblerCodePtr powThunkGenerator(JSGlobalData*, ExecutablePool*);
+typedef MacroAssemblerCodeRef (*ThunkGenerator)(JSGlobalData*);
+MacroAssemblerCodeRef charCodeAtThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef charAtThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef fromCharCodeThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef absThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef ceilThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef expThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef floorThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef logThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef roundThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef sqrtThunkGenerator(JSGlobalData*);
+MacroAssemblerCodeRef powThunkGenerator(JSGlobalData*);
 }
 #endif
trunk/Source/JavaScriptCore/runtime/Executable.h
r94599 r94920
  174 174
  175 175 #if ENABLE(JIT)
- 176     static NativeExecutable* create(JSGlobalData& globalData, MacroAssemblerCodePtr callThunk, NativeFunction function, MacroAssemblerCodePtr constructThunk, NativeFunction constructor)
+     176 static NativeExecutable* create(JSGlobalData& globalData, MacroAssemblerCodeRef callThunk, NativeFunction function, MacroAssemblerCodeRef constructThunk, NativeFunction constructor)
  177 177 {
  178 178     NativeExecutable* executable;
trunk/Source/JavaScriptCore/runtime/InitializeThreading.cpp
r94514 r94920
   30  30 #include "InitializeThreading.h"
   31  31
+      32 #include "ExecutableAllocator.h"
   32  33 #include "Heap.h"
   33  34 #include "Identifier.h"
…
   35  36 #include "UString.h"
   36  37 #include "WriteBarrier.h"
+      38 #include "dtoa.h"
   37  39 #include <wtf/DateMath.h>
   38  40 #include <wtf/Threading.h>
…
   55  57 #endif
   56  58     JSGlobalData::storeVPtrs();
+      59     ExecutableAllocator::initializeAllocator();
   57  60 #if ENABLE(JSC_MULTIPLE_THREADS)
   58  61     RegisterFile::initializeThreading();
trunk/Source/JavaScriptCore/runtime/JSGlobalData.cpp
r94811 r94920
  187 187 #if ENABLE(ASSEMBLER)
  188 188     , executableAllocator(*this)
- 189         , regexAllocator(*this)
  190 189 #endif
  191 190     , lexer(new Lexer(this))
…
  441 440 {
  442 441     interpreter->dumpSampleData(exec);
+     442 #if ENABLE(ASSEMBLER)
+     443     ExecutableAllocator::dumpProfile();
+     444 #endif
  443 445 }
  444 446
trunk/Source/JavaScriptCore/runtime/JSGlobalData.h
r94629 r94920
  194 194 #if ENABLE(ASSEMBLER)
  195 195     ExecutableAllocator executableAllocator;
- 196         ExecutableAllocator regexAllocator;
  197 196 #endif
  198 197
…
  217 216 #if ENABLE(JIT)
  218 217     OwnPtr<JITThunks> jitStubs;
- 219         MacroAssemblerCodePtr getCTIStub(ThunkGenerator generator)
+     218     MacroAssemblerCodeRef getCTIStub(ThunkGenerator generator)
  220 219     {
  221 220         return jitStubs->ctiStub(this, generator);
trunk/Source/JavaScriptCore/wtf/CMakeLists.txt
r94452 r94920
   43  43     MathExtras.h
   44  44     MessageQueue.h
+      45     MetaAllocator.cpp
+      46     MetaAllocator.h
+      47     MetaAllocatorHandle.h
   45  48     NonCopyingSort.h
   46  49     ThreadRestrictionVerifier.h
…
   70  73     RandomNumber.h
   71  74     RandomNumberSeed.h
+      75     RedBlackTree.h
   72  76     RefCounted.h
   73  77     RefCountedLeakCounter.h
trunk/Source/JavaScriptCore/wtf/wtf.pri
r94890 r94920
   23  23     wtf/MD5.cpp \
   24  24     wtf/MainThread.cpp \
+      25     wtf/MetaAllocator.cpp \
   25  26     wtf/NullPtr.cpp \
   26  27     wtf/OSRandomSource.cpp \
trunk/Source/JavaScriptCore/yarr/YarrJIT.cpp
r94254 r94920
  2430 2430
  2431 2431     // Link & finalize the code.
- 2432          LinkBuffer linkBuffer(*globalData, this, globalData->regexAllocator);
+      2432     LinkBuffer linkBuffer(*globalData, this);
  2433 2433     m_backtrackingState.linkDataLabels(linkBuffer);
  2434 2434     jitObject.set(linkBuffer.finalizeCode());
trunk/Source/JavaScriptCore/yarr/YarrJIT.h
r78042 r94920
   66  66     int execute(const UChar* input, unsigned start, unsigned length, int* output)
   67  67     {
-  68             return reinterpret_cast<YarrJITCode>(m_ref.m_code.executableAddress())(input, start, length, output);
+      68         return reinterpret_cast<YarrJITCode>(m_ref.code().executableAddress())(input, start, length, output);
   69  69     }
   70  70
   71  71 #if ENABLE(REGEXP_TRACING)
-  72         void *getAddr() { return m_ref.m_code.executableAddress(); }
+      72     void *getAddr() { return m_ref.code().executableAddress(); }
   73  73 #endif
   74  74
trunk/Source/JavaScriptGlue/ChangeLog
r94875 r94920
+       1 2011-08-18  Filip Pizlo  <fpizlo@apple.com>
+       2
+       3         The executable allocator makes it difficult to free individual
+       4         chunks of executable memory
+       5         https://bugs.webkit.org/show_bug.cgi?id=66363
+       6
+       7         Reviewed by Oliver Hunt.
+       8
+       9         Introduced a best-fit, balanced-tree based allocator. The allocator
+      10         required a balanced tree that does not allocate memory and that
+      11         permits the removal of individual nodes directly (as opposed to by
+      12         key); neither AVLTree nor WebCore's PODRedBlackTree supported this.
+      13         Changed all references to executable code to use a reference counted
+      14         handle.
+      15
+      16         * ForwardingHeaders/wtf/MetaAllocatorHandle.h: Added.
+      17
    1  18 2011-09-09  Mark Hahnenberg  <mhahnenberg@apple.com>
    2  19
trunk/Source/WebCore/ChangeLog
r94918 r94920
+       1 2011-09-01  Filip Pizlo  <fpizlo@apple.com>
+       2
+       3         The executable allocator makes it difficult to free individual
+       4         chunks of executable memory
+       5         https://bugs.webkit.org/show_bug.cgi?id=66363
+       6
+       7         Reviewed by Oliver Hunt.
+       8
+       9         Introduced a best-fit, balanced-tree based allocator. The allocator
+      10         required a balanced tree that does not allocate memory and that
+      11         permits the removal of individual nodes directly (as opposed to by
+      12         key); neither AVLTree nor WebCore's PODRedBlackTree supported this.
+      13         Changed all references to executable code to use a reference counted
+      14         handle.
+      15
+      16         No new layout tests because behavior is not changed. New API unit
+      17         tests:
+      18         Tests/WTF/RedBlackTree.cpp
+      19         Tests/WTF/MetaAllocator.cpp
+      20
+      21         * ForwardingHeaders/wtf/MetaAllocatorHandle.h: Added.
+      22
    1  23 2011-09-10  Sam Weinig  <sam@webkit.org>
    2  24
trunk/Tools/ChangeLog
r94917 r94920
+       1 2011-09-01  Filip Pizlo  <fpizlo@apple.com>
+       2
+       3         The executable allocator makes it difficult to free individual
+       4         chunks of executable memory
+       5         https://bugs.webkit.org/show_bug.cgi?id=66363
+       6
+       7         Reviewed by Oliver Hunt.
+       8
+       9         Introduced a best-fit, balanced-tree based allocator. The allocator
+      10         required a balanced tree that does not allocate memory and that
+      11         permits the removal of individual nodes directly (as opposed to by
+      12         key); neither AVLTree nor WebCore's PODRedBlackTree supported this.
+      13         Changed all references to executable code to use a reference counted
+      14         handle.
+      15
+      16         * TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj:
+      17         * TestWebKitAPI/Tests/WTF/MetaAllocator.cpp: Added.
+      18         (TestWebKitAPI::TEST_F):
+      19         * TestWebKitAPI/Tests/WTF/RedBlackTree.cpp: Added.
+      20         (TestWebKitAPI::Pair::findExact):
+      21         (TestWebKitAPI::Pair::remove):
+      22         (TestWebKitAPI::Pair::findLeastGreaterThanOrEqual):
+      23         (TestWebKitAPI::Pair::assertFoundAndRemove):
+      24         (TestWebKitAPI::Pair::assertEqual):
+      25         (TestWebKitAPI::Pair::assertSameValuesForKey):
+      26         (TestWebKitAPI::Pair::testDriver):
+      27         (TestWebKitAPI::TEST_F):
+      28
    1  29 2011-09-10  Andy Estes  <aestes@apple.com>
    2  30
trunk/Tools/TestWebKitAPI/TestWebKitAPI.xcodeproj/project.pbxproj
r94812 r94920
    8   8
    9   9 /* Begin PBXBuildFile section */
+      10     0FC6C4CC141027E0005B7F0C /* RedBlackTree.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FC6C4CB141027E0005B7F0C /* RedBlackTree.cpp */; };
+      11     0FC6C4CF141034AD005B7F0C /* MetaAllocator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FC6C4CE141034AD005B7F0C /* MetaAllocator.cpp */; };
   10  12     1A02C84F125D4A8400E3F4BD /* Find.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1A02C84E125D4A8400E3F4BD /* Find.cpp */; };
   11  13     1A02C870125D4CFD00E3F4BD /* find.html in Copy Resources */ = {isa = PBXBuildFile; fileRef = 1A02C84B125D4A5E00E3F4BD /* find.html */; };
…
  131 133
  132 134 /* Begin PBXFileReference section */
+     135     0FC6C4CB141027E0005B7F0C /* RedBlackTree.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = RedBlackTree.cpp; path = WTF/RedBlackTree.cpp; sourceTree = "<group>"; };
+     136     0FC6C4CE141034AD005B7F0C /* MetaAllocator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = MetaAllocator.cpp; path = WTF/MetaAllocator.cpp; sourceTree = "<group>"; };
  133 137     1A02C84B125D4A5E00E3F4BD /* find.html */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.html; path = find.html; sourceTree = "<group>"; };
  134 138     1A02C84E125D4A8400E3F4BD /* Find.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Find.cpp; sourceTree = "<group>"; };
…
  368 372     isa = PBXGroup;
  369 373     children = (
+     374         0FC6C4CE141034AD005B7F0C /* MetaAllocator.cpp */,
+     375         0FC6C4CB141027E0005B7F0C /* RedBlackTree.cpp */,
  370 376         A7A966DA140ECCC8005EF9B4 /* CheckedArithmeticOperations.cpp */,
  371 377         C01363C713C3997300EF3964 /* StringOperators.cpp */,
…
  578 584         C085880013FEC3A6001EF4E5 /* InstanceMethodSwizzler.mm in Sources */,
  579 585         37DC678D140D7C5000ABCCDB /* DOMRangeOfString.mm in Sources */,
+     586         0FC6C4CC141027E0005B7F0C /* RedBlackTree.cpp in Sources */,
+     587         0FC6C4CF141034AD005B7F0C /* MetaAllocator.cpp in Sources */,
  580 588         A7A966DB140ECCC8005EF9B4 /* CheckedArithmeticOperations.cpp in Sources */,
  581 589         939BA91714103412001A01BD /* DeviceScaleFactorOnBack.mm in Sources */,