Changeset 142536 in WebKit


Timestamp:
Feb 11, 2013 3:53:22 PM
Author:
oliver@apple.com
Message:

Harden FastMalloc (again)
https://bugs.webkit.org/show_bug.cgi?id=109334

Reviewed by Mark Hahnenberg.

Re-implement hardening of linked lists in TCMalloc.

In order to keep heap introspection working, we need to thread the
heap entropy manually as the introspection process can't use the
address of a global in determining the mask. Given we now have to
thread a value through anyway, I've stopped relying on ASLR for entropy
and am simply using arc4random() on Darwin, and time + ASLR everywhere
else.

I've also made an explicit struct type for the FastMalloc singly linked
lists, as it seemed like the only way to reliably distinguish between
void*s that were list pointers and void*s that were not. This also made it
somewhat easier to reason about things across processes.

Verified that all the introspection tools work as expected.
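The scheme the message describes can be sketched as follows. This is a simplified illustration, not the code from wtf/FastMalloc.cpp: `maskPointer`/`unmaskPointer` are hypothetical names, and the real macros additionally rotate the key.

```cpp
#include <cassert>
#include <cstdint>

// The XOR mask is derived from a key pointer plus an explicit entropy value
// that is threaded through every call, rather than from the address of a
// global (which an out-of-process introspection tool could not reproduce
// under ASLR in the target process).
static void* maskPointer(void* ptr, const void* key, uintptr_t entropy)
{
    return reinterpret_cast<void*>(
        reinterpret_cast<uintptr_t>(ptr)
        ^ reinterpret_cast<uintptr_t>(key)
        ^ entropy);
}

// Unmasking is the same XOR; an introspection process that has read the
// entropy value out of the target heap can recover the original pointer.
static void* unmaskPointer(void* masked, const void* key, uintptr_t entropy)
{
    return maskPointer(masked, key, entropy); // XOR is its own inverse
}
```

Because the mask is an involution, the same helper serves both the store and the load path.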

  • wtf/FastMalloc.cpp:

(WTF::internalEntropyValue):
(WTF):
(HardenedSLL):
(WTF::HardenedSLL::create):
(WTF::HardenedSLL::null):
(WTF::HardenedSLL::setValue):
(WTF::HardenedSLL::value):
(WTF::HardenedSLL::operator!):
(WTF::HardenedSLL::operator UnspecifiedBoolType):
(TCEntry):
(WTF::SLL_Next):
(WTF::SLL_SetNext):
(WTF::SLL_Push):
(WTF::SLL_Pop):
(WTF::SLL_PopRange):
(WTF::SLL_PushRange):
(WTF::SLL_Size):
(PageHeapAllocator):
(WTF::PageHeapAllocator::Init):
(WTF::PageHeapAllocator::New):
(WTF::PageHeapAllocator::Delete):
(WTF::PageHeapAllocator::recordAdministrativeRegions):
(WTF::Span::next):
(WTF::Span::remoteNext):
(WTF::Span::prev):
(WTF::Span::setNext):
(WTF::Span::setPrev):
(Span):
(WTF::DLL_Init):
(WTF::DLL_Remove):
(WTF::DLL_IsEmpty):
(WTF::DLL_Length):
(WTF::DLL_Prepend):
(TCMalloc_Central_FreeList):
(WTF::TCMalloc_Central_FreeList::enumerateFreeObjects):
(WTF::TCMalloc_Central_FreeList::entropy):
(TCMalloc_PageHeap):
(WTF::TCMalloc_PageHeap::init):
(WTF::TCMalloc_PageHeap::scavenge):
(WTF::TCMalloc_PageHeap::New):
(WTF::TCMalloc_PageHeap::AllocLarge):
(WTF::TCMalloc_PageHeap::Carve):
(WTF::TCMalloc_PageHeap::Delete):
(WTF::TCMalloc_PageHeap::ReturnedBytes):
(WTF::TCMalloc_PageHeap::Check):
(WTF::TCMalloc_PageHeap::CheckList):
(WTF::TCMalloc_PageHeap::ReleaseFreeList):
(TCMalloc_ThreadCache_FreeList):
(WTF::TCMalloc_ThreadCache_FreeList::Init):
(WTF::TCMalloc_ThreadCache_FreeList::empty):
(WTF::TCMalloc_ThreadCache_FreeList::Push):
(WTF::TCMalloc_ThreadCache_FreeList::PushRange):
(WTF::TCMalloc_ThreadCache_FreeList::PopRange):
(WTF::TCMalloc_ThreadCache_FreeList::Pop):
(WTF::TCMalloc_ThreadCache_FreeList::enumerateFreeObjects):
(TCMalloc_ThreadCache):
(WTF::TCMalloc_Central_FreeList::Init):
(WTF::TCMalloc_Central_FreeList::ReleaseListToSpans):
(WTF::TCMalloc_Central_FreeList::ReleaseToSpans):
(WTF::TCMalloc_Central_FreeList::InsertRange):
(WTF::TCMalloc_Central_FreeList::RemoveRange):
(WTF::TCMalloc_Central_FreeList::FetchFromSpansSafe):
(WTF::TCMalloc_Central_FreeList::FetchFromSpans):
(WTF::TCMalloc_Central_FreeList::Populate):
(WTF::TCMalloc_ThreadCache::Init):
(WTF::TCMalloc_ThreadCache::Deallocate):
(WTF::TCMalloc_ThreadCache::FetchFromCentralCache):
(WTF::TCMalloc_ThreadCache::ReleaseToCentralCache):
(WTF::TCMalloc_ThreadCache::InitModule):
(WTF::TCMalloc_ThreadCache::NewHeap):
(WTF::TCMalloc_ThreadCache::CreateCacheIfNecessary):

  • wtf/MallocZoneSupport.h:

(RemoteMemoryReader):

Location: trunk/Source/WTF
Files: 3 edited

  • trunk/Source/WTF/ChangeLog

  • trunk/Source/WTF/wtf/FastMalloc.cpp

r141955 r142536

 
 #include "Assertions.h"
+#include "CurrentTime.h"
+
 #include <limits>
 #if OS(WINDOWS)
     
 // Harden the pointers stored in the TCMalloc linked lists
 #if COMPILER(GCC)
-#define ENABLE_TCMALLOC_HARDENING 0
+#define ENABLE_TCMALLOC_HARDENING 1
 #endif
 
     
 static const char kLLHardeningMask = 0;
 enum {
-    MaskAddrShift = 8,
-    MaskKeyShift = 4
+    MaskKeyShift = 13
 };
+
+template <unsigned> struct EntropySource;
+template <> struct EntropySource<4> {
+    static uint32_t value()
+    {
+#if OS(DARWIN)
+        return arc4random();
+#else
+        return static_cast<uint32_t>(static_cast<uintptr_t>(currentTime() * 10000) ^ reinterpret_cast<uintptr_t>(&kLLHardeningMask));
+#endif
+    }
+};
+
+template <> struct EntropySource<8> {
+    static uint64_t value()
+    {
+        return EntropySource<4>::value() | (static_cast<uint64_t>(EntropySource<4>::value()) << 32);
+    }
+};
+
+static ALWAYS_INLINE uintptr_t internalEntropyValue() {
+    static uintptr_t value = EntropySource<sizeof(uintptr_t)>::value();
+    ASSERT(value);
+    return value;
+}
+
+#define HARDENING_ENTROPY internalEntropyValue()
 #define ROTATE_VALUE(value, amount) (((value) >> (amount)) | ((value) << (sizeof(value) * 8 - (amount))))
-#define XOR_MASK_PTR_WITH_KEY(ptr, key) (reinterpret_cast<typeof(ptr)>(reinterpret_cast<uintptr_t>(ptr)^ROTATE_VALUE(reinterpret_cast<uintptr_t>(key), MaskKeyShift)^ROTATE_VALUE(reinterpret_cast<uintptr_t>(&kLLHardeningMask), MaskAddrShift)))
-#else
-#define XOR_MASK_PTR_WITH_KEY(ptr, key) (ptr)
+#define XOR_MASK_PTR_WITH_KEY(ptr, key, entropy) (reinterpret_cast<typeof(ptr)>(reinterpret_cast<uintptr_t>(ptr)^(ROTATE_VALUE(reinterpret_cast<uintptr_t>(key), MaskKeyShift)^entropy)))
+
+#else
+#define XOR_MASK_PTR_WITH_KEY(ptr, key, entropy) (((void)entropy), ((void)key), ptr)
+#define HARDENING_ENTROPY 0
 #endif
 
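Taken together, these macros make masking an involution: applying the mask twice with the same key and entropy restores the original pointer, which is why the load path can reuse the macro the store path applied. A standalone sketch of the hunk above (using `__typeof__`, since `typeof` is a GCC extension; `maskedRoundTrip` is an illustrative helper, not code from the patch):

```cpp
#include <cassert>
#include <cstdint>

enum { MaskKeyShift = 13 };

// Rotate right by `amount` bits.
#define ROTATE_VALUE(value, amount) (((value) >> (amount)) | ((value) << (sizeof(value) * 8 - (amount))))
// XOR the pointer with a rotated key plus the threaded entropy value.
#define XOR_MASK_PTR_WITH_KEY(ptr, key, entropy) (reinterpret_cast<__typeof__(ptr)>(reinterpret_cast<uintptr_t>(ptr)^(ROTATE_VALUE(reinterpret_cast<uintptr_t>(key), MaskKeyShift)^entropy)))

// Masking twice with the same key and entropy is the identity: x ^ K ^ K == x.
static int* maskedRoundTrip(int* p, void* key, uintptr_t entropy)
{
    int* masked = XOR_MASK_PTR_WITH_KEY(p, key, entropy);
    return XOR_MASK_PTR_WITH_KEY(masked, key, entropy);
}
```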
     
 static size_t class_to_pages[kNumClasses];
 
+// Hardened singly linked list.  We make this a class to allow compiler to
+// statically prevent mismatching hardened and non-hardened list
+class HardenedSLL {
+public:
+    static ALWAYS_INLINE HardenedSLL create(void* value)
+    {
+        HardenedSLL result;
+        result.m_value = value;
+        return result;
+    }
+
+    static ALWAYS_INLINE HardenedSLL null()
+    {
+        HardenedSLL result;
+        result.m_value = 0;
+        return result;
+    }
+
+    ALWAYS_INLINE void setValue(void* value) { m_value = value; }
+    ALWAYS_INLINE void* value() const { return m_value; }
+    ALWAYS_INLINE bool operator!() const { return !m_value; }
+    typedef void* (HardenedSLL::*UnspecifiedBoolType);
+    ALWAYS_INLINE operator UnspecifiedBoolType() const { return m_value ? &HardenedSLL::m_value : 0; }
+
+private:
+    void* m_value;
+};
+
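A reduced, standalone copy of the class (ALWAYS_INLINE annotations dropped) shows the two properties it buys: a `HardenedSLL` cannot be passed where a raw `void*` is expected, so hardened and non-hardened lists cannot be mixed silently, and truth tests go through the pre-C++11 "safe bool" idiom rather than an implicit pointer conversion.

```cpp
#include <cassert>

class HardenedSLL {
public:
    // Named constructors: wrapping is always explicit at the call site.
    static HardenedSLL create(void* value)
    {
        HardenedSLL result;
        result.m_value = value;
        return result;
    }
    static HardenedSLL null()
    {
        HardenedSLL result;
        result.m_value = 0;
        return result;
    }
    void setValue(void* value) { m_value = value; }
    void* value() const { return m_value; }
    bool operator!() const { return !m_value; }
    // Safe-bool idiom: convertible to a member-pointer type, which supports
    // "if (list)" but not accidental arithmetic or comparison with void*.
    typedef void* (HardenedSLL::*UnspecifiedBoolType);
    operator UnspecifiedBoolType() const { return m_value ? &HardenedSLL::m_value : 0; }
private:
    void* m_value;
};
```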
 // TransferCache is used to cache transfers of num_objects_to_move[size_class]
 // back and forth between thread caches and the central cache for a given size
 // class.
 struct TCEntry {
-  void *head;  // Head of chain of objects.
-  void *tail;  // Tail of chain of objects.
+  HardenedSLL head;  // Head of chain of objects.
+  HardenedSLL tail;  // Tail of chain of objects.
 };
 // A central cache freelist can have anywhere from 0 to kNumTransferEntries
     
 }
 
-// Some very basic linked list functions for dealing with using void * as
-// storage.
-
-static inline void *SLL_Next(void *t) {
-  return XOR_MASK_PTR_WITH_KEY(*(reinterpret_cast<void**>(t)), t);
-}
-
-static inline void SLL_SetNext(void *t, void *n) {
-  *(reinterpret_cast<void**>(t)) = XOR_MASK_PTR_WITH_KEY(n, t);
-}
-
-static inline void SLL_Push(void **list, void *element) {
-  SLL_SetNext(element, *list);
+// Functions for using our simple hardened singly linked list
+static ALWAYS_INLINE HardenedSLL SLL_Next(HardenedSLL t, uintptr_t entropy) {
+    return HardenedSLL::create(XOR_MASK_PTR_WITH_KEY(*(reinterpret_cast<void**>(t.value())), t.value(), entropy));
+}
+
+static ALWAYS_INLINE void SLL_SetNext(HardenedSLL t, HardenedSLL n, uintptr_t entropy) {
+    *(reinterpret_cast<void**>(t.value())) = XOR_MASK_PTR_WITH_KEY(n.value(), t.value(), entropy);
+}
+
+static ALWAYS_INLINE void SLL_Push(HardenedSLL* list, HardenedSLL element, uintptr_t entropy) {
+  SLL_SetNext(element, *list, entropy);
   *list = element;
 }
 
-static inline void *SLL_Pop(void **list) {
-  void *result = *list;
-  *list = SLL_Next(*list);
+static ALWAYS_INLINE HardenedSLL SLL_Pop(HardenedSLL *list, uintptr_t entropy) {
+  HardenedSLL result = *list;
+  *list = SLL_Next(*list, entropy);
   return result;
 }
-
 
 // Remove N elements from a linked list to which head points.  head will be

 // and last nodes of the range.  Note that end will point to NULL after this
 // function is called.
-static inline void SLL_PopRange(void **head, int N, void **start, void **end) {
+
+static ALWAYS_INLINE void SLL_PopRange(HardenedSLL* head, int N, HardenedSLL *start, HardenedSLL *end, uintptr_t entropy) {
   if (N == 0) {
-    *start = NULL;
-    *end = NULL;
+    *start = HardenedSLL::null();
+    *end = HardenedSLL::null();
     return;
   }
 
-  void *tmp = *head;
+  HardenedSLL tmp = *head;
   for (int i = 1; i < N; ++i) {
-    tmp = SLL_Next(tmp);
+    tmp = SLL_Next(tmp, entropy);
   }
 
   *start = *head;
   *end = tmp;
-  *head = SLL_Next(tmp);
+  *head = SLL_Next(tmp, entropy);
   // Unlink range from list.
-  SLL_SetNext(tmp, NULL);
-}
-
-static inline void SLL_PushRange(void **head, void *start, void *end) {
+  SLL_SetNext(tmp, HardenedSLL::null(), entropy);
+}
+
+static ALWAYS_INLINE void SLL_PushRange(HardenedSLL *head, HardenedSLL start, HardenedSLL end, uintptr_t entropy) {
   if (!start) return;
-  SLL_SetNext(end, *head);
+  SLL_SetNext(end, *head, entropy);
   *head = start;
 }
 
-static inline size_t SLL_Size(void *head) {
+static ALWAYS_INLINE size_t SLL_Size(HardenedSLL head, uintptr_t entropy) {
   int count = 0;
   while (head) {
     count++;
-    head = SLL_Next(head);
+    head = SLL_Next(head, entropy);
   }
   return count;
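These list operations can be exercised end to end in a minimal working model. The key rotation is omitted for brevity, and `mask`, `sllPush`, etc. are illustrative stand-ins for the real functions: each node's first word stores the next pointer XOR-masked with the node's own address and the heap entropy.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

struct Node { void* next; };  // first word is the (masked) link

static void* mask(void* ptr, void* key, uintptr_t entropy)
{
    return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(ptr)
        ^ reinterpret_cast<uintptr_t>(key) ^ entropy);
}

// Read the next pointer, unmasking with the node's own address as the key.
static void* sllNext(void* t, uintptr_t entropy)
{
    return mask(*reinterpret_cast<void**>(t), t, entropy);
}

// Store the next pointer, masked the same way.
static void sllSetNext(void* t, void* n, uintptr_t entropy)
{
    *reinterpret_cast<void**>(t) = mask(n, t, entropy);
}

static void sllPush(void** list, void* element, uintptr_t entropy)
{
    sllSetNext(element, *list, entropy);
    *list = element;
}

static void* sllPop(void** list, uintptr_t entropy)
{
    void* result = *list;
    *list = sllNext(*list, entropy);
    return result;
}

static size_t sllSize(void* head, uintptr_t entropy)
{
    size_t count = 0;
    for (; head; head = sllNext(head, entropy))
        count++;
    return count;
}
```

An attacker who overwrites a link word without knowing the entropy corrupts the unmasked value into an unpredictable address, rather than steering the free list to a chosen target.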
     
 
   // Linked list of all regions allocated by this allocator
-  void* allocated_regions_;
+  HardenedSLL allocated_regions_;
 
   // Free list of already carved objects
-  void* free_list_;
+  HardenedSLL free_list_;
 
   // Number of allocated but unfreed objects
   int inuse_;
+  uintptr_t entropy_;
 
  public:
-  void Init() {
+  void Init(uintptr_t entropy) {
     ASSERT(kAlignedSize <= kAllocIncrement);
     inuse_ = 0;
-    allocated_regions_ = 0;
+    allocated_regions_ = HardenedSLL::null();
     free_area_ = NULL;
     free_avail_ = 0;
-    free_list_ = NULL;
+    free_list_.setValue(NULL);
+    entropy_ = entropy;
   }
 

     // Consult free list
     void* result;
-    if (free_list_ != NULL) {
-      result = free_list_;
-      free_list_ = *(reinterpret_cast<void**>(result));
+    if (free_list_) {
+      result = free_list_.value();
+      free_list_ = SLL_Next(free_list_, entropy_);
     } else {
       if (free_avail_ < kAlignedSize) {

           CRASH();
 
-        *reinterpret_cast_ptr<void**>(new_allocation) = allocated_regions_;
-        allocated_regions_ = new_allocation;
+        HardenedSLL new_head = HardenedSLL::create(new_allocation);
+        SLL_SetNext(new_head, allocated_regions_, entropy_);
+        allocated_regions_ = new_head;
         free_area_ = new_allocation + kAlignedSize;
         free_avail_ = kAllocIncrement - kAlignedSize;

 
   void Delete(T* p) {
-    *(reinterpret_cast<void**>(p)) = free_list_;
-    free_list_ = p;
+    HardenedSLL new_head = HardenedSLL::create(p);
+    SLL_SetNext(new_head, free_list_, entropy_);
+    free_list_ = new_head;
     inuse_--;
   }

   void recordAdministrativeRegions(Recorder& recorder, const RemoteMemoryReader& reader)
   {
-      for (void* adminAllocation = allocated_regions_; adminAllocation; adminAllocation = reader.nextEntryInLinkedList(reinterpret_cast<void**>(adminAllocation)))
-          recorder.recordRegion(reinterpret_cast<vm_address_t>(adminAllocation), kAllocIncrement);
+      for (HardenedSLL adminAllocation = allocated_regions_; adminAllocation; adminAllocation.setValue(reader.nextEntryInHardenedLinkedList(reinterpret_cast<void**>(adminAllocation.value()), entropy_)))
+          recorder.recordRegion(reinterpret_cast<vm_address_t>(adminAllocation.value()), kAllocIncrement);
   }
 #endif
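The introspection side has to undo the mask using the *remote* node address as the key, because that is the address the target process used when it stored the link; the reader only holds a local copy of the node's memory. A sketch of what a helper like `nextEntryInHardenedLinkedList` has to do, with a hypothetical simplified signature and the key rotation omitted:

```cpp
#include <cassert>
#include <cstdint>

// localLinkWord: address of the node's link word in *our* copy of remote
// memory.  remoteNodeAddress: where that node lives in the target process,
// i.e. the key the target used when masking.  entropy: read out of the
// target heap's entropy_ field.
static void* nextEntryInHardenedLinkedList(void** localLinkWord,
                                           void* remoteNodeAddress,
                                           uintptr_t entropy)
{
    return reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(*localLinkWord)
        ^ reinterpret_cast<uintptr_t>(remoteNodeAddress) ^ entropy);
}
```

This is the reason the patch threads entropy explicitly and adds `remoteNext` to Span: any masking scheme keyed off an address the tool cannot reconstruct would break out-of-process heap walks.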
     
   PageID        start;          // Starting page number
   Length        length;         // Number of pages in span
-  Span* next() const { return XOR_MASK_PTR_WITH_KEY(m_next, this); }
-  Span* prev() const { return XOR_MASK_PTR_WITH_KEY(m_prev, this); }
-  void setNext(Span* next) { m_next = XOR_MASK_PTR_WITH_KEY(next, this); }
-  void setPrev(Span* prev) { m_prev = XOR_MASK_PTR_WITH_KEY(prev, this); }
+  Span* next(uintptr_t entropy) const { return XOR_MASK_PTR_WITH_KEY(m_next, this, entropy); }
+  Span* remoteNext(const Span* remoteSpanPointer, uintptr_t entropy) const { return XOR_MASK_PTR_WITH_KEY(m_next, remoteSpanPointer, entropy); }
+  Span* prev(uintptr_t entropy) const { return XOR_MASK_PTR_WITH_KEY(m_prev, this, entropy); }
+  void setNext(Span* next, uintptr_t entropy) { m_next = XOR_MASK_PTR_WITH_KEY(next, this, entropy); }
+  void setPrev(Span* prev, uintptr_t entropy) { m_prev = XOR_MASK_PTR_WITH_KEY(prev, this, entropy); }
 
 private:

   Span*         m_prev;           // Used when in link list
 public:
-  void*         objects;        // Linked list of free objects
+  HardenedSLL   objects;        // Linked list of free objects
   unsigned int  free : 1;       // Is the span free
 #ifndef NO_TCMALLOC_SAMPLES
     
 // -------------------------------------------------------------------------
 
-static inline void DLL_Init(Span* list) {
-  list->setNext(list);
-  list->setPrev(list);
-}
-
-static inline void DLL_Remove(Span* span) {
-  span->prev()->setNext(span->next());
-  span->next()->setPrev(span->prev());
-  span->setPrev(NULL);
-  span->setNext(NULL);
-}
-
-static ALWAYS_INLINE bool DLL_IsEmpty(const Span* list) {
-  return list->next() == list;
-}
-
-static int DLL_Length(const Span* list) {
+static inline void DLL_Init(Span* list, uintptr_t entropy) {
+  list->setNext(list, entropy);
+  list->setPrev(list, entropy);
+}
+
+static inline void DLL_Remove(Span* span, uintptr_t entropy) {
+  span->prev(entropy)->setNext(span->next(entropy), entropy);
+  span->next(entropy)->setPrev(span->prev(entropy), entropy);
+  span->setPrev(NULL, entropy);
+  span->setNext(NULL, entropy);
+}
+
+static ALWAYS_INLINE bool DLL_IsEmpty(const Span* list, uintptr_t entropy) {
+  return list->next(entropy) == list;
+}
+
+static int DLL_Length(const Span* list, uintptr_t entropy) {
   int result = 0;
-  for (Span* s = list->next(); s != list; s = s->next()) {
+  for (Span* s = list->next(entropy); s != list; s = s->next(entropy)) {
     result++;
   }
     
    11401201#endif
    11411202
    1142 static inline void DLL_Prepend(Span* list, Span* span) {
    1143   ASSERT(span->next() == NULL);
    1144   ASSERT(span->prev() == NULL);
    1145   span->setNext(list->next());
    1146   span->setPrev(list);
    1147   list->next()->setPrev(span);
    1148   list->setNext(span);
     1203static inline void DLL_Prepend(Span* list, Span* span, uintptr_t entropy) {
     1204  span->setNext(list->next(entropy), entropy);
     1205  span->setPrev(list, entropy);
     1206  list->next(entropy)->setPrev(span, entropy);
     1207  list->setNext(span, entropy);
    11491208}
    11501209
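The same masking applied to the doubly linked Span lists can be modeled in miniature (illustrative names, key rotation omitted): `m_next`/`m_prev` are stored XOR-masked with the span's own address and the heap entropy, and every accessor threads the entropy through.

```cpp
#include <cassert>
#include <cstdint>

struct Span {
    Span* m_next;
    Span* m_prev;

    static Span* mask(Span* p, const Span* key, uintptr_t entropy)
    {
        return reinterpret_cast<Span*>(reinterpret_cast<uintptr_t>(p)
            ^ reinterpret_cast<uintptr_t>(key) ^ entropy);
    }
    Span* next(uintptr_t entropy) const { return mask(m_next, this, entropy); }
    Span* prev(uintptr_t entropy) const { return mask(m_prev, this, entropy); }
    void setNext(Span* n, uintptr_t entropy) { m_next = mask(n, this, entropy); }
    void setPrev(Span* p, uintptr_t entropy) { m_prev = mask(p, this, entropy); }
};

// Circular sentinel list, as in the patch: an empty list points at itself.
static void dllInit(Span* list, uintptr_t entropy)
{
    list->setNext(list, entropy);
    list->setPrev(list, entropy);
}

static void dllPrepend(Span* list, Span* span, uintptr_t entropy)
{
    span->setNext(list->next(entropy), entropy);
    span->setPrev(list, entropy);
    list->next(entropy)->setPrev(span, entropy);
    list->setNext(span, entropy);
}

static void dllRemove(Span* span, uintptr_t entropy)
{
    span->prev(entropy)->setNext(span->next(entropy), entropy);
    span->next(entropy)->setPrev(span->prev(entropy), entropy);
    span->setPrev(0, entropy);
    span->setNext(0, entropy);
}

static bool dllIsEmpty(const Span* list, uintptr_t entropy)
{
    return list->next(entropy) == list;
}
```

Because `this` is part of the key, the raw `m_next` words of two spans on the same list differ even when they point at the same neighbor, which frustrates pattern-matching the heap for list structure.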
     
 class TCMalloc_Central_FreeList {
  public:
-  void Init(size_t cl);
+  void Init(size_t cl, uintptr_t entropy);
 
   // These methods all do internal locking.

   // Insert the specified range into the central freelist.  N is the number of
   // elements in the range.
-  void InsertRange(void *start, void *end, int N);
+  void InsertRange(HardenedSLL start, HardenedSLL end, int N);
 
   // Returns the actual number of fetched elements into N.
-  void RemoveRange(void **start, void **end, int *N);
+  void RemoveRange(HardenedSLL* start, HardenedSLL* end, int *N);
 
   // Returns the number of free objects in cache.

   void enumerateFreeObjects(Finder& finder, const Reader& reader, TCMalloc_Central_FreeList* remoteCentralFreeList)
   {
-    for (Span* span = &empty_; span && span != &empty_; span = (span->next() ? reader(span->next()) : 0))
-      ASSERT(!span->objects);
+    {
+      static const ptrdiff_t emptyOffset = reinterpret_cast<const char*>(&empty_) - reinterpret_cast<const char*>(this);
+      Span* remoteEmpty = reinterpret_cast<Span*>(reinterpret_cast<char*>(remoteCentralFreeList) + emptyOffset);
+      Span* remoteSpan = nonempty_.remoteNext(remoteEmpty, entropy_);
+      for (Span* span = reader(remoteEmpty); span && span != &empty_; remoteSpan = span->remoteNext(remoteSpan, entropy_), span = (remoteSpan ? reader(remoteSpan) : 0))
+        ASSERT(!span->objects);
+    }
 
     ASSERT(!nonempty_.objects);

 
     Span* remoteNonempty = reinterpret_cast<Span*>(reinterpret_cast<char*>(remoteCentralFreeList) + nonemptyOffset);
-    Span* remoteSpan = nonempty_.next();
-
-    for (Span* span = reader(remoteSpan); span && remoteSpan != remoteNonempty; remoteSpan = span->next(), span = (span->next() ? reader(span->next()) : 0)) {
-      for (void* nextObject = span->objects; nextObject; nextObject = reader.nextEntryInLinkedList(reinterpret_cast<void**>(nextObject)))
-        finder.visit(nextObject);
-    }
-  }
-#endif
-
+    Span* remoteSpan = nonempty_.remoteNext(remoteNonempty, entropy_);
+
+    for (Span* span = reader(remoteSpan); span && remoteSpan != remoteNonempty; remoteSpan = span->remoteNext(remoteSpan, entropy_), span = (remoteSpan ? reader(remoteSpan) : 0)) {
+      for (HardenedSLL nextObject = span->objects; nextObject; nextObject.setValue(reader.nextEntryInHardenedLinkedList(reinterpret_cast<void**>(nextObject.value()), entropy_))) {
+        finder.visit(nextObject.value());
+      }
+    }
+  }
+#endif
+
+  uintptr_t entropy() const { return entropy_; }
  private:
   // REQUIRES: lock_ is held
   // Remove object from cache and return.
   // Return NULL if no free entries in cache.
-  void* FetchFromSpans();
+  HardenedSLL FetchFromSpans();
 
   // REQUIRES: lock_ is held

   // from pageheap if cache is empty.  Only returns
   // NULL on allocation failure.
-  void* FetchFromSpansSafe();
+  HardenedSLL FetchFromSpansSafe();
 
   // REQUIRES: lock_ is held
   // Release a linked list of objects to spans.
   // May temporarily release lock_.
-  void ReleaseListToSpans(void *start);
+  void ReleaseListToSpans(HardenedSLL start);
 
   // REQUIRES: lock_ is held
   // Release an object to spans.
   // May temporarily release lock_.
-  ALWAYS_INLINE void ReleaseToSpans(void* object);
+  ALWAYS_INLINE void ReleaseToSpans(HardenedSLL object);
 
   // REQUIRES: lock_ is held

   // on a given size class.
   int32_t cache_size_;
+  uintptr_t entropy_;
 };
 
     
   uintptr_t free_pages_;
 
+  // Used for hardening
+  uintptr_t entropy_;
+
   // Bytes allocated from system
   uint64_t system_bytes_;

   free_pages_ = 0;
   system_bytes_ = 0;
+  entropy_ = HARDENING_ENTROPY;
 
 #if USE_BACKGROUND_THREAD_TO_SCAVENGE_MEMORY

   scavenge_index_ = kMaxPages-1;
   COMPILE_ASSERT(kNumClasses <= (1 << PageMapCache::kValuebits), valuebits);
-  DLL_Init(&large_.normal);
-  DLL_Init(&large_.returned);
+  DLL_Init(&large_.normal, entropy_);
+  DLL_Init(&large_.returned, entropy_);
   for (size_t i = 0; i < kMaxPages; i++) {
-    DLL_Init(&free_[i].normal);
-    DLL_Init(&free_[i].returned);
+    DLL_Init(&free_[i].normal, entropy_);
+    DLL_Init(&free_[i].returned, entropy_);
   }
 
     
             // If the span size is bigger than kMinSpanListsWithSpans pages return all the spans in the list, else return all but 1 span.
             // Return only 50% of a spanlist at a time so spans of size 1 are not the only ones left.
-            size_t length = DLL_Length(&slist->normal);
+            size_t length = DLL_Length(&slist->normal, entropy_);
             size_t numSpansToReturn = (i > kMinSpanListsWithSpans) ? length : length / 2;
-            for (int j = 0; static_cast<size_t>(j) < numSpansToReturn && !DLL_IsEmpty(&slist->normal) && free_committed_pages_ > targetPageCount; j++) {
-                Span* s = slist->normal.prev();
-                DLL_Remove(s);
+            for (int j = 0; static_cast<size_t>(j) < numSpansToReturn && !DLL_IsEmpty(&slist->normal, entropy_) && free_committed_pages_ > targetPageCount; j++) {
+                Span* s = slist->normal.prev(entropy_);
+                DLL_Remove(s, entropy_);
                 ASSERT(!s->decommitted);
                 if (!s->decommitted) {

                     s->decommitted = true;
                 }
-                DLL_Prepend(&slist->returned, s);
+                DLL_Prepend(&slist->returned, s, entropy_);
             }
         }
     
     Span* ll = NULL;
     bool released = false;
-    if (!DLL_IsEmpty(&free_[s].normal)) {
+    if (!DLL_IsEmpty(&free_[s].normal, entropy_)) {
       // Found normal span
       ll = &free_[s].normal;
-    } else if (!DLL_IsEmpty(&free_[s].returned)) {
+    } else if (!DLL_IsEmpty(&free_[s].returned, entropy_)) {
       // Found returned span; reallocate it
       ll = &free_[s].returned;

     }
 
-    Span* result = ll->next();
+    Span* result = ll->next(entropy_);
     Carve(result, n, released);
 #if USE_BACKGROUND_THREAD_TO_SCAVENGE_MEMORY
     
 
   // Search through normal list
-  for (Span* span = large_.normal.next();
+  for (Span* span = large_.normal.next(entropy_);
        span != &large_.normal;
-       span = span->next()) {
+       span = span->next(entropy_)) {
     if (span->length >= n) {
       if ((best == NULL)

 
   // Search through released list in case it has a better fit
-  for (Span* span = large_.returned.next();
+  for (Span* span = large_.returned.next(entropy_);
        span != &large_.returned;
-       span = span->next()) {
+       span = span->next(entropy_)) {
     if (span->length >= n) {
       if ((best == NULL)
     
 inline void TCMalloc_PageHeap::Carve(Span* span, Length n, bool released) {
   ASSERT(n > 0);
-  DLL_Remove(span);
+  DLL_Remove(span, entropy_);
   span->free = 0;
   Event(span, 'A', n);

     SpanList* listpair = (static_cast<size_t>(extra) < kMaxPages) ? &free_[extra] : &large_;
     Span* dst = &listpair->normal;
-    DLL_Prepend(dst, leftover);
+    DLL_Prepend(dst, leftover, entropy_);
 
     span->length = n;
     
 #endif
     mergeDecommittedStates(span, prev);
-    DLL_Remove(prev);
+    DLL_Remove(prev, entropy_);
     DeleteSpan(prev);
     span->start -= len;

 #endif
     mergeDecommittedStates(span, next);
-    DLL_Remove(next);
+    DLL_Remove(next, entropy_);
     DeleteSpan(next);
     span->length += len;

   if (span->decommitted) {
     if (span->length < kMaxPages)
-      DLL_Prepend(&free_[span->length].returned, span);
+      DLL_Prepend(&free_[span->length].returned, span, entropy_);
     else
-      DLL_Prepend(&large_.returned, span);
+      DLL_Prepend(&large_.returned, span, entropy_);
   } else {
     if (span->length < kMaxPages)
-      DLL_Prepend(&free_[span->length].normal, span);
+      DLL_Prepend(&free_[span->length].normal, span, entropy_);
     else
-      DLL_Prepend(&large_.normal, span);
+      DLL_Prepend(&large_.normal, span, entropy_);
   }
   free_pages_ += n;
     
     size_t result = 0;
     for (unsigned s = 0; s < kMaxPages; s++) {
-        const int r_length = DLL_Length(&free_[s].returned);
+        const int r_length = DLL_Length(&free_[s].returned, entropy_);
         unsigned r_pages = s * r_length;
         result += r_pages << kPageShift;
     }

-    for (Span* s = large_.returned.next(); s != &large_.returned; s = s->next())
+    for (Span* s = large_.returned.next(entropy_); s != &large_.returned; s = s->next(entropy_))
         result += s->length << kPageShift;
     return result;
     
   size_t totalFreeCommitted = 0;
 #endif
-  ASSERT(free_[0].normal.next() == &free_[0].normal);
-  ASSERT(free_[0].returned.next() == &free_[0].returned);
+  ASSERT(free_[0].normal.next(entropy_) == &free_[0].normal);
+  ASSERT(free_[0].returned.next(entropy_) == &free_[0].returned);
 #if USE_BACKGROUND_THREAD_TO_SCAVENGE_MEMORY
   totalFreeCommitted = CheckList(&large_.normal, kMaxPages, 1000000000, false);
     
 size_t TCMalloc_PageHeap::CheckList(Span* list, Length min_pages, Length max_pages, bool decommitted) {
   size_t freeCount = 0;
-  for (Span* s = list->next(); s != list; s = s->next()) {
+  for (Span* s = list->next(entropy_); s != list; s = s->next(entropy_)) {
     CHECK_CONDITION(s->free);
     CHECK_CONDITION(s->length >= min_pages);
     
 #endif

-  while (!DLL_IsEmpty(list)) {
-    Span* s = list->prev();
-
-    DLL_Remove(s);
+  while (!DLL_IsEmpty(list, entropy_)) {
+    Span* s = list->prev(entropy_);
+
+    DLL_Remove(s, entropy_);
     s->decommitted = true;
-    DLL_Prepend(returned, s);
+    DLL_Prepend(returned, s, entropy_);
     TCMalloc_SystemRelease(reinterpret_cast<void*>(s->start << kPageShift),
                            static_cast<size_t>(s->length << kPageShift));
     
 class TCMalloc_ThreadCache_FreeList {
  private:
-  void*    list_;       // Linked list of nodes
+  HardenedSLL list_;       // Linked list of nodes
   uint16_t length_;     // Current length
   uint16_t lowater_;    // Low water mark for list length
+  uintptr_t entropy_;   // Entropy source for hardening

  public:
-  void Init() {
-    list_ = NULL;
+  void Init(uintptr_t entropy) {
+    list_.setValue(NULL);
     length_ = 0;
     lowater_ = 0;
+    entropy_ = entropy;
+#if ENABLE(TCMALLOC_HARDENING)
+    ASSERT(entropy_);
+#endif
   }

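The FreeList changes above all follow one pattern: a free-list link is never stored as a raw pointer, but XOR-masked with a per-heap entropy value, so an attacker who overwrites a freed object with a plain address no longer steers the allocator. A minimal standalone sketch of that idea (MaskedNode, setNextNode and nextNode are hypothetical names, not the FastMalloc API):

```cpp
#include <cassert>
#include <cstdint>

// A free-list node whose "next" link is stored XOR-masked with a per-heap
// entropy value. Following a link requires knowing the entropy, so a heap
// overwrite that plants a plain pointer decodes to a garbage address.
struct MaskedNode {
    uintptr_t maskedNext; // next pointer XOR entropy; never a raw pointer
};

inline void setNextNode(MaskedNode* node, MaskedNode* next, uintptr_t entropy)
{
    node->maskedNext = reinterpret_cast<uintptr_t>(next) ^ entropy;
}

inline MaskedNode* nextNode(const MaskedNode* node, uintptr_t entropy)
{
    return reinterpret_cast<MaskedNode*>(node->maskedNext ^ entropy);
}
```

Note that a null terminator is stored as 0 ^ entropy, i.e. the entropy value itself, which is why the introspection code later treats a stored link equal to the entropy as end-of-list.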
     
   // Is list empty?
   bool empty() const {
-    return list_ == NULL;
+    return !list_;
   }

     
   void clear_lowwatermark() { lowater_ = length_; }

-  ALWAYS_INLINE void Push(void* ptr) {
-    SLL_Push(&list_, ptr);
+  ALWAYS_INLINE void Push(HardenedSLL ptr) {
+    SLL_Push(&list_, ptr, entropy_);
     length_++;
   }

-  void PushRange(int N, void *start, void *end) {
-    SLL_PushRange(&list_, start, end);
+  void PushRange(int N, HardenedSLL start, HardenedSLL end) {
+    SLL_PushRange(&list_, start, end, entropy_);
     length_ = length_ + static_cast<uint16_t>(N);
   }

-  void PopRange(int N, void **start, void **end) {
-    SLL_PopRange(&list_, N, start, end);
+  void PopRange(int N, HardenedSLL* start, HardenedSLL* end) {
+    SLL_PopRange(&list_, N, start, end, entropy_);
     ASSERT(length_ >= N);
     length_ = length_ - static_cast<uint16_t>(N);
     

   ALWAYS_INLINE void* Pop() {
-    ASSERT(list_ != NULL);
+    ASSERT(list_);
     length_--;
     if (length_ < lowater_) lowater_ = length_;
-    return SLL_Pop(&list_);
+    return SLL_Pop(&list_, entropy_).value();
   }

     
   void enumerateFreeObjects(Finder& finder, const Reader& reader)
   {
-      for (void* nextObject = list_; nextObject; nextObject = reader.nextEntryInLinkedList(reinterpret_cast<void**>(nextObject)))
-          finder.visit(nextObject);
+      for (HardenedSLL nextObject = list_; nextObject; nextObject.setValue(reader.nextEntryInHardenedLinkedList(reinterpret_cast<void**>(nextObject.value()), entropy_)))
+          finder.visit(nextObject.value());
   }
 #endif
     
   size_t        bytes_until_sample_;    // Bytes until we sample next

+  uintptr_t     entropy_;               // Entropy value used for hardening
+
   // Allocate a new heap. REQUIRES: pageheap_lock is held.
-  static inline TCMalloc_ThreadCache* NewHeap(ThreadIdentifier tid);
+  static inline TCMalloc_ThreadCache* NewHeap(ThreadIdentifier tid, uintptr_t entropy);

   // Use only as pthread thread-specific destructor function.
     
   TCMalloc_ThreadCache* prev_;

-  void Init(ThreadIdentifier tid);
+  void Init(ThreadIdentifier tid, uintptr_t entropy);
   void Cleanup();
     

   ALWAYS_INLINE void* Allocate(size_t size);
-  void Deallocate(void* ptr, size_t size_class);
+  void Deallocate(HardenedSLL ptr, size_t size_class);

   ALWAYS_INLINE void FetchFromCentralCache(size_t cl, size_t allocationSize);
     
 //-------------------------------------------------------------------

-void TCMalloc_Central_FreeList::Init(size_t cl) {
+void TCMalloc_Central_FreeList::Init(size_t cl, uintptr_t entropy) {
   lock_.Init();
   size_class_ = cl;
-  DLL_Init(&empty_);
-  DLL_Init(&nonempty_);
+  entropy_ = entropy;
+#if ENABLE(TCMALLOC_HARDENING)
+  ASSERT(entropy_);
+#endif
+  DLL_Init(&empty_, entropy_);
+  DLL_Init(&nonempty_, entropy_);
   counter_ = 0;

     
 }

-void TCMalloc_Central_FreeList::ReleaseListToSpans(void* start) {
+void TCMalloc_Central_FreeList::ReleaseListToSpans(HardenedSLL start) {
   while (start) {
-    void *next = SLL_Next(start);
+    HardenedSLL next = SLL_Next(start, entropy_);
     ReleaseToSpans(start);
     start = next;
     
 }

-ALWAYS_INLINE void TCMalloc_Central_FreeList::ReleaseToSpans(void* object) {
-  const PageID p = reinterpret_cast<uintptr_t>(object) >> kPageShift;
+ALWAYS_INLINE void TCMalloc_Central_FreeList::ReleaseToSpans(HardenedSLL object) {
+  const PageID p = reinterpret_cast<uintptr_t>(object.value()) >> kPageShift;
   Span* span = pageheap->GetDescriptor(p);
   ASSERT(span != NULL);
     

   // If span is empty, move it to non-empty list
-  if (span->objects == NULL) {
-    DLL_Remove(span);
-    DLL_Prepend(&nonempty_, span);
+  if (!span->objects) {
+    DLL_Remove(span, entropy_);
+    DLL_Prepend(&nonempty_, span, entropy_);
     Event(span, 'N', 0);
   }
     
     // Check that object does not occur in list
     unsigned got = 0;
-    for (void* p = span->objects; p != NULL; p = *((void**) p)) {
-      ASSERT(p != object);
+    for (HardenedSLL p = span->objects; p; p = SLL_Next(p, entropy_)) {
+      ASSERT(p.value() != object.value());
       got++;
     }
     
     Event(span, '#', 0);
     counter_ -= (span->length<<kPageShift) / ByteSizeForClass(span->sizeclass);
-    DLL_Remove(span);
+    DLL_Remove(span, entropy_);

     // Release central list lock while operating on pageheap
     
     lock_.Lock();
   } else {
-    *(reinterpret_cast<void**>(object)) = span->objects;
-    span->objects = object;
+    SLL_SetNext(object, span->objects, entropy_);
+    span->objects.setValue(object.value());
   }
 }
     
 }

-void TCMalloc_Central_FreeList::InsertRange(void *start, void *end, int N) {
+void TCMalloc_Central_FreeList::InsertRange(HardenedSLL start, HardenedSLL end, int N) {
   SpinLockHolder h(&lock_);
   if (N == num_objects_to_move[size_class_] &&
     
 }

-void TCMalloc_Central_FreeList::RemoveRange(void **start, void **end, int *N) {
+void TCMalloc_Central_FreeList::RemoveRange(HardenedSLL* start, HardenedSLL* end, int *N) {
   int num = *N;
   ASSERT(num > 0);
     

   // TODO: Prefetch multiple TCEntries?
-  void *tail = FetchFromSpansSafe();
+  HardenedSLL tail = FetchFromSpansSafe();
   if (!tail) {
     // We are completely out of memory.
-    *start = *end = NULL;
+    *start = *end = HardenedSLL::null();
     *N = 0;
     return;
   }

-  SLL_SetNext(tail, NULL);
-  void *head = tail;
+  SLL_SetNext(tail, HardenedSLL::null(), entropy_);
+  HardenedSLL head = tail;
   int count = 1;
   while (count < num) {
-    void *t = FetchFromSpans();
+    HardenedSLL t = FetchFromSpans();
     if (!t) break;
-    SLL_Push(&head, t);
+    SLL_Push(&head, t, entropy_);
     count++;
   }
     


-void* TCMalloc_Central_FreeList::FetchFromSpansSafe() {
-  void *t = FetchFromSpans();
+HardenedSLL TCMalloc_Central_FreeList::FetchFromSpansSafe() {
+  HardenedSLL t = FetchFromSpans();
   if (!t) {
     Populate();
     
 }

-void* TCMalloc_Central_FreeList::FetchFromSpans() {
-  if (DLL_IsEmpty(&nonempty_)) return NULL;
-  Span* span = nonempty_.next();
-
-  ASSERT(span->objects != NULL);
+HardenedSLL TCMalloc_Central_FreeList::FetchFromSpans() {
+  if (DLL_IsEmpty(&nonempty_, entropy_)) return HardenedSLL::null();
+  Span* span = nonempty_.next(entropy_);
+
+  ASSERT(span->objects);
   ASSERT_SPAN_COMMITTED(span);
   span->refcount++;
-  void* result = span->objects;
-  span->objects = *(reinterpret_cast<void**>(result));
-  if (span->objects == NULL) {
+  HardenedSLL result = span->objects;
+  span->objects = SLL_Next(result, entropy_);
+  if (!span->objects) {
     // Move to empty list
-    DLL_Remove(span);
-    DLL_Prepend(&empty_, span);
+    DLL_Remove(span, entropy_);
+    DLL_Prepend(&empty_, span, entropy_);
     Event(span, 'E', 0);
   }
     
   // Split the block into pieces and add to the free-list
   // TODO: coloring of objects to avoid cache conflicts?
-  void** tail = &span->objects;
-  char* ptr = reinterpret_cast<char*>(span->start << kPageShift);
-  char* limit = ptr + (npages << kPageShift);
+  HardenedSLL head = HardenedSLL::null();
+  char* start = reinterpret_cast<char*>(span->start << kPageShift);
   const size_t size = ByteSizeForClass(size_class_);
+  char* ptr = start + (npages << kPageShift) - ((npages << kPageShift) % size);
   int num = 0;
-  char* nptr;
-  while ((nptr = ptr + size) <= limit) {
-    *tail = ptr;
-    tail = reinterpret_cast_ptr<void**>(ptr);
-    ptr = nptr;
+  while (ptr > start) {
+    ptr -= size;
+    HardenedSLL node = HardenedSLL::create(ptr);
+    SLL_SetNext(node, head, entropy_);
+    head = node;
     num++;
   }
-  ASSERT(ptr <= limit);
-  *tail = NULL;
+  ASSERT(ptr == start);
+  ASSERT(ptr == head.value());
+  span->objects = head;
+  ASSERT(span->objects.value() == head.value());
   span->refcount = 0; // No sub-object in use yet

   // Add span to list of non-empty spans
   lock_.Lock();
-  DLL_Prepend(&nonempty_, span);
+  DLL_Prepend(&nonempty_, span, entropy_);
   counter_ += num;
 }
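The rewritten Populate() above carves the span back-to-front: it walks from the end of the block toward the start and pushes each object onto the head, which yields a free list in ascending address order with every link XOR-masked. A standalone sketch of that carving loop, with illustrative names (carveBlock is not the FastMalloc API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Carve a block into objectSize-byte pieces linked through masked pointers,
// mirroring the back-to-front loop in the hardened Populate(). Returns the
// (unmasked) head of the list and reports the object count via outCount.
inline void* carveBlock(char* start, size_t blockBytes, size_t objectSize,
                        uintptr_t entropy, int* outCount)
{
    uintptr_t maskedHead = 0 ^ entropy; // masked null terminator
    char* ptr = start + blockBytes - (blockBytes % objectSize);
    int num = 0;
    while (ptr > start) {
        ptr -= objectSize;
        *reinterpret_cast<uintptr_t*>(ptr) = maskedHead; // store masked next
        maskedHead = reinterpret_cast<uintptr_t>(ptr) ^ entropy;
        ++num;
    }
    *outCount = num;
    return reinterpret_cast<void*>(maskedHead ^ entropy); // unmasked head
}
```

Because each object is pushed onto the head as the walk moves toward lower addresses, the finished list starts at the lowest address, matching the order the old forward loop produced.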
     
 }

-void TCMalloc_ThreadCache::Init(ThreadIdentifier tid) {
+void TCMalloc_ThreadCache::Init(ThreadIdentifier tid, uintptr_t entropy) {
   size_ = 0;
   next_ = NULL;
     
   tid_  = tid;
   in_setspecific_ = false;
+  entropy_ = entropy;
+#if ENABLE(TCMALLOC_HARDENING)
+  ASSERT(entropy_);
+#endif
   for (size_t cl = 0; cl < kNumClasses; ++cl) {
-    list_[cl].Init();
+    list_[cl].Init(entropy_);
   }
     
 }

-inline void TCMalloc_ThreadCache::Deallocate(void* ptr, size_t cl) {
+inline void TCMalloc_ThreadCache::Deallocate(HardenedSLL ptr, size_t cl) {
   size_ += ByteSizeForClass(cl);
   FreeList* list = &list_[cl];
     
 ALWAYS_INLINE void TCMalloc_ThreadCache::FetchFromCentralCache(size_t cl, size_t allocationSize) {
   int fetch_count = num_objects_to_move[cl];
-  void *start, *end;
+  HardenedSLL start, end;
   central_cache[cl].RemoveRange(&start, &end, &fetch_count);
   list_[cl].PushRange(fetch_count, start, end);
     
   int batch_size = num_objects_to_move[cl];
   while (N > batch_size) {
-    void *tail, *head;
+    HardenedSLL tail, head;
     src->PopRange(batch_size, &head, &tail);
     central_cache[cl].InsertRange(head, tail, batch_size);
     N -= batch_size;
   }
-  void *tail, *head;
+  HardenedSLL tail, head;
   src->PopRange(N, &head, &tail);
   central_cache[cl].InsertRange(head, tail, N);
     
   SpinLockHolder h(&pageheap_lock);
   if (!phinited) {
+    uintptr_t entropy = HARDENING_ENTROPY;
 #ifdef WTF_CHANGES
     InitTSD();
 #endif
     InitSizeClasses();
-    threadheap_allocator.Init();
-    span_allocator.Init();
+    threadheap_allocator.Init(entropy);
+    span_allocator.Init(entropy);
     span_allocator.New(); // Reduce cache conflicts
     span_allocator.New(); // Reduce cache conflicts
-    stacktrace_allocator.Init();
-    DLL_Init(&sampled_objects);
+    stacktrace_allocator.Init(entropy);
+    DLL_Init(&sampled_objects, entropy);
     for (size_t i = 0; i < kNumClasses; ++i) {
-      central_cache[i].Init(i);
+      central_cache[i].Init(i, entropy);
     }
     pageheap->init();
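The HARDENING_ENTROPY value threaded through this initialization is computed once per process; per the change description it uses arc4random() on Darwin and time combined with an ASLR-affected address elsewhere. A rough sketch of the non-Darwin flavor (an illustration of the approach, not the exact WebKit implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <ctime>

// One-time entropy value mixing the clock with the address of a static,
// which varies under ASLR. Forced nonzero, since zero entropy would make
// the XOR masking a no-op.
inline uintptr_t internalEntropySketch()
{
    static uintptr_t entropy = 0;
    if (!entropy) {
        uintptr_t aslrSeed = reinterpret_cast<uintptr_t>(&entropy);
        entropy = static_cast<uintptr_t>(std::time(0)) ^ (aslrSeed << 13);
        if (!entropy)
            entropy = 1;
    }
    return entropy;
}
```

Computing it once and passing it down explicitly (rather than reading a global) is what keeps the out-of-process introspection tools able to reproduce the mask.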
     
 }

-inline TCMalloc_ThreadCache* TCMalloc_ThreadCache::NewHeap(ThreadIdentifier tid) {
+inline TCMalloc_ThreadCache* TCMalloc_ThreadCache::NewHeap(ThreadIdentifier tid, uintptr_t entropy) {
   // Create the heap and add it to the linked list
   TCMalloc_ThreadCache *heap = threadheap_allocator.New();
-  heap->Init(tid);
+  heap->Init(tid, entropy);
   heap->next_ = thread_heaps;
   heap->prev_ = NULL;
     
     }

-    if (heap == NULL) heap = NewHeap(me);
+    if (heap == NULL) heap = NewHeap(me, HARDENING_ENTROPY);
   }

     
     TCMalloc_ThreadCache* heap = TCMalloc_ThreadCache::GetCacheIfPresent();
     if (heap != NULL) {
-      heap->Deallocate(ptr, cl);
+      heap->Deallocate(HardenedSLL::create(ptr), cl);
     } else {
       // Delete directly into central cache
-      SLL_SetNext(ptr, NULL);
-      central_cache[cl].InsertRange(ptr, ptr, 1);
+      SLL_SetNext(HardenedSLL::create(ptr), HardenedSLL::null(), central_cache[cl].entropy());
+      central_cache[cl].InsertRange(HardenedSLL::create(ptr), HardenedSLL::create(ptr), 1);
     }
   } else {
     
         return 0;

-    for (void* free = span->objects; free != NULL; free = *((void**) free)) {
-        if (ptr == free)
+    for (HardenedSLL free = span->objects; free; free = SLL_Next(free, HARDENING_ENTROPY)) {
+        if (ptr == free.value())
             return 0;
     }
     

 #if OS(DARWIN)
+
+template <typename T>
+T* RemoteMemoryReader::nextEntryInHardenedLinkedList(T** remoteAddress, uintptr_t entropy) const
+{
+    T** localAddress = (*this)(remoteAddress);
+    if (!localAddress)
+        return 0;
+    T* hardenedNext = *localAddress;
+    if (!hardenedNext || hardenedNext == (void*)entropy)
+        return 0;
+    return XOR_MASK_PTR_WITH_KEY(hardenedNext, remoteAddress, entropy);
+}

 class FreeObjectFinder {
     
     const RemoteMemoryReader& m_reader;
     FreeObjectFinder& m_freeObjectFinder;
+    uintptr_t m_entropy;

 public:
-    PageMapFreeObjectFinder(const RemoteMemoryReader& reader, FreeObjectFinder& freeObjectFinder)
+    PageMapFreeObjectFinder(const RemoteMemoryReader& reader, FreeObjectFinder& freeObjectFinder, uintptr_t entropy)
         : m_reader(reader)
         , m_freeObjectFinder(freeObjectFinder)
-    { }
+        , m_entropy(entropy)
+    {
+#if ENABLE(TCMALLOC_HARDENING)
+        ASSERT(m_entropy);
+#endif
+    }

     int visit(void* ptr) const
     
         } else if (span->sizeclass) {
             // Walk the free list of the small-object span, keeping track of each object seen
-            for (void* nextObject = span->objects; nextObject; nextObject = m_reader.nextEntryInLinkedList(reinterpret_cast<void**>(nextObject)))
-                m_freeObjectFinder.visit(nextObject);
+            for (HardenedSLL nextObject = span->objects; nextObject; nextObject.setValue(m_reader.nextEntryInHardenedLinkedList(reinterpret_cast<void**>(nextObject.value()), m_entropy)))
+                m_freeObjectFinder.visit(nextObject.value());
         }
         return span->length;
     
     void recordPendingRegions()
     {
-        Span* lastSpan = m_coalescedSpans[m_coalescedSpans.size() - 1];
-        vm_range_t ptrRange = { m_coalescedSpans[0]->start << kPageShift, 0 };
-        ptrRange.size = (lastSpan->start << kPageShift) - ptrRange.address + (lastSpan->length * kPageSize);
-
-        // Mark the memory region the spans represent as a candidate for containing pointers
-        if (m_typeMask & MALLOC_PTR_REGION_RANGE_TYPE)
-            (*m_recorder)(m_task, m_context, MALLOC_PTR_REGION_RANGE_TYPE, &ptrRange, 1);
-
-        if (!(m_typeMask & MALLOC_PTR_IN_USE_RANGE_TYPE)) {
+        if (!(m_typeMask & (MALLOC_PTR_IN_USE_RANGE_TYPE | MALLOC_PTR_REGION_RANGE_TYPE))) {
             m_coalescedSpans.clear();
             return;
     
         }

-        (*m_recorder)(m_task, m_context, MALLOC_PTR_IN_USE_RANGE_TYPE, allocatedPointers.data(), allocatedPointers.size());
+        (*m_recorder)(m_task, m_context, m_typeMask & (MALLOC_PTR_IN_USE_RANGE_TYPE | MALLOC_PTR_REGION_RANGE_TYPE), allocatedPointers.data(), allocatedPointers.size());

         m_coalescedSpans.clear();
     

     TCMalloc_PageHeap::PageMap* pageMap = &pageHeap->pagemap_;
-    PageMapFreeObjectFinder pageMapFinder(memoryReader, finder);
+    PageMapFreeObjectFinder pageMapFinder(memoryReader, finder, pageHeap->entropy_);
     pageMap->visitValues(pageMapFinder, memoryReader);
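The introspection path above decodes links with XOR_MASK_PTR_WITH_KEY, whose key mixes the address of the slot holding the link with the heap entropy, so the same next pointer encodes differently in every slot. A minimal sketch of that scheme (the key derivation shown, slot address XOR entropy, is illustrative; the real macro may combine its inputs differently):

```cpp
#include <cassert>
#include <cstdint>

// Mask and unmask a pointer with a key derived from the containing slot's
// address and the heap entropy. XOR is its own inverse, so the same key
// round-trips, and distinct slots yield distinct encodings of one pointer.
inline uintptr_t maskPointer(uintptr_t ptr, uintptr_t slotAddress, uintptr_t entropy)
{
    return ptr ^ slotAddress ^ entropy;
}

inline uintptr_t unmaskPointer(uintptr_t masked, uintptr_t slotAddress, uintptr_t entropy)
{
    return masked ^ slotAddress ^ entropy;
}
```

Tying the key to the slot address is what lets an out-of-process reader like RemoteMemoryReader recompute the mask: it knows the remote slot address it just read from and the entropy it was handed, without needing any in-process global.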
  • trunk/Source/WTF/wtf/MallocZoneSupport.h

    r111778 r142536  

     template <typename T>
-    T* nextEntryInLinkedList(T** address) const
-    {
-        T** output = (*this)(address);
-        if (!output)
-            return 0;
-        return *output;
-    }
+    T* nextEntryInHardenedLinkedList(T** address, uintptr_t entropy) const;
 };
