Changes between Version 4 and Version 5 of April 2011 Meeting/Threading

Apr 27, 2011 8:05:33 AM

Dumped my notes on the session


  • April 2011 Meeting/Threading

    Placeholder for mtg notes. Still filling in more.[[BR]]
    Possible topics:[[BR]]
    * Common pitfalls.[[BR]]
    * Current patterns.[[BR]]
    Those are the areas where we have some threading:[[BR]]
    * WebWorkers[[BR]]
    * FileAPI[[BR]]
    * Database (IconDatabase)[[BR]]
    * Compositor (Chromium specific)[[BR]]
    * Audio API[[BR]]
    WebWorkers (david_levin):[[BR]]
    * Mostly uses a queue of tasks to pass messages between threads[[BR]]
    * Also some template magic to do the boilerplate code for message handling[[BR]]
    * Another good pattern[[BR]]
    * Very specific to this code[[BR]]
    * Have to do a lot of explicit copies of some data structures (String...)[[BR]]
    * Common data structures are not designed to be shared (mainly String again)[[BR]]
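The queue-of-tasks pattern described above can be sketched as a minimal producer/consumer message queue. This is a hypothetical illustration using the C++ standard library, not WebKit's actual implementation:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Minimal sketch of a cross-thread task queue, loosely modeled on the
// pattern the notes describe for WebWorkers. Hypothetical names.
class TaskQueue {
public:
    using Task = std::function<void()>;

    // Called from any thread: enqueue a task for the consumer thread.
    void append(Task task) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_tasks.push(std::move(task));
        }
        m_condition.notify_one();
    }

    // Called on the consumer thread: block until a task is available.
    Task waitForTask() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_condition.wait(lock, [this] { return !m_tasks.empty(); });
        Task task = std::move(m_tasks.front());
        m_tasks.pop();
        return task;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_condition;
    std::queue<Task> m_tasks;
};

// Demo: a worker thread processes tasks posted from the "main" thread.
int runQueueDemo() {
    TaskQueue queue;
    int result = 0;
    std::thread worker([&queue] {
        // Process exactly the two tasks posted below.
        for (int i = 0; i < 2; ++i)
            queue.waitForTask()();
    });
    queue.append([&result] { result += 40; });
    queue.append([&result] { result += 2; });
    worker.join();
    return result;
}
```

Because each task runs on the consumer thread, no lock is needed around the state the tasks touch; only the queue itself is locked.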
    Audio API:[[BR]]
    * They share the nodes between threads[[BR]]
    * Had to create a new sharing model[[BR]]
    * The high-priority audio thread is the one destroying the nodes[[BR]]
    * Not sure it should be generalized, as it uses the background thread's priority to solve some of the lifetime issues[[BR]]
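The lifetime issue behind these notes can be illustrated with thread-safe reference counting: whichever thread drops the last reference runs the destructor, which is why it matters that the audio thread is the one destroying nodes. A hypothetical sketch (WebKit's actual classes differ):

```cpp
#include <atomic>
#include <thread>

// Sketch of an object shared between the main thread and an audio thread.
// The last deref, on whichever thread it happens, destroys the object.
class SharedNode {
public:
    explicit SharedNode(std::atomic<bool>& destroyed)
        : m_refCount(1), m_destroyed(destroyed) {}

    void ref() { m_refCount.fetch_add(1, std::memory_order_relaxed); }

    void deref() {
        // acq_rel ordering: all writes to the node happen-before deletion.
        if (m_refCount.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete this;
    }

private:
    ~SharedNode() { m_destroyed = true; } // private: only deref() may destroy

    std::atomic<int> m_refCount;
    std::atomic<bool>& m_destroyed;
};

// Demo: two threads each hold a reference; the node is destroyed exactly
// once, by whichever thread derefs last.
bool runRefCountDemo() {
    std::atomic<bool> destroyed{false};
    auto* node = new SharedNode(destroyed);
    node->ref(); // second reference, held by the "audio thread"
    std::thread audioThread([node] { node->deref(); });
    node->deref(); // main thread drops its reference
    audioThread.join();
    return destroyed.load();
}
```

The counting itself generalizes; what does not, per the notes, is relying on the audio thread's scheduling priority to decide *where* destruction happens.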
    Current situation:[[BR]]
    * Sharing is bad, as most objects are not designed to be shared[[BR]]
    * Strings, for example, are not shareable across threads (for performance)[[BR]]
    * Locking is also difficult to get right[[BR]]
    * Keep things isolated between threads![[BR]]
    * Don't do sharing! Difficult bugs too![[BR]]
    * Lots of copies involved to avoid sharing![[BR]]
    * Currently, to be called back on the main thread, we use a Timer, which is expensive and requires keeping some state.[[BR]]
    * Some dispatching API for running a task on this specific thread[[BR]]
    * XHR, File, IconDB all go back to the main thread[[BR]]
    * SQLDB has to handle either the main thread or another thread.[[BR]]
    * Ping-ponging is easy to get wrong; you would need some object to handle that![[BR]]
    * An example is the DB, where you don't need to send a CleanUp task when the thread is dead.[[BR]]
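The "lots of copies to avoid sharing" point can be made concrete: since the common String type is refcounted but not thread-safe, code takes a deep copy before handing data to another thread. A sketch, with `std::string` standing in for the real type and a hypothetical `isolatedCopy` helper:

```cpp
#include <string>
#include <thread>

// Hypothetical helper: force a fresh allocation so no buffer is shared
// between threads. (WebKit strings are refcounted, so a plain copy would
// still share the underlying buffer; a deep copy avoids that.)
std::string isolatedCopy(const std::string& s) {
    return std::string(s.data(), s.size());
}

// Demo: hand a message to a worker thread by value, via an isolated copy.
std::string sendToWorker(const std::string& message) {
    std::string received;
    // The worker gets its own copy; the caller may free or mutate the
    // original without a data race.
    std::string copy = isolatedCopy(message);
    std::thread worker([&received, copy] { received = copy; });
    worker.join();
    return received;
}
```

The cost of this discipline is exactly the copies the notes complain about, but it keeps each thread's data isolated.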
    * Future patterns:[[BR]]
    Why not a templated cross-thread shared String?[[BR]]
    * Problem with performance![[BR]]
    * There is some mitigation already (a semi-efficient implementation for long Strings that avoids copying)[[BR]]
    Why not develop a push API for GraphicsContext?[[BR]]
    * WebWorker is complex because the lifetimes of both threads are independent[[BR]]
    Proposal (see link):[[BR]]
    * Template magic to match the API (like this is a String or something else)[[BR]]
    * Currently using raw pointers, so we will need to decorate them[[BR]]
    * Lifetime to be determined by the originating thread[[BR]]
    * At some point, share the code of Tasks, MessageQueues... in a common implementation somehow[[BR]]
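The "template magic to match the API" idea can be sketched as a trait that decides, per type, how a value must be transformed before it may cross a thread boundary: plain values pass through, strings get a deep copy. Hypothetical names throughout:

```cpp
#include <string>

// Trait selecting the right cross-thread treatment per type.
template <typename T>
struct CrossThreadCopier {
    // Safe default: plain values (int, double, ...) are copied as-is.
    static T copy(const T& value) { return value; }
};

// Specialization: strings need a deep copy so no buffer is shared.
template <>
struct CrossThreadCopier<std::string> {
    static std::string copy(const std::string& value) {
        return std::string(value.data(), value.size());
    }
};

// A task factory would apply the right copier to each argument
// automatically; this helper shows the dispatch.
template <typename T>
T crossThreadCopy(const T& value) {
    return CrossThreadCopier<T>::copy(value);
}

// Demo: the int passes through; the string is equal but owns a distinct
// buffer (long enough to defeat small-string optimization).
bool runCopierDemo() {
    int n = crossThreadCopy(7);
    std::string original = "a string long enough to defeat small-string optimization";
    std::string copy = crossThreadCopy(original);
    return n == 7 && copy == original && copy.data() != original.data();
}
```

Callers then write one generic "post task with arguments" API, and the trait quietly inserts the copies the notes say are currently done by hand.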
    * Moving to a dispatch type model (thread pool, etc.) -- proposal (ap?, jchaffraix?).[[BR]]
    Somebody (sorry, I did not take his name) called this: SystemWorldThread[[BR]]
    * Some threading cannot be moved to that (Audio...)[[BR]]
    * Some problems with some models[[BR]]
    * Maybe we need several models[[BR]]
    * Game engines push the Audio to its own thread; the rest uses ThreadPools[[BR]]
    * Maybe not a one-size-fits-all solution[[BR]]
    * Lots of value in having a common Task[[BR]]
    * jchaffraix needs to sync with ap about an API for that.[[BR]]
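The "dispatch type model" floated here can be sketched as a fixed pool of threads pulling tasks from a shared queue. This is a hypothetical illustration of the idea under discussion, not what WebKit shipped:

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal thread pool: dispatch() hands a task to whichever worker wakes
// first; the destructor drains the queue and joins the workers.
class ThreadPool {
public:
    explicit ThreadPool(unsigned threadCount) {
        for (unsigned i = 0; i < threadCount; ++i)
            m_threads.emplace_back([this] { workerLoop(); });
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_shuttingDown = true;
        }
        m_condition.notify_all();
        for (auto& thread : m_threads)
            thread.join();
    }

    void dispatch(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_tasks.push(std::move(task));
        }
        m_condition.notify_one();
    }

private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_condition.wait(lock, [this] {
                    return m_shuttingDown || !m_tasks.empty();
                });
                if (m_tasks.empty())
                    return; // shutting down and queue drained
                task = std::move(m_tasks.front());
                m_tasks.pop();
            }
            task();
        }
    }

    std::mutex m_mutex;
    std::condition_variable m_condition;
    std::queue<std::function<void()>> m_tasks;
    std::vector<std::thread> m_threads;
    bool m_shuttingDown = false;
};

// Demo: sum 1..10 across four workers.
int runPoolDemo() {
    std::atomic<int> sum{0};
    {
        ThreadPool pool(4);
        for (int i = 1; i <= 10; ++i)
            pool.dispatch([&sum, i] { sum += i; });
    } // destructor drains and joins
    return sum.load();
}
```

This also shows why a common Task abstraction has value: the pool only sees opaque callables, regardless of which subsystem posted them. The Audio case stays out of such a pool because it needs a dedicated high-priority thread.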
    Current work:[[BR]]
    * Parallelize SVG filters[[BR]]
    * Nice performance improvements[[BR]]
    * Also implemented a first API inspired by OpenMP[[BR]]
    * However, there are other libraries / APIs available for multi-threading[[BR]]
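An OpenMP-inspired API of the kind mentioned for the SVG filter work typically boils down to a parallel-for: split an index range into chunks and run each chunk on its own thread. A hypothetical sketch:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical parallelFor: each index in [begin, end) is handled by
// exactly one thread, so the body may write its own elements race-free.
void parallelFor(std::size_t begin, std::size_t end, unsigned threadCount,
                 const std::function<void(std::size_t)>& body) {
    std::size_t total = end - begin;
    std::size_t chunk = (total + threadCount - 1) / threadCount;
    std::vector<std::thread> threads;
    for (unsigned t = 0; t < threadCount; ++t) {
        std::size_t chunkBegin = begin + t * chunk;
        std::size_t chunkEnd = std::min(end, chunkBegin + chunk);
        if (chunkBegin >= chunkEnd)
            break;
        threads.emplace_back([=, &body] {
            for (std::size_t i = chunkBegin; i < chunkEnd; ++i)
                body(i);
        });
    }
    for (auto& thread : threads)
        thread.join();
}

// Demo: "apply a filter" (double each value) across a pixel buffer.
std::vector<int> runFilterDemo() {
    std::vector<int> pixels = {1, 2, 3, 4, 5, 6, 7, 8};
    parallelFor(0, pixels.size(), 4,
                [&pixels](std::size_t i) { pixels[i] *= 2; });
    return pixels;
}
```

Per-pixel filters parallelize well with this shape because each output element depends only on its own inputs; filters with cross-pixel dependencies would need a different decomposition.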
    * What to parallelize.[[BR]]
    Not treated.