1. 11 Jul, 2017 1 commit
  2. 23 Apr, 2017 1 commit
  3. 02 Jan, 2017 2 commits
  4. 15 Sep, 2016 1 commit
    • Expire dying parent from threading cache before processing children · c335c606
      Daniel Vrátil authored
      Fixes a crash in the Model when a thread leader is removed and
      a ViewJob for its children is started to re-attach the subtree
      to a new parent node. The second pass would then get a pointer
      to the now-deleted parent from the threading cache, eventually
      leading to a crash.
      
      This patch makes sure the parent is expired from the cache
      before the ViewJobs are started. The resulting cache miss then
      triggers the actual threading calculation in Pass2 and Pass3,
      which updates the cache.
      
      BUG: 364994
      FIXED-IN: 16.08.1
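      The fix described above can be illustrated with a minimal sketch. This is not
      the actual MessageList code: std::unordered_map stands in for QHash, and the
      names (ThreadingCache, expireParent, parentForItem) are hypothetical.

      ```cpp
      #include <cstdint>
      #include <unordered_map>

      using ItemId = std::uint64_t;

      // Hypothetical sketch of the child->parent threading cache.
      class ThreadingCache {
      public:
          void setParent(ItemId child, ItemId parent) { mParents[child] = parent; }

          // Expire a dying parent: drop its own entry and every child->parent
          // mapping that points at it, so later lookups miss and trigger a
          // full re-evaluation instead of returning a stale pointer.
          void expireParent(ItemId parent) {
              mParents.erase(parent);
              for (auto it = mParents.begin(); it != mParents.end();) {
                  if (it->second == parent) {
                      it = mParents.erase(it);
                  } else {
                      ++it;
                  }
              }
          }

          bool parentForItem(ItemId child, ItemId &parent) const {
              const auto it = mParents.find(child);
              if (it == mParents.end()) {
                  return false; // cache miss: caller must do full threading
              }
              parent = it->second;
              return true;
          }

      private:
          std::unordered_map<ItemId, ItemId> mParents;
      };
      ```

      The key point is that expiry happens before any ViewJob runs, so no later
      pass can observe a mapping to the deleted parent.
      
      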
  5. 05 Jun, 2016 1 commit
  6. 31 May, 2016 1 commit
  7. 30 May, 2016 1 commit
    • Implement client-side threading cache · 1f8cef2c
      Daniel Vrátil authored
      The cache is a simple QHash<ChildId, ParentId> that we build when
      a folder is opened for the first time and persist in a per-folder
      cache file. The aggregation state (grouping and threading) is
      also stored in the cache file so we can check whether the cache
      matches the current aggregation configuration before loading it.
      If the aggregation has changed, we simply discard the cache file
      and perform full, un-cached threading.
      
      There is a second QHash<ItemId, Item*> in ThreadingCache which
      we fully populate in Pass1. Pass1 may already yield some perfect
      matches thanks to the Child-Parent cache, but only if the parent
      Item* has already been inserted into the second QHash. This does
      not happen often, because we retrieve Items in reverse order, so
      children arrive before their parents. If we cannot do a perfect
      match but the parent ID is available in the cache, we simply move
      the Item to Pass2 and go on to the next one. Otherwise we let
      Pass1 do the full evaluation, which inserts the Item into the
      cache if a perfect match is found, or queues it for Pass2.
      
      In Pass2 the second QHash is fully populated, so we can perfect-
      match all Items using the ChildId->ParentId cache. Items that are
      known not to have a parent (i.e. thread leaders) are scheduled
      for Pass4; Items that are not available in the cache are sent for
      full evaluation through Pass2 (and Pass3 if needed) and inserted
      into the cache.
      
      Finally, Pass4 performs grouping. There is no caching for that
      right now because the grouping is dynamic and there are no
      stable identifiers for group headers. We could possibly cache
      the fixed groups (i.e. sender, receiver or subject) and maybe
      even fixed-date groups (e.g. "January 2014", "March 2015") and
      only let Pass4 calculate the dynamic groups ("May", "Two weeks
      ago", "Yesterday", ...), but the gain would be minimal as we are
      usually dealing with very few groups. The real bottleneck of
      Pass4 is beginInsertRows()/endInsertRows() as threads are shifted
      around - something to look into next.
      
      On my system this speeds up opening a folder with 50000 emails
      by ~30%.
      
      Differential Revision: https://phabricator.kde.org/D1636
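      The two-hash design described in the commit message can be sketched as
      follows. This is a simplified illustration, not the real MessageList
      implementation: std::unordered_map stands in for QHash, and the member
      and function names (childToParent, idToItem, perfectMatch) are
      hypothetical.

      ```cpp
      #include <cstdint>
      #include <unordered_map>

      struct Item {
          std::uint64_t id;
          Item *parent = nullptr;
      };

      // Hypothetical sketch of the two hashes held by the threading cache.
      struct ThreadingCache {
          // ChildId -> ParentId, built on first open and persisted per folder.
          std::unordered_map<std::uint64_t, std::uint64_t> childToParent;
          // ItemId -> Item*, rebuilt in Pass1 each time the folder is opened.
          std::unordered_map<std::uint64_t, Item *> idToItem;

          // Perfect match: both the cached parent ID and the live parent Item*
          // must be known. During Pass1 the Item* is often missing (children
          // arrive before parents), so the match only succeeds once idToItem
          // is fully populated, i.e. in Pass2.
          Item *perfectMatch(const Item *child) const {
              const auto pid = childToParent.find(child->id);
              if (pid == childToParent.end()) {
                  return nullptr; // cache miss: full Pass2/Pass3 evaluation
              }
              const auto parent = idToItem.find(pid->second);
              return parent != idToItem.end() ? parent->second : nullptr;
          }
      };
      ```

      A lookup that fails during Pass1 because the parent Item has not been
      seen yet succeeds later in Pass2, which is why the commit moves such
      Items to Pass2 rather than doing the expensive full evaluation.
      
      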
  8. 23 May, 2016 1 commit
  9. 22 May, 2016 3 commits
  10. 03 Apr, 2016 1 commit
  11. 09 Dec, 2015 2 commits