  20 Apr, 2015 · 1 commit
      Remove Baloo references from API and binary names · 9f24e92f
      Daniel Vrátil authored
      akonadi-search has nothing to do with Baloo anymore; all the parts we used to use
      from Baloo are now forked into this repo, so we call the entire thing Akonadi::Search.
      
      The only remaining references to Baloo are the config files (baloorc) and the indexing
      database location (~/.local/share/baloo). We should probably migrate those too
      eventually.
  13 Jul, 2014 · 1 commit
      Akonadi Agent: Introduce a scheduler · 72552503
      Christian Mollekopf authored
      With this patch the indexer is split into three parts (sketched just below):
      * The Scheduler: the part that decides when to do what.
      * The IndexingJob: a per-collection job that makes sure the collection is fully indexed.
      * The Index: a wrapper around the various indexes that encapsulates the database-related details.
      
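      A minimal, self-contained C++ sketch of how these three parts could relate. All
      names and signatures here are hypothetical stand-ins for illustration, not the
      actual akonadi-search classes:

      // Hypothetical sketch of the three-part split -- not the real
      // akonadi-search API. ItemId is a stand-in for Akonadi item ids.
      #include <cstddef>
      #include <cstdint>
      #include <set>
      #include <utility>
      #include <vector>

      using ItemId = std::int64_t;

      // The Index: wraps the various per-type indexes and hides the
      // database-related details behind one interface.
      class Index
      {
      public:
          void index(ItemId id) { m_indexed.insert(id); }
          bool contains(ItemId id) const { return m_indexed.count(id) > 0; }
          std::size_t size() const { return m_indexed.size(); }

      private:
          std::set<ItemId> m_indexed;
      };

      // The IndexingJob: per collection, makes sure the collection is fully
      // indexed. It compares the indexed items with the available items and
      // only goes looking for not-yet-indexed ones when the counts differ --
      // the recovery mechanism described below.
      class IndexingJob
      {
      public:
          IndexingJob(Index &index, std::vector<ItemId> available)
              : m_index(index), m_available(std::move(available)) {}

          void run()
          {
              if (m_index.size() == m_available.size())
                  return; // already fully indexed, nothing to recover
              for (ItemId id : m_available)
                  if (!m_index.contains(id))
                      m_index.index(id);
          }

      private:
          Index &m_index;
          std::vector<ItemId> m_available;
      };

      // The Scheduler: decides when to run which IndexingJob (the delay
      // and batching rules are described below).
      class Scheduler
      {
      public:
          explicit Scheduler(Index &index) : m_index(index) {}

          void scheduleCollection(std::vector<ItemId> available)
          {
              IndexingJob(m_index, std::move(available)).run();
          }

      private:
          Index &m_index;
      };
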
      The following has changed:
      * Bumping the indexing level now simply deletes the db (so we have a clean start).
      * The IndexingJob compares the indexed items with the available items and
      decides based on that whether it has to find not-yet-indexed items. This recovery
      mechanism is used when we fail to index all items.
      * The IndexingJob is only started if no new item arrived within the last 5s.
      This is supposed to delay the indexing of a new collection until *after* the sync
      has completed (see the sketch after this list).
      The effect is that during a sync, akonadi/mysql first consume all the
      resources (because we're pulling in data as fast as we can) while the indexer just idles.
      As soon as the sync is complete, the indexer kicks in and indexes all the data.
      In my experience this is less stressful on the system (and each process
      gets all of the system's resources).
      * During normal operation items are queued, and then indexed
      in a batch after a short delay.
      * The lastModified optimization has been removed, as it would need to be per-collection
      anyway to work as expected. However, I'm not sure it's required: even
      with a 200k folder, finding out which items to index is cheap compared to the indexing
      process itself (roughly 10s out of 1000s total). Also, the full sync is only triggered
      if we actually miss some updates; as long as we index everything we never
      even hit that codepath.
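
      As a rough illustration of the 5s quiet period and batching described above, a
      restartable single-shot QTimer is one way this could be implemented. The class
      below is a made-up sketch, not the actual akonadi-search scheduler:

      // Hypothetical sketch of delay-and-batch scheduling -- not the
      // real akonadi-search scheduler.
      #include <QDebug>
      #include <QObject>
      #include <QTimer>
      #include <QVector>
      #include <utility>

      class BatchScheduler : public QObject
      {
      public:
          explicit BatchScheduler(QObject *parent = nullptr)
              : QObject(parent)
          {
              m_quietPeriod.setSingleShot(true);
              m_quietPeriod.setInterval(5000); // 5s without new items
              connect(&m_quietPeriod, &QTimer::timeout,
                      this, [this]() { processBatch(); });
          }

          // Called for every new or changed item notification.
          void itemChanged(qint64 id)
          {
              m_queue.append(id);
              // Restarting the timer on every arrival means the batch only
              // runs once no item has arrived for the full interval, so
              // during a sync akonadi/mysql get the machine to themselves.
              m_quietPeriod.start();
          }

      private:
          void processBatch()
          {
              // Index the whole queue in one batch, then clear it.
              for (qint64 id : std::as_const(m_queue))
                  qDebug() << "indexing item" << id;
              m_queue.clear();
          }

          QTimer m_quietPeriod;
          QVector<qint64> m_queue;
      };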
      
      REVIEW: 118231