Incubation Request
KDE Incubator checklist
- Incubation Sponsor is..
- E-mailed kde-devel@ and other relevant lists on 2024-10-19
- Compliance with the [https://manifesto.kde.org KDE Manifesto]
- Governance similar to the other KDE projects
- Clear product vision
- Healthy team (healthy proportion of volunteers, inclusive towards new contributors, ideally more than one developer)
- Uses English for code and communication
- Continuity agreement must be in place with KDE e.V. for domains and trademarks if the authors disappear
- Recommended to attend Akademy or other local KDE events
- Code in KDE Invent
- Passing CI job for reuse linting
Background of KoolIntelligence
KDE Plasma is a well-rounded desktop environment. However, beginner users coming from other operating systems face a steep learning curve, as Linux differs from Windows and macOS. In addition, those operating systems ship virtual assistants that help users with the tasks they want to perform; KDE Plasma has no such functionality. This project is an attempt to replicate those features, assisting users who are new to Linux and/or KDE Plasma and helping them become comfortable. We mainly take inspiration from Siri, Apple Intelligence, Google Assistant, Copilot, and similar tools.
Description of project
KoolIntelligence is a project that aims to be a personal, private, and secure virtual assistant for KDE Plasma users. It aspires to help users navigate the KDE Plasma interface, so that beginners to KDE Plasma and Linux can become more familiar with it. It is also an attempt to replicate the virtual assistant features found on other platforms (e.g. Siri on macOS and Cortana/Copilot on Windows).
We plan for this to be a standalone, optional application that users can install. All models will run locally, ensuring that no data ever leaves the user's system.
This application will have the following features:
- A locally running LLM powered by ollama: users will have the freedom to choose a specific LLM depending on their preferences and system resources.
- Ability to automatically send screenshots to the LLM: users can disable this, but when it is enabled, the LLM can access the user's screen whenever it needs to assess the user's situation more accurately.
- Ability to access files: using the Baloo API, we will search the file index and extract relevant information to provide to the LLM, increasing the helpfulness of the application.
- Dictation: users will be able to dictate to the application. We plan to achieve this by using whisper.cpp to provide accurate, real-time dictation.
- TTS (text to speech): the application will also be able to speak its responses out loud if the user desires.
- Terminal integration: the LLM will be able to perform actions on the system via terminal commands. There will be an integrated terminal in which the LLM or the user may run commands. Input from the LLM will be parsed by traditional algorithms to check that it is not doing anything malicious or unwanted (rm, kill, removing apps, etc.). If it is, a warning will be shown and the user will be asked to confirm the action the model wants to take. sudo access will not be provided to the LLM: if it wants root access, it will have to ask the user to manually type in their password for each command.
While we will use LLMs, we understand that they hallucinate and will not always provide accurate information. By giving the LLM as much context as we can, we hope to improve its accuracy, but it will still sometimes be wrong, and we plan to communicate this to users very clearly.
As we will be running multiple large deep learning models, the recommended specs for this application will be higher than usual. However, we intend to have it run on as many systems as possible. Our initial target system will be:
- CPU: 4-core x86-64 chip
- GPU: Nvidia GeForce 1650Ti
- Memory: 8 GB
This is a very popular configuration for many laptops and desktops, as suggested by the Steam hardware survey. If possible, we will try to bring these system requirements down.
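A startup check against the baseline above might be sketched as follows. The thresholds mirror the target system; the values are passed in as arguments so the check itself stays platform-independent (how they are detected would differ per system).

```python
# Baseline from the target system above: 4 CPU cores and 8 GB of RAM.
MIN_CORES = 4
MIN_RAM_GB = 8

def meets_minimum_spec(cores: int, ram_gb: float) -> bool:
    """Return True if the machine meets the recommended baseline."""
    return cores >= MIN_CORES and ram_gb >= MIN_RAM_GB

def detected_spec() -> tuple[int, float]:
    """Best-effort detection on Linux; callers may override with known values."""
    import os
    cores = os.cpu_count() or 1
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
    return cores, ram_gb
```

On machines below the baseline the application could still run, but would warn the user to pick a smaller model.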
We plan to use the following third-party projects to build this project:
- whisper.cpp
- ollama
- bark.cpp
List of people committing to the project
- Rahul Satish Vadhyar
- Abdul Amaan
- Shreya Shastry
- Shivanshi Singh