Convergence!
I think we are at the point where we can start considering how we want to design the experience when phones are connected to displays (whether by cable or wirelessly). I am purposefully opening this discussion up to be broad, to allow ideas to be thrown around. I think we need to answer the following questions:
Do we want convergence at all?
Is it worth spending all of this time to attempt to bring a Plasma Desktop experience to external screens? Is it enough to just have a single maximized window on the connected display?
What are the use cases we should be targeting?
What do we envision users using this for? For example, we could let users have a full desktop session on the other display, but perhaps another mode would be to display a single app (ex. play a video or show a presentation). In that case, they probably will not have access to a mouse/keyboard, and will need to be able to control everything solely from the phone. Would it also be valuable to show a Plasma Bigscreen experience on a connected display?
What should the phone be displaying?
While connected to a display, what kind of UI/UX should the phone show? Should users be able to use their phone as normal, or should it change into a mode for controlling the external screen (ex. a touchpad)?
This could be a bit more of a technical topic, depending on how we implement it (it may be easier to switch both screens to a desktop mode, rather than having a mobile experience on one screen and a desktop one on the other).
What happens immediately after a screen is connected?
Relevant issue: #54
What should happen when a screen is connected? This will depend on what kind of use cases we target.