
KWin Scripting: how to detect mouse button clicks?

motd (Registered Member)
I would simply like to right-click (RMB) on the right screen edge to go to the next desktop, but I'm not sure mouse button detection is even available in the scripting API. Looking through the spec at http://techbase.kde.org/Development/Tut ... ng/API_4.9, I can't see anything that lets me detect and trap mouse button clicks.

- Would something like this be possible at all? (A sketch of the closest thing I found in the API is below.)
- If so, could anyone point me to some similar example KWin scripting code?
- If not, would anyone care to point me to the right area in the KWin source to extend active edge detection to include mouse buttons?
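
For reference, the closest scriptable trigger I can find in the documented API is a registered keyboard shortcut, not a mouse button. A minimal sketch (the shortcut title, description and key sequence are arbitrary placeholders):

Code: Select all
// Minimal KWin script sketch (4.9-era API): register a global shortcut
// that switches to the next desktop. Nothing comparable exists for raw
// mouse-button events on screen edges.
registerShortcut("Next Desktop (script)", "Switch to the next desktop",
                 "Meta+Right", function () {
    var next = workspace.currentDesktop + 1;
    // Wrap around to the first desktop after the last one.
    if (next > workspace.desktops) {
        next = 1;
    }
    workspace.currentDesktop = next;
});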
mgraesslin (KDE Developer)
No, that is not possible, and I doubt it will ever be possible. The relevant code would be in screenedge.cpp - do not even think about changing anything there right now, as the code is scheduled for a complete rewrite pretty soon :-) (It's using Xlib, needs to be ported to XCB, and we need adjustments to get multi-screen handling right after X changed behaviour.)

Also, I'm very sceptical whether I would accept a patch to add mouse buttons - at least not with the current code base. Let's wait and see how it looks once it's rewritten. Feel free to follow kwin@kde.org - the review requests will appear there. I plan to work on this in the first or second week of January.
motd (Registered Member)
Thanks for clarifying. Better control of edge actions is something I've wanted for a decade, and pekwm comes fairly close, so if I have to maintain a patched KWin just to get what I want, then so be it - but I was really hoping the scripting API would do the trick.

If I may gripe a bit: I find the current ElectricBorders activation on mouse movement alone completely useless (to me). I run my primary apps fully maximised, without decorations, on their own desktops, so firing an action simply because the mouse strays too close to an edge more often than not does something unexpected that I did not want to happen - pasting text near the bottom of konsole, in particular, makes auto-hide on the bottom panel really annoying.

I would much rather explicitly RMB-click on the bottom edge to raise that panel, and RMB-click the bottom edge again to dismiss it; ditto for a top panel, and the same on the left and right edges to go to the previous or next desktop/workspace (or activity). In other words, a very explicit and deliberate action that is not misfired by regular mouse movements or left-button clicks. I also think the four precious corner hotspots should let me activate ANY action, including any executable or DBus command, in combination with any meta key, for multiple action possibilities in each corner.
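
To make the toggle idea concrete, here is a sketch that uses a registered shortcut as a stand-in trigger, since no RMB-on-edge hook exists; the DBus service, path, interface and method names are placeholders for whatever a panel might expose:

Code: Select all
// Sketch of the explicit toggle behaviour I want. registerShortcut()
// and callDBus() are real scripting API; the trigger I actually want
// (RMB on the bottom edge) does not exist, and the DBus target below
// is a placeholder, not a real panel interface.
var panelShown = false;

registerShortcut("Toggle Bottom Panel", "Toggle the bottom panel",
                 "Meta+B", function () {
    panelShown = !panelShown;
    callDBus("org.example.Panel", "/Panel", "org.example.Panel",
             panelShown ? "show" : "hide");
});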

Another point: activation on mouse hover alone cannot be translated to a touch device, whereas an explicit RMB click on an edge can be emulated by a double-tap near that edge or corner. There is a much better chance of reusing currently available maximised, undecorated apps on a tablet, without having to rewrite them, while keeping much the same interaction rules - except that an RMB on a mouse-driven desktop translates to a double-tap on a touch screen. Meta-key multiple-choice double-taps can be handled by a popup selector on a touch device, if and only if an edge or corner has multiple options.
mgraesslin (KDE Developer)
Some thoughts on what you wrote:

1. Plasma's auto-hidden panel is not controlled by KWin - no matter how we extend KWin, it would not help you in this specific case.
2. Screen edges are completely useless for touch events, and that's why we don't use them in Plasma Active. It's easily explained by Fitts's law: with a mouse, a screen edge triggered by movement is the easiest possible target (an edge has effectively infinite width), but for touch the screen edge becomes the most difficult target to hit - even more difficult than a mouse click on it, which by Fitts's law is itself close to impossible (very precise aiming at a one-pixel target with a one-pixel input device). Touch with e.g. a thumb means an input device much larger than the hit target, which, combined with badly calibrated touch screens, is a very bad idea. That's why e.g. the N9 uses swiping over the screen edge, with the touchable area being larger than the visible screen area.
motd (Registered Member)
1. My panel example just illustrates the kind of behaviour I'd like to take advantage of. I can't see why the ability to fire any DBus or executable command could not either activate a Plasma panel directly or, at worst, emulate the same behaviour via a third-party panel system. The point is flexibility: letting me trigger *anything* from a variety of mouse-and-keyboard edge and corner events gives the end user a huge range of options that core developers don't have to explicitly program for ahead of time.
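
As a sketch of that flexibility: KWin scripts can already fire arbitrary DBus calls via callDBus(), so edge and corner actions could be plain user-supplied data. All the service, path, interface and method strings below are placeholders:

Code: Select all
// Sketch: per-corner actions as user-supplied data, each one an
// arbitrary DBus call. Every string here is a placeholder - the point
// is that users fill these in, not core developers.
var cornerActions = {
    "top-left":     ["org.example.Launcher", "/Launcher",
                     "org.example.Launcher", "display"],
    "bottom-right": ["org.example.Panel", "/Panel",
                     "org.example.Panel", "toggle"]
};

function fireCorner(corner) {
    var action = cornerActions[corner];
    if (action) {
        callDBus(action[0], action[1], action[2], action[3]);
    }
}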

2. My lack of experience with touch devices limits my view, but I cannot see why a hotspot area on the right-hand edge (roughly 5% of the screen width by 10% of the screen height, flush against the right edge and vertically centred) couldn't trap a double-tap and treat it the same as an RMB click on a desktop. The main point is enabling current apps to be used on devices like a Nexus 7" or 10". Ultra-small mobile phone screens definitely require a different strategy, but a 7/10" tablet or a 20/24" touch screen could reuse current apps. If so, the overall effort of adding some extra flexibility for controlling windows and desktops could be far less than rewriting 50% of every application to run on touch systems.
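
To sketch the geometry I have in mind (only the hit-test arithmetic is shown; the tap coordinates would have to come from whatever touch layer exists, and the 5%/10% proportions are my rough guesses from above):

Code: Select all
// Hit-test for a right-edge hotspot roughly 5% of the screen width by
// 10% of the screen height, flush against the right edge and
// vertically centred. workspace.displayWidth/displayHeight are real
// properties in the 4.9 scripting API.
function inRightEdgeHotspot(x, y) {
    var w = workspace.displayWidth;
    var h = workspace.displayHeight;
    var hotW = w * 0.05;
    var hotH = h * 0.10;
    return x >= w - hotW &&
           y >= (h - hotH) / 2 &&
           y <= (h + hotH) / 2;
}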


Bookmarks



Who is online

Registered users: bartoloni, Bing [Bot], Evergrowing, Google [Bot], ourcraft