
Tigress8780 | 2 years ago

Speaking of how strange the current state of the Windows API is: there are actually three different APIs that can move the mouse cursor: SetCursorPos [1], SendInput [2], and mouse_event [3].

Applications may react differently to input events generated by these APIs. For example, in Windows 11's display settings, screens cannot be repositioned by dragging if the input events come from SetCursorPos, but dragging works fine with SendInput. Microsoft's own PowerToys uses a mixture of both under certain (complex) conditions, but I never found out the actual difference between them.
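For anyone curious, here is a minimal ctypes sketch of both calls (my own illustration, not taken from PowerToys; the 0..65535 coordinate normalization for MOUSEEVENTF_ABSOLUTE is per the SendInput documentation):

```python
import ctypes
import sys

MOUSEEVENTF_MOVE = 0x0001
MOUSEEVENTF_ABSOLUTE = 0x8000

def to_absolute(x, y, screen_w, screen_h):
    # With MOUSEEVENTF_ABSOLUTE set, SendInput expects coordinates
    # normalized to the range 0..65535 across the primary display.
    return x * 65535 // (screen_w - 1), y * 65535 // (screen_h - 1)

if sys.platform == "win32":
    user32 = ctypes.windll.user32

    # Option 1: teleport the cursor directly, no real input event.
    user32.SetCursorPos(100, 100)

    # Option 2: synthesize a mouse event like real hardware would.
    class MOUSEINPUT(ctypes.Structure):
        _fields_ = [("dx", ctypes.c_long), ("dy", ctypes.c_long),
                    ("mouseData", ctypes.c_ulong), ("dwFlags", ctypes.c_ulong),
                    ("time", ctypes.c_ulong), ("dwExtraInfo", ctypes.c_void_p)]

    class INPUT(ctypes.Structure):
        _fields_ = [("type", ctypes.c_ulong), ("mi", MOUSEINPUT)]

    INPUT_MOUSE = 0
    nx, ny = to_absolute(100, 100, 1920, 1080)
    inp = INPUT(INPUT_MOUSE, MOUSEINPUT(nx, ny, 0,
                MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE, 0, None))
    user32.SendInput(1, ctypes.byref(inp), ctypes.sizeof(INPUT))
```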

I was writing an application that sends mouse input from a Linux machine to a Windows one (similar to Synergy). I originally forwarded mouse movement events that were already accelerated (with user- or system-defined acceleration factors), and found that none of the three Windows APIs accepts relative movements without applying further acceleration (i.e. Windows will always accelerate them again, making the mouse hard to use). I ended up hooking directly into evdev to get raw mouse movements and letting Windows do the acceleration.
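The raw-reading part boils down to parsing fixed-size struct input_event records from a device node; a sketch, with the field layout taken from the kernel input docs (the device path is hypothetical):

```python
import struct

# struct input_event on 64-bit Linux: struct timeval (two longs),
# then __u16 type, __u16 code, __s32 value.
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)
EV_REL, REL_X, REL_Y = 0x02, 0x00, 0x01

def parse_rel(buf):
    # Yield unaccelerated (axis, delta) pairs from a raw evdev buffer.
    for off in range(0, len(buf) - EVENT_SIZE + 1, EVENT_SIZE):
        _sec, _usec, etype, code, value = struct.unpack_from(EVENT_FORMAT, buf, off)
        if etype == EV_REL and code in (REL_X, REL_Y):
            yield ("x" if code == REL_X else "y", value)

# Typical usage (needs read permission on the device node):
# with open("/dev/input/event3", "rb") as dev:
#     while True:
#         for axis, delta in parse_rel(dev.read(EVENT_SIZE)):
#             print(axis, delta)
```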

[1] https://learn.microsoft.com/en-us/windows/win32/api/winuser/... [2] https://learn.microsoft.com/en-us/windows/win32/api/winuser/... [3] https://learn.microsoft.com/en-us/windows/win32/api/winuser/...

jeroenhd|2 years ago

Only three ways to move the mouse seems relatively tame compared to the forest of Linux API calls when it comes to input (depends on X11/Wayland among other things). Also, your last link clearly states that the API call has been superseded, so if you follow the documentation you only have two options.

I don't think the difference between the two is all that strange. One sets the position of the cursor, the other interacts with the system like a normal mouse. The mouse and the cursor are separate things, and they're handled at different levels in the API stack, like XSendEvent versus sending data to libinput.

uep|2 years ago

I guess it depends on what level you're generating the events at. On Linux, it would be completely reasonable to inject the input events at the input device level.

https://www.kernel.org/doc/html/latest/input/event-codes.htm...

This is very straightforward (EV_REL) and requires a very small amount of code. There can be different problems to deal with when working at this level, but in my experience, everything works as expected with keyboards, mice, and gamepads.
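And the emit side really is small: you write the same fixed-size records to a uinput device, ending each move with a SYN_REPORT. A sketch of just the packing (the usual uinput device setup via ioctls is assumed and omitted):

```python
import struct

EVENT_FORMAT = "llHHi"  # struct input_event: timeval, type, code, value
EV_SYN, EV_REL = 0x00, 0x02
SYN_REPORT, REL_X, REL_Y = 0x00, 0x00, 0x01

def rel_move_packet(dx, dy, sec=0, usec=0):
    # One relative move: REL_X and REL_Y deltas, then a SYN_REPORT
    # to tell the kernel the event frame is complete.
    events = [(EV_REL, REL_X, dx), (EV_REL, REL_Y, dy), (EV_SYN, SYN_REPORT, 0)]
    return b"".join(struct.pack(EVENT_FORMAT, sec, usec, t, c, v)
                    for t, c, v in events)

# Usage, once a uinput fd has been configured (requires permissions):
# os.write(uinput_fd, rel_move_packet(10, -3))
```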

lights0123|2 years ago

What do remote desktop tools do? TeamViewer allows mouse movement from even a phone touchscreen just fine, and unless they're computing the inverse of Windows's mouse acceleration, they must have another solution.

tjoff|2 years ago

Some, at least, don't even bother to send relative mouse movements; what matters is where you click. So they disable the guest mouse pointer, rely on the host's instead, and only send positions every now and then, when you are actually doing something interactive.

Synergy-type applications don't have that freedom, because the host's mouse cursor doesn't extend to the other device's display.

Tigress8780|2 years ago

On desktop clients, they fetch the cursor bitmap and render it locally, then send absolute movements. For mobile clients, it is possible (and better) to send relative coordinates and let Windows accelerate them, since touch events are not accelerated. Applying this acceleration actually makes the client feel more like a laptop touchpad when using relative mode.
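In relative mode the client-side work reduces to differencing successive touch positions and forwarding the deltas; roughly (a hypothetical helper, not from any particular client):

```python
def touch_to_relative(positions):
    # Turn a stream of absolute touch positions into the relative
    # deltas a client would forward; the host then applies its own
    # pointer acceleration to them.
    deltas, last = [], None
    for x, y in positions:
        if last is not None:
            deltas.append((x - last[0], y - last[1]))
        last = (x, y)
    return deltas
```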

EsportToys|2 years ago

Actually, you're missing ClipCursor as another way of moving the cursor.

Also, mouse_event is just a wrapper around what's basically SendInput(mouse)
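Right; the mouse_event docs say it's superseded by SendInput, and its arguments map almost one-to-one onto a single mouse INPUT. A rough sketch of that translation, as a plain record rather than real ctypes structs:

```python
def mouse_event_to_input(dw_flags, dx, dy, dw_data=0, extra_info=0):
    # Approximately what the mouse_event wrapper does: build the one
    # MOUSEINPUT record that gets handed to SendInput.
    return {
        "type": 0,          # INPUT_MOUSE
        "dx": dx, "dy": dy,
        "mouseData": dw_data,
        "dwFlags": dw_flags,
        "time": 0,          # 0 = let the system timestamp the event
        "dwExtraInfo": extra_info,
    }
```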