> Because the sampling rate is fast enough to misinterpret contact bounce as keystrokes, the keyboard's control processor performs so-called debouncing of the signals, aggregating them across time to produce a reliable output. Such filtering introduces additional delay, which varies depending on the microcontroller firmware. As manufacturers generally don't disclose their firmware internals, let's consider typical debouncing algorithms and assume that filtering adds ~7 ms of delay.
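The "aggregating across time" filtering the quoted passage describes is commonly implemented as a counter that requires a raw sample to stay stable for several consecutive scans before the reported state changes. A minimal sketch, assuming a 1 kHz scan and 7 required scans (so roughly the ~7 ms delay the article assumes; both numbers are illustrative, not from any particular firmware):

```python
# Conventional counting debouncer: the reported state changes only after the
# raw input has held a new value for STABLE_SCANS consecutive samples.
STABLE_SCANS = 7  # ~7 ms at a 1 kHz scan rate (assumed figures)

class CountingDebouncer:
    def __init__(self):
        self.reported = False  # debounced state exposed to the host
        self.count = 0         # consecutive samples disagreeing with it

    def sample(self, raw):
        """Feed one raw sample; returns the current debounced state."""
        if raw == self.reported:
            self.count = 0            # stable: nothing pending
        else:
            self.count += 1           # candidate transition accumulating
            if self.count >= STABLE_SCANS:
                self.reported = raw   # held long enough: accept it
                self.count = 0
        return self.reported
```

A short bounce spike resets nothing permanent: a sample that flips back before the counter fills is simply discarded, which is exactly why the filter adds the full hold-off time to every legitimate edge.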
Since it's debouncing rather than outright spurious activation, it's possible to latch/register/sample on the initial switch closure and only delay recognition of the subsequent release. This seems like a good idea since the press is almost always where a user's attention is focused, and the timing of the release being important is usually associated with holding the key much longer than the bounce period (e.g. modifier keys, cursor/player movement). Anyone know if keyboards are doing this?
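One way to get the zero-press-latency behavior this comment suggests is an "eager" debouncer: report an edge the instant it is seen, then ignore all further transitions for a hold-off window longer than the worst-case bounce. This is a sketch of that idea, not a description of any shipping firmware; the 7 ms window is an assumption.

```python
DEBOUNCE_MS = 7  # assumed hold-off, longer than the ~5 ms worst-case bounce

class EagerDebouncer:
    def __init__(self):
        self.state = False        # last reported (debounced) state
        self.last_change = -1e9   # timestamp of last reported transition

    def sample(self, raw, now_ms):
        """Feed one timestamped raw sample; returns 'press', 'release', or None."""
        if raw != self.state and now_ms - self.last_change >= DEBOUNCE_MS:
            # Report the edge immediately; any bounces that follow within
            # DEBOUNCE_MS are ignored, so the press itself adds zero lag.
            self.state = raw
            self.last_change = now_ms
            return "press" if raw else "release"
        return None
```

Note this variant is eager on both edges, which also keeps releases fast; the trade-off is that a genuinely spurious glitch gets reported as a real (if brief) keypress.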
I spent a few years working on embedded keypad products. Often there is a hardware keypad scanner that takes care of many of the details, not allowing the programmer this level of control. Your driver configures the rows and columns (inputs and outputs), the scan rate, debounce time, etc., and the hardware will monitor for edge transitions, then perform a scan, capture the result in a register, and assert an interrupt. It will likely have a double (or more) buffer mechanism to prevent dropped events due to software latency.
In low-power/mobile situations this allows the host CPU to sleep as much as possible, and even the scanner, which walks the rows (or columns), need not be running all the time.
While many SoCs have an integrated keypad interface, there are also keypad controller ICs (some GPIO expanders have the ability to double as keypad controllers) that free the programmer from having to manually implement the scanner logic. I suspect these types of chips would be found in a typical USB keyboard. Software may still have to deal with additional debouncing and ghost keys depending on the complexity of the IC.
Edit: I suppose one could configure the keypad controller to apply no debouncing and then perform a software debounce the way you describe.
This was my thought too, but most keyboards have a matrix and use chips with fewer inputs, which makes the problem harder. There are theoretical ways to do it better (you would need extra state for every button), but most firmware I have seen just samples at a rate such that the period is longer than the longest bounce.
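The simple approach this comment describes can be sketched as a full-matrix scan at a fixed period longer than the worst-case bounce, accepting a key transition only when two consecutive scans agree. The 8x16 matrix and 6 ms period below are hypothetical, chosen only for illustration:

```python
SCAN_PERIOD_MS = 6  # assumed: > ~5 ms worst-case bounce, so one period settles it
ROWS, COLS = 8, 16  # hypothetical matrix: 128 keys on 24 pins

def scan_step(read_row, prev, reported):
    """One scan pass. read_row(r) returns a list of COLS booleans.
    Mutates `reported`; returns (new_samples, events) with (row, col, pressed)."""
    events = []
    samples = [read_row(r) for r in range(ROWS)]
    for r in range(ROWS):
        for c in range(COLS):
            # Two consecutive identical samples that differ from the last
            # reported state count as a real transition, since a bounce
            # cannot outlast a full scan period.
            if samples[r][c] == prev[r][c] and samples[r][c] != reported[r][c]:
                reported[r][c] = samples[r][c]
                events.append((r, c, samples[r][c]))
    return samples, events
```

The per-key "extra state" the comment mentions is exactly the `prev` and `reported` matrices: two bits per key, which is why cheap firmware often skips it and leans on the slow scan period alone.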
> Regardless of keyboard type, key switches are mechanically imperfect and are subject to contact bounce — instead of a clean transition, the switch rapidly bounces between on and off states several times before settling. Bounce time depends on switch technology; for example, for Cherry MX switches the bounce time is claimed to be less than 5 ms. Though the exact probability distribution is unknown, based on related empirical data we can assume that the average bounce time is about 1.5 ms.
This is amazing. I thought I knew everything about mech keyboards, but this opens a new perspective.
Cherry MX switches are much bouncier than some types of switches. (This explains why some keyboards with MX switches end up dropping or duplicating keypresses when their controller uses a poor debouncing algorithm.)
For instance old “complicated” Alps switches (circa 1990) have an extremely clean switch from off to on, with almost no bouncing.
That's the main draw I have to try out the BB KeyOne - I can still type faster on my physical keyboard on my Dell Venue Pro than I can my Nexus 6, but if I could swipe on my BBK1, who knows?
Rather than performing composition as late as possible, which would be beneficial for latency, Windows performs composition as early as possible, at the start of the frame. This introduces a completely unnecessary 16.67 ms of extra latency into everything you do. There is no supported way to disable the compositor on Windows 8-10 (though the article links to a scary-looking hack that apparently works in Windows 8), so you're stuck with this.
I really hope this situation will be improved with future updates to Windows 10. Microsoft are still making improvements to the compositor; for example, window resizing is much smoother in the Creators Update. But as far as I can tell, the one-frame delay still exists.
Anybody else seen a massive increase in the latency of graphical Emacs on recent versions of OS X? Please don't reply if you only ever use Emacs inside a terminal emulator: I'm very much attached to my mouse.
The latency problem began when I upgraded from 10.8 to 10.11. (I use Mitsuharu Emacs, but would be stunned if the problem were absent from plain-FSF Emacs.)
P.S. Has anyone gotten the typometer application described in the OP to work with a graphical emacs on OS X? Whenever I open the application, my Emacs just freezes till I close the application.
Interestingly, this topic is quite old. The original mainframe systems had channel controllers for I/O which did a lot of processing locally, including echoing and even local editing, freeing the CPU for "real" work. This approach was thrown out when minicomputers arrived; this is why the Unix I/O system looks the way it does and why C, in a then-noteworthy departure from most languages of its time, didn't include I/O operators.
Even in the pre-TCP ARPANET, network latency on interactive connections was an important topic (this is when the main backbone was a single 56K line, IIRC). The MIT SUPDUP protocol (a "Super Duper" remote access alternative to Telnet) included a local editing protocol for connections to remote machines. Even non-line-mode applications could interact with it and so essentially run part of the interface remotely, all in the interest of zero latency.
This is very cool. I primarily write in Word and TextMate. Word is noticeably slower (greater latency) in many if not most circumstances. As file size grows, in particular, Word seems to slow more.
This is especially noticeable when adding text in the middle of a longer document. It seems as if it is laying out many subsequent pages in a blocking fashion, even if those pages are not visible.
I'd think that would have more to do with painting the screen than actually processing the keystroke. A few ms difference wouldn't be noticeably slower unless you were watching for it specifically.
This timing is pretty coincidental for me. I never really thought all that much about refresh latency, etc. (which I realize is weird since I do play a fair amount of games), but I rebuilt my home office a few weeks ago, and for the time being all I have to connect my 2014 MBP to a 4K monitor is an HDMI cable... and the 2014 MBP can only do 4K@30Hz over HDMI. Let's just say the keyboard/mouse lag is... infuriating. And it gets WAY worse the less you scale the external monitor.
I found this too when I tried running my PC with a 30Hz display. I was surprised how bad it was. Windows 9x's default mouse sampling rate was 35Hz, and that was perfectly tolerable. 25Hz/30Hz games are playable. 60Hz will be better, but there's no reason a 30Hz monitor has to be an absolute outright disaster. And yet...
Obviously over time a bunch of extra frames of latency have snuck in, and at a refresh rate of 60Hz it's just not noticeable enough for enough people to have proven worth fixing.
(I've read of a lot of people finding 60Hz monitors more annoying to use after they've spent some time with 144Hz. So roll on 144+Hz... perhaps either we'll all upgrade, and the cycle will repeat, or our eyes will be retrained and we'll start to demand more from our existing equipment.)
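The arithmetic behind the 30Hz-vs-60Hz-vs-144Hz discussion above is simple: each buffered frame in the pipeline costs one full refresh period, so the same number of frames hurts far more at 30 Hz. A quick sketch (the "3 frames" figure is an assumption for illustration, not a measured pipeline depth):

```python
def pipeline_latency_ms(refresh_hz, frames):
    """Latency added by `frames` full refresh periods at a given refresh rate."""
    return frames * 1000.0 / refresh_hz

for hz in (30, 60, 144):
    print(f"{hz:>3} Hz: 1 frame = {pipeline_latency_ms(hz, 1):5.1f} ms, "
          f"3 frames = {pipeline_latency_ms(hz, 3):5.1f} ms")
```

At 30 Hz, three buffered frames alone are 100 ms, which is why a pipeline that feels merely sluggish at 60 Hz becomes infuriating at 30 Hz, and why the same pipeline at 144 Hz drops under 21 ms.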
Have you tried mosh [1]? And if the latency is too high even for mosh, or a firewall prevents using it, I have found that using an Emacs shell buffer and lsync to transfer commands and files on pressing Enter or on save made remote development possible over links with over 1 s of latency.
I got it to work after a few tries. Used it with VSCode, terminal, iTerm, Hyper Terminal, and TextEdit.
It took a few trials, and I had to disable transparency. I think it also doesn't like blinking cursors, and if (...) is turned into its own glyph, you should start the line with some dots of your own to prevent that.
* Use a responsive editor (makes the most difference).
* Use a low-latency keyboard, if possible.
* Choose programs that add global keyboard hooks wisely.
* Turn off unnecessary “image enhancers” in your monitor.
* Enable a stacking window manager in your OS (e.g. in Windows 7, 8).
Tested editors: average latency (ms)
IDEA without the zero-latency mod is 198.8 ms.
Update: on my system Emacs has an average of 6 ms vs VSCode's 17 ms.
https://www.geek.com/chips/john-carmack-explains-why-its-fas...
My $20 raspberry pi with a cheap USB keyboard has neither problem though...
It seems difficult to make the screen worse than the previous model.
https://webcache.googleusercontent.com/search?q=cache:l14kPB...
https://web.archive.org/web/20170726220513/https://pavelfati...
archive.is:
http://archive.is/JYKGR
I thought it was a fascinating read.
Yep. There is one built-in frame of latency in the Windows composition engine. It's even documented here: https://msdn.microsoft.com/en-us/library/windows/desktop/hh4...
It made the thing a pleasure to use. I didn't hate it before; I just didn't know what I was missing.
[1] - https://mosh.org
1. mosh instead of ssh
2. run the editor locally and edit files remotely (either using the editor's built-in support for that or something like sshfs)
Does anyone know if there is an existing tool for these kinds of measurements on a Mac?
Doing a few google searches mostly turns up this article. But maybe my googling skills are weak.