play_ac's comments
play_ac | 2 years ago | on: GTK: Introducing Graphics Offload
This is not something the X maintainers can say. They can encourage people not to do it, but if they stop maintaining that feature then the complaints start to roll in, because someone somewhere was using it. If you think this situation is awful then yes, you're starting to get it: X is in a bad spot where these broken, insecure features are holding everything else back, and will continue to do so as long as people depend on them. At best the maintainers can disable them by default and make them hard to accidentally re-enable, which is what they've already done.
>That's not something a normal desktop install does.
Yes, most normal desktop installs don't use X11 in any capacity. They use Microsoft Windows.
>It's not actually a problem that my applications are powerful and can do what I want them to do.
I notice you didn't actually respond to my comment about stopping using passwords and private keys and running everything as root. Because I'd bet even you draw a line somewhere, in a place where you think it's a risk to give an application too much power.
>It is a problem that other locked down OSes like Macs and smartphone systems are not in the user's control and programs cannot do many things by design.
This has absolutely nothing to do with Linux, and it isn't true of those systems either. It's not actually a problem there. If you have root on the system then you are in control and can do whatever you want anyway. The purpose of setting security boundaries and not running everything as root is that not everything needs access to everything else all the time. The security model you're suggesting became obsolete by the mid 1990s.
And let me say this again so it's perfectly clear. When you use X11 there is effectively no security boundary between any X11 clients. So if you start up your root terminal, or you use sudo or anything else like that, then any other X11 client on the system also gets root. This is unacceptable, and I can't believe I still have to continually point this out to long-time Linux users who should be technical enough to understand. It doesn't even matter if you personally think it's fine to run everything as root: maybe you do. But as a user you should have enough understanding of the system to know that this absolutely is not OK for lots of other users, and it's simply not appropriate to be shipped as the default in the year 2023.
These are not fantasy issues, these are actual issues that the underlying system was purposely designed to fix. X11 pokes a huge gaping hole in it.
>sharing keyboard/mouse with synergy/barrier/etc is secure.
No. On a typical X11 install it's not, because it relies on insecure APIs.
play_ac | 2 years ago | on: Binance founder Changpeng Zhao agrees to step down, plead guilty
But the whole thing is a distraction anyway. The majority of transactions happening off-chain means that Bitcoin is an utter failure at everything it ever set out to accomplish.
play_ac | 2 years ago | on: GTK: Introducing Graphics Offload
Which people? Every recent study I've seen shows VSCode as the most popular code editor by a large margin. Maybe latency isn't as important as you think?
>Are you saying that latency in the order of 250ms when editing text is unnoticeable?
No. Sorry for the info dump here, but I'm going to make this absolutely clear so there's no confusion. The latency of the entire system is the latency of the human operator plus the latency of the computer. My statement is that, assuming you have a magical computer that computes frames and displays pixels faster than the speed of light, the absolute lower bound of this system for the average person is 250ms. You only see lower reaction-time averages in extreme cases like pro athletes: not, in other words, programmers, who spend far more time thinking about problems and sitting in meetings than they do typing.
Now let's go back to reality: with a standard 60Hz monitor, the theoretical latency added by display synchronization is at most about 16.67ms. That's the theoretical MAXIMUM, assuming the software is fully optimized and renders as fast as possible, your OS has realtime guarantees so it doesn't preempt the rendering thread, and the display hardware adds no latency of its own. So at most, you could reduce the total system latency by about 6% by optimizing the software. You can't save any more than that.
However, none of those things are true in practice. Making the renderer use damage tracking everywhere significantly complicates the code, and may not even be usable in situations like syntax highlighting, where the entire document state may need to be recomputed after typing a single character. All PC operating systems can add significant, unpredictable lag in the driver. All display hardware using a scanline-based protocol still has significant vblank periods. Adding these up, you may sometimes be able to measure around 1ms of savings this way, in exchange for massively complicating your renderer, and with a high standard deviation. That means you will likely perceive the total latency as HIGHER because of all the stuttering. This is less than 1% of the total latency in the system, and it's not even going to be consistent or perceptible.
Now instead consider you've got a 360Hz monitor. The theoretical maximum you can save here is about 2.78ms. This can give you a CONSISTENT 5% latency reduction against the old monitor as long as the software can keep up with it. Optimizing your software for this improves it in every other situation too, versus the other solution which could make it worse. If it doesn't make it worse, it could only save another theoretical 1% and ONLY in a badly perceptible way. It just doesn't make sense to optimize for this less than 1% when it's mostly just caused by the hardware limitations and nobody actually cares about it and they're happy to use VSCode anyway without all this.
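The arithmetic above can be checked back-of-the-envelope. All values here are the assumptions stated in the comment (a 250ms human floor, ideal vsync behavior), not measurements:

```python
# Back-of-the-envelope check of the latency figures above.
# HUMAN_MS is the comment's assumed average human reaction time.
HUMAN_MS = 250.0

def frame_period_ms(refresh_hz: float) -> float:
    """Worst-case latency added by display synchronization at a refresh rate."""
    return 1000.0 / refresh_hz

p60 = frame_period_ms(60)    # ~16.67 ms at 60Hz
p360 = frame_period_ms(360)  # ~2.78 ms at 360Hz

# Best case for software optimization: the whole 60Hz frame period disappears.
max_software_saving = p60 / (HUMAN_MS + p60)   # ~6% of total system latency

# Upgrading the hardware from 60Hz to 360Hz saves the difference instead.
hw_saving = (p60 - p360) / (HUMAN_MS + p60)    # ~5%, but consistently

print(f"{p60:.2f}ms {p360:.2f}ms {max_software_saving:.1%} {hw_saving:.1%}")
```

This reproduces the figures in the argument: software alone can shave at most about 6% off total latency, while the faster panel gives a consistent ~5%.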
So again, you can avoid these accusations of "utter nonsense" when it's clear you're arguing against something that I never said.
>The perceived latency when editing text is between pressing a key and your brain telling you "my eyes have detected a change on the screen."
Your brain needs to actually process what was typed. Prediction isn't helping you type at all, if it did then the latency wouldn't matter anyway. If you're not just writing boilerplate code then you may have to stop to think many many times while you're coding too.
play_ac | 2 years ago | on: GTK: Introducing Graphics Offload
Here are six CVEs just from last month. Check the mailing lists and you'll see many more of these going back for years and years.
https://lists.x.org/archives/xorg/2023-October/061506.html
https://lists.x.org/archives/xorg/2023-October/061514.html
And before you say this is not what you meant: the X server and X client libraries do very little anymore besides parsing untrusted input and passing it somewhere else. That's their main job, and they're terrible at it. And because it's X, that input can come in over the network too, so every ordinary memory bug can also be an RCE. This is probably the single biggest attack vector on a desktop system aside from the toolkit. It's exactly the wrong thing to be granting access to every input on the system.
This is not just my personal opinion or me giving anecdotes either, this is paraphrasing what I've heard X developers say after many years of patching these bugs. But that's not even the whole problem as I'll explain shortly.
>But for actual computers you control it just isn't (a problem). Wayland for "security" is cargo culting smartphone user problems. It's not actually a real issue.
Yes it is a problem and no it's not cargo culting. Practically speaking the X11 security model means every X client gets access to everything including all your passwords (and the root password) as you type them, and subsequently lets every X client spawn arbitrary root processes and get access to your whole filesystem including your private keys and insert kernel modules or do whatever. If you actually think this "isn't a real issue" then you should just stop using passwords, stop protecting your private keys, run every program as root, and disable memory protection: because that's what this actually means in practice. No I'm not exaggerating. The security model of X11 has no idea about client boundaries at all. This is completely unacceptable on any other OS but for some reason it's become a meme to say that only smartphones need to care about this. Really? Come on.
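A toy model of the claim above (illustrative only; the class and method names are invented, this is not Xlib code): under the core X11 protocol, global input state is readable by any connected client, so a keystroke typed into one window is observable by every other client, e.g. via XQueryKeymap or the XRecord extension.

```python
# Toy model of core-X11 input delivery. NOT the real protocol:
# it only illustrates the absence of a per-client security boundary.
class ToyX11Server:
    def __init__(self):
        self.clients = []

    def connect(self, client_name: str) -> str:
        # Any client that can connect gets full access; there is no
        # per-client isolation to negotiate.
        self.clients.append(client_name)
        return client_name

    def key_press(self, key: str) -> dict:
        # Modeled after XQueryKeymap/XRecord: the global keyboard state is
        # visible to every client, regardless of which window has focus.
        return {client: key for client in self.clients}

server = ToyX11Server()
server.connect("root-terminal")
server.connect("untrusted-game")
seen = server.key_press("s3cret")
# The untrusted client observes the keystroke typed into the root terminal.
print(seen["untrusted-game"])
```

This is the whole problem in miniature: the "security boundary" between the trusted and untrusted client simply doesn't exist in the model.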
>I use the keyboard/mouse sharing in X11 (via synergy) and I have for 20 years. It is vitally important to my workflow. It works on dozens of different OSes including linux. But not the waylands linuxes. Any graphical environment that can't do this is useless to me.
X11 can't do it securely so I would say that's as useless as not implementing the feature, if you have to compromise your security in order to get it.
The feature will be implemented in Wayland eventually when the design for a secure API is finished. There are people working on it now. In comparison, X11 is probably never going to gain a secure way to do that.
play_ac | 2 years ago | on: GTK: Introducing Graphics Offload
No? Where exactly do you think I've theorized about the existence of a conspiracy? Because I've actually said the exact opposite: there isn't a conspiracy and no one is cooperating at all. There's no evil group of developers secretly planning to sabotage everything. It's just the usual bad communication and planning that happens with a distributed team.
>Those diverse desktop environments contributed hugely to GTK, GNOME just didn't use their work
Can you name what any of these contributions were? Because I've never seen them. I've seen contributions here and there, lots of minor bug fixes, but nothing major.
>Nobody is going to fully "kiss the ring" unless they get something out of it
Avoid this rhetoric please. These open source projects are a volunteer collaboration. No one's kissing any rings or trying to get something out of the maintainers, other than the usual: everyone helps each other write and maintain the code.
>but they could have done a lot better than fighting third-parties tooth-and-nail. GNOME should be a proud project that leads the GNU movement
I really don't know what you're talking about here, but disagreeing about technical things isn't "fighting tooth-and-nail". That's a normal part of any project.
Personally I don't think anyone should care about leading the GNU movement, that's been plagued by petty infighting and drama since the very beginning.
play_ac | 2 years ago | on: GTK: Introducing Graphics Offload
I've seen no game consoles that allow you to turn vsync off, because it would be awful. No idea why this placebo persists in PC gaming.