play_ac's comments

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

>Uh... don't expose your X.org server to the internet naked.

This is not something the X maintainers can say. They can encourage people not to do it, but if they stop maintaining that feature then the complaints start to roll in because someone somewhere was using it. If you think this situation is awful then yes, you're starting to get it: X is in a bad spot where these broken, insecure features are holding everything else back, and will continue to do so as long as people depend on them. At best the maintainers can disable them by default and make them hard to accidentally re-enable, which is what they've already done.

>That's not something a normal desktop install does.

Yes, most normal desktop installs don't use X11 in any capacity. They use Microsoft Windows.

>It's not actually a problem that my applications are powerful and can do what I want them to do.

I notice you didn't actually respond to my comment about stopping using passwords and private keys and running everything as root. Because I'd bet even you draw a line somewhere, in a place where you think it's a risk to give an application too much power.

>It is a problem that other locked down OSes like Macs and smartphone systems are not in the user's control and programs cannot do many things by design.

This has nothing to do with Linux, and it isn't actually a problem on those systems either: if you have root on the system then you are in control and can do whatever you want anyway. The purpose of setting security boundaries and not running everything as root is that not everything needs access to everything else all the time. The security model you're suggesting became obsolete by the mid-1990s.

And let me say this again so it's perfectly clear. When you use X11 there is effectively no security boundary between any X11 clients. So if you start up a root terminal, or use sudo, or anything else like that, then every other X11 client on the system also effectively gets root. This is unacceptable, and I can't believe I still have to point it out to long-time Linux users who should be technical enough to understand. It doesn't even matter if you personally think it's fine to run everything as root: maybe you do. But as a user you should understand the system well enough to know that this absolutely is not OK for lots of other users, and it's simply not appropriate to ship as the default in the year 2023.

These are not fantasy issues, these are actual issues that the underlying system was purposely designed to fix. X11 pokes a huge gaping hole in it.

>sharing keyboard/mouse with synergy/barrier/etc is secure.

No. On a typical X11 install it's not, because it relies on insecure APIs.

play_ac | 2 years ago | on: Binance founder Changpeng Zhao agrees to step down, plead guilty

That won't happen and would actually be much worse for Monero because it means everything becomes a giant target for thieves and scammers, even more than it already is. The reason it's failed is because the idea of cryptocurrency is fundamentally bad. Monero isn't even trying to hide it. The developers openly say that criminals should use it to commit crimes.

play_ac | 2 years ago | on: Binance founder Changpeng Zhao agrees to step down, plead guilty

"Usable" is a massive stretch. The only way most people will ever be able to use it is through a custodial wallet, so it's right back to bank accounts and centralized exchanges.

But the whole thing is a distraction anyway. The majority of transactions happening off-chain means that Bitcoin is an utter failure at everything it ever set out to accomplish.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

>Why do you think people cannot stand writing with VSCode for example?

Which people? Every recent study I've seen shows VSCode as the most popular code editor by a large margin. Maybe latency isn't as important as you think?

>Are you saying that latency in the order of 250ms when editing text is unnoticeable?

No. Sorry for the info dump here, but I'm going to make this absolutely clear so there's no confusion. The latency of the entire system is the latency of the human operator plus the latency of the computer. My statement is that, even assuming a magical computer that computes frames and displays pixels faster than the speed of light, the absolute minimum bound of this system for the average person is about 250ms. You only see lower average response times in extreme cases like pro athletes: so basically, not computer programmers, who spend far more time thinking about problems and sitting in meetings than they actually spend typing.

Now let's go back to reality: with a standard 60Hz monitor, the latency added by display synchronization is a maximum of about 16.67ms. That's the theoretical MAXIMUM, assuming the software is fully optimized and renders as fast as possible, your OS has realtime guarantees so it doesn't preempt the rendering thread, and the display hardware adds no latency of its own. So at most, you could reduce the total system latency by about 6% by optimizing the software. You can't save any more than that.

However, none of those things are true in practice. Making the renderer use damage tracking everywhere significantly complicates the code, and it may not even be usable in situations like syntax highlighting where the entire document state may need to be recomputed after typing a single character. All PC operating systems can add significant, unpredictable lag in the driver. All display hardware using a scanline-based protocol still has significant vblank periods. Adding these up, you may sometimes be able to measure around 1ms of savings by doing things this way, in exchange for massively complicating your renderer, and with a high standard deviation, meaning you'll likely perceive the total latency as HIGHER because of all the stuttering. This is less than 1% of the total latency in the system, and it's not even going to be consistent or perceptible.

Now instead consider a 360Hz monitor. The theoretical maximum you can save here is about 2.78ms per frame, which gives you a CONSISTENT ~5% latency reduction over the old monitor as long as the software can keep up. Optimizing your software for this improves every other situation too, whereas the other approach could make things worse. Even if it doesn't make things worse, it could only save another theoretical 1%, and only in a barely perceptible way. It just doesn't make sense to optimize for that last sub-1% when it's mostly caused by hardware limitations, nobody actually cares about it, and people are happy to use VSCode anyway without any of this.
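To make those percentages concrete, here's the arithmetic in a short Python sketch. The 250ms figure is the assumed average human reaction time from above; everything else follows from the refresh rate.

```python
# Back-of-the-envelope check of the latency percentages above.
# Assumption (from the argument above): average human reaction time ~250 ms.
HUMAN_MS = 250.0

def display_sync_cost(refresh_hz):
    """Worst-case latency added by display synchronization at a given
    refresh rate, and its share of the total (human + display) latency."""
    frame_ms = 1000.0 / refresh_hz
    return frame_ms, frame_ms / (HUMAN_MS + frame_ms)

frame60, share60 = display_sync_cost(60)     # ~16.67 ms, ~6% of the total
frame360, share360 = display_sync_cost(360)  # ~2.78 ms, ~1% of the total

# A 60 Hz -> 360 Hz upgrade removes the difference consistently: ~5%.
upgrade_share = (frame60 - frame360) / (HUMAN_MS + frame60)

print(f"60 Hz adds {frame60:.2f} ms ({share60:.1%} of total)")
print(f"360 Hz adds {frame360:.2f} ms ({share360:.1%} of total)")
print(f"upgrade saves {upgrade_share:.1%}")
```

Note these are upper bounds: the share that software optimization could ever recover, not what it actually recovers in practice.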

So again, you can avoid these accusations of "utter nonsense" when it's clear you're arguing against something that I never said.

>The perceived latency when editing text is between pressing a key and your brain telling you "my eyes have detected a change on the screen."

Your brain needs to actually process what was typed. Prediction isn't helping you type at all, if it did then the latency wouldn't matter anyway. If you're not just writing boilerplate code then you may have to stop to think many many times while you're coding too.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

>I've never heard of anyone having an X11 security problem in the last 20 years.

Here are six CVEs from just last month. Check the mailing lists and you'll see many more like these going back years and years.

https://lists.x.org/archives/xorg/2023-October/061506.html

https://lists.x.org/archives/xorg/2023-October/061514.html

And before you say this is not what you meant: the X server and X client libraries do very little anymore besides parsing untrusted input and passing it somewhere else. That's their main purpose, and they're terrible at it. And because it's X, this input can also arrive over the network, so every ordinary memory bug can become an RCE. This is probably the single biggest attack vector on a desktop system aside from the toolkit. It's exactly the wrong thing to grant access to every input on the system.

This is not just my personal opinion or me giving anecdotes either, this is paraphrasing what I've heard X developers say after many years of patching these bugs. But that's not even the whole problem as I'll explain shortly.

>But for actual computers you control it just isn't (a problem). Wayland for "security" is cargo culting smartphone user problems. It's not actually a real issue.

Yes it is a problem, and no, it's not cargo culting. Practically speaking, the X11 security model means every X client gets access to everything, including all your passwords (and the root password) as you type them, which in turn lets every X client spawn arbitrary root processes, read your whole filesystem including your private keys, insert kernel modules, or do whatever else. If you actually think this "isn't a real issue" then you should just stop using passwords, stop protecting your private keys, run every program as root, and disable memory protection: because that's what this actually means in practice. No, I'm not exaggerating. The security model of X11 has no concept of client boundaries at all. This would be completely unacceptable on any other OS, but for some reason it's become a meme to say that only smartphones need to care about this. Really? Come on.

>I use the keyboard/mouse sharing in X11 (via synergy) and I have for 20 years. It is vitally important to my workflow. It works on dozens of different OSes including linux. But not the waylands linuxes. Any graphical environment that can't do this is useless to me.

X11 can't do it securely so I would say that's as useless as not implementing the feature, if you have to compromise your security in order to get it.

The feature will be implemented in Wayland eventually when the design for a secure API is finished. There are people working on it now. In comparison, X11 is probably never going to gain a secure way to do that.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

No, really. Those APIs are too low level to be useful for normal applications. Nothing in them is useful for games at all. I don't know why you think it's appropriate to put in these insults either. Cut it out.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

>Now that's an ahistorical conspiracy theory

No? Where exactly do you think I've theorized about the existence of a conspiracy? Because I've actually said the exact opposite: there isn't a conspiracy and no one is cooperating at all. There's no evil group of developers secretly planning to sabotage everything. It's just the usual bad communication and planning that happens with a distributed team.

>Those diverse desktop environments contributed hugely to GTK, GNOME just didn't use their work

Can you name what any of these contributions were? Because I've never seen them. I've seen contributions here and there, lots of minor bug fixes, but nothing major.

>Nobody is going to fully "kiss the ring" unless they get something out of it

Avoid this rhetoric please. These open source projects are a volunteer collaboration. No one's kissing any rings or trying to get something out of the maintainers, other than the usual: everyone helps each other write and maintain the code.

>but they could have done a lot better than fighting third-parties tooth-and-nail. GNOME should be a proud project that leads the GNU movement

I really don't know what you're talking about here, but disagreeing about technical things isn't "fighting tooth-and-nail". That's a normal part of any project.

Personally I don't think anyone should care about leading the GNU movement, that's been plagued by petty infighting and drama since the very beginning.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

Your comment has nothing to do with the conversation. The reason to have low latency when typing text is so you can correct mistakes. That requires the full response time. There's no moving or evolving shapes. Maybe proofread your own comments before throwing around accusations of "utter nonsense".

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

The Subsurface developer did that 10 years ago, and it was only because he personally preferred Qt. Take a step back for a moment and consider that, in 10 years, that's the only major example anyone ever brings up. GTK is still very welcoming of contributions to maintain the GDK backends. Developers like that have to actually step up, do the work, and have patience, instead of outright quitting and running off to Qt, which has a whole company maintaining those ports.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

You shouldn't use raw X11 or raw Wayland unless you're writing a low-level toolkit. If you're working on games, SDL should handle all that stuff for you.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

Keyboard/mouse sharing is completely unrelated to the Wayland protocol. Wayland is only concerned with sending input events to client windows. Generating and capturing global events is out-of-scope and it's an entirely different API. The way this works in X11 is a giant hack that requires multiple extensions and the end result is it compromises all security of those devices. It's even more delusional to pretend this was ever production-ready or that Wayland needs to be ready for anything here. The X11 implementation just shouldn't have been shipped at all.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

I realize it's probably a waste to say this to someone with your username, but getting angry at the situation is futile. You shouldn't use Linux if you're not used to random stuff changing and breaking by now, and you're not comfortable adapting to those changes. Doubly so for a rolling-release distro like Arch. X was obsolete and a security disaster a decade ago; holding onto it for another decade is just masochism. If it's all too much trouble for you to run a Unix-like desktop and keep it updated, there's always macOS. They never even made the initial mistake of using X.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

GIMP has about 3-4 part-time developers and no designers. They have no resources to redesign the user interface even though it's been wanted for a long time. It's taken them an extremely long time just to get GIMP 3 out the door and that's just a port without any major UI changes. But I agree otherwise, the horrible name is completely on them.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

No, that's an outlandish conspiracy theory and completely ahistorical. GTK was always developed on Linux first, and before it was used by GNOME it had a lot of GIMP-specific functionality that didn't extend well to other apps. Want to know why? Because GIMP and GNOME developers were the only ones contributing. Those "diverse desktop environments" almost always took from GNOME and contributed very little back. That's fine to do, but then they need to accept that they don't call the shots. They don't get to pull their funding and then complain that someone else is being a bad custodian; it doesn't work like that.

play_ac | 2 years ago | on: GTK: Introducing Graphics Offload

Key word here being "might". What actually gets displayed is highly dependent on the performance of the program itself and will manifest as wild stuttering depending on small variations in the scene.

I've seen no game consoles that allow you to turn vsync off, because it would be awful. No idea why this placebo persists in PC gaming.
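The stuttering point above can be illustrated with a tiny simulation. The numbers are hypothetical: a double-buffered renderer on a 60Hz display, where each frame starts after the previous one is shown and is displayed at the first vblank after it finishes rendering.

```python
import math

REFRESH_MS = 1000 / 60  # one vblank interval on a 60 Hz display

def vsync_presentation(render_times_ms):
    """Return presentation timestamps for a double-buffered, vsynced
    renderer: each frame starts when the previous one is presented and
    is shown at the first vblank after it finishes rendering."""
    presented = []
    t = 0.0
    for r in render_times_ms:
        done = t + r
        t = math.ceil(done / REFRESH_MS) * REFRESH_MS
        presented.append(t)
    return presented

# Render times vary by only 3 ms around the 16.67 ms frame budget...
times = vsync_presentation([15, 18, 15, 18])
deltas = [b - a for a, b in zip([0.0] + times, times)]
# ...but frames alternate between 1-vblank and 2-vblank gaps:
# visible stutter, driven entirely by the program's own performance.
print([round(d, 2) for d in deltas])  # [16.67, 33.33, 16.67, 33.33]
```

A 3ms variation in render time turns into a 16.67ms swing in presentation time, which is exactly the "wild stuttering" described above.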
