Lt_Riza_Hawkeye's comments

Lt_Riza_Hawkeye | 2 years ago | on: Chrome: Heap buffer overflow in WebP

Right, but for most users, Chrome's differentiator is that it's fast. I can't imagine them flipping the flag on by default - and if 99.99% of users aren't going to use it, it's probably not worth bloating the binary size either. Maybe someone will maintain a Chromium patch with it that people can keep rebasing, though.

Lt_Riza_Hawkeye | 2 years ago | on: Towards HTTPS by Default

> How do they know this?

I believe the setting "Make searches and browsing better" (subtitle "Send URLs of pages you visit to Google") under chrome://settings/syncSetup is on by default if you don't manually uncheck the "send usage data" checkbox when installing.

Lt_Riza_Hawkeye | 2 years ago | on: Introduction to ActivityPub (2021)

Yes, plus if they support client-to-server and then kill it later, they risk the same kind of backlash that hit Twitter and Reddit when they killed third-party clients. But if they only ever support server-to-server, then when they kill it they can just send an email to all the geeks saying "import your account into Threads by X date!" and most "normal" users will never notice.

Lt_Riza_Hawkeye | 2 years ago | on: I tested services to extract colors from image

gpick is also a great piece of software on Linux - unfortunately they got threatened by Pantone or something and had to remove the good color names like "midnight moss", "mineral green", and "bombay". Everything now pulls up as "light green", "faded green", or "bluegrey". However, the rest of the functionality is the best you could ask for.

http://0x0.st/Hqoj.png

Lt_Riza_Hawkeye | 3 years ago | on: Down the Cloudflare / Stripe / OWASP Rabbit Hole

Overall I agree with you - the only caveat I have to offer is Cloudflare's support of eSNI. My opinion on CF used to be quite black and white, but there is at least someone in there (for who knows how long) contributing to the actual security of the web. Not mutually exclusive with doing harm in other ways.

Lt_Riza_Hawkeye | 3 years ago | on: What's different about next-gen transistors

In-memory computing has only recently entered active research, but unlike quantistors, memristors are already being built (and have been built) in many research laboratories.

If you are interested in learning, the idea of memristors (and other computational memory technology) is not to replace traditional memory, but rather to augment small portions of it with increased/additional computational functionality.

For example, you could add a (relatively speaking) extremely small number of memristors to an existing memory module, load two matrices into that area of memory, and reading from the adjacent region of memory would immediately yield the result of multiplying those two matrices. If you could simply instruct the RAM module where the existing data lies, this would be an immense efficiency boost for AI/deep learning prediction algorithms. Here is a video explanation of how this could be used to perform matrix-vector multiplication in O(1): https://youtu.be/30K5i8bdiyg?t=1492
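The crossbar idea from the talk can be sketched in a few lines. This is just a software simulation of the physics (the function name and plain-array representation are mine, not from the video): the matrix is programmed as conductances, the input vector is applied as row voltages, and each column wire sums its currents. In hardware all the multiply-accumulates settle simultaneously as analog current, which is where the O(1) claim comes from.

```javascript
// Simulate a memristor crossbar doing matrix-vector multiplication.
// G[i][j] is the programmed conductance at row i, column j; V[i] is the
// voltage applied to row i. By Ohm's law each cell contributes current
// V[i] * G[i][j], and by Kirchhoff's current law each column wire sums
// those contributions: I[j] = sum_i V[i] * G[i][j].
function crossbarMultiply(G, V) {
  const cols = G[0].length;
  const I = new Array(cols).fill(0);
  for (let i = 0; i < G.length; i++) {   // row i driven at voltage V[i]
    for (let j = 0; j < cols; j++) {     // column j accumulates current
      I[j] += V[i] * G[i][j];
    }
  }
  return I; // in hardware, all columns settle in parallel
}

// e.g. crossbarMultiply([[1, 2], [3, 4]], [1, 1]) -> [4, 6]
```

The nested loops are only there because software is sequential; the point of the hardware is that the loop body happens everywhere at once.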

That video mentions later that something similar can be used to solve linear and partial differential equations. Applications to deep learning training are discussed at 40:20.

You'll notice everything discussed in the keynote has experimental results. At 41:27, he mentions that you can actually send images to their memristor chip through their website. Though the link is unfortunately dead now, it definitely worked at one point, and was in fact performing neural network computations on memristors over the network.

Lt_Riza_Hawkeye | 3 years ago | on: Qwik: No hydration, auto lazy-loading, edge-optimized, and fun

I wish there was a mode that gave me everything except lazy load.

I like the idea, though I do fear the level of complexity (while necessary for the problems it's trying to solve) will further push away non-web developers from the web space.

But my concern is that if I'm on spotty wifi and I click a button or scroll the page, the actual behaviors of that button or of the components scrolling into view may not function for a solid 5-10 seconds while my phone attempts to establish a connection. I love that the framework's demos make it really easy to see what's going on. For example, under the "Simple useWatch()" demo, you can see that the JS file that actually updates the page when the "+" button is clicked doesn't get loaded from the server until the first time you click the button. Similarly with the "Below the fold Clock" demo: the clock is initially rendered at some fixed time (10:20:30 by default in the example code) when you scroll down, but won't update to the current time until a network connection has been established and the JavaScript downloaded, parsed, and executed - none of which kicks off until you scroll the clock into view. That means you may be staring at a non-functional clock for 1-5 seconds until it snaps into reality.

It seems to me that there would be no downside to having these JS files preloaded in the background after the initial page load finishes. Curious why they went with the completely lazy loading strategy.
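A minimal sketch of that preloading idea, assuming you could get the list of lazy chunk URLs from somewhere like the framework's build manifest (`prefetchLinks` and the manifest are hypothetical illustrations, not a real Qwik API):

```javascript
// Sketch: after the initial load finishes, warm the HTTP cache for the
// lazily loaded chunks so a later click/scroll doesn't pay the network
// round trip. `prefetchLinks` is a hypothetical helper, not part of Qwik.
function prefetchLinks(chunkUrls) {
  return chunkUrls.map((href) => {
    // Plain-object fallback lets this run outside a browser for testing.
    const link =
      typeof document !== "undefined"
        ? document.createElement("link")
        : { rel: "", href: "" };
    link.rel = "prefetch"; // low-priority fetch into the HTTP cache
    link.href = href;
    if (typeof document !== "undefined") document.head.appendChild(link);
    return link;
  });
}

// In a real page you'd keep this off the critical path, e.g.:
// window.addEventListener("load", () =>
//   requestIdleCallback(() => prefetchLinks(chunksFromBuildManifest)));
```

Because `rel="prefetch"` is a low-priority hint, this shouldn't compete with the initial render - the chunks would simply already be in cache when the lazy loader asks for them.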

As an aside, it seems these JS files are served from a service worker. I have no idea why that would be remotely necessary, but it prevents me from playing around with this idea in the Chrome devtools, since the service worker doesn't seem to be subject to the network tab's "Disable cache" option or throttling simulation (for example, it returns the .js file in 2ms despite throttling being set to simulate 300ms of network latency).
