Scryptonite's comments

Scryptonite | 1 year ago | on: WebGPU-Based WiFi Simulator

When I run the Waveguide Simulator demo on my Alienware M15 Ryzen Ed. R5 (it has an RTX 3070; Windows 11 Pro, Chrome v129), I hear a distinct high-pitched flutter coming from my laptop. I thought it was the speakers, but no, with the volume down it was still present as long as the simulator was playing. Weird, but a very cool demo (probably my hardware; I never hear this during games or other WebGPU demos). The realistic house simulation produces a different sound signature.

Scryptonite | 1 year ago | on: Bun v1.1.22

No question, just want to say Bun is awesome and thank you.

(minor nit: the release article says "Uint8Array.prototype.fromBase64()" when it's actually the static "Uint8Array.fromBase64()" per the code sample. Same for .fromHex)
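For anyone curious why the nit matters: the decoders are static methods on the constructor, while the encoders live on the prototype. A quick feature-checked sketch (support depends on the runtime, hence the guard):

```javascript
// fromBase64/fromHex are static methods on Uint8Array itself; toBase64/toHex
// are instance methods. Feature-check, since not every runtime ships them yet.
if (typeof Uint8Array.fromBase64 === "function") {
  const bytes = Uint8Array.fromBase64("SGVsbG8="); // decode: static method
  console.log(bytes.toBase64());                   // re-encode: instance method
} else {
  console.log("Uint8Array.fromBase64 not available in this runtime");
}

// Either way, Uint8Array.prototype.fromBase64 is undefined — the release
// article's spelling would throw if you actually called it on an instance.
```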

Scryptonite | 7 years ago | on: YouTube will delete existing video annotations on January 15, 2019

Or they could keep an archive of the annotations and provide a process for video owners to choose to have Google 'bake' them into the video permanently once they've been shut off in mainstream YT interfaces. Annotation links could be rendered in a way that indicates the link can now be found in the description.

Scryptonite | 7 years ago | on: Presidential Alerts

> The name just automatically angers a significant portion of the population, no matter who the president happens to be.

I think that pretty much speaks to how shamefully bad politics has gotten here in the US. I see a couple of comments that jump to speculate that it could be used as a tool of politics, or that they would rather opt out of knowing about an imminent threat to their own and/or their fellow Americans' lives. I think that's selfish, considering smoke and CO detectors can't warn you of a nuclear attack. If an office of the state were responsible for calling out a fire in the building, I would want every chance to be informed, regardless of whether or not they were my preferred elected official. Obviously, if the system were abused I would want it fixed.

Scryptonite | 7 years ago | on: Valid use cases for autocomplete=off

Previous discussion about this in 2016: https://news.ycombinator.com/item?id=11911116

Last I checked, you can always give Chrome a stronger hint not to autocomplete using something like this:

    // Chrome ignores autocomplete="off", but (last I checked) it leaves
    // values it doesn't recognize alone — so feed it a nonsense token instead.
    document.querySelectorAll("input[autocomplete=off]")
        .forEach(element => {
            element.autocomplete = window.chrome ? "hell-no-chrome" : "off";
        });

I think it's a shame that they didn't design a UI or something to coordinate with the user to override it on less well-behaved web apps. Instead, they just decided to ignore it completely.

Scryptonite | 8 years ago | on: Mixing Vue.js templates with server-side templates can lead to XSS

Makes sense, because htmlspecialchars() doesn't protect against malicious Vue template expressions; it only converts the characters used to represent HTML tags, entities, and attributes (<>"'&), IIRC.

I think another solution (besides v-pre) to "fixing" it (though you might say that relying on htmlspecialchars() to protect against user-supplied {{vue expressions}} was unwise to begin with) is to replace { and } with &#123; and &#125; after running htmlspecialchars/htmlentities.
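A minimal sketch of that idea in JS (function names are mine, purely illustrative):

```javascript
// Escape the HTML metacharacters first, then neutralize the curly braces so
// user input can never form a {{ expression }} that Vue would evaluate.
function escapeHtml(s) {
  const map = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" };
  return s.replace(/[&<>"']/g, c => map[c]);
}

function escapeForVueTemplate(s) {
  return escapeHtml(s).replace(/\{/g, "&#123;").replace(/\}/g, "&#125;");
}

console.log(escapeForVueTemplate("{{ constructor.constructor('alert(1)')() }}"));
// The braces come out as &#123;/&#125;, so Vue never sees a mustache expression.
```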

EDIT: Another solution would be to pass a different set of delimiters to Vue that uses characters that would be escaped by htmlspecialchars, like demonstrated in [1] or like so:

    // Vue 2 also documents this as a per-component `delimiters` option.
    Vue.options.delimiters = ['<%', '%>'];

[1]: https://stackoverflow.com/a/40538194/4522571

Scryptonite | 8 years ago | on: Defending a website with Zip bombs

Reminds me of the time I wrote a Node script to send an endless stream of bytes at a slow, steady pace to bots that were scanning for vulnerable endpoints. It would cause them to hang, preventing them from continuing on to their next scanning job; some remained connected for as long as weeks.

I presume the ones that gave out sooner were manually stopped by whoever maintains them, or they hit some sort of memory limit. Good times.

Scryptonite | 9 years ago | on: “autocomplete=off is ignored on non-login input elements”

It's tough to say that Chrome made the right call, as I enjoy being able to (in theory) rely on browsers aiming to be standard compliant. The standard is there for a reason, no?

I noticed that the Chrome team also developed a feature called threaded scrolling in an attempt to improve site UX, but instead (or additionally?) it completely ruins the ability to rely on onscroll/onmousewheel events (let alone being able to trust the values pulled from scrollLeft/scrollTop using requestAnimationFrame or similar). You can see it in these two GIFs, with threaded scrolling enabled (currently the default behavior): https://gfycat.com/DiligentHomelyIndianelephant and with threaded scrolling disabled (behind a Chrome flag): https://gfycat.com/SlimDefinitiveErin

The same flag/feature is active on mobile Chrome as well, so the same effect can be seen there. The only way to guarantee jank-free work with scroll position (for parallax or whatever) is to implement the user-input-to-scroll-behavior handling entirely yourself, which is rather unfortunate imo.

You can play with this yourself: http://jsbin.com/mohehotupe/edit (you might need to click "Run with JS" or tick "Auto-run JS" to start the scroll-listening script)

My take on autocomplete is that the standard says it's something I can disable, so autocomplete="off" is simply something the browser should obey. I have no problem with users using extensions or taking "advantage" of a browser setting to "fix" websites they feel have abused this attribute. But there are valid use cases for disabling autocomplete, as other comments have mentioned, and all Chrome has done is make it so I just have to do autocomplete="sudo-off" and it "works". But what happens when other webmasters misuse the attribute again? They might as well just toss support for it if they really can't trust the page to do the right thing.

Scryptonite | 11 years ago | on: Amazon Dash Button

If they are remotely serious (it being almost April 1st), I think I'd like to see them offer to bundle NFC stickers with each product purchase, so that I can place one near the product's point of use should I want to replace it at some point in the future.

Scryptonite | 12 years ago | on: Who exactly is crawling my site?

I deal(t) with the same thing. I made my web server try to stream a page that never ends, and some bots would stay connected for hours and hours. But over time they seem to have adapted.

I've also noticed that some of them used to make requests synchronously (waiting for the previous one to finish before making another), but they've adapted to make requests in parallel and add timeouts so they don't have their time wasted quite as long.

I created a log of the ones who stayed connected the longest.

https://gist.github.com/scryptonite/5324724

I don't bother to maintain it anymore, but it was pretty interesting watching them change tactics over time.

Scryptonite | 12 years ago | on: Why I'm Planning to Kill W3Schools

I used to use http://help.dottoro.com/ when I was learning to program, and I'd recommend it to anyone looking for a good go-to doc. But for quick reminders (which is rare) I generally use MDN or read the official spec.

They (help.dottoro.com) also have links (when available) to the Microsoft Developer Network, Mozilla Developer Network, Safari Developer Library, and the official W3C spec, which can be helpful. The only thing I would really change about their site is the URL structure. Completely nondescript and not user-friendly.

Once you move from using W3S to MDN or really anything else there is no going back.

CSS's display:

W3S: http://www.w3schools.com/css/css_display_visibility.asp

MDN: https://developer.mozilla.org/en-US/docs/Web/CSS/display

Dottoro: http://help.dottoro.com/lchnsqsb.php

Scryptonite | 12 years ago | on: Ask HN: Why is Facebook indexed in search engine, against robots.txt rules?

What if a different robots.txt is being served up for the real Googlebots?

EDIT:

Based on the comments in their robots.txt, it appears that they whitelist certain robots. You'd have to apply for your robot to crawl their site at https://www.facebook.com/apps/site_scraping_tos.php

They probably serve a uniquely generated robots.txt to each whitelisted robot. You'd never know what rules it contains unless you are one of them.
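A hypothetical sketch of how that kind of conditional serving could look (the crawler list and rules here are mine, purely illustrative):

```javascript
// Serve a permissive robots.txt to (claimed) whitelisted crawlers and a
// deny-all one to everybody else, keyed off the User-Agent header.
const WHITELISTED = [/Googlebot/i, /bingbot/i]; // illustrative list

function robotsTxtFor(userAgent = "") {
  const allowed = WHITELISTED.some(re => re.test(userAgent));
  return allowed
    ? "User-agent: *\nDisallow:\n"    // whitelisted: crawl away
    : "User-agent: *\nDisallow: /\n"; // everyone else: keep out
}

// Usage inside an HTTP handler:
//   res.end(robotsTxtFor(req.headers["user-agent"]));
```

(A UA string is trivially spoofed, of course, which is presumably why they also make you sign a ToS and would verify crawlers by IP or similar.)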
