fallpeak's comments

fallpeak | 6 months ago | on: Formatting code should be unnecessary

The unreadability of that example has approximately nothing to do with code formatting, which is generally understood to refer to modifying the textual representation of the code while leaving the actual logic more or less unchanged. Can you propose some alternative whitespace or indentation scheme which would make that example significantly more readable?

fallpeak | 6 months ago | on: US Visa Applications Must Be Submitted from Country of Residence or Nationality

Now do the comparison where one endpoint is realtime, user-facing traffic and the other is batch processing that can easily eat up all available capacity and drive up latency.

If visa shoppers are overwhelming the normal processing of applicants who actually live in a particular country, it seems entirely appropriate to say "no, sorry, this location isn't for you" to the people who don't live there.

fallpeak | 6 months ago | on: LunarEngine: An open source, Roblox-compatible game engine

People still make games for old consoles occasionally as hobby projects, and those are usually released freely as ROM files. I'm not familiar with Japanese law, but in most countries that would constitute a fairly solid proof that there are legal uses to which an emulator can be applied and thus that emulation itself isn't inherently illegal.

fallpeak | 6 months ago | on: Code formatting comes to uv experimentally

"The basic principles of the Python ecosystem" are a dumpster fire to anyone who isn't already desensitized to the situation. Just like 'uv' as a whole, this seems like a meaningful step towards making Python a little less terrible to actually use, and should be applauded.

fallpeak | 6 months ago | on: AGENTS.md – Open format for guiding coding agents

I mean, by construction you're only ever going to see the examples where people checked them in and published that. It doesn't mean that other people aren't getting more use out of local instructions customized to their particular work.

fallpeak | 6 months ago | on: Calling Their Bluff

Interesting, I'm surprised the results vary so much for a query with such an objectively correct answer. I tested it on my desktop in private browsing mode because I wanted to see whether Kagi was much better, but figured I should report the null result when Google actually did fine.

fallpeak | 6 months ago | on: AGENTS.md – Open format for guiding coding agents

In my opinion an AGENTS.md file isn't an artifact at all (in the sense of being a checked-in part of the codebase), it's part of the prompt. You should gitignore them and use them to give the LLM a brief overview of the things that matter to your work and the requests you are making.

fallpeak | 6 months ago | on: AGENTS.md – Open format for guiding coding agents

I don't know if GPT-5 is an exception and is overcooked on XML specifically, but in general Markdown and XML seem to work about equally well for LLM inputs, the important part is just that they like hierarchical structured formats. The example on that page could probably be replaced with:

  ## Code Editing Rules

  ### Guiding Principles

  - Every component should be modular and reusable
  ...

  ### Frontend Stack Defaults

  - Styling: TailwindCSS
without any meaningful change in effectiveness.
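For comparison, a hypothetical XML rendering of the same two items (not taken from that page) carries the identical hierarchy, which is the part that matters to the model:

```xml
<code_editing_rules>
  <guiding_principles>
    <principle>Every component should be modular and reusable</principle>
  </guiding_principles>
  <frontend_stack_defaults>
    <styling>TailwindCSS</styling>
  </frontend_stack_defaults>
</code_editing_rules>
```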

fallpeak | 6 months ago | on: A privacy VPN you can verify

> If you pay in crypto, you have to report every conversion from fiat to monero

That's not what your link says, and as far as I'm aware it's not true. Buying crypto and then using some of it to buy goods and services has no tax reporting requirement; reporting only starts when you're either selling crypto or receiving it as payment, which is the same situation as the tax reporting for any other currency or valuable item you could deal in.

fallpeak | 7 months ago | on: GPT-OSS-20B extracted to a base model without alignment

While this is really neat work, it's not entirely accurate to describe this as a base model or unaligned. Instruct training does two things, broadly speaking: it teaches the model about this "assistant" character and how the assistant tends to respond, and it gives the model a strong prior that all prompts are part of just such a user/assistant conversation. GPT-OSS is notable both because the latter effect is incredibly strong (leading many to suspect that its training was very heavy on synthetic data) and because the assistant character it learned is especially sanctimonious.

This finetune seems to work by removing that default assumption that every prompt is a user/assistant chat, but the model still knows everything it was taught about the assistant persona, so inputs that remind it of a user/assistant chat will still tend to elicit the same responses as before.

fallpeak | 7 months ago | on: My Lethal Trifecta talk at the Bay Area AI Security Meetup

This is only a problem when implemented by entities who have no interest in actually solving it. In the case of apps, it has been obvious for years that you shouldn't outright tell the app whether a permission was granted (even aside from outright malice, developers will take the lazy option of erroring out instead of making their app handle permission denials robustly). Instead, every capability needs at least one "sandbox" implementation: lie about the GPS location, throw away the data they stored after 10 minutes, hand back a valid but empty (or fictitious) contacts list, and so on.
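A minimal sketch of that idea in Python (capability names and fake values are hypothetical; a real platform would wire this into the OS permission layer): each capability pairs a real backend with a sandbox backend, and a denied permission silently routes to the sandbox instead of reporting denial.

```python
# Hypothetical capability registry: each capability pairs a real
# implementation with a plausible-but-fake sandbox implementation.
# From the app's point of view, both "succeed".

def real_gps():
    raise RuntimeError("would read actual hardware")  # placeholder

def sandbox_gps():
    # Lie about location: return a fixed, plausible coordinate.
    return (40.7128, -74.0060)

def real_contacts():
    raise RuntimeError("would read actual contacts")  # placeholder

def sandbox_contacts():
    # Valid but empty contacts list.
    return []

CAPABILITIES = {
    "gps": (real_gps, sandbox_gps),
    "contacts": (real_contacts, sandbox_contacts),
}

def invoke(capability, granted):
    """Route to the real backend if granted, else the sandbox one.

    Either way the call returns normally, so an app can't detect the
    denial and error out instead of handling it.
    """
    real, sandbox = CAPABILITIES[capability]
    return (real if granted else sandbox)()

# An app denied both permissions still gets usable responses:
print(invoke("gps", granted=False))       # fake fixed coordinate
print(invoke("contacts", granted=False))  # empty contacts list
```

The key design point is that denial is indistinguishable from an unlucky but valid result, which removes the incentive to gate functionality on the permission being granted.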

fallpeak | 7 months ago

What an auspicious coincidence, to see this the same day I finally got around to setting up a proper pseudonym.