giovannibajo1's comments

giovannibajo1 | 9 months ago | on: Bill Atkinson has died

Hmm, can you explain that better? I don’t get it. D3 doesn’t get reset because it’s guaranteed to be 0 at the beginning of each scanline, and the code needs to go through all the “scanline blocks” until it finds the one whose Y matches the one specified as argument. It seems to me that each scanline is still self-contained and logically begins at X=0 in the “outside” state?

giovannibajo1 | 9 months ago | on: Bill Atkinson has died

Yeah those are the horizontal spans I was referring to.

It’s a sorted list of X coordinates (left to right). If you group them in pairs, they are begin/end intervals of pixels within the region (the visible ones), but it’s actually more useful to manipulate them as a flat array, as I described.

I studied the code a bit: each scanline is prefixed by its Y coordinate, and uses an out-of-bounds terminator (32767).

giovannibajo1 | 9 months ago | on: Bill Atkinson has died

There were far fewer abstraction layers than today. Today, when your desktop application draws something, it gets drawn into a context (a "buffer") which holds the picture of the whole window. Then the window manager / compositor simply paints all the windows on the screen, one on top of the other, in the correct priority (I'm simplifying a lot, but just to get the idea). So when you are programming your application, you don't care about other applications on the screen; you just draw the contents of your window and you're done.

Back then, there wasn't enough memory to hold a copy of the full contents of all possible windows. In fact, there were actually zero abstraction layers: each application was responsible for drawing itself directly into the framebuffer (an array of pixels), at its correct position. So how to handle overlapping windows? How could each application draw itself on the screen, but only on the pixels not covered by other windows?

QuickDraw (the graphics API written by Atkinson) contained a data structure called "region" which basically represents a "set of pixels", like a mask. And QuickDraw drawing primitives (eg: text) supported clipping to a region. So each application had a region instance representing all visible pixels of the window at any given time; the application would then clip all its drawing to that region, so that only the visible pixels would get updated.

But how was the region implemented? Obviously it could not have been a mask of pixels (as in, a bitmask), as that would use too much RAM and would be slow to update. In fact, the region data structure also had to be quick at operations like intersections, unions, etc., because the operating system had to update the regions of every window as windows got dragged around with the mouse.

So the region was implemented as a bounding box plus a list of visible horizontal spans (I think; I don't know the exact details). When you represent a list of spans, a common hack is to simply store the list of coordinates at which the "state" switches between "inside the span" and "outside the span". This approach makes for some nice tricks when doing operations like intersections.

Hope this answers the question. I'm fuzzy on many details so there might be several mistakes in this comment (and I apologize in advance), but the overall answer should be good enough to highlight the differences compared to what computers do today.

giovannibajo1 | 11 months ago | on: Libogc (Wii homebrew library) discovered to contain code stolen from RTEMS

I don’t understand if this question is legal or moral/technical. I will answer the latter, from the point of view of a prospective user of the library who wants to make up their own mind about this.

It's quite easy to prove that libdragon was fully clean-roomed. There are thousands of proofs, like the git history showing incremental evolution and discovery, the various hardware test suites developed in parallel with it, and the Ares emulator improving its accuracy as things have been discovered over the past 4-5 years. At the same time, the n64brew wiki has also evolved to provide a source of independently verified, trustworthy hardware details.

Plus there are tens of thousands of Discord log messages where development has incrementally happened.

This is completely different from, e.g., romhack-related efforts like Nintendo microcode evolutions, where the authors explicitly acknowledge having used the leaks to study and understand the original commented source code.

Instead, libdragon microcode has evolved from scratch, as is clearly visible from the git history: discovering things a bit at a time, writing fuzz tests to observe corner-case behaviors, down to even creating a custom RSP programming language.

I believe all of this will be apparent to anybody approaching the codebase and studying it.

giovannibajo1 | 1 year ago | on: Gemini can't be disabled on Google Docs

Google Docs runs a lot of algorithms over the data you put in. For instance, it paginates it and shows a page count. This is an algorithm processing your data exactly like Gemini does. There is no option in Google Docs to prevent the pagination algorithm from reading your data and processing it.

Another example: Google Docs indexes the contents of your documents. That is, it stores all the words in a big database that you don't see and don't have access to, so that you can search for "tax" in the Google Docs search bar and bring up all documents that contain the word "tax". There is no option in Google Docs to avoid indexing the contents of a document for search purposes.

When you decide to put your data into Google Docs, you accept that Google processes your data in several ways (which should hopefully be documented). The fact that you seem so upset that a specific algorithm is processing your data, just because it has the "AI" buzzword attached to it, seems like an overreaction prompted by the general panic we're living in.

I agree Google should be clear (and it is clear) about whether Gemini is being trained on your data, because that is something with side effects you have the right to be informed about. But Gemini merely processing your data to provide feature N+1, among the other 2 billion available, is really not noteworthy.

giovannibajo1 | 2 years ago | on: Go run

There's also a technical reason, which is that the build system is written in the language it targets. So the cool tool is written in coolang. That's obviously not required; you could use any programming language for the cool tool. It just happens that all the people who care about the cool tool, understand the needs of the ecosystem, have issues with missing features, etc. already have a non-empty intersection of languages they know: they all know coolang.

If coolang decided to add coolang support to Bazel instead, they would probably have to learn Java [1]. Current maintainers and contributors of Bazel don't know coolang, and they don't care much about it, especially in the early stage. And maybe coolang developers don't know Java, or even actively hate it with a passion (that's why they were in the market for a new language). And even if some coolang developer decided to contribute to Bazel, the barrier would be much higher: being a mature build system with so many features and different needs, working in it is surely going to be complex; there will be many different concepts, layers, compromises, and APIs to work with. So it just makes more sense for them to use coolang, so that every coolang developer with a real need for the cool tool to improve can contribute to it.

[1] I know nothing of Bazel, so just bear with the example even if it's technically not correct.

giovannibajo1 | 2 years ago | on: Don't pass structs bigger than 16 bytes on AMD64

That would still require the compiler to serialize the three registers to the stack, in order to pass a pointer to the structure to the callee. The described benefit is avoiding any serialization from registers to the stack, which cannot be avoided with pass-by-reference.

giovannibajo1 | 3 years ago | on: Golang disables Nagle's Algorithm by default

I think “least surprise” also depends on your background. In Go, files don’t buffer by default either, contrary to many languages including C. If you call Write() 100 times, you run exactly 100 syscalls. Intermediate Go programmers learn this, and learn that they must explicitly manage buffering (eg: via bufio).

I don’t think it’s wrong that sockets follow the same design. It gives me less surprise.

giovannibajo1 | 3 years ago | on: Spotify CEO renews attack on Apple after Musk's salvo

The difference is that Facebook is in the business of mining your soul, so whatever store they created would be targeted at that, and the rules and policies would make sure they are able to reach that goal.

Apple is in the business of selling phones and has decided that a good strategy for them is to protect the privacy of users against data miners. So their store and payment system protect users by default against data-collection practices.

> Having app stores compete for developers and users would be amazing for everyone except the current app store owners

It would be good for developers, not for users. I don’t think a single non-developer user is asking for multiple ways to download and install an app, or for having to search multiple stores with multiple payment systems to get one piece of software. For users, the “iPhone” lets them download apps; none of them gives a thought to the fact that it happens via a single “store”.

giovannibajo1 | 4 years ago | on: Drawing Triangles on N64

It is pretty well understood in most aspects that pertain to regular software development, though there are still corners being investigated.

The most accurate and fast emulator right now is Ares (https://ares-emu.net), which bundles the Vulkan-accelerated RDP emulation with a recompiler for both the CPU and the RSP. It is extremely accurate in many regards and in general much closer to the real hardware than any other emulator (with cen64 a close second). Other emulators manage to run most of the game library but use several hacks, while Ares keeps a zero-hack approach, so not everything works, but it is for instance far more compatible with advanced homebrew that uses the hardware in ways the Nintendo SDK did not.

The most advanced open source library for N64 development is libdragon (https://github.com/DragonMinded/libdragon), which is currently growing very advanced RSP ucodes that do things not possible with the Nintendo SDK. For instance, command list support was recently merged that sends commands from the CPU to the RSP without any lock in the happy path, with fully concurrent access from both processors. Another example is its DMA support for fetching data from ROM, which exploits previously unknown, partially-broken, undocumented features of the RCP to allow misaligned memory transfers.

The most accurate source of hardware documentation is the n64brew wiki, which is slowly gathering accurate, hardware-tested information on how the whole console works: https://n64brew.dev/wiki/Main_Page. Unfortunately, it's still lacking in many areas (eg: RSP). It's painstaking, slow work because there are many, many documents floating around with partial or completely wrong information.

giovannibajo1 | 4 years ago | on: SNES Development Part 1: Getting Started

I'm sorry if my comment came across as offensive; that wasn't my intent. I wasn't aware that there was a community fork of bass, and bass used to be pretty abandoned, so I've been advising people against it for quite some time. I'm happy there's a community willing to resume development on it.

I will open an issue, and I've joined the Discord if you want to discuss this. I've not really used it for years now, but I'm happy to help, or at least to tell you why I stopped using it so that you can ponder it for future developments.

giovannibajo1 | 4 years ago | on: SNES Development Part 1: Getting Started

I generally advise against using bass for homebrew development. Bass is not a very well-thought-out assembler. I've never used it for the 65816, but for other architectures like MIPS it has some serious design issues that cause invalid code to be silently accepted by default, which is normally a disaster when it comes to debugging.

For MIPS at least, one completely wrong design decision was to map raw numbers directly to register indices. For instance, in bass, this is a valid instruction:

    add 2, 4, 5
which means "add registers 4 and 5 together and write the result in register 2". Normally, one would write that line with the register aliases:

    add v0, a0, a1
The problem comes from the fact that MIPS also has an "addi" (add immediate) instruction that you would use like this:

    addi v0, a0, 5
"add the immediate value 5 to a0 and store the result in v0". So I guess the problem is clear: what happens if you instead write:

    add v0, a0, 5
There are two reasonable outcomes: either the assembler rejects the above line as invalid, or it silently converts it to "addi", which is what GNU as does, for instance. Instead, with bass, the above is a perfectly valid line that assembles to the same thing as "add v0, a0, a1", silently generating wrong code.

I think bass was a quick hack that outgrew its intended goal. I suggest using something more mature.

giovannibajo1 | 4 years ago | on: I decided to build a nine-bit computer

The Nintendo 64 had 9-bit RAM (Rambus RDRAM). Only 8 bits of each byte were accessible from the MIPS CPU for obvious reasons; the 9th bit was only used by the GPU (called "RDP") to store extra information while rendering (being a UMA architecture, the GPU used the same RDRAM as the CPU). Typically it contained a flag called "coverage" that was used to discriminate pixels on the edge of polygons, which were later subject to antialiasing. Reading pixels back with the CPU, you would be unable to see the coverage flag.

giovannibajo1 | 4 years ago | on: Trying Out Generics in Go

This, one thousand times.

The borrow checker forces you to write in the very narrow subset of code paradigms it can understand. When it fails to compile, it doesn't mean your code is wrong: it means it can't prove that your code is correct, which is a completely different statement.
