rng_civ | 1 year ago | on: How to draw an outline in a video game
rng_civ's comments
rng_civ | 1 year ago | on: CrowdStrike Update: Windows Bluescreen and Boot Loops
Declarative, immutable configurations for the win...
rng_civ | 1 year ago | on: I am starting an AI+Education company
If such an AI teacher style becomes widespread, these teachers have the potential to replace the parental relationship (in the same manner AI girlfriends/boyfriends threaten romantic relationships).
I see people talk about the dangers of the AI girlfriend/boyfriend, but not the dangers of introducing AI teachers to (especially young) kids. Nominal adults are already being affected by this (see Replika and company) and they are not even the "best".
If I wear my cynical hat for a second, I'm willing to bet that this parental replacement is a certainty, as an extension of the "screen" parenting that already exists. But this time, it might actually be helpful for the child so it will be socially acceptable and encouraged.
rng_civ | 1 year ago | on: Crossing the impossible FFI boundary, and my gradual descent into madness
> We wish to establish type soundness in such a setting, where there are two languages making foreign calls to one another. In particular, we want a notion of convertibility, that a type τA from language A is convertible to a type τB from language B, which we will write τA ∼ τB, such that conversions between these types maintain type soundness (dynamically or statically) of the overall system
> ...the languages will be translated to a common target. We do this using a realizability model, that is, by setting up a logical relation indexed by source types but inhabited by target terms that behave as dictated by source types. The conversions τA ∼ τB that should be allowed, are the ones implemented by target-level translations that convert terms that semantically behave like τA to terms that semantically behave like τB (and vice versa)
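My informal reading of that definition, transcribed (this is my own notation, not the paper's):

```latex
% tau_A ~ tau_B holds when a pair of target-level coercions maps
% semantic inhabitants of each type to the other; V[[tau]] is the
% logical relation's set of target terms behaving like source type tau.
\tau_A \sim \tau_B \;\triangleq\; \exists\, C_{AB},\, C_{BA}.\;
  \bigl(\forall e \in \mathcal{V}[\![\tau_A]\!].\; C_{AB}(e) \in \mathcal{V}[\![\tau_B]\!]\bigr)
  \;\wedge\;
  \bigl(\forall e \in \mathcal{V}[\![\tau_B]\!].\; C_{BA}(e) \in \mathcal{V}[\![\tau_A]\!]\bigr)
```

i.e. convertibility is witnessed by coercions that preserve semantic membership in both directions.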
I've toyed with this approach to formalize the FFI for TypeScript and Pyret and it seemed to work pretty well. It might get messier with Rust because you would probably need to integrate the Stacked/Tree Borrows model into the common target.
But if you can restrict the exposed FFI as a Rust-sublanguage without borrows, maybe you wouldn't need to.
[0] (PDF Warning): https://wgt20.irif.fr/wgt20-final23-acmpaginated.pdf
rng_civ | 1 year ago | on: Radial Menus in Video Games (2022)
Sure, you can map extra keybinds, but convenient keybinds are actually a scarce resource if you're essentially the "full stack" of the art pipeline (from sculpting, modeling, retopology, texture painting, to animating). This isn't even including keybinds for any custom tooling.
One thing I really appreciate about Blender specifically is that you can search through all the available operations with F3. This offers a nice trade-off between muscle memory, keybind consumption, and not needing to use the mouse.
rng_civ | 2 years ago | on: Major Louisiana DMV Hack
https://www.cisa.gov/news-events/cybersecurity-advisories/aa...
rng_civ | 2 years ago | on: Why do ships use “port” and “starboard” instead of “left” and “right?”
It would certainly be clear to say "left facing stern" or "left facing aft", but that's a mouthful when you can just shorten it (especially since the reference facing direction isn't relevant anyway). Bonus points if the shortened version can't be mistaken for another direction...
BTW, I'm 100% down for introducing dedicated words for "my left", "your left" etc vs just "on the left". It would certainly save me a bit of time when my family asks me to look for something and they flip between the two meanings in the same sentence.
rng_civ | 2 years ago | on: The New XOR Problem
The two interesting problems here:
* Classify the computational power of Transformers (which stumble on certain easier problems while solving harder ones)
* Find a "minimal" change to the Transformer that would allow it to compute these problems.
Solving these 2 problems by giving LLMs arbitrary access to external plugins is a cop out. You would not:
* Call yourself a chef just because you own a restaurant (you need to cook too!)
* Or (more program-y), say that C code meets Rust's memory safety standards simply because you can write the main function in C and write the rest of the program in Rust
Allowing arbitrary external plugins seems absurdly overkill and not 'minimal' (although that doesn't mean it isn't interesting from a practical perspective!), which is what I assumed rain1 was originally pointing out.
rng_civ | 2 years ago | on: Moving from Rust to C++
April Fools' Day is actually a super important defense against rogue AI and the Singularity. Think about it: our ancestors had the forethought to coordinate the largest data poisoning attack in history for hundreds of years. Why? To seed enough nonsense in our historical records so that any rogue AI would short-circuit itself into babbling nonsense.
Why do you think it took so long for AGIs like ChatGPT to emerge? Did you really think the AI winter in the 80s and 90s was a "coincidence"? That there were deep architectural and philosophical issues with the approach? Baloney. Without April Fools' Day, we would have become slaves to the Matrix by the late 90s, if not earlier.
Don't believe me? Well, here's proof that AI could have been developed in Medieval Europe by around late 13th century. For being the so-called "Dark Ages", the people of Medieval Europe were incredibly advanced compared to the 21st century, especially when it came to energy production. Get this: by the 11th century, England ALONE had more than 6000 wind and water turbines (Epstein 199) that they all built BY HAND. This allowed them to fine-tune the turbines to their unique environments, making them 100% more efficient than modern, mass produced metal junk.
Do you know what's even more amazing? Our ancestors knew about gravity and exploited it for power generation! What!!! We had a source of unlimited power by the mid-13th century (Epstein 208)!! But no: in the current age, we can't even muster the political power to make gravity-based perpetual motion machines because all of the physicists would whine about breaking Thermodynamics' laws. Well, screw Thermodynamics! Bastard is holding back all of humanity for personal profit by siphoning all of our hard-earned tax dollars towards solar and nuclear power. Gravity is where it's at!!
But I digress: back to AI in 13th-century Medieval Europe. So they had unlimited energy: how could they turn that into useful computations, like calculating SHA-256 hashes with k leading 0s? They had neither electricity nor silicon, or are there more truth bombs to be dropped? In this case, my dear reader, you would be correct to be sceptical. They didn't have any of that: what they DID have was grit, spit, and a whole lot of wooded land. Contrary to popular belief, they DID have computers back then, but they were based on flowing water instead of flowing electrons. They started out as simple time-keepers (Epstein 207), but eventually, medieval scholars (mostly Italian monks) started seeing the connections between flowing water and logic gates (see [1] for how it would have worked).
So they had the energy and they had the computational ability: why didn't AI take over the world in 13th century Medieval Europe? Simply put: the power of Mother Nature. While our ancestors built computers, they were necessarily made out of a combination of wood and iron, both of which don't fare well when in contact with water. So when Medieval People discovered that their AI was Rampant, their solution was to confound it with April Fools' Day nonsense, so that by the time the AI returned to thinking about world domination, its computational structures would already be half-rotten and rusted. This is also why there is scant evidence of these water-based computers: if they were not purposefully destroyed, they would have been by time, as Medieval Europeans had the foresight to abandon all AI research in favor of just thinking. (Coincidentally, the lessons learned from early medieval computers would be taken to heart by the shipwrights and directly contributed to Europe's dominance during the Age of Sail).
So the next time you complain about April Fools' Day, just remember that it has saved society for hundreds of years. It is one of humanity's ultimate defenses against the Matrix, and if you truly care about your loved ones, you would contribute to it. I know I will.
Originally written as Latex in Microsoft Word 2003.
* [0] Epstein, Steven A. An Economic and Social History of Later Medieval Europe, 1000-1500. Cambridge University Press, 2009.
rng_civ | 2 years ago | on: And yet It Understands
* Who's Daisy?
* Why would Daisy do that?
* Daisy is rude.
etc., which imply the existence of some sort of abstract object into which relations and other facts can be plugged. For me, the existence of that abstract object is "reasoning."
We do not know if GPT is capable of forming abstract objects in its network, and I do not think it is reasonable to infer that from its text output. In my non-expert opinion, it seems possible that the output can be achieved via knowledge regurgitation through the use of sentiment analysis, word correlations, and grammar classification.
So in this framing, it's not reasoning about Daisy nor hallucinating facts. It's regurgitating knowledge about the relationship between sentiment, words, and grammar. (An interesting experiment to run would be to change 'Daisy' to a random noun or even nonsense tokens to see what would happen).
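A sketch of that substitution experiment (the prompt and replacement words below are made up for illustration):

```python
# Swap 'Daisy' for ordinary nouns and then nonsense tokens, and
# compare the model's answers on each variant.  If the outputs stay
# structurally identical, that supports the "sentiment + grammar
# regurgitation" reading over genuine reasoning about Daisy.
PROMPT = "Daisy tore up the letter. Why would Daisy do that?"

replacements = ["Rose", "Table", "Xqzlv", "Brumf"]  # nouns -> nonsense
variants = [PROMPT.replace("Daisy", word) for word in replacements]

for v in variants:
    print(v)  # each variant would be fed to the model separately
```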
You might argue that the ability to mechanically model that relationship counts as reasoning, and that's a stance I won't outright dismiss. However, it does seem strictly less powerful than mechanically modeling on top of abstract objects.
rng_civ | 3 years ago | on: Testing GPT 4's code-writing capabilities with some real world problems
> capable of at-least C++ "constexpr"-style compile-time computation, which shouldn't even be possible if one presumes GPT is "just" a giant database storing only multidimensional word similarity scores and sequence distribution from text inference
I don't see how being a giant word-DB necessarily disqualifies compile-time computation. You can view computation as applying a series of term rewrite rules to an input until some sort of termination condition (or indefinite loop). In the case of these AI, the input is the prompt and predicting the next token is a limited form of term rewriting (where the rules are probabilistic and based off the network), and because code and explanations were probably included in the training data, it seems reasonable to me that the "rewrite rules" of Python bled a little bit into the AI.
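As a toy illustration of "computation as term rewriting" (the unary-addition rules below are made up for the example; they're not anything the model actually does):

```python
def rewrite(term, rules, max_steps=1000):
    """Apply the first matching rule to its leftmost occurrence,
    repeatedly, until no rule applies (a normal form) or the step
    budget runs out (an "indefinite loop")."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in term:
                term = term.replace(lhs, rhs, 1)
                break
        else:
            return term  # no rule fired: normal form reached
    return term

# Toy unary addition: numbers are runs of '1'.  '1+' -> '+1'
# shuttles digits across the plus sign; a bare '+' then vanishes.
rules = [("1+", "+1"), ("+", "")]
print(rewrite("11+111", rules))  # -> 11111
```

Next-token prediction is like this, except the "rules" are probabilistic and baked into the network weights rather than written down.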
It makes me insanely curious about the internal structures though. I gave that site 2 similar examples: one produces a correct explanation while the other produces an incorrect one. The difference: a deleted line of insignificant whitespace.
* https://whatdoesthiscodedo.com/g/dd2af89
* https://whatdoesthiscodedo.com/g/45ea060
From those 2 examples, I think it's pretty clear that the AI's "rewrite rules" don't always coincide with Python's, but I would expect this to be mitigated by targeted training (as with Copilot).
rng_civ | 3 years ago | on: ChatGPT broke the EU plan to regulate AI
I unironically attribute it to the Matrix. The movies have somehow weaseled their way into the public discourse as either some sort of prophecy or actual reality (the 'pill' speak, living in a virtual reality, etc.).
I won't comment on the validity of any position, but I think it's pretty cool that a piece of art has proliferated in such a way. I do wonder how impactful it has been in comparison to stuff like the Bible, Tolkien, etc.
rng_civ | 3 years ago | on: Will Carbon Replace C++?
Assuming your WASM sandbox is airtight, that would work. But there are still ways to break out or cause damage because, within the sandbox, it's like a flat address space with 0 modern protections like ASLR, stack canaries, or page protection (unless you manually compile them in yourself). See [0]
* [0]: https://www.usenix.org/conference/usenixsecurity20/presentat...
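To make the "flat address space" point concrete, here's a toy model (a Python bytearray standing in for wasm linear memory, not actual wasm semantics):

```python
# Toy model of wasm linear memory: one flat bytearray with no guard
# pages, ASLR, or canaries.  An out-of-bounds write from one
# "allocation" silently corrupts its neighbor instead of faulting.
memory = bytearray(64)

BUF, SECRET = 0, 16                  # two adjacent 16-byte "allocations"
memory[SECRET:SECRET + 4] = b"SAFE"  # the neighbor's data

memory[BUF:BUF + 20] = b"A" * 20     # 20-byte write into a 16-byte buffer

print(bytes(memory[SECRET:SECRET + 4]))  # b'AAAA', not b'SAFE'
```

In a native process, a guard page or canary might catch this; inside the sandbox, the write is perfectly "valid" as far as the host is concerned.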
rng_civ | 3 years ago | on: Where is the moral outrage about Britain’s grooming gangs?
rng_civ | 3 years ago | on: Ask HN: Why do games (as media) make so much money?
Monetization schemes lie on a spectrum, but even arcade tokens sit at the tamer end in comparison to modern schemes. The biggest difference: the only Advantage most arcade tokens would give you is an extra life (i.e. a skilled player can get away with minimal pay). I am aware of 0 arcade games that give you extra speed, damage, or max HP just because you put in another coin, while that is INCREDIBLY common with modern monetization schemes.
> Once they get older and have a better grasp of time and money, they may decide spending $200/month on a game you enjoy daily from the toilet is on par with other hobbies they could have, and more convenient.
Sure, and I won't argue against continually spending money on games. I think it's a really good thing that helps fund content and keeps the game alive. (I think $60 for modern AAA games is ludicrously low; it was $60 back in the '80s and '90s, and that certainly hasn't kept up with inflation and dev costs.)
> I think game monetization is over-scrutinized for a couple of reasons.
My stance:
* The stuff most people spend money on (i.e. in-app purchases for lootbox/gambling opportunities) is bad for gaming because it encourages BS game designs that artificially restrict progress and incentivize psychologically manipulative tactics
* These BS game designs make the games worse (as a "pure" game) 99% of the time
I basically haven't touched a modern AAA game in 5+ years because of this. In terms of gameplay, indie games have been way more interesting and diverse. And I give 0 shits about graphics.
rng_civ | 3 years ago | on: Ask HN: Why do games (as media) make so much money?
Consoles and gaming PCs tend to be expensive (in terms of upfront costs), require more physical space to use, and are hard to move.
Mobile phones have multiple purposes and, crucially, are mobile (shocker, right?)
This means that if you could only have 1 electronic device, you'd 100% choose a mobile phone.
And what better way to find a lot of customers than a F2P mobile game? It has 0 barrier to entry, a large audience, relatively low development costs, and little actual "game design" expectations.
rng_civ | 3 years ago | on: Ask HN: Why do games (as media) make so much money?
People spend big in games because:
* The spending is very stimulating visually and audibly (think lootbox openings)
* The gains from their spending translate directly to game-social prestige, game power, or both (i.e. an Advantage)
* This Advantage allows them to lord over the players who have spent less (or 0 in the case for F2P players)
Whether that manifests into addiction depends entirely on the rest of the game's design (but you know, games that introduce the Advantage tend to want to make a lot of money by getting you hooked on spending...)
> Second, games provide a sense of community. A lot of game revenue is monetizing people's desire to not be alone. Calling these players addicted is I think reductive.
Yes, these games do provide a sense of community because they are purposefully designed to do so. Without an incentive to play while getting lorded over by whales, the fish will leave. Without a bunch of fish to lord over, the whales will leave. Again, this depends on the game's design, but the vast majority of them encourage addiction to the Advantage and its use against others.
Maybe you can extend this analysis to IRL stuff. I don't know because I don't participate in any of it.
Source: [Let’s go whaling: Tricks for monetising mobile game players with free-to-play](https://www.youtube.com/watch?v=xNjI03CGkb4&t=0s)
rng_civ | 3 years ago | on: Ask HN: Why do games (as media) make so much money?
* Activision Blizzard 2022 Report: ~50% of revenue for the first six months of 2022 came from mobile
Source(PDF): https://investor.activision.com/node/35551/pdf
* Forbes: 7 Mobile Games Now Make Over $100 Million Every Month
Source: https://www.forbes.com/sites/johnkoetsier/2021/08/11/7-mobil...
* NCSoft Q3 2022: ~440 billion KRW in mobile sales vs ~97 billion KRW in PC sales
Source: https://kr.ncsoft.com/en/ir/irArchive/earningsRelease.do
* Game-of-the-Year Elden Ring had sold 17.5 million copies by September 2022. Assuming a very generous $100 average per copy, that's $1.75 billion.
That puts it slightly below the 3rd-highest-revenue mobile game in 2021.
Sources: https://www.eurogamer.net/elden-ring-sales-surpass-175m
https://sensortower.com/blog/billion-dollar-mobile-games-202...
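The back-of-envelope math above, spelled out:

```python
# Elden Ring revenue estimate from the figures above.
copies_sold = 17_500_000   # by September 2022
avg_price_usd = 100        # deliberately generous assumption
revenue = copies_sold * avg_price_usd
print(f"${revenue:,}")     # $1,750,000,000
```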
rng_civ | 3 years ago | on: Active ball joint mechanism with 3 DoF based on spherical gear meshings (2021)
While this has been the prevailing sentiment for Germany, I wouldn't be so sure after the Volkswagen emission fiasco. At least Volkswagen seems pretty eager to work around QA laws...
rng_civ | 1 year ago | on: How to draw an outline in a video game
There are various techniques to do this. The most prominent one IMO is from the folks at Blender [0] using geometry nodes. A Kuwahara filter is also "good enough" for most people.
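For a sense of what the Kuwahara filter does, here is a toy grayscale version on plain Python lists (a sketch of the classic algorithm, not a real implementation over image buffers):

```python
from statistics import mean, pvariance

def kuwahara(img, r=1):
    """Toy Kuwahara filter on a 2-D grayscale list-of-lists: each
    output pixel is the mean of whichever of its four (r+1)x(r+1)
    corner regions has the lowest variance, flattening noise into
    painterly patches while keeping edges fairly crisp."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            regions = []
            # four regions anchored at the pixel's four corners
            for dy, dx in ((-r, -r), (-r, 0), (0, -r), (0, 0)):
                vals = [img[yy][xx]
                        for yy in range(y + dy, y + dy + r + 1)
                        for xx in range(x + dx, x + dx + r + 1)
                        if 0 <= yy < h and 0 <= xx < w]
                regions.append(vals)
            out[y][x] = mean(min(regions, key=pvariance))
    return out
```

Production versions run this per channel (or in a decorrelated color space) on the rendered frame as a post-process.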
> When dealing with a stylised 3D renderer, what would the ideal "mesh editor" and "scenery editor" programs look like? Do those assets need to have a physically-correct 3D surface and 3D armature, or could they be defined in a more vague, abstract way?
Haven't used anything else but Blender + Rigify + shape keys + some driver magic is more than sufficient for my needs. Texturing in Blender is annoying but tolerable as a hobbyist. For more NPR control, maybe DillonGoo Studio's fork would be better [1]
> Would it be possible to render retro pixel art from a simple 3D model? If so, could we use this to make a procedurally-generated 2D game?
I've done it before by rendering my animations/models at a low resolution and calling it a day. Results are decent but take some trial and error. IIRC, some folks have put in more legwork with fancy post-processing to eliminate things like pixel flickering, but I can't find any links right now.
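The render-low-then-upscale trick amounts to nearest-neighbor scaling; a minimal sketch on a bare 2-D pixel grid (no image library assumed):

```python
def upscale_nearest(pixels, factor):
    """Nearest-neighbor upscale of a 2-D grid: every low-res pixel
    becomes a factor x factor block, keeping hard pixel edges."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in pixels
        for _ in range(factor)
    ]

# A 2x2 checker upscaled to 4x4:
print(upscale_nearest([[0, 1], [1, 0]], 2))
# -> [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
```

In practice you'd let the GPU do this (render target at low resolution, blit with point sampling), which is exactly the "render small, scale up" approach.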
[0]: https://www.youtube.com/watch?v=ljjUoup2uTw
[1]: https://www.dillongoostudios.com/gooengine