arc619 | 7 months ago | on: Rust running on every GPU
arc619's comments
arc619 | 1 year ago | on: Java Language Update – a look at where the language is going by Brian Goetz
However, there's no need to fuse data and code together as a single "unit" conceptually as OOP does, where you must have particular data structures to use particular behaviours.
For example, let's say I have a "movement" process that adds a velocity type to a position type. This process is one line of code. I can also use the same position type independently for, say, UI.
To do this in an OOP style, you end up with an "Entity" superclass, a "Positional" subclass holding X and Y, and another subclass, "Moves", with velocity data. These data types are now strongly coupled, and everything that uses them must know about this hierarchy.
UI in this case would likely have a "UIElement" superclass and different subclass structures with different couplings. Now UI needs a separate type to represent the same position data. If you want a UI element to track your entity, you'd need adapter code to "convert" the position data to the right container to be used for UI. More code, more complexity, less code sharing.
Alternatively, maybe I could add position data to "Entity" and base UI from the "Positional" type.
Now throw in a "Render" class. Does that have its own position data? Does it inherit from "Entity", or "Positional"? And how do we share the code for rendering a graphic between "Entity" and "UIElement"?
Thus begins the inevitable march to God objects. You want a banana, you get a gorilla holding a banana and the entire jungle.
Meanwhile, I could have just written a render procedure that takes a position type and graphic type, used it in both scenarios, and moved on.
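To make the contrast concrete, here's a minimal Python sketch of that data-oriented version (the names Position, Velocity, move, and render are made up for illustration, not from any particular codebase):

```python
from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

def move(pos: Position, vel: Velocity) -> Position:
    # The whole "movement" process: one line of real work.
    return Position(pos.x + vel.dx, pos.y + vel.dy)

def render(pos: Position, graphic: str) -> str:
    # Works for a game entity or a UI element alike: render only
    # needs a position and a graphic, not a type hierarchy.
    return f"{graphic}@({pos.x},{pos.y})"

# The same Position type serves both a moving entity and a static UI element.
player = move(Position(0.0, 0.0), Velocity(1.0, 2.0))
print(render(player, "ship"))                 # entity rendering
print(render(Position(10.0, 5.0), "button"))  # UI rendering, no adapter code
```

No superclass, no adapter: the UI and the entity share the position type and the render procedure directly.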
What do I gain from all this? I've increased the complexity and made everything worse. Are you already thinking about better hierarchies that could solve this particular issue? How can you future-proof this for unexpected changes? That thinking process becomes a huge burden, and it produces brittle code.
> you risk a procedural programming style lacking the benefits of encapsulation. This can increase the risk of data corruption and reduce data integrity...
You can use data encapsulation fine without taking on the mantle of OOP. I'm not sure why you think this would introduce data corruption/affect integrity.
There are plenty of compositional and/or functional patterns beyond OOP and procedural programming to choose from, but I'd hardly consider using procedural programming a "risk". Badly written code is bad regardless of the pattern you use.
That's not to say procedural programming is all you need, but at the end of the day, the computer only sees procedural code. Wrapping things in objects doesn't make the code better, just more baroque.
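As a rough sketch of encapsulation without an OOP hierarchy (the Temperature type and to_celsius procedure are hypothetical examples): integrity can be enforced at construction time on an immutable value, with free procedures acting on the data.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Temperature:
    # Immutable value type: no setters can corrupt it after creation.
    kelvin: float

    def __post_init__(self):
        # Integrity is enforced at the boundary, not via an inheritance tree.
        if self.kelvin < 0:
            raise ValueError("temperature below absolute zero")

def to_celsius(t: Temperature) -> float:
    # A plain procedure acting on the data; no class hierarchy needed.
    return t.kelvin - 273.15

print(to_celsius(Temperature(300.0)))
```

The invariant (no negative kelvin) is protected just as well as it would be behind getters and setters, and the data stays free to be used by any procedure that wants it.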
arc619 | 1 year ago | on: Java Language Update – a look at where the language is going by Brian Goetz
OOP's core tenet of "speciating" processing via inheritance in the hope of sharing subprocesses does precisely the opposite: defining "is-a" relationships, by definition, excludes sharing similar processing in a different context, and subclassing only makes it worse by further increasing specialisation. So we have adapters, factories, dependency injection, and so on to cope with the coupling of data and code. A big enough OOP system inevitably converges towards "God objects" where all potential states are superimposed.
On top of this, OOP requires you to carefully consider ontological categories to group your processing, in the guise of "organising" your solution. Sometimes this is harder than actually solving the problem, as this static architecture has to somehow be flexible and yet predict potential future requirements without being overengineered. That's necessary because the cost to change an OOP architecture is proportional to how much of it you have.
Of course, these days most people say not to use deep inheritance stacks. So, what is OOP left with? Organising code in classes? Sounds good in theory, but again this is another artificial constraint that bakes present and future assumptions into the code. A simple parsing rule like UFCS (uniform function call syntax) does the job better IMHO, without imposing structural assumptions.
Data wants to be pure, and code should be able to act on this free-form data independently, not architecturally chained to it.
Separating code and data lets you take advantage of compositional patterns much more easily, whilst also reducing structural coupling and thus allowing design flexibility going forward.
That's not to say we should throw out typing - quite the opposite, typing is important for data integrity. You can have strong typing without coupled relationships.
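A sketch of that idea in Python, using structural typing (the HasPosition protocol and the two concrete types are invented for illustration): the type checker verifies that a type has the right shape, yet no "is-a" relationship is declared anywhere.

```python
from dataclasses import dataclass
from typing import Protocol

class HasPosition(Protocol):
    # Structural interface: any type with x and y floats satisfies it,
    # without inheriting from, or even knowing about, this Protocol.
    x: float
    y: float

@dataclass
class Entity:
    x: float
    y: float
    health: int

@dataclass
class UIElement:
    x: float
    y: float
    label: str

def distance_from_origin(p: HasPosition) -> float:
    return (p.x ** 2 + p.y ** 2) ** 0.5

# Both types are accepted purely by shape; neither declares a relationship
# to the other or to HasPosition.
print(distance_from_origin(Entity(3.0, 4.0, 100)))
print(distance_from_origin(UIElement(6.0, 8.0, "ok")))
```

The data types stay strongly typed and independent; the coupling lives only in the signature of the procedure that needs it.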
Personally, I think that grouping code and data types together as a "thing" is the issue.
arc619 | 1 year ago | on: Rusty.hpp: A Borrow Checker and Memory Ownership System for C++20
You're right. I was sure I'd read that it would announce when it does a copy over a sink, but now that I look for it, I can't find it!
> The static analysis is not very transparent.
There is '--expandArc', which shows the compile-time transformations performed, but that's a bit more in depth.
arc619 | 1 year ago | on: Rusty.hpp: A Borrow Checker and Memory Ownership System for C++20
Where Rust won't compile when a lifetime can't be determined, IIRC Nim's static analysis will make a copy (and tell you), so it acts more as a performance optimisation than a correctness guarantee.
Regardless of the details and extent of the borrow checking, however, it shows that it's possible in principle to infer lifetimes without explicit annotation. So, perhaps C++ could support it.
As you say, it's the semantics of the syntax that matter. I'm not familiar with C++'s compiler internals though so it could be impractical.
arc619 | 1 year ago | on: Rusty.hpp: A Borrow Checker and Memory Ownership System for C++20
arc619 | 1 year ago | on: Leaving Rust gamedev after 3 years
So, while yes it's nice in theory, in practice it often doesn't add as much performance as you'd expect.
arc619 | 1 year ago | on: Arraymancer – Deep learning Nim library
arc619 | 2 years ago | on: Fortran vs Python: The counter-intuitive rise of Python in scientific computing (2020)
In that regard, I'm surprised Nim hasn't taken off for scientific computing. It has a similar syntax to Python with good Python interop (eg Nimpy), but is competitive with FORTRAN in both performance and bit twiddling. I would have thought it'd be an easier move to Nim than to FORTRAN (or Rust/C/C++). Does anyone working in SciComp have any input on this - is it just a lack of exposure/PR, or something else?
arc619 | 2 years ago | on: How to Pick a Programming Language (2002)
I was under the impression Go originally avoided generics more because of their perceived abstract complexity for developers - the idea being that they're hard for new recruits to understand.
arc619 | 2 years ago | on: Nim 2.0
arc619 | 2 years ago | on: Nim 2.0
"var data: MyObject" is on the stack. "var arr: array[1000, MyObject]" is allocated on the stack sequentially.
Only dynamic seq or ref types use the heap by default.
arc619 | 2 years ago | on: Nim 2.0
So "var i: int" is a value, while "var i: ref int" is a heap-allocated reference that's deterministically managed, like a borrow-checked smart pointer, eliding reference counting where possible.
You can turn off GC or use a different GC, but some of the stdlib uses them, so you'd need to avoid those or write/use alternatives.
Let me say, though: the GC is realtime-capable and not stop-the-world. It's not like Java; it's not far off Rust, without the hassle.
arc619 | 2 years ago | on: Nim 2.0
Hopefully it'll get updated/replaced some time, but there are plenty of faster 3rd party ones already.
arc619 | 2 years ago | on: Nim 2.0
arc619 | 2 years ago | on: Nim 2.0
After programming professionally for 25 years, IMO Nim really is the best of all worlds.
Easy to write like Python, strongly typed but with great inference, and defaults that make it fast and safe. Great for everything from embedded to HPC.
The language has an amazing way of making code simpler. Eg UFCS, generics, and concepts give the best of OOP without endless scaffolding to tie you up in brittle data relationships just to organise things. Unlike Python, though, ambiguity is a compile time error.
I find the same programs are much smaller and easier to read and understand than most other languages, yet there's not much behind the scenes magic to learn because the defaults just make sense.
Then the compile time metaprogramming is just on another level. It's straightforward to use, and a core part of the language's design, without resorting to separate dialects or substitution games. Eg, generating bespoke parsing code from files is easy - removing the toil and copypasta of boilerplate. At the same time, it compiles fast.
IMHO it's easier to write well than Python thanks to an excellent type system, but matches C/C++ for performance, and the output is trivial to distribute with small, self contained executables.
It's got native ABI compatibility with C, C++, ObjC, and JS, a fantastic FFI, and great Python interop to boot. That means you can use established ecosystems directly, without needing to rewrite them.
Imagine writing Python-style pseudocode for an ESP32 and it being super efficient without trying, with bare metal control when you want. Then writing a web app with backend and frontend in the same efficient language. Then writing a fast-paced bullet hell and not even worrying about GC because everything's stack allocated unless you say otherwise. That's been my Nim experience. Easy, productive, efficient, with high control.
For business, there's a huge amount of value in hacking up a prototype like you might in Python, and it's already fast and lean enough for production. It could be a company's secret weapon.
So, ahem. If anyone wants to hire a very experienced Nim dev, hit me up!
arc619 | 2 years ago | on: Functions and algorithms implemented purely with TypeScript's type system
I'd love to know if anyone could reproduce the N-queens example in Nim: https://www.richard-towers.com/2023/03/11/typescripting-the-...
I believe it is possible, but don't have the time to try it out.
> The Nim compiler includes a simple linear equation solver, allowing it to infer static params in some situations where integer arithmetic is involved.
From: https://nim-lang.org/docs/manual_experimental.html#concepts-...
arc619 | 2 years ago | on: Functions and algorithms implemented purely with TypeScript's type system
Nim is like this and it's fantastic. None of the weird special rules for metaprogramming, just Nim code manipulating ASTs at compile time, procedurally. You can even use the standard library.
arc619 | 2 years ago | on: “Just a statistical text predictor”
My reading of this - and please correct me if I'm wrong, I'm still learning - is that you're extracting hyper-parameter planes from the data flow in the model's embedding space?
It's really exciting to think of the hidden knowledge and relationships we could extract from our own linguistic interactions.
arc619 | 2 years ago | on: We aren't close to creating a rapidly self-improving AI
Check out https://arxiv.org/abs/2304.03442
arc619 | 7 months ago | on: Rust running on every GPU
There's also https://github.com/treeform/shady to compile Nim to GLSL.
Also, more generally, there's an LLVM-IR->SPIR-V compiler that you can use for any language that has an LLVM back end (Nim has nlvm, for example): https://github.com/KhronosGroup/SPIRV-LLVM-Translator
That's not to say this project isn't cool, though. As usual with Rust projects, it's a bit breathy with hype (eg "sophisticated conditional compilation patterns" for cfg(feature)), but it seems well developed, focused, and most importantly, well documented.
It also shows some positive signs of being dog-fooded, and the author(s) clearly intend to use it.
Unifying GPU back ends is a noble goal, and I wish the author(s) luck.