top | item 46137814

pkilgore | 3 months ago

So it starts as a line, explodes into a huge 2D complex mess, and eventually, after many generations, returns to form the same 3.7-billion-cell-long line?

That's kind of amazing. I wish someone unpacked the units of abstraction/compilation that must surely exist here.

Surely they aren't developing this with 1s and 0s as the abstraction level!
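For anyone unfamiliar with the ground rules being iterated here, the whole thing runs on Conway's standard B3/S23 update. A minimal sketch in Python (the live-cell-set representation is purely illustrative; patterns this large are actually simulated with algorithms like Hashlife, not naive stepping):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life (B3/S23) on a set of (x, y) live cells."""
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 neighbors,
    # or 2 neighbors and was already alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# Example: a "blinker" (three cells in a row) oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

The spaceship under discussion is this same rule applied to a vastly larger initial line, which returns to a translated copy of itself after its full period.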

layer8|3 months ago

See here: https://conwaylife.com/forums/viewtopic.php?f=2&t=2040&start...

It’s also a relatively sparse line, as the number of live cells is less than a hundredth of the line’s extent: https://conwaylife.com/wiki/Unidimensional_spaceship_1

Retr0id|3 months ago

I'm barely able to follow, but this part was fun:

> The third and fourth arms are extreme compression construction arms "ecca", where a programming language interpreter is created and individual incoming letters are interpreted as instructions specifying which phase (mod 2) and line of glider to emit.

H8crilA|3 months ago

> Work started in 2016 and was completed on December 1, 2025.

Almost 10 years of development.

dkural|3 months ago

Only about 1.5% of the human genome is protein coding. The human genome is about 3 billion base pairs long.

tantalor|3 months ago

How many steps is the period? How far does it travel in that period? What direction does it go? Does it clean up after itself?

scotty79|2 months ago

Thank you for this description. I thought it was a glider for some 1-dimensional cellular automaton.

kevincox|2 months ago

Yes, that was my first reading as well. I thought "((1D Conway's Life) glider) found" but it is "(1D (Conway's Life glider)) found".

Romario77|3 months ago

[deleted]

tomtomtom777|3 months ago

> I asked AI to explain it to me,

We all know how to do that, but that's not why we're here.

alwa|3 months ago

I’m not sure where our guidelines/norms are on this kind of thing, but I get the sense that most of us feel very capable of pasting articles into LLMs ourselves.

What we’re less capable of—and the reason we look to each other here instead—is distinguishing where the LLM’s errors or misinterpretations lie. The gross mistakes are often easy enough to spot, but the subtle misstatements get masked by its overconfidence.

Luckily for us, a lot of the same people actually doing the work on the stuff we care about tend to hang out around here. And often, they’re kind enough to duck in and share.

Thank you in any case for being upfront about it. It’s just that it’d be a shame and a real loss if the slop noise came to drown out the signal here.