top | item 46410134

DanexCodr | 2 months ago

```java
// Other sample
collatz := [1 to 1T]
for n in collatz {
    steps := 0
    current := n
    while current != 1 {
        current = if current % 2 == 0 { current/2 } else { 3*current + 1 }
        steps += 1
    }
    collatz[n] = steps
}

// On my phone, I can instantly check:
outln("27 takes " + collatz[27] + " steps")   // 111 steps
outln("871 takes " + collatz[871] + " steps") // 178 steps
```
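For reference, the same two spot checks can be reproduced eagerly in plain Python (no virtual ranges, just a memoized step counter):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def collatz_steps(n: int) -> int:
    """Number of Collatz steps needed to reach 1 from n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))   # 111
print(collatz_steps(871))  # 178
```

The difference is that this computes on call; the snippet above merely *describes* the table and defers all work.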


onion2k | 2 months ago

That doesn't explain anything.

DanexCodr | 2 months ago

You've highlighted exactly why my example was poorly chosen - it sounds physically impossible. Let me explain what's actually happening:

We're NOT loading 13.66 days of 8K video. That would indeed be ~2.5 petabytes.

What we are doing is creating a virtual processing pipeline that could process that much data if you had it, but instead processes only what you actually need.

The Actual Code Behind This:

```java
// 1. Virtual reference, NOT loading
video := virtual_source("8k_video.mp4")  // O(1) memory - just metadata

// 2. Algorithm expressed at full scale
for frame in [0 to 33M] {        // VIRTUAL: 33 million frames
    for pixel in [0 to 33M] {    // VIRTUAL: 33 million pixels
        brightness = calculate_brightness(pixel)  // FORMULA, not computation

        // Creates conditional formula
        if brightness > 128 {
            pixels[pixel] = 255
        } elif brightness > 64 {
            pixels[pixel] = 128
        } else {
            pixels[pixel] = 0
        }
    }
}

// 3. Only compute specific frames (e.g., every 1000th frame for preview)
for preview_frame in [0, 1000, 2000, 3000] {
    actual_pixels = video[preview_frame].compute()  // Only NOW computes
    display(actual_pixels)                          // These 4 frames only
}
```
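The "record a formula now, compute on demand" idea above maps onto ordinary deferred evaluation. A minimal Python sketch (the `VirtualFrame` class and threshold rule are illustrative stand-ins, not the actual implementation):

```python
class VirtualFrame:
    """A frame reference that records operations without running them."""

    def __init__(self, index, ops=None):
        self.index = index      # which frame this refers to (metadata only)
        self.ops = ops or []    # recorded formulas, not yet executed

    def map_pixels(self, fn):
        # O(1): just append the formula, no pixels touched.
        return VirtualFrame(self.index, self.ops + [fn])

    def compute(self, pixels):
        # Only now do the recorded formulas run, on real data.
        for fn in self.ops:
            pixels = [fn(p) for p in pixels]
        return pixels

def threshold(brightness):
    if brightness > 128:
        return 255
    elif brightness > 64:
        return 128
    return 0

# Describe the operation for many frames cheaply...
frames = [VirtualFrame(i).map_pixels(threshold) for i in (0, 1000, 2000, 3000)]

# ...then force only one preview frame, with toy pixel data.
preview = frames[0].compute([30, 90, 200])
print(preview)  # [0, 128, 255]
```

Describing the pipeline costs a few list appends regardless of how many frames the video "has"; cost is only paid per frame actually computed.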

What Actually Happens in 50ms:

1. 0-45ms: Pattern detection creates optimization formulas
2. 5ms: Compute the few frames actually requested
3. 0ms: Loading video (never happens)

Better, More Honest Example:

A real use case would be:

```java
// Video editing app on phone
// User wants to apply filter to 10-second clip (240 frames)

clip_frames := video[frame_start to frame_end]  // 240 frames, virtual
filter := create_filter("vintage")              // Filter definition

// Design filter at full quality
for frame in clip_frames {
    frame.apply_filter(filter)  // Creates formula application
}

// Preview instantly at lower resolution
preview = clip_frames[0].downsample(0.25).compute()  // Fast preview

// Render only when user confirms
if user_confirms {
    for frame in clip_frames {
        output = frame.compute_full_quality()  // Now compute 240 frames
        save(output)
    }
}
```
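The preview-then-render flow above can be sketched with plain thunks: each frame's work is captured as a zero-argument function and forced only on display or save. All names and the stand-in "vintage" filter are hypothetical:

```python
def apply_filter(pixels, name):
    # Stand-in "vintage" filter: darken every pixel slightly.
    return [max(0, p - 20) for p in pixels]

def make_render(pixels, filter_name):
    # Capture the recipe; nothing is computed yet.
    return lambda: apply_filter(pixels, filter_name)

clip = [[100, 150, 200]] * 4                      # 4 toy "frames"
renders = [make_render(f, "vintage") for f in clip]

preview = renders[0]()                            # force one frame only
print(preview)  # [80, 130, 180]

user_confirms = True
if user_confirms:
    output = [r() for r in renders]               # now force every frame
```

Setting up `renders` is cheap no matter the clip length; the per-frame cost lands only when a thunk is called.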

The Real Innovation:

Separating algorithm design from data scale.

You can:

· Design algorithms assuming unlimited data
· Test with tiny samples
· Deploy to process only what's needed
· Scale up seamlessly when you have infrastructure
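In mainstream languages the closest everyday analogue to this separation is a lazy stream: write the algorithm against an unbounded sequence, then pull only a sample. A Python sketch (the stream and `brighten` step are illustrative):

```python
import itertools

def frames():
    # "Unlimited data": an endless stream of synthetic frame values.
    yield from itertools.count()

def brighten(stream, amount):
    # Algorithm written against the full stream; nothing runs yet.
    return (f + amount for f in stream)

pipeline = brighten(frames(), 10)

# "Tiny sample": only three elements are ever produced.
print(list(itertools.islice(pipeline, 3)))  # [10, 11, 12]
```

The same `brighten` definition works unchanged whether you pull three elements or three billion; only consumption determines cost.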

My Mistake:

The '50ms for 937 billion pixels' claim was misleading. What I should have said:

'50ms to create a processing algorithm that could handle 937 billion pixels, then instant access to any specific pixel.'

The value isn't in processing everything instantly (impossible), but in designing at scale without scale anxiety.