oldfuture's comments

oldfuture | 2 months ago | on: Umbrel – Personal Cloud

couldn't this be solved by adding an external NAS for redundancy, plus an open-source application or extension that manages the syncing?

making self-hosting more seamless is key; in the long term we simply can't afford to depend on third parties for access to our own data

oldfuture | 2 months ago | on: Democracy in the Workplace: Co-ops [video]

Behind Italy’s Parmigiano Reggiano is a radical tradition of cooperatives. In some areas they make up nearly a fifth of GDP, without leaving democracy outside the factory doors

oldfuture | 5 months ago | on: Impact of Google's num=100 Removal on 77% of the Web

Google's removal of the num=100 parameter last month makes it harder for third-party tools (e.g. ChatGPT, Perplexity) to access results beyond the first 10

This is badly hurting the visibility of any new, emerging site, as we can already see in the data

oldfuture | 5 months ago | on: Road to ZK Implementation: Nethermind Client's Path to Proofs

One thing worth stressing is that the witness + executor layer is the critical trust boundary here.

In classic Ethereum, bugs are noisy: if one client diverges, other clients complain, and consensus fails until fixed.

In zk Ethereum, bugs can be silent: the proof validates the wrong execution and everyone downstream accepts it as truth.

I mean that the witness is like a transcript of everything the EVM touched while running a block: contract code, storage slots, gas usage, etc. so you can replay the block later using only this transcript, without needing the full Ethereum state.

For security, that witness ideally needs to be cryptographically bound to the block (e.g., via Merkle commitments), so no one can tamper with it.

The executor is the piece that replays that transcript deterministically. If it does so correctly, then you can generate a zk proof saying “this block really executed as Ethereum says it should.” But correctness here isn’t binary: it means bit-for-bit agreement with the Yellow Paper and all EIPs, including tricky cases like precompile gas rules.

So the danger is in the details. If the witness omits even one corner case, or the executor diverges subtly, the zk system can still generate a perfectly valid proof, but of the wrong thing. zk proofs don’t check what you proved, only that you proved it consistently. In today’s consensus model, by contrast, client bugs show up quickly when nodes disagree.
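To make the binding idea concrete, here is a toy sketch in plain Python. All names are hypothetical and nothing here comes from any real client: the witness is committed to with a simple hash chain, and replay refuses any transcript that no longer matches the commitment. A real implementation would instead verify Merkle/Verkle proofs against the block's state root.

```python
import hashlib
from dataclasses import dataclass

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

@dataclass(frozen=True)
class WitnessEntry:
    # one "thing the EVM touched": e.g. a storage slot or contract code
    key: str
    value: str

def commit(entries: list[WitnessEntry]) -> bytes:
    """Toy commitment: hash-chain over all entries (a real client
    would use Merkle/Verkle proofs against the block's state root)."""
    acc = b"\x00" * 32
    for e in entries:
        acc = h(acc + h(e.key.encode()) + h(e.value.encode()))
    return acc

def replay(entries: list[WitnessEntry], expected_root: bytes) -> dict:
    """Replay succeeds only if the transcript matches the commitment;
    otherwise a tampered witness could 'prove' the wrong execution."""
    if commit(entries) != expected_root:
        raise ValueError("witness not bound to block: refusing to replay")
    return {e.key: e.value for e in entries}

witness = [WitnessEntry("slot0", "0x01"), WitnessEntry("code@0xabc", "0x6001")]
root = commit(witness)
state = replay(witness, root)  # bound witness: replay proceeds
tampered = [WitnessEntry("slot0", "0xFF")] + witness[1:]
# replay(tampered, root) would raise ValueError: the transcript no
# longer matches the committed root
```

The point of the sketch is the failure mode: without that `commit` check, `replay(tampered, root)` would happily return a different state, and a proof over that replay would be "valid" but about the wrong computation.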

So while the compilation and toolchain work here is impressive, the real challenge is making sure the witness and executor are absolutely faithful to Ethereum semantics, with strong integrity guarantees. Otherwise you risk building cryptographic certainty about the wrong computation. This makes the witness/executor correctness layer the single point of failure in my view, the place where human fallibility can undermine mathematical guarantees. Looking forward to understanding how this problem will be tackled.

oldfuture | 5 months ago | on: Meta Ray-Ban Display

You have more control, in theory, on a cellphone, and so do the people around you. With the glasses you really have no way to tell whether they are listening or watching what you see. Most of the time a phone's sensors are partially blocked by a bag or a pocket, so it really can't be compared with eyewear.

oldfuture | 5 months ago | on: Meta Ray-Ban Display

the fact that surveillance capitalism, or rather surveillance oligarchy, is already here doesn't mean we have to support it going forward; it can only get worse if nobody reacts

oldfuture | 5 months ago | on: Meta Ray-Ban Display

https://www.wired.com/story/tiktok-promotes-stickers-for-sec...

Why they shouldn't be allowed:

1. The glasses have cameras and microphones capable of recording people nearby, often without their knowledge (e.g. the recording indicator can be subtle or blocked; “GhostDot” stickers are being sold to cover the LED indicator light so others won’t see when recording is happening).

2. As I remember, Meta has changed its privacy policy so that voice recordings are stored in the cloud (for up to one year), and “Hey Meta” voice activation with the camera may be enabled by default, meaning more frequent analysis of what the camera sees to train AI models.

3. The possibility that someone wearing what look like ordinary sunglasses might be recording you at any time can create a chilling effect: people may feel uneasy, censor themselves, avoid public spaces, etc.

oldfuture | 6 months ago

why is he so fascinated by this topic?

why, while he seemingly wishes for a katechon (a restraint), does his portfolio appear to be accelerating the concentration of power, in the very same way the antichrist in his view would?

oldfuture | 7 months ago | on: g-AMIE: Towards a Safer Medical AI

Guardrailed AMIE (g-AMIE) from DeepMind is an AI system that:

- conducts patient interviews to gather medical history

- never shares medical advice or a diagnosis unless it has been reviewed and approved by a licensed doctor

apparently g-AMIE followed safety rules 90% of the time, compared to 72% for human doctors
