dsrtslnd23
|
1 month ago
|
on: Show HN: LemonSlice – Upgrade your voice agents to real-time video
Where can I find the 20B model? It sounded like it would be open, but I am not sure given the phrasing...
dsrtslnd23
|
1 month ago
|
on: Ask HN: What's the current best local/open speech-to-speech setup?
Yes, I am currently playing with pipecat, both with an ASR + LLM + TTS pipeline and with speech-to-text (Ultravox) + TTS, but I haven't been successful with local speech-to-speech setups yet.
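For anyone unfamiliar with the distinction, this is roughly the cascaded loop I mean (not pipecat's actual API; the asr/llm/tts functions are stand-ins for whatever local models you plug in):

```python
def asr(audio: bytes) -> str:
    # stand-in for a local speech-to-text model (e.g. a Whisper variant)
    return "hello there"

def llm(text: str) -> str:
    # stand-in for a local LLM generating the reply
    return f"You said: {text}"

def tts(text: str) -> bytes:
    # stand-in for a local text-to-speech model
    return text.encode("utf-8")

def one_turn(audio_in: bytes) -> bytes:
    """One conversational turn in the cascade: audio in -> audio out."""
    return tts(llm(asr(audio_in)))

print(one_turn(b"\x00\x01"))
```

A true speech-to-speech model would replace all three stages with a single model, which is exactly the part I haven't gotten working locally.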
dsrtslnd23
|
1 month ago
|
on: Ask HN: What's the current best local/open speech-to-speech setup?
Oh, very interesting indeed! Thanks.
dsrtslnd23
|
1 month ago
|
on: Waypoint-1: Real-Time Interactive Video Diffusion from Overworld
Great work! Will the medium model also be open/Apache-licensed?
dsrtslnd23
|
1 month ago
|
on: Waypoint-1: Real-Time Interactive Video Diffusion from Overworld
10,000 hours of training data seems quite low for a world model?
dsrtslnd23
|
1 month ago
|
on: Linum v2 - 2B parameter, Apache 2.0 licensed text-to-video models (360p, 720p)
Any info on VRAM requirements and latency for the 720p model? With only 2B parameters, it seems like it should be quite fast, I guess.
dsrtslnd23
|
1 month ago
|
on: Show HN: Text-to-video model from scratch (2 brothers, 2 years, 2B params)
Any idea on the minimum VRAM footprint with those tweaks? 20GB seems high for a 2B model. I guess the T5 encoder is responsible for that.
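To put rough numbers on the T5 theory (assuming fp16/bf16 weights and a T5-XXL-class encoder at ~4.7B params; both figures are my assumptions, not from the post):

```python
def fp16_gb(n_params: float) -> float:
    """Rough weight footprint in GB at 2 bytes per parameter (fp16/bf16)."""
    return n_params * 2 / 1e9

dit = fp16_gb(2e9)    # the 2B video model itself: ~4 GB
t5 = fp16_gb(4.7e9)   # assumed T5-XXL-class text encoder: ~9.4 GB

# weights alone land around 13-14 GB; activations, the VAE, and framework
# overhead could plausibly push total usage toward the reported 20 GB
print(round(dit, 1), round(t5, 1), round(dit + t5, 1))
```

If that's right, quantizing or offloading just the text encoder would be the biggest single win.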
dsrtslnd23
|
1 month ago
|
on: Claude Code is suddenly everywhere inside Microsoft
Any thoughts on how this compares to Aider? It seems polished but will probably be much more expensive in terms of tokens.
dsrtslnd23
|
1 month ago
|
on: Launch HN: Constellation Space (YC W26) – AI for satellite mission assurance
Is the inference running on-orbit or ground-side? I guess SWaP (size, weight, and power) is a major constraint for the former. Are you using FPGAs or something like a Jetson?
dsrtslnd23
|
1 month ago
|
on: Why does SSH send 100 packets per keystroke?
In aerial robotics, 900 MHz telemetry links (like Microhard) are standard, and running SSH over them is common practice, I guess.
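Back-of-envelope for why the packet count matters on such links (the ~100 bytes/packet and the 57.6 kbps link rate are my assumptions, picked as a typical low-rate telemetry config):

```python
def seconds_per_keystroke(packets: int, bytes_per_packet: int, link_bps: float) -> float:
    """Time to push one keystroke's worth of SSH traffic through a serial link."""
    return packets * bytes_per_packet * 8 / link_bps

# 100 packets x ~100 bytes over a 57.6 kbps link
t = seconds_per_keystroke(100, 100, 57_600)
print(round(t, 2))  # roughly 1.4 s of link time per keystroke
```

On a fast LAN this overhead is invisible; on a narrowband radio link it completely dominates the session.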
dsrtslnd23
|
1 month ago
|
on: Qwen3-TTS family is now open sourced: Voice design, clone, and generation
Are you using an API proxy to route GLM into the Claude Code CLI? Or do you mean side-by-side usage? Not sure if custom endpoints are supported natively yet.
dsrtslnd23
|
1 month ago
|
on: Show HN: CLI for working with Apple Core ML models
Does this handle conversion and quantization from PyTorch? Or is it strictly for running existing Core ML files?
dsrtslnd23
|
1 month ago
|
on: Extracting a UART Password via SPI Flash Instruction Tracing
Do you know the SPI clock frequency? I am trying to figure out the sampling rate required to reliably capture the trace; that determines the tier of logic analyzer needed, I guess.
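The back-of-envelope I have in mind, assuming the common rule of thumb of ~4x oversampling for reliable digital capture (the actual SPI clock is the unknown I'm asking about; the example clocks below are hypothetical):

```python
def min_sample_rate_msps(spi_clock_mhz: float, oversampling: int = 4) -> float:
    """Minimum logic-analyzer sample rate (MS/s) for a given SPI clock,
    using a rule-of-thumb oversampling factor."""
    return spi_clock_mhz * oversampling

# e.g. a slow 25 MHz flash vs a fast 104 MHz one
print(min_sample_rate_msps(25))    # 100 MS/s: within reach of cheap analyzers
print(min_sample_rate_msps(104))   # 416 MS/s: needs a much more serious tool
```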
dsrtslnd23
|
1 month ago
|
on: Keeping 20k GPUs healthy
Jetson uses LPDDR, though. H100 failures seem driven by HBM heat sensitivity and the 700W+ power envelope. That is a completely different thermal density, I guess.
dsrtslnd23
|
1 month ago
|
on: Show HN: First Claude Code client for Ollama local models
What hardware are you running the 30b model on? I guess it needs at least 24GB VRAM for decent inference speeds.
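Rough math behind my 24GB guess (assuming a ~4-bit quant, which is what Ollama typically ships by default; the effective bits/param figure is my assumption):

```python
def quantized_weight_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight footprint in GB for a quantized model."""
    return n_params * bits_per_param / 8 / 1e9

# ~4.5 effective bits/param is typical for Q4_K-style quants
w = quantized_weight_gb(30e9, 4.5)
print(round(w, 1))  # roughly 17 GB of weights
# add KV cache and runtime overhead and a 24 GB card looks like the
# practical floor for keeping everything resident at decent speeds
```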
dsrtslnd23
|
1 month ago
|
on: Qwen3-TTS family is now open sourced: Voice design, clone, and generation
Do you have the RTF for the 1080? I am trying to figure out whether the 0.6B model is viable for real-time inference on edge devices.
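For reference, RTF as I mean it here is the real-time factor, synthesis time divided by audio duration, so anything under 1.0 keeps up with playback (the timing numbers below are hypothetical):

```python
def rtf(synthesis_seconds: float, audio_seconds: float) -> float:
    """Real-time factor: values below 1.0 mean the model keeps up with playback."""
    return synthesis_seconds / audio_seconds

# hypothetical example: 2.5 s to synthesize 10 s of speech
print(rtf(2.5, 10.0))  # 0.25, comfortably real-time
```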
dsrtslnd23
|
1 month ago
|
on: Qwen3-TTS family is now open sourced: Voice design, clone, and generation
Any idea on the VRAM footprint for the 1.7B model? I guess it fits on consumer cards but I am wondering if it works on edge devices.
dsrtslnd23
|
2 months ago
|
on: 1.5 TB of VRAM on Mac Studio – RDMA over Thunderbolt 5
Any thoughts on the GB300 workstation with 768GB RAM (from NVIDIA, Asus, Dell, ...)?
Although many announcements have been made, it does not seem to be available yet.
It does have faster interconnects but will probably be much more expensive.
dsrtslnd23
|
2 months ago
|
on: macOS 26.2 enables fast AI clusters with RDMA over Thunderbolt
Do you have a source for that? I am trying to find pricing information but have not been successful yet.
dsrtslnd23
|
2 months ago
|
on: macOS 26.2 enables fast AI clusters with RDMA over Thunderbolt
What about a GB300 workstation with 784GB of unified memory?