helltone's comments

helltone | 1 month ago | on: Ask HN: What did you find out or explore today?

I'm building in robotics. I was setting up a new 3D camera today and found that the 10m active USB-C cable I bought transfers power in both directions but only transfers data in one; it turns out to be some weird video-only USB variant. Next I needed to plug a gripper into a Modbus controller, which uses an M8 8-pole 20cm cable. The controller manufacturer recently decided to switch from a male to a female connector, so now the cable needs to be male-to-male. After searching online for hours, I believe such a cable is impossible to find, as everyone only sells male-to-female.

I'm continuously surprised by how difficult it is to plug things together and how non-descriptive cable "standards" are about the actual capabilities of cables and connectors.

helltone | 6 months ago | on: Weaponizing image scaling against production AI systems

No amount of fine-tuning can guarantee a model never does something. All it can do is reduce the likelihood of exploits, while also increasing the surprise factor when they inevitably happen. This is a fundamental limitation.

helltone | 6 months ago | on: Show HN: Luminal – Open-source, search-based GPU compiler

I have a background in program analysis, but I'm less familiar with the kind of kernels you are optimising.

- Can you give some more insight into why 12 ops suffice to represent your input programs?

- With such a small number of ops, isn't your search space full of repeated patterns? I understand the desire to avoid predefined heuristics, but it seems that learning some heuristics/patterns could massively reduce the space.

helltone | 7 months ago | on: Robot hand could harvest blackberries better than humans

Every time I see these headlines, the tech seems to be at least 10 years away from a product:

- demos are done in a controlled lab environment, without the crazy things that happen in the real world.

- there are no humans nearby, so none of the safety features that would be needed if this thing worked alongside/near humans.

- no regard for economics: expensive vision models, expensive hardware, and no consideration of maintenance and repair costs.

helltone | 8 months ago | on: Show HN: I built a tensor library from scratch in C++/CUDA

This is very cool. I'm wondering if some of the templates and switch statements would be nicer if there were an intermediate representation and a compiler-like architecture.

I'm also curious about how this compares to something like Jax, and likewise to zml.
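To show the shape I mean: a toy, hypothetical sketch (in Python rather than C++, purely for illustration; the ops and names are invented, not your library's API) where every tensor op is a node in a tiny IR, and a backend is just one walk over the graph instead of per-op templates and switch statements:

```python
from dataclasses import dataclass

# Hypothetical minimal IR: each op is a node; a backend is an
# interpreter (or kernel emitter) dispatching over node types.
@dataclass(frozen=True)
class Node:
    op: str              # e.g. "const", "add", "mul"
    inputs: tuple = ()
    value: float = 0.0   # payload, only used by "const"

def const(v):  return Node("const", value=v)
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

# One dispatch table replaces scattered switch statements; a CUDA
# backend would walk the same graph and emit kernels instead.
EVAL = {
    "const": lambda n, xs: n.value,
    "add":   lambda n, xs: xs[0] + xs[1],
    "mul":   lambda n, xs: xs[0] * xs[1],
}

def evaluate(n: Node) -> float:
    xs = [evaluate(i) for i in n.inputs]
    return EVAL[n.op](n, xs)

# (2 + 3) * 4
g = mul(add(const(2.0), const(3.0)), const(4.0))
print(evaluate(g))  # 20.0
```

The point is that new backends or optimisation passes then only touch the graph, not every op's template.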

helltone | 9 months ago | on: Gradients Are the New Intervals

No? Or maybe I'm missing something. If the goal is to be able to bound the computation of f, you can:

1) compute f with interval arithmetic

2) compute f normally and f' with interval arithmetic

3) compute f rounding towards zero, compute f' from f rounded towards infinity, and round f' up (if f positive) or round f' down (if f negative).

In all three cases you can use what you computed to figure out bounds on f: (1) is direct, while the other two need extra work.
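Option (1) as a toy Python sketch (illustrative only; a sound implementation would also need outward rounding of each endpoint under floating point, and the example f is invented):

```python
# Minimal interval arithmetic: evaluating f on an Interval yields
# conservative bounds on f over that input range.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        # All endpoint products, since signs can flip the ordering.
        ps = [self.lo * o.lo, self.lo * o.hi,
              self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def f(x):
    # Works unchanged on numbers or Intervals: f(x) = x*x + x.
    return x * x + x

box = Interval(-1.0, 2.0)
print(f(box))  # [-3.0, 6.0], a (loose) enclosure of f over [-1, 2]
```

Note the bounds are conservative but not tight: naive interval evaluation of x*x ignores the correlation between the two occurrences of x.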

helltone | 9 months ago | on: Gradients Are the New Intervals

I think perhaps this could be done in other ways that don't require interval arithmetic for autodiff, only that the gradient be conservatively computed, in other words by carrying the numerical error from f into f'.

helltone | 9 months ago | on: Ask HN: Anyone working in traditional ML/stats research instead of LLMs?

I'm building a tool to make ML on tabular data (forecasting, imputation, etc.) easier and more accessible. The goal is to go from zero to a basic working model in minutes, even if the initial model is not perfect, and then improve it iteratively, continuously evaluating each step with metrics and comparisons against the previous model. So it's less ML foundations research and more about packaging it in a user-friendly way with a nice workflow. If that's interesting, feel free to reach out (email in profile).

helltone | 1 year ago | on: Decompiling 2024: A Year of Resurgance in Decompilation Research

Program equivalence is undecidable in general, but also in practice: in my experience, most interesting cases quickly escalate to requiring an unreasonable amount of compute. Personally, I think it is easier to produce correct-by-construction decompilation by applying sequences of known-correct transformations, rather than trying to reconstruct correctness a posteriori. So perhaps the LLM could produce such a sequence of transforms, rather than only outputting the final decompiled program.
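As a toy sketch of what I mean (hypothetical Python with invented rewrite rules; a real pipeline would operate on an actual decompiler IR): the untrusted model only names rules, and a small trusted engine applies them, so correctness comes from the rules themselves.

```python
# Tiny expression IR: tuples like ("add", a, b), ("mul", a, b),
# ("shl", a, k), plus ints and variable-name strings.

def rw_shl_to_mul(e):
    # Known-correct rewrite: x << k  ==  x * 2**k
    if isinstance(e, tuple) and e[0] == "shl":
        return ("mul", e[1], 2 ** e[2])
    return e

def rw_add_self_to_mul(e):
    # Known-correct rewrite: x + x  ==  x * 2
    if isinstance(e, tuple) and e[0] == "add" and e[1] == e[2]:
        return ("mul", e[1], 2)
    return e

RULES = {"shl_to_mul": rw_shl_to_mul,
         "add_self_to_mul": rw_add_self_to_mul}

def apply_sequence(expr, rule_names):
    # The (untrusted) model only picks rule names; each applied rule
    # preserves semantics, so the output is correct by construction.
    for name in rule_names:
        expr = RULES[name](expr)
    return expr

e = ("shl", "x", 3)                       # compiled form: x << 3
print(apply_sequence(e, ["shl_to_mul"]))  # ("mul", "x", 8)
```

A bad rule choice then just fails to fire or produces an unhelpful (but still equivalent) program, never an incorrect one.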