
AmiteK | 1 month ago

I think the disagreement is about where inference belongs, not whether LLMs are capable.

Git diffs + LLM inference work well for understanding changes once. What I’m targeting is reducing the need to re-infer semantic surface changes every run, especially across large refactors or long-running workflows.

Today, LogicStamp derives deterministic semantic contracts and hashes, and watch mode surfaces explicit change events. The direction this enables is treating those derived facts as a semantic baseline (e.g. drift detection / CI assertions) instead of relying on repeated inference from raw diffs.
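To make "deterministic semantic contracts and hashes" concrete, here is a minimal sketch in Python. This is not LogicStamp's actual contract format or implementation, just an illustration of the idea: walk the AST, capture only the semantic surface (names and signatures), serialize canonically, and hash, so formatting and body edits don't move the hash while signature changes do.

```python
# Sketch only: a toy "semantic contract" hash over a module's public
# surface (top-level function names + argument shapes). The real
# LogicStamp contract is assumed to be richer; this shows determinism.
import ast
import hashlib
import json

def semantic_hash(source: str) -> str:
    """Same semantic surface in, same hash out, regardless of layout."""
    tree = ast.parse(source)
    surface = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            surface.append({
                "name": node.name,
                "args": [a.arg for a in node.args.args],
                "n_defaults": len(node.args.defaults),
            })
    surface.sort(key=lambda f: f["name"])          # order-independent
    canonical = json.dumps(surface, sort_keys=True)  # canonical serialization
    return hashlib.sha256(canonical.encode()).hexdigest()

# Body/formatting changes leave the hash fixed; signature changes don't.
a = semantic_hash("def f(x, y=1):\n    return x + y\n")
b = semantic_hash("def f(x, y=1):\n    return (x\n            + y)\n")
c = semantic_hash("def f(x, y, z=1):\n    return x\n")
assert a == b
assert a != c
```

The point of the canonical serialization step is that "same repo state + config ⇒ same artifact" only holds if every incidental ordering (dict keys, discovery order) is pinned down before hashing.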

By “repeatability” I mean the artifacts, not agent behavior: same repo state + config ⇒ same semantic model. I don’t yet have end-to-end agent performance evals versus AGENTS.md + LSP.

verdverm | 1 month ago

> By “repeatability” I mean the artifacts

> Inference works well per session ... doesn't give artifact ... explicitness and repeatability across runs.

When you write this, it sounds like you are talking about repeatability between inference sessions, and that this artifact enables that. It does not read as applying the repeatability to the artifact itself, which one would assume anyway, since it is autogenerated from code via AST walking.

AmiteK | 1 month ago

I agree - that’s on me for the wording. I’m not claiming repeatability of agent inference or LLM sessions.

By “repeatability” I mean the extraction itself: given the same repo state + config, the derived semantic artifact is identical every time. That gives CI and agents a stable reference point, but it doesn’t make agent behavior deterministic.

The value is in not having to re-infer structure from raw source each run - not in making inference runs repeatable.
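The "stable reference point" for CI described above can be sketched as a drift gate. The interface below is hypothetical (LogicStamp's actual CLI/API is not shown): store the derived hash as a baseline once, then fail the run whenever a fresh extraction from the current repo state produces a different one.

```python
# Sketch only: a CI-style drift assertion against a stored baseline.
# derive_hash stands in for the deterministic extraction step; in a
# real setup it would hash the derived semantic artifact, not a string.
import hashlib

def derive_hash(artifact: str) -> str:
    """Deterministic: same artifact in, same hash out, every run."""
    return hashlib.sha256(artifact.encode()).hexdigest()

def assert_no_drift(baseline: str, artifact: str) -> None:
    """CI gate: raise when the semantic surface changed since baseline."""
    current = derive_hash(artifact)
    if current != baseline:
        raise AssertionError(f"semantic drift: {baseline[:12]} -> {current[:12]}")

baseline = derive_hash("contract-v1")   # written once, committed as baseline
assert_no_drift(baseline, "contract-v1")  # same state: passes silently
try:
    assert_no_drift(baseline, "contract-v2")  # changed surface: fails
except AssertionError:
    print("drift detected")
```

Because the gate compares derived artifacts rather than re-running inference, the check is cheap and byte-for-byte repeatable across runs, which is the distinction the thread is drawing.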