
krish678 | 2 months ago

Thank you for taking the time to look through the repository.

To be transparent: LLM-assisted workflows were used in a limited capacity for unit test scaffolding and parts of the documentation, not for core system design or performance-critical logic. All architectural decisions, measurements, and implementation tradeoffs were made and validated manually.

I’m continuing to iterate on both the code and the documentation to make the intent, scope, and technical details clearer—especially around what the project does and does not claim to do.

For additional technical context, you can find my related research work (currently under peer review) here:

https://www.preprints.org/manuscript/202512.2293

https://www.preprints.org/manuscript/202512.2270

Thanks again for your time and attention!

rfl890 | 2 months ago

Are you sure? This code snippet reeks of AI hallucination:

    // 3. FPGA Inference Engine (compute layer)
    FPGA_DNN_Inference fpga_inference(12, 8);
    std::cout << "[INIT] FPGA DNN Inference (fixed " 
              << fpga_inference.get_fixed_latency_ns() 
              << "ns latency)" << std::endl;

What's going on here? Are you simulating an FPGA? In software? To guarantee a fixed latency? It's named confusingly, at the very least. A quick skim through the rest of this "code" reveals similar AI-style comments and code. Certainly not "only for unit tests and documentation".

krish678 | 2 months ago

Thanks for pointing this out. The snippet is indeed a software simulation of an FPGA inference engine: it's intended as a deterministic, latency-fixed layer for initial modeling and benchmarking, not actual hardware execution. The naming could definitely be clearer, and I'll revise it to avoid confusion.
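
For concreteness, here is a minimal sketch of the pattern I mean (illustrative only, not the repository's actual implementation). The class and method names mirror the snippet above; the 500µs latency constant, the infer() method, and the sleep-until-deadline padding are assumptions made for the sake of the example:

    #include <chrono>
    #include <cstddef>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Hypothetical sketch: a software stand-in for an FPGA inference block.
    // The "fixed latency" is a simulation parameter, enforced by sleeping
    // until a deadline so every call takes (at least) the same wall-clock time.
    class FPGA_DNN_Inference {
    public:
        // in/out dimensions correspond to the (12, 8) arguments in the
        // snippet; the 500'000 ns default is purely illustrative.
        FPGA_DNN_Inference(std::size_t in_dim, std::size_t out_dim,
                           long latency_ns = 500'000)
            : in_dim_(in_dim), out_dim_(out_dim), latency_ns_(latency_ns) {}

        long get_fixed_latency_ns() const { return latency_ns_; }

        // Runs a dummy computation, then pads the call out to the fixed
        // latency budget so timing behaves like the modeled hardware.
        std::vector<float> infer(const std::vector<float>& input) const {
            auto deadline = std::chrono::steady_clock::now() +
                            std::chrono::nanoseconds(latency_ns_);
            std::vector<float> output(out_dim_, 0.0f);
            for (std::size_t o = 0; o < out_dim_; ++o)
                for (std::size_t i = 0; i < in_dim_ && i < input.size(); ++i)
                    output[o] += input[i];  // placeholder for real DNN math
            std::this_thread::sleep_until(deadline);
            return output;
        }

    private:
        std::size_t in_dim_;
        std::size_t out_dim_;
        long latency_ns_;
    };

    int main() {
        FPGA_DNN_Inference fpga_inference(12, 8);
        std::cout << "[INIT] FPGA DNN Inference (fixed "
                  << fpga_inference.get_fixed_latency_ns()
                  << "ns latency)" << std::endl;
        auto out = fpga_inference.infer(std::vector<float>(12, 1.0f));
        std::cout << "[RUN] produced " << out.size() << " outputs" << std::endl;
    }

The key point is that the "fixed latency" here is a modeling assumption enforced by padding each call to a deadline, not a hardware guarantee; the docs will be updated to say exactly that.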