item 44040301

Show HN: JavaFactory – IntelliJ plugin to generate Java code

44 points | javafactory | 9 months ago | github.com

Hi HN,

I built a code generator plugin for IntelliJ that uses LLMs to create repetitive Java code like implementations, tests, and fixtures — based on custom natural-language patterns and annotation-based references.

Most tools like Copilot or Cursor aim to be general, but fail to produce code that actually fits a project structure or passes tests.

So I made something more explicit: define patterns + reference scope, and generate code consistently.
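To make the "patterns + reference scope" idea concrete, here is a hedged sketch of what annotation-based references could look like. The annotation and method names below are invented for illustration; they are not the plugin's actual API.

```java
import java.lang.annotation.*;

public class Demo {
    // Hypothetical marker annotation: NOT the plugin's real API, just an
    // illustration of tying a target interface to its reference scope.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface GenerateImpl {
        Class<?>[] references() default {};
    }

    interface UserRepository {
        String nameById(long id);
    }

    // The generator would read this interface plus the referenced types and a
    // natural-language pattern, then emit an implementation, tests, fixtures.
    @GenerateImpl(references = UserRepository.class)
    interface UserService {
        String findUserName(long id);
    }

    public static void main(String[] args) {
        // A generated implementation might be equivalent to this stub:
        UserRepository repo = id -> "user-" + id;
        UserService svc = repo::nameById;
        System.out.println(svc.findUserName(42)); // prints user-42
    }
}
```

The point of the annotation is that the scope of context sent to the model is explicit, rather than inferred from whatever files happen to be open.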

In this demo, 400 lines of Java were generated in 20 seconds — and all tests passed: https://www.youtube.com/watch?v=ReBCXKOpW3M

GitHub: https://github.com/JavaFactoryPluginDev/javafactory-plugin

27 comments


AugustoCAS|9 months ago

A side comment: I have found that configuring a few live templates in IntelliJ helps me write a lot of the repetitive code with just a handful of keystrokes, regardless of the language.

Structural refactoring is another amazing feature that is worth knowing.
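For anyone who hasn't set one up: a custom live template is just a small XML entry under Settings → Editor → Live Templates. This is a sketch; the abbreviation and template body are only examples:

```xml
<!-- Example live template: typing "slf" + Tab expands to a logger field. -->
<template name="slf"
          value="private static final org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger($CLASS$.class);"
          description="slf4j logger field" toReformat="true" toShortenFQNames="true">
  <!-- $CLASS$ is filled automatically from the enclosing class name. -->
  <variable name="CLASS" expression="className()" defaultValue="" alwaysStopAt="false" />
  <context>
    <option name="JAVA_DECLARATION" value="true" />
  </context>
</template>
```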

zikani_03|9 months ago

I've also got some mileage from live templates for repetitive code. However, at some point I built[0] an IntelliJ IDEA plugin to help me generate setters and field assignments that I felt live templates weren't a good solution for (for my case). I don't know if JavaFactory solves this kind of problem, keen to try it out.

[0]: https://github.com/nndi-oss/intellij-gensett

javafactory|9 months ago

I think IntelliJ is a great tool on its own. Recently, they even added a feature that auto-generates dependency injection when you declare a field as private final — super convenient.
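For those who haven't seen it: declaring a private final field and applying the quick-fix generates the constructor parameter and assignment, roughly like this (class names invented for illustration):

```java
public class InjectionDemo {
    interface OrderRepository {
        int count();
    }

    static class OrderService {
        // 1. You declare only this field...
        private final OrderRepository repository;

        // 2. ...and the IDE generates the constructor parameter + assignment.
        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        int pending() {
            return repository.count();
        }
    }

    public static void main(String[] args) {
        OrderService svc = new OrderService(() -> 3);
        System.out.println(svc.pending()); // prints 3
    }
}
```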

I can’t help but wonder if the folks at JetBrains are starting to feel a bit of pressure from tools like Cursor or Windsurf.

geodel|9 months ago

Feels very Java-like. Factories, repositories, utils, patterns, etc. Good stuff.

javafactory|9 months ago

Thank you. I think this tool still has real room to grow, but the concept of handling each task explicitly is already quite useful.

cess11|9 months ago

The guide is a 404.

"404 - page not found. The master branch of javafactory-plugin does not contain the path docs/how-to-use.md."

How do I hook it into local models? Does it support Ollama, Continue, that kind of thing? Do you collect telemetry?

javafactory|9 months ago

1. I'm sorry, it was a typo in the path. I've fixed it, so you can see it now.

2. For now, it only supports GPT-4o, because the requests involve relatively long context windows, which require high-quality reasoning. Only recent high-performance models like GPT-4o or Claude Sonnet are capable of reducing the manual workload for this kind of task.

___

That said, if users want to use other models, I can add adapter features for various backends.
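A minimal sketch of what such an adapter layer could look like, assuming invented interface and class names (not the plugin's actual code):

```java
import java.util.Map;

public class AdapterSketch {
    // Hypothetical backend-agnostic interface; names are illustrative only.
    interface LlmClient {
        String complete(String prompt);
    }

    // Each backend (OpenAI, Ollama, ...) would get its own adapter.
    static class FakeOpenAiClient implements LlmClient {
        public String complete(String prompt) {
            return "[gpt-4o] " + prompt; // a real adapter would call the API
        }
    }

    static class FakeOllamaClient implements LlmClient {
        public String complete(String prompt) {
            return "[ollama] " + prompt; // a real adapter would hit a local server
        }
    }

    static LlmClient forBackend(String name) {
        Map<String, LlmClient> backends = Map.of(
                "openai", new FakeOpenAiClient(),
                "ollama", new FakeOllamaClient());
        return backends.get(name);
    }

    public static void main(String[] args) {
        System.out.println(forBackend("ollama").complete("generate tests"));
    }
}
```

The rest of the plugin would depend only on the interface, so swapping models becomes a configuration choice rather than a code change.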

simpaticoder|9 months ago

If the trend continues a program will look like "JavaFactory("<prompt>").compile().run();".

winrid|9 months ago

I've always wondered how long until we reach this. If every pc can run models locally, with a given seed and prompt it could be the ultimate compression. It's also hilarious.

javafactory|9 months ago

Thank you — I’ll consider adding that feature.

Actually, I'm currently thinking about creating a small community for sharing pattern definitions.

likis|9 months ago

What LLM is it using? Is it something local? Or does it call out? It wasn't obvious from the docs, and I didn't want to dig through all of the code to figure it out. Should probably be clearly stated on the front page.

But the project looks interesting, I have been looking for something similar.

javafactory|9 months ago

This uses OpenAI's GPT-4o model.

The requests involve relatively long context windows, which require high-quality reasoning. Only recent high-performance models like GPT-4o or Claude Sonnet are capable of reducing the manual workload for this kind of task.

p0w3n3d|9 months ago

As a programmer, I feel bad if tests don't fail on the first run... It might be a sign that they aren't actually testing anything...

javafactory|9 months ago

Your point is valid. In real-world work, tests should focus on parts that are difficult to verify, and if everything passes on the first try, it's often a sign that something deserves a closer look.

That said, what I wanted to highlight in the example was a contrast — tools like Cursor and other general-purpose models often fail to even generate simple tests correctly, or can't produce tests that pass. So the goal was to show the difference in reliability.

diggernet|9 months ago

Related to this, consider that when an LLM writes tests for code, it's writing them based on what the code actually does, not what it's supposed to do. This is equally true when the code itself was written by the LLM. Sure the tests pass, but that doesn't prove the code is correct.