rads | 9 months ago
When working with Clojure, I've been using LLMs primarily for two use cases:
1. Search
2. Design feedback
The first case is obvious to anyone who's used an LLM chat interface: it's often easier to ask an LLM for the answer than a traditional search engine.

The second case is more interesting. I believe the design of a system is more important than the language being used. I'd rather inherit a well-designed codebase in some other language than a poorly designed Clojure codebase any day. Because the values of Clojure are embedded in the language itself and the community that surrounds it, Clojure programmers are naturally encouraged to think first, code second.
The problem I've run into with the second case is that it often takes too much effort for me to get the context into the LLM for it to answer my questions in detail. As a result, I tend to reach for LLMs when I have a general design question that I can then translate into Clojure. Asking it specific questions about my existing Clojure code has felt like more effort than it's worth, so I've actually trained myself to make things more generic when I talk to the LLM.
This MCP with Claude Code seems like the tipping point where I can start asking questions about my code, not just asking for general design feedback. I hooked this up to a project of mine where I recently added multi-tenancy support (via an :app-id key), which required low-level changes across the codebase. I asked the following question with Claude Code and the Clojure MCP linked here:
> given that :app-id is required after setup, are there any places where :app-id should be checked but isn't?
It actually gave me some good feedback on specific files and locations in my code for about 10 seconds of effort. That said, it also cost me $0.48. This might be the thing that gets me to subscribe to a Claude Max plan...
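The thread doesn't show the project's actual code, but the kind of check being audited for might look like this. `notes-db` and `fetch-notes` are made-up names; the sketch just shows a tenant-scoped query that fails fast when `:app-id` is missing:

```clojure
(ns example.notes)

;; Illustrative in-memory store; the real project's storage isn't shown.
(def notes-db
  (atom [{:app-id "a1" :text "hello"}
         {:app-id "a2" :text "world"}]))

(defn fetch-notes
  "Returns all notes belonging to the tenant identified by :app-id.
  The :pre condition is the kind of guard the LLM was asked to find
  places for: it throws if :app-id is missing rather than silently
  returning another tenant's data."
  [{:keys [app-id]}]
  {:pre [(some? app-id)]}
  (filterv #(= app-id (:app-id %)) @notes-db))
```

An audit like the one in the question amounts to checking that every code path touching shared data goes through a guard like this.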
stingraycharles | 9 months ago
It’s really powerful.
didibus | 9 months ago
You need to find a workflow to leverage it. There are two approaches: developer guided and product guided.
Developer guided: Here you set up the project and basic project structure. Add the dependencies you want to use, set up your src and test folders, and so on.

Then you start creating the namespaces you want, but you don't implement them; just create the `(ns ...)` form with a doc-string that describes it. You can also start adding the public functions you want for its API. Don't implement those either. Just add a signature and doc-string.
Then you create the test namespace for it. Create a deftest for each function you want to test, and add `(testing ...)` forms, but don't add the bodies; just write the test descriptions.
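The scaffold described in the two steps above might look like this. The namespace and function names are illustrative, not from any real project:

```clojure
;; Source namespace: doc-strings and signatures only, no bodies.
(ns myapp.slug
  "Utilities for turning titles into URL slugs.")

(defn slugify
  "Lowercases `title` and replaces runs of whitespace with hyphens."
  [title])  ; signature and doc-string only; the AI fills in the body

;; Test namespace: described cases with empty bodies, ready for the
;; AI to fill in so that all of them pass.
(ns myapp.slug-test
  (:require [clojure.test :refer [deftest testing]]
            [myapp.slug :as slug]))

(deftest slugify-test
  (testing "lowercases and hyphenates a simple title")
  (testing "collapses repeated whitespace into one hyphen"))
```

The point of the scaffold is that the doc-strings and test descriptions, not the code, carry your intent; the AI's job is to make them true.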
Now you tell the AI to fill in the implementations of the tests and the namespace so that all described test cases pass, and to run the tests and iterate until they all do.
Then ask the AI to code review itself, and iterate on the code until it has no more comments.
Mention security, exception handling, logging, and so on as you see fit; if you explicitly call out those concerns, it'll work on them.
Rinse and repeat. You can add your own tests to be more confident, and also try things out yourself and ask it to fix what's broken.
Product guided: Here you pretend to be the Product Manager. You create a project and start adding markdown files to it that describe the user stories, the features of the app/service, and so on.

Then you ask AI to generate a design specification. You review that and have it iterate on it until you like it.
Then you ask AI to break down a delivery plan, and a test plan for implementing it. Review and iterate until you like it.
Then you ask AI to break the delivery up into milestones, and to create a breakdown of tasks for the first milestone. Review and iterate here.
Then you ask AI to implement the first task, with tests. Review and iterate. Then the next, and so on.
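A minimal sketch of one such user-story markdown file, assuming a feature like the multi-tenancy work mentioned earlier in the thread (the file name and wording are made up):

```markdown
<!-- stories/001-tenant-isolation.md (illustrative) -->
# User story: tenant isolation

As an account owner, I want my data scoped to my own app,
so that no other tenant can read or modify it.

## Acceptance criteria
- Every read and write requires an app id after setup.
- Requests without an app id are rejected with an error.
```

Files like this are the input the AI turns into a design specification, delivery plan, and eventually tasks in the steps above.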
rads | 9 months ago
Developer guided: For the projects I'm currently working on, understanding is the most difficult part, and writing the code is how I check my understanding as I go. I do use LLMs to generate code when I feel it can save me time, such as setting up a new project or scaffolding tests, but I think there are diminishing returns the larger and/or more complex the project is. Furthermore, I work on code that other people (or LLMs) are meant to understand, so I value code that is consistent and concise.
Product guided: Even with meat-based agents (i.e., humans), there's a limit to how many Jira tickets I can write and how many junior engineers I can babysit, and this is one of the worst parts of the job to begin with. Furthermore, junior engineers often make mistakes, which means I need my own understanding to fix the issues. That said, getting feedback from experienced colleagues is invaluable, and that's what I'm currently simulating with LLMs.