datadeft|11 months ago
For example: what would be the best strategy to download thousands of URLs using async in Rust? It gives you OK solutions, but the final solution came from the Rust forum (the answer was written a year ago), which I assume made its way into the model.
There is also the verbosity problem. Claude, without the concise flag on, generates roughly 10x the required amount of code to solve a problem.
Maybe I am prompting incorrectly and could somehow get the right answers out of these models, but at this stage I use them as a boilerplate generator, and the actual creative problem solving remains on the human side.
hypeatei|11 months ago
I really fail to see the usefulness in typing out long-winded prompts, then waiting for information to stream in. And repeat...
hakaneskici|11 months ago
Examples include things like referring to the LLM nicely ("my dear"), saying "please" and asking nicely, or thanking it.
Do these actually work?
tmpz22|11 months ago
I'd argue you need to bootstrap and configure your project yourself, then give the LLM only narrow access and narrow problems to write code for: individual functions where your prompt includes the signature, individual tests, etc. Anything else and you really need to invest time in code review, lest it reconfigure some of your code in a drastic way.
LLMs are useful but they do not replace procedure.
MortyWaves|11 months ago
So I’m not very experienced with Docker and can just about make a Docker Compose file.
I wanted to set up cron as a container in order to run something on a volume shared with another container.
I googled “docker compose cron” and must have found a dozen cron images. I set one up and it worked great on X86 and then failed on ARM because the image didn’t have an ARM build. This is a recurring theme with Docker and ARM but not relevant here I guess.
Anyway, after going through those dozen or so images, none of which worked on ARM, I gave up, sent the Compose file to Claude, and asked it to suggest something.
It suggested simply using the Alpine base image and adding an entry to its crontab, and it works perfectly fine.
This may well be a skill issue, but it had never occurred to me that cron is still available like that.
Three pages of Google results and not a single result anywhere suggesting I should just do it that way.
Of course this is also partly because Google search is mostly shit these days.
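For the record, the Alpine-plus-crontab approach can be sketched roughly like this (the schedule, script name, and paths are illustrative, not from the thread; Alpine's busybox crond reads root's crontab from /etc/crontabs/root):

```dockerfile
# Alpine ships with busybox crond, so no special cron image is needed.
FROM alpine:3.20

# Illustrative job: run a script every five minutes as root.
RUN echo "*/5 * * * * /scripts/job.sh" >> /etc/crontabs/root

COPY job.sh /scripts/job.sh
RUN chmod +x /scripts/job.sh

# Run crond in the foreground (-f) so the container stays up; -l 2 sets the log level.
CMD ["crond", "-f", "-l", "2"]
```

In the Compose file, this service would simply mount the same volume as the other container, and the cron job operates on it directly.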
noisy_boy|11 months ago
You want to schedule things. What is the basic tool we use to schedule on Linux? Cron. Do you need to install it separately? No, it usually comes with most Linux images. What is your container, functionally speaking? A working Linux system. So you can run scripts on it. Lot of these scripts run binaries that come with Linux. Is there a cron binary available? Try using that.
Of course, hindsight is 20/20 but breaking objectives down to their basic core can be helpful.
noisy_boy|11 months ago
"IMPORTANT: Do not overkill. Do not get distracted. Stay focused on the objective."
Sohcahtoa82|11 months ago
In my experience, before "reasoning" became an option, if you ask it a question that takes a decent amount of thinking to solve, but also tell the model "Just give me the answer", you're FAR more likely to get an incorrect answer.
So "reasoning" just tells the model to first come up with a plan to solve a problem before actually solving it. It generates its own context for coming up with a more complete solution.
"Planning" would be a more accurate term for what LLMs are doing.
heap_perms|11 months ago
> Act as if you're an outside observer to this chat so far.
This really helps in a lot of these cases.