top | item 42619022

dewitt | 1 year ago

One interesting bit of context is that the author of this post is a legit world-class software engineer already (though probably too modest to admit it). Former staff engineer at Google and co-founder / CTO of Tailscale. He doesn't need LLMs. That he says LLMs make him more productive at all as a hands-on developer, especially around first drafts on a new idea, means a lot to me personally.

His post reminds me of an old idea I had of a language where all you wrote was function signatures and high-level control flow, and maybe some conformance tests around them. The language was designed around filling in the implementations for you. 20 years ago that would have been from a live online database, with implementations vying for popularity on the basis of speed or correctness. Nowadays LLMs would generate most of it on the fly, presumably.
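That "signatures plus conformance tests" idea might be sketched today roughly as follows, in Python. All names here are hypothetical; the human writes only the signature and the conformance tests, and the body is the slot an implementation database (or an LLM) would fill:

```python
# Sketch of a "signatures plus conformance tests" language, in plain Python.
# The human authors the signature and the tests; the body is the slot that
# would be filled in -- 20 years ago from a shared implementation database,
# today plausibly by an LLM.

from typing import List


def top_k(items: List[int], k: int) -> List[int]:
    """Contract: return the k largest items, in descending order."""
    # Implementation slot -- this body is what gets supplied, not written.
    return sorted(items, reverse=True)[:k]


def conformance_test(impl) -> bool:
    """The tests the language would run before accepting an implementation."""
    return (impl([3, 1, 4, 1, 5], 2) == [5, 4]
            and impl([], 3) == []
            and impl([7], 1) == [7])


assert conformance_test(top_k)
```

Implementations competing on speed or correctness would simply be alternative bodies that all pass `conformance_test`.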

Most ideas are unoriginal, so I wouldn't be surprised if this has been tried already.

gopalv|1 year ago

> That he says LLMs make him more productive at all as a hands-on developer, especially around first drafts on a new idea, means a lot to me personally.

There is likely to be a great rift in how very talented people look at sharper tools.

I've seen the same division pop up with CNC machines, 3d printers, IDEs and now LLMs.

If you are good at doing something, you might find the new tool's output to be sub-par over what you can achieve yourself, but often the lower quality output comes much faster than you can generate.

That causes the people who are deliberate & precise about their process to hate the new tool completely - expressing in the actual code (or paint, or marks on wood) is much better than trying to explain it in a less precise language in the middle of it. The only exception I've seen is that engineering folks often use a blueprint & refine it on paper.

There's a double translation overhead which is wasteful if you don't need it.

If you have dealt with a new hire while being the senior of the pair, there's that familiar feeling of wanting to grab their keyboard instead of explaining how to build that regex - being able to do more things than you can explain or just having a higher bandwidth pipe into the actual task is a common sign of mastery.

The incrementalists on the other hand, tend to love the new tool as they tend to build 6 different things before picking what works the best, slowly iterating towards what they had in mind in the first place.

I got into this profession simply because I could Ctrl-Z to the previous step much more easily than my then favourite chemical engineering goals. In Chemistry, if you get a step wrong, you go to the start & start over. Plus even when things work, yield is just a pain there (prove it first, then you scale up ingredients etc).

Just from the name of sketch.dev, it appears that this author is of the 'sketch first & refine' model where the new tool just speeds up that loop of infinite refinement.

liotier|1 year ago

> If you are good at doing something, you might find the new tool's output to be sub-par over what you can achieve yourself, but often the lower quality output comes much faster than you can generate. That causes the people who are deliberate & precise about their process to hate the new tool completely

Wow, I've been there ! Years ago we dragged a GIS system kicking and screaming from its nascent era of a dozen ultrasharp dudes with the whole national fiber optics network in their head full of clever optimizations, to three thousand mostly clueless users churning out industrial scale spaghetti... The old hands wanted a dumb fast tool that does their bidding - they hated the slower wizard-assisted handholding, that turned out to be essential to the new population's productivity.

Command line vs. GUI again... Expressivity vs. discoverability, all the choices vs. don't make me think. Know your users !

harrall|1 year ago

I believe it’s more that people hate trying new tools because they’ve already made their choice and made it their identity.

However, there are also people who love everything new and jump onto the latest hype too. They try new things but then immediately advocate for them without merit.

Where are the sane people in the middle?

numpad0|1 year ago

I can't relate to this comment at all. Doesn't feel like what's said in GP either.

IMO, LLMs are super fast predictive input and hallucinatory unzip; files to be decompressed don't have to exist yet, but input has to be extremely deliberate and precise.

You have to have a valid formula that produces the resultant array and requires no more than 100 IQ to comprehend, and then they unroll it for you into the whole code.

They don't reward trial and error that much. They don't seem to help outsiders like 3D printers did, either. It is indeed a discriminatory tool as in it mistreats amateurs.

And, by the way, it's also increasingly obvious to me that assuming pro-AI posture more than what you would from purely rational and utilitarian standpoint triggers a unique mode of insanity in humans. People seem to contract a lot of negativity doing it. Don't do that.

jprete|1 year ago

This is a good characterization. I'm precision-driven and know what I need to do at any low level. It's the high-level definition that is uncertain. So it doesn't really help to produce a dozen prototypes of an idea and pick one, nor does it help to fill in function definitions.

tikkun|1 year ago

Interesting.

So engineers that like to iterate and explore are more likely to like LLMs.

Whereas engineers that have a more rigid, specific process are more likely to dislike LLMs.

travisporter|1 year ago

> I got into this profession simply because I could Ctrl-Z to the previous step much more easily than my then favourite chemical engineering goals.

That is interesting. Asking as a complete ignoramus: is there not a way to do this now? Like start off with 100 units of reagent and at every step use a bit, discarding it if wrong?

throwaway4aday|1 year ago

Not so sure about those examples and pairing with the idea of quick and dirty work.

dboreham|1 year ago

Calculators vs slide rules.

antirez|1 year ago

I also have many years of programming experience and find myself strongly "accelerated" by LLMs when writing code. But, if you think about it, it makes sense that many seasoned programmers are using LLMs better. LLMs are a helpful tool, but also a hard-to-use tool, and in general it's fair to think that better programmers can make better use of an assistant (human or otherwise): better understanding its strengths, identifying the good and bad output faster, providing better guidance to correct the approach...

Other than that, what correlates more strongly with the ability to use LLMs effectively is, I believe, language skills: the ability to describe problems very clearly. LLM reply quality changes very significantly with the quality of the prompt. Experienced programmers who can also communicate effectively provide the model with many design hints, details on where to focus, ..., basically escaping many local minima immediately.

mhalle|1 year ago

I completely agree that communication skills are critical in extracting useful work or insight from LLMs. The analogy for communicating with people is not far-fetched. Communicating successfully with a specific person requires an understanding of their strengths and weaknesses, their tendencies and blind spots. The same is true for communicating with LLMs.

I have actually found that from a documentation point of view, querying LLMs has made me better at explaining things to people. If, given the documentation for a system or API, a modern LLM can't answer specific questions about how to perform a task, a person using the same documentation will also likely struggle. It's proving to be a good way to test the effectiveness of documentation, for humans and for LLMs.

bsenftner|1 year ago

Communication skills are the key to using LLMs. Think about it: every type of information you want is in them; in fact it is there multiple times, at multiple levels of seriousness in the treatment of the idea. If one is casual in their request, using casual language, then the LLM will reply with a casual reply, because that matched the request best. To get a hard, factual answer from the experts in a subject, use the formal terms, use the expert's language, and you'll get back a reply more likely to be correct, because it sits at the same level of formal treatment as the correct answers.

gen220|1 year ago

Hey! Asking because I know you're a fellow vimmer [0]. Have you integrated LLMs into your editor/shell? Or are you largely copy-pasting context between a browser and vim? This context-switching of it all has been a slight hang-up for me in adopting LLMs. Or are you asking more strategic questions where copy-paste is less relevant?

[0] your videos on writing systems software were part of what inspired me to make a committed switch into vim. thank you for those!

rudiksz|1 year ago

> "seasoned programmers are using LLMs better".

I do not remember a single instance when code provided to me by an LLM worked at all. Even if I ask for something small that can be done in 4-5 lines of code, it's always broken.

From a fellow "seasoned" programmer to another: how the hell do you write the prompts to get back correct working code?

LouisSayers|1 year ago

> the ability to describe problems very clearly

Yes, and to provide enough context.

There's probably a lot that experience is contributing to the interaction as well, for example - knowing when the LLM has gone too far, focusing on what's important vs irrelevant to the task, modularising and refactoring code, testing etc

kragen|1 year ago

That's really interesting. What are the most important things you've learned to do with the LLMs to get better results? What do your problem descriptions look like? Are you going back and forth many times, or crafting an especially-high-quality initial prompt?

ignoramous|1 year ago

> [David, Former staff engineer at Google ... CTO of Tailscale,] doesn't need LLMs. That he says LLMs make him more productive at all as a hands-on developer, especially around first drafts on a new idea, means a lot to me...

Don't doubt for a second the pedigree of founding engs at Tailscale, but David is careful to point out exactly why LLMs work for them (but might not for others):

    I am doing a particular kind of programming, product development, which could be roughly described as trying to bring programs to a user through a robust interface. That means I am building a lot, throwing away a lot, and bouncing around between environments. Some days I mostly write typescript, some days mostly Go. I spent a week in a C++ codebase last month exploring an idea, and just had an opportunity to learn the HTTP server-side events format. I am all over the place, constantly forgetting and relearning.

    If you spend more time proving your optimization of a cryptographic algorithm is not vulnerable to timing attacks than you do writing the code, I don't think any of my observations here are going to be useful to you.

big_youth|1 year ago

> If you spend more time proving your optimization of a cryptographic algorithm is not vulnerable to timing attacks than you do writing the code, I don't think any of my observations here are going to be useful to you.

I am not a software dev, I am a security researcher, and LLMs are great for my security research! It is so much easier and faster to iterate on code like fuzzers to do security testing. Writing code to do a padding oracle attack would have taken me a week+ in the past. Now I can work with an LLM to write code, learn, and break within the day.

It has accelerated my security research 10 fold, just because I am able to write code and parse and interpret logs at a level above what I was able to a few years ago.

pplonski86|1 year ago

I'm in a similar situation: I jump between many environments, mainly Python and TypeScript, though I'm currently testing a new learning-algorithm idea in C++, and I simply don't always remember all the syntax. I was very skeptical about LLMs at first. Now I'm using them daily. I can focus more on thinking rather than searching StackOverflow. Very often I just need a simple function, and it is much faster to create it with chat.

greenyouse|1 year ago

That approach sounds similar to the Idris programming language with Type Driven Development. It starts by planning out the program structure with types and function signatures. Then the function implementation (aka holes) can be filled in after the function signatures and types are set.

I feel like this is a great approach for LLM assisted programming because things like types, function signatures, pre/post conditions, etc. give more clarity and guidance to the LLM. The more constraints that the LLM has to operate under, the less likely it is to get off track and be inconsistent.

I've taken a shot at doing some little projects for fun with this style of programming in TypeScript and it works pretty well. The programs are written in layers with the domain design, types, schema, and function contracts being figured out first (optionally with some LLM help). Then the function implementations can be figured out towards the end.
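The layered, contracts-first style described above can be sketched in Python as well (a sibling comment below notes doing exactly that). This is a minimal illustration with hypothetical names: the frozen domain type and the function signature are pinned down first, and the body is the "hole" filled in last, by hand or by an LLM:

```python
# Types-and-contracts-first sketch: domain types and signatures are fixed
# before any implementation exists, constraining what the filled-in body
# (human- or LLM-written) is allowed to do.

from dataclasses import dataclass


@dataclass(frozen=True)
class Order:
    subtotal_cents: int
    discount_pct: int  # integer percentage, 0-100


def order_total(order: Order) -> int:
    """Hole: total in cents after discount; never negative."""
    total = order.subtotal_cents * (100 - order.discount_pct) // 100
    return max(total, 0)


assert order_total(Order(subtotal_cents=1000, discount_pct=10)) == 900
assert order_total(Order(subtotal_cents=500, discount_pct=100)) == 0
```

The frozen dataclass plays the role of an ADT: the more the types rule out, the less room the generated implementation has to drift.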

It might be fun to try Effect-TS for ADTs + contracts + compile time type validation. It seems like that locks down a lot of the details so it might be good for LLMs. It's fun to play around with different techniques and see what works!

lysecret|1 year ago

100% this is what I do in python too!

brabel|1 year ago

I am not a genius, but I have a couple of decades of experience and finally started using LLMs in anger in the last few weeks. I have to admit that when my free quota from GitHub Copilot ran out (I had already run out of JetBrains AI as well! Our company will start paying for some service, as the trials have been very successful), I felt a slight pang, because my experience was very similar to OP's: it's really useful for getting me started, and I can finish much more easily from what the AI gives me than if I started from scratch.

Sometimes it just fills in boilerplate; other times it actually tells me which functions to call on an unfamiliar API. And it turns out it's really good at generating tests, so my testing is more comprehensive, since it's so much faster to just write them out (and then refine a bit, usually by hand).

The chat has almost completely replaced my StackOverflow queries, which saves me much time and anxiety. God forbid I have to ask something on SO, as that's a time sink: if I just quickly type something out, I'm asking to be obliterated by the "helpful" SO moderators. With the AI I barely type anything at all, typos and all, and it still gets me!

EagnaIonat|1 year ago

Have you tried using Ollama? You can download and run an LLM locally on your machine.

You can also pick the right model for the right need and it's free.

devjab|1 year ago

I'm genuinely curious: what did you use StackOverflow for before? With a couple of decades in the industry, I can't remember the last time I "Google programmed" anything. I always go directly to the documentation for whatever I'm working with, because where else would I find out how it actually works? It's not like I never "Google programmed" when I was younger, but it's such a slow process based on trusting strangers on the internet that it stopped making sense once I started knowing what I was doing. I view LLMs in a similar way: why go to them rather than the actual documentation? I realize this might sound arrogant or rude, and I really hope you believe me when I say I don't mean it that way. The reason I'm curious is that we're really struggling to get junior developers to look at the documentation first rather than everywhere else. As a result they often don't actually know how what they build works, which can be an issue when they load every object of a list into memory instead of using a generator...

As far as using LLMs in anger goes, I would really advise anyone to use them. GitHub Copilot hasn't been very useful for me personally, but I get a lot of value out of running my thought process by an LLM. I think better when I "think out loud", and that is obviously challenging when everyone is busy. Running my ideas by an LLM helps me process them in a similar (if not better) fashion; often it doesn't even really matter what the LLM conjures up, because simply describing what I want to do gives me new ideas, like "thinking out loud".

As far as coding goes, I find it extremely useful to have LLMs write CLI scripts to auto-generate code. The code the LLM produces is going to be absolute shite, but that doesn't matter if the output is perfectly fine. It has reduced my personal reliance on third-party tools by quite a lot. Why would I need a code generator for something (and in the process trust a bunch of third-party libraries) when I can have an LLM write a similar tool in half an hour?
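The kind of throwaway generator script described above might look like this. Everything here is a hypothetical illustration; the point is that the generator's own quality is irrelevant, since only the emitted code ships:

```python
# Throwaway code-generator sketch: a crude script that emits a dataclass
# definition from a field list. The script itself can be shabby -- only
# the generated output matters.

FIELDS = [("name", "str"), ("age", "int"), ("email", "str")]


def emit_dataclass(class_name: str, fields) -> str:
    """Return Python source for a dataclass with the given fields."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {class_name}:",
    ]
    lines += [f"    {fname}: {ftype}" for fname, ftype in fields]
    return "\n".join(lines)


generated = emit_dataclass("User", FIELDS)
print(generated)
```

Running it prints a ready-to-paste `User` dataclass; swapping `FIELDS` regenerates the boilerplate with no third-party tooling involved.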

Vox_Leone|1 year ago

I have been using LLMs to generate functional code from *pseudo-code* with excellent results. I am starting to experiment with UML diagrams, both with LLMs and with computer vision, to generate code directly from them; for example, a simple activity diagram could serve as the prompt and might look like:

Start -> Enter Credentials -> Validate -> [Valid] -> Welcome Message -> [Invalid] -> Error Message

Corresponding Code (Python Example):

    class LoginSystem:
        def validate_credentials(self, username, password):
            if username == "admin" and password == "password":
                return True
            return False

        def login(self, username, password):
            if self.validate_credentials(username, password):
                return "Welcome!"
            else:
                return "Invalid credentials, please try again."
*Edited for clarity

jonvk|1 year ago

This example illustrates one of the risks of using LLMs without subject expertise though. I just tested this with claude and got that exact same validation method back. Using string comparison is dangerous from a security perspective [1], so this is essentially unsafe validation, and there was no warning in the response about this.

1. https://sqreen.github.io/DevelopersSecurityBestPractices/tim...
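For the specific leak the linked article describes, the standard-library fix is a constant-time comparison. A minimal sketch, keeping the hard-coded values only to mirror the example above (real code would compare salted hashes fetched from storage):

```python
# Plain == on secrets can leak information through timing: comparison may
# return earlier the sooner a character mismatches. hmac.compare_digest
# compares in constant time, so duration reveals nothing about how many
# leading characters matched.

import hmac


def validate_credentials(username: str, password: str) -> bool:
    # Hard-coded expected values are for illustration only.
    ok_user = hmac.compare_digest(username, "admin")
    ok_pass = hmac.compare_digest(password, "password")
    return ok_user and ok_pass


assert validate_credentials("admin", "password")
assert not validate_credentials("admin", "passw0rd")
```

This addresses only the timing channel, not the deeper problems (plaintext secrets, no hashing, no salting) raised in the replies below it.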

jpc0|1 year ago

Could you add to the prompt that the password is stored in an SQLite database, hashed with argon2, and that the hashing parameters are stored as environment variables?

You would like it to avoid timing based attacks as well as dos attacks.

It should also generate the functions as pure functions, so that state is passed in and passed out and no side effects (such as printing to the console) happen within the function.

Then also confirm for me that it has handled all error cases that might reasonably happen.

While you are doing that, just think about how much implicit knowledge I had to type into this comment, and that's still ignoring a ton of other knowledge that needs to be considered: whether the password was salted before being stored, all the error conditions of the SQLite and argon2 implementations in Python, and so on.

TLDR: that code is useless and would have taken me the same amount of time to write as your prompt.

dekhn|1 year ago

I think what you're describing is basically "interface driven development" and "test driven development" taken to the extreme: where the formal specification of an implementation is defined by the test suite. I suppose a cynic would say that's what you get if you left an AI alone in a room with Hyrum's Law.

HarHarVeryFunny|1 year ago

> His post reminds me of an old idea I had of a language where all you wrote was function signatures and high-level control flow

Regardless of language, that's basically how you approach the design of a new large project - top down architecture first, then split the implementation into modules, design the major data types, write function signatures. By the time you are done what is left is basically the grunt work of implementing it all, which is the part that LLMs should be decent at, especially if the functions/methods are documented to level (input/output assertions as well as functionality) where it can also write good unit tests for them.
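"Documented to the level of input/output assertions" might look like the sketch below. The names are illustrative, not from the post; the point is that once the docstring carries the contract, deriving unit tests is mechanical, which is exactly the grunt work an LLM can take on:

```python
# A function documented with its contract: precondition asserted, behavior
# and postcondition spelled out in the docstring. From this alone, an LLM
# (or a human) can generate meaningful unit tests.

def clamp(value: float, lo: float, hi: float) -> float:
    """Clamp value into the closed interval [lo, hi].

    Pre:  lo <= hi
    Post: lo <= result <= hi, and result == value when value is in range.
    """
    assert lo <= hi, "precondition violated: lo must not exceed hi"
    return min(max(value, lo), hi)


# The kind of tests derivable from the contract alone:
assert clamp(5.0, 0.0, 10.0) == 5.0    # in range: unchanged
assert clamp(-3.0, 0.0, 10.0) == 0.0   # below range: clamped to lo
assert clamp(99.0, 0.0, 10.0) == 10.0  # above range: clamped to hi
```

With the architecture, module split, and types fixed up front, each remaining body is a small, well-specified fill-in like this one.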

dingnuts|1 year ago

> the grunt work of implementing it all

you mean the fun part. I can really empathize with digital artists. I spent twenty years honing my ability to write code and love every minute of it and you're telling me that in a few years all that's going to be left is PM syncs and OKRs and then telling the bot what to write

if I'm lucky to have a job at all

CraigJPerry|1 year ago

>> where all you wrote was function signatures and high-level control flow, and maybe some conformance tests around them

AIUI that’s where idris is headed

benterix|1 year ago

> designed around filling in the implementations for you. 20 years ago that would have been from a live online database

This reminds me a bit of PowerBuilder (or was it PowerDesigner?) from early 1990s. They sold it to SAP later, I was told it's still being used today.

mahmoudimus|1 year ago

Isn't that the idea behind UML? Which didn't work out so well, however, with the advent of LLMs today, I think that premise could work.

knighthack|1 year ago

I knew he was a world-class engineer the moment I saw that his site didn't bother with CSS stylesheets, ads, pictures, or anything beyond a rudimentary layout.

The whole article page reads like a site from the '90s, written from scratch in HTML.

That's when I knew the article would go hard.

Substantive pieces don't need fluffy UIs - the idea takes the stage, not the window dressing.

shaneofalltrad|1 year ago

I wonder what he uses. I noticed the first paragraph took over a second to load:

  Largest Contentful Paint element: 1,370 ms
  This is the largest contentful element painted within the viewport. Element: p

alexvitkov|1 year ago

Glad to know I was a world class engineer at the age of 8, when all I knew were the <h1> and <b> tags!

apwell23|1 year ago

He is using LLMs for coding. You don't become a staff engineer by being a badass coder; not sure how the two are related.

ilrwbwrkhv|1 year ago

Being a dev at a large company is usually the sign that you're not very good though. And anyone can start a company with the right connections.

tomwojcik|1 year ago

That's a terrible blanket statement, very US-centric. Not everyone wants to start a company, and you can't just reduce one's motivations to your measure of success.

ksenzee|1 year ago

You've just disproved your own assertion. Either that or you believe everyone who's any good has the right connections.