top | item 35766421

Dennizz | 2 years ago

Correct, you probably should know SQL.

In my experience, though, it is often a significant time saver when I only have to review the AI's output versus coming up with everything myself.

Re: ambiguities: You are right, the AI won't always be able to infer the correct fields to use. In such cases it is often enough to help the AI out a little, e.g. by saying "information X is stored in field Y of table Z".
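For illustration, a minimal sketch of that kind of schema hinting. Everything here is hypothetical: the table/field names are made up, and the prompt format is just one way to phrase it.

```python
# Build a natural-language request that includes explicit schema hints.
# All table and field names below are invented examples.
def build_prompt(question, hints):
    hint_lines = "\n".join(
        f"- {info} is stored in field {field} of table {table}"
        for info, table, field in hints
    )
    return f"{question}\n\nSchema hints:\n{hint_lines}"

prompt = build_prompt(
    "Write a SQL query that counts signups per month.",
    [("the signup date", "users", "created_at")],
)
print(prompt)
```

You would then send `prompt` to whatever chat model you use instead of printing it.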

Even if you have to do that, you can still save time and more importantly mental effort by letting AI help you, compared to writing all the SQL yourself.

Re: iterative process: You can simply send a follow-up message saying "You did X wrong". It is usually happy to correct itself.
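That follow-up loop can be sketched roughly like this. `ask` and `validate` stand in for whatever chat API and checking you actually use; the toy stubs below only demonstrate the control flow.

```python
# Ask-review-correct loop: keep sending "You did X wrong" follow-ups
# until the reply passes validation or we give up.
def refine(request, ask, validate, max_rounds=3):
    reply = ask(request)
    for _ in range(max_rounds):
        problem = validate(reply)
        if problem is None:
            return reply
        reply = ask(f"You did this wrong: {problem}. Please fix it.")
    return reply

# Toy stand-ins: the "model" fixes its query once told about the bug.
def fake_ask(message):
    if "wrong" in message:
        return "SELECT name FROM users WHERE active = true"
    return "SELECT name FROM user WHERE active = true"  # typo: user

def fake_validate(sql):
    return None if "FROM users" in sql else "the table is users, not user"

result = refine("List active user names.", fake_ask, fake_validate)
print(result)  # the corrected query
```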


tomhallett | 2 years ago

I 100% agree. This is why I think that LLMs will help encourage better coding practices, not worse. The more you can write "clean code" (things that are named correctly, follow the single responsibility principle, don't violate the Law of Demeter, build the correct data models/data marts), the easier it will be for LLMs to assist senior engineers in writing code/SQL faster.

And if the interface is given to people who will struggle to evaluate the correctness of something, i.e. a Business Intelligence tool for people who don't know or care about the data model, then the question becomes: "How can the tool get the people who can verify the correctness of the SQL/code to do so efficiently?" I'm thinking of something like a short-lived pull request, where the "committer" is the LLM and the reviewer is a human. Said another way: the chatbot is running in reverse, where ChatGPT asks the questions and the human gives the answers... haha
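One way to sketch that reversed review flow. The function names and the approval callback are invented for illustration; a real tool would plug in an actual review UI and database.

```python
# LLM-as-committer, human-as-reviewer: generated SQL only runs after
# an explicit human approval step.
def review_and_run(generated_sql, approve, execute):
    if approve(generated_sql):
        return execute(generated_sql)
    return None  # rejected: route feedback back to the LLM instead

# Toy stand-ins for the review step and the database.
outcome = review_and_run(
    "SELECT plan, COUNT(*) FROM signups GROUP BY plan",
    approve=lambda sql: "DROP" not in sql.upper(),  # trivial sanity check
    execute=lambda sql: f"ran: {sql}",
)
print(outcome)
```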

AdieuToLogic | 2 years ago

>> the concern i would have here is that if you don't know SQL then you have no way of knowing whether or not the results are correct and if the results are wrong you have no way to debug the query.

> Correct, you probably should know SQL.

If your customers need to know SQL, presumably they also need to know the data architecture in order to verify correctness and/or fitness for purpose.

Assuming both of these preconditions hold, why would someone not just write the requisite SQL themselves?

> Even if you have to do that, you can still save time and more importantly mental effort by letting AI help you, compared to writing all the SQL yourself.

Not really, at least in my experience. Staying "in the flow" is easier when writing SQL queries directly than when having to:

1. Take time to think of a "good" ChatGPT request.

2. Review/test what was generated.

3. Take more time to refine the ChatGPT request to make it "better."

4. Goto 2 until a satisfactory SQL query is generated.

Contrast the above steps with:

1. Take time to determine what SQL query is needed.

2. Write the query.

3. Test the query.

metalspot | 2 years ago

> iterative process: You can simply send a follow up message saying "You did X wrong". It is usually happy to correct itself.

cool. i am interested to try it out.