item 34085906

Using ChatGPT to make Bash palatable

176 points| naderkhalil | 3 years ago |brev.dev

144 comments


fariszr|3 years ago

Am I the only one who thinks this is incredible!?

If you had told me a year ago that we would have an AI that can code based on normal human-language prompts, I would've said maybe in 2025 or 2026, but it's 2022 and it already exists!

Man, if this is what we have now, imagine what we will have in 2025 or 2030!

I just hope this doesn't end up killing search engines and personal blogs, since no one will need to search for anything anymore.

Also, AI-generated replies are definitely an extinction-level threat to forums and the independent internet in general; let's hope OpenAI can find a way to make ChatGPT replies easy to filter out.

layer8|3 years ago

In the current state, everything the AI knows is stuff that people have written on the internet. It doesn’t seem to come up with new insights or judgements on its own. If people stop writing, AI won’t learn anything new (unless you turn it into AlphaZero for $DEVTOPIC).

ChatGPT certainly saves time, but it becomes useless roughly at the same point where I would remain stuck after exhausting what Google Search turns up. That is, knowledge or conceptual topics that are hard to find on the web. At least for technical topics, ChatGPT doesn’t expand the scope of what you can find out without it, it merely speeds up the process.

piyh|3 years ago

It won't kill personal blogs, because ChatGPT won't be messing around with weird hardware combinations and running into unique Ender 3 firmware issues like real people will.

I also don't see them conquering the falsehoods coming from the bots anytime soon.

aksss|3 years ago

This is a tool that I think OpenAI originally made; it gives you a probability that a sample block of text was or was not generated by OpenAI. I pasted some of my own writing in there and it said it was definitely not AI-generated. At this point, I don't know if I should be insulted by that or not. ;D

https://huggingface.co/openai-detector

stcroixx|3 years ago

The end of privately owned search engines is number one on my wishlist, I’d so love to see that happen.

more_corn|3 years ago

I hope it ends up killing search engines. Google needs a swift kick in the face.

ikealampe200|3 years ago

Just today I used ChatGPT to help me speed up writing somewhat trivial C Code for a project in an embedded systems class.

Prompt: "Generate a tiny PID controller with only a Proportional factor written in C. That takes a rotational input from -360 to 360 degrees. The setpoint in degrees. And returns a motor speed in the range of -255 to 255."

=> Produced a compiling correct result.
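The prompt asked for C, but since the thread is about bash, here is the same proportional-only logic sketched as a bash function for illustration. This is my own reconstruction, not what ChatGPT produced; the gain (Kp = 0.7, approximated with integer math since bash has no floats) and the function name are assumptions.

```shell
#!/usr/bin/env bash
# Proportional-only controller: out = Kp * (setpoint - input), clamped to [-255, 255].
# KP_NUM/KP_DEN approximate an example gain of Kp = 0.7 using integer arithmetic.
p_controller() {
  local setpoint=$1 input=$2
  local KP_NUM=7 KP_DEN=10
  local error=$(( setpoint - input ))
  local out=$(( error * KP_NUM / KP_DEN ))
  if (( out > 255 ));  then out=255;  fi
  if (( out < -255 )); then out=-255; fi
  echo "$out"
}

p_controller 90 0       # error 90  -> 63
p_controller 360 -360   # error 720 -> clamped to 255
```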

Later I wanted to know how to communicate between my kernel module and user space program: Prompt: "How do I get a value via character device from my kernel module into my user space c programm?" gave a bunch of answers and digging deeper with Prompt: "Could you provide me with an example of the user space program" gave a compiling and correct answer again.

I could have written all of that myself while spending a good amount of time researching on Google. But this way I felt less frustrated and was definitely a lot quicker.

Not the solution for everything, but maybe for a C beginner, where research can take a long time and often leads to more confusion than anything else. Now the question is whether that confusion is critical in the learning process. And if so, how critical, and at which stages of the experience spectrum is it most so?

Panzer04|3 years ago

I guess the main concern with its use as a learning tool is what happens when it's wrong. It might be helpful for boilerplate when you already know what you want, but if you don't even know that, it'll blow up in your face when it doesn't give you something workable.

Still, seems like a viable assistant so long as you have an understanding of what you're working with.

543g43g43|3 years ago

I've just re-created your "PID" controller, and was completely underwhelmed with the response. I just don't find it amazing that something using that much compute power can generate source code that multiplies an input by a constant.

If you can't write that quicker than the ChatGPT prompt you provided, then you probably should pay more attention to your class.

jrvarela56|3 years ago

I just bought a Streamdeck (https://www.amazon.com/Elgato-Stream-Deck-Controller-customi...) for Cybermonday and attached bash scripts similar to those mentioned by the author to buttons next to my keyboard. It is delightful.

Someone1234|3 years ago

I've done a similar thing on Windows with PowerShell (via "Open"). The only limitation is that it still spawns a terminal window, even if briefly (enough to mess with focus, however).

powershell -WindowStyle Hidden -ExecutionPolicy Bypass -Command "& { // Command(s) }"

In particular I use it with this Powershell Script:

https://stackoverflow.com/questions/21355891/change-audio-le...

via e.g.:

"& { C:\[Path To Script]\SetVolume.ps1; [audio]::Volume = 0.4 }"

For 40% and so on.

user3939382|3 years ago

What action type do you use for your script? "Open"?

this_steve_j|3 years ago

Thanks for the idea! I just found a Streamdeck from a past Cyber Monday when I was cleaning out my closet.

joenot443|3 years ago

This is a neat product! What kinds of scripts do you have it set up to run?

solarkraft|3 years ago

> But it fucking sucks. Like, it’s truly awful to write

I feel like I'm the only person among my peers to think this and I don't understand why.

LelouBil|3 years ago

I had to try PowerShell for a project.

It is soooo much better than bash.

Passing typed objects instead of only text, typed functions, and the ability to use C# types/functions inline!

willio58|3 years ago

I think it's fine with some changes, like using iTerm with zsh/oh-my-zsh, but bash is definitely a barrier to entry for a lot of could-be programmers.

xlii|3 years ago

On the other hand I’ve had vastly different experience.

Every single time I went to GPT and asked for anything development-related, I came back empty-handed after being sent on a wild goose chase.

An example of this would be when I asked how to make fswatch emit one faux event preemptively and it insisted on using "-1" instead (which quits after one event).

I had a few instances, for more obscure problems, where GPT would actually create something I'd call a parallel universe of the API. It felt right, but no such API ever existed. Those problems were in JS, Ruby, shell script and Elixir.

One of the worst answers I was given was an actually really buggy implementation of an input-controlled debounce function. It seemed correct when running, but in reality it wasn't debouncing; it was ignoring the output on the debounce call.

So yeah, I don't think I'll be using GPT for that anytime soon, but it works quite well as a rubber-duck equivalent. By proposing dumb solutions it helps me gather my thoughts. Not sure if that's something I'd pay for (yet I'd pay for the text-generating capabilities).

Edit: denounce -> debounce, because autocorrect

guites|3 years ago

This is a great example of how someone with little programming knowledge could leverage an AI into building simple scripts.

Lately I've been encouraging my friends to try just that.

If the poster wanted, for example, to save all current tabs when switching context (going from dev to marketing, say), this would quickly turn into a more involved debugging/prompting exercise.

generalizations|3 years ago

ChatGPT has been amazing for all kinds of programming-adjacent things, even in my line of work where I asked it for help modifying the config file for a selfhosted gitlab instance.

> But [Bash] fucking sucks. Like, it’s truly awful to write [...]

As an aside, considering how basic the shell script actually was, I think this is a great example of being so intimidated by something that you don't actually try and use it - until you do, and discover it wasn't actually that bad. The hardest part was just discovering the incantations for interacting with Chrome - which was a fantastic demo of the power of ChatGPT.

yamtaddle|3 years ago

Bash isn't that bad... except there are 20 ways to do anything and 18 of them are wrong, but only 1% of online examples will use either of the two correct approaches and simply reading docs won't always clue you in about how the technique or syntax it's describing is actually subtly wrong and you should ~never use it.
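A concrete instance of this (my own illustration, not from the thread): the most common of those wrong ways is an unquoted variable expansion, which word-splits on whitespace.

```shell
#!/usr/bin/env bash
# One of the "18 wrong ways": unquoted expansions word-split on whitespace.
dir=$(mktemp -d)
f="$dir/my file.txt"
printf 'hello\n' > "$f"

# Wrong: `cat $f` expands to two arguments, `.../my` and `file.txt`, and fails.
# Right: quote every expansion.
cat "$f"                      # -> hello

# Same trap inside test brackets: [ -n $maybe_empty ] is true even when the
# variable is empty or unset, because the bracket sees no operand; quote it.
maybe_empty=""
if [ -n "$maybe_empty" ]; then echo "non-empty"; else echo "empty"; fi   # -> empty
```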

forrestthewoods|3 years ago

> The hardest part was just discovering the incantations

Right, that’s why Bash sucks. A more extreme version of this is APL.

Bash isn’t APL bad. But it’s pretty bad!

naderkhalil|3 years ago

yeah absolutely. I think interacting with chrome and also parsing/iterating files were hard parts that ChatGPT breezed through.

pizzalife|3 years ago

Also, the specific argument format that chrome expects has nothing to do with Bash.

SpencerMLevitt|3 years ago

that's a great point - totally agree on just getting over the hump of intimidation to try it, and ChatGPT makes that hump trivial

raverbashing|3 years ago

So the author complains about bash, but uses zsh.

They also basically use the Chrome command-line flags and blame bash for those being bad.

Your problem doesn't seem to actually be bash (but ChatGPT really does make it super easy).

mozman|3 years ago

I find bash to be pretty awesome, it’s super easy for an old hat like me to use. It just works and it has been mostly unchanged, two core principles more projects should consider.

63|3 years ago

This is a relatively common use case for browsers that's usually solved by tab groups. I'm happy the author learned bash and leveraged new tools to solve the problem, but it's a little over engineered.

lossolo|3 years ago

I've been using it more and more at work, and it's already saved me hours by generating bash commands and simple scripts/servers that I would otherwise have to search for on Google and adjust to my specific use case from multiple sources. Thanks to this tool, I have more time to focus on difficult and business-related problems. If they start charging for it, I will definitely become a paying customer. This is an excellent tool that is making me more productive, and I was a big skeptic about how LLMs work internally. Remove the hallucination problem, add annotations with links to sources, and this is how Google will look in a few years. IMO this is what the future of knowledge search on the internet will look like.

ramoz|3 years ago

Wow, yeah, I've had the exact same experience with bash.

"I'm using mac, not linux" is a prompt I often need to use, but otherwise this type of flow works great for simple bash functions.

For more advanced scripts, prompting and a careful flow are important, but I've done some pretty awesome things. Today, ChatGPT helped me create a bash script that builds a flat structure of large tars from an n-TiB dataset directory by aggregating multiple sub-datasets and their files up to the desired tar file size. E.g. "need single tar files of all the data in that folder/subfolder, every tar file must be 50GB, most files range from 4MB-1GB. So, need to aggregate them"
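A heavily simplified sketch of that aggregation idea (my own reconstruction, not the commenter's actual script): walk the files and start a new tar whenever the running byte total would exceed a limit. Assumes GNU `stat` and `sort` (Linux); the real 50GB/n-TiB case would also need sub-dataset grouping and error handling.

```shell
#!/usr/bin/env bash
# Pack files from $1 into tars under $2, each holding at most ~$3 bytes of input.
aggregate_tars() {
  local src=$1 dest=$2 max_bytes=$3
  local batch=() total=0 n=0 size file
  mkdir -p "$dest"
  while IFS= read -r -d '' file; do
    size=$(stat -c %s "$file")          # GNU stat; macOS would need stat -f %z
    if (( total + size > max_bytes && ${#batch[@]} > 0 )); then
      tar -cf "$dest/batch-$n.tar" -- "${batch[@]}"
      batch=() total=0 n=$(( n + 1 ))
    fi
    batch+=("$file")
    total=$(( total + size ))
  done < <(find "$src" -type f -print0 | sort -z)
  if (( ${#batch[@]} > 0 )); then
    tar -cf "$dest/batch-$n.tar" -- "${batch[@]}"
  fi
}
```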

civopsec|3 years ago

> Bash is available everywhere

When you have AI but you don’t have permission to use a package manager.

SpencerMLevitt|3 years ago

This is an awesome walkthrough and gets me thinking about all the other automation tasks I could get done with ChatGPT-driven bash scripts... can take this same approach to context switching for actual apps. For ex: "dev" branch can open up vscode, terminal windows, linear, server logs, etc, while "marketing" branch can open up slack, chrome (to email), twitter, notion, etc

naderkhalil|3 years ago

ChatGPT has opened my eyes to how powerful Bash can be. Interesting idea about branching full workflows and not just the browser.

user3939382|3 years ago

I was just commenting to a friend how annoying it is that macOS aliases can't add flags to executables like you can easily do in Windows shortcuts since, what? Windows 95?

If you want to launch Chrome with flags through your dock/UI you have to compile an AppleScript to an .app. It's crazy.

cerved|3 years ago

I don't use osx but surely you can create aliases just like in Linux and BSD?

cerved|3 years ago

The problem you describe is how to interface with Google chrome on osx using shell script.

I fail to see a single bashism.

Jensson|3 years ago

It loops through the rows of a file in the last example. But yeah, the main reason this works is that this is a trivial bash script. The main help you got is not having to read the chrome command line docs.
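For reference, the file-looping pattern being referred to usually looks like this in bash (the file name and the echoed action are placeholders of mine; the article's script hands each line to Chrome instead):

```shell
#!/usr/bin/env bash
# Iterate over the lines of a file safely: IFS= preserves leading whitespace,
# -r keeps backslashes literal.
urls_file=$(mktemp)
printf '%s\n' 'https://news.ycombinator.com' 'https://example.com' > "$urls_file"

while IFS= read -r url; do
  # The article's script would do something like: open -a "Google Chrome" "$url"
  echo "would open: $url"
done < "$urls_file"
```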

albert_e|3 years ago

I am very excited to see this being integrated with a lot of productivity tools -- removing the need for manually copy-pasting the ChatGPT output into various other apps like VS Code or Excel :)

"Create a new Python project folder named 'hello-openapi' and initiate a git repo. Create a requirements.txt with openai, os and json. Create a starter python file with an openai example code and make the first commit."
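As a sketch, that prompt maps to roughly the following script (hand-written here, not actual ChatGPT output; note that `os` and `json` are Python standard-library modules, so only `openai` really belongs in requirements.txt):

```shell
#!/usr/bin/env bash
set -e
workdir=$(mktemp -d)            # stand-in for wherever you keep projects
cd "$workdir"

mkdir hello-openapi
cd hello-openapi
git -c init.defaultBranch=main init -q

printf '%s\n' openai > requirements.txt

cat > main.py <<'EOF'
# Starter file; running it for real needs `pip install openai` and an API key.
print("hello, openai")
EOF

git add -A
git -c user.email=you@example.com -c user.name=you commit -qm "Initial commit"
```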

alecfong|3 years ago

I find it interesting how much harder it is to grok bash/sh/zsh than other languages I’ve learned. Off the top of my head it may be tooling like the lack of linting, or maybe it’s just experience as I avoid complexity like the plague when writing bash which sounds like a self fulfilling feedback loop.

GPT does seem to unblock this mental burden a bit, which has me excited for its potential when it comes to education/teaching.

geysersam|3 years ago

Something about the quoting / unquoting can get really difficult to reason about. I'm rarely exactly sure how the language constructs work, even the for loop and the if statement. The syntax is complex compared to most other languages, and subtle differences can give totally different results.

  PICK = can you
  THE=tell the
  RIGHT=difference\ between
  WHAY="all of"
  TODO='these versions?'
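For the record, those five lines really do all behave differently; here is the same snippet replayed with annotations (my annotations, not the commenter's):

```shell
#!/usr/bin/env bash
# Annotating the five "versions":

# PICK = can you        # not an assignment: runs a command named PICK with
#                       # the arguments `=`, `can`, `you` (usually: not found)
# THE=tell the          # also not an assignment: runs the command `the` with
#                       # THE=tell set only in its environment

RIGHT=difference\ between   # assignment; the backslash escapes the space
WHAY="all of"               # assignment; double quotes group words, $ still expands
TODO='these versions?'      # assignment; single quotes are fully literal

echo "$RIGHT / $WHAY / $TODO"
# -> difference between / all of / these versions?
```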

jrochkind1|3 years ago

I'm pretty damn impressed with this one. The osa-script thing for Mac even, wow.

waynesonfire|3 years ago

yeah, game changer. it's made having to click through links for common use-cases obsolete, and not only that, it's improved the experience by more than 10x. can't wait to see this technology evolve.

naderkhalil|3 years ago

100%, also just looking up dev docs for frameworks. Something like "write a post API in Golang's Gin framework" instead of looking up what that syntax was again

tottenval|3 years ago

Github Copilot can do this too.

naderkhalil|3 years ago

Totally! I found ChatGPT more helpful for this use case, since it outputs full code snippets instead of generating them line by line. The context was also extremely valuable in making iterations (like "make it work for mac").

olalonde|3 years ago

I don't understand how people can look at this and still claim that it's not doing any "actual reasoning".

CJefferson|3 years ago

Here is an example of where I think it is doing "no reasoning".

I ask it for the XOR swap trick and I get:

    int a = 5;
    int b = 10;

    a = a ^ b;
    b = a ^ b;
    a = a ^ b;

    // After the swap, a = 10 and b = 5
I ask for the bitwise OR swap trick and I get:

    int x = 5;
    int y = 10;

    x = x | y;
    y = x | y;
    x = x | y;

    // After the swap, x = 10 and y = 5
When asked for something which is invalid but close to something it knows, it tends to produce stuff like this -- pattern-matching its best guess.

y04nn|3 years ago

ChatGPT is simply amazing; it feels like Google with superpowers. I think it can boost productivity by a considerable amount. It makes a perfect peer programmer, giving you sample code with first-class comments explaining the generated code, sometimes with minor errors you have to fix to make it compile. You can even ask it to explain some specific part of the code. It's also like having a secretary or an assistant available 24/7 with never-before-seen productivity. It probably feels like when the first mechanical computers were built and people thought, "How can it compute the right answer so fast?"

Jensson|3 years ago

It has trained on countless programming tutorials out there, including bash tutorials for all kinds of things. Such tutorials often include "create file -> ls to see file -> print content of file" etc., so GPT takes those tutorials and builds grammatical rules for how those words transform into each other. But if you start going outside the realm of online tutorials it starts to falter quickly and then just prints nonsense.

cobbal|3 years ago

> On macOS, the Google Chrome executable is called `open -a "Google Chrome"`

This inaccuracy, in particular, feels more like mad-libs than reason

foobarqux|3 years ago

"actual reasoning" doesn't mean anything concrete, until you define what you are talking about it can't be the basis of a question you can meaningfully answer.

jrochkind1|3 years ago

I realize I don't have a good idea of what I think "actual reasoning" means. But yeah, this is pretty impressive stuff, I agree. Before ChatGPT I didn't realize the tech was available to do things like this, and I'm still pretty bewildered by how it can be possible.

chlorion|3 years ago

You can directly ask it whether it is capable of reasoning, and it tells you it's not; that it's just a language model that is not capable of reasoning or self-improvement, or something along those lines.

Another example: ask it for a list of programming languages it has been trained on. If it were capable of reasoning it would be able to trivially answer this, but since it's a language model that just predicts the most likely response based on the prompt, it has no concept of this at all, and tells you exactly that when asked.

YeGoblynQueenne|3 years ago

Here's a brief reminder of how large language models like GPT-3 work.

First, you train until the cows come home on billions of tokens on the entire web. This is called "pre-training", even though it's basically all of the model's training (i.e. the setting of its parameters, a.k.a. weights).

The trained model is a big, huge table of tokens and their probabilities to occur in a certain position relative to other tokens in the table. It is, in other words, a probability distribution over token collocations in the training set.

Given this trained model, a user can then give a sequence as an input to the model. This input is called a "prompt".

Given the input prompt, the model can be searched (by an outside process that is not part of the model itself) for a token with maximal probability conditioned on the prompt [1]. Semi-formally, that means, given a sequence of tokens t₁, ..., tₙ, finding a token tₙ₊₁ such that the conditional probability of the token, given the sequence, i.e. P(tₙ₊₁|t₁, ..., tₙ), is maximised.

Once a token that maximises that conditional probability is found... the system searches for another token.

And another.

And another.

This process typically stops when the sampling generates an end-of-sequence token (which is a magic marker tautologically saying, essentially, "Here be the end of a <token sequence>", and is not the same as an end-of-line, end-of-paragraph etc token; it depends on the tokenisation procedure used before training, to massage the training set into something trainable-on) [2].

Once the process stops, the sampling procedure spits out the sequence of tokens starting at tₙ₊₁.

Now, can you say where in all this is the "actual reasoning" you are concerned people are still claiming is not there?

____________

[1] This used to be called "sampling from the model's probability distribution". Nowadays it's called "Magick fairy dust learning with unicorn feelies" or something like that. I forget the exact term but you get the gist.

[2] Btw, this half-answers your question. Language models on their own can't even tell that a sentence is finished. What reasoning?

LelouBil|3 years ago

Has anyone tried to make ChatGPT output first-order logic statements about its input problem, then derive implications using a solver, then feed the solution back to ChatGPT for use?

Maybe this could solve the reasoning part.

ChatGPT should perform well at translating prompts to statements and vice versa; it's just text to text.

WhiteBlueSkies|3 years ago

How does it "reason" though, I thought LLM just generated likely next words?

sureglymop|3 years ago

Since when is bash awful to write? I love writing bash :(

timwis|3 years ago

I’ve been waiting for ChatGPT to be “available” for weeks now (it was too busy before). How have so many people been able to use it?

bulldog13|3 years ago

Can anyone recommend some more articles about this? Specifically, using ChatGPT to write code? The more in-depth, the better.

hmsimha|3 years ago

This isn't an article, but I used ChatGPT to make a Hacker News extension (which I'm now using), that highlights new comments when I navigate to a thread I've already visited: https://github.com/HartS/gpt-hacker-news-extension

Each commit here contains my prompt in the commit message, and the changed code was entirely provided by ChatGPT. I also appended its output (including explanations) verbatim to the gpt-output file in each commit.

So with each commit, you can see what I prompted it (the commit message), what it responded with (the change in the commit to that log file), and the code that I changed as a result of its response (all other changes in the repo).

In actual use of the extension (if you want to use it), I changed the "yellow" background-color to "rgba(128,255,128,0.3)" (a light green), but I made that change myself because I didn't think I'd be able to get it to pick a colour that looks good on HN.

nilsbunger|3 years ago

Ask ChatGPT! It can write one for you :)

secondcoming|3 years ago

We're all doomed. This is going to ruin everything. Everyone's job will turn into debugging AI generated code.

intelVISA|3 years ago

Depending on your team that might be an upgrade for some.

an_aparallel|3 years ago

i will take a chance anywhere i can to just say it's a crying shame how useful this is - but also how crap it is that using it has to be linked to your identity...