If you'd told me a year ago that we'd have an AI that can code from plain human-language prompts, I would've said maybe 2025 or 2026, but it's 2022 and it already exists!
Man, if this is what we have now, imagine what we'll have in 2025 or 2030!
I just hope this doesn't end up killing search engines and personal blogs, since no one will need to search for anything anymore.
Also, AI-generated replies are definitely an extinction-level threat to forums and the independent internet in general; let's hope OpenAI can find a way to make ChatGPT replies easy to filter out.
In the current state, everything the AI knows is stuff that people have written on the internet. It doesn’t seem to come up with new insights or judgements on its own. If people stop writing, AI won’t learn anything new (unless you turn it into AlphaZero for $DEVTOPIC).
ChatGPT certainly saves time, but it becomes useless roughly at the same point where I would remain stuck after exhausting what Google Search turns up. That is, knowledge or conceptual topics that are hard to find on the web. At least for technical topics, ChatGPT doesn’t expand the scope of what you can find out without it, it merely speeds up the process.
It won't kill personal blogs, because ChatGPT won't be messing around with weird hardware combinations and running into unique Ender 3 firmware issues like real people will.
I also don't see them conquering falsehoods coming from the bots anytime soon.
This is a tool that I think OpenAI originally made; it gives you a probability that a sample block of text was or was not generated by OpenAI. I pasted some of my own writing in there and it said definitely not AI-generated. At this point, I don't know if I should be insulted by that or not. ;D
Just today I used ChatGPT to help me speed up writing somewhat trivial C code for a project in an embedded systems class.
Prompt: "Generate a tiny PID controller with only a Proportional factor written in C. That takes a rotational input from -360 to 360 degrees. The setpoint in degrees. And returns a motor speed in the range of -255 to 255."
=> Produced a compiling, correct result.
Later I wanted to know how to communicate between my kernel module and user space program:
Prompt: "How do I get a value via character device from my kernel module into my user space c programm?" gave a bunch of answers, and digging deeper with the prompt "Could you provide me with an example of the user space program" gave a compiling and correct answer again.
I could have written all of that myself while spending a good amount of time researching on Google. But this way I felt less frustrated and was definitely a lot quicker.
It's not the solution for everything, but maybe it is for a C beginner, for whom research can take a long time and often leads to more confusion than anything else. Now the question is whether that confusion is critical to the learning process. And if so, how critical, and at which stages of the experience spectrum?
I guess the main concern with its use as a learning tool is: what happens when it's wrong? It might be helpful for boilerplate when you already know what you want, but if you don't even know that, it'll blow up in your face when it doesn't give you something workable.
Still, seems like a viable assistant so long as you have an understanding of what you're working with.
I've just re-created your "PID" controller, and I was completely underwhelmed by the response. I just don't find it amazing that something using that much compute power can generate source code that multiplies an input by a constant.
If you can't write that quicker than the ChatGPT prompt you provided, then you probably should pay more attention to your class.
I've done a similar thing on Windows and PowerShell (via Open). The only limitation is that it still spawns a terminal window, even if briefly (enough to mess with focus, however).
On the other hand, I've had a vastly different experience.
Every single time I went to GPT and asked for anything development-related, I came back empty-handed after being sent on a goose chase.
An example of this: when I asked how to make fswatch emit one faux event preemptively, it insisted on using "-1" instead (which quits after one event).
I had a few instances, for more obscure problems, where GPT would actually create something I'd call a parallel universe of the API. It felt good, but such an API never existed. Those problems were in JS, Ruby, shell script, and Elixir.
One of the worst answers I was given was a really buggy implementation of an input-controlled debounce function. It seemed correct when running, but in reality it wasn't debouncing; it was ignoring the output on the debounce call.
So yeah, I don't think I'll be using GPT for that anytime soon, but it works quite well as a rubber-duck equivalent: by proposing dumb solutions it helps me gather my thoughts. Not sure that's something I'd pay for (though I'd pay for the text-generating capabilities).
This is a great example of how someone with little programming knowledge can leverage an AI to build simple scripts.
Lately I've been encouraging my friends to try just that.
If the poster wanted, for example, to save all current tabs when switching context (going from dev to marketing, say), this would quickly turn into a more involved debugging/prompting exercise.
That would be a great follow-on. I posted the repo if you want to leave any feedback or help me continue building it out: https://github.com/brevdev/dev-browser
ChatGPT has been amazing for all kinds of programming-adjacent things, even in my line of work where I asked it for help modifying the config file for a selfhosted gitlab instance.
> But [Bash] fucking sucks. Like, it’s truly awful to write [...]
As an aside, considering how basic the shell script actually was, I think this is a great example of being so intimidated by something that you don't actually try and use it - until you do, and discover it wasn't actually that bad. The hardest part was just discovering the incantations for interacting with Chrome - which was a fantastic demo of the power of ChatGPT.
Bash isn't that bad... except there are 20 ways to do anything and 18 of them are wrong, but only 1% of online examples will use either of the two correct approaches and simply reading docs won't always clue you in about how the technique or syntax it's describing is actually subtly wrong and you should ~never use it.
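A concrete sketch of the "18 of the 20 ways are wrong" problem, using the classic example of looping over files (the directory and file names here are invented for illustration):

```shell
# The version most tutorials show breaks on any filename containing a
# space; the glob version is one of the few correct approaches.
mkdir -p demo && touch demo/"a file.txt" demo/b.txt

echo "--- wrong: \$(ls) word-splits the names ---"
for f in $(ls demo); do echo "saw: $f"; done   # 3 lines for 2 files

echo "--- right: let the shell glob, quote the variable ---"
for f in demo/*; do echo "saw: $f"; done       # 2 lines, names intact
```

Nothing in the `for f in $(ls demo)` version's output warns you it is wrong until a space shows up, which is exactly why reading docs alone won't save you.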
I find bash to be pretty awesome; it's super easy for an old hat like me to use. It just works, and it has been mostly unchanged: two core principles more projects should consider.
This is a relatively common use case for browsers that's usually solved by tab groups. I'm happy the author learned bash and leveraged new tools to solve the problem, but it's a little over-engineered.
I've been using it more and more at work, and it's already saved me hours by generating bash commands and simple scripts/servers that I would otherwise have to search for on Google and adjust to my specific use case from multiple sources. Thanks to this tool, I have more time to focus on difficult, business-related problems. If they start charging for it, I will definitely become a paying customer. This is an excellent tool that is making me more productive, and I was a big skeptic about how LLMs work internally. Remove the hallucination problem, add annotations with links to sources, and this is what Google will look like in a few years. IMO this is what the future of knowledge search on the internet will look like.
Wow, yeah, I've had the exact same experience with bash.
"im using mac, not linux" is a prompt I often need to use, but otherwise this type of flow works great for simple bash functions.
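The kind of divergence behind that prompt, sketched: the same task often needs different flags per OS (BSD/macOS `stat` vs GNU `stat` is a classic), so portable scripts branch on `uname`.

```shell
# File size: BSD/macOS stat wants -f%z, GNU stat wants -c%s.
case "$(uname)" in
  Darwin) size() { stat -f%z "$1"; } ;;   # macOS / BSD
  *)      size() { stat -c%s "$1"; } ;;   # Linux / GNU
esac
printf 'hello' > /tmp/size-demo.txt
size /tmp/size-demo.txt    # 5 on either platform, via different flags
```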
For more advanced scripts, prompting and careful flow are important, but I've done some pretty awesome things. Today, ChatGPT helped me create a bash script that builds a flat structure of large tars from an n-TiB dataset directory by aggregating multiple sub-datasets and their files up to the desired tar file size. E.g.: "need single tar files of all the data in that folder/subfolder, every tar file must be 50GB, most files range from 4MB-1GB. So, need to aggregate them"
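The actual script ChatGPT produced isn't shown in the comment, but the aggregation logic can be sketched as: walk the files, and start a new tar whenever adding the next file would exceed the size limit. Sizes are shrunk from 50GB to 100 bytes here so the toy runs anywhere; all names are invented.

```shell
# Three 60-byte files, 100-byte limit per tar -> three tars.
mkdir -p data out
for i in 1 2 3; do head -c 60 /dev/zero > "data/f$i"; done

limit=100 n=0 size=0 batch=()
flush() {
  if [ "${#batch[@]}" -gt 0 ]; then
    n=$((n + 1))
    tar -cf "out/part$n.tar" "${batch[@]}"   # write the current batch
    batch=() size=0
  fi
}
for f in data/*; do
  s=$(wc -c < "$f")
  [ $((size + s)) -gt "$limit" ] && flush    # would overflow: start new tar
  batch+=("$f")
  size=$((size + s))
done
flush                      # don't forget the final partial batch
ls out
```

A real version would also have to handle files larger than the limit on their own, which this sketch ignores.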
This is an awesome walkthrough and gets me thinking about all the other automation tasks I could get done with ChatGPT-driven bash scripts... I can take this same approach to context switching for actual apps. For example, a "dev" branch can open up VS Code, terminal windows, Linear, server logs, etc., while a "marketing" branch can open up Slack, Chrome (to email), Twitter, Notion, etc.
I was just commenting to a friend how annoying it is that macOS aliases can't add flags to executables, the way Windows shortcuts have easily allowed since, what, Windows 95?
If you want to launch Chrome with flags through your dock/UI you have to compile an AppleScript to an .app. It's crazy.
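For reference, that workaround sketched: wrap the flag-launch in an AppleScript and compile it into a double-clickable .app with `osacompile` (macOS only; the `--remote-debugging-port` flag is just an example flag, not one the comment specifies).

```shell
# Write a one-line AppleScript that launches Chrome with extra flags...
cat > launch-chrome.applescript <<'EOF'
do shell script "open -na 'Google Chrome' --args --remote-debugging-port=9222"
EOF

# ...then compile it to a dockable .app. osacompile only exists on
# macOS, so guard it when sketching on other systems.
if command -v osacompile >/dev/null; then
  osacompile -o LaunchChrome.app launch-chrome.applescript
fi
```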
It loops through the rows of a file in the last example. But yeah, the main reason this works is that it's a trivial bash script. The main help you got was not having to read the Chrome command-line docs.
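For anyone curious, the standard safe shape of that "loop through the rows of a file" step (the file name and contents here are made up, not taken from the post's script):

```shell
# Read a file line by line without mangling whitespace or backslashes.
printf 'https://a.example\nhttps://b.example\n' > tabs.txt
while IFS= read -r line; do   # IFS= keeps leading spaces, -r keeps backslashes
  echo "would open: $line"
done < tabs.txt
```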
I am very excited to see this being integrated with a lot of productivity tools -- removing the need for manually copy-pasting the ChatGPT output into various other apps like VS Code or Excel :)
"Create a new Python project folder named 'hello-openapi' and initiate a git repo. Create a requirements.txt with openai, os and json. Create a starter python file with an openai example code and make the first commit."
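Roughly what that prompt should expand to; ChatGPT's actual output isn't shown, so this is a sketch. One catch a careful answer would flag: `os` and `json` are Python standard library, so only `openai` belongs in requirements.txt.

```shell
# Project scaffold the prompt describes, as plain shell commands.
mkdir -p hello-openapi && cd hello-openapi
git init -q
printf 'openai\n' > requirements.txt
cat > main.py <<'EOF'
import os
import json
import openai  # the only third-party dependency

# placeholder starter; assumes OPENAI_API_KEY is set in the environment
# openai.api_key = os.environ.get("OPENAI_API_KEY")
EOF
git add .
git -c user.name=demo -c user.email=demo@example.com commit -q -m "initial commit"
```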
I find it interesting how much harder it is to grok bash/sh/zsh than other languages I've learned. Off the top of my head, it may be tooling, like the lack of linting, or maybe it's just experience, as I avoid complexity like the plague when writing bash, which sounds like a self-fulfilling feedback loop.
Gpt does seem to unblock this mental burden a bit which has me excited for its potential when it comes to education/teaching.
Something about the quoting / unquoting can get really difficult to reason about. I'm rarely exactly sure how the language constructs work, even the for loop and the if statement.
The syntax is complex compared to most other languages, and subtle differences can give totally different results.
PICK = can you
THE=tell the
RIGHT=difference\ between
WHAY="all of"
TODO='these versions?'
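For the record, what those five lines actually do in bash; only three of them are assignments at all:

```shell
PICK = can you || true   # spaces around "=": runs a command named PICK (not found)
THE=tell the || true     # sets THE=tell only in the environment of the command `the`
RIGHT=difference\ between  # real assignment: the escaped space is part of the value
WHAY="all of"              # real assignment: double quotes group, $vars would still expand
TODO='these versions?'     # real assignment: single quotes group, fully literal
echo "$RIGHT $WHAY $TODO"  # difference between all of these versions?
```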
Yeah, game changer. It has made clicking through links for common use cases obsolete, and not only that, it's improved the experience by more than 10x. Can't wait to see this technology evolve.
100%. Also just looking up dev docs for frameworks: something like "write a POST API in Golang's Gin framework" instead of looking up what that syntax was again.
Totally! I found ChatGPT to be more helpful in this use case since it outputs full code snippets instead of generating them line by line. The context was also extremely valuable in making iterations (like "make it work for mac").
ChatGPT is simply amazing; it feels like Google with superpowers. I think it can boost productivity by a considerable amount. It makes a perfect peer programmer, giving you sample code with first-class comments explaining the generated code, sometimes with minor errors to fix before it compiles. You can even ask it to explain a specific part of the code. It's also like having a secretary or an assistant available 24/7, with never-before-seen productivity. It probably feels like when the first mechanical computers were built and people thought, "How can it compute the right answer so fast?"
It has trained on countless programming tutorials out there, including bash tutorials for all kinds of things. Such tutorials often include "create file -> ls to see file -> print content of file" etc., so GPT takes those tutorials and derives grammatical rules for how those words transform into each other. But if you start going outside the realm of online tutorials, it starts to falter quickly and then just prints nonsense.
"actual reasoning" doesn't mean anything concrete, until you define what you are talking about it can't be the basis of a question you can meaningfully answer.
I realize I don't have a good idea of what I think "actual reasoning" means. But yeah, this is pretty impressive stuff, I agree. Before ChatGPT I didn't realize the tech was available to do things like this, and I'm still pretty bewildered by how it can be possible.
You can directly ask it whether it is capable of reasoning and it tells you it's not, and that it's just a language model that is not capable of reasoning or self improvement or something along those lines.
Another example: ask it for a list of programming languages that it has been trained on. If it were capable of reasoning it would be able to trivially answer this, but since it's a language model that just predicts the most likely response based on the prompt, it has no concept of this at all, and tells you exactly that when asked.
Here's a brief reminder of how large language models like GPT-3 work.
First, you train until the cows come home on billions of tokens on the entire web. This is called "pre-training", even though it's basically all of the model's training (i.e. the setting of its parameters, a.k.a. weights).
The trained model is a big, huge table of tokens and their probabilities to occur in a certain position relative to other tokens in the table. It is, in other words, a probability distribution over token collocations in the training set.
Given this trained model, a user can then give a sequence as an input to the model. This input is called a "prompt".
Given the input prompt, the model can be searched (by an outside process that is not part of the model itself) for a token with maximal probability conditioned on the prompt [1]. Semi-formally, that means, given a sequence of tokens t₁, ..., tₙ, finding a token tₙ₊₁ such that the conditional probability of the token, given the sequence, i.e. P(tₙ₊₁|t₁, ..., tₙ), is maximised.
Once a token that maximises that conditional probability is found... the system searches for another token.
And another.
And another.
This process typically stops when the sampling generates an end-of-sequence token (which is a magic marker tautologically saying, essentially, "Here be the end of a <token sequence>", and is not the same as an end-of-line, end-of-paragraph etc token; it depends on the tokenisation procedure used before training, to massage the training set into something trainable-on) [2].
Once the process stops, the sampling procedure spits out the sequence of tokens starting at tₙ₊₁.
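The loop just described, reduced to a toy: a hand-made argmax table stands in for the probability distribution (all "tokens" here are invented for illustration; a real model scores every token in its vocabulary at every step).

```shell
# Greedy decoding in miniature: at each step, append the single most
# probable next token given the sequence so far, until <eos>.
declare -A ARGMAX=(
  ["the"]="cat"
  ["the cat"]="sat"
  ["the cat sat"]="<eos>"   # the end-of-sequence token stops the sampler
)
toks="the"                  # the prompt t1..tn
while [ "${ARGMAX[$toks]}" != "<eos>" ]; do
  toks="$toks ${ARGMAX[$toks]}"   # t_{n+1} = argmax P(t | sequence so far)
done
echo "$toks"                # the cat sat
```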
Now, can you say where in all this is the "actual reasoning" you are concerned people are still claiming is not there?
____________
[1] This used to be called "sampling from the model's probability distribution". Nowadays it's called "Magick fairy dust learning with unicorn feelies" or something like that. I forget the exact term but you get the gist.
[2] Btw, this half-answers your question. Language models on their own can't even tell that a sentence is finished. What reasoning?
Has anyone tried to make ChatGPT output first-order logic statements about its input problem, then derive implications using a solver, then feed the solution back to ChatGPT for use?
Maybe this could solve the reasoning part.
ChatGPT should perform well at translating prompts to statements and vice versa; it's just text-to-text.
This isn't an article, but I used ChatGPT to make a Hacker News extension (which I'm now using), that highlights new comments when I navigate to a thread I've already visited: https://github.com/HartS/gpt-hacker-news-extension
Each commit here contains my prompt in the commit message, and the changed code was entirely provided by ChatGPT. I also appended its output (including explanations) verbatim to the gpt-output file in each commit.
So with each commit, you can see what I prompted it (the commit message), what it responded with (the change in the commit to that log file), and the code that I changed as a result of its response (all other changes in the repo).
In actual use of the extension (if you want to use it), I changed the "yellow" background-color to "rgba(128,255,128,0.3)" (a light green), but I made that change myself because I didn't think I'd be able to get it to pick a colour that looks good for HN.
I will take any chance I can to say it's a crying shame how useful this is, but also how crap it is that it requires use to be linked to your identity...
aksss|3 years ago
https://huggingface.co/openai-detector
Someone1234|3 years ago
powershell -WindowStyle Hidden -ExecutionPolicy Bypass -Command "& { // Command(s) }"
In particular I use it with this Powershell Script:
https://stackoverflow.com/questions/21355891/change-audio-le...
via e.g.:
"& { C:\[Path To Script]\SetVolume.ps1; [audio]::Volume = 0.4 }"
For 40% and so on.
solarkraft|3 years ago
I feel like I'm the only person among my peers to think this and I don't understand why.
LelouBil|3 years ago
It is soooo much better than bash.
Passing typed objects instead of only text, typed functions, and the ability to use C# types/functions inline!
forrestthewoods|3 years ago
Right, that's why Bash sucks. A more extreme version of this is APL.
Bash isn't APL-bad, but it's pretty bad!
raverbashing|3 years ago
They also basically use the Chrome command-line flags and blame bash for those being bad.
Your problem doesn't actually seem to be bash (but ChatGPT really does make it super easy).
civopsec|3 years ago
When you have AI but you don’t have permission to use a package manager.
cerved|3 years ago
I fail to see a single bashism.
_jayhack_|3 years ago
https://github.com/jayhack/llm.sh
Type `llm [natural language command]` and it will suggest a command for you, then run it.
Details here: https://twitter.com/mathemagic1an/status/1590480438258462721
CJefferson|3 years ago
I asked it for the XOR swap trick, and then for the (nonexistent) bitwise OR swap trick, and it answered both. When asked for something which is invalid but close to something it knows, it tends to produce stuff like this: pattern-matching its best guess.
cobbal|3 years ago
This inaccuracy, in particular, feels more like mad-libs than reason