Hi HN! I've been playing around with ChatGPT a bunch since it came out. This experiment has a little bit of a backstory. Some friends and I were out at a pho restaurant; one of us put the whole bill on his card, so the rest of us needed to figure out how much to Venmo him. We were talking about how many bill-splitting apps there are, and I made a joke about doing it with ChatGPT. Then I actually tried it out.
I OCR'd the text with Google Lens, described who had what, and after a bit of prompt engineering (e.g., adding "Be sure to get your math correct" to make the AI's arithmetic check out, and convincing the AI to split shared items evenly), it totally worked: https://gist.github.com/spinda/967322dda1c04d9864f3efd45addc...
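The arithmetic being delegated to the AI here is simple enough to sketch directly. A minimal illustration (names and prices are hypothetical, amounts in integer cents to dodge floating-point drift), not anything ChatGPT does internally:

```typescript
// Split a check: personal items go to their owner, shared items are
// divided evenly. Integer cents keep the shares exact.
type Item = { price: number; owners: string[] };

function splitCheck(items: Item[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const { price, owners } of items) {
    const share = Math.floor(price / owners.length);
    let remainder = price - share * owners.length;
    for (const owner of owners) {
      // Hand out any leftover cents one at a time so shares sum to the price.
      const extra = remainder > 0 ? 1 : 0;
      remainder -= extra;
      totals.set(owner, (totals.get(owner) ?? 0) + share + extra);
    }
  }
  return totals;
}

// Hypothetical pho order: two bowls plus a shared appetizer.
const totals = splitCheck([
  { price: 1250, owners: ["alice"] },
  { price: 1395, owners: ["bob"] },
  { price: 600, owners: ["alice", "bob"] },
]);
```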
Then I started experimenting with describing a hypothetical check-splitting app to the AI, and asking it to feed me JSON commands to update the UI in response to messages from me telling it what the user was doing. The results were promising! And then the similarity to the Redux data loop jumped out, and I built this generic plugin to wire ChatGPT up to apps for real.
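A minimal sketch of that Redux-style data loop, assuming a made-up action vocabulary and stubbing out the round trip to the model (the plugin's real protocol may differ):

```typescript
// The model replies with a JSON action, which is fed through an ordinary
// reducer to update app state.
type State = { items: string[] };
type Action = { type: "ADD_ITEM"; name: string } | { type: "CLEAR" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "ADD_ITEM":
      return { items: [...state.items, action.name] };
    case "CLEAR":
      return { items: [] };
  }
}

// Stand-in for a round trip to ChatGPT: in the real plugin the reply text
// comes back from the model; here we just echo a canned JSON command.
function fakeModelReply(userMessage: string): string {
  return JSON.stringify({ type: "ADD_ITEM", name: userMessage });
}

let state: State = { items: [] };
state = reducer(state, JSON.parse(fakeModelReply("spring rolls")) as Action);
```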
Pretty sure you've invented something here. The coolest form of copilot. A new way of programming. This is really lispy, this is really cool. Love it.
This seems like it could be perfect for making prototypes. "Pretend that you are a backend for a text messaging app that supports group chats. The data model should look something like this: ..."
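One way that elided data model might be fleshed out; every type and field name below is an illustrative guess, not anything the AI or a real backend requires:

```typescript
// A plausible shape for a group-chat backend's data model.
type Message = { id: string; senderId: string; text: string; sentAt: number };
type GroupChat = { id: string; memberIds: string[]; messages: Message[] };

// Append a message immutably, rejecting senders outside the group.
function postMessage(chat: GroupChat, senderId: string, text: string): GroupChat {
  if (!chat.memberIds.includes(senderId)) throw new Error("not a member");
  const msg: Message = {
    id: String(chat.messages.length + 1),
    senderId,
    text,
    sentAt: Date.now(),
  };
  return { ...chat, messages: [...chat.messages, msg] };
}

const chat: GroupChat = { id: "c1", memberIds: ["u1", "u2"], messages: [] };
const updated = postMessage(chat, "u1", "hello");
```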
So, you needed a multi-step process of technology transfer to an AI or an app to figure out how to split the check?
I find this fascinating but not terribly impressive. It's fascinating that such a rudimentary skill prompted you to dive down this rabbit hole. The unimpressive part is that it's a rudimentary skill, yet your over-engineered solution still only required rudimentary skills.
People keep saying ChatGPT isn't that impressive because it's just "regurgitating knowledge" and has no insight into it, or things along those lines. But I find it insanely impressive that you can specify something like:
"Provide your answer in JSON form. Reply with only the answer in JSON form and include no other commentary."
And it will do exactly that. Or tell it to explain something to you "in the style of Shakespeare".
I just asked it about quantum physics as Shakespeare and got this (plus a lot more):
Throughout history there are moments where humans realize they're not special in a way they previously thought they were — the universe doesn't revolve around us, other animals possess skills we thought were unique to us, etc.
I think what's interesting is that many types of creativity may really just be re-synthesizing "stuff we already know."
So a lot of the negative comments along the lines of, "it can't be creative because it never thinks of anything beyond its training data" don't click with me. I think synthesizing two existing concepts into some third thing is actually a form of creativity.
These nets may not learn the same way we do exactly, and they may not possess the same creative abilities as us — but there's definitely something interesting going on. I for one am taking a Beginner's Mind view of it all. It's pretty fascinating.
I keep describing it as the Enterprise Ship’s Computer. It won’t answer “how do I solve this problem?” But it’ll help you workshop a solution if you do the “thinking.”
…But I’ve also had it clearly tell me in an answer that 2 is an odd number.
But if you actually read Shakespeare, this is nothing like it. Every example I have seen of someone trying to make ChatGPT sound like Shakespeare just spits out generic puff, nothing like the real thing. Whether or not you think Shakespeare is good, it doesn't match the complexity, word choice, or rhythm of his prose.
Agreed, both ChatGPT and DALL-E feel significantly different in their ability to at least simulate “understanding.” They aren’t perfect by any means, but they’re a big step up from anything I’ve seen before.
The X in Y format is really one of its strengths. I asked for “A truth table for three valued logic in Markdown” and got something totally usable which I could then tweak.
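For context, the table in question is Kleene's strong three-valued logic, which can be computed by ordering false < unknown < true and taking the minimum for AND and the maximum for OR. A quick sketch:

```typescript
// Kleene's strong three-valued logic: order false < unknown < true,
// then AND is min and OR is max over that order.
type TV = "false" | "unknown" | "true";

const rank: Record<TV, number> = { false: 0, unknown: 1, true: 2 };
const byRank: TV[] = ["false", "unknown", "true"];

const and3 = (a: TV, b: TV): TV => byRank[Math.min(rank[a], rank[b])];
const or3 = (a: TV, b: TV): TV => byRank[Math.max(rank[a], rank[b])];
```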
> People keep saying ChatGPT isn't that impressive because it's just "regurgitating knowledge" and has no insight into it, or things along those lines.
Really? This seems like a straw man - I've only seen gobs and gobs of examples showing all the amazing things ChatGPT can do. I have seen some measured comments from real experts helping to explain how ChatGPT works behind the scenes, and this is usually to temper sentiments when folks start going down the "It's sentient!!" route.
There will always be naysayers stuck in the old way of doing things. Don’t let em get in your head and keep your eyes full of wonder. Incredible things are still ahead.
What's incredible is how software engineers have failed over the last decade to truly advance the creation of CRUD apps.
Making a CRUD app in many ways has become much more complicated than it was when I started programming 10 years ago.
Today when I want to build an application, I often find myself frustrated and bemused at the state of things. Not because I find it difficult to write Tailwind, or connect my Redux state to a component, but because I would have imagined the increase in the number of engineers would have led to more abstractions that simplified the creation of a CRUD app, which is just a glorified web form.
I wonder if ChatGPT would even have stood a chance if engineers were good at engineering. But engineers are really mostly good at boilerplating code complex enough to ensure job security, which ChatGPT might excel at one day.
What I would have liked to have seen is a world in which we were good at creating abstraction layers to solve an entire class of problems. But alas, I might have asked for too much.
It makes me wonder: what if the solutions you have in mind do exist out there, but simply as open source projects with zero eyeballs on them? What if there are dozens upon dozens of them, each solving a class of problems but in a slightly different way?
In terms of organizing work among huge groups of people, I don't think any of the above possibilities would be feasible for an industry. We already complain about having just a handful of frameworks to choose from.
In my experience writing novels with ChatGPT, it starts to break down after a long running thread before eventually becoming almost useless. I wind up needing to remind it what it’s doing over and over.
That is likely due to its token limit, but I think also because the weight each token carries diminishes as the conversation continues.
I wonder if users would slowly watch the website go insane after using over X interactions.
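A common workaround for that degradation (sketched generically here; the site itself exposes nothing like this) is to re-send a standing preamble plus only a sliding window of the most recent turns:

```typescript
// Keep a fixed "what you're doing" preamble and a sliding window of recent
// turns, so the instructions never scroll out of the model's context.
function buildPrompt(preamble: string, turns: string[], maxTurns: number): string {
  const recent = turns.slice(-maxTurns);
  return [preamble, ...recent].join("\n");
}

const prompt = buildPrompt(
  "You are helping me draft chapter 3 of a mystery novel.",
  ["turn1", "turn2", "turn3", "turn4"],
  2,
);
```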
This is cool. I've been using it mostly to explain APIs to me when I'm too lazy to dig through docs. This works surprisingly well, even for some relatively obscure APIs like Libre/OpenOffice's "UNO" API.
I think a really interesting use case for this would be to have it read through a long standards document and produce a compliant implementation, and maybe point out flaws/omissions. Maybe implement a full web browser from scratch? Or something less intense like a GLTF reader/writer? Or something ludicrous like a brainfuck implementation of Office Open XML, which has like ~7000 pages of specs.
Its memory is restricted to a couple pages of prompt, so I don’t think you’ll have any luck with the kinds of projects you’re mentioning. In addition, the way ChatGPT is generating output is linear, based on the sequence of preceding prompts and answers. It can’t really go back and forth (and sideways), navigating a graph of new things, as would be needed to develop a larger project.
It's like trying to work with someone who has only short-term memory, and who also has a tendency to make things up and be scatterbrained.
What is it drawing from, so to speak? Isn't it the case that its training included the actual API docs, or else is it just guessing based on knowledge of other APIs and inferring things from how things are named? It just seems like there are bound to be lots of errors if it's the latter.
Replying in a sibling comment because I can't reply to the original. FYI
spikeagally you appear to be shadowbanned. I would email [email protected] if you feel it's in error.
>> This is cool. I've been using it mostly to explain APIs to me when I'm too lazy to dig through docs.
I'd be careful with this. I maintain docs for a project and asked ChatGPT how to implement a feature. The answer is in the docs, obviously. It returned a really compelling step-by-step guide including code samples, like a great StackOverflow answer. The problem: it was completely wrong. The code samples called APIs that didn't exist, and the whole explanation was based on the premise that they did.
> There’s no official API for ChatGPT yet, so I’m using the unofficial chatgpt NPM package to wire up the app in the demo video. OpenAI has their Cloudflare protection cranked up high; to get around that, it runs a full Chrome instance (!) in the background to shuttle messages back and forth between the little REST API server I wrote and ChatGPT’s internal API.
This is quite a workaround. Have you tried the official text-davinci-003 API? It's rather capable now, and probably has faster response times. Very cool experiment regardless!
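The relay described in the quoted comment can be pictured as a thin HTTP server with a pluggable send function. A sketch only: the stub below stands in for the browser-automation client, which this example doesn't attempt to reproduce.

```typescript
import { createServer } from "node:http";

// A thin relay: POST a message and get the model's reply back as JSON.
// `Send` is whatever backend is available; in the demo it would be the
// headless-Chrome client driving ChatGPT, stubbed out here.
type Send = (text: string) => Promise<string>;

async function handleBody(send: Send, body: string): Promise<string> {
  const { message } = JSON.parse(body) as { message: string };
  return JSON.stringify({ reply: await send(message) });
}

// Stand-in for the browser-automation client; it just echoes.
const stubSend: Send = async (text) => `echo: ${text}`;

// Wire the handler into a real HTTP server (call server.listen(port) to run).
const server = createServer(async (req, res) => {
  let body = "";
  for await (const chunk of req) body += chunk;
  res.setHeader("content-type", "application/json");
  res.end(await handleBody(stubSend, body));
});
```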
Don't build your frontend in React at all unless absolutely necessary. Look for simpler ways. Check if your website really needs to be a web app, and whether it actually has many interactive widgets. Even if interactive widgets are present, check whether you could go with a simpler approach: serve static web pages, and ship a frontend framework only on the pages where the interactive widgets live. This will save loads of time on pages where simple template rendering, as offered by most traditional web frameworks in proper backend languages, is sufficient. You can add interactive widgets later on.
I like to use React for its component system. Those components don't even have to be reusable; I just like working with them. It's way easier to separate and organize code, and it makes me more productive when I have to find and change anything. I don't like huge HTML files. If I need static sites, things like Next.js and Astro are great for that.
This is very useful. I feel like people just want to use React for all the "coolness" of using React. People should only use React when their application actually requires it.
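The "static pages plus widgets only where needed" approach suggested above can be sketched without any framework at all: each page declares which widget markers it contains (all names below are hypothetical), and only registered widgets ever get any JavaScript.

```typescript
// Progressive enhancement, sketched without a DOM: the server emits static
// pages; the client hydrates only the widget markers it has code for.
type Page = { path: string; widgetMarkers: string[] };

function widgetsToHydrate(page: Page, registry: Record<string, () => string>): string[] {
  // Only markers with a registered widget get any JavaScript at all.
  return page.widgetMarkers.filter((m) => m in registry);
}

// Hypothetical widget registry; values stand in for mount functions.
const registry = {
  "checkout-form": () => "<CheckoutForm/>",
  "live-chart": () => "<LiveChart/>",
};

const staticPage: Page = { path: "/about", widgetMarkers: [] };
const appPage: Page = { path: "/dashboard", widgetMarkers: ["live-chart", "unknown-widget"] };
```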
Interesting concept. However, both of these examples are apps that probably appear in millions of tutorial articles on React and Redux. I would be curious to see how it performs on a more unique or complex application request, even the bill-splitting example you describe.
While I hate to think how inefficient this is, it gave me a really good prompt idea that I tried with GPT-3, and it works.
> Here it is in JSON format: ...
Seems pretty effective for getting GPT-3 to spit out the results in the exact format you want. This will save me so much time parsing out the results I need.
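Once the model reliably answers in bare JSON, the "parsing" reduces to a JSON.parse, plus a defensive fallback for the times it wraps the answer in prose anyway. A sketch with a canned reply string (the fallback is a common trick, not an OpenAI feature):

```typescript
// Models sometimes disobey and wrap the JSON in commentary; grabbing the
// outermost {...} span is a common defensive fallback before parsing.
function extractJson(text: string): unknown {
  const start = text.indexOf("{");
  const end = text.lastIndexOf("}");
  if (start === -1 || end <= start) throw new Error("no JSON found");
  return JSON.parse(text.slice(start, end + 1));
}

// Canned reply standing in for a real API response.
const reply = 'Sure! Here you go: {"total": 31.45, "people": 2}';
const parsed = extractJson(reply) as { total: number; people: number };
```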
ChatGPT has saved me hours helping me convert raw SQL queries into Ecto queries. I have learned a lot more about Ecto thanks to ChatGPT. Easily a tool I would pay to use monthly.
Isn't this just programming with more steps? For example, you had ChatGPT try different ways to implement the data model, as you might with a junior code monkey, until you guided it with your experience of how it should be implemented.
I pitched a similar technique to my company a few weeks ago but didn’t get any enthusiasm back. I think these sorts of apps will be commonplace in a few years, for better or worse
Gigachad | 3 years ago:
Having to tell ChatGPT to make sure it gets its math right is not confidence-inspiring.
supermatt | 3 years ago:
For the type of queries you are doing (sending the whole context), the output is comparable (and just as wrong) between ChatGPT and GPT-3.
EMM_386 | 3 years ago:
Asked about quantum physics in the style of Shakespeare, it gave me this (plus a lot more):
---
Oh sweet youth, listen closely as I impart
The secrets of the quantum realm, a place of art
Where particles and waves, both small and large
Exist in states both definite and in charge
---
That is really fascinating stuff.
travisjungroth | 3 years ago:
https://github.com/travisjungroth/trinary#truth-table
vesinisa | 3 years ago:
Though I've noticed it sometimes gets fringier details incorrect but remains very self-assured nonetheless.
acemarke | 3 years ago:
(now I'm curious how well it would handle requests using our modern Redux Toolkit API syntax...)