orand | 2 years ago
Unless you explain this statement, most people here are likely to dismiss everything you have to say on the topic of AI.
PeterCorless|2 years ago
I just asked ChatGPT to recommend various databases for a specific use case. Out of the seven databases it recommended, only one was actually appropriate; one suggestion was marginally acceptable; and three of the recommendations weren't even databases.
I then asked it to provide a list of the most important battles in North Africa prior to the entry of the United States into World War 2.
It gave me five answers, three of which occurred after the entry of the United States into World War 2.
AIs provide extremely plausible answers. Sometimes they will actually generate correct, useful output, but you cannot yet rely on them for correctness.
px43|2 years ago
There is clearly significant value to this tech, and I'm still dumbfounded by how strongly some people try to deny it.
golergka|2 years ago
Also, what model did you use?
bongodongobob|2 years ago
https://chat.openai.com/share/43ebf64e-34ae-402b-a3ce-0787e2...
opportune|2 years ago
Try asking chatgpt or Gemini about something complex that you know all about. You’ll likely notice some inaccuracies, or thinking one related subject is more important than something else. That’s not even scratching the surface of the weird things they do in the name of “safety” like refusing to do work, paying lip service to heterodox opinions, or interjecting hidden race/gender prompts to submodels.
It’s good at generalist information retrieval to a certain degree. But it’s basically like an overconfident college sophomore majoring in all subjects. Progressing past that point requires a completely different underlying approach to AI because you can’t just model text anymore to reason about new and unknown subjects. It’s not something we can tweak and iterate into in the near term.
This same story has recurred after every single ML advance from DL, to CNN + RNN/LSTM, to transformers.
nostrademons|2 years ago
I've found two so far: the review summaries on Google Play are generally quite accurate, and much easier than scrolling through dozens of reviews, and the automatic meeting notes from Google Meet are great and mean that I don't have to take notes at a meeting anymore.
It did okay at finding and tabulating a list of local government websites, but had enough of an error rate (~10%) that I would've had to go through the whole list to verify its factualness, which defeats a lot of the time savings of using ChatGPT.
Beyond that: I tried ChatGPT vs. Google Search when I had what turned out to be appendicitis, asking about symptoms, and eventually the 5th or so Google result convinced me to go in. If I had followed ChatGPT's "diagnosis", I would be dead.

I've tried to have ChatGPT write code for me; it works for toy examples, but anything halfway complicated won't compile half the time, and it's very far from having maintainable structure or optimal performance. It basically works well if your idea of coding is copying StackOverflow posts, but that was never how I coded.

I tried getting ChatGPT to write some newspaper articles for me; it created cogent text that didn't say anything. With better prompting, telling it to incorporate some specific factual data, it did this well, but looking up the factual data is most of the task in the first place, and its accuracy wasn't high enough to automate this task with confidence.
Bard was utter crap at math. ChatGPT is better, but Wolfram Alpha or just a Google Search is better still.
In general, I've found LLMs to be very effective at spewing out crap. To be fair, most of the economy and public discourse involves spewing out crap these days, so to that extent it can automate a lot of people's jobs. But I've already found myself just withdrawing from public discourse as a result - I invest my time in my family and local community, and let the ad bots duke it out (while collecting a fat salary from one of the major beneficiaries of the ad fraud economy).
losvedir|2 years ago
I agree they hallucinate and write bad code and whatever, but the fact that they work at all is just magical to me. GPT-4 is just an incredibly good, infinitely flexible, natural language interface. I feel like it's so good people don't even realize what it's doing. Like, it never makes a grammatical mistake! You can have totally natural conversations with it. It doesn't use hardcoded algorithms or English grammar references, it just speaks at a native level.
I don't think it needs to be concretely useful yet to be incredible. For anyone who's used Eliza, or talked to NPCs, or programmed a spellchecker or grammar checker, I think it should be obviously incredible already.
I'm not sold on it being a queryable knowledge store of all human information yet, but it's certainly laying the groundwork for the inevitable future of interacting with technology through natural language, as a translation layer.
elicksaur|2 years ago
You’ll find people who claim to have doubled their productivity from ChatGPT and people who think it’s useless here.
dullcrisp|2 years ago