fckgnad | 3 years ago
All of it. Only one human leader to write queries. Everything else designed by an AI.
>Neural networks are great for task where minor details are largely unimportant compared to overall "impression" - generating visuals, informal texts, music, probably image/video decompression, etc. On the other hand, while they can mimic "overal look", they can't guarantee (and in practice they always fail in that regard) that each detail of the produced artifact is correct. Which means you can't reliably or productively use them for programming, legal texts, construction design (though it can be used to draw inspiration for the overall image), etc.
You're just regurgitating a trope that's categorically false. You're a NN yourself, did you realize that?
>I never said it's not revolutionary. I merely point out its hard limits.
And you're wrong. You have thoroughly overstated the limits, and you are mistaken about this.
>Technically artists are starting lawsuits due to copyright. Also, technically, an artist can easily tell the difference between raw NN output and an actual drawing, sometimes even non-artists, as the images often look somewhat uncanny.
No. Corporations, AIs, and bots have been scraping pictures off the internet for years; Google is one. No lawsuit of this nature was filed until AI art came out. Artists are threatened and are reacting accordingly; that's why the lawsuit is being filed now instead of before.
https://futurism.com/the-byte/artist-banned-looked-ai-human <- an artist banned because moderators thought his work was made by an AI.
>These are two completely different tasks. You are comparing apples and oranges, that can't really be put on a same scale, unless by "HARDER" you specifically imply the amount of brainless tedious work required to complete the job.
No. ENGLISH is written in tokens of symbols, and PICTURES are written in tokens as well. A pixel is three RGB numbers, and inside the computer it is represented in a format before being translated onto your monitor. It is a translation problem, and experts treat it the same way. Both DALL-E and chatGPT use very similar generative models: English to English in the case of chatGPT, and English to numbers (which are further translated into pixels) for DALL-E.
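A toy illustration of this point: both English text and pixels reduce to sequences of integers before a model ever sees them. (Real tokenizers like BPE are more sophisticated; character codes here are just a stand-in.)

```python
# Toy illustration: to a model, text and images are both integer sequences.
# ord() stands in for a real tokenizer, which is more sophisticated.
text_tokens = [ord(c) for c in "hi"]   # text as numbers, e.g. [104, 105]
pixel = (255, 128, 0)                  # one pixel: three RGB numbers
image_tokens = list(pixel)             # image data as numbers too
print(text_tokens, image_tokens)
```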
>Also, in practice artists just use and process real photos when they aim for "photorealistic" - no one actually draws photorealistics from scratch, normally (but one can obviously invent any kind of challenge for themselves if they want to)
Not true. A good number do.
>Who told you that there is a bootcamp that can produce a fine software engineer in a year? It takes (a talented-enough person) at the very least 5 years of rigorous study and practice before one can actually start working somewhat autonomously without constant supervision, while also delivering appropriate quality.
There are many bootcamps that make that claim, and PLENTY of people who live up to it. But there are NONE for artistry.
>Don't kid yourself thinking that these two are similar or comparable sets of tasks.
Kid myself? It is literally the same type of neural network. There's no kidding here. It's not a coincidence that chatGPT and DALL-E came out back to back. These models are called generative models. It's a single new technology that's responsible for this.
>That's actually not true and I never made such a claim. ChatGPT is EXTREMELY useful in a professional environment, but only for a specific set of tasks, while being used as a tool by an expert with actual responsibilities.
No, it's not. There are no guardrails: users can ask it anything and take it anywhere. It can't stay within a defined task. It's also wrong often enough that it can't be used in prod for virtually any task.
>The first GPT and GANs were heralds. ChatGPT is already a relatively mature and refined technology. I don't know why you expect to see low base effect here - the base is already actually pretty high.
No, they weren't heralds. Text generators have always been around; this one just got better. But they never displayed signs of true understanding or even self-awareness the way it does now. Literal self-awareness.
>Notice how each one of them also for some reason mentions a kind of business and languages and frameworks, which are totally unrelated.
I told it to do that, so that the responses wouldn't be generic. chatGPT is following my instructions.
>Not sure what you mean here by "squishy stuff" or "SPECIFICALLY". ChatGPT is a language model trained on a huge-ass volume of non-specific text corpus.
It is ALSO trained using humans to pick and choose good and bad answers. This training is non-specific, and they used just regular people. If they used programmers and had programmers pick and choose good answers to programming questions, chatGPT would begin outputting really accurate code.
>Nope, that is merely a property and a limitation of the NNs. At best, you can use them to build up "intuition" to bruteforce problems (like AlphaFold for protein folding), but obviously it only works for simple-enough stuff that can actually be bruteforced, when the output can be easily formally verified fast-enough.
You are categorically wrong about this. Three neurons can be trained to become a NAND gate, which can then be used to simulate any computational circuit or mathematical function that doesn't have a feedback loop. It can model anything with just an input and an output. This has also been demonstrated in practice and proven theoretically.
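The NAND point can be sketched in a few lines. The weights and bias below are a standard hand-picked textbook choice rather than actually trained, just to show that an artificial neuron can realize NAND, and that NAND gates compose into any feed-forward circuit (XOR here, via the classic four-gate construction):

```python
# A single artificial neuron (perceptron) computing NAND.
# Weights and bias are the standard textbook choice, not learned here.
def nand(a, b):
    w1, w2, bias = -2, -2, 3
    return 1 if w1 * a + w2 * b + bias > 0 else 0

# NAND is functionally complete: any feed-forward circuit can be built
# from it. Example: XOR from four NAND gates.
def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "XOR:", xor(a, b))
```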