I've been building a ChatGPT project this week, and this explanation is so true it made me actually lol. Sometimes it's impressively flawless; sometimes it gets stuck in broken patterns, even when the inputs are identical.
"Monkey around with the prompt and pray" has replaced unit testing.
From the author's website: "I graduated high school in May of 2022." Elsewhere, he writes that he placed first at a university-level drone programming challenge at IIT Bombay Techfest. Very impressive.
We also use GPT to perform actions in the software I build at work, and we hit the same inconsistency. That led me down a long rabbit hole to see if I could force an LLM to emit only grammatically correct output that follows a bespoke DSL (one that is ideally safer and more precise than just eval'ing random AI-produced Python).
I just finished writing up a long post [1] that describes how this can work on local models. It's a bit tricky to do via API efficiently, but hopefully OpenAI will give us the primitives one day to appropriately steer these models to do the right things (tm).
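For a flavor of how grammar-constrained emission works on a local model, here's a minimal sketch. Everything here is made up for illustration (the token set, the scores, and a toy `move(<int>,<int>)` DSL): at each step, mask out every candidate token that would take the output outside the set of valid DSL prefixes, then pick among what's left. A real implementation would read the scores off the LLM's logits each step, which is exactly why this is hard to do over the API.

```python
import re

def fake_model_scores(prefix):
    # Hypothetical "logits": note the raw model always prefers "hello",
    # which the grammar mask will never let through.
    if prefix and prefix[-1].isdigit():
        return {"hello": 0.9, ",": 0.6, ")": 0.5, "1": 0.1, "2": 0.05}
    return {"hello": 0.9, "move(": 0.5, "1": 0.3, "2": 0.2, ",": 0.1}

def is_valid_prefix(text):
    """True if `text` could still grow into the DSL command move(<int>,<int>)."""
    head = "move("
    if len(text) <= len(head):
        return head.startswith(text)
    if not text.startswith(head):
        return False
    return bool(re.fullmatch(r"\d+(,(\d+\)?)?)?", text[len(head):]))

def generate_constrained(max_steps=12):
    out = ""
    for _ in range(max_steps):
        scores = fake_model_scores(out)
        # The key move: drop every token that would break the grammar.
        valid = {t: s for t, s in scores.items() if is_valid_prefix(out + t)}
        if not valid:
            break
        out += max(valid, key=valid.get)  # greedy pick among valid tokens
        if re.fullmatch(r"move\(\d+,\d+\)", out):
            break  # complete program
    return out
```

Even with "hello" scoring highest at every step, the masked decoder can only ever produce a well-formed command.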
Copilot has an option (maybe beta users only) to see alternative generations for a prompt. It's really handy because some generations are one-liners while others are entire functions.
Perhaps this is infeasible at the current cost per generation, but a multiple-choice display could be handy (perhaps split the screen into quadrants and pick the one that best fits what you prefer).
I know this tech is terrifying to actual 3D artists who don't want to be "prompt engineers", but as someone who has never used Blender, I think it's cool that I can create something using tools like this and use it in my projects, e.g. a background animation on a website hero section.
I see a lot of people asking how this works. The method he is using is one-shot learning: a prompt plus a single example of what the interaction should look like.
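Concretely, a one-shot prompt is just the chat history you send along with the real request. This sketch shows the shape in OpenAI's chat-message format; the system text and the worked example are hypothetical stand-ins, not necessarily the addon's exact wording:

```python
def build_one_shot_messages(user_request):
    """Assemble a one-shot prompt in OpenAI chat format."""
    return [
        # Overall context and direction:
        {"role": "system",
         "content": "You are an assistant for Blender, the 3D software. "
                    "Respond only with Python code."},
        # The single worked example that makes this "one-shot":
        {"role": "user", "content": "create 10 cubes in random locations"},
        {"role": "assistant",
         "content": ("import bpy\nimport random\n"
                     "for _ in range(10):\n"
                     "    bpy.ops.mesh.primitive_cube_add(\n"
                     "        location=[random.uniform(-10, 10) "
                     "for _ in range(3)])")},
        # The actual task:
        {"role": "user", "content": user_request},
    ]

messages = build_one_shot_messages("add a red sphere at the origin")
```

The model pattern-matches the new request against the example and replies in the same style.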
It is really easy to build this kind of thing. I've got a very simple command-line chatbot that should be easy to understand, and you can easily play with the prompt.
Ultimately there's a relationship between the precision with which you want to control something and the amount of information, as conveyed in language, needed to describe that precision.
Whether you use plain English or code, to do things of sufficient precision you will ultimately have to be equally precise in your description. I'm sure someone with more time and more knowledge of such things has already formalized this in some information theory paper, but...
The point I'm making here is that this is great because a lot of people are doing "simple" things, and now they will be able to do them without understanding the idiosyncrasies of the Blender APIs. But I'm convinced that for novel things this will ultimately become just as difficult as the Blender APIs, and WHEN (not if) that happens, I hope users are prepared to learn the Blender APIs, because it will be inevitable.
edit:
One other thought: I think "language models" are not the right solution, ultimately. Much like AI didn't boom until the proper compute was available even though the theoretical models and algorithms existed, language models are the crude solution.
Once we have a lossless way to simply "think" what we want, a "large thought model" will have less trouble, as there will be less ambiguity between what you want and what is said.
Right now it's thought -> language -> model. Later it will be thought -> model.
It could also be that something like this is just another tool in the toolbox. Sure, you could spend time trying to understand Blender's Python API, but lots of Blender users are not programmers. It could be really helpful for them to be able to say "Place 10 spot lights at random positions within the boundaries of Mesh 3 with random intensity and color" and just have that appear, rather than having to go looking for a plugin that does it for them.
Well, you get precision with imprecise communication via iteration. You walk together toward the correct mutual understanding by successive clarifications.
Chatbots use random number generators to create a variety of output, so it would be a terrible idea to use natural language as "source code" for a code generation tool. Running a chatbot shouldn't happen within a build process, because the results aren't reproducible.
Once you have the source code though, you can use a variety of tools to manipulate it and save the result. Using chatbots to make modifications under supervision is fine. You discard bad modifications and save the good ones.
This is using natural language for one-offs and source code for reproducible results. It's looking like they will go well together.
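The reproducibility point can be seen in miniature: sampling from the model's next-token distribution gives different outputs for the same input, while greedy argmax decoding is repeatable. The distribution below is a toy with made-up numbers:

```python
import random

# Toy next-token distribution for one fixed prompt.
dist = {"cube": 0.5, "sphere": 0.3, "cone": 0.2}

def sample_token(rng):
    # Temperature-style sampling: same input, possibly different output
    # on every call -- fine interactively, bad inside a build step.
    return rng.choices(list(dist), weights=list(dist.values()), k=1)[0]

def greedy_token():
    # Greedy decoding: always the argmax, so the result is reproducible.
    return max(dist, key=dist.get)
```

This is why "run the chatbot once under supervision, then commit the source code it produced" is the workflow that makes builds reproducible.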
With a shared vector database and a semantic search plugin, you could see a bunch of prompts other people have created.
With a search plugin, you could have it find the API docs and output interesting parameters, with examples of how to use them.
With a Python REPL plugin, you could have it generate 10 variations and run the code for each.
With GPT-4 and plugins, you could describe the output you want to Midjourney (or something similar), give the prompt to it to generate in Blender, feed the outputs to some image-similarity model to compare, and have it search through parameter space (or the vector space of the prompt) until it finds something pretty close to what you want. Given your budget, of course.
If the command is not restricted to text only, but uses text plus geometric context, you can remove a lot of ambiguity. This is often done in video games with contextual interactions.
After all, using the Blender GUI you can do a lot with only a 2D mouse coordinate and two buttons. So 2D mouse coordinates plus text could be better still.
A nice evolution would be an AI model that can understand natural language instructions while taking into account where your mouse pointer is and how the view is zoomed and oriented, and that has geometric insight into the 3D scene built so far.
Or in other words, the "create an adversary capable of defeating Data" problem... just a bit less of an issue (for now) if you get the instructions wrong.
One interesting thing would be to describe a scene and get a rough draft, but do it in sections, such that you can select and refine sections and elements within the scene, whittling each element down to the precision that pleases you...
What would be really interesting is how geometry nodes could be managed using GPT.
Looking at the underlying prompt [1], it is a one-shot prompt, i.e. it contains one example of a natural language task along with the corresponding Python code (plus a "system" prompt for overall context and direction). Amazing how much you can do with a simple prompt.
Imagine Jupyter notebooks with this capability. Or Photoshop. Or Davinci Resolve. We live in amazing times.
Did you run into issues that made you tweak the prompt? The prompt tells the tool not to do extra work; did you find that it often tried to do "boilerplate" work like setting up lights and cameras anyway?
system_prompt = """You are an assistant made for the purposes of helping the user with Blender, the 3D software.
- Respond with your answers in markdown (```).
- Preferably import entire modules instead of bits.
- Do not perform destructive operations on the meshes.
- Do not use cap_ends. Do not do more than what is asked (setting up render settings, adding cameras, etc.).
- Do not respond with anything that is not Python code."""
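Since the prompt asks for the answer inside markdown fences, the addon presumably has to strip them before running the reply inside Blender. A small sketch of that step (`extract_python` is my name for it, not necessarily the addon's):

```python
import re

def extract_python(reply):
    """Pull code out of a markdown-fenced block, falling back to the raw reply."""
    m = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    return m.group(1).strip() if m else reply.strip()

reply = "```python\nimport bpy\nbpy.ops.mesh.primitive_cube_add()\n```"
code = extract_python(reply)
# Inside Blender, the addon would then hand `code` to exec() or similar,
# which is also where the remote-code-execution concerns below come from.
```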
Famous quote: "Wouldn't it be nice if our machines were smart enough to allow programming in natural language?". Well, natural languages are most suitable for their original purposes, viz. to be ambiguous in, to tell jokes in and to make love in, but most unsuitable for any form of even mildly sophisticated precision. And if you don't believe that, either try to read a modern legal document and you will immediately see how the need for precision has created a most unnatural language, called "legalese", or try to read one of Euclid's original verbal proofs (preferably in Greek). That should cure you, and should make you realize that formalisms have not been introduced to make things difficult, but to make things possible. And if, after that, you still believe that we express ourselves most easily in our native tongues, you will be sentenced to the reading of five student essays.
- Dijkstra, from EWD952
On the other hand, we already specify programs in natural language, and in this case ChatGPT is really taking a "specification language" as input.
When a client wants a button on a webpage, they don't send the web designer a legalese document describing the dimensions of the button. They usually don't even tell the designer what font to use.
The web designer pattern matches the client's english request to the dozens of websites they've built, similar buttons they've seen and used, and then either asks for clarification, or specifies it clearly to the machine in a more specific way.
Is that different from the ChatGPT flow?
Honestly, we already mostly use English for programming too, not just design. Most programming now is gluing together libraries, and libraries don't provide a formal logical specification of how each function works. No, they provide English documentation saying something like "http.get(url) returns an httpresponse object or an error". That's far from a mathematical specification of how it works, but the plain-English definition is enough that most programmers won't ever look at the implementation (the actually correct specification), because the English docs are fine.
Yes and no. You've provided an excellent introduction to the problem space, but I think natural language has a larger role in formalisms than you might expect.
The formal language grammars most familiar to people here are programming languages. The difference between them and natural language has been characterized as the difference between a "context-free grammar" and a "context-sensitive grammar".
The most popular context-free language is mathematics. The language of math provides an excellent grammar for expressing logical relationships. You can take an equation, write it in math, and transform it into a different equivalent representation. But why? Arithmetic. The Pythagorean Theorem would be wholly inconsequential if we didn't have an interest in calculating triangles. The application of math exists outside the grammar itself. This is why you, and everyone else here, grew up with story problems in math class.
Similarly, programming languages provide excellent utility for describing explicit computational behavior. What they are missing is the reason why that behavior should exist at all. Programs are surrounded by moats of incompatible context: it takes explicit design to coordinate them together.
If we can be explicit about the context in which a formalism exists, we could eliminate the need for ambiguity. With that work done, the incompatibility between software could be factored out. We could be precise about what we mean, and clear about what we infer. We could factor out all semantic arguments, and all logically fallacious positions. We could make empathy itself into software. That is the dream of Natural Language Processing.
I think that dream is achievable, but certainly not through implicit text models (LLMs). We need an explicit symbolic approach like parsing. If you're interested, I have been chewing on an idea that might work.
In (language model-backed) natural language interfaces the loss of precision can be made up via iteration. If it were about getting it right on the first try this would be a dead end but there's no need for that restriction.
I've brought this up in another thread, but we already have evidence that in certain cases it is desirable to trade specificity for just "getting something done". That evidence is Python. Python succeeds despite not having a serious type system, because oftentimes you can achieve your goal despite the 25 bugs in your program that a real type system would have caught.
On the other side of the coin, there’s C++, which is usually doing the heavy lifting underneath the underspecified-but-sufficient Python code.
My guess is that as LLMs evolve, they will naturally fill this niche, and you will have high-level, underspecified "code" (prompts) that then glues together more formal libraries (like OpenAI's plugins).
While I agree, I don't think this is what Dijkstra was speaking about. A lot of the "AI interfaces" are just underpinning the real technical representation. For example, if you say "Which customers recently bought product #721?" this might generate a SQL query like "select * from customers where ...".
The interface is not the ChatGPT text box; it's SQL. The ChatGPT text box is just an assistant to help you do the correct thing (or at least, that's the way it should be used).
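That division of labor can be made concrete: the model drafts SQL, and the human reviews it before running it against the real database. A toy sketch, where the schema, the data, and the "generated" query are all invented for illustration:

```python
import sqlite3

# Hypothetical schema and data.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (customer_id INTEGER, product_id INTEGER, ordered_at TEXT);
INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO orders VALUES (1, 721, '2023-03-01'), (2, 103, '2023-02-01');
""")

# Pretend this came back from the "Which customers recently bought
# product #721?" prompt; the user reads it, then chooses to run it.
generated_sql = """
SELECT c.name FROM customers AS c
JOIN orders AS o ON o.customer_id = c.id
WHERE o.product_id = 721
"""
rows = con.execute(generated_sql).fetchall()  # [('Ada',)]
```

The artifact that matters is the SQL, which can be inspected, version-controlled, and re-run deterministically; the chat box was just the assistant that produced it.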
I really feel this. Whenever I try to use ChatGPT, I feel both slow and strained. My usual workflow involves me thinking very few English sentences in my head - I move faster thanks to abstract thought, and I don't think I'm alone.
So, really, all we need is a language precise enough to specify what a program needs to do, which we can then feed into a program that writes software implementing that specification, possibly applying safe transformations into equivalent forms.
Now, that safely describes a modern, optimizing C compiler.....
If you want something unambiguous in a single pass, sure. But interaction with some back and forth can dig into the ambiguities that matter quickly, ignoring those that don’t. Dialog is a whole separate model. So maybe natural language is a terrible specification language but a great interaction language.
We're nearing the cusp of more natural human-computer interaction, which will require rethinking our interfaces and conventions.
Alexa and Siri seem like Model T Fords when there's a jet aircraft flying overhead. I'm thinking these agents need to be replaced by more natural agents who can co-create a language with their human counterparts rather than relying on fixed, awkward, and sometimes unhelpful commands. It would behoove us to expose APIs and permissions delegation in a more consistent and self-describing (OpenAPI + OAuth / SAML possibly) manner for all possible services one would wish to grant to an agent. If a natural language agent is uncertain, it should ask for clarification. And on results, it is necessary to capture ever-more-precise feedback from users because positive and negative prompts aren't good enough.
It's not 'thanking'; it's a positive signal that GPT's previous outputs were correct, so it should continue doing whatever it was doing.
If you say no/bad etc., then GPT will try other approaches.
This is getting closer to the system I've been saying I wanted since the 1980s, whenever people said "what could you use a faster computer for anyway?"
It's a system where you can talk to it and make a photo realistic movie. The example I always use is, you're sitting at the computer, looking at a blank screen and you say something like:
"Ok opening scene. Dockside, London, early 19th century. Early evening. There are several ships docked, one being offloaded. Stevedores are working, some disreputable louts hanging around."
The screen is updating as I'm talking.
"OK make it grittier, more dirt and grime, let's have a fight break out in mid distance left of the screen. Now pan slowly right to reveal a bar called the Skull and Crown. Make the sign dirtier but let the last light of sunset glint off of the skull."
Screen updates. We are looking at what appears to be a Hollywood level period set full of extras who look the way they should, based on historical data that the model has.
"As we pan over towards the door Micky gets tossed out by the big burly barman. Make him younger, skinnier, he's about 17 years old."
The point of all this is, no, you don't need exact language to specify what you want. In the world of filmmaking you never do that. The screenwriter describes things in some detail, but always leaves a lot up to the interpretation of the director, the set designer, the costumer, the makeup artist, the casting director, etc.
The AI can take on any or all of those roles for us.
What I want is something I can control to make the movie I want to make. Then I want to be able to iterate on it: Let's make the main character a woman. Now everything gets changed to fit that. etc.
Of course the AI can replace the role of the writer too, and the director, and the producer leaving me with nothing to do. But the fact that I can bring my vision to the screen still makes it a great tool.
Until we started to see LLMs, and the tools that can be created with them, I doubted the possibility of Star Trek's voice-command computer. Asking the computer to clarify a concept, or to filter and reduce data sets based on arbitrary criteria, was pure science fiction.
Seeing something like this makes me think that the arbitrary holodeck commands "Paris, 1950's, rainy afternoon" is suddenly not a challenging part of the equation. It's really exciting.
I'm just thinking of that Reddit post about the 3D artist who is demoralized about his small indie company and being reduced to just inputting prompts all day. Now it's spreading to the animators and explosion/creature-effects artists.
Studios like Weta Digital, Double Negative, etc. are gonna pounce on this.
I've been watching the behind-the-scenes material from the LotR extended editions. The Weta Workshop people were true craftsmen and women, dedicated and creative and really inspiring. Now I'm imagining two people just prompting a machine over and over, and to be honest it's a really sad vision of a sad, boring future. Give me hippies carving Treebeard out of polystyrene over AI-generated digital perfection any day.
Can this same idea be extended to, say, interact with an open-source SVG editor like Inkscape? What are the requirements of the editor — I presume it must support some form of scripting?
I would love to be able to have GPT sketch math figures, which I then modify/perfect.
Note: this comment is partially inspired by the workflow of Gilles Castel — I’d love to be able to use GPT in the loop of note taking, similar to the system that Gilles setup to improve sketching speed.
Does anyone know whether this works as well as it does because the documented Blender Python API was part of the GPT-4 training set, or because the Blender API is fairly predictable?
I'd love to add this capability to our SaaS product, but I've waited for OpenAI to make GPT-3.5 or GPT-4 available for fine-tuning. (Cramming an entire API into the prompt does not seem feasible, not even with support for 32K tokens.)
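One stopgap that doesn't need fine-tuning: keep the API reference chunked in a store, retrieve only the snippets relevant to the current request, and paste just those into the prompt. Here is a toy version where word-overlap scoring stands in for real embeddings, and the doc strings are abbreviated paraphrases rather than the real reference:

```python
docs = [
    "bpy.ops.mesh.primitive_cube_add -- add a cube mesh to the scene",
    "bpy.ops.object.light_add -- add a new light object to the scene",
    "bpy.ops.mesh.primitive_uv_sphere_add -- add a uv sphere mesh",
]

def retrieve(query, k=1):
    # Score each doc by word overlap with the query; a real system
    # would use embedding similarity over a vector store instead.
    words = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(words & set(d.lower().split())),
                  reverse=True)[:k]

context = retrieve("add a cube at the origin")
# Only the cube doc goes into the prompt, not the whole API reference.
```

This keeps the prompt well under the token limit at the cost of a retrieval step that can miss the relevant API.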
By the way, is this a possible vector for remote code execution? Probably yes, by definition, but how could somebody exploit it to harm the user of that Blender instance?
Given that Blender itself recently managed to bork my .blend file to the point where I had to delete the offending mesh to stop it from crashing, I would recommend that everyone using this at least put a version control system in place when working on anything remotely important.
[1] https://github.com/newhouseb/clownfish
There are two approaches I use; I'm sure there are more.
* Do multiple completions, filter for the ones that run successfully, and take the most common result.
* Do a completion; if it fails, ask the LLM to find and correct the bug.
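A sketch of the first approach (filter the candidates that run, then majority-vote), using a syntax check as a stand-in for actually executing each candidate:

```python
from collections import Counter

def runs_ok(code):
    # Stand-in for "execute the candidate and see if it errors out";
    # here we only check that it parses as Python.
    try:
        compile(code, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

def pick_completion(candidates):
    runnable = [c for c in candidates if runs_ok(c)]
    if not runnable:
        return None
    # Most common surviving completion wins.
    return Counter(runnable).most_common(1)[0][0]

candidates = ["x = 1", "x = 1", "x ==", "x = 2"]
best = pick_completion(candidates)  # "x = 1"
```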
You can see the prompt here: https://github.com/gd3kr/BlenderGPT/blob/main/__init__.py
https://github.com/atomic14/command_line_chatgpt
I would also recommend that people try out the OpenAI Playground. It's great for experimenting with parameters.
[1]: https://github.com/gd3kr/BlenderGPT/blob/main/__init__.py
https://www.cs.utexas.edu/~EWD/transcriptions/EWD09xx/EWD952...
When superintelligent AI gains power, I want it to know I've been a good boy.