top | item 35414566

(no title)

sgrove | 2 years ago

I’ve been doing the same thing with a number of projects, building chains of prompts from one api call to another e.g. for ConjureUI (self-creating, iterable UIs that come into existence, get used, then disappear) https://youtu.be/xgi1YX6HQBw how it works to generate a full self-contained react component:

1. Take user task

2. Pass it to a prompt that requests a Product UI description of a component

3. Pass 1+2 to another that asks for which npm packages to use

4. Pass 1+2+3 to a templated prompt to write the code in a constrained manner

5. Run 4 in a sandbox to see if there are errors, if so pass it back to #4, looping

It’s currently quite slow, but that’s an implementation detail I think.

discuss

order

dorilama|2 years ago

> 3. Pass 1+2 to another that asks for which npm packages to use

I see a fresh new generation of supply chain attack, or more prompt engineering to hopefully filter out malicious packages

sgrove|2 years ago

Yes, that wasn't a priority here, but I also don't think it's much of a concern with e.g. GPT-4's `system` vs `assistant` vs `user` roles. Would be another thing to work on, but nothing worth doom and gloom.

Although, 'script(/injection) kiddie' will be an interesting phenomenon in the future...

lupire|2 years ago

Once the malicious package is added to the universe of acceptable packages, it doesn't matter much. Prompt engineering is not a solution you that.