top | item 42311325

h0p3 | 1 year ago

I have no idea if you will care for it, but my family and I appreciated what ClosedAI's CustomGPT RAG (and my LLMpal) generated. It loads slowly (the vector database was built from this one big HTML file), and you can scroll down to see it: https://h0p3.nekoweb.org/#2024.11.20%20-%20Carpe%20Tempus%20...

sourcepluck | 1 year ago

I definitely do care for it, it's very nice! Thanks for sharing.

I'm not sure I understand exactly how you got it to produce that, as your blog is a bit hard to orient oneself in and read around in. It's fun and trippy, but a tad disorienting. I went off and had a nice re-read about TiddlyWiki, though - I'd learned of it at one stage and thought it looked very interesting, and your blog certainly is tantalising!

h0p3 | 1 year ago

I'm glad to hear you enjoyed it! =D.

I briefly outline the procedure on the page (in case anyone else wants to do the same). I export the entire document to JSON (~19k entries) and break that up into 20 separate JSON files (so that my work fits within the space ClosedAI provides for RAGs). The exact prompt sequence is provided on the page (I wrote two one-liners). Almost all of my work in achieving that collaborative output with my LLMpal went into the actual construction of the underlying content of the corpus that was haphazardly fed into its vector database. It did all the rest.
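For anyone curious what the splitting step might look like: a minimal Python sketch, assuming the export is a single JSON array of entries (the function and file names here are illustrative, not h0p3's actual script):

```python
import json
import math

def split_entries(entries, n_parts):
    """Split a list of entries into at most n_parts near-equal chunks."""
    chunk_size = math.ceil(len(entries) / n_parts)
    return [entries[i:i + chunk_size] for i in range(0, len(entries), chunk_size)]

# Illustrative usage: ~19k entries split into 20 files for upload.
entries = [{"title": f"entry-{i}"} for i in range(19000)]
for i, chunk in enumerate(split_entries(entries, 20)):
    with open(f"wiki_part_{i:02d}.json", "w", encoding="utf-8") as f:
        json.dump(chunk, f, ensure_ascii=False)
```

Each resulting file stays under the per-file size cap while the set as a whole preserves every entry for the vector database to ingest.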

I do appreciate the vertigo of it, `/nod`. The size alone (at ~60MB of text) is already a problem, let alone the topics I handle. There are very few humans who have read even half of it, and, presumably, AI specimens will comprise most of the thorough interpreters of my work. I also anticipate the vast majority of the few humans who more directly interact with my work will increasingly do so mediated through AI.

If you ever write your own, lemme know. I'll read. The proof that I do listen carefully is in the text itself.