top | item 46366492


mootoday | 2 months ago

Yeah, I've noticed similar things with my projects. Hard to avoid these days, I think.

Awesome, thanks for being an early adopter!

I got some great feedback already, so I'll continue building it out.

Roadmap:

- Release binaries for Intel Mac, Linux, Windows
- Add / test support for more database engines
- Wrap up the LLM integration

Holidays are coming up, so it may be a productive time, haha.


freakynit | 2 months ago

Awesome... for databases containing a large number of tables, you can pre-process the tables and generate embeddings for each. Then, when the user asks a question in plain English, filter the relevant tables using the built-in vector search and pass only the metadata of those tables as context to the LLM.
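A minimal sketch of that retrieval step, using a toy bag-of-words embedding as a stand-in for a real embedding model (table names, columns, and the `relevant_tables` helper are all hypothetical):

```python
import math
import re

def tokenize(text):
    """Lowercase and split into identifier-like tokens."""
    return re.findall(r"[a-z0-9_]+", text.lower())

def embed(text, vocab):
    """Toy bag-of-words embedding over a fixed vocabulary, L2-normalized.
    Placeholder for a real embedding model in a production setup."""
    tokens = tokenize(text)
    counts = [float(tokens.count(term)) for term in vocab]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def cosine(a, b):
    """Dot product; vectors are already unit-length."""
    return sum(x * y for x, y in zip(a, b))

# Pre-process: embed each table's metadata (name + columns) once up front.
# These schemas are made up for illustration.
tables = {
    "orders": "orders: order_id customer_id total created_at",
    "customers": "customers: customer_id name email signup_date",
    "inventory": "inventory: sku warehouse quantity restock_date",
}
vocab = sorted({t for meta in tables.values() for t in tokenize(meta)})
index = {name: embed(meta, vocab) for name, meta in tables.items()}

def relevant_tables(question, k=2):
    """Rank tables by similarity to the question; only the top-k tables'
    metadata would then be passed to the LLM as context."""
    q = embed(question, vocab)
    ranked = sorted(index, key=lambda name: cosine(q, index[name]), reverse=True)
    return ranked[:k]

print(relevant_tables("which customers placed the biggest orders?"))
```

The same pattern scales up by swapping the toy `embed` for a real embedding model and the linear scan for an approximate-nearest-neighbor index, while keeping the filter-then-prompt flow identical.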

Happy Holidays!