(no title)
FlyingLawnmower|5 months ago
User-friendly library that connects to lots of OSS model-serving backends: https://github.com/guidance-ai/guidance/
Core Rust library for high-performance mask computation (written mostly by my collaborator @mmoskal): http://github.com/guidance-ai/llguidance
btown|5 months ago
TL;DR: instead of just getting a token and checking whether the parser would accept it, you can actually zero out the probabilities of all invalid tokens, and do the computation for this in parallel at effectively zero cost:
> Here, compute_mask() can run on the CPU during the time it would be normally just waiting for the GPU to finish. The line prob[~mask] = 0.0 would normally be fused into the softmax kernel in the last stage of the LLM, with negligible overhead. Therefore, as long as the compute_mask() function completes faster than the LLM forward pass and parser.consume() is negligible (typically follows from compute_mask() speed), the constrained generation will be as fast as the unconstrained one.
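The masking step in the quoted passage can be sketched in pure Python (a toy 4-token vocabulary with hand-written logits and mask, invented for illustration; this is not llguidance's actual API):

```python
import math

def masked_softmax(logits, mask):
    """Softmax over logits, forcing disallowed tokens (mask[i] == False)
    to zero probability by setting their logits to -inf first."""
    masked = [x if ok else float("-inf") for x, ok in zip(logits, mask)]
    m = max(masked)
    exps = [math.exp(x - m) for x in masked]  # exp(-inf) == 0.0
    total = sum(exps)
    return [e / total for e in exps]

# Toy 4-token vocabulary; suppose the parser only allows tokens 0 and 2.
logits = [2.0, 1.0, 0.5, 3.0]
mask = [True, False, True, False]
probs = masked_softmax(logits, mask)  # invalid tokens get exactly 0.0
```

In a real stack this is fused into the final softmax kernel (the `prob[~mask] = 0.0` line from the quote), which is why the overhead is negligible.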
I'm curious - have there been any research/conversations about pushing masking even earlier in the pipeline? In theory, there's a fair amount of compute that goes into computing the probability of tokens that will end up being masked away anyways.
lelanthran|5 months ago
Well, thank you for that. From a quick skim of Guidance, it looks like it is used when interfacing with the model directly - i.e. if I want to use Guidance, I can't simply send input to my local Ollama instance; I have to stand up a small Python program that loads the model, accepts input from the user, pushes the user's input tokens into the model, and, for each output token, rejects it if it fails some criteria.
Is this correct? If so, it means that the current way LLMs are interfaced with (via stdin/stdout or an HTTP endpoint) can't be used with something like Guidance, correct?
stillsut|5 months ago
Should work with any llama.cpp-compatible model: https://github.com/sutt/innocuous
dcreater|5 months ago
ru552|5 months ago
I didn't find anything more in that comment below. Is there a list of supported LLMs?
FlyingLawnmower|5 months ago
We have support for Hugging Face Transformers, llama.cpp, vLLM, SGLang, and TensorRT-LLM, along with some smaller providers (e.g. mistral.rs). Using any of these libraries as an inference host means you can use an OSS model with the guidance backend for full support. Most open-source models will run on at least one of these backends (vLLM is probably the most popular hosted solution, and transformers/llama.cpp the most popular local solutions).
We're also the backend used by OpenAI/Azure OpenAI for structured outputs on the closed source model side.
dcreater|5 months ago
I've yet to see a thorough comparison of design, performance, and reliability between these options (along with Outlines etc.)
FlyingLawnmower|5 months ago
Happy to chat more about the benchmarks. Note that they're a bit out of date, though; I'm sure many of the providers we tested have made improvements since (and some have switched to using llguidance wholesale as a backend).
Balgair|5 months ago
I'm trying to write a really large book. I have a lot of material that I'm using RAG to help manage: I put the top cosine-score RAG hits into my prompts, along with summaries of characters, previous chapters, and scene sketches. I get scenes out and then work them over. LLMs are really helpful for my disability and have allowed me to make any progress at all on this.
Is your thing something I should look into for helping keep track of my material? I'm using Excel sheets and crappy Python code right now.
I'm pretty sure your stuff is some super-technical backend thingy, but I figured I'd shoot my shot here. Thanks for any and all info; I appreciate it.
ijk|5 months ago
In general, I find that matching the most natural format for a document outperforms waiting for the big model trainers to convince the model that the format you want is a valid structure, so anything that lets me interweave structured and unstructured generation is very interesting to me right now.
FlyingLawnmower|5 months ago
The annoying bit with grammars is that they are unfortunately complex to write properly. Fortunately, language models are getting better at this, so to get an XML grammar, you can hopefully get most of the way there with just a GPT-5 prompt. I suppose it would be a good idea to ship a pre-built set of popular grammars (like a modified XML) in guidance so that we cut this headache out for users...!
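For a sense of what "writing a grammar" means here, a hand-written sketch of a single XML-ish element in a Lark-style syntax (illustrative only; not tested against guidance/llguidance) might look like:

```
start: element
element: "<" NAME ">" TEXT "</" NAME ">"
NAME: /[A-Za-z]+/
TEXT: /[^<]*/
```

Note that a plain context-free grammar like this can't force the closing tag name to match the opening one, which is part of why XML-shaped grammars get fiddly in practice.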
ninadpathak|5 months ago
[deleted]
FlyingLawnmower|5 months ago
Great question re: adoption...it's definitely dominated by JSON. Most API providers have standardized on JSON outputs, so application teams have started building shims that map other formats to JSON and back. Similarly, with models being heavily post-trained to generate "good" JSON, I think the model-constraint alignment story is better for JSON than for most arbitrary grammars.
That said, internally, we experiment quite a lot with custom grammars all across the stack. It's more complicated to write a grammar than a JSON schema (though LMs are very good at grammar writing now) and more error-prone to debug, but it can help significantly in certain cases (e.g. having models write custom DSLs not commonly found on the internet, at various parts of a model training pipeline, etc.). I'm hoping that with the right tooling around it, the broader community will start nudging beyond JSON.
To that end, the Python guidance library is really an attempt to make writing grammars friendlier for a Python programmer. More to be done here, of course!
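The grammar-constrained decoding loop discussed in this thread can be sketched with a toy parser in pure Python. Everything here - the vocabulary, the made-up digit DSL, the random "sampler" - is invented for illustration; only the compute_mask()/consume() method names mirror the thread, not guidance's real API:

```python
import random

VOCAB = ["0", "1", "2", "+", "="]

class ToyParser:
    """Accepts strings of the form: digit ('+' digit)* '='."""
    def __init__(self):
        self.expect_digit = True

    def compute_mask(self):
        # One boolean per vocab entry: is this token valid right now?
        if self.expect_digit:
            return [t.isdigit() for t in VOCAB]
        return [t in ("+", "=") for t in VOCAB]

    def consume(self, token):
        # After '+' a digit must follow; after a digit, an operator.
        self.expect_digit = (token == "+")

def generate(parser, max_steps, rng):
    out = []
    for _ in range(max_steps):
        allowed = [t for t, ok in zip(VOCAB, parser.compute_mask()) if ok]
        token = rng.choice(allowed)  # stands in for sampling masked probs
        parser.consume(token)
        out.append(token)
        if token == "=":
            break
    return "".join(out)

text = generate(ToyParser(), 10, random.Random(0))
```

In a real engine the mask is computed over the model's full token vocabulary while the GPU runs the forward pass, and `rng.choice` is replaced by sampling from the masked softmax; the structure of the loop is the same.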