item 36409201

Show HN: Autolabel, a Python library to label and enrich text data with LLMs

153 points | nihit-desai | 2 years ago | github.com

Hi HN! I'm excited to share Autolabel, an open-source Python library to label and enrich text datasets with any Large Language Model (LLM) of your choice.

We built Autolabel because access to clean, labeled data is a huge bottleneck for most ML/data science teams. The most capable LLMs are able to label data with high accuracy, and at a fraction of the cost and time compared to manual labeling. With Autolabel, you can leverage LLMs to label any text dataset with <5 lines of code.
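As a self-contained illustration of the underlying idea (not Autolabel's actual interface), the "label a dataset with an LLM" workflow boils down to building a prompt from guidelines plus few-shot examples, asking the model, and parsing its answer into a label. Here `fake_llm` is a made-up stand-in for a real model call:

```python
# Minimal sketch of few-shot LLM labeling. fake_llm is a stub standing in for
# a real LLM call (OpenAI, a self-hosted model, etc.); it is illustrative only.

FEW_SHOT = [
    ("I loved this movie, would watch again!", "positive"),
    ("Total waste of two hours.", "negative"),
]

def build_prompt(guidelines, examples, text):
    # Prompt = task guidelines, then few-shot examples, then the query text.
    lines = [guidelines]
    for ex_text, ex_label in examples:
        lines.append(f"Text: {ex_text}\nLabel: {ex_label}")
    lines.append(f"Text: {text}\nLabel:")
    return "\n\n".join(lines)

def fake_llm(prompt):
    # Stand-in for a real model: a naive keyword heuristic on the query text.
    query = prompt.rsplit("Text:", 1)[1].lower()
    return "positive" if "great" in query else "negative"

def label_dataset(texts, guidelines):
    return [fake_llm(build_prompt(guidelines, FEW_SHOT, t)) for t in texts]

labels = label_dataset(
    ["The acting was great", "Plot made no sense"],
    "Classify the sentiment of the movie review as positive or negative.",
)
print(labels)  # -> ['positive', 'negative']
```

In the real library the model call, prompt templating, and output parsing are handled for you, which is where the "<5 lines of code" claim comes from.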

We’re eager for your feedback!

22 comments

bomewish | 2 years ago
What can this do that the new ‘calling functions’ feature can’t? It seems to be roughly the same thing?
nihit-desai | 2 years ago
Function calling, as I understand it, makes LLM outputs easier for downstream APIs/functions to consume (https://openai.com/blog/function-calling-and-other-api-updat...).

Autolabel is quite orthogonal to this - it's a library that makes interacting with LLMs very easy for labeling text datasets for NLP tasks.

We are actively looking at integrating function calling into Autolabel, though, to improve label quality and support downstream processing.

devjab | 2 years ago
This is very interesting to me. We spent significant time “labelling” data when I worked in public-sector digitalisation. Basically, what we did was the LLM part manually, with engines like this on top of it. Having used ChatGPT to write JSDoc documentation for a while now, and having been very impressed with how good it is when it understands your code through good naming conventions, I'm fairly certain this will be the future of “librarian”-style labelling of case files.

But the key issue is going to be privacy. I’m not big on LLM, so I’m sorry if this is obvious, but can I use something like this without sending my data outside my own organisation?

oli5679 | 2 years ago
You can self-host an open-source model. llama.cpp is a very popular project with great docs.

https://github.com/ggerganov/llama.cpp

You need to be careful about licensing - for some of these models it's a legal grey area whether you can use them in commercial projects.

The 'best' models require fairly large hardware to run, but a popular compression technique at the moment is 'quantization': using lower-precision model weights. I find it a bit hard to evaluate which open-source models are better than others, and how they are affected by quantization.
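The core idea behind quantization can be shown in a toy sketch: map float weights to small integers with a single scale factor, then dequantize. Real schemes (e.g. the 4-bit formats in llama.cpp) are considerably more sophisticated, but the round-trip-error trade-off is the same:

```python
# Toy symmetric int8 quantization: one scale factor for the whole tensor.
# Reconstruction error per weight is bounded by scale / 2.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]  # integer codes in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 1.0, -0.98]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)        # 8-bit integer codes
print(max_err)  # small, bounded by scale / 2
```

Storing 8-bit (or 4-bit) codes instead of 32-bit floats is what lets large models fit on modest hardware; the open question the comment raises is how much that reconstruction error costs in model quality.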

You can also use the OpenAI API. They don't train on your data. They store it for 30 days for fraud monitoring and then delete it. It doesn't seem hugely different from using something like Slack/Google Docs/AWS.

I think some people imagine their data will end up in the knowledge base of GPT-5 if they use OpenAI products, but that would be a clear breach of the TOS.

https://openai.com/policies/api-data-usage-policies

nihit-desai | 2 years ago
Yep! I totally understand the concerns around not being able to share data externally. The library currently supports open-source, self-hosted LLMs through Hugging Face pipelines (https://github.com/refuel-ai/autolabel/blob/main/src/autolab...), and we plan to add more support here for models like llama.cpp that can run without many constraints on hardware.
viswajithiii | 2 years ago
Thank you for open-sourcing this! It seems very useful, especially the confidence estimation, which lets you use LLMs for the cases they handle well and fall back to manual labelling for the rest.
msp26 | 2 years ago
>Refuel provides LLMs that can compute confidence scores for every label, if the LLM you've chosen doesn't provide token-level log probabilities.

How does this work exactly?
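For models that do expose token-level log probabilities, the common recipe is to sum the logprobs of the tokens that make up the generated label and exponentiate, giving the joint probability of the label. A sketch, with made-up token strings and logprob values (how Refuel's own models score labels without logprobs is not described in the thread):

```python
import math

# Turn per-token log probabilities of a generated label into one confidence
# score. The tokens and their logprobs here are invented for illustration.
label_tokens = [("pos", -0.05), ("itive", -0.10)]

log_p = sum(lp for _, lp in label_tokens)
confidence = math.exp(log_p)  # joint probability of the label tokens
print(round(confidence, 3))   # -> 0.861
```

A low joint probability then signals a label worth routing to human review.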

isawczuk | 2 years ago
You should read OpenAI's terms and conditions carefully before using it to build custom datasets.
Takennickname | 2 years ago
No you don't. OpenAI didn't ask for permission when it took everyone's work to create GPT.

Pirate all LLMs. They're all yours anyway.

applgo443 | 2 years ago
How do the confidence scores work?
voz_ | 2 years ago
You just posted this here https://news.ycombinator.com/item?id=36384015

It's one thing to Show HN / share; it's another thing to spam it with your ads.

nihit-desai | 2 years ago
Hi!

The earlier post was a report summarizing LLM labeling benchmarking results. This post shares the open source library.

Neither is intended to be an ad. Our hope in sharing these is to demonstrate how LLMs can be used for data labeling, and to get feedback from the community.