Ask HN: How is the community using LLMs for data cleaning/enriching/structuring?
7 points| jarulraj | 2 years ago
--- Prompt to GPT-3.5
You are given a block of disorganized text extracted from the GitHub user profile of a user using an automated web scraper. The goal is to get structured results from this data. Extract the following fields from the text: name, country, city, email, occupation, programming_languages, topics_of_interest, social_media. If some field is not found, just output fieldname: N/A. Always return all the 8 field names. DO NOT add any additional text to your output. The topics_of_interest field must list a broad range of technical topics that are mentioned in any portion of the text. This field is the most important, so add as much information as you can. Do not add non-technical interests. The programming_languages field can contain one or more programming languages out of only the following 4 programming languages - Python, C++, JavaScript, Java. Do not include any other language outside these 4 languages in the output. If the user is not interested in any of these 4 programming languages, output N/A. If the country is not available, use the city field to fill the country. For example, if the city is New York, fill the country as United States. If there are social media links, including personal websites, add them to the social_media field. Do NOT add social media links that are not present. Here is an example (use it only for the output format, not for the content):
name: Pramod Chundhuri
country: United States
city: Atlanta
email: pramodc@gatech.edu
occupation: PhD student at Georgia Tech
programming_languages: Python, C++
topics_of_interest: PyTorch, Carla, Deep Reinforcement Learning, Query Optimization
social_media: https://pchunduri6.github.io
----
[1] https://en.wikipedia.org/wiki/Data_wrangling
[2] https://github.com/pchunduri6/stargazers-reloaded
[3] https://medium.com/evadb-blog/stargazers-reloaded-llm-powered-analyses-of-your-github-community-aef9288eb8a5
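A minimal sketch of consuming the colon-delimited output the prompt above requests. The parsing helper and sample reply here are my own illustration, not code from the stargazers-reloaded repo; the field names come from the prompt.

```python
# Parse the "field: value" lines the prompt asks the LLM to emit.
# Missing fields default to "N/A", mirroring the prompt's instructions.
EXPECTED_FIELDS = [
    "name", "country", "city", "email", "occupation",
    "programming_languages", "topics_of_interest", "social_media",
]

def parse_profile(raw: str) -> dict:
    """Turn the LLM's colon-delimited reply into a dict of the 8 fields."""
    result = {field: "N/A" for field in EXPECTED_FIELDS}
    for line in raw.splitlines():
        if ":" not in line:
            continue
        key, value = line.split(":", 1)  # split once, so URLs keep their colons
        key = key.strip().lower()
        if key in result:
            result[key] = value.strip()
    return result

# Hypothetical model reply with some fields missing:
sample = """name: Pramod Chundhuri
country: United States
city: Atlanta
programming_languages: Python, C++"""
parsed = parse_profile(sample)
```

Splitting on the first colon only is what keeps values like `social_media: https://...` intact.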
nbrad|2 years ago
I've found GPT-3.5 more than adequate at inferring schemas and filling them in for conventional use cases like chat-based forms (as an alternative to Google Forms/TypeForm); my code and prompts are available at: https://github.com/nsbradford/talkformai. I've also used this to extract structured data from code for LLM coding agents (e.g., "return the names of every function in this file").
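The "function names from a file" use case can be sketched as a prompt-and-parse pair. This is my own assumed prompt and format, not nsbradford's actual implementation; `call_llm` is a hypothetical stand-in for a real API call.

```python
import json

# Assumed prompt: ask for a bare JSON array so the reply is machine-parseable.
PROMPT = ("Return a JSON array containing the name of every function "
          "defined in this file. Output only the JSON array.\n\n{code}")

def extract_function_names(code: str, call_llm) -> list:
    """Send the file to the LLM and parse its JSON-array reply."""
    reply = call_llm(PROMPT.format(code=code))
    return json.loads(reply)
```

Asking for strict JSON (and nothing else) is the same trick as the "DO NOT add any additional text" instruction in the OP's prompt: it makes the output safe to feed straight into downstream code.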
In my opinion, more and more APIs are likely to become unstructured and be reduced to LLM agents chatting with each other; I wrote a brief blog about this here: https://nickbradford.substack.com/p/llm-agents-behind-every-...
AlwaysNewb23|2 years ago
jarulraj|2 years ago
It would be great if you could share an example of the inconsistent output problem -- we also faced it. GPT-4 was much better than GPT-3.5 in output quality.
PaulHoule|2 years ago
jarulraj|2 years ago
The results were pretty good: https://gist.github.com/gaurav274/506337fa51f4df192de78d1280...
Another interesting aspect was the money spent on LLMs. We could have directly used GPT-4 to generate the "golden" table; however, it is a bit expensive, costing $60 to process the information of 1,000 users. To maintain accuracy while reducing costs significantly, we set up an LLM model cascade in the EvaDB query, running GPT-3.5 before GPT-4, leading to an 11x cost reduction ($5.5).
Query 1: https://github.com/pchunduri6/stargazers-reloaded/blob/228e8...
Query 2: https://github.com/pchunduri6/stargazers-reloaded/blob/228e8...
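The cascade idea above can be sketched in plain Python: try the cheap model first and escalate to the expensive one only when the cheap output fails a validity check. The check shown (all 8 field names present) is my own guess at a plausible criterion, and `cheap_model`/`expensive_model` are hypothetical callables, not the actual EvaDB query logic.

```python
# The 8 fields the extraction prompt requires in every reply.
REQUIRED_FIELDS = (
    "name", "country", "city", "email", "occupation",
    "programming_languages", "topics_of_interest", "social_media",
)

def is_valid(output: str) -> bool:
    """Accept the cheap model's output only if every field name appears."""
    return all(field + ":" in output for field in REQUIRED_FIELDS)

def cascade(prompt: str, cheap_model, expensive_model) -> str:
    """Run the cheap model first; fall back to the expensive model on failure."""
    out = cheap_model(prompt)
    if is_valid(out):
        return out  # most rows stop here, which is where the savings come from
    return expensive_model(prompt)
```

The cost reduction comes from the fraction of rows the cheap model handles: if GPT-3.5 passes the check most of the time, GPT-4 is only paid for on the residue.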
tmaly|2 years ago
Having the LLM generalize responses, look for patterns, and rank by frequency.
jarulraj|2 years ago
What were the interesting problems you faced in processing the survey data?
If possible, can you share the prompt?