brooksbp | 3 years ago
Along the way, I've been struggling with a question and I hope someone can help me understand how to go about this: how would you build a model that does more than one NLP task? For a simple classifier like input: text (a tweet) and output: text (an emotion), you can fine-tune an existing classifier on such a data set. But, how would you build a model that does NER and sentiment analysis? E.g. input: text (a Yelp review of a restaurant) and output: list of (entity, sentiment) tuples (e.g. [("tacos", "good"), ("margaritas", "good"), ("salsa", "bad")]). If you have a data set structured this way, and want to fine-tune a model, how does that model know how to make use of a Python list of tuples?
axiom92 | 3 years ago
You just need to create [(input, output)] examples in the format you want.
For example
[("a Yelp review of a restaurant", [("tacos", "good"), ("margaritas", "good"), ("salsa", "bad")])]
With enough data, the model should be able to learn to generate the output in the right format.
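Concretely, one common way to set this up (a sketch, not something from this thread; the "entity: sentiment; ..." separator format is an arbitrary choice) is to linearize each list of tuples into a flat target string for text-to-text fine-tuning, then parse the model's generated string back into tuples:

```python
# Sketch: linearize (entity, sentiment) tuples into a flat target string
# suitable for text-to-text fine-tuning, and parse generated output back.
# The "entity: sentiment; ..." format is an arbitrary choice, not a standard.

def linearize(pairs):
    """[('tacos', 'good'), ...] -> 'tacos: good; margaritas: good'"""
    return "; ".join(f"{entity}: {sentiment}" for entity, sentiment in pairs)

def parse(text):
    """Inverse of linearize; tolerant of stray whitespace and junk chunks."""
    pairs = []
    for chunk in text.split(";"):
        if ":" in chunk:
            entity, sentiment = chunk.split(":", 1)
            pairs.append((entity.strip(), sentiment.strip()))
    return pairs

example = [("tacos", "good"), ("margaritas", "good"), ("salsa", "bad")]
target = linearize(example)          # "tacos: good; margaritas: good; salsa: bad"
assert parse(target) == example      # round-trips back to the tuple list
```

The model never sees a Python list at all; it just learns to emit strings in this shape, and the parser turns them back into tuples.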
> Python list of tuples
Things get interesting if you want to generate actual Python code. You can use a large language model with just a few examples of the task to generate such code. For example, see https://reasonwithpal.com/.
Happy to answer more questions!
[1] https://huggingface.co/docs/transformers/model_doc/t5
[2] https://colab.research.google.com/github/huggingface/noteboo...
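The few-shot route mentioned above can be sketched like this (a hypothetical prompt format; the actual LLM call is omitted). Because the demonstrations show literal Python syntax, the completion can be recovered as a real list of tuples with ast.literal_eval:

```python
# Sketch: few-shot prompt asking an LLM to emit a Python list of
# (entity, sentiment) tuples directly. Prompt wording is illustrative only;
# the model call itself is left out.
import ast

EXAMPLES = [
    ("The tacos were great but the salsa was bland.",
     '[("tacos", "good"), ("salsa", "bad")]'),
]

def build_prompt(review):
    parts = [f"Review: {text}\nPairs: {output}" for text, output in EXAMPLES]
    parts.append(f"Review: {review}\nPairs:")
    return "\n\n".join(parts)

prompt = build_prompt("Margaritas were fantastic.")

# Pretend this string came back from the model; literal_eval safely
# turns it into an actual Python list of tuples.
completion = '[("margaritas", "good")]'
pairs = ast.literal_eval(completion)
assert pairs == [("margaritas", "good")]
```

This is the most direct answer to "how does the model make use of a Python list of tuples": it doesn't. It emits text that happens to be valid Python, and your code parses it.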
mothcamp | 3 years ago
Or maybe skip all that and outsource it to GPT: https://imgur.com/a/BQv6C3K
gattilorenz | 3 years ago
[1] http://essay.utwente.nl/91778/1/Middelraad_BA_EEMCS.pdf