item 38408680

rreyes1979 | 2 years ago

I am as n00b as someone can be on this (although I've been doing software engineering for more than 20 years now), so please ignore any nonsense I may express.

My intention is to work on news clustering and summarization. So far, just by using some "clever" prompts, I have been able to generate some pretty good news summaries; I have not started clustering yet. But I have used GPT-4 so far, and my educated guess is that soon enough I will hit some quality/cost limits. So, fine-tuning a Llama 2 model with (hopefully) small datasets to improve cost and quality on my specific tasks seems like a reasonable path forward.

Does that make sense? Thank you for your answer!!!
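For the clustering part the poster has not started yet, a classic non-LLM baseline is often worth trying first. A minimal sketch, assuming scikit-learn is available; the headlines are made-up examples, and TF-IDF + KMeans is just a cheap starting point before reaching for embedding models:

```python
# Simple news-clustering baseline: TF-IDF vectors + KMeans.
# Headlines below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

headlines = [
    "Central bank raises interest rates to curb inflation",
    "Inflation fears push central bank toward rate hike",
    "Local team wins championship after dramatic final",
    "Star striker scores twice as team claims the title",
]

# TF-IDF rows are L2-normalized, so Euclidean KMeans behaves
# roughly like cosine-based clustering here.
vectors = TfidfVectorizer(stop_words="english").fit_transform(headlines)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Headlines about the same story should land in the same cluster.
print(labels)
```

Sentence-embedding models would likely separate stories more reliably than raw TF-IDF, but this keeps the pipeline cheap while the summarization side is still prompt-based.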

ilaksh | 2 years ago

I think whether you will be able to maintain the quality depends on the task and on the result of the fine-tune, which is mainly driven by the training dataset and the ability of the base model.