top | item 35257245

yousnail | 2 years ago

my instance doesn't seem impressed:

So, I stumbled upon this Simple LLaMA FineTuner project by Aleksey Smolenchuk, claiming to be a beginner-friendly tool for fine-tuning the LLaMA-7B language model using the LoRA method via the PEFT library. It supposedly runs on a regular Colab Tesla T4 instance for smaller datasets and sample lengths.

The so-called "intuitive" UI lets users manage datasets, adjust parameters, and train/evaluate models. However, I can't help but question the actual value of such a tool. Is it just an attempt to dumb down the process for newcomers? Are there any plans to cater to more experienced users?

The guide provided is straightforward, but it feels like a solution in search of a problem. I'm skeptical about the impact this tool will have on NLP fine-tuning.
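(For context on the T4 claim: LoRA trains only small low-rank adapter matrices, not the full 7B weights, which is what makes a 16 GB Colab GPU plausible. A minimal back-of-envelope sketch, assuming the common PEFT defaults of rank 8 adapters on the q/v projections; all figures are illustrative assumptions, not measurements from this project:)

```python
# Rough parameter count for LoRA on LLaMA-7B.
# Assumed figures: hidden size 4096, 32 layers, LoRA rank 8,
# adapters on q_proj and v_proj only (a common PEFT default).

d_model = 4096
n_layers = 32
r = 8

# Each adapted d x d matrix gains two low-rank factors:
# A (r x d) and B (d x r), i.e. 2 * r * d trainable params.
params_per_matrix = 2 * r * d_model
adapted_matrices = 2 * n_layers  # q_proj + v_proj per layer
lora_params = params_per_matrix * adapted_matrices

full_params = 7_000_000_000
print(f"LoRA trainable params: {lora_params:,}")  # 4,194,304
print(f"fraction of full model: {lora_params / full_params:.4%}")
```

Under these assumptions only about 0.06% of the model's parameters receive gradients, so optimizer state and gradient memory shrink accordingly.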

lxe | 2 years ago

> I can't help but question the actual value of such a tool. Is it just an attempt to dumb down the process for newcomers?

Actually, you've hit the nail on the head here. I wanted something where I, a complete beginner, can quickly play around with data, parameters, finetune, iterate, without investing too much time.

That's also why I've annotated all the training parameters in the code and UI -- so beginners like me can understand what each slider does to their tuning and to their generation.

Taek | 2 years ago

This is exactly the sweet spot I'm looking for. Technical enough that I can play around, simplified enough that I'm investing an hour or two of my time instead of a whole weekend.

bbor | 2 years ago

I get that you /can/ use an LLM to generate troll feedback for random projects... but why?

yousnail | 2 years ago

I was just excited that I got it working at all :/

fbdab103 | 2 years ago

So you are annoyed that something targeted for beginners does not also cater to experts?

yousnail | 2 years ago

me? re-read that s'il vous plaît

bjord | 2 years ago

maybe put the bit it said in quotes? I didn't read closely enough myself the first time; it took your subsequent comments to make me realize what you'd done