
Show HN: Finetune LLaMA-7B on commodity GPUs using your own text
by lxe on Hacker News.
I’ve been playing around with https://ift.tt/xFXb5z3 and https://ift.tt/XBYHd5s, and wanted to create a simple UI where you can just paste text, tweak the parameters, and finetune the model quickly on a modern GPU. To prepare the data, simply separate your samples with two blank lines. There’s an inference tab, so you can test how the tuned model behaves. This is my first foray into the world of LLM finetuning, Python, Torch, Transformers, LoRA, PEFT, and Gradio. Enjoy!
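
For anyone curious what the finetuning flow looks like under the hood, here is a rough, minimal sketch of an alpaca-lora-style run with Transformers + PEFT, including splitting the pasted text into samples on two blank lines. The model checkpoint, hyperparameters, file paths, and 8-bit loading path are my assumptions for illustration, not necessarily the exact settings the UI uses.

```python
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Data prep: the pasted text is split into samples wherever two blank lines appear.
raw = open("train.txt").read()
samples = [s.strip() for s in raw.split("\n\n\n") if s.strip()]

model_name = "decapoda-research/llama-7b-hf"  # assumed LLaMA-7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 8-bit so it fits on a single commodity GPU (needs bitsandbytes).
model = AutoModelForCausalLM.from_pretrained(
    model_name, load_in_8bit=True, device_map="auto"
)
model = prepare_model_for_int8_training(model)

# Wrap the frozen base model with small trainable LoRA adapters on the attention projections.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = Dataset.from_dict({"text": samples}).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
    ),
    # mlm=False gives standard causal-LM next-token targets.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Only the LoRA adapter weights get saved, so the output is a few MB rather than a full 7B checkpoint.
model.save_pretrained("lora-out")
```

The nice part of LoRA is that last step: you keep the base model untouched and only train and save the low-rank adapters, which is what makes iterating on your own text practical on a single GPU.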
