  #11  
09-15-2025, 11:43 AM
Ekco
Planar Protector


Join Date: Jan 2023
Location: Felwithe
Posts: 5,133

yeah, i was wondering about that. what about fine-tuning an already trained model? you'd still need a really nice home lab

didn't know about this, but what's the point? just max out your vram and load a model someone else already trained

Quote:
Parameter-Efficient Fine-Tuning (PEFT): This is a game-changer for consumer hardware. Techniques like QLoRA significantly reduce VRAM requirements by fine-tuning only a small portion of the model. Using QLoRA, a 7B model can often be fine-tuned on a single GPU with less than 10-12 GB of VRAM.
7B model? who gives a shit.
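
for anyone actually curious how that works: the base model gets loaded in 4-bit and frozen, and you only train tiny LoRA adapter matrices bolted onto a few layers. rough sketch with the huggingface stack (transformers + peft + bitsandbytes). untested, and the model id and lora settings are just example values:

Code:
# QLoRA sketch: 4-bit frozen base model + small trainable LoRA adapters.
# Assumptions: transformers/peft/bitsandbytes installed; the model id and
# hyperparameters below are illustrative placeholders, not recommendations.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # example 7B model, swap in your own

# Load the base weights quantized to 4-bit NF4, so the frozen model
# occupies roughly 4-5 GB instead of ~14 GB in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters: these small low-rank matrices are the only
# trainable parameters; the quantized base model stays frozen.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (example value)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # which projections get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # usually well under 1% of total params

from there you hand it to a normal training loop. the frozen 4-bit base plus the tiny adapters is the whole reason a 7B fits in the 10-12 GB the quote is talking about.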
__________________
Ekco - 60 Wiz // Oshieh - 60 Dru // Kusanagi - 54 Pal // Losthawk - 52 Rng // Tpow - 54 Nec // Tiltuesday - EC mule