recently i’d been playing around with local models, mostly because i kept exhausting my ide credits and hitting rate limits really fast
up until recently, i’d been using qwen, a 3b parameter model my computer handled surprisingly well. for context, it’s a pretty well-specced macbook, so no surprise there
my only issue was that in some situations i was smarter than the model itself, which annoyed me
so i decided to install a 14b parameter model, a 9 gigabyte download. as soon as i ran it through ollama in my cli, my fans began screaming… then they stopped
ladies and gentlemen, my computer no longer turns the fuck on🙂
p.s: send help, this is serious
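p.p.s: in hindsight the back-of-envelope math was the warning sign. here’s a rough sketch of the memory estimate i should have done first (the quantization bits and overhead factor are my assumptions, not ollama’s actual accounting):

```python
# rough sketch: estimate RAM needed for a quantized local model
# assumptions: 4-bit weights, ~20% overhead for kv cache / runtime
def model_size_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Approximate memory footprint: weight bytes plus overhead."""
    bytes_weights = params_billions * 1e9 * bits_per_weight / 8
    return bytes_weights * overhead / 1e9

print(round(model_size_gb(3, 4), 1))   # 3b at 4-bit: ~1.8 GB, comfy
print(round(model_size_gb(14, 4), 1))  # 14b at 4-bit: ~8.4 GB, fans go brrr
```

~8.4 GB lines up with that 9 gigabyte download, and that all has to sit in memory while your macbook also runs everything else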
Comments
You guys can even run models locally?? 😫
skill issue, lol
dissing me on my forum? children of these days lack respect oh
Beats me 🫠