recently i’ve been playing around with local models, mostly because i kept exhausting my ide credits and hitting rate limits really fast
up until recently, i’d been using qwen, a 3b parameter model my computer handled surprisingly well. for context, it’s a pretty well-specced macbook, so no surprise there
my only issue was that in some situations, i was smarter than the model itself, which annoyed me
so i decided to install a 14b parameter model, a 9 gigabyte download. as soon as i ran it with ollama in my cli, my fans began screaming… then they stopped
ladies and gentlemen, my computer no longer turns the fuck on🙂
p.s: send help, this is serious
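p.p.s: for anyone about to try the same thing, here’s the back-of-the-envelope check i wish i’d done first. the ~0.6 GB-per-billion-params figure is my own rough rule of thumb for 4-bit quantized models (it roughly lines up with the 9 GB download for the 14b one), not an official number:

```shell
# very rough ram estimate for running a 4-bit quantized model locally
# ~0.6 GB per billion params for the weights, plus ~2 GB of overhead
# (rule of thumb, not an official figure)
params_b=14
est_gb=$(( params_b * 6 / 10 + 2 ))
echo "a ${params_b}b model needs roughly ${est_gb} GB of free memory"
```

if that estimate lands anywhere near your machine’s total ram, maybe stick with the smaller model.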
5 Comments
“Send help” is the right expression 😂
help.send()
😂😂😂 classic question, who send you?
I'm about to do the same tonight cos i keep hitting my limit. Anyway, thanks for the heads up. I don't even have a macbook to risk.
You guys can even run models locally?? 😫
skill issue, lol
dissing me on my forum? children of these days lack respect oh
Beats me 🫠
Yup, try ollama