My meagre experience and knowledge of local AI have led me to two conclusions:
1: You need powerful hardware
2: You need to run a version of DeepSeek larger than 1.5b
No doy, right? That’s exactly what every other YouTube video I have watched recommends. The bottom line for me has been running the 7b (that’s the 7 billion parameter version), and it kills my MS01 12600 with 32 GB of RAM. Well, it makes the fan run really loud, anyway. I would recommend running the LLM with a couple of GPUs and a decent CPU, as well as plenty of disk space to download the upper tier versions of the model. I imagine it would go much better with a graphics card of some type to offload the inference onto while it processes the question and generates the answer, but that’s not something I have access to right now.
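A rough back-of-envelope estimate shows why the 7b model strains a 32 GB machine. This is just a sketch (the actual runner, quantisation format, and overhead all vary), assuming the common cases of 16-bit and 4-bit weights:

```python
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Rough memory needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# fp16 weights use 2 bytes per parameter:
print(f"7b   fp16: {model_memory_gb(7, 2):.1f} GB")    # ~13 GB
print(f"1.5b fp16: {model_memory_gb(1.5, 2):.1f} GB")  # ~2.8 GB
# 4-bit quantisation cuts that to roughly half a byte per parameter:
print(f"7b   q4:   {model_memory_gb(7, 0.5):.1f} GB")  # ~3.3 GB
```

On top of the weights themselves, the context cache and the operating system eat several more gigabytes, so a 7b model at full precision leaves little headroom on 32 GB; quantised builds are what make it tolerable on a CPU-only box like mine.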
That being said, I have a lot of older gear around here, and I am going to spend the next few days experimenting with what I have in the hopes of getting something usable and reasonably quick. I will check back in when I can!