Everyone is talking about Deepseek AI. I have a Minisforum MS01, and it’s running Proxmox. Seems like a no-brainer? Right? Hell yeah son let’s do this!
So, here’s the real-quick version:
Have a computer
Wipe whatever’s on there and install Proxmox
Download the Ubuntu Server ISO to your Proxmox host
Create a VM using said ISO – give it plenty of disk space for LLMs (even the small distilled models are a few GB each, and you'll want room to try more than one)
Run the VM and go through the Ubuntu Server process
Update your VM (sudo apt update && sudo apt upgrade -y)
curl -fsSL https://ollama.ai/install.sh | sh
ollama run deepseek-r1:1.5b
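Once the model is up, Ollama also listens on a local REST API (port 11434 by default), which is handy if you want to script against it instead of typing into the terminal. Here's a minimal sketch of hitting the `/api/generate` endpoint and stitching together the newline-delimited JSON stream it returns – endpoint and field names are from Ollama's API docs; `parse_stream` is split out so you can see the parsing shape without a live server:

```python
# Sketch: querying a locally running Ollama server over its REST API.
# Assumes Ollama's default endpoint http://localhost:11434/api/generate.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt, model="deepseek-r1:1.5b"):
    """Send a prompt to Ollama and return the full response text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": True}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_stream(resp)

def parse_stream(lines):
    """Concatenate the 'response' field from each newline-delimited JSON chunk."""
    out = []
    for raw in lines:
        chunk = json.loads(raw)
        out.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(out)
```

Then `print(ask("why does X = Y?"))` from anywhere on the VM gets you an answer without touching the terminal session.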
Basically after this Deepseek should be running in your terminal and you're off to the races. However, if you would prefer a web GUI to interact with your new AI friend, then check out this page, which goes through all the details:
sudo apt install python3-venv -y
python3 -m venv ~/open-webui-venv
source ~/open-webui-venv/bin/activate
pip install open-webui
open-webui serve
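If you'd rather script the venv creation (say, as part of a provisioning script for the VM), Python's standard-library `venv` module does the same thing as the `python3 -m venv` command above – a small sketch:

```python
import os
import venv

def make_env(path):
    """Create a virtual environment at `path` -- equivalent to `python3 -m venv path`."""
    venv.create(path, with_pip=True)
    return os.path.join(path, "bin", "pip")  # pip inside the new env
```

You'd then call `make_env(os.path.expanduser("~/open-webui-venv"))` and use the returned pip path to install open-webui, just as in the commands above.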
Personally I would also recommend installing Tailscale on your VM; then you can connect to it on the go from your phone or tablet, and you basically have a free and secure version of Deepseek AI that is not communicating with the CCP. I mean, hey, communism is still probably better than what we’re dealing with right now in the UK, but I digress.
Good luck, let me know how you get on 🙂 Personally I find Deepseek a bit, well, weird. The more distilled the version of Deepseek you use, the more likely you are to receive answers which are incoherent or just plain hallucinated, as it fails to answer even the simplest of questions. The larger the LLM you choose (and therefore the more powerful a system you have to have available to run it), the more coherent and ‘human-like’ the answers become. Personally I really do not care to see or know how the AI has come to its conclusion. I simply want an answer to the question of ‘why does X = Y’. What the AI thinks of me and my question, and what it has to do to posit any kind of response, is irrelevant. Just give me a proper response to my question! I don’t even mind if the response would be something like ‘I am unable to answer that question’, but more often than not the 1.5b LLM just repeats the same ‘I’m an AI and I want to give you a positive response’ response.
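There is at least a partial fix for the ‘I don’t want to see the reasoning’ complaint: DeepSeek-R1 emits its chain-of-thought between `<think>` and `</think>` tags before the final answer, so if you're scripting against it you can simply strip that block out. A minimal sketch:

```python
import re

# DeepSeek-R1 wraps its chain-of-thought in <think>...</think> tags before
# the final answer. If you only want the answer, strip the reasoning block.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_reasoning(text: str) -> str:
    """Remove any <think>...</think> blocks, leaving just the answer."""
    return THINK_RE.sub("", text).strip()
```

It won't make the 1.5b model any less evasive, but at least you won't have to scroll past its deliberations to find out.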
I’d love something like the Jetson Orin Nano Super for this, but it’s sold out everywhere and a Raspberry Pi just doesn’t seem to cut it. I don’t want to install on my main PC, so that just leaves my Proxmox server. I’ve used a few different versions of the LLM and the MS01 (as powerful as it is) just audibly suffers as it ramps up the fans to deal with the CPU strain of answering a simple question on Deepseek. I appreciate that it is early days, so things will only get better, but it would be awesome to have a cost-effective piece of AI hardware for the everyman, that doesn’t cost an arm and a leg. Oh, and also that produces images. Of boobs. Big boobs. Mmmm… big AI boobs.