Practical AI for Everyone

How I integrate AI into everyday work and life. Use cases, tools and tactics.

TL;DR Summary:

Get your own ChatGPT-like AI chatbot (e.g. powered by Llama) running on your PC with “Ollama” and “Open WebUI” in a few beginner-friendly steps.

Keen to run a powerful large language model (LLM) like Llama on your own PC – for free? There are plenty of good reasons to try it out: You can experiment with new models. You may prefer certain open-source options that are potentially (ahem) less constrained. Valuing privacy is another reason. Lastly, you can build your own custom artificial intelligence (AI) systems (“FrankenstAIn”).

Whatever your motivation, setting this up isn’t as complicated as some “tech gurus” or overcomplicated guides might make it seem. In fact, I’ve put together this simple tutorial to help you get started. In just 3 (!) steps, you can be chatting with your favorite LLMs locally – “no frills” mode. And if you’re aiming for the full ChatGPT-ish experience offline, complete with a sleek interface and features like “chat with your PDF,” it’s only seven steps!

To make it happen, we’ll use two top open-source AI tools as our tech stack: “Ollama”, the “back end” that runs the AI model(s), and “Open WebUI”, the “front end” with a user-friendly chat interface for interacting with the model – like ChatGPT. We’ll get into the details later.

No worries, you don’t need to be a techie or own server farms. You can already run some of the more efficient models (like Mistral 7B) reliably with 8–16 GB of RAM. Keep in mind that the higher the number of parameters (the “7B” in “Mistral 7B” stands for 7 billion), the more computing power and memory you’ll need.
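As a rough rule of thumb (a back-of-the-envelope sketch, not an official formula), a model’s weight footprint is parameter count × bytes per weight. Many Ollama builds are quantized to about 4 bits per weight, i.e., roughly half a byte each:

```shell
# Rough sketch (assumption: ~4-bit quantization, i.e. ~0.5 bytes per weight):
PARAMS=7000000000                                   # "7B" = 7 billion parameters
echo "$(( PARAMS / 2 / 1024 / 1024 / 1024 )) GiB"   # integer math: ~3 GiB of weights
# Real memory use is higher (context cache, runtime overhead),
# which is why 8 GB of RAM is a sensible floor for a 7B model.
```

This also explains why a 70B model needs roughly ten times the memory of a 7B one.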

Heads-up: This tutorial focuses on running Ollama and Open WebUI locally on your PC. (With the default network and tool settings, everything runs on “localhost”.) Opening them up to the web as “servers” involves extra steps (and security implications) not covered here – tread carefully (with other guides) if you go that route. Also, use (Gen)AI (chatbots, agents, etc.) responsibly – especially “less constrained” models. Mind that AI can make mistakes. In general, only use software from trusted sources and apply your own discretion.


Part 1: “Just the Local Chatbot – No Extras” (Ollama)

Step 1: Install Ollama with One Click

Our tool of choice to actually run the AI model is Ollama, developed by a dedicated team. And despite the name, it’s not limited to just “Llama” AI models. You can download the software from Ollama’s website, choosing the right version for your operating system. I’ll be using Windows in this example. The installation is pretty straightforward, just a simple one-click process:

Step 2: Pick Your Favorite Model

Next, explore Ollama’s rich model library to find your preferred LLM – for instance, the new and high-performing “Llama 3.1 8B”: (Note that some model licenses permit only personal use, while others also allow commercial applications.)

Once you’ve found the model you want, click on it to go to its profile page:

Copy the command highlighted on the page; you’ll need it in the next step to download and run the model in Ollama.
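For illustration, the copied command typically has this shape (the exact model tag below is an assumption – use whatever the profile page shows for your chosen model):

```shell
# Example shape of the command from a model's profile page (tag is an assumption):
ollama run llama3.1
```

On first use, this downloads the model weights before starting it, so the initial run takes longer.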

Step 3: Run Your Favorite Model with Ollama

This is where some might go “Errr nope“, but don’t worry. It’s not as intimidating as it seems, even if you’ve never used the “Command Prompt” (that black window on the screenshot below). Think of it as another way to interact with your computer, just like using a mouse.

  1. Open the Command Prompt app (you can find it by typing “command” in the Windows search bar near the Start button).
  2. Paste the command you copied earlier to install your chosen AI model with Ollama (see yellow marked line in the screenshot):

Et voilà. Not too painful, right? Now, you can start chatting with the AI directly in the same window.
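A few chat-session basics that may help at this point (standard Ollama commands, shown as a sketch):

```shell
# Inside the chat, just type your question and press Enter. Useful extras:
#   /bye            exits the chat session
ollama list         # shows all models you have downloaded
ollama rm <name>    # removes a model you no longer need (frees disk space)
```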

Crazy simple, right? If this minimalist solution already suffices, you can just stop here. But if you’ve been spoiled by ChatGPT’s polished interface, no worries – the next part of this guide will make you even happier.

[Optional] Part 2: Upgrade to ChatGPT-like User Interface (Open WebUI)

Before we proceed, note that Step 4 is only needed if you don’t already have Python and Git installed on your PC – both are prerequisites. Don’t worry, no coding is involved and you won’t need to interact with these tools directly – they just need to be installed. (FYI: If you can’t remember whether you have Python/Git installed, or which version, you can type “python --version” or “git --version” in the Command Prompt to check.)
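For reference, the version checks look like this in the Command Prompt (the outputs in the comments are illustrative only; your versions will differ):

```shell
python --version   # e.g. "Python 3.11.9" – an error here means Python isn't installed
git --version      # e.g. "git version 2.45.1" – an error here means Git isn't installed
```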

Step 4: Install Python & Git on Your Computer

For Open WebUI (the ChatGPT-like interface we’ll install soon) to work, you need “Python” installed – the language runtime that Open WebUI runs on. Currently, Python 3.11 is recommended, but you can check the latest requirements on their website.

Just download it from the official Python website and install it with standard settings:

Similarly, you’ll need “Git” – a tool for fetching and managing source files that Open WebUI relies on – which you can download from their official website here. You may install it with the default options, too:

Now, your PC is ready for the “fun part” … 😉

Step 5: Install “Open WebUI”

“Open WebUI” is an open-source, feature-rich user interface for large language models that looks and feels like ChatGPT, including its core features (chat, document upload, settings, etc.). It’s powered by a highly engaged team and community. To install this tool, you can visit their website (click me) or just check out the following screenshot:

First, copy-paste the installation command from the screenshot/page (“pip install open-webui”) into the Command Prompt:
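If you’d like to keep Open WebUI isolated from other Python software on your PC – an optional design choice, not required by this guide – you can install it into a virtual environment first. A sketch (the environment name is an assumption):

```shell
# Optional: create and activate an isolated Python environment before installing.
python -m venv open-webui-env
open-webui-env\Scripts\activate    # Windows; use "source open-webui-env/bin/activate" on macOS/Linux
pip install open-webui             # the installation command from the screenshot/page
```

If you go this route, remember to activate the environment again before each later “open-webui serve”.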

After a few moments, you’ll get the confirmation that the installation was “successful” in the same window.

Step 6: Run “Open WebUI”

Now that “Open WebUI” is installed, you can start it right up. How? By copy-pasting the second command from the page above (“open-webui serve”) into the Command Prompt:

Quick note: The default command (“open-webui serve”) binds Open WebUI to the address “0.0.0.0”, which listens on all network interfaces. With the standard firewall and network settings this guide assumes, external access should be blocked – but the interface could still be visible on your local Wi-Fi. For true local-only access, append “--host localhost” to the command before running it.
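Put together, a local-only launch could look like this (a sketch; “--host” and “--port” are standard options of Open WebUI’s serve command, and 8080 is its default port):

```shell
# Bind only to this machine, on the default port:
open-webui serve --host localhost --port 8080
```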

Launching Open WebUI in Windows PowerShell terminal.

It might feel a bit unusual to use the Terminal at first, but you’ll get used to it. Plus, it makes you feel a bit “tech-savvy”, doesn’t it? After some loading, you should see this message indicating that it’s running successfully:

Terminal output displaying Open WebUI server process startup running locally.

That’s all for the setup. You can now access your local ChatGPT variant through your web browser by navigating to http://localhost:8080/ (see below):

Step 7: Select Your Model… and Start Chatting!

Now, it’s time to select the model (LLM) which you want to chat with (in my case it’s “Llama 3.1”):

Keep in mind that for this to work, the Ollama application should be running. If you closed it earlier, just reopen it by starting the Ollama app from your Start menu or taskbar.
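If you’re unsure whether Ollama is running, two quick checks from the Command Prompt (assuming Ollama’s default port 11434; “ollama ps” is available in recent Ollama versions):

```shell
curl http://localhost:11434   # a running server replies with "Ollama is running"
ollama ps                     # shows models currently loaded into memory
```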

With everything set up, you can start chatting just like you would with any other chatbot:

There you have it. Now’s your chance to explore the interface further. As you see, it includes most features you’re familiar with from ChatGPT, like the option to upload documents (cf. red arrow above) for asking questions about PDFs and even more!

Bonus tip: If restarting this setup (again and again) feels tedious, here’s a quick fix: Pin both Command Prompt and Ollama to your taskbar so they’re always “one click away”. For easier access, bookmark the chatbot’s URL (http://localhost:8080/) in your browser. Also, keep the “open-webui serve” command handy to copy-paste (e.g., I renamed the bookmark accordingly…) Got a better shortcut? I’d love to hear it! 😉
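Going one step further, the restart routine can be collected into a small helper script – purely a sketch under this guide’s assumptions (the script name is invented; on Windows, the Ollama desktop app usually starts its server for you, in which case the first line is unnecessary):

```shell
# start-local-chat.sh — hypothetical convenience script
ollama serve &                      # start the Ollama back end in the background (skip if the app already runs)
open-webui serve --host localhost   # start the front end, then browse to http://localhost:8080/
```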

Conclusion: Ready, Steady, Go?

I hope this guide was easy to follow and helped you set up your local chatbot. I’m eager to hear your feedback, esp. tips on improving this guide or keeping up with the (likely fast) developments of the tools. In future articles, I’ll cover some more “advanced” possibilities of this set-up (e.g. setting model temperature, system prompts, custom AI assistants, web search, “RAG” etc.)

WDYT? What are your favorite features (or which ones do you miss) in this “ChatGPT alternative”? If you want inspiration for AI use cases (e.g. language tutor, sparring partner), try this article. If you are looking for prompting tips to get more useful answers from your chatbots, check this out. If you want to take it even further – from chatbots to local “AI agents” (with Langflow and, again, Ollama) – I’ve got you covered.

As always: If any of this resonated with you, please drop me a line (below or here) and spread the word.

Cheers,
John

2 responses to “Local LLMs on Your PC with Ollama + Open WebUI (3-Step Setup)”

  1. Anthony

    Ran across your blog on reddit thought appreciated your approach toward AI and using it as a tool vs replacing humans. I was wondering if you ran any local LLMs and finally found this post using Ollama! I have a similar write up on my website but using Ollama and docker. Keep up the good content – Anthony

    1. Hi Anthony, I appreciate the kind words. Glad you liked it – your Docker-based write-up looks solid, too. Always good to swap notes. Cheers, John
