Practical AI for Everyone

How I integrate AI into everyday work and life. Use cases, tools and tactics.

TL;DR Summary:

Run your own Manus-like AI agent powered by the latest (e.g. open source) models in just a few easy steps: privately on your PC, free and customizable.

Want to run powerful, flexible AI agents on your own computer – free and easy as pie? There are many good reasons to explore this: You can experiment with state-of-the-art AI models. You may prefer fewer constraints. Plus, running things locally is great for privacy. And you can build your own “armada” of AI sidekicks (“FrankenstAIns”).

Setting this up can sound hard at first and many make it look harder than it is… So, I wanted to share this easy, visual tutorial with you. I’ll show you the setup in 7 (!) steps and then we’ll build your first local AI agent together – in another 5 (!) steps. Specifically, we’ll create a simple “Research Agent” that analyzes papers in the “Arxiv” database for you – like a (watered-down) “Manus AI”…

To make it happen, we use 2 helpful, free open-source tools: First, Ollama, the “back-end” engine that runs AI models on your PC. You may already know from my other tutorial on local “ChatGPT-esque” chatbots how much I like it. Second, Langflow, a cool visual editor where you connect blocks to build agents (no code – just “drag-and-drop”). FYI: Langflow uses the popular LangChain framework “under the hood” but abstracts the complexity away.

You don’t need to be an “expert” or own server farms. A “normal” PC is fine. You can run efficient open-source models (e.g. Qwen 2.5) on an 8 GB GPU (or less for smaller or “quantized” versions). Just know that running larger (more capable) models usually requires higher-end hardware. This guide helps you check which models run on which hardware. If you can, go for (a bit) larger models though (for reliability etc.)
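If you like back-of-the-envelope math, here’s a rough rule of thumb (my own rough estimate, not an exact science): a model’s weights need roughly parameters × bits-per-weight ÷ 8 bytes of memory, plus some overhead for context and activations:

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Back-of-the-envelope VRAM estimate for running an LLM.

    params_billion: model size in billions of parameters (e.g. 7 for Qwen 2.5 7B)
    bits_per_weight: 16 for full precision, 4 for a typical quantized version
    overhead: rough ~20% fudge factor for context window, activations etc.
    """
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return round(weight_gb * overhead, 1)

# A 7B model quantized to 4 bits fits (tightly) on an 8 GB GPU...
print(estimate_vram_gb(7, 4))   # ~4.2 GB
# ...while the same model at 16-bit precision would not:
print(estimate_vram_gb(7, 16))  # ~16.8 GB
```

Treat the output as a ballpark only – real memory use varies by runtime, context length and quantization scheme.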

Heads-up: This guide shows how to run Ollama and Langflow locally on your PC, so everything stays on “localhost” with (assumed) default settings. Exposing them as web “servers” takes extra steps not covered here – consult other guides if you need that. Generally, use software from trusted sources and apply your own discretion.

Agents have also improved a lot but still face practical limitations, especially smaller local models. AI needs clear prompts, “only knows what it knows” and can err (“hallucinate”) or act unpredictably. So, always review critical outputs and remain the “human in the loop”. Only wire AI to systems (e.g. files) when you know the implications.

Optional Excursus: AI Agents 101

New to agents? The following (“expandable”) quick overview covers the basics for this guide. Skip it if you’re already familiar. For an even more detailed look at agents – incl. use cases, potentials and limitations etc. – check out my full introductory article.

AI Agents Refresher

So, what exactly is an “AI agent”? Think of it like an assistant you can give a task to in plain language. Unlike simple chatbots that just answer questions, an agent tries to achieve its objective autonomously. It breaks goals into steps, makes a plan, takes actions, uses available tools, checks if it worked and adjusts its plan if needed: Kind of like we do!

For this, agents typically use a few “core components” working together. Many “agent frameworks” exist – like “ReAct”, “LangChain”, “CrewAI” etc. – but they all share the same underlying “building blocks” and flow:

  1. Tasks: The goals you give the agent with your prompts (like summarizing research papers in our example in “Part 3”).
  2. AI Model: A Large Language Model (e.g. Qwen 2.5) acts as the “brain” of the system, uses tools, generates outputs etc.
  3. Tools: These external resources give the agent extra abilities the LLM doesn’t have alone, like searching (in our Arxiv example below), accessing specific data etc.
  4. Memory: This allows the agent to remember interactions, learn from feedback or maintain context for longer tasks etc.
  5. “AgentOps”: The systems managing the agent’s lifecycle/operations, i.e. deployment, orchestrating all parts, monitoring, guardrails etc.

You can think of these “building blocks” (LLMs, tools etc.) like “clothes” (pants, shirt etc.): When, e.g., a “new amazing LLM” comes out, it’s like getting new shoes – you just swap that part into your existing setup. Sometimes, with a new “agent framework”, the “wardrobe” itself may change – but the clothes are still similar.

Focusing on these “essentials” makes keeping up much easier. If the fast AI developments feel “too much” sometimes, it’s a helpful mental model I use to put things into perspective.
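To make the loop behind these building blocks concrete, here is a sketch in a few lines of plain Python. Everything in it is illustrative – the “model” and “tool” are trivial stand-ins, not a real LLM or a real search:

```python
def fake_model(task: str, observations: list) -> str:
    """Stand-in for the LLM 'brain': decides the next action.
    A real agent would prompt an actual model here."""
    return "finish" if observations else "use_tool"

def fake_search_tool(task: str) -> str:
    """Stand-in for a tool (like the Arxiv search we use in Part 3)."""
    return f"result for: {task}"

def run_agent(task: str, max_steps: int = 5) -> list:
    """Minimal agent loop: plan -> act -> observe -> repeat."""
    memory = []  # the 'Memory' building block
    for _ in range(max_steps):  # 'AgentOps' would add monitoring/guardrails here
        action = fake_model(task, memory)
        if action == "finish":
            break
        memory.append(fake_search_tool(task))  # act, then store the observation
    return memory

print(run_agent("summarize recent LLM papers"))
# -> ['result for: summarize recent LLM papers']
```

Frameworks like LangChain or CrewAI wrap exactly this kind of loop in more robust machinery – but the flow stays the same.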

Part 1: Setting Up Your Local AI Model (Ollama)

First, you’ll need to grab the Ollama software. No worries, despite its name, it’s not limited to “Llama” models… 😉 (If you already have Ollama installed and an LLM running, you can obviously directly skip to Part 2.)

Step 1: Install Ollama with “One Click”

Just head over to the official Ollama website (where you can also find plenty more useful info) and download the right version for your OS. I’m using Windows here. The installation is easy – just click through the installer:

Step 2: Select Your Favorite AI Model

With Ollama installed, you can now explore its vast library of LLMs. I’ll pick “Qwen 2.5 7B”. It’s a capable model from Alibaba that performs well on agentic tasks. Make sure the model you pick supports “tool use” (see the purple “tools” tag). (Also, be aware that some models have licenses which only allow personal use while others also allow commercial use.)

Ollama Library search results page showing Qwen 2.5 7B model.

Once you’ve found the model of your choice, just click on its name to visit its profile page:

Ollama's Qwen 2.5 7B page highlighting the "ollama run qwen2.5:7b" command to copy and initiate the model.

Now copy the highlighted command (“ollama run qwen2.5:7b”); we’ll need it in the next step to download and run the model in Ollama.

Step 3: Run Your Chosen Model with Ollama

This is where some might go “nope“, but no worries. It’s not hard, even if you’ve never used the “Command Prompt” (the black window in the screenshot below). Just think of it as another way to interact with your PC, just like using a mouse. [Note: “Command Prompt” and “Windows PowerShell” look similar and work similarly. You can use either. Mind that I’m using “Command Prompt”. Some commands used here can differ slightly in PowerShell.]

  1. Open the Command Prompt (which from now on I’ll just call “Terminal”). You can usually find it by typing “cmd” or “command” in Windows’s search bar (near Start).
  2. Now paste the command you copied earlier to download and install your AI model with Ollama (see the line marked in yellow in the screenshot). This might take a few moments, as the model is several gigabytes:
Command Prompt showing “ollama run qwen2.5:7b” and the correspondingly downloaded model.

Voilà. Not so difficult, right? Now you could even start chatting with the AI right in the same window (“Send a message”):

Command Prompt showing Ollama's "Send a message" prompt with a sample chat.

With Ollama and Qwen up and running, our local “AI backbone” is set up. In the next part, we’ll install Langflow to start creating agents…
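By the way: besides the chat window, Ollama also listens on a local REST API (by default at http://localhost:11434) – the same endpoint Langflow will talk to in Part 2. Here is a minimal sketch of what a request looks like, assuming the Qwen model from above is installed (only call `ask_ollama` while the Ollama server is actually running):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "qwen2.5:7b") -> str:
    """Send one prompt to the local Ollama server and return its answer."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server to be running):
# print(ask_ollama("Say hi in one word."))
```

Nothing you need for this tutorial – but it shows there is no magic involved: Langflow simply sends such HTTP requests to “localhost” for you.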

Part 2: Setting Up the Agent Builder (Langflow)

Leave Ollama running in the background in the (still open) Terminal tab and open a new one for this Part 2. FYI – as an alternative to running Langflow locally on your PC – you could also use it online through their website, if that’s enough for you.

Step 4: Quickly Prepare Your PC for Langflow

Note that this step is only needed if you don’t already have Python and “uv” installed (prerequisites). If you do, just skip to the next step. Also don’t worry, there’s no coding involved. [If you can’t remember whether you have Python/uv installed or which version, type “python --version” or “uv --version” in the Terminal to check.]

First, Langflow requires Python (which automatically handles running the program). Currently, Python 3.10–3.13 is recommended in Langflow’s official documentation (which is a great resource for troubleshooting or research). If you don’t have it, just download it from the official Python website and install it with default settings:

Next is “uv”: This is a modern installer recommended by (and for) Langflow. (It’s faster and better for complex installations like Langflow compared to Python’s standard “pip”). If you don’t have uv yet, you can install it directly in your Terminal by entering “pip install uv”. You only need to do this once and the result should look like this:

Command Prompt showing pip install uv command.

Now, your computer is ready for the “fun part”… 😉

Step 5: Create a Dedicated Folder for Langflow

To keep things tidy, we give Langflow its own isolated space, a so-called “virtual environment”, inside a dedicated folder. No worries, it’s super simple.

First, decide where on your drive you want to save your Langflow project(s). This could be e.g. in your “Projects” (or “Documents” etc.) folder. I chose this path (“C:\Users\johni\Projects”) which you can see in the next screenshot. Then, inside this “Projects” folder, create the “Langflow” folder.

Click path of the folder "Projects" where all Langflow projects shall be saved.

Now, it’s time to create the “virtual environment”. For this, just open your Terminal window again and “navigate” to your Langflow folder by entering “cd C:\Users\YOUR_NAME\Projects\Langflow” (adapting YOUR_NAME etc.). Then create the environment sub-folder by entering “uv venv .venv”.

Then, you can check in your file explorer (inside the “Langflow” folder) and the “.venv” folder should “magically appear” (incl. its needed files inside):

The ".venv" folder is the virtual environment for all Langflow projects. This folder is shown as part of the file explorer in this image.

Before we install Langflow into it, we first need to “activate” that environment. For that, just enter this in the Terminal: “.venv\Scripts\activate.bat”

You know it worked (i.e. Langflow’s environment is successfully activated) when the new line in the Terminal now starts with “(.venv)” like in the screenshot:

Command Prompt showing folder navigation, virtual environment creation and its activation.

Step 6: Install Langflow

With the environment active, installing Langflow (using “uv”) is just as simple. In the Terminal, just enter: “uv pip install langflow”

Command Prompt showing the Langflow installation command being started.

This step can take some time. The uv tool needs to find and download many components that Langflow relies on… Great – when the Langflow installation is complete, your Terminal should look like this:

Command Prompt showing the successful completion of the “uv pip install langflow” command.

Step 7: Run Langflow

Make sure the environment is still active – visible as “(.venv)”. [If not, follow the instructions from “Step 13.2” below.] Now, in the same Terminal window, simply type the following to start Langflow: “python -m langflow run”

After a few (patient) moments, Langflow will start its local server – hosted on your PC. When it’s done loading, it should look like this:

Command Prompt showing the command “python -m langflow run” successfully executed. It started the local Langflow server, which can be accessed through its localhost URL.

Kudos – the setup is complete!

Part 3: Building Your First AI Agent

In this final part, we’ll build our first Langflow agent – specialized in academic research (because, why not?). Excited?

Step 8: Start a “New Flow” in Langflow

To launch Langflow just open the address (http://127.0.0.1:7860) from the screenshot above in your browser of choice (e.g. Edge). (Even if it looks like a website – it’s still running on your local PC). You should now see the start page and hit “+ New Flow”:

Langflow landing page including the button to create a new workflow.

To create a new agent, find and click the button “+ Blank Flow”: (Later, you can also explore the countless featured templates in the gallery …)

Langflow page with the button to create a blank agentic workflow.

This opens an empty canvas, ready for us to start building:

Empty Langflow canvas with an overview of available components and endless possibilities to create AI agents from scratch.

Step 9: Add the Agent’s “Building Blocks”

Now, look at the “Components” menu on the left. You can use the search bar or browse the categories (“Models”, “Agents”, “Tools” etc.). Find the following 5 components and “drag” each one onto the main canvas:

  1. “Chat Input”: This block is the entry point for your typed prompts to the agent (like in ChatGPT).
  2. “Chat Output”: This part is for the agent’s final answers to you – again, like in ChatGPT.
  3. “Ollama”: This connects Langflow to your local Ollama model – the agent’s “brain” (e.g. Qwen 2.5). [Remark: If you can’t use sufficiently capable local LLMs for your use case (e.g. hardware limits), pick another option from “Models”. But know that these services are cloud-hosted (e.g. Google Gemini) and require (paid) API keys.]
  4. “arXiv”: This is the agent’s special tool. Arxiv is a popular open online archive hosting millions of academic papers. This block allows our agent to do its research in that vast database.
  5. “Tool Calling Agent”: This acts as the agent’s “manager”. Based on your instructions, it autonomously applies the available Tools and LLMs to “get the job done”.
Langflow canvas showing the 5 components (input, throughput and output modules) of the Scientific Research Agent.
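To demystify the “arXiv” block a bit: under the hood, such a tool essentially calls arXiv’s public query API. A sketch of roughly how such a request URL gets built – the real Langflow component handles all of this for you, so this is purely for understanding:

```python
from urllib.parse import urlencode

ARXIV_API = "http://export.arxiv.org/api/query"  # arXiv's public search endpoint

def build_arxiv_url(query: str, max_results: int = 5) -> str:
    """Build a search URL for arXiv's query API (no network call here)."""
    params = {
        "search_query": f"all:{query}",  # search across all fields
        "start": 0,                      # pagination offset
        "max_results": max_results,      # how many papers to fetch
    }
    return f"{ARXIV_API}?{urlencode(params)}"

print(build_arxiv_url("large language model agents"))
```

When the agent “uses the tool”, it generates a query like this from your prompt, fetches the results and feeds the paper metadata back into the LLM for summarizing.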

Step 10: Configure the Agent’s Parts

Now, we need to make sure the blocks have the correct settings. Good news: For this simple example, not too many edits are needed… Click on each component to see and edit the settings as follows:

  1. (“Chat Input” and “Chat Output” can stay as they are…)
  2. “Ollama”: “Base URL” field should be http://localhost:11434. Also, select the right “Model Name” (e.g. “qwen2.5:7b”). “Tool Model Enabled” should be toggled on.
  3. “arXiv”: Click on the block first and then enable “Tool Mode”. For this simple demo, the other default settings are ok.
  4. “Tool Calling Agent”: Just paste this (rather simple) exemplary “System Prompt”:

“Act as my Research Assistant. Your goal is to find and summarize recent research papers using the Arxiv tool based on the user’s query about papers or topics. You MUST use the Arxiv tool for this task. DO NOT answer from your internal knowledge or memory. If no papers are found at all, state that clearly. Answer concisely please.”

Langflow canvas showing the 5 components (input, throughput and output modules) of the Scientific Research Agent individually configured.

Step 11: “Connect the Dots”

Let’s “wire” the blocks together. For that, click on the small output circles on one block’s right side and the matching input circles on the other components’ left sides as follows:

  1. Connect Chat Input’s output –> Tool Calling Agent’s “Input” handle.
  2. Ollama’s “Language Model” output –> Tool Calling Agent’s “Language Model” input.
  3. arXiv’s “Toolset” output –> Tool Calling Agent’s “Tools” handle.
  4. Tool Calling Agent’s “Response” output –> Chat Output’s input.
Langflow canvas showing the 5 components (input, throughput and output modules) of the Scientific Research Agent configured and connected.

Step 12: Agent Ready to Take Off!

There you go. You built your first local AI agent – maybe it’s even useful if you do research? Now, let’s see how it performs. For that, just click on the “Playground” button in the upper right corner of the canvas. Below are 2 screenshots with a sample chat and a “sneak peek” into the agent’s “inner workings” (i.e. how it uses its tool):

Langflow Playground showing a sample chat interaction with the agent's final response.

For such a “minimalist” setup (and model), this doesn’t look too shabby. But what “surprised” me (a bit) was inside its “thinking steps” (see the little “downward arrow”): how cleanly it translated my request into the arXiv query. Generally, the ability to check execution details and trace sources [yes, the referenced paper incl. URL exists 😉] helps with more explainable AI:

Langflow Playground showing the agent's chain of thought and tool use in a sample chat interaction.

This is still a simplified example. And – as mentioned – the tech still has plenty of limitations (bugs, “wobbly” UX etc.). Try to be forgiving as the technology matures (fast)… As for your own skills, congrats: You now have a working baseline. Since you understand the basic building blocks, use this as a starting point to experiment (“learning by doing”).

Pro tip: work iteratively (and patiently) with agents, i.e. make small adjustments to the workflow/components, test it in the Playground and repeat. [Also, keep in mind that (generative) AI models work statistically, not deterministically, i.e. they don’t necessarily do the same thing twice. In practice, this means: A prompt may (or may not) work one time, but that’s no guarantee it behaves the same when you run it again, even if all else is equal.]
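If you want runs to be more repeatable, many backends let you lower the sampling temperature and pin a random seed. With Ollama’s API, for example, these knobs go into an “options” field of the request – a small sketch (the parameter names follow Ollama’s documented model options):

```python
import json

# Request body for Ollama's /api/generate endpoint with sampling pinned down.
payload = {
    "model": "qwen2.5:7b",
    "prompt": "List 3 arXiv categories.",
    "stream": False,
    "options": {
        "temperature": 0,  # near-greedy sampling: less creative, more repeatable
        "seed": 42,        # fixed seed for (more) reproducible outputs
    },
}
print(json.dumps(payload, indent=2))
```

Even with these settings, don’t expect perfect determinism across hardware or model versions – but it noticeably reduces run-to-run variation while you iterate.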

For instance, try changing the agent’s System Prompt, swapping the Ollama model or exploring other tools. Your imagination is the limit now. You can make it way more powerful than this example: think multiple agents, more tools, multi-modal models (e.g. computer vision), smart prompting etc. [You can even work with the underlying code (just click on a building block and select “Code”) – but that’s beyond this post’s scope.] What agentic “patterns” or use cases are interesting to you? I’m keen to hear about them.

If you want to explore the various components Langflow offers – agents, LLMs, tools (e.g. web search, calculators, APIs or the latest craze MCPs) etc. – the official Langflow Documentation is a great resource. But be aware that many components, esp. services like OpenAI’s GPT models or Google Search, need specific API keys which can incur costs. (BTW, that’s partly why I picked this “didactic” example: All components work without API keys – free and easy.)

“Step 13”: Stopping and Restarting Your Setup

Okay, done experimenting for today? This is how you can easily stop and later restart your new “tech stack” …

Step 13.1: Stopping Everything

You have 2 options …:

a) “The Clean Way”:

  1. Langflow: Go to the Terminal window where Langflow is running (i.e. where you entered “python -m langflow run” earlier). Press “Ctrl + C” to stop the Langflow server “politely”. Once it stops, also type “deactivate” in the same window to close the environment. Then close Langflow’s terminal and browser windows (clicking “x”).
  2. Ollama: If you started Ollama in another Terminal window, just go to that window and press “Ctrl + D” there and then close the window, too. [If Ollama is running as a background service “via its icon” (which works just as fine), you can right-click the tray icon and pick “Quit Ollama”.]

b) “The Quick & Dirty Way”: For simplicity, just close (“x”) all Terminal and browser windows where Ollama and Langflow are running. Some techies may now throw their hands up in horror reading this. This “method” stops the servers more “abruptly” but shouldn’t cause problems. Let me know if this is too naive. At least, I never had issues. Of course, do so at your own discretion.

Step 13.2: Starting Again Later

When you want to come back and use your agent(s) again:

  1. Start Ollama: Check if Ollama is maybe still running in the background (look for its icon in the Windows tray near the clock). If not, open a Terminal window and simply type “ollama serve” – then leave that window running in the background.
  2. Start Langflow: Open a new Terminal tab. Navigate to your project folder in the Terminal (i.e. enter the adapted “cd C:\Users\YOUR_NAME\Projects\Langflow”), activate the environment (enter “.venv\Scripts\activate.bat”), launch Langflow (run “python -m langflow run”) and open http://127.0.0.1:7860 in your browser (as before).
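If you restart often, you could save the Langflow half of these steps as a small batch file (a hypothetical “start_langflow.bat” – the path is an example you need to adapt; start Ollama separately as described above):

```shell
:: start_langflow.bat - restart Langflow inside its virtual environment (Windows)
:: Adapt YOUR_NAME / the path to your own setup from Step 5.
cd C:\Users\YOUR_NAME\Projects\Langflow
call .venv\Scripts\activate.bat
python -m langflow run
```

Double-clicking that file then replaces the three manual Terminal commands; your browser still opens http://127.0.0.1:7860 as before.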

Wrap-up: Your First AI Agent – What’s Next?

I hope this “visual guide” made setting up your first local AI agent straightforward. Did you get it running successfully? Any hurdles? If so, were you able to solve them (maybe with ChatGPT’s brilliant troubleshooting skills)? I appreciate any feedback or tips to make this tutorial better.

In future posts, I can cover more “advanced” case studies to show what’s possible with a setup like this. Some call this tech the next evolution of “RPA” – which makes sense to me. What kind of agents could you envision creating now? Any particular features of these tools (e.g. RAG, specific tooling or models, “MCP” etc.) you’re curious about?

If you’re now looking for ideas to apply this tech meaningfully, check out my guide on finding creative (Gen)AI use cases. Or, if you want better results with ChatGPT & Co., take a look at my beginner-friendly “prompting playbook”.

As always, if any of this resonates with you, just drop me a line (below or here) and spread the word.

Cheers,
John

What do you think?