Alex Shank

Published Apr 15, 2025

Running LLMs Locally

Exploring the local model options available in 2025.

AI, Ollama, Whisper, Stable Diffusion

This series explores running LLMs locally. It focuses primarily on Ollama, but also tests other tools like Whisper and Stable Diffusion.
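As a quick taste of what the series covers: once Ollama is installed and serving on its default port, any HTTP client can prompt a local model. Here is a minimal sketch in Python against Ollama's local REST API; the model name and prompt are placeholders, and it assumes you have already pulled a model (e.g. `ollama pull llama3`):

```python
import requests

# Ask a locally running Ollama server (default: http://localhost:11434)
# to generate a completion. Assumes `ollama pull llama3` has been run.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",   # placeholder; use any model you've pulled
        "prompt": "Why run LLMs locally?",
        "stream": False,     # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```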

Posts in this Series

This is a three-part blog series.

Running LLMs Locally with Ollama

Running LLMs Locally | 1 / 3

Published: Apr 15, 2025

Testing which LLMs my NVIDIA GeForce RTX 4060 Ti can run locally through Ollama.

AI, LLM, Ollama, Linux

Ollama Copilot Integration

Running LLMs Locally | 2 / 3

Published: Apr 22, 2025

Integrate Ollama with GitHub Copilot alternatives for AI-powered code completion using local LLMs.

AI, LLM, Ollama, Linux, Copilot

Using LM Studio's Chat Interface

Running LLMs Locally | 3 / 3

Published: Jun 7, 2025

Using Ollama models within LM Studio's GUI.

AI, LLM, Ollama, Linux, LM Studio