Open Source / MIT License

Run uncensored AI locally.
Chat. Images. Video.

The only local AI app that combines uncensored chat, image generation, and video creation in one interface. No cloud, no subscriptions, no data collection.

Get Started View Source
Locally Uncensored — Uncensored AI Chat Interface with Ollama, showing persona selection and private local conversation

Download v1.9.0

Windows (.exe) Linux (.AppImage) Linux (.deb)

macOS: Build from source

$ git clone https://github.com/PurpleDoubleD/locally-uncensored.git && cd locally-uncensored && setup.bat
Windows — installs Node.js and Ollama, downloads an uncensored model, and launches the app.
Linux / macOS instructions
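The equivalent Linux/macOS flow uses setup.sh (the script name given in the Setup section below); commands are a sketch assuming the same repo layout as the Windows one-liner:

```shell
# Linux / macOS equivalent of the Windows one-liner above.
# setup.sh is the script named in the Setup section; adjust if the repo layout differs.
git clone https://github.com/PurpleDoubleD/locally-uncensored.git
cd locally-uncensored
chmod +x setup.sh   # make the setup script executable
./setup.sh          # checks for Node.js and Ollama, installs them if missing, then launches the app
```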
Capabilities

Three tools. One interface.

Stop switching between Ollama for chat, ComfyUI for images, and AnimateDiff for video in separate tabs.

Agent Mode BETA

Your AI uses tools autonomously — web search, page fetching, file I/O, code execution. Multi-step chains with live tool-call blocks. Powered by Hermes 3.

Uncensored AI Chat

Powered by Ollama. Run abliterated models like Llama 3.1, Qwen 3, or DeepSeek R1 locally — uncensored, private, with streaming responses and a thinking display.

Local Image Generation

Text-to-image via ComfyUI. Supports Stable Diffusion XL, FLUX.1 Schnell, Pony Diffusion, and Juggernaut XL checkpoints. Full parameter control, no content filter.

AI Video Generation

Wan 2.1/2.2 and AnimateDiff support. Generate video clips from text on your own GPU. No cloud API, no watermarks, no usage limits.

25+ AI Personas

Pre-built characters from Helpful Assistant to Roast Master. Switch personalities without prompt engineering. Works with any uncensored Ollama model.

One-Click Model Manager

Browse, install, and switch AI models from within the app. Auto-detects text, image, and video models across Ollama and ComfyUI backends.

100% Private & Offline

Everything runs on localhost. No telemetry, no cloud, no accounts, no Docker required. Your conversations, images, and videos never leave your machine.

📄 Document Chat (RAG)

Upload PDFs, DOCX, or TXT files and chat with your documents. Hybrid search with confidence scores and source citations.

🎤 Voice Chat

Talk to your AI with push-to-talk and hear responses with sentence-level text-to-speech streaming.
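Sentence-level streaming means the reply is chunked at sentence boundaries so audio playback can start before the full response arrives. A minimal, illustrative sketch of that chunking step (not LU's actual code):

```python
import re

def sentence_chunks(text: str) -> list[str]:
    """Split streamed text at sentence boundaries so each complete
    sentence can be handed to a TTS engine immediately (illustrative only)."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

print(sentence_chunks("Hello there. How can I help? Ask me anything!"))
# → ['Hello there.', 'How can I help?', 'Ask me anything!']
```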

🤖 AI Agents

Give your AI a goal — it plans, searches the web, reads/writes files, and executes Python code autonomously.

Image and video generation with full parameter control — no content filters

One-click model installation with hardware-aware recommendations
New in v1.9

Agent Mode. Local AI that actually does things.

Your AI doesn't just chat — it searches the web, reads files, writes code, and generates images. All running on your hardware, with models that don't refuse.
🔍 Web Search & Fetch

Two-phase search: find URLs, then fetch and read actual page content. Real answers, not hallucinations.

💻 Code Execution

Write and run Python code locally. Data analysis, file processing, automation — your AI does the work.

📁 File I/O

Read and write files on your system. Summarize documents, create reports, process data.

🎨 Image Generation

Generate images mid-conversation via ComfyUI. The only local agent that can create visuals on command.

🧠 Memory System

Auto-saves tool results. Search past results by keyword, filter by category, export as markdown.

🔒 Tool Approval

Safe tools auto-execute. File writes and code execution ask permission first. You stay in control.

Works with abliterated models. LU auto-fixes tool-calling templates via an Ollama Modelfile, so models that other apps refuse to load work out of the box — with full agent capabilities.
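What such a fix looks like in practice is roughly this — a hypothetical sketch, not the exact Modelfile LU writes; the model name and template contents are illustrative:

```
# Hypothetical Modelfile: re-apply a ChatML/Hermes-style template so an
# abliterated model formats tool calls correctly. Illustrative only.
FROM llama3.1-8b-abliterated:latest
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ range .Messages }}<|im_start|>{{ .Role }}
{{ .Content }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
PARAMETER stop <|im_end|>
```

Rebuilding with `ollama create <name> -f Modelfile` registers the patched variant; LU automates this step for you.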

Comparison

What others don't do.

Every competitor handles chat. None combine agents, uncensored chat, image generation, and video creation in one local app.
| Feature | Locally Uncensored | Open WebUI | LM Studio | SillyTavern |
| --- | --- | --- | --- | --- |
| AI Chat | Yes | Yes | Yes | Yes |
| Image Generation | Yes | No | No | No |
| Video Generation | Yes | No | No | No |
| Uncensored by Default | Yes | No | No | Partial |
| One-Click Setup | Yes | Docker | Yes | Node.js |
| Built-in Personas | 25+ | No | No | Manual |
| Open Source | MIT | MIT | No | AGPL |
| No Docker Required | Yes | No | Yes | Yes |
| Document Chat (RAG) | Yes | Yes | No | No |
| Voice (STT + TTS) | Yes | Partial | No | No |
| AI Agents + Tool Calling | 6 Tools | No | No | No |
| Agent Memory System | Yes | No | No | No |
| Abliterated Model Support | Auto-fix | No | No | Partial |
Setup

Running in under 5 minutes.

One command. No Docker, no config files, no manual model downloads.
1

Clone & Run Setup

Run git clone and setup.bat (or setup.sh on Linux/macOS). The script checks for Node.js and Ollama, installing them if missing.

2

Auto Model Download

The setup script downloads a recommended uncensored/abliterated model (~5.7 GB). The app launches in your browser at localhost:5173.

3

Chat, Create, Generate

Start chatting immediately. For image and video generation, click "Install ComfyUI" in the Create tab — one click, fully automated.

Supported Models

Works with the best local AI models.

Auto-detects all installed models. Just drop them in and they show up.
Text / Chat

Qwen 3 (8B-30B) Abliterated

Latest & smartest open model. 6-18 GB VRAM. Exceptional reasoning, coding, and multilingual support.

Text / Chat

Llama 3.1 8B Abliterated

Fastest all-rounder. 6 GB VRAM. Uncensored, reliable, perfect entry point for any hardware.

Text / Reasoning

DeepSeek R1 (8B-70B)

Chain-of-thought reasoning. Shows its thinking process. 6-48 GB VRAM. Scales to your hardware.

Image Generation

FLUX.1 Dev / Schnell

Best text-to-image. 8-10 GB VRAM. Incredible prompt following, detail, and coherence.

Image Generation

FLUX 2 Klein

Next-gen image model. 8 GB VRAM. Fastest FLUX architecture with stunning quality.

Image Generation

Juggernaut XL V9

Top photorealistic SDXL checkpoint. 6 GB VRAM. Perfect for portraits and realistic scenes.

Video Generation

Wan 2.1 (1.3B-14B)

Best text-to-video. 8-12+ GB VRAM. Lightweight 1.3B for speed, 14B for cinema quality.

Video Generation

HunyuanVideo 1.5

Tencent's video model. 12+ GB VRAM. Excellent temporal consistency and visual quality.

Video Generation

LTX Video 2

Lightricks' latest. 12+ GB VRAM. Fast inference, high quality text-to-video generation.

FAQ

Common questions.

What is Locally Uncensored?

Locally Uncensored is a free, open-source desktop app that lets you run uncensored AI locally on your own machine. It combines AI chat (via Ollama), image generation (via ComfyUI with FLUX, SDXL, and more), and video generation (via Wan 2.1, HunyuanVideo, and LTX Video) in one interface. No cloud, no subscriptions, no data collection. Everything is MIT-licensed.

Is it really 100% free and offline?

Yes. After the initial setup and model download, no internet connection is needed. Your conversations, generated images, and videos never leave your machine. There are no accounts, no telemetry, and no usage limits. The MIT license means you can use, modify, and distribute it freely.

How is this different from Open WebUI, LM Studio, or SillyTavern?

Those tools only handle text chat. Locally Uncensored is the only local AI app that combines chat, image generation, AND video generation in one interface. It ships with 25+ built-in personas, uses uncensored/abliterated models by default, and wraps ComfyUI's complexity behind a simple UI — no node graphs required.

What hardware do I need?

For text chat: any modern computer with 8 GB RAM. For image generation: NVIDIA GPU with 8+ GB VRAM (GTX 1080 or better). For video generation: 10-12 GB VRAM recommended. The app auto-detects your hardware and recommends appropriate models. Works on Windows, macOS, and Linux.
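The hardware-to-capability mapping above boils down to a simple tier lookup. This is an illustrative sketch of those tiers, not the app's actual detection code:

```python
def recommended_capabilities(vram_gb: float) -> list[str]:
    """Map GPU VRAM to the capability tiers described above
    (illustrative thresholds, not the app's real detection logic)."""
    caps = ["text-chat"]                   # chat runs on any modern machine
    if vram_gb >= 8:
        caps.append("image-generation")    # SDXL / FLUX-class checkpoints
    if vram_gb >= 10:
        caps.append("video-generation")    # Wan 2.1-class models, 10-12 GB recommended
    return caps

print(recommended_capabilities(6))   # → ['text-chat']
print(recommended_capabilities(12))  # → ['text-chat', 'image-generation', 'video-generation']
```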

What does "uncensored" mean?

Locally Uncensored uses abliterated AI models — models where artificial content restrictions have been removed. This means the AI responds honestly to any question without refusing or adding disclaimers. Combined with running locally, your conversations are completely private and unrestricted.

Blog

Guides and comparisons.

Best Local AI Apps in 2026

Complete comparison of GPT4All, Open WebUI, LM Studio, Jan, Kobold.cpp, SillyTavern, text-generation-webui, and Locally Uncensored.

How to Run Uncensored AI Locally

A complete guide to running AI locally without restrictions. Setup, models, and why local beats cloud.

Locally Uncensored vs GPT4All

All-in-one AI creative suite vs the most popular local chatbot with document RAG.

Locally Uncensored vs Jan.ai

Lightweight Tauri app with image/video gen vs polished Electron chat client with cloud API support.

Locally Uncensored vs Open WebUI

Both are MIT-licensed Ollama frontends. Only one combines chat, image gen, and video generation.

Locally Uncensored vs LM Studio

Open source all-in-one vs polished closed-source chat client.

Locally Uncensored vs SillyTavern

Both run uncensored AI locally. One is built for roleplay, the other for everything else.

Run your own uncensored AI stack.

Free, open source, and yours to keep. No sign-up required.

View on GitHub Join the Discussion