A quick guide to setting up LM Studio for local AI model execution with Sixth.
🤖 Setting Up LM Studio with Sixth
Run AI models locally using LM Studio and connect them to Sixth.
📋 Prerequisites
A Windows, macOS, or Linux computer with AVX2 support (a quick check is sketched below)
Sixth installed in VS Code
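If you are unsure whether your CPU supports AVX2, the CPU flags tell you directly. A minimal sketch for Linux, which reads /proc/cpuinfo (macOS and Windows need different tools, as noted in the comments):

```python
# Minimal sketch: look for the "avx2" CPU flag in /proc/cpuinfo (Linux only).
# On macOS, `sysctl machdep.cpu.leaf7_features` lists AVX2; on Windows, use a tool like CPU-Z.
def has_avx2() -> bool:
    try:
        with open("/proc/cpuinfo") as f:
            return "avx2" in f.read()
    except OSError:
        return False  # not Linux, or /proc is unavailable

if __name__ == "__main__":
    print("AVX2 supported" if has_avx2() else "AVX2 not detected")
```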
🚀 Setup Steps
1. Install LM Studio
Visit lmstudio.ai
Download and install it for your operating system
2. Launch LM Studio
Open the installed application
You’ll see four tabs on the left:
Chat
Developer (where you will start the server)
My Models (where your downloaded models are stored)
Discover (where you add new models)
3. Download a Model
Browse the “Discover” page
Select and download your preferred model
Wait for download to complete
4. Start the Server
Navigate to the “Developer” tab
Toggle the server switch to “Running”
Note: The server will run at http://localhost:1234
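Once the switch shows “Running”, you can confirm the server is reachable before touching Sixth. LM Studio exposes an OpenAI-compatible HTTP API, so a request to its models endpoint should return JSON. A minimal sketch using only the Python standard library, assuming the default address above:

```python
# Quick reachability check against LM Studio's local server.
# Assumes the default address (http://localhost:1234) shown above.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:1234/v1/models", timeout=5) as resp:
    models = json.load(resp)

for entry in models.get("data", []):
    print(entry["id"])  # model identifiers the server currently reports
```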
5. Configure Sixth
Open VS Code
Click the Sixth settings icon
Select “LM Studio” as the API provider
Select your model from the available options
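Sixth’s LM Studio provider talks to that same OpenAI-compatible endpoint, so you can sanity-check a model outside Sixth first. A sketch using the openai Python package (install it with pip first; the model id below is a placeholder, substitute one reported by the server):

```python
# Sketch: chat with the locally loaded model through LM Studio's
# OpenAI-compatible API. Requires `pip install openai`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default server address
    api_key="lm-studio",                  # any non-empty string; ignored locally
)

reply = client.chat.completions.create(
    model="your-model-id",  # placeholder: use an id reported by /v1/models
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```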
⚠️ Important Notes
Start LM Studio before using it with Sixth
Keep LM Studio running in the background
First model download may take several minutes depending on size
Models are stored locally after download
🔧 Troubleshooting
If Sixth can’t connect to LM Studio:
Verify LM Studio server is running (check Developer tab)
Ensure a model is loaded
Check your system meets hardware requirements
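The first two checks can be scripted. A small diagnostic sketch, assuming the default port, that separates “server not running” from “server up but no model available” (the exact behavior of the models endpoint can vary across LM Studio versions):

```python
# Distinguishes the two most common failure modes:
#   connection refused -> the LM Studio server is not running
#   empty model list   -> the server is up but reports no models
import json
import urllib.error
import urllib.request

URL = "http://localhost:1234/v1/models"  # LM Studio's default address

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        data = json.load(resp).get("data", [])
except urllib.error.URLError as exc:
    print(f"Cannot reach LM Studio at {URL}: {exc.reason}")
    print("-> Open the Developer tab and make sure the server toggle is on.")
else:
    if data:
        print("Server is up. Models reported:")
        for entry in data:
            print(f"  - {entry['id']}")
    else:
        print("Server is up but reports no models; load one in LM Studio.")
```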