πŸ“‹ Prerequisites

  • Windows, macOS, or Linux computer
  • Sixth installed in VS Code

πŸš€ Setup Steps

1. Install Ollama

  • Visit ollama.com
  • Download and install for your operating system
[Image: Ollama download page]
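
If you prefer the terminal, Ollama can also be installed from the command line. These commands are a sketch of the install options Ollama publishes; adapt them to your platform:

    # Linux: official install script from ollama.com
    curl -fsSL https://ollama.com/install.sh | sh

    # macOS: Homebrew, as an alternative to the downloadable installer
    brew install ollama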

2. Choose and Download a Model

  • Browse models at ollama.com/search
  • Select model and copy command:
    ollama run [model-name]
    
[Image: Selecting a model in Ollama]
  • Open your Terminal and run the command:
    • Example:
      ollama run llama2
      
[Image: Running Ollama in terminal]
✨ Your model is now ready to use within Sixth!
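
If you want to download a model without opening an interactive chat session, or to confirm what is already installed, two standard Ollama commands help (the models listed here are what you can select in Sixth later):

    # Download a model without starting a chat session
    ollama pull llama2

    # List the models available locally
    ollama list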

3. Configure Sixth

  1. Open VS Code
  2. Click the Sixth settings icon
  3. Select β€œOllama” as the API provider
  4. Enter the configuration:
    β€’ Base URL: http://localhost:11434/ (the default; it can be left as is)
    • Select the model from your available options
[Image: Configuring Sixth with Ollama]
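
To confirm the base URL before saving, you can query Ollama's REST API directly; /api/tags returns the models installed locally:

    # Should return a JSON list of your downloaded models
    curl http://localhost:11434/api/tags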

⚠️ Important Notes

  • Start Ollama before using with Sixth
  • Keep Ollama running in background
  • First model download may take several minutes

πŸ”§ Troubleshooting

If Sixth can’t connect to Ollama, run through these checks (terminal commands below):
  1. Verify Ollama is running
  2. Check that the base URL is correct
  3. Ensure the model is downloaded
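
Each check maps to a quick terminal command:

    # Checks 1 and 2: a running server answers at the base URL
    # with the text "Ollama is running"
    curl http://localhost:11434/

    # Check 3: the model you selected in Sixth should appear in this list
    ollama list
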
Need more info? Read the Ollama Docs.