A quick guide to setting up Ollama for local AI model execution with Sixth.
To download and start a model, run:

ollama run [model-name]

For example, to run Llama 2:

ollama run llama2
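If you prefer to fetch a model ahead of time instead of on first run, the standard Ollama CLI also provides pull and list commands. A minimal sketch, assuming a default Ollama installation:

# Download the model weights without starting an interactive session
ollama pull llama2

# Confirm which models are available locally
ollama list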
Once a model is running, the Ollama server exposes its HTTP API at the default local endpoint:

http://localhost:11434/
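Any local client can talk to this endpoint over plain HTTP. As a quick check, the sketch below sends a single prompt to Ollama's documented /api/generate route with curl, assuming the llama2 model has already been pulled and the server is on the default port:

# Send one prompt and get the full completion back as a single JSON object
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

With "stream": false the server returns one JSON object whose "response" field holds the generated text; omit it to receive a stream of partial responses instead.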