# Ollama

Run LLMs locally with ease.

## Running Models with Ollama

Ollama makes it easy to run open-source LLMs locally on your machine.
## Installation

```bash
curl -fsSL https://ollama.com/install.sh | sh
```
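After installation, the Ollama server listens on `localhost:11434` (the same port used by the API examples below). A minimal health-check sketch, assuming the server is already running, to confirm it responds:

```python
import requests

# Quick check that the local Ollama server is up (assumes the default port 11434).
# The root endpoint replies with a short plain-text status message.
resp = requests.get("http://localhost:11434/")
print(resp.status_code, resp.text)
```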
## Running Models

```bash
ollama run llama2
ollama run mistral
ollama run codellama
```
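Each `ollama run` pulls the model on first use and then drops into an interactive session. To see which models are already available locally, one option is the tags endpoint; a minimal sketch, assuming the server is running on the default port:

```python
import requests

# List locally available models via the /api/tags endpoint.
models = requests.get("http://localhost:11434/api/tags").json()
for model in models.get("models", []):
    print(model["name"])
```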
## API Integration

Ollama also exposes a local HTTP API on port 11434. The generate endpoint streams newline-delimited JSON by default, so set `"stream": False` to receive a single JSON response:

```python
import requests

# Ask a local model for a single, non-streaming completion.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?", "stream": False},
)
print(response.json()["response"])
```
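When streaming is left on (the default), the endpoint returns one JSON object per line, each carrying a `response` fragment and a final `done` flag. A sketch of consuming that stream:

```python
import json

import requests

# Stream tokens from /api/generate; each line of the response is a standalone JSON object.
with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama2", "prompt": "Why is the sky blue?"},
    stream=True,
) as resp:
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            break
print()
```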