Use Ollama with the official Python library

Ollama is a great way to get started with AI by running open-source, publicly available large language models locally on your computer. I previously wrote about getting started with the experimental OpenAI API, but Ollama also has a dedicated Python library that is even simpler.

  1. Install the library: pip3 install ollama

  2. Create a new Python file: touch completion.py

  3. Add the following code to completion.py:

    import ollama
    
    def get_completion(prompt, model):
        # Send a single user message and return the model's reply text
        response = ollama.chat(model=model, messages=[{
            'role': 'user',
            'content': prompt,
        }])

        return response['message']['content']
    
    prompt = "What is the chief end of man?"
    
    print(get_completion(prompt, "mistral"))
    
  4. Run the file: python3 completion.py
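
This assumes the Ollama app is already running locally and that the Mistral model has been pulled (for example with ollama pull mistral). As a side note, the library also exposes the generate endpoint for one-off prompts; here is a minimal sketch of the same request using it, again assuming the mistral model:

    import ollama

    # generate takes a bare prompt rather than a list of chat messages
    response = ollama.generate(model='mistral', prompt='What is the chief end of man?')

    print(response['response'])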

More information is available in the library repo on GitHub, including examples of streaming responses and a custom client. For even more documentation on Ollama, check out the /docs directory in the main repo.
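
As a taste of the streaming example, passing stream=True to ollama.chat turns the call into a generator of partial responses; a minimal sketch, again assuming the mistral model:

    import ollama

    # stream=True yields the response incrementally as it is generated
    stream = ollama.chat(
        model='mistral',
        messages=[{'role': 'user', 'content': 'What is the chief end of man?'}],
        stream=True,
    )

    # Print each chunk as it arrives for typewriter-style output
    for chunk in stream:
        print(chunk['message']['content'], end='', flush=True)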