Ollama supports LLMs, chat models, and embedding models.

## Chat Models and LLMs

The most important ones are `mistral` and `llama3`. For a multi-modal LLM, there is [llava](https://ollama.com/library/llava).

## Embedding Models

For embeddings, there are two high-performance models: [mxbai-embed-large](https://ollama.com/library/mxbai-embed-large) and [nomic-embed-text](https://ollama.com/library/nomic-embed-text). They need to be downloaded before they can be used on your own Ollama instance.

# ChatGPT Answer

Ollama is a lightweight, extensible framework designed to facilitate building, running, and managing language models on local machines. It provides a simple API for creating and running models, alongside a library of pre-built models ready for a variety of applications. Ollama supports running open-source large language models, such as Llama 3, Mistral, Gemma, and others, locally without sending private data to third-party services. It also supports GPU acceleration for faster model inference, and community projects such as ollama-ui add a graphical interface for managing local models.

# Bard Answer

Ollama.ai is a platform that empowers you to **work with large language models (LLMs)** like Llama 2 and Code Llama right on your own computer. It's a game-changer for anyone interested in:

- **Experimenting with cutting-edge LLMs:** Ollama.ai makes it easy to download and run models like Llama 2, known for its general language capabilities, and Code Llama, designed specifically for code generation and editing. This opens doors to exploration, learning, and personal projects.
- **Customization and fine-tuning:** Ollama.ai doesn't just let you run pre-built models; it grants you the power to customize them. You can fine-tune the models on your own data, tailoring them to specific tasks or domains.
- **Local deployment and control:** No cloud dependency here! Ollama.ai enables you to run LLMs entirely on your local machine, giving you full control over their operation and output. This is particularly appealing for privacy-conscious users or those working with sensitive data.

Here are some of the key features of Ollama.ai:

- **Library of models:** Choose from a range of pre-trained models, including Llama 2, Mistral (another 7B-parameter model), Code Llama, and more.
- **Download and installation:** Simple installation instructions for macOS and Linux make getting started a breeze. Windows support is coming soon.
- **API and CLI:** Interact with the LLMs programmatically through a Python API or a command-line interface.
- **Fine-tuning and customization:** Train models on your own data or modify existing models' parameters.
- **Community and support:** Ollama.ai boasts a helpful community and dedicated support channels.

Whether you're an ML enthusiast, a developer seeking code assistance, or simply curious about LLMs, Ollama.ai provides a valuable platform for exploration and innovation. It's definitely worth checking out if you're interested in working with these powerful language models on your own terms.

For further information, you can visit the Ollama.ai website at [https://ollama.ai/](https://ollama.ai/). Remember, Windows support is currently in the works, so keep an eye out if that's relevant to you.
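To make the CLI and local API mentioned above concrete, here is a minimal sketch. The model names and prompts are only examples, the REST calls assume Ollama's default port 11434, and `ollama serve` (or the installed service) must already be running for the `curl` commands to work:

```bash
# Download the models once so they are available locally
ollama pull llama3
ollama pull nomic-embed-text

# One-shot prompt from the CLI
ollama run llama3 "Explain what Ollama does in one sentence."

# The same request over the local HTTP API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain what Ollama does in one sentence.",
  "stream": false
}'

# Embeddings from one of the embedding models listed at the top of this note
curl http://localhost:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "Ollama runs LLMs locally."
}'
```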
# How to run Ollama on Linux

**Prerequisites:**

- **Linux machine:** Any modern Linux distribution should work (Ubuntu, Debian, CentOS, etc.).
- **Ollama installation:** Follow the installation instructions in the Ollama repository: [https://github.com/jmorganca/ollama](https://github.com/jmorganca/ollama)
- **Sufficient resources:** Ollama, especially with large models, can be resource-intensive. Make sure your machine has enough RAM and CPU capacity.

**Methods for keeping Ollama running:**

**1. Using tmux:**

- **Start tmux:** `tmux new -s ollama`
- **In the tmux session:**
    - Navigate to your Ollama installation directory.
    - Start the server: `ollama serve`
- **Detach from the session:** press `Ctrl+b` then `d`

**Pros:** Simple, good for temporary background running; the server survives closing the terminal.
**Cons:** Nothing restarts the server after a crash or reboot, and it stops if the tmux session is killed.

**2. Using screen:**

- **Start screen:** `screen -S ollama`
- **In the screen session:**
    - Navigate to your Ollama installation directory.
    - Start the server: `ollama serve`
- **Detach from the session:** press `Ctrl+a` then `d`

**Pros:** Similar to tmux, easy for temporary background running.
**Cons:** Same limitations as tmux.

**3. Using systemd (recommended for persistent running):**

- **Create a systemd service file:**

```bash
sudo nano /etc/systemd/system/ollama.service
```

- **Paste the following content, adjusting the user and paths as needed:**

```ini
[Unit]
Description=Ollama Service
After=network.target

[Service]
# Replace with the user that should run the server
User=your_username
WorkingDirectory=/path/to/ollama
ExecStart=/path/to/ollama serve
Restart=always

[Install]
WantedBy=multi-user.target
```

- **Enable and start the service:**

```bash
sudo systemctl enable ollama.service
sudo systemctl start ollama.service
```

- **Check status:** `sudo systemctl status ollama.service`

**Pros:**

- Robust; the process restarts automatically if it fails.
- Integrated with Linux system management.

**Cons:** Slightly more complex setup.

**Additional considerations:**

- **Model selection:** `ollama serve` does not take a model argument; models are downloaded with `ollama pull <model>` and loaded on demand when a client requests them. Server behaviour can be adjusted with `Environment=` lines in the unit, e.g. `Environment="OLLAMA_HOST=0.0.0.0:11434"` to change the bind address or `Environment="OLLAMA_MODELS=/path/to/models"` to change where models are stored.
- **Resource monitoring:** Use tools like `top` or `htop` to monitor Ollama's CPU and RAM usage (and `nvidia-smi` if you are running on an NVIDIA GPU).

# Ollama Library

To find out what models are available for Ollama, see https://ollama.com/library

# References

```dataview
Table title as Title, authors as Authors
where contains(subject, "Ollama")
```
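Once the server is running via one of the methods in the Linux section above, a quick sanity check might look like this (assuming the systemd setup and the default port 11434):

```bash
# Confirm the service is up and inspect its recent logs
sudo systemctl status ollama.service
journalctl -u ollama.service -n 50 --no-pager

# Confirm the API is reachable; /api/tags lists the locally available models
curl http://localhost:11434/api/tags
```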