Welcome to the Ollama Course.
Ollama is an open-source platform for downloading, installing, managing, running, and deploying large language models (LLMs), all locally on your own machine. LLMs are models designed to understand, generate, and interpret human language at a high level.
Features
- Model Library: Offers a variety of pre-built models like Llama 3.2, Mistral, etc.
- Customization: Allows you to customize and create your own models
- Easy: Provides a simple API for creating, running, and managing models (see the example after this list)
- Cross Platform: Available for macOS, Linux, and Windows
- Modelfile: Packages everything you need to run an LLM into a single Modelfile, making it easy to manage and run models
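As a rough illustration of that API, here is a minimal request against the local REST endpoint Ollama exposes (port 11434 by default). The model name llama3.2 is only an example and must already be downloaded; adjust it to whatever model you have installed.

```
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Explain what a large language model is in one sentence.",
  "stream": false
}'
```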
Popular LLMs, such as Llama by Meta, Mistral, Gemma by Google DeepMind, Phi by Microsoft, Qwen by Alibaba Cloud, etc., can run locally using Ollama.
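For example, once Ollama is installed, running one of these models locally is typically a single command (the model name below is illustrative; any model from the Ollama library works the same way):

```
# Downloads the model on first use, then opens an interactive chat in the terminal
# (inside the session, type /bye to exit)
ollama run llama3.2
```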
In this course, you will learn what Ollama is and how it simplifies running LLMs for programmers. We cover how to get started with Ollama and how to install and tune LLMs such as Llama 3.2 and Mistral 7B. We also show how to customize a model and build a teaching-assistant chatbot locally by creating a Modelfile.
Course Lessons
Section A: Ollama Introduction & Setup
- Ollama Introduction and Features
- Install Ollama on Windows locally
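On Windows, installation is done through the graphical installer from ollama.com, as covered in the lesson. On Linux (and as one option on macOS) the project publishes a shell installer; a quick sketch, assuming a standard setup:

```
# Linux: install via the script published by the Ollama project
curl -fsSL https://ollama.com/install.sh | sh

# Verify the installation on any platform
ollama --version
```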
Section B: Setup LLMs locally with Ollama
- Install Llama 3.2 locally
- Install Mistral 7B locally
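In practice, installing these two models comes down to pulling them from the Ollama model library (exact tag names may differ; check the library listing):

```
# Download the models covered in this section
ollama pull llama3.2
ollama pull mistral:7b

# Confirm they are available locally
ollama list
```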
Section C: Ollama Commands and Usage
- List all the models running on Ollama locally
- List the installed models on your system with Ollama
- Show the information of a model using Ollama locally
- How to stop a running model on Ollama
- How to run an already installed model on Ollama
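The commands behind these lessons are short; a quick reference sketch follows (availability of subcommands such as ps and stop depends on your Ollama version, and llama3.2 is just a placeholder model name):

```
ollama ps              # list models currently loaded in memory
ollama list            # list all models installed on this system
ollama show llama3.2   # show a model's details (parameters, template, license)
ollama stop llama3.2   # stop/unload a running model
ollama run llama3.2    # run an already installed model
```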
Section D: Create and Run a ChatGPT-like Model with Ollama
- Customize a model and create and run a ChatGPT-like model with Ollama locally
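As a preview, here is a minimal sketch of that workflow, assuming a base model such as llama3.2 is already installed; the model name teaching-assistant and the system prompt are purely illustrative:

```
# Write a simple Modelfile that layers a system prompt on a base model
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM "You are a patient teaching assistant. Explain concepts step by step."
EOF

# Build the custom model and chat with it locally
ollama create teaching-assistant -f Modelfile
ollama run teaching-assistant
```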
Section E: Remove a Model with Ollama
- Remove any model from Ollama locally
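Removing a model frees the disk space its weights occupy; a minimal sketch (the model name is a placeholder):

```
# Delete a locally installed model
ollama rm mistral:7b

# Verify it no longer appears in the installed list
ollama list
```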
Note: This course covers only open-source technologies.
Let us start the journey.