Using Large Language Models locally
GDPR or intellectual property considerations may prohibit sending data, or even queries, to providers of Large Language Models via a web interface. A model may also have to be fine-tuned to improve its output quality for certain applications. This training will show you how to deploy an LLM locally and illustrate some applications you can implement.
Objectives:
- Install and run LLMs on local hardware (a minimal example is sketched after this list);
- Use Retrieval Augmented Generation (RAG) to chat with your own documents/data;
- Fine-tune an LLM to better answer your questions;
- Optimize an LLM to reduce its resource requirements for inference.
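As a taste of the first objective, the sketch below shows one possible way to run a small open-weight model on local hardware with the Hugging Face `transformers` library. The course does not prescribe a specific toolchain, and the model name used here is only an illustrative assumption; any locally downloadable checkpoint can be substituted.

```python
# A minimal sketch of running an open-weight LLM locally with Hugging Face
# Transformers. The model name is an illustrative assumption; swap in any
# locally downloadable text-generation checkpoint you prefer.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumed example model, small enough for a laptop
    device_map="auto",                   # places the model on a GPU if one is available
)

prompt = "Explain in one sentence why you might want to run an LLM locally."
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```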
Required skills: familiarity with Python; experience using the Bash command line will also help.
Organised by VIB Technology Training and ELIXIR Belgium