Building RAG agents locally using open source LLMs on Intel CPU (#28302)

**Description:** Added a cookbook that showcases how to build a RAG agent
pipeline locally using open-source LLM and embedding models on an Intel
Xeon CPU. It uses the Llama 3.1 8B model served via Ollama as the LLM and
nomic-embed-text-v1.5 from NomicEmbeddings for embeddings. The whole
workflow was developed and tested on a 4th Gen Intel Xeon Scalable CPU.
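The pipeline described above can be approximated with the minimal sketch below. This is not the cookbook notebook itself: the source URL, prompt wording, chunking parameters, and supporting packages (`langchain-ollama`, `langchain-nomic`, `langchain-community`, `faiss-cpu`) are illustrative assumptions, and it presumes a local Ollama server with `llama3.1:8b` already pulled.

```python
# Minimal local RAG sketch (assumptions: Ollama is running locally after
# `ollama pull llama3.1:8b`; langchain-ollama, langchain-nomic,
# langchain-community, and faiss-cpu are installed; the URL and prompt
# wording are placeholders, not taken from the cookbook).
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nomic import NomicEmbeddings
from langchain_ollama import ChatOllama
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load a source document and split it into overlapping chunks.
docs = WebBaseLoader("https://example.com/some-article").load()  # hypothetical URL
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# 2. Embed the chunks locally with nomic-embed-text-v1.5 and index them on CPU.
embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5", inference_mode="local")
vectorstore = FAISS.from_documents(chunks, embeddings)
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# 3. Use the Llama 3.1 8B model served by Ollama as the local LLM.
llm = ChatOllama(model="llama3.1:8b", temperature=0)

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)


def format_docs(retrieved_docs):
    """Concatenate retrieved chunks into a single context string."""
    return "\n\n".join(d.page_content for d in retrieved_docs)


# 4. Retrieve relevant chunks and generate a grounded answer.
chain = prompt | llm | StrOutputParser()
question = "What is the article about?"
answer = chain.invoke(
    {"context": format_docs(retriever.invoke(question)), "question": question}
)
print(answer)
```

Everything here runs on the local machine: embeddings are computed in Nomic's local inference mode and generation goes through the Ollama server, so no hosted API is required.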

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Pratool Bharti, 2024-11-27 07:40:09 -08:00 (committed by GitHub)
commit c09000f20e, parent 607c60a594
2 changed files with 657 additions and 1 deletion
