Retrieval-Augmented Generation (RAG) for AI Applications
Comprehensive guide to Retrieval-Augmented Generation, covering architecture, embeddings, vector databases, document indexing, retrieval strategies, and best practices for building production-ready RAG systems.
RAG (Retrieval-Augmented Generation)
What is RAG?
RAG helps an LLM generate answers grounded in knowledge retrieved from a vector DB of existing documents.
RAG combines two components:
- Retriever
- Generator (LLM)
Core idea:
Without RAG we ask the LLM directly:
User → LLM → Answer
With RAG we add a context retrieval step to improve the answer:
User → Retrieve relevant context → LLM (question + context) → Answer
RAG is becoming the default architecture for AI products. But building it reliably in production requires careful engineering.
Advantages of RAG
- Access private knowledge: the model can answer questions about private or internal data it was never trained on.
- Reduce hallucinations: grounding the model in retrieved documents reduces the chance of generating false information.
- Stay up-to-date: retrieved documents can be refreshed at any time without retraining the model.
How RAG Works
RAG works in three steps:
- Search for documents relevant to the question
- Insert the retrieved text into the prompt
- Generate the answer from the updated prompt
Given:
- $q$: the user query
- $D = \{d_1, d_2, \dots, d_n\}$: the document set

The RAG system retrieves the most relevant document

$$d^* = \arg\max_{d \in D} \operatorname{sim}(q, d)$$

Then the LLM generates a response conditioned on $q$ and $d^*$.
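As an illustration of that argmax, here is a minimal sketch that scores documents against the query with cosine similarity using NumPy; the small `doc_vectors` and `query_vector` arrays are made-up placeholders standing in for real embeddings.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    # sim(q, d) = (q . d) / (||q|| * ||d||)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up placeholder embeddings: three documents, four dimensions each
doc_vectors = np.array([
    [0.1, 0.3, 0.5, 0.1],
    [0.9, 0.1, 0.0, 0.2],
    [0.2, 0.8, 0.1, 0.4],
])
query_vector = np.array([0.2, 0.7, 0.2, 0.3])

# d* = argmax over documents of sim(q, d)
scores = [cosine_sim(query_vector, d) for d in doc_vectors]
best_doc_index = int(np.argmax(scores))
print(best_doc_index, scores[best_doc_index])
```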
```mermaid
flowchart TD
    Q[User question] --> R1[Retrieve relevant documents]
    R1 --> R2[Insert retrieved context into prompt]
    R2 --> LLM[LLM generates answer]
    LLM --> A[Grounded response]
```
Conceptually, the prompt becomes:
"Use the following context to answer the question: <retrieved documents> Question: <user question>"
This is powerful because the LLM is being used more as a reasoning engine than as a pure source of facts.
It reads the relevant text and uses it to formulate an answer.
Building a Production RAG System Step-by-Step
Large Language Models are powerful, but they have one major limitation: they don't know your private data.
If you ask a model about your company docs, support tickets, or internal knowledge base, it will hallucinate or say it doesn't know.
Retrieval Augmented Generation (RAG) solves this.
Instead of relying only on the model's training data, we retrieve relevant documents at query time and inject them into the prompt.
In this post we'll walk through how to build a production RAG system step-by-step, including architecture, scaling concerns, and engineering tradeoffs.
Step 1 – Data Collection
Your RAG system is only as good as the documents you feed it.
We need to collect and index all relevant documents that the model can retrieve from.
At query time, RAG will search this vector DB of documents to find relevant context for the user query.
Typical sources:
- Confluence Pages
- Slack threads
- GitHub repos
- Product docs & Wiki
- Policy docs (e.g. PDFs)
Example pipeline:
```mermaid
flowchart TD
    A[Data Sources] --> B[Document Loader]
    B --> C[Text Cleaning]
    C --> D[Chunking]
```
Python example:
```python
from langchain.document_loaders import PyPDFLoader

# Load the PDF; PyPDFLoader returns one Document per page
loader = PyPDFLoader("docs/architecture.pdf")
documents = loader.load()
```
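The pipeline above also includes a text-cleaning step before chunking. How much cleaning you need depends on the source; as a minimal sketch, the hypothetical `clean` helper below just normalizes whitespace and drops empty lines from the loaded documents.

```python
import re

def clean(text: str) -> str:
    # Collapse runs of spaces/tabs and drop empty lines
    text = re.sub(r"[ \t]+", " ", text)
    lines = [line.strip() for line in text.splitlines()]
    return "\n".join(line for line in lines if line)

# Apply the cleaning to every loaded document before chunking
for doc in documents:
    doc.page_content = clean(doc.page_content)
```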
Step 2 – Chunking Documents
Instead of embedding an entire document, we split it into chunks.
LLMs have context limits (e.g. 4k tokens), so we need to break documents into smaller pieces.
| Chunk Size | Tradeoff |
|---|---|
| Small (~200 tokens) | More precise retrieval, but less context per chunk |
| Large (~1000 tokens) | More context per chunk, but noisier, less focused retrieval |
A common heuristic:
chunk size ≈ 300 tokens, with an overlap of ≈ 50 tokens
where the overlap helps maintain context across chunk boundaries.
Example:
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Note: chunk_size / chunk_overlap are measured in characters by default;
# use RecursiveCharacterTextSplitter.from_tiktoken_encoder for token-based limits
splitter = RecursiveCharacterTextSplitter(
    chunk_size=300,
    chunk_overlap=50
)
chunks = splitter.split_documents(documents)
```
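Before embedding, it is worth a quick sanity check that the splitter produced reasonable chunks; this small sketch assumes the `chunks` list from above and simply reports how many chunks there are and how long they run.

```python
# How many chunks did we get, and how long are they (in characters)?
lengths = [len(c.page_content) for c in chunks]
print(f"{len(chunks)} chunks")
print(f"min / avg / max length: {min(lengths)} / {sum(lengths) // len(lengths)} / {max(lengths)}")
```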
Step 3 – Embedding the Data
Embeddings convert text into vectors.
- Also called vectorization or encoding.
- Converts text into a high-dimensional vector that captures semantic meaning.
Example embedding:
"What is Kubernetes?"
β [0.12, -0.44, 0.88, ...]
Similar meaning β similar vectors.
Example code:
```python
from openai import OpenAI

client = OpenAI()

# The API returns a response object; the vector itself is in response.data[0].embedding
response = client.embeddings.create(
    model="text-embedding-3-large",
    input="What is Kubernetes?"
)
embedding = response.data[0].embedding
```
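The snippet above embeds a single query string; for indexing we also need a vector for every chunk from Step 2. A minimal sketch, assuming the `chunks` list from earlier and batching requests so large document sets stay within API limits:

```python
def embed_texts(texts, batch_size=100):
    # The embeddings endpoint accepts a list of inputs and
    # returns vectors in the same order in response.data
    vectors = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        response = client.embeddings.create(
            model="text-embedding-3-large",
            input=batch,
        )
        vectors.extend(item.embedding for item in response.data)
    return vectors

chunk_vectors = embed_texts([c.page_content for c in chunks])
```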
Step 4 – Store in a Vector Database
Embeddings must be stored in a vector index.
Popular Vector DB options:
| Database | Use Case |
|---|---|
| Pinecone | Fully managed vector database. |
| Weaviate | Supports hybrid search |
| FAISS | Open-source similarity search library from Meta; runs in-process |
| Qdrant | Open-source vector database with metadata filtering |
Example architecture:
```mermaid
flowchart TD
    C[Chunks] --> E[Embedding Model]
    E --> V[Vector DB]
```
Python:
```python
# Generic upsert; the exact method name and arguments depend on the vector DB client
vector_db.add(
    ids=[chunk_id],
    embeddings=[embedding],
    metadata={"source": "docs"}
)
```
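As one concrete, in-process option from the table, here is a hedged sketch using FAISS, with normalized vectors and an inner-product index so that search scores correspond to cosine similarity; it assumes the `chunks` and `chunk_vectors` from the earlier steps.

```python
import faiss
import numpy as np

# FAISS works on float32 matrices
matrix = np.array(chunk_vectors, dtype="float32")

# Normalizing the vectors makes inner product equal to cosine similarity
faiss.normalize_L2(matrix)

index = faiss.IndexFlatIP(matrix.shape[1])
index.add(matrix)

# Keep the chunk texts alongside the index so search hits map back to content
chunk_texts = [c.page_content for c in chunks]
```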
Step 5 – Query-Time Retrieval
```mermaid
flowchart TD
    Q[User Query] --> E[Embedding]
    E --> S[Vector Similarity Search]
    S --> D[Top-K Documents]
```
Mathematically we search using cosine similarity:

$$\operatorname{sim}(q, d) = \frac{q \cdot d}{\lVert q \rVert \, \lVert d \rVert}$$
Python:
```python
# Retrieve the k most similar chunks to the query embedding
results = vector_db.search(
    query_embedding,
    k=5
)
```
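Continuing the FAISS sketch from Step 4, query-time retrieval embeds the question with the same embedding model and searches the index for the top-k chunks; `retrieve_top_k` is an illustrative helper (not a library function) and reuses the `client`, `index`, and `chunk_texts` objects from the earlier sketches.

```python
import faiss
import numpy as np

def retrieve_top_k(query: str, k: int = 5):
    # Embed the query with the same model used for the chunks
    response = client.embeddings.create(
        model="text-embedding-3-large",
        input=query,
    )
    query_vec = np.array([response.data[0].embedding], dtype="float32")
    faiss.normalize_L2(query_vec)

    # Scores are cosine similarities because the indexed vectors were normalized
    scores, indices = index.search(query_vec, k)
    return [chunk_texts[i] for i in indices[0]]

query = "How does our deployment pipeline work?"
docs = retrieve_top_k(query, k=5)
```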
Step 6 – Prompt Construction
Now we inject retrieved documents into the prompt.
Example prompt template:
You are a helpful assistant.
Use the context below to answer the question.
Context:
{retrieved_docs}
Question:
{user_query}
Example:

```python
# docs: the retrieved chunks formatted as a single string; query: the user question
prompt = f"""
Answer the question using the context below.
Context:
{docs}
Question:
{query}
"""
```
Step 7 – Generate Answer with LLM
Now the LLM generates the answer grounded in retrieved knowledge.
```mermaid
flowchart TD
    P[Prompt + Retrieved Context] --> LLM[LLM Generation]
    LLM --> A[Answer]
```
```python
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": prompt}]
)
answer = response.choices[0].message.content
```
Production Architecture
A scalable RAG architecture looks like this:
```
┌─────────────┐
│  User App   │
└──────┬──────┘
       │
       ▼
┌─────────────┐
│ API Server  │
└──────┬──────┘
       │
   ┌───┴──────────┐
   ▼              ▼
Vector Database  LLM API
 (Retrieval)    (Generation)
   │              │
   └───┬──────────┘
       ▼
   Response
```
Example Tech Stack
| Layer | Tools |
|---|---|
| Ingestion | Airflow |
| Embeddings | OpenAI |
| Vector DB | Pinecone |
| Orchestration | LangChain |
| API | FastAPI |
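To make the API layer concrete, here is a hedged sketch of a FastAPI endpoint that wires the earlier steps together; it reuses the illustrative `retrieve_top_k` and `build_prompt` helpers and the OpenAI `client` from the sketches above, which are assumptions rather than fixed interfaces.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Question(BaseModel):
    query: str

@app.post("/ask")
def ask(question: Question):
    # Retrieve context, build the grounded prompt, then call the LLM
    docs = retrieve_top_k(question.query, k=5)
    prompt = build_prompt(question.query, docs)
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": prompt}],
    )
    return {"answer": response.choices[0].message.content, "sources": docs}
```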
