Building RAG Pipelines with LangChain
Retrieval-Augmented Generation (RAG) has become the standard pattern for grounding enterprise AI in private data. Here is how to build a robust pipeline with a vector database and semantic reranking.
Why RAG?
LLMs don't know your private data. Fine-tuning is expensive and slow to update. RAG allows us to inject relevant context into the prompt at runtime. It's the difference between a generic answer and a business-specific insight.
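To make "inject relevant context at runtime" concrete, here is a minimal sketch of the prompt-assembly step. The helper name and template wording are illustrative, not from any specific framework:

```python
def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a grounded prompt by injecting retrieved context at runtime."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Usage: the retrieved chunks become the model's working knowledge.
prompt = build_prompt(
    "What is our refund window?",
    ["Refunds are accepted within 30 days of purchase."],
)
```

Because the context is assembled per request, updating the knowledge base is just a re-index, not a retrain.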
The Architecture
I use Pinecone for vector storage and OpenAI's `text-embedding-3-small` for embeddings. The trick isn't the retrieval; it's the chunking strategy. A RecursiveCharacterTextSplitter with meaningful overlap ensures context isn't severed mid-sentence.
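To show what overlap buys you, here is a dependency-free sketch of overlap-aware character splitting. This is not LangChain's implementation (which also splits on separators like paragraphs and sentences first); the function name and sizes are illustrative:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size chunks where each chunk repeats the
    last `overlap` characters of the previous one, so a sentence cut
    at a boundary still appears whole in the next chunk."""
    assert 0 <= overlap < chunk_size
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        start += chunk_size - overlap  # step forward, keeping the overlap
    return chunks

chunks = chunk_text("some long document " * 50)
```

Each chunk is embedded and upserted into the vector index; the overlap means a fact straddling a boundary is retrievable from at least one chunk.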
The Reranking Step
Vector similarity isn't always semantic relevance. I add a cross-encoder reranking step (using Cohere) to re-sort the retrieved chunks before feeding them to the LLM. This dramatically reduces hallucinations.
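The reorder step itself is simple once you have a relevance scorer. A sketch with a stand-in lexical scorer in place of the cross-encoder (the function names are illustrative; in production the score would come from Cohere's reranker):

```python
def lexical_overlap(query: str, chunk: str) -> float:
    """Stand-in scorer: fraction of query tokens present in the chunk.
    A real pipeline would use a cross-encoder relevance score here."""
    q_tokens = set(query.lower().split())
    c_tokens = set(chunk.lower().split())
    return len(q_tokens & c_tokens) / max(len(q_tokens), 1)

def rerank(query: str, chunks: list[str], score_fn, top_k: int = 3) -> list[str]:
    """Re-sort retrieved chunks by relevance and keep only the best top_k."""
    return sorted(chunks, key=lambda c: score_fn(query, c), reverse=True)[:top_k]

best = rerank(
    "refund window policy",
    ["shipping rates table", "our refund window is 30 days", "careers page"],
    lexical_overlap,
    top_k=1,
)
```

Trimming to `top_k` also keeps the prompt small, which matters as much for quality as the ordering does.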