Build RAG From Scratch
Retrieval-Augmented Generation (RAG) helps large language models stay up to date and reduce hallucinations, but what’s really happening under the hood?
Join us for a hands-on livestream where we’ll break down the key components of a RAG system—by building one from scratch! (Okay, maybe not the LLM itself—we do have a time limit!) Along the way, you’ll gain a deep understanding of how vectorization, similarity search, embedding models, and vector databases work together to power better AI responses.
What you'll learn:
- Vectorization & similarity search – How data is transformed for AI-powered retrieval
- Embedding models & vector databases – Their roles in improving chatbot accuracy
- Bringing it all together – Watch as we connect the pieces and build an augmented chatbot
Can’t join us live? Register anyway and we’ll share the replay afterward.
Speakers

Phil Nash
Developer Relations Engineer
DataStax
Resources
- Highly Accurate Retrieval for your RAG Application with ColBERT and Astra DB
- Better LLM Integration and Relevancy with Content-Centric Knowledge Graphs
- Generate Related Posts for Your Astro Blog with Astra DB Vector Search
Get Started
Simplify and Accelerate GenAI App Development with Astra DB