Overview
Join us for an educational 3-hour workshop where you will explore applying Retrieval-Augmented Generation (RAG) to build a chatbot application over private data using Elastic. You will also learn to deploy a publicly available Large Language Model (LLM) on Amazon SageMaker, use Elasticsearch's hybrid search, and integrate private data to improve chatbot performance.
The workshop walks through integrating Elastic, a robust search engine, with an LLM hosted on Amazon SageMaker or Amazon Bedrock to answer domain-specific questions.
Highlights
Utilizing Elasticsearch:
- Run hybrid search queries to retrieve relevant private data from Elastic.
- Combine BM25 for lexical (keyword) relevance with kNN vector search for semantic similarity, as sketched below.
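A minimal sketch of such a hybrid query with the Elasticsearch Python client is shown here; the index name, field names, and embedding dimension are illustrative assumptions, and the query vector would normally come from the same embedding model used at indexing time.

```python
from elasticsearch import Elasticsearch

# Hypothetical connection details for illustration.
es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

question = "How do I reset my agency VPN credentials?"
question_embedding = [0.0] * 384  # placeholder; produced by your embedding model in practice

response = es.search(
    index="private-docs",
    # BM25 lexical scoring over the text field.
    query={"match": {"body": question}},
    # Approximate kNN over the dense-vector field for semantic similarity.
    knn={
        "field": "body_embedding",
        "query_vector": question_embedding,
        "k": 5,
        "num_candidates": 50,
    },
    size=5,
)

for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["body"][:80])
```

When a request contains both a query and a knn clause, Elasticsearch blends the scores, so the top hits reflect both keyword overlap and semantic closeness to the question.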
Contextualizing LLMs:
- Modify the chatbot to pass the documents retrieved from Elastic to the LLM as context (see the sketch below).
- Observe how the enriched context improves LLM responses to domain-specific questions.
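As a rough sketch of that step, the retrieved passages can be stitched into the prompt sent to the model; here the model is invoked through the Amazon Bedrock Converse API, and the model ID and passages are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

question = "How do I reset my agency VPN credentials?"
# Placeholder passages; in the workshop these come from the Elasticsearch hybrid search.
retrieved_passages = [
    "VPN credentials can be reset from the self-service portal under Account > Security.",
    "A reset takes effect within 15 minutes; contact the help desk if access is still blocked.",
]

prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n\n".join(retrieved_passages) + "\n\n"
    "Question: " + question
)

reply = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(reply["output"]["message"]["content"][0]["text"])
```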
LLM Deployment:
- Learn how to choose and deploy the right LLM for your use case on Amazon SageMaker or Amazon Bedrock (see the sketch below).
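For example, a publicly available model from the SageMaker JumpStart catalog might be deployed with the SageMaker Python SDK roughly as follows; the model ID, instance type, and request payload are assumptions chosen for illustration, not a recommendation.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Example JumpStart model; pick the model that fits your accuracy, latency, and cost needs.
model = JumpStartModel(model_id="huggingface-llm-mistral-7b-instruct")

# Deploy to a real-time endpoint (instance type is illustrative).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)

# Query the endpoint; the payload format depends on the model chosen.
result = predictor.predict({
    "inputs": "Summarize our records retention policy.",
    "parameters": {"max_new_tokens": 256},
})
print(result)
```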
Customer Applications:
- Improved Chatbot Accuracy: Elevate chatbot performance for domain-specific inquiries by leveraging your private data in Elastic.
- Enhanced Customer Experience: Offer more relevant and accurate information, leading to increased customer satisfaction.
- Personalization Opportunities: Tailor chatbot responses to individual user needs by incorporating their past interactions and preferences.
- Efficient Workflow Automation: Reduce manual query resolution by empowering the chatbot to handle complex questions.
FOR GOVERNMENT EMPLOYEES ONLY
Agenda
12:00 - 12:45 Check-In / Lunch
12:45 - 1:45 Presentation
1:45 - 2:30 Workshop
2:30 - 3:00 Q&A
Location:
AWS Offices Sacramento
400 Capitol Mall, 9th Floor
Sacramento, CA 95814