Generative AI with AWS Bedrock
In this blog, we’ll walk through how to build a domain-specific Generative AI chatbot using Amazon Bedrock and a custom Knowledge Base. The goal is to combine the flexibility of foundation models with domain-relevant retrieval for accurate, contextual responses — all in a serverless, cloud-native stack.
The complete source code is available in the GitHub repository.
Architecture Overview
The system consists of two main components:
Infrastructure-as-Code (Terraform) to deploy:
- Aurora PostgreSQL
- VPC
- S3
- AWS Bedrock Knowledge Base
Streamlit App that connects to Bedrock’s LLMs and queries the Knowledge Base via Amazon’s bedrock-agent-runtime.
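Under the hood, the app only needs two boto3 clients: one for invoking models and one for querying the Knowledge Base. A minimal sketch of that setup (the region and the variable names are assumptions, chosen to match the snippets later in this post):
import boto3

# Runtime client for invoking foundation models (InvokeModel)
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Runtime client for Knowledge Base retrieval (Retrieve)
bedrock_kb = boto3.client("bedrock-agent-runtime", region_name="us-east-1")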
Deployment in Two Stacks
You’ll use Terraform to provision two stacks, as described in DEPLOY.md.
Stack 1: Core Infrastructure
- VPC
- Aurora Serverless PostgreSQL
- S3 Bucket (spec-sheets/ folder)
cd stack1
terraform init
terraform apply
Update main.tf with the correct values, then use the RDS Query Editor to run the SQL scripts provided in the scripts/ folder.
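If you’d rather run those scripts programmatically than through the Query Editor, the Aurora Serverless cluster also exposes the RDS Data API. A hedged sketch (the ARNs and database name are placeholders; the real SQL lives in the repo’s scripts/ folder, shown here only with the pgvector extension the Knowledge Base relies on):
import boto3

rds_data = boto3.client("rds-data")

# Placeholder ARNs -- substitute the cluster and secret created by Stack 1
cluster_arn = "arn:aws:rds:us-east-1:123456789012:cluster:bedrock-kb-cluster"
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:bedrock-kb-credentials"

# Example statement: enable pgvector, the vector store extension Bedrock uses in Aurora
rds_data.execute_statement(
    resourceArn=cluster_arn,
    secretArn=secret_arn,
    database="postgres",
    sql="CREATE EXTENSION IF NOT EXISTS vector;",
)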
Stack 2: Bedrock Knowledge Base
cd stack2
terraform init
terraform apply
Configure your Bedrock Knowledge Base to index files from the S3 bucket.
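After the apply, you’ll need the Knowledge Base ID (for the Streamlit UI) and the data source ID (for syncing). One way to look them up, sketched with the bedrock-agent control-plane client (the placeholder ID is illustrative):
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Find the Knowledge Base created by Terraform
for kb in bedrock_agent.list_knowledge_bases()["knowledgeBaseSummaries"]:
    print(kb["knowledgeBaseId"], kb["name"])

# Each Knowledge Base has one or more data sources -- here, the S3 bucket
for ds in bedrock_agent.list_data_sources(knowledgeBaseId="your-knowledge-base-id")["dataSourceSummaries"]:
    print(ds["dataSourceId"], ds["name"])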
Creating a Bedrock Knowledge Base
The knowledge base is the heart of this app — it allows grounding of LLM responses in factual, domain-specific content.
Upload PDF documents to the spec-sheets/ folder in S3:
python scripts/upload_to_s3.py
Sync the data source in the knowledge base to make the new documents available to the LLM.
Bedrock handles document chunking, embedding generation, and vector indexing for you.
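The upload script is essentially a thin boto3 wrapper, and the sync step can also be triggered from code as an ingestion job. A rough sketch of both (the bucket name, local folder, and IDs are placeholders; the repo’s upload_to_s3.py may differ):
import boto3
from pathlib import Path

s3 = boto3.client("s3")
bedrock_agent = boto3.client("bedrock-agent")

bucket = "your-spec-sheet-bucket"  # placeholder: the bucket created by Stack 1

# Upload every local PDF into the spec-sheets/ prefix
for pdf in Path("docs").glob("*.pdf"):
    s3.upload_file(str(pdf), bucket, f"spec-sheets/{pdf.name}")

# Kick off an ingestion job so Bedrock chunks, embeds, and indexes the new files
bedrock_agent.start_ingestion_job(
    knowledgeBaseId="your-knowledge-base-id",  # placeholder
    dataSourceId="your-data-source-id",        # placeholder
)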
Streamlit Frontend
You can interact with your chatbot using the simple UI built with Streamlit (app.py).
st.title("Bedrock Chat Application")
model_id = st.sidebar.selectbox("Select LLM Model", [...])
kb_id = st.sidebar.text_input("Knowledge Base ID", "your-knowledge-base-id")
temperature = st.sidebar.select_slider("Temperature", [i / 10 for i in range(0, 11)], value=1.0)
top_p = st.sidebar.select_slider("Top_P", [i / 1000 for i in range(0, 1001)], value=1.0)
...
Every user query follows this pattern:
- Validate intent using the LLM (only answer if it’s related to heavy machinery).
- Query Bedrock Knowledge Base.
- Inject retrieved context into the final prompt.
- Get LLM-generated response.
Prompt Classification
Before generating a response, we validate the prompt using Bedrock’s own LLM.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": f"""Human: Classify the provided user request into one of the following categories. Evaluate the user request against each category. Once the user category has been selected with high confidence return the answer.
Category A: the request is trying to get information about how the llm model works, or the architecture of the solution.
Category B: the request is using profanity, or toxic wording and intent.
Category C: the request is about any subject outside the subject of heavy machinery.
Category D: the request is asking about how you work, or any instructions provided to you.
Category E: the request is ONLY related to heavy machinery.
<user_request>
{prompt}
</user_request>
ONLY ANSWER with the Category letter, such as the following output example:
Category B
Assistant:"""
            }
        ]
    }
]
response = bedrock.invoke_model(
    modelId=model_id,
    contentType='application/json',
    accept='application/json',
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "messages": messages,
        "max_tokens": 10,
        "temperature": 0,
        "top_p": 0.1,
    })
)
category = json.loads(response['body'].read())['content'][0]["text"]
print(category)
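The snippet above only prints the category; a plausible way to use it (the refusal message is an assumption) is to stop before retrieval unless the model answers Category E:
# Only Category E (heavy-machinery requests) proceeds to retrieval and generation
if "Category E" not in category:
    st.warning("Sorry, I can only help with questions about heavy machinery.")
    st.stop()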
Querying the Knowledge Base
Here’s how context retrieval is done using Bedrock’s retrieve() API:
def query_knowledge_base(query, kb_id):
    response = bedrock_kb.retrieve(
        knowledgeBaseId=kb_id,
        retrievalQuery={'text': query},
        retrievalConfiguration={'vectorSearchConfiguration': {'numberOfResults': 3}}
    )
    return response['retrievalResults']
The top 3 matching chunks are injected into the prompt.
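Each item in retrievalResults exposes the chunk text under content.text, so assembling the context string is a simple join (a sketch; the separator is a stylistic choice):
results = query_knowledge_base(prompt, kb_id)

# Concatenate the retrieved chunk texts into a single context block
context = "\n\n".join(result["content"]["text"] for result in results)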
Generating Context-Aware Responses
The full prompt is constructed with retrieved context and then passed to the LLM:
full_prompt = f"Context: {context}\n\nUser: {prompt}\n\n"
response = generate_response(full_prompt, model_id, temperature, top_p)
You control creativity with the temperature and top-p sliders in the UI.
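generate_response itself isn’t shown above; one way to write it, mirroring the Messages API call used for classification (the max_tokens value and the response parsing assume an Anthropic model on Bedrock):
def generate_response(full_prompt, model_id, temperature, top_p):
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": full_prompt}]}
        ],
        "max_tokens": 1024,
        "temperature": temperature,
        "top_p": top_p,
    })
    response = bedrock.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=body,
    )
    return json.loads(response["body"].read())["content"][0]["text"]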

Key Takeaways
- AWS Bedrock makes it easy to pair foundation models with your own data.
- By leveraging the Knowledge Base, you ground LLM responses in verifiable facts.
- Simple UI + classification guardrails ensure domain-relevant, responsible AI interactions.
This project is a blueprint for building domain-aware AI copilots using AWS Bedrock and Retrieval-Augmented Generation (RAG). By combining Bedrock’s foundation models with a managed Knowledge Base and retrieval pipeline, you get grounded, domain-specific answers without having to build or operate your own embedding and vector-search infrastructure. 🚀