Learn how to prompt and customize your LLM responses using Amazon Bedrock.
In this course, you’ll learn how to deploy a large language model-based application into production using serverless technology.
A serverless architecture enables you to quickly deploy your applications without needing to manage and scale the infrastructure they run on.
You’ll learn to summarize audio files by pairing an LLM with an automatic speech recognition (ASR) model. Through hands-on exercises, you’ll build an event-driven system that automatically detects incoming customer inquiries, transcribes them with ASR, and summarizes them with an LLM using Amazon Bedrock.
After course completion, you’ll know how to:

- Pair an LLM with an ASR model to summarize audio files
- Build an event-driven system that automatically transcribes and summarizes incoming customer inquiries
- Deploy an LLM-based application into production using serverless technology
You’ll work with the Amazon Titan model, but in practice Amazon Bedrock allows you to use any model you prefer.
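To make the pipeline concrete, here is a minimal sketch of the summarization step as an AWS Lambda handler calling Amazon Bedrock with a Titan text model. This is an illustrative assumption, not the course's exact code: the model ID, prompt wording, helper names, and event shape are placeholders, and in the full system an upstream step would run the ASR model (e.g. Amazon Transcribe) before this handler fires.

```python
import json

try:
    import boto3  # AWS SDK for Python; required only when actually calling AWS
except ImportError:  # lets you read and test the sketch without AWS installed
    boto3 = None

# Hypothetical default; Bedrock lets you swap in any model you prefer.
MODEL_ID = "amazon.titan-text-express-v1"


def build_summary_prompt(transcript: str) -> str:
    """Wrap a call transcript in a summarization instruction for the LLM."""
    return (
        "Summarize the following customer inquiry in a few sentences, "
        "noting the caller's issue and any requested follow-up.\n\n"
        f"Transcript:\n{transcript}"
    )


def summarize_transcript(transcript: str) -> str:
    """Send the prompt to Amazon Bedrock and return the generated summary."""
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps({
            "inputText": build_summary_prompt(transcript),
            "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.0},
        }),
    )
    result = json.loads(response["body"].read())
    return result["results"][0]["outputText"]


def handler(event, context):
    """Lambda entry point: assumes the event carries a finished ASR transcript."""
    transcript = event["transcript"]
    return {"summary": summarize_transcript(transcript)}
```

Wiring an S3 upload trigger or EventBridge rule to this handler is what gives the event-driven "automatically detects incoming inquiries" behavior described above.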
Start building serverless LLM applications with Amazon Bedrock and deploy your apps in just days.