Revolutionizing knowledge management: VW’s AI prototype journey with AWS

Today, we’re excited to share the journey of Volkswagen (VW)—an innovator in the automotive industry and Europe’s largest car maker—to enhance knowledge management by using generative AI, Amazon Bedrock, and Amazon Kendra to devise a solution based on Retrieval Augmented Generation (RAG) that makes internal information more easily accessible to its users. This solution efficiently …

Fine-tune large language models with Amazon SageMaker Autopilot

Fine-tuning foundation models (FMs) is the process of adapting a pre-trained FM to task-specific data by updating its parameters. The model can then develop a deeper understanding of that particular domain and produce more accurate and relevant outputs. In this post, we show how to use an Amazon SageMaker Autopilot training job with the AutoMLV2 …
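
The fine-tuning idea described above—start from pre-trained parameters and nudge them with gradient steps on task-specific data—can be sketched minimally. This is an illustrative toy (a one-parameter linear model with invented data), not SageMaker Autopilot's implementation:

```python
# Minimal sketch of fine-tuning: start from "pre-trained" weights and
# update them with gradient steps on a small task-specific dataset.
# Illustrative only; real FM fine-tuning updates billions of parameters.

def fine_tune(w, data, lr=0.1, epochs=50):
    """Adapt weight w to task data [(x, y), ...] by minimizing squared error."""
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x   # d/dw of (w*x - y)^2
            w -= lr * grad              # parameter update step
    return w

pretrained_w = 1.0                      # stands in for pre-trained parameters
task_data = [(1.0, 3.0), (2.0, 6.0)]    # task-specific examples: y = 3x
tuned_w = fine_tune(pretrained_w, task_data)
print(round(tuned_w, 2))                # converges toward 3.0
```

The same loop shape underlies full fine-tuning: the starting point encodes prior knowledge, and the task data only shifts the parameters as far as its gradients demand.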

Unify structured data in Amazon Aurora and unstructured data in Amazon S3 for insights using Amazon Q

In today’s data-intensive business landscape, organizations face the challenge of extracting valuable insights from diverse data sources scattered across their infrastructure. Whether it’s structured data in databases or unstructured content in document repositories, enterprises often struggle to efficiently query and use this wealth of information. In this post, we explore how you can use Amazon …

Automate Q&A email responses with Amazon Bedrock Knowledge Bases

Email remains a vital communication channel for business customers, especially in HR, where responding to inquiries can consume staff resources and cause delays. The extensive knowledge required can make it overwhelming to respond to email inquiries manually. Automation is poised to play a crucial role in this domain. Using generative AI allows …

Streamline RAG applications with intelligent metadata filtering using Amazon Bedrock

Retrieval Augmented Generation (RAG) has become a crucial technique for improving the accuracy and relevance of AI-generated responses. The effectiveness of RAG heavily depends on the quality of context provided to the large language model (LLM), which is typically retrieved from vector stores based on user queries. The relevance of this context directly impacts the …
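
The retrieve-then-filter idea behind metadata filtering can be sketched with a toy in-memory vector store. Everything here (`store`, `retrieve`, the metadata keys) is invented for illustration; Amazon Bedrock Knowledge Bases apply the same principle—restrict candidates by metadata, then rank by embedding similarity—against a managed vector store:

```python
# Toy metadata-filtered retrieval: filter chunks by metadata first,
# then rank the survivors by cosine similarity to the query embedding.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(store, query_vec, metadata_filter, k=2):
    """Keep only chunks whose metadata matches, then rank by similarity."""
    candidates = [c for c in store
                  if all(c["meta"].get(key) == val
                         for key, val in metadata_filter.items())]
    candidates.sort(key=lambda c: cosine(c["vec"], query_vec), reverse=True)
    return [c["text"] for c in candidates[:k]]

store = [
    {"text": "2023 HR leave policy",   "vec": [1.0, 0.1], "meta": {"dept": "HR", "year": 2023}},
    {"text": "2024 HR leave policy",   "vec": [0.9, 0.2], "meta": {"dept": "HR", "year": 2024}},
    {"text": "2024 IT security rules", "vec": [0.8, 0.6], "meta": {"dept": "IT", "year": 2024}},
]
print(retrieve(store, [1.0, 0.0], {"dept": "HR", "year": 2024}, k=1))
# → ['2024 HR leave policy']
```

Filtering before ranking is what keeps irrelevant-but-similar chunks (here, the 2023 policy) out of the context passed to the LLM.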

Embedding secure generative AI in mission-critical public safety applications

This post is co-written with Lawrence Zorio III from Mark43. Public safety organizations face the challenge of accessing and analyzing vast amounts of data quickly while maintaining strict security protocols. First responders need immediate access to relevant data across multiple systems, while command staff require rapid insights for operational decisions. Mission-critical public safety applications require …

How FP8 boosts LLM training by 18% on Amazon SageMaker P5 instances

Large language models (LLMs) are AI systems trained on vast amounts of text data, enabling them to understand, generate, and reason with natural language in highly capable and flexible ways. LLM training has seen remarkable advances in recent years, with organizations pushing the boundaries of what’s possible in terms of model size, performance, and efficiency. …
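
The efficiency gain behind FP8 comes from trading precision for speed and memory: an FP8 E4M3 value keeps only 3 mantissa bits. A minimal sketch of that coarser rounding (illustrative only; it mimics the mantissa precision and ignores E4M3's exponent range and special values):

```python
# Simulate the precision loss of FP8 E4M3 by rounding a float's
# mantissa to 3 stored bits (plus the implicit leading bit).
import math

def quantize_fp8_e4m3(x):
    """Round x's mantissa to E4M3 precision. Illustrative: exponent
    range limits and NaN encoding of real E4M3 are not modeled."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)        # x = m * 2**e, with 0.5 <= |m| < 1
    m = round(m * 16) / 16      # keep 4 significant bits of mantissa
    return math.ldexp(m, e)

print(quantize_fp8_e4m3(0.3))   # 0.3125: nearest value with a 3-bit mantissa
print(quantize_fp8_e4m3(0.5))   # 0.5: exactly representable, unchanged
```

Training in FP8 tolerates this roughly 6% worst-case relative rounding error on individual values because gradient statistics, not individual bits, drive learning; that tolerance is what the 18% training speedup on P5 instances exploits.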
