AWS and DXC collaborate to deliver customizable, near real-time voice-to-voice translation capabilities for Amazon Connect

Providing effective multilingual customer support for global businesses presents significant operational challenges. Through a collaboration between AWS and DXC Technology, we’ve developed a scalable voice-to-voice (V2V) translation prototype that transforms how contact centers handle multilingual customer interactions. In this post, we discuss how AWS and DXC used Amazon Connect and other AWS AI services to deliver …

Orchestrate an intelligent document processing workflow using tools in Amazon Bedrock

Generative AI is revolutionizing enterprise automation, enabling AI systems to understand context, make decisions, and act independently. Foundation models (FMs) are becoming powerful partners in solving sophisticated business problems. At AWS, we’re using the power of models in Amazon Bedrock to drive automation of …

Reducing hallucinations in LLM agents with a verified semantic cache using Amazon Bedrock Knowledge Bases

Large language models (LLMs) excel at generating human-like text but face a critical challenge: hallucination, producing responses that sound convincing but are factually incorrect. Although these models are trained on vast amounts of generic data, they often lack the organization-specific context and up-to-date information needed for accurate responses in business settings. Retrieval Augmented Generation (RAG) techniques …
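As a rough illustration of the idea (not the implementation from the post: the bag-of-words embedding, similarity threshold, and cache entries below are toy stand-ins for a real embedding model and a knowledge base such as Amazon Bedrock Knowledge Bases), a verified semantic cache can be sketched in a few lines of Python:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words token counts. A production system
    # would use a sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VerifiedSemanticCache:
    """Return a human-verified answer when a new question is semantically
    close to one already answered; otherwise defer to the LLM fallback."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[Counter, str]] = []

    def add(self, question: str, verified_answer: str) -> None:
        self.entries.append((embed(question), verified_answer))

    def lookup(self, question: str, llm_fallback):
        q = embed(question)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best is not None and cosine(q, best[0]) >= self.threshold:
            return best[1]              # verified answer, no hallucination risk
        return llm_fallback(question)   # unverified model output

cache = VerifiedSemanticCache()
cache.add("what is the refund window", "Refunds are accepted within 30 days.")
print(cache.lookup("what is the refund window please",
                   lambda q: "LLM answer (unverified)"))
```

Near-duplicate questions hit the verified cache and bypass the model entirely; only genuinely new questions reach the hallucination-prone fallback path.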

LLM continuous self-instruct fine-tuning framework powered by a compound AI system on Amazon SageMaker

Fine-tuning a pre-trained large language model (LLM) allows users to customize the model to perform better on domain-specific tasks or align more closely with human preferences. Keeping the fine-tuned model accurate and effective in changing environments is a continuous process: it must adapt to data distribution shift (concept drift) and avoid performance degradation …
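To make the concept-drift trigger concrete (a minimal sketch, not the framework from the post: the window sizes, statistic, and threshold are illustrative), one simple approach is to compare a summary statistic of recent inputs against a reference window from training time:

```python
from statistics import mean, stdev

def drift_score(reference: list[float], recent: list[float]) -> float:
    # Standardized shift of the recent mean relative to the
    # reference distribution (a crude z-score style check).
    mu, sigma = mean(reference), stdev(reference)
    return abs(mean(recent) - mu) / sigma if sigma else float("inf")

def needs_refinetuning(reference, recent, threshold=2.0) -> bool:
    # Flag the model for another fine-tuning round when recent
    # inputs drift beyond the threshold.
    return drift_score(reference, recent) > threshold

reference = [0.9, 1.1, 1.0, 0.95, 1.05]   # feature statistic at training time
stable    = [1.0, 1.02, 0.98]             # similar distribution
drifted   = [2.4, 2.6, 2.5]               # clearly shifted

print(needs_refinetuning(reference, stable))   # False
print(needs_refinetuning(reference, drifted))  # True
```

In a continuous fine-tuning loop, a positive drift signal like this is what would kick off the next self-instruct data-generation and fine-tuning round.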

Maximize your file server data’s potential by using Amazon Q Business on Amazon FSx for Windows

Organizations need efficient ways to access and analyze their enterprise data. Amazon Q Business addresses this need as a fully managed generative AI-powered assistant that helps you find information, generate content, and complete tasks using enterprise data. It provides immediate, relevant information while streamlining tasks and accelerating problem-solving. Amazon FSx for Windows File Server is …

Generate synthetic counterparty (CR) risk data with generative AI using Amazon Bedrock LLMs and RAG

Data is the lifeblood of modern applications, driving everything from application testing to machine learning (ML) model training and evaluation. As data demands continue to surge, the emergence of generative AI models presents an innovative solution. These large language models (LLMs), trained on expansive data corpora, possess the remarkable capability to generate new content across …

Turbocharging premium audit capabilities with the power of generative AI: Verisk’s journey toward a sophisticated conversational chat platform to enhance customer support

This post is co-written with Sajin Jacob, Jerry Chen, Siddarth Mohanram, Luis Barbier, Kristen Chenowith, and Michelle Stahl from Verisk. Verisk (Nasdaq: VRSK) is a leading data analytics and technology partner for the global insurance industry. Through advanced analytics, software, research, and industry expertise across more than 20 countries, Verisk helps build resilience for individuals, …

Build verifiable explainability into financial services workflows with Automated Reasoning checks for Amazon Bedrock Guardrails

Foundation models (FMs) and generative AI are transforming how financial service institutions (FSIs) operate their core business functions. AWS FSI customers, including Nasdaq, State Bank of India, and Bridgewater, have used FMs to reimagine their business operations and deliver improved outcomes. FMs are probabilistic in nature and produce a range of outcomes. Though these models …

Best practices for Amazon SageMaker HyperPod task governance

At AWS re:Invent 2024, we launched a new innovation in Amazon SageMaker HyperPod on Amazon Elastic Kubernetes Service (Amazon EKS) that enables you to run generative AI development tasks on shared accelerated compute resources efficiently and reduce costs by up to 40%. Administrators can use SageMaker HyperPod task governance to govern allocation of accelerated compute …
