Drive organizational growth with Amazon Lex multi-developer CI/CD pipeline

As your conversational AI initiatives evolve, developing Amazon Lex assistants becomes increasingly complex. When multiple developers work on the same shared Lex instance, the result is configuration conflicts, overwritten changes, and slower iteration cycles. Scaling Amazon Lex development requires isolated environments, version control, and automated deployment pipelines. By adopting well-structured continuous integration and continuous delivery (CI/CD) practices, …

Building a custom model provider for Strands Agents with LLMs hosted on SageMaker AI endpoints

Organizations increasingly deploy custom large language models (LLMs) on Amazon SageMaker AI real-time endpoints using their preferred serving frameworks—such as SGLang, vLLM, or TorchServe—to gain greater control over their deployments, optimize costs, and align with compliance requirements. However, this flexibility introduces a critical technical challenge: response format incompatibility with Strands agents. While these custom …

Embed Amazon Quick Suite chat agents in enterprise applications

Organizations face two critical challenges with conversational AI. First, users need answers where they work—in their CRM, support console, or analytics portal—not in separate tools. Second, implementing secure embedded chat in their applications can require weeks of development to build authentication, token validation, domain security, and global distribution infrastructure. Amazon Quick Suite embedded …

Unlock powerful call center analytics with Amazon Nova foundation models

Call center analytics play a crucial role in improving customer experience and operational efficiency. With foundation models (FMs), you can improve the quality and efficiency of call center operations and analytics. Organizations can use generative AI to assist human customer support agents and managers of contact center teams, so they can gain insights that are …

How Ricoh built a scalable intelligent document processing solution on AWS

This post is co-written by Jeremy Jacobson and Rado Fulek from Ricoh. This post demonstrates how enterprises can overcome document processing scaling limits by combining generative AI, serverless architecture, and standardized frameworks. Ricoh engineered a repeatable, reusable framework using the AWS GenAI Intelligent Document Processing (IDP) Accelerator. This framework reduced customer onboarding time from weeks …
