Blog

Generate dashboards from natural language prompts in Amazon Quick

Building meaningful dashboards demands hours of manual setup, even for experienced BI professionals. Amazon Quick now generates complete multi-sheet dashboards from natural language prompts, taking you from one or more datasets to a production-ready analysis in minutes. Data analysts building recurring operations reports, program managers preparing a leadership review, or engineers exploring a new dataset can …

From data lake to AI-ready analytics: Introducing new data source with S3 Tables in Amazon Quick

Organizations today are increasingly looking to combine analytics and AI to accelerate insights and decision-making. Amazon Quick, a unified agentic AI-powered analytics and decision intelligence service, brings together data visualization, natural language interaction, and agent-driven automation in a single, governed experience. With this, business users can explore data, generate insights, and take action without requiring …

Introducing Dataset Q&A: Expanding natural language querying for structured datasets in Amazon Quick

Every BI team knows this bottleneck: a business user has a question that falls outside existing dashboards, so they file a ticket. An analyst writes the query, validates the results, and delivers them—hours or days later. Multiply that by hundreds of ad-hoc requests per month, and the backlog becomes the single biggest constraint on data …

Capacity-aware inference: Automatic instance fallback for SageMaker AI endpoints

As organizations scale generative AI workloads in production, securing reliable GPU compute has become one of the most persistent operational challenges. Large language models (LLMs) and multimodal architectures demand specific instance types, and when that capacity isn’t available, endpoints fail before they serve a single request. Building a real-time inference endpoint on Amazon SageMaker AI …

AWS Transform now automates BI migration to Amazon Quick in days

Migrating to Amazon Quick doesn’t have to mean starting from scratch. Your dashboards encode hard-won domain knowledge: calculated fields your analysts perfected, layouts your executives rely on every Monday morning, security rules tuned to your org chart. You want AI-powered insights and serverless scale, but you’re staring at hundreds of dashboards and a migration estimate …

Reinforcement fine-tuning with LLM-as-a-judge

Large language models (LLMs) now drive the most advanced conversational agents, creative tools, and decision-support systems. However, their raw output often contains inaccuracies, policy misalignments, or unhelpful phrasing—issues that undermine trust and limit real-world utility. Reinforcement fine-tuning (RFT) has emerged as the preferred method to align these models efficiently, using automated reward signals to replace …

AWS Generative AI Model Agility Solution: A comprehensive guide to migrating LLMs for generative AI production

Maintaining model agility is crucial for organizations to adapt to technological advancements and optimize their artificial intelligence (AI) solutions. Whether transitioning between different large language model (LLM) families or upgrading to newer versions within the same family, a structured migration approach and a standardized process are essential for facilitating continuous performance improvement while minimizing operational …

Sun Finance automates ID extraction and fraud detection with generative AI on AWS

This post was co-authored with Krišjānis Kočāns, Kaspars Magaznieks, and Sergei Kiriasov from Sun Finance Group. If you process identity documents at scale—loan applications, account openings, compliance checks—you’ve likely hit the same wall: traditional optical character recognition (OCR) gets you partway there, but extraction errors still push a large share of applications into manual review queues. …

Unleashing Agentic AI Analytics on Amazon SageMaker with Amazon Athena and Amazon Quick

Modern enterprises face mounting challenges in extracting actionable insights from vast data lakes and lakehouses spanning petabytes of structured and unstructured data. Traditional analytics require specialized technical expertise in SQL, data modeling, and business intelligence tools, creating bottlenecks that slow decision-making across retail, financial services, healthcare, travel and hospitality, manufacturing, and many more industries. This …

Configuring Amazon Bedrock AgentCore Gateway for secure access to private resources

AI agents in production environments often need to reach internal APIs, databases, and private resources that sit behind Amazon Virtual Private Cloud (Amazon VPC) boundaries. Managing private connectivity for each agent-to-tool path adds operational overhead and slows deployment. Amazon Bedrock AgentCore VPC connectivity is designed to deploy AI agents and Model Context Protocol (MCP) servers …