Democratizing business intelligence: BGL’s journey with Claude Agent SDK and Amazon Bedrock AgentCore

This post is cowritten with James Luo from BGL.

Data analysis is emerging as a high-impact use case for AI agents. According to Anthropic’s 2026 State of AI Agents Report, 60% of organizations rank data analysis and report generation among their most impactful agentic AI applications, and 65% of enterprises cite it as a top priority. In practice, businesses face two common challenges:

  • Business users without technical knowledge rely on data teams for queries, which is time-consuming and creates a bottleneck.
  • Traditional text-to-SQL solutions don’t provide consistent and accurate results.

Like many other businesses, BGL faced these challenges in its data analysis and reporting use cases. BGL is a leading provider of self-managed superannuation fund (SMSF) administration solutions that help individuals manage the complex compliance and reporting of their own or a client’s retirement savings, serving over 12,700 businesses across 15 countries. BGL’s solution processes complex compliance and financial data through over 400 analytics tables, each representing a specific business domain, such as aggregated customer feedback, investment performance, compliance tracking, and financial reporting. BGL’s customers and employees need to find insights in this data, for example, “Which products had the most negative feedback last quarter?” or “Show me investment trends for high-net-worth accounts.” Working with Amazon Web Services (AWS), BGL built an AI agent using Claude Agent SDK hosted on Amazon Bedrock AgentCore. With this agent, business users can retrieve analytic insights through natural language while aligning with the security and compliance requirements of financial services, including session isolation and identity-based access controls.

In this blog post, we explore how BGL built its production-ready AI agent using Claude Agent SDK and Amazon Bedrock AgentCore. We cover three key aspects of BGL’s implementation:

  • Why building a strong data foundation is essential for reliable AI agent-based text-to-SQL solutions
  • How BGL designed its AI agent using Claude Agent SDK for code execution, context management, and domain-specific expertise
  • How BGL used AgentCore to provide stateful execution sessions for a more secure, scalable AI agent in production

Setting up strong data foundations for an AI agent-based text-to-SQL solution

When engineering teams implement an AI agent for analytics use cases, a common anti-pattern is to have the agent handle everything: understanding database schemas, transforming complex datasets, sorting out business logic for analyses, and interpreting results. An agent set up this way is likely to produce inconsistent results and fail by joining tables incorrectly, missing edge cases, or producing incorrect aggregations.

BGL used its existing mature big data solution, powered by Amazon Athena and dbt Labs, to process and transform terabytes of raw data across various business data sources. The extract, transform, and load (ETL) process builds analytic tables, and each table answers a specific category of business questions. Those tables are aggregated, denormalized datasets (with metrics and summaries) that serve as a business-ready single source of truth for business intelligence (BI) tools, AI agents, and applications. For details on how to build a serverless data transformation architecture with Athena and dbt, see How BMW Group built a serverless terabyte-scale data transformation architecture with dbt and Amazon Athena.

Because the data system handles complex data transformation, the AI agent can focus on interpreting the user’s natural language questions and translating them into SQL SELECT queries against well-structured analytic tables. When needed, the AI agent writes Python scripts to further process results and generate visualizations. This separation of concerns significantly reduces the risk of hallucination and offers several key benefits:

  • Consistency: The data system handles complex business logic in a more deterministic way: joins, aggregations, and business rules are validated by the data team ahead of time. The AI agent’s task becomes straightforward: interpret questions and generate basic SELECT queries against those tables.
  • Performance: Analytic tables are pre-aggregated and optimized with proper indexes. The agent performs basic queries rather than complex joins across raw tables, resulting in a faster response time even for large datasets.
  • Maintainability and governance: Business logic resides in the data system, not in the AI’s context window. This helps ensure that the AI agent relies on the same single source of truth as other consumers, such as BI tools. If a business rule changes, the data team updates the data transformation logic in dbt, and the AI agent automatically consumes the updated analytic tables that reflect those changes.

“Many people think the AI agent is so powerful that they can skip building the data platform; they want the agent to do everything. But you can’t achieve consistent and accurate results that way. Each layer should solve complexity at the appropriate level.”

– James Luo, BGL Head of Data and AI

How BGL builds AI agents using Claude Agent SDK with Amazon Bedrock

BGL’s development team has been using Claude Code powered by Amazon Bedrock as its AI coding assistant. This integration uses temporary, session-based access to mitigate credential exposure and integrates with existing identity providers to align with financial services compliance requirements. For details of the integration, see Guidance for Claude Code with Amazon Bedrock.

Through its daily use of Claude Code, BGL recognized that its core capabilities extend beyond coding: the ability to reason through complex problems, write and execute code, and interact with files and systems autonomously. Claude Agent SDK packages these same agentic capabilities into Python and TypeScript SDKs so that developers can build custom AI agents on top of Claude Code. For BGL, this meant it could build an analytics AI agent with the following capabilities (see the sketch after this list):

  • Code execution: The agent writes and runs Python code to process datasets returned from analytic tables and generate visualizations
  • Automatic context management: Long-running sessions stay within token limits without manual context pruning
  • Sandboxed execution: Production-grade isolation and permission controls
  • Modular memory and knowledge: A CLAUDE.md file for project context and Agent Skills for product line domain-specific expertise
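As a minimal sketch of what such an agent looks like in code, assuming the claude-agent-sdk Python package’s query interface (the system prompt, tool list, working directory, and question below are illustrative placeholders, not BGL’s actual configuration):

```python
# Minimal sketch, assuming the claude-agent-sdk Python package.
# All prompts, paths, and limits here are illustrative placeholders.
import asyncio

from claude_agent_sdk import ClaudeAgentOptions, query


async def main():
    options = ClaudeAgentOptions(
        system_prompt="You are a data analyst. Follow the conventions in CLAUDE.md.",
        allowed_tools=["Bash", "Read", "Write"],  # run code and work with files
        cwd="/workspace/analytics-agent",         # hypothetical project root
        max_turns=25,                             # bound the agentic loop
    )
    # query() streams messages as the agent reasons, runs tools, and responds.
    async for message in query(
        prompt="Which products had the most negative feedback last quarter?",
        options=options,
    ):
        print(message)


asyncio.run(main())
```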

Why code execution matters for data analytics

Analytics queries often return thousands of rows and sometimes several megabytes of data. Standard tool use, function calling, and Model Context Protocol (MCP) patterns often pass retrieved data directly into the context window, which quickly hits model context window limits. BGL implemented a different approach: the agent writes SQL to query Athena, then writes Python code to process the resulting CSV files directly in its file system. This enables the agent to handle large result sets, perform complex aggregations, and generate charts without reaching context window limits. You can learn more about code execution patterns in Code execution with MCP: Building more efficient agents.
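The following is a sketch of the kind of script the agent writes at query time: run a SELECT against Athena, download the CSV result to the local file system, and aggregate it with pandas. The bucket, database, table, and column names are hypothetical placeholders:

```python
# Hedged sketch of an agent-written analysis script; names are placeholders.
import time

import boto3
import pandas as pd

athena = boto3.client("athena")
s3 = boto3.client("s3")

RESULTS_BUCKET = "example-athena-results"  # hypothetical results bucket


def run_query(sql: str, database: str) -> str:
    """Start an Athena query, poll until it completes, and return its ID."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": f"s3://{RESULTS_BUCKET}/"},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Query {qid} finished in state {state}")
    return qid


# Athena writes results as <query-id>.csv in the output location. Download
# the file to disk instead of passing thousands of rows through the model.
qid = run_query("SELECT product, feedback_score FROM customer_feedback", "analytics_db")
s3.download_file(RESULTS_BUCKET, f"{qid}.csv", "/tmp/results.csv")

# Aggregate locally with pandas; only this small summary reaches the model.
df = pd.read_csv("/tmp/results.csv")
print(df.groupby("product")["feedback_score"].mean().sort_values().head(10))
```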

Modular knowledge architecture

To handle BGL’s diverse product lines and complex domain knowledge, the implementation uses a modular approach with two key configuration types that work together seamlessly.

CLAUDE.md (project context)

The CLAUDE.md file provides the agent with global context—the project structure, environment configuration (test, production, and so on), and critically, how to execute SQL queries. It defines which folders store intermediate results and final outputs, making sure files land in a defined file path that users can access. The following diagram shows the structure of a CLAUDE.md file:

The CLAUDE.md file opens with comment sections describing the project context, setup instructions, and how the data folder is constructed. It then explains the three files kept for each analytics table (schema.md for the schema description and data dictionary, sample_data.csv for sample data, and statistics.md for data statistics) and shows the folder hierarchy: a data directory containing mart-layer databases such as database_1 and database_2, each holding analytics tables that follow this three-file pattern.
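Reconstructed from the diagram, the layout looks roughly like the following (the database and table names are the generic placeholders used in the figure):

```
data/                              # mart layer
├── database_1/
│   ├── analytics_table_1/
│   │   ├── schema.md              # schema description and data dictionary
│   │   ├── sample_data.csv        # sample data
│   │   └── statistics.md          # data statistics
│   ├── analytics_table_2/
│   └── analytics_table_3/
├── database_2/
└── ...
```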

SKILL.md (Product domain expertise)

BGL organizes its agent domain knowledge by product line using SKILL.md configuration files. Each skill acts as a specialized data analyst for a specific product. For example, the BGL CAS 360 product has a CAS360 Data Analyst skill, which handles company and trust management with ASIC compliance alignment, while BGL’s Simple Fund 360 product has a Simple Fund 360 Data Analyst skill, which is equipped with SMSF administration and compliance-related domain expertise. A SKILL.md file defines three things (a sketch follows the list):

  • When to trigger: What types of questions should activate this skill
  • Which tables to use or map: References to the relevant analytic tables in the data folder (as shown in the preceding figure)
  • How to handle complex scenarios: Step-by-step guidance for multi-table queries or specific business questions if required
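A minimal SKILL.md sketch, assuming Anthropic’s Agent Skills convention of YAML frontmatter whose description tells the agent when to trigger the skill; the table paths, rules, and reference files below are hypothetical:

```markdown
---
name: cas360-data-analyst
description: Use for questions about company and trust management,
  ASIC compliance, and CAS 360 customer data.
---

# CAS 360 Data Analyst

## Tables
- data/cas360_db/company_compliance/: compliance tracking metrics
- data/cas360_db/customer_feedback/: aggregated customer feedback by product

## Complex scenarios
1. For feedback-by-product questions, aggregate customer_feedback by
   product and quarter before ranking.
2. See references/asic_deadlines.md for lodgement deadline rules.
```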

By using SKILL.md files, the agent can dynamically discover and load the right skill to gain domain-specific expertise for each task. This works as follows:

  • Unified context: When a skill is triggered, Claude Agent SDK dynamically merges its specialized instructions with the global CLAUDE.md file into a single prompt. This allows the agent to simultaneously apply project-wide standards (for example, always save to disk) while using domain-specific knowledge (such as mapping user questions to a group of tables).
  • Progressive discovery: Not all skills need to be loaded into the context window at once. The agent first reads the query to determine which skill needs to be triggered. It loads the skill body and references to understand which analytic table’s metadata is required. It then further explores corresponding data folders. This keeps context usage efficient while providing comprehensive coverage.
  • Iterative refinement: If the AI agent can’t handle a question because it lacks the relevant domain knowledge, the team gathers feedback from users, identifies the gaps, and adds new knowledge to the existing skills through a human-in-the-loop process, so skills are updated and refined iteratively.

The agent runs inside a virtual machine that provides two scripting environments, Bash for shell commands and Python for scripts, plus a file system in which skills are organized per product line under skills/<skillname>360/ (for example, skills/sf360/, skills/cas360/, and skills/smartdocs360/). Each skill directory contains a SKILL.md definition file and a references subdirectory, and additional skill directories follow the same pattern.

As shown in the preceding figure, agent skills are organized per product line. Each product folder contains a SKILL.md definition file and a references directory with additional domain knowledge and supporting materials that the agent loads on demand.

For details about Anthropic Agent Skills, see the Anthropic blog post Equipping agents for the real world with Agent Skills.

High-level solution architecture

To deliver a more secure and scalable text-to-SQL experience, BGL uses Amazon Bedrock AgentCore to host Claude Agent SDK while keeping data transformation in the existing big data solution.

The high-level architecture: users ask questions in Slack, which invokes the Claude agent hosted on the AgentCore runtime; the agent calls Amazon Bedrock for model inference, queries Amazon Athena, and retrieves results from Amazon S3, while dbt maintains a bidirectional connection with Athena for data transformation workflows. (dbt and the dbt logo are trademarks of dbt Labs, Inc.; this diagram does not imply affiliation with or endorsement by dbt Labs.)

The preceding figure illustrates a high-level architecture and workflow. The analytic tables are pre-built daily using Athena and dbt, and serve as the single source of truth. A typical user interaction flows through the following stages:

  1. User request: A user asks a business question using Slack (for example, “Which products had the most negative feedback last quarter?”).
  2. Schema discovery and SQL generation: The agent identifies relevant tables using skills and writes SQL queries.
  3. SQL security validation: To help prevent unintended data modification, a security layer allows only SELECT queries and blocks DELETE, UPDATE, and DROP operations (see the sketch after this list).
  4. Query execution: Athena executes the query and stores results into Amazon Simple Storage Service (Amazon S3).
  5. Result download: The agent downloads the resulting CSV file to the file system on AgentCore, bypassing the context window entirely to avoid token limits.
  6. Analysis and visualization: The agent writes Python code to analyze the CSV file and generate visualizations or refined datasets depending on the business question.
  7. Response delivery: Final insights and visualizations are formatted and returned to the user in Slack.
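A minimal sketch of such a SELECT-only guard as described in step 3, using a simplified keyword check; a production validator would more likely use a real SQL parser:

```python
# Simplified sketch of the SELECT-only validation described in step 3.
# A production guard would parse the SQL properly instead of matching keywords.
import re

BLOCKED = re.compile(
    r"\b(DELETE|UPDATE|DROP|INSERT|ALTER|CREATE|TRUNCATE|MERGE|GRANT)\b",
    re.IGNORECASE,
)


def validate_sql(sql: str) -> str:
    """Raise ValueError unless sql is a single read-only SELECT statement."""
    statement = sql.strip().rstrip(";")
    if ";" in statement:
        raise ValueError("Multiple statements are not allowed")
    if not statement.upper().startswith(("SELECT", "WITH")):
        raise ValueError("Only SELECT queries are allowed")
    if BLOCKED.search(statement):
        raise ValueError("Query contains a blocked keyword")
    return statement


validate_sql("SELECT product FROM customer_feedback LIMIT 10")  # passes
try:
    validate_sql("DROP TABLE customer_feedback")
except ValueError as err:
    print(f"Blocked: {err}")
```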

Why use Amazon Bedrock AgentCore to host Claude Agent SDK

Deploying an AI agent that executes arbitrary Python code requires significant infrastructure consideration. For instance, you need isolation to help ensure that there’s no cross-session access to data or credentials. Amazon Bedrock AgentCore provides fully managed, stateful execution sessions: each session has its own isolated microVM with a separate CPU, memory, and file system. When a session ends, the microVM terminates fully and its memory is sanitized, helping to ensure no remnants persist for future sessions. BGL found this service especially valuable for several reasons:

  • Stateful execution session: AgentCore maintains session state for up to 8 hours. Users can have ongoing conversations with the agent, referring back to previous queries without losing context.
  • Framework flexibility: It’s framework-agnostic. It supports deploying AI agents built with frameworks such as Strands Agents SDK, Claude Agent SDK, LangGraph, and CrewAI with a few lines of code.
  • Aligned with security best practices: It provides session isolation, VPC support, and AWS Identity and Access Management (IAM) or OAuth-based identity to facilitate governed, compliance-aligned agent operations at scale.
  • System integration: The broader AgentCore ecosystem is a forward-looking consideration, as James Luo explains:

“There’s Gateway, Memory, Browser tools, a whole ecosystem built around it. I know AWS is investing in this direction, so everything we build now can integrate with these services in the future.”

– James Luo, BGL Head of Data and AI

BGL is already planning to integrate AgentCore Memory for storing user preferences and query patterns.
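For deployment, an agent is wrapped behind a simple runtime entrypoint. The following is a minimal sketch assuming the bedrock-agentcore Python package’s documented entrypoint pattern; the handler body is an illustrative placeholder that would invoke the Claude Agent SDK as shown earlier:

```python
# Minimal sketch, assuming the bedrock-agentcore Python package's
# entrypoint pattern. The handler logic is an illustrative placeholder.
from bedrock_agentcore.runtime import BedrockAgentCoreApp

app = BedrockAgentCoreApp()


@app.entrypoint
def invoke(payload):
    """Handle one AgentCore invocation within an isolated session."""
    question = payload.get("prompt", "")
    # A real handler would run the Claude Agent SDK query loop here and
    # return the agent's final answer plus any generated artifacts.
    return {"result": f"Received question: {question}"}


if __name__ == "__main__":
    app.run()
```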

Results and impact

For BGL’s more than 200 employees, this represents a significant shift in how they extract business intelligence. Product managers can now validate hypotheses instantly without waiting for the data team. Compliance teams can spot risk trends without learning SQL. Customer success managers can pull account-specific analytics in real time during client calls. This democratization of data access helps transform analytics from a bottleneck into a competitive advantage, enabling faster decision-making across the organization while freeing the data team to focus on strategic initiatives rather than one-off query requests.

Conclusion and key takeaways

BGL’s journey demonstrates how combining a strong data foundation with agentic AI can democratize business intelligence. By using Amazon Bedrock AgentCore and the Claude Agent SDK, BGL built a more secure and scalable AI agent that empowers employees to tap into their data to answer business questions. Here are some key takeaways:

  • Invest in a strong data foundation: Accuracy starts with the data. By using the data system and data pipeline to handle complex business logic (joins and aggregations), the agent can focus on simple, reliable queries.
  • Organize knowledge by domain: Use Agent Skills to encapsulate domain-specific expertise (for example, Tax Law or Investment Performance). This keeps the context window clean and manageable. Furthermore, establish a feedback loop: continuously monitor user queries to identify gaps and iteratively update these skills.
  • Use code execution for data processing: Avoid processing large datasets through the large language model (LLM) context window. Instead, instruct the agent to write and execute code to filter, aggregate, and visualize data.
  • Choose stateful, session-based infrastructure to host the agent: Conversational analytics requires persistent context. Amazon Bedrock AgentCore simplifies this by providing built-in state persistence (up to 8-hour sessions), alleviating the need to build custom state handling layers on top of stateless compute.

If you’re ready to build similar capabilities for your organization, get started by exploring the Claude Agent SDK and a short demo of Deploying Claude Agent SDK on Amazon Bedrock AgentCore Runtime. If you have a similar use case or need support designing your architecture, reach out to your AWS account team.

About the authors

Dustin Liu is a solutions architect at AWS, focused on supporting financial services and insurance (FSI) startups and SaaS companies. He has a diverse background spanning data engineering, data science, and machine learning, and he is passionate about leveraging AI/ML to drive innovation and business transformation.

Melanie Li, PhD, is a Senior Generative AI Specialist Solutions Architect at AWS based in Sydney, Australia, where her focus is on working with customers to build solutions leveraging state-of-the-art AI and machine learning tools. She has been actively involved in multiple Generative AI initiatives across APJ, harnessing the power of Large Language Models (LLMs). Prior to joining AWS, Dr. Li held data science roles in the financial and retail industries.

James Luo is Head of Data & AI at BGL Corporate Solutions, a world-leading provider of compliance software for accountants and financial professionals. Since joining BGL in 2008, James has progressed from developer to architect to his current leadership role, spearheading the Data Platform and Roni AI Agent initiatives. In 2015, he formed BGL’s BigData team, implementing the first deep learning model in the SMSF industry (2017), which now processes 200+ million transactions annually. He has spoken at Big Data & AI World and AWS Summit, and BGL’s AI work has been featured in multiple AWS case studies.

Dr. James Bland is a Technology Leader with 30+ years driving AI transformation at scale. He holds a PhD in Computer Science with a machine learning focus and leads strategic AI initiatives at AWS, enabling enterprises to adopt AI-powered development lifecycles and agentic capabilities. Dr. Bland spearheaded the AI-SDLC initiative, authored comprehensive guides on Generative AI in the SDLC, and helps enterprises architect production-scale AI solutions that fundamentally transform how organizations operate in an AI-first world.

Frank Tan is a Senior Solutions Architect at AWS with a special interest in Applied AI. Coming from a product development background, he is driven to bridge technology and business success.
