Build an end-to-end RAG solution using Knowledge Bases for Amazon Bedrock and AWS CloudFormation

Retrieval Augmented Generation (RAG) is a state-of-the-art approach to building question answering systems that combines the strengths of retrieval and foundation models (FMs). RAG models first retrieve relevant information from a large corpus of text and then use an FM to synthesize an answer based on the retrieved information. An end-to-end RAG solution involves several …
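
The excerpt above describes the core retrieve-then-generate flow. As a minimal sketch of that flow, the call below uses the Bedrock Agent Runtime RetrieveAndGenerate API against a knowledge base; the knowledge base ID, Region, question, and model ARN are placeholders for your own resources, not values from the post.

```python
import boto3

# Retrieve-then-generate sketch against a Knowledge Base for Amazon Bedrock.
# Knowledge base ID, Region, question, and model ARN are placeholders.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What does our returns policy say about damaged items?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "YOUR_KB_ID",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

print(response["output"]["text"])  # generated answer grounded in retrieved passages
for citation in response.get("citations", []):
    for ref in citation["retrievedReferences"]:
        print(ref["content"]["text"][:120])  # snippet of each supporting passage
```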

Faster LLMs with speculative decoding and AWS Inferentia2

In recent years, we have seen a significant increase in the size of large language models (LLMs) used to solve natural language processing (NLP) tasks such as question answering and text summarization. Larger models with more parameters, on the order of hundreds of billions at the time of writing, tend to produce better …
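
The full post pairs this with AWS Inferentia2; independent of the hardware, the idea behind speculative decoding is that a small draft model proposes several tokens and the large target model verifies them. Below is a minimal, model-agnostic sketch of that loop, where `draft_model` and `target_model` are hypothetical greedy next-token callables rather than APIs from the post.

```python
def speculative_decode(target_model, draft_model, prompt_ids, max_new_tokens=128, k=4):
    """Greedy speculative-decoding sketch. `draft_model` and `target_model` are
    hypothetical callables that return the greedy next token for a token list."""
    tokens = list(prompt_ids)
    prompt_len = len(tokens)
    while len(tokens) - prompt_len < max_new_tokens:
        # 1. The cheap draft model proposes k tokens autoregressively.
        draft = []
        for _ in range(k):
            draft.append(draft_model(tokens + draft))
        # 2. The target model checks each proposed position (in practice this is
        #    a single batched forward pass over the whole draft block).
        verified = [target_model(tokens + draft[:i]) for i in range(k)]
        # 3. Keep the longest agreeing prefix, then one token from the target,
        #    so at least one token of progress is guaranteed per iteration.
        accepted = []
        for proposed, checked in zip(draft, verified):
            if proposed == checked:
                accepted.append(proposed)
            else:
                accepted.append(checked)
                break
        else:
            accepted.append(target_model(tokens + accepted))
        tokens.extend(accepted)
    return tokens
```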

Catalog, query, and search audio programs with Amazon Transcribe and Knowledge Bases for Amazon Bedrock

Information retrieval systems have powered the information age through their ability to crawl and sift through massive amounts of data and quickly return accurate and relevant results. These systems, such as search engines and databases, typically work by indexing on keywords and fields contained in data files. However, much of our data in the digital …
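
The post's pipeline first converts audio into text before anything can be indexed or retrieved. As a hedged sketch of that first step, the snippet below starts an Amazon Transcribe job for a single file and waits for it to finish; the bucket names, object key, and job name are placeholders.

```python
import time
import boto3

# Transcribe one audio file so its text can later be indexed into a knowledge base.
# Bucket names, object key, and job name are placeholders.
transcribe = boto3.client("transcribe", region_name="us-east-1")

transcribe.start_transcription_job(
    TranscriptionJobName="podcast-episode-001",
    Media={"MediaFileUri": "s3://my-audio-bucket/episode-001.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
    OutputBucketName="my-transcripts-bucket",  # transcript JSON is written here
)

# Simplified polling loop; production code would add backoff and error handling.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="podcast-episode-001")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)
print(status)
```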

Cepsa Química improves the efficiency and accuracy of product stewardship using Amazon Bedrock

This is a guest post co-written with Vicente Cruz Mínguez, Head of Data and Advanced Analytics at Cepsa Química, and Marcos Fernández Díaz, Senior Data Scientist at Keepler. Generative artificial intelligence (AI) is rapidly emerging as a transformative force, poised to disrupt and reshape businesses of all sizes and across industries. Generative AI empowers organizations …

GraphStorm 0.3: Scalable, multi-task learning on graphs with user-friendly APIs

GraphStorm is a low-code enterprise graph machine learning (GML) framework to build, train, and deploy graph ML solutions on complex enterprise-scale graphs in days instead of months. With GraphStorm, you can build solutions that directly take into account the structure of relationships or interactions between billions of entities, which are inherently embedded in most real-world …

Few-shot prompt engineering and fine-tuning for LLMs in Amazon Bedrock

This blog is part of the series, Generative AI and AI/ML in Capital Markets and Financial Services. Company earnings calls are crucial events that provide transparency into a company’s financial health and prospects. Earnings reports detail a firm’s financials over a specific period, including revenue, net income, earnings per share, balance sheet, and cash flow …
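
As a minimal illustration of the few-shot prompting half of the post's title, the sketch below builds a prompt with two made-up example pairs and sends it through the Bedrock Converse API; the model ID, example excerpts, and figures are illustrative placeholders, not content from the post.

```python
import boto3

# Few-shot prompt sketch: two illustrative input/output pairs steer the model's
# output format before the real earnings-call excerpt is appended.
few_shot_prompt = """Summarize the key financial metric in one sentence.

Excerpt: "Revenue grew 12% year over year to $4.2B, driven by cloud services."
Summary: Revenue rose 12% YoY to $4.2B on cloud strength.

Excerpt: "Net income declined to $310M as input costs rose 8%."
Summary: Net income fell to $310M amid an 8% rise in input costs.

Excerpt: "Earnings per share came in at $1.45, beating guidance of $1.30."
Summary:"""

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": few_shot_prompt}]}],
    inferenceConfig={"maxTokens": 100, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```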

Streamline insurance underwriting with generative AI using Amazon Bedrock – Part 1

Underwriting is a fundamental function within the insurance industry, serving as the foundation for risk assessment and management. Underwriters are responsible for evaluating insurance applications, determining the level of risk associated with each applicant, and making decisions on whether to accept or reject the application based on the insurer’s guidelines and risk appetite. In this …

Import a fine-tuned Meta Llama 3 model for SQL query generation on Amazon Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API. Amazon Bedrock also provides a broad set of capabilities needed to build generative AI applications with security, …
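
Once a fine-tuned model has been imported, the post's premise is that it can be called through the same single API as the built-in FMs. The sketch below invokes an imported model by its ARN; the ARN and account ID are placeholders, and the request body follows a Llama-style prompt/max_gen_len convention that may differ for your imported model.

```python
import json
import boto3

# Invoke an imported (fine-tuned) model through the standard Bedrock runtime API.
# The imported-model ARN is a placeholder; the body schema is Llama-style and may
# differ for other imported models.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

imported_model_arn = "arn:aws:bedrock:us-east-1:123456789012:imported-model/EXAMPLE"  # placeholder

body = {
    "prompt": "Generate a SQL query that returns the ten most recent orders per customer.",
    "max_gen_len": 256,
    "temperature": 0.1,
}

response = bedrock.invoke_model(
    modelId=imported_model_arn,
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)
print(json.loads(response["body"].read()))
```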

Unlocking Japanese LLMs with AWS Trainium: Innovators Showcase from the AWS LLM Development Support Program

Amazon Web Services (AWS) is committed to supporting the development of cutting-edge generative artificial intelligence (AI) technologies by companies and organizations across the globe. As part of this commitment, AWS Japan announced the AWS LLM Development Support Program (LLM Program), through which we’ve had the privilege of working alongside some of Japan’s most innovative teams. …

Use the ApplyGuardrail API with long-context inputs and streaming outputs in Amazon Bedrock

As generative artificial intelligence (AI) applications become more prevalent, maintaining responsible AI principles becomes essential. Without proper safeguards, large language models (LLMs) can potentially generate harmful, biased, or inappropriate content, posing risks to individuals and organizations. Applying guardrails helps mitigate these risks by enforcing policies and guidelines that align with ethical principles and legal requirements. …
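
A minimal sketch of the idea in the excerpt, assuming a guardrail already exists: the ApplyGuardrail API evaluates text independently of any model invocation, so a long input can be split into chunks and each chunk checked before it ever reaches the LLM. The guardrail ID, version, and chunk size below are placeholders.

```python
import boto3

# Apply an existing guardrail to a long input by checking it chunk by chunk.
# Guardrail ID, version, and chunk size are placeholders.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

GUARDRAIL_ID = "gr-EXAMPLE"  # placeholder
GUARDRAIL_VERSION = "1"      # placeholder

def input_passes_guardrail(long_text: str, chunk_chars: int = 4000) -> bool:
    """Return False as soon as the guardrail intervenes on any chunk."""
    for start in range(0, len(long_text), chunk_chars):
        chunk = long_text[start:start + chunk_chars]
        result = bedrock.apply_guardrail(
            guardrailIdentifier=GUARDRAIL_ID,
            guardrailVersion=GUARDRAIL_VERSION,
            source="INPUT",                      # use "OUTPUT" for model responses
            content=[{"text": {"text": chunk}}],
        )
        if result["action"] == "GUARDRAIL_INTERVENED":
            return False
    return True
```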
