Serving LLMs using vLLM and Amazon EC2 instances with AWS AI chips

The use of large language models (LLMs) and generative AI has exploded over the last year. With the release of powerful publicly available foundation models, tools for training, fine-tuning, and hosting your own LLM have also become democratized. Using vLLM on AWS Trainium and Inferentia makes it possible to host LLMs for high performance …

Using LLMs to fortify cyber defenses: Sophos’s insight on strategies for using LLMs with Amazon Bedrock and Amazon SageMaker

This post is co-written with Adarsh Kyadige and Salma Taoufiq from Sophos.  As a leader in cutting-edge cybersecurity, Sophos is dedicated to safeguarding over 500,000 organizations and millions of customers across more than 150 countries. By harnessing the power of threat intelligence, machine learning (ML), and artificial intelligence (AI), Sophos delivers a comprehensive range of …

Enhanced observability for AWS Trainium and AWS Inferentia with Datadog

This post is co-written with Curtis Maher and Anjali Thatte from Datadog.  This post walks you through Datadog’s new integration with AWS Neuron, which helps you monitor your AWS Trainium and AWS Inferentia instances by providing deep observability into resource utilization, model execution performance, latency, and real-time infrastructure health, enabling you to optimize machine learning …

Create a virtual stock technical analyst using Amazon Bedrock Agents

Stock technical analysis questions can be as unique as the individual stock analysts themselves. Queries often involve multiple technical indicators, such as Simple Moving Average (SMA), Exponential Moving Average (EMA), Relative Strength Index (RSI), and others. Answering these varied questions would mean writing complex business logic to unpack the query into parts and fetching the necessary …

Apply Amazon SageMaker Studio lifecycle configurations using AWS CDK

This post serves as a step-by-step guide on how to set up lifecycle configurations for your Amazon SageMaker Studio domains. With lifecycle configurations, system administrators can apply automated controls to their SageMaker Studio domains and their users. We cover core concepts of SageMaker Studio and provide code examples of how to apply lifecycle configuration to …

Build a read-through semantic cache with Amazon OpenSearch Serverless and Amazon Bedrock

In the field of generative AI, latency and cost pose significant challenges. The commonly used large language models (LLMs) often process text sequentially, predicting one token at a time in an autoregressive manner. This approach can introduce delays, resulting in less-than-ideal user experiences. Additionally, the growing demand for AI-powered applications has led to a high …
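The read-through pattern this excerpt alludes to can be sketched in plain Python. Everything below is illustrative, not the post's actual implementation: the letter-frequency `embed` function is a toy stand-in for a real embedding model (which the post would obtain via Amazon Bedrock), the in-memory list stands in for an Amazon OpenSearch Serverless vector index, and `SemanticCache` is a hypothetical name.

```python
import math

def embed(text):
    # Toy letter-frequency embedding, normalized to unit length.
    # A real system would call an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Both vectors are unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

class SemanticCache:
    """Read-through cache: return a stored answer when a new query is
    semantically close enough to a previous one; otherwise call the
    LLM and store the result for future queries."""

    def __init__(self, llm, threshold=0.9):
        self.llm = llm                # callable: query string -> answer string
        self.threshold = threshold    # minimum similarity for a cache hit
        self.entries = []             # list of (embedding, answer) pairs

    def query(self, text):
        emb = embed(text)
        for cached_emb, answer in self.entries:
            if cosine(emb, cached_emb) >= self.threshold:
                return answer         # cache hit: skip the LLM entirely
        answer = self.llm(text)       # cache miss: read through to the LLM
        self.entries.append((emb, answer))
        return answer
```

A repeated or near-duplicate query then skips the LLM call, which is where the latency and cost savings come from; the `threshold` parameter trades hit rate against the risk of returning an answer to a subtly different question.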

Rad AI reduces real-time inference latency by 50% using Amazon SageMaker

This post is co-written with Ken Kao and Hasan Ali Demirci from Rad AI. Rad AI has reshaped radiology reporting, developing solutions that streamline the most tedious and repetitive tasks and save radiologists time. Since 2018, using state-of-the-art proprietary and open source large language models (LLMs), our flagship product, Rad AI Impressions, has significantly reduced the …

Read graphs, diagrams, tables, and scanned pages using multimodal prompts in Amazon Bedrock

Large language models (LLMs) have come a long way from being able to read only text to now being able to read and understand graphs, diagrams, tables, and images. In this post, we discuss how to use LLMs from Amazon Bedrock to not only extract text, but also understand information available in images. Amazon Bedrock …

How Crexi achieved ML models deployment on AWS at scale and boosted efficiency

This post is co-written with Isaac Smothers and James Healy-Mirkovich from Crexi. With the current demand for AI and machine learning (AI/ML) solutions, the processes to train and deploy models and scale inference are crucial to business success. Even though progress in AI/ML, and especially generative AI, is rapid, machine learning operations (MLOps) tooling is continuously …
