Vitech uses Amazon Bedrock to revolutionize information access with AI-powered chatbot

This post is co-written with Murthy Palla and Madesh Subbanna from Vitech. Vitech is a global provider of cloud-centered benefit and investment administration software. Vitech helps group insurance, pension fund administration, and investment clients expand their offerings and capabilities, streamline their operations, and gain analytical insights. To serve their customers, Vitech maintains a repository of …

Enhance your security capabilities with Azure Bastion Premium

At Microsoft Azure, we are unwavering in our commitment to providing robust and reliable networking solutions for our customers. In today’s dynamic digital landscape, seamless connectivity, uncompromising security, and optimal performance are non-negotiable. As cyber threats have grown more frequent and severe, the demand for security in the cloud has increased drastically. As a response …

Celebrating customers’ journeys to AI innovation at Microsoft Build 2024

Ever since I started at Microsoft in August 2023, I have been more than excited for Microsoft Build 2024, which wrapped up last week. Why? The achievements of our customers leveraging Azure AI to drive innovation across all industries are astounding, and I’ll happily take every opportunity to showcase and celebrate them. From enhancing productivity and …

Enhance image search experiences with Amazon Personalize, Amazon OpenSearch Service, and Amazon Titan Multimodal Embeddings in Amazon Bedrock

A variety of techniques have been used to return images relevant to search queries. Historically, the idea of creating a joint embedding space to facilitate image captioning or text-to-image search has interested machine learning (ML) practitioners and businesses for quite a while. Contrastive Language–Image Pre-training (CLIP) and Bootstrapping Language-Image Pre-training (BLIP) …
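The joint-embedding idea behind models like CLIP can be sketched in a few lines: text and images are mapped into the same vector space, and text-to-image search reduces to ranking images by cosine similarity to the query embedding. The vectors and file names below are made up for illustration; in the workflow the post describes, the embeddings would come from a model such as Amazon Titan Multimodal Embeddings.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy vectors in a shared text/image embedding space (numbers invented).
query_vec = [0.9, 0.1, 0.0]             # embedding of the text query
image_vecs = {
    "red_dress.jpg":  [0.8, 0.2, 0.1],  # semantically close to the query
    "blue_shoes.jpg": [0.0, 1.0, 0.0],
    "green_hat.jpg":  [0.1, 0.0, 1.0],
}

# Text-to-image search = rank images by similarity to the query embedding.
ranked = sorted(image_vecs, key=lambda k: -cosine(query_vec, image_vecs[k]))
print(ranked[0])  # red_dress.jpg
```

At scale, the exhaustive ranking above is replaced by an approximate nearest-neighbor index, which is the role Amazon OpenSearch Service plays in the post's architecture.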

End-to-end LLM training on instance clusters with over 100 nodes using AWS Trainium

Llama is Meta AI’s large language model (LLM), with variants ranging from 7 billion to 70 billion parameters. Llama uses a transformer-based, decoder-only model architecture, which specializes in language token generation. Training such a model from scratch requires a dataset containing trillions of tokens. The Llama family is one of the most popular LLM families. …
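The "decoder-only architecture specializing in token generation" point can be made concrete with the inference loop such models run: score the next token given everything so far, append the best one, repeat. This is only a structural sketch; the scoring function below is a made-up stand-in for a trained model's forward pass.

```python
def greedy_decode(next_scores, prompt, max_new=4):
    """Decoder-only generation loop: score the next token given all
    tokens generated so far, append the highest-scoring one, repeat."""
    tokens = list(prompt)
    for _ in range(max_new):
        scores = next_scores(tokens)
        tokens.append(scores.index(max(scores)))
    return tokens

def toy_scores(tokens, vocab=8):
    """Hypothetical stand-in for a model forward pass: it simply favors
    the token ID after the last one. A real LLM returns logits over a
    vocabulary of tens of thousands of tokens."""
    scores = [0.0] * vocab
    scores[(tokens[-1] + 1) % vocab] = 1.0
    return scores

print(greedy_decode(toy_scores, [3]))  # [3, 4, 5, 6, 7]
```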

Fine-tune large multimodal models using Amazon SageMaker

Large multimodal models (LMMs) integrate multiple data types into a single model. By combining text with images and other modalities during training, multimodal models such as Claude 3, GPT-4V, and Gemini Pro Vision gain a more comprehensive understanding and an improved ability to process diverse data types. The multimodal approach allows models to handle a wider range …

Accelerate Mixtral 8x7B pre-training with expert parallelism on Amazon SageMaker

Mixture of Experts (MoE) architectures for large language models (LLMs) have recently gained popularity due to their ability to increase model capacity and computational efficiency compared to fully dense models. By utilizing sparse expert subnetworks that process different subsets of tokens, MoE models can effectively increase the number of parameters while requiring less computation per …
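The capacity/compute trade-off the excerpt describes shows up clearly in a small sketch: total parameters grow with the number of experts, but each token is processed by only its top-k experts. This is a simplified single MoE layer under illustrative shapes and names; real implementations such as Mixtral add details like load-balancing losses that are omitted here.

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Sparse MoE layer: route each token to its top_k experts only.

    x:       (tokens, dim) activations
    gate_w:  (dim, n_experts) router weights
    experts: list of (dim, dim) expert weight matrices
    """
    logits = x @ gate_w                            # router score per expert
    chosen = np.argsort(-logits, axis=1)[:, :top_k]
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, chosen[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()                               # softmax over chosen experts
        for weight, e in zip(w, chosen[t]):
            out[t] += weight * (x[t] @ experts[e])
    return out, chosen

rng = np.random.default_rng(0)
dim, n_experts = 4, 8
gate_w = rng.normal(size=(dim, n_experts))
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
x = rng.normal(size=(3, dim))                      # 3 tokens

# 8 experts' worth of parameters, but each token touches only 2 of them.
out, chosen = moe_forward(x, gate_w, experts)
```

Expert parallelism, the subject of the post, distributes the `experts` list across devices so that tokens are exchanged between devices according to `chosen`.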

Generating fashion product descriptions by fine-tuning a vision-language model with SageMaker and Amazon Bedrock

In the world of online retail, creating high-quality product descriptions for millions of products is a crucial but time-consuming task. Using machine learning (ML) and natural language processing (NLP) to automate product description generation has the potential to save manual effort and transform the way ecommerce platforms operate. One of the main advantages of high-quality …

Create a multimodal assistant with advanced RAG and Amazon Bedrock

Retrieval Augmented Generation (RAG) models have emerged as a promising approach to enhance the capabilities of language models by incorporating external knowledge from large text corpora. However, despite their impressive performance in various natural language processing tasks, RAG models still have limitations that need to be addressed. In particular, naive RAG models face limitations such as …
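The retrieve-then-generate flow underlying naive RAG can be sketched in a few lines. Everything here is a toy stand-in: word-overlap scoring replaces dense vector retrieval, the documents are invented, and the final generation call (to a model on Amazon Bedrock, in the post's setup) is omitted.

```python
def retrieve(query, corpus, k=2):
    """Toy retriever: rank passages by word overlap with the query.
    Production RAG uses dense vector search over embeddings instead."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))[:k]

def build_prompt(query, passages):
    """The 'augmented' step: prepend retrieved context to the question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Amazon Bedrock offers foundation models through a single API.",
    "The warehouse inventory is updated nightly.",
    "RAG augments a language model with retrieved documents.",
]
hits = retrieve("how does rag work", docs, k=1)
prompt = build_prompt("how does rag work", hits)
```

The limitations of this naive pattern, such as retrieving irrelevant passages or ignoring non-text modalities, are exactly what the advanced multimodal RAG techniques in the post aim to address.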
