Azure at GitHub Universe: New tools to help simplify AI app development

AI has reset our expectations of what technology can achieve. Whether it’s transforming how we explore the cosmos, enabling doctors to provide personalized care, or making business functions operate more intelligently, it all comes down to you, the developer, to turn this potential into reality. As developers, you’re experiencing a dramatic shift in what you build and …

Build a video insights and summarization engine using generative AI with Amazon Bedrock

Professionals in a wide variety of industries have adopted digital video conferencing tools as part of their regular meetings with suppliers, colleagues, and customers. These meetings often involve exchanging information and discussing actions that one or more parties must take after the session. The traditional way to make sure information and actions aren’t forgotten is …
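
The full post walks through the end-to-end architecture; as a rough illustration of the core step, the sketch below (not the post's exact code, and with an assumed model ID and prompt) sends a meeting transcript to a foundation model through the Amazon Bedrock Converse API and asks for a summary with action items.

```python
# Minimal sketch: summarize a meeting transcript with Amazon Bedrock.
# The model ID and prompt wording are illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def summarize_transcript(transcript: str) -> str:
    """Ask a Bedrock foundation model for a summary plus action items."""
    prompt = (
        "Summarize the following meeting transcript and list the action items, "
        "with owners and due dates where they are mentioned:\n\n" + transcript
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model choice
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(summarize_transcript("Alice: We need the Q3 report by Friday. Bob: I'll draft it."))
```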

Automate document processing with Amazon Bedrock Prompt Flows (preview)

Enterprises in industries like manufacturing, finance, and healthcare are inundated with a constant flow of documents—from financial reports and contracts to patient records and supply chain documents. Historically, processing and extracting insights from these unstructured data sources has been a manual, time-consuming, and error-prone task. However, the rise of intelligent document processing (IDP), which uses …
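
The post builds the flow itself in the Amazon Bedrock console; at runtime, an application can invoke a published flow through the bedrock-agent-runtime API. Below is a minimal sketch assuming a flow and alias already exist; the identifiers and node names are placeholders and may differ from the post's setup.

```python
# Minimal sketch: invoke an already-published Bedrock prompt flow.
# The flow ID, alias ID, and node names are placeholders.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

def run_document_flow(document_text: str) -> list:
    """Send a document through the flow and collect its outputs."""
    response = client.invoke_flow(
        flowIdentifier="FLOW_ID",             # placeholder
        flowAliasIdentifier="FLOW_ALIAS_ID",  # placeholder
        inputs=[{
            "content": {"document": document_text},
            "nodeName": "FlowInputNode",      # assumed input node name
            "nodeOutputName": "document",
        }],
    )
    outputs = []
    # The result arrives as an event stream; keep only the flow output events.
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            outputs.append(event["flowOutputEvent"]["content"]["document"])
    return outputs
```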

Governing the ML lifecycle at scale: Centralized observability with Amazon SageMaker and Amazon CloudWatch

This post is part of an ongoing series on governing the machine learning (ML) lifecycle at scale. To start from the beginning, refer to Governing the ML lifecycle at scale, Part 1: A framework for architecting ML workloads using Amazon SageMaker. A multi-account strategy is essential not only for improving governance but also for enhancing …
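
As one small, hedged example of the kind of signal a workload account can surface to a central observability account (the post's design covers far more), the sketch below publishes a custom CloudWatch metric counting recently failed SageMaker training jobs; the namespace and metric name are assumptions.

```python
# Minimal sketch: publish a custom CloudWatch metric about SageMaker training
# jobs so a central observability account can aggregate it.
# Namespace and metric name are illustrative assumptions.
import boto3
from datetime import datetime, timedelta, timezone

sagemaker = boto3.client("sagemaker")
cloudwatch = boto3.client("cloudwatch")

def publish_failed_training_job_count() -> int:
    """Count training jobs that failed in the last hour and emit a metric."""
    since = datetime.now(timezone.utc) - timedelta(hours=1)
    failed = sagemaker.list_training_jobs(
        StatusEquals="Failed", LastModifiedTimeAfter=since
    )["TrainingJobSummaries"]
    cloudwatch.put_metric_data(
        Namespace="MLGovernance/SageMaker",  # assumed namespace
        MetricData=[{
            "MetricName": "FailedTrainingJobs",
            "Value": len(failed),
            "Unit": "Count",
        }],
    )
    return len(failed)
```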

Accelerate scale with Azure OpenAI Service Provisioned offering

In today’s fast-evolving digital landscape, enterprises need more than just powerful AI models—they need AI solutions that are adaptable, reliable, and scalable. With the upcoming availability of Data Zones and new enhancements to the Provisioned offering in Azure OpenAI Service, we are taking a big step forward in making AI both broadly available and enterprise-ready. These features …
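
From an application's perspective, a provisioned deployment is called like any other Azure OpenAI deployment; the throughput guarantees live on the deployment rather than in the API call. The sketch below uses the openai Python SDK with a placeholder endpoint, API version, and deployment name.

```python
# Minimal sketch: call an Azure OpenAI deployment (provisioned or standard)
# with the openai Python SDK. Endpoint, key, API version, and deployment
# name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",  # assumed API version
)

response = client.chat.completions.create(
    model="my-provisioned-deployment",  # name of the provisioned deployment
    messages=[{"role": "user", "content": "Summarize provisioned throughput in one sentence."}],
)
print(response.choices[0].message.content)
```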

Import data from Google Cloud Platform BigQuery for no-code machine learning with Amazon SageMaker Canvas

In the modern, cloud-centric business landscape, data is often scattered across numerous clouds and on-site systems. This fragmentation can complicate efforts by organizations to consolidate and analyze data for their machine learning (ML) initiatives. This post presents an architectural approach to extract data from different cloud environments, such as Google Cloud Platform (GCP) BigQuery, without …
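
One straightforward pattern (not necessarily the exact architecture described in the post) is to run the query in BigQuery and stage the result in Amazon S3, where SageMaker Canvas can then import it as a dataset. In the sketch below, the GCP project, table, and S3 bucket are placeholders.

```python
# Minimal sketch: pull a BigQuery query result and stage it in Amazon S3 for
# import into SageMaker Canvas. Project, table, bucket, and key are placeholders.
import boto3
from google.cloud import bigquery

def export_bigquery_to_s3(sql: str, bucket: str, key: str) -> None:
    """Run a BigQuery query and upload the result to S3 as CSV."""
    bq = bigquery.Client(project="my-gcp-project")  # placeholder project
    df = bq.query(sql).to_dataframe()               # execute the query in BigQuery
    df.to_csv("/tmp/extract.csv", index=False)      # stage locally as CSV
    boto3.client("s3").upload_file("/tmp/extract.csv", bucket, key)

export_bigquery_to_s3(
    "SELECT * FROM `my-gcp-project.sales.orders` LIMIT 10000",  # placeholder query
    bucket="my-canvas-imports",
    key="bigquery/orders.csv",
)
```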

Customized model monitoring for near real-time batch inference with Amazon SageMaker

Real-world applications place varying inference requirements on their artificial intelligence and machine learning (AI/ML) solutions in order to optimize performance and reduce costs. Examples include financial systems processing transaction data streams, recommendation engines processing user activity data, and computer vision models processing video frames. In these scenarios, customized model monitoring for near real-time batch inference with Amazon …
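
For reference, the sketch below shows the stock data-quality flow from the SageMaker Python SDK (a baseline job plus an hourly monitoring schedule), which the post's customized near real-time batch setup builds on and extends; the role, endpoint, and S3 paths are placeholders.

```python
# Minimal sketch: baseline + hourly data-quality monitoring with the SageMaker
# Python SDK. Role ARN, endpoint name, and S3 URIs are placeholders; the post's
# customized near real-time batch setup extends this stock pattern.
from sagemaker.model_monitor import DefaultModelMonitor, CronExpressionGenerator
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Profile the training data to produce baseline statistics and constraints.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/baseline/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/baseline/results",
)

# Compare captured inference data against the baseline every hour.
monitor.create_monitoring_schedule(
    monitor_schedule_name="data-quality-schedule",
    endpoint_input="my-endpoint",  # placeholder; batch transform input is also supported
    output_s3_uri="s3://my-bucket/monitoring/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```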

How Planview built a scalable AI Assistant for portfolio and project management using Amazon Bedrock

This post is co-written with Lee Rehwinkel from Planview. Businesses today face numerous challenges in managing intricate projects and programs, deriving valuable insights from massive data volumes, and making timely decisions. These hurdles frequently lead to productivity bottlenecks for program managers and executives, hindering their ability to drive organizational success efficiently. Planview, a leading provider …

Super charge your LLMs with RAG at scale using AWS Glue for Apache Spark

Large language models (LLMs) are very large deep-learning models that are pre-trained on vast amounts of data. LLMs are incredibly flexible. One model can perform completely different tasks such as answering questions, summarizing documents, translating languages, and completing sentences. LLMs have the potential to revolutionize content creation and the way people use search engines and …
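
The post's pipeline runs as an AWS Glue for Apache Spark job; the sketch below shows the general shape of the preprocessing step in plain PySpark: chunk the source documents, embed each chunk with a Bedrock embedding model, and write the vectors out for a vector store to ingest. The paths, model ID, and chunk size are assumptions rather than the post's exact code.

```python
# Minimal sketch of a RAG preprocessing job in PySpark: chunk documents,
# embed each chunk with a Bedrock embedding model, and persist the vectors.
# Paths, model ID, and chunk size are illustrative assumptions.
import json
import boto3
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import ArrayType, FloatType, StringType

spark = SparkSession.builder.appName("rag-chunk-and-embed").getOrCreate()

@F.udf(ArrayType(StringType()))
def chunk_text(text):
    """Split a document into fixed-size character chunks."""
    size = 1000
    return [text[i:i + size] for i in range(0, len(text or ""), size)]

@F.udf(ArrayType(FloatType()))
def embed_chunk(chunk):
    """Embed one chunk with a Bedrock embedding model (runs on the executors)."""
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = client.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",  # assumed embedding model
        body=json.dumps({"inputText": chunk}),
        contentType="application/json",
    )
    return json.loads(resp["body"].read())["embedding"]

# Read one document per file, split into chunks, embed, and store as Parquet.
docs = spark.read.text("s3://my-bucket/corpus/", wholetext=True).withColumnRenamed("value", "text")
chunks = docs.select(F.explode(chunk_text("text")).alias("chunk"))
vectors = chunks.withColumn("embedding", embed_chunk("chunk"))
vectors.write.mode("overwrite").parquet("s3://my-bucket/rag/embeddings/")
```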
