New AI Collaboratives to take action on wildfires and food insecurity
Learn how our AI Collaboratives for wildfires and food security are taking a new funding approach to help people around the world.
The first satellite for the FireSat constellation officially made contact with Earth. This satellite is the first of more than 50 in a first-of-its-kind constellation de…
Learn more about Google Research’s FireSat project, built to detect small wildfires.
Computer use is a breakthrough capability from Anthropic that allows foundation models (FMs) to visually perceive and interpret digital interfaces. This capability enables Anthropic’s Claude models to identify what’s on a screen, understand the context of UI elements, and recognize actions that should be performed, such as clicking buttons, typing text, scrolling, and navigating between …
Getting started with computer use in Amazon Bedrock Agents Read More »
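The post itself walks through the Bedrock Agents integration; as a rough illustration of the underlying request shape, the sketch below calls a Claude model directly through the Bedrock Runtime InvokeModel API with Anthropic’s computer-use tool definition. Treat the model ID, the anthropic_beta flag, and the tool version string as assumptions to verify against current documentation.

```python
import json
import boto3

# Direct InvokeModel sketch (not the Bedrock Agents action-group setup the post covers).
# Model ID, beta flag, and tool version string are assumptions -- check current docs.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

request_body = {
    "anthropic_version": "bedrock-2023-05-31",
    "anthropic_beta": ["computer-use-2024-10-22"],  # assumed beta flag for computer use
    "max_tokens": 1024,
    "tools": [
        {
            "type": "computer_20241022",   # assumed tool version
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }
    ],
    "messages": [
        {"role": "user", "content": "Open the settings page and enable dark mode."}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",  # placeholder model ID
    body=json.dumps(request_body),
)
payload = json.loads(response["body"].read())

# Claude returns tool_use blocks (e.g. a screenshot request or a click) that an
# orchestration loop must execute on the actual desktop and feed back as tool results.
for block in payload.get("content", []):
    if block.get("type") == "tool_use":
        print("Requested action:", block["input"])
```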
Organizations building and deploying AI applications, particularly those using large language models (LLMs) with Retrieval Augmented Generation (RAG) systems, face a significant challenge: how to evaluate AI outputs effectively throughout the application lifecycle. As these AI technologies become more sophisticated and widely adopted, maintaining consistent quality and performance becomes increasingly complex. Traditional AI evaluation approaches …
Evaluating RAG applications with Amazon Bedrock knowledge base evaluation Read More »
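The managed knowledge base evaluation feature handles this end to end; as a point of comparison, here is a minimal hand-rolled LLM-as-judge check for a single RAG output using the Bedrock Runtime Converse API. The judge model ID and the scoring rubric are placeholders, not part of the managed feature.

```python
import boto3

# Minimal hand-rolled LLM-as-judge check for one RAG output.
# This is NOT the managed knowledge base evaluation job the post describes;
# the model ID and rubric below are placeholders.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

question = "What is the refund window for annual plans?"
retrieved_context = "Annual plans can be refunded within 30 days of purchase."
answer = "You can request a refund within 30 days of buying an annual plan."

judge_prompt = (
    "You are grading a RAG answer.\n"
    f"Question: {question}\n"
    f"Retrieved context: {retrieved_context}\n"
    f"Answer: {answer}\n"
    "Score faithfulness to the context from 1 to 5 and reply with only the number."
)

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder judge model
    messages=[{"role": "user", "content": [{"text": judge_prompt}]}],
    inferenceConfig={"maxTokens": 10, "temperature": 0.0},
)

score = response["output"]["message"]["content"][0]["text"].strip()
print("Faithfulness score:", score)
```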
Google shares policy recommendations in response to OSTP’s request for information for the U.S. AI Action Plan.
This post was co-written with Vishal Singh, Data Engineering Leader on the Data & Analytics team at GoDaddy. Generative AI solutions have the potential to transform businesses by boosting productivity and improving customer experiences, and using large language models (LLMs) in these solutions has become increasingly popular. However, inference of LLMs as single model invocations or …
Open foundation models (FMs) allow organizations to build customized AI applications by fine-tuning for their specific domains or tasks, while retaining control over costs and deployments. However, deployment can be a significant portion of the effort, often consuming around 30% of project time, because engineers must optimize instance types and configure serving parameters through careful …
Benchmarking customized models on Amazon Bedrock using LLMPerf and LiteLLM Read More »
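To give a rough sense of what such a benchmark harness does, the sketch below uses LiteLLM’s OpenAI-compatible completion call to time a Bedrock model over a handful of requests; LLMPerf layers concurrency and token-throughput accounting on top of this idea. The model ID is a placeholder; a customized model would typically be addressed through its provisioned-throughput ARN.

```python
import time
import statistics
import litellm

# Placeholder model ID; a customized Bedrock model is usually reached via its
# provisioned-throughput ARN (e.g. "bedrock/arn:aws:bedrock:...").
MODEL = "bedrock/anthropic.claude-3-haiku-20240307-v1:0"
PROMPT = "Summarize the benefits of model benchmarking in two sentences."

latencies, output_tokens = [], []
for _ in range(5):  # small serial sample; LLMPerf issues many concurrent requests
    start = time.perf_counter()
    resp = litellm.completion(
        model=MODEL,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=128,
    )
    latencies.append(time.perf_counter() - start)
    output_tokens.append(resp.usage.completion_tokens)

print(f"p50 latency: {statistics.median(latencies):.2f}s")
print(f"tokens/sec:  {sum(output_tokens) / sum(latencies):.1f}")
```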
The integration of generative AI agents into business processes is poised to accelerate as organizations recognize the untapped potential of these technologies. Advancements in multimodal artificial intelligence (AI), where agents can understand and generate not just text but also images, audio, and video, will further broaden their applications. This post will discuss agentic AI driven …
Creating asynchronous AI agents with Amazon Bedrock Read More »
The Qwen 2.5 multilingual large language models (LLMs) are a collection of pre-trained and instruction-tuned generative models in 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B sizes (text in/text out and code out). The Qwen 2.5 fine-tuned text-only models are optimized for multilingual dialogue use cases and outperform both previous generations of Qwen models, and …
How to run Qwen 2.5 on AWS AI chips using Hugging Face libraries Read More »
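The post covers the full setup; as a rough sketch, assuming optimum-neuron’s Qwen2 support on an Inferentia2 (inf2) instance, the snippet below compiles and runs a Qwen 2.5 instruct checkpoint with Hugging Face libraries. The export arguments (batch size, sequence length, core count, cast type) are illustrative values, not tuned settings from the post.

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

# Illustrative compile-and-run sketch for an inf2 instance; the export arguments
# below are example values, not tuned settings.
model_id = "Qwen/Qwen2.5-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,            # compile the checkpoint for Neuron cores on first load
    batch_size=1,
    sequence_length=2048,
    num_cores=2,
    auto_cast_type="bf16",
)

messages = [{"role": "user", "content": "Explain AWS Inferentia in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```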