Gemini 3.1 Flash Live: Making audio AI more natural and reliable
Gemini 3.1 Flash Live is now available across Google products.
We’re expanding Search Live globally to all languages and locations where AI Mode is available.
Video content is now everywhere, from security surveillance and media production to social platforms and enterprise communications. However, extracting meaningful insights from large volumes of video remains a major challenge. Organizations need solutions that can understand not only what appears in a video, but also the context, narrative, and underlying meaning of the content. In …
Unlocking video insights at scale with Amazon Bedrock multimodal models
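The post above describes sending video to a multimodal model on Amazon Bedrock and asking for context and narrative, not just object labels. A minimal sketch of that pattern using the Bedrock Converse API is below; the `amazon.nova-lite-v1:0` model ID and the prompt are illustrative assumptions, not the post's exact setup.

```python
"""Sketch: summarizing a video clip with a multimodal model on Amazon Bedrock.

Assumptions: the Converse API video content block and a video-capable model
(`amazon.nova-lite-v1:0` here); swap in any model available in your region.
"""


def build_video_message(video_bytes: bytes, prompt: str) -> dict:
    """Construct a Converse-API user message pairing a video with a question."""
    return {
        "role": "user",
        "content": [
            {"video": {"format": "mp4", "source": {"bytes": video_bytes}}},
            {"text": prompt},
        ],
    }


def summarize_video(video_path: str, region: str = "us-east-1") -> str:
    """Send one clip to Bedrock and return the model's text answer."""
    import boto3  # local import so the sketch loads without the AWS SDK installed

    client = boto3.client("bedrock-runtime", region_name=region)
    with open(video_path, "rb") as f:
        message = build_video_message(
            f.read(), "Summarize the key events and overall narrative of this clip."
        )
    response = client.converse(
        modelId="amazon.nova-lite-v1:0",  # assumed model ID
        messages=[message],
    )
    return response["output"]["message"]["content"][0]["text"]


# Example (requires AWS credentials and a local file):
# print(summarize_video("clip.mp4"))
```

Scaling this up is then a matter of fanning the same call out over segments of a long video and aggregating the per-segment answers.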
This post is a collaboration between AWS and Pipecat. Deploying intelligent voice agents that maintain natural, human-like conversations requires streaming to users where they are, across web, mobile, and phone channels, even under heavy traffic and unreliable network conditions. Even small delays can break the conversational flow, causing users to perceive the agent as unresponsive …
Deploy voice agents with Pipecat and Amazon Bedrock AgentCore Runtime – Part 1
In December 2025, we announced the availability of reinforcement fine-tuning (RFT) on Amazon Bedrock, starting with support for Nova models. This was followed by extended support for open-weight models such as OpenAI GPT OSS 20B and Qwen 3 32B in February 2026. RFT in Amazon Bedrock automates the end-to-end customization workflow. This allows the …
Lyria 3 is now available in paid preview through the Gemini API and for testing in Google AI Studio.
We are bringing Lyria 3 to the tools where professionals work and create every day.
Deploying large language models (LLMs) for inference requires reliable GPU capacity, especially during critical evaluation periods, limited-duration production testing, or burst workloads. Capacity constraints can delay deployments and impact application performance. Customers can use Amazon SageMaker AI training plans to reserve compute capacity for specified time periods. Originally designed for training workloads, training plans now …
Deploy SageMaker AI inference endpoints with set GPU capacity using training plans
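The post above hinges on reserving GPU capacity ahead of time with a training plan. A rough sketch of that reservation step is below: `search_training_plan_offerings` and `create_training_plan` are real SageMaker API operations, but the parameter values, response-field names, and the offering-selection helper are illustrative assumptions; attaching the resulting plan to an inference endpoint follows the post itself.

```python
"""Sketch: reserving GPU capacity with a SageMaker AI training plan.

Assumptions: instance type, duration, plan name, and the exact response
shape of SearchTrainingPlanOfferings; check the boto3 SageMaker reference
for your SDK version before relying on these.
"""
from datetime import datetime, timedelta, timezone


def cheapest_offering(offerings: list) -> dict:
    """Pick the lowest-cost offering from a SearchTrainingPlanOfferings result."""
    return min(offerings, key=lambda o: float(o["UpfrontFee"]))


def reserve_capacity(instance_type: str = "ml.p5.48xlarge", hours: int = 72) -> str:
    """Search for matching capacity offerings and buy the cheapest one."""
    import boto3  # local import so the sketch loads without the AWS SDK installed

    sm = boto3.client("sagemaker")
    start = datetime.now(timezone.utc) + timedelta(days=1)
    offerings = sm.search_training_plan_offerings(
        InstanceType=instance_type,
        InstanceCount=1,
        StartTimeAfter=start,
        DurationHours=hours,
        TargetResources=["training"],
    )["TrainingPlanOfferings"]
    plan = sm.create_training_plan(
        TrainingPlanName="inference-burst-capacity",  # hypothetical name
        TrainingPlanOfferingId=cheapest_offering(offerings)["TrainingPlanOfferingId"],
    )
    return plan["TrainingPlanArn"]


# Example (requires AWS credentials and training-plan access):
# print(reserve_capacity())
```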
Businesses across industries face a common challenge: how to efficiently extract valuable information from vast amounts of unstructured data. Traditional approaches often involve resource-intensive processes and inflexible models. This post introduces a game-changing solution: Claude tool use in Amazon Bedrock, which uses the power of large language models (LLMs) to perform dynamic, adaptable entity recognition …
Accelerating custom entity recognition with Claude tool use in Amazon Bedrock
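The core trick behind tool-use entity recognition is defining a tool whose input schema *is* your desired output format, then forcing the model to answer by "calling" it, so extraction results arrive as validated JSON rather than free text. A minimal sketch via the Bedrock Converse API follows; the tool name, schema, and model ID are illustrative assumptions, not the post's exact configuration.

```python
"""Sketch: structured entity extraction via Claude tool use on Amazon Bedrock.

Assumptions: the `record_entities` tool and its schema are hypothetical;
the Converse API `toolConfig`/`toolChoice` shape is the real mechanism.
"""

# The tool's input schema doubles as the output contract for extraction.
ENTITY_TOOL = {
    "toolSpec": {
        "name": "record_entities",  # hypothetical tool name
        "description": "Record every named entity found in the text.",
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "entities": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "properties": {
                                "text": {"type": "string"},
                                "type": {"type": "string"},
                            },
                            "required": ["text", "type"],
                        },
                    }
                },
                "required": ["entities"],
            }
        },
    }
}


def extract_entities(document: str, region: str = "us-east-1") -> list:
    """Ask Claude to extract entities, returned as schema-conforming JSON."""
    import boto3  # local import so the sketch loads without the AWS SDK installed

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
        messages=[{"role": "user", "content": [{"text": document}]}],
        toolConfig={
            "tools": [ENTITY_TOOL],
            # Force a tool call so the reply is always structured, never prose.
            "toolChoice": {"tool": {"name": "record_entities"}},
        },
    )
    for block in response["output"]["message"]["content"]:
        if "toolUse" in block:
            return block["toolUse"]["input"]["entities"]
    return []
```

Changing what gets extracted is then just a schema edit, which is the "dynamic, adaptable" property the post highlights versus retraining a fixed NER model.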
This post is cowritten by Tal Shapira and Tamir Friedman from Reco. Reco helps organizations strengthen the security of their software as a service (SaaS) applications and accelerate business without compromise. Using Anthropic Claude in Amazon Bedrock, Reco tackles the challenge of machine-readable security alerts that SOC teams struggle to interpret quickly. This implementation helps …
How Reco transforms security alerts using Amazon Bedrock