Watch James Manyika talk AI and creativity with LL COOL J.
In the latest episode of our Dialogues on Technology and Society series, LL COOL J sits down with James Manyika.
Google Translate’s Live translate with headphones is officially arriving on iOS! And we’re expanding the capability to even more countries for both iOS and Android users…
Gemini 3.1 Flash Live is now available across Google products.
We’re expanding Search Live globally to all languages and locations where AI Mode is available.
Video content is now everywhere, from security surveillance and media production to social platforms and enterprise communications. However, extracting meaningful insights from large volumes of video remains a major challenge. Organizations need solutions that can understand not only what appears in a video, but also the context, narrative, and underlying meaning of the content. In …
Unlocking video insights at scale with Amazon Bedrock multimodal models
This post is a collaboration between AWS and Pipecat. Deploying intelligent voice agents that maintain natural, human-like conversations requires streaming to users where they are, across web, mobile, and phone channels, even under heavy traffic and unreliable network conditions. Even small delays can break the conversational flow, causing users to perceive the agent as unresponsive …
Deploy voice agents with Pipecat and Amazon Bedrock AgentCore Runtime – Part 1
In December 2025, we announced the availability of reinforcement fine-tuning (RFT) on Amazon Bedrock, starting with support for Nova models. This was followed by extended support for open-weight models such as OpenAI GPT OSS 20B and Qwen 3 32B in February 2026. RFT in Amazon Bedrock automates the end-to-end customization workflow. This allows the …
Lyria 3 is now available in paid preview through the Gemini API and for testing in Google AI Studio.
We are bringing Lyria 3 to the tools where professionals work and create every day.
Deploying large language models (LLMs) for inference requires reliable GPU capacity, especially during critical evaluation periods, limited-duration production testing, or burst workloads. Capacity constraints can delay deployments and impact application performance. Customers can use Amazon SageMaker AI training plans to reserve compute capacity for specified time periods. Originally designed for training workloads, training plans now …
Deploy SageMaker AI inference endpoints with set GPU capacity using training plans