Learnings from COBOL modernization in the real world

There’s a lot of excitement right now about AI enabling mainframe application modernization. Boards are paying attention. CIOs are getting asked for a plan. AI is a genuine accelerator for COBOL modernization, but to get results, AI needs additional context that source code alone can’t provide. Here’s what we’ve learned working with 400+ enterprise customers: mainframe …

Reinforcement fine-tuning for Amazon Nova: Teaching AI through feedback

Foundation models deliver impressive out-of-the-box performance for general tasks, but many organizations need models that incorporate their business knowledge. Model customization helps you bridge the gap between general-purpose AI and your specific business needs when building applications that require domain-specific expertise, enforce a particular communication style, or are optimized for specialized tasks like code generation, financial reasoning, or ensuring …

Large model inference container – latest capabilities and performance enhancements

Modern large language model (LLM) deployments face an escalating cost and performance challenge driven by token count growth. Token count, which is directly related to word count, image size, and other input factors, determines both computational requirements and costs. Longer contexts translate to higher expenses per inference request. This challenge has intensified as frontier models …
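The cost pressure described above comes down to simple arithmetic: per-request cost scales with token count. A minimal sketch, where the per-1K-token rates are hypothetical placeholders and not actual provider pricing:

```python
def estimate_inference_cost(input_tokens: int, output_tokens: int,
                            input_rate_per_1k: float = 0.003,
                            output_rate_per_1k: float = 0.015) -> float:
    """Estimate the dollar cost of one inference request.

    Rates are hypothetical dollars per 1,000 tokens; real pricing
    varies by model and provider.
    """
    return (input_tokens / 1000) * input_rate_per_1k \
         + (output_tokens / 1000) * output_rate_per_1k

# The same answer length costs far more with a long context:
short = estimate_inference_cost(input_tokens=2_000, output_tokens=500)
long_ctx = estimate_inference_cost(input_tokens=100_000, output_tokens=500)
```

Here `short` works out to $0.0135 while `long_ctx` is $0.3075, which is why context growth in frontier models makes inference-container efficiency matter.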

Efficiently serve dozens of fine-tuned models with vLLM on Amazon SageMaker AI and Amazon Bedrock

Organizations and individuals running multiple custom AI models, especially recent Mixture of Experts (MoE) model families, can face the challenge of paying for idle GPU capacity when the individual models don’t receive enough traffic to saturate a dedicated compute endpoint. To solve this problem, we have partnered with the vLLM community and developed an efficient …
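The idea of multiplexing many fine-tuned models over one shared base can be illustrated with a toy router: one base model stays resident, and lightweight adapters are swapped in on demand so low-traffic models don't need a dedicated endpoint. The class and method names below are illustrative only, not the actual vLLM or SageMaker API:

```python
class MultiAdapterEndpoint:
    """Toy model of serving many fine-tuned adapters on one endpoint.

    (Illustrative sketch -- not the vLLM or SageMaker API.)
    """

    def __init__(self, base_model: str, max_loaded_adapters: int = 4):
        self.base_model = base_model
        self.max_loaded_adapters = max_loaded_adapters
        self.loaded: list[str] = []  # LRU order, most recent last

    def invoke(self, adapter: str, prompt: str) -> str:
        if adapter in self.loaded:
            self.loaded.remove(adapter)   # refresh LRU position
        elif len(self.loaded) >= self.max_loaded_adapters:
            self.loaded.pop(0)            # evict least recently used
        self.loaded.append(adapter)
        return f"[{self.base_model}+{adapter}] response to: {prompt}"

endpoint = MultiAdapterEndpoint("base-llm", max_loaded_adapters=2)
endpoint.invoke("legal-v2", "Summarize this contract.")
endpoint.invoke("support-v1", "Draft a reply to this ticket.")
```

The design point is that adapters are small relative to the base model, so keeping only a bounded working set loaded lets dozens of custom models share the GPU capacity of a single endpoint.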

Building intelligent event agents using Amazon Bedrock AgentCore and Amazon Bedrock Knowledge Bases

Large conferences and events generate overwhelming amounts of information—from hundreds of sessions and workshops to speaker profiles, venue maps, and constantly updating schedules. While basic AI assistants can answer simple questions about event logistics, most fail to deliver the personalized guidance and contextual awareness that attendees need to navigate complex, multi-day conferences effectively. More importantly, …
