AWS AI News Hub

Your central source for the latest AWS artificial intelligence and machine learning service announcements, features, and updates

193 total updates, including 92 What's New announcements, 20 ML blog posts, and 19 news articles.

In this post, we explore how product teams can leverage Amazon Bedrock and AWS services to transform their creative workflows through generative AI, enabling rapid content iteration across multiple formats while maintaining brand consistency and compliance. The solution demonstrates how teams can deploy a scalable generative AI application that accelerates everything from product descriptions and marketing copy to visual concepts and video content, significantly reducing time to market while enhancing creative quality.

#bedrock

You can use AWS Step Functions to orchestrate complex business problems. However, as workflows grow and evolve, you can find yourself grappling with monolithic state machines that become increasingly difficult to maintain and update. In this post, we show you strategies for decomposing large Step Functions workflows into modular, maintainable components.
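A common decomposition pattern gives each child workflow its own state machine and has the parent invoke it through the Step Functions "startExecution.sync" service integration. A minimal Amazon States Language sketch, where the state machine names and account ID are placeholders:

```json
{
  "Comment": "Parent workflow delegating to modular child workflows (ARNs are placeholders)",
  "StartAt": "RunOrderProcessing",
  "States": {
    "RunOrderProcessing": {
      "Type": "Task",
      "Resource": "arn:aws:states:::states:startExecution.sync:2",
      "Parameters": {
        "StateMachineArn": "arn:aws:states:us-east-1:111122223333:stateMachine:OrderProcessing",
        "Input": {
          "orderId.$": "$.orderId"
        }
      },
      "Next": "RunNotification"
    },
    "RunNotification": {
      "Type": "Task",
      "Resource": "arn:aws:states:::states:startExecution.sync:2",
      "Parameters": {
        "StateMachineArn": "arn:aws:states:us-east-1:111122223333:stateMachine:Notification",
        "Input.$": "$"
      },
      "End": true
    }
  }
}
```

Each child can then be tested, versioned, and updated independently of the parent.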

#lex #step functions #update

In this post, we explore advanced cost monitoring strategies for Amazon Bedrock deployments, introducing granular custom tagging approaches for precise cost allocation and comprehensive reporting mechanisms that build upon the proactive cost management foundation established in Part 1. The solution demonstrates how to implement invocation-level tagging, application inference profiles, and integration with AWS Cost Explorer to create a complete 360-degree view of generative AI usage and expenses.
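To illustrate the idea of tag-based cost allocation, the sketch below rolls up estimated invocation costs by a cost-allocation tag. The per-token prices and invocation records are made up for the example; real Amazon Bedrock pricing varies by model and Region, and the post's solution reads actual usage via AWS Cost Explorer rather than in-memory records.

```python
from collections import defaultdict

# Hypothetical per-1K-token prices; real Amazon Bedrock pricing varies by
# model and Region, and the invocation records below are made up.
PRICE_PER_1K = {"input": 0.003, "output": 0.015}

def cost_by_tag(invocations, tag_key):
    """Roll up estimated invocation cost by one cost-allocation tag's value."""
    totals = defaultdict(float)
    for inv in invocations:
        cost = (inv["input_tokens"] / 1000) * PRICE_PER_1K["input"] \
             + (inv["output_tokens"] / 1000) * PRICE_PER_1K["output"]
        totals[inv["tags"].get(tag_key, "untagged")] += cost
    return dict(totals)

invocations = [
    {"tags": {"team": "marketing"}, "input_tokens": 2000, "output_tokens": 1000},
    {"tags": {"team": "support"}, "input_tokens": 1000, "output_tokens": 500},
    {"tags": {}, "input_tokens": 500, "output_tokens": 100},
]
print(cost_by_tag(invocations, "team"))
```

Untagged invocations fall into an "untagged" bucket, which is exactly the gap that invocation-level tagging closes.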

#bedrock #integration

In this post, we introduce a comprehensive solution for proactively managing Amazon Bedrock inference costs through a cost sentry mechanism designed to establish and enforce token usage limits, providing organizations with a robust framework for controlling generative AI expenses. The solution uses serverless workflows and native Amazon Bedrock integration to deliver a predictable, cost-effective approach that aligns with organizational financial constraints while preventing runaway costs through leading indicators and real-time budget enforcement.
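The cost-sentry idea, reduced to its core, is a budget with a leading-indicator threshold and a hard stop. A minimal in-process sketch; the post's solution enforces limits with serverless workflows and native Amazon Bedrock integration rather than a local counter, and the limit and ratio here are example values:

```python
class TokenBudgetSentry:
    """Track token usage against a limit and flag or block further requests.

    A minimal sketch of the cost-sentry idea, not the post's implementation.
    """

    def __init__(self, token_limit, warn_ratio=0.8):
        self.token_limit = token_limit
        self.warn_ratio = warn_ratio   # leading indicator before the hard stop
        self.used = 0

    def record(self, tokens):
        self.used += tokens

    def check(self):
        if self.used >= self.token_limit:
            return "blocked"           # hard budget enforcement
        if self.used >= self.token_limit * self.warn_ratio:
            return "warn"              # early warning while still under budget
        return "ok"

sentry = TokenBudgetSentry(token_limit=10_000)
sentry.record(8_500)
print(sentry.check())  # warn
sentry.record(2_000)
print(sentry.check())  # blocked
```

The "warn" state is the leading indicator the post describes: it lets teams react before runaway costs occur instead of after.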

#bedrock #organizations #ga #integration

In this post, we demonstrate how Amazon Nova Premier with Amazon Bedrock can systematically migrate legacy C code to modern Java/Spring applications using an intelligent agentic workflow that breaks down complex conversions into specialized agent roles. The solution reduces migration time and costs while improving code quality through automated validation, security assessment, and iterative refinement processes that handle even large codebases exceeding token limitations.
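Handling codebases that exceed token limits comes down to splitting the source into budget-sized chunks. A rough sketch using the common ~4-characters-per-token heuristic; the post's agentic workflow would use the model's real tokenizer and smarter, syntax-aware split points (for example, function boundaries):

```python
def chunk_source(lines, max_tokens, est_tokens=lambda s: max(1, len(s) // 4)):
    """Split source lines into chunks that each fit a model token budget.

    Token counts are estimated at ~4 characters per token, a rough heuristic;
    this is a sketch, not the post's conversion pipeline.
    """
    chunks, current, used = [], [], 0
    for line in lines:
        cost = est_tokens(line)
        if current and used + cost > max_tokens:
            chunks.append(current)      # close the chunk before it overflows
            current, used = [], 0
        current.append(line)
        used += cost
    if current:
        chunks.append(current)
    return chunks

print([len(c) for c in chunk_source(["x" * 8] * 5, max_tokens=4)])  # [2, 2, 1]
```

Each chunk can then be handed to a specialized agent for conversion and the results stitched back together.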

#bedrock #nova #lex #ga

AWS has expanded its Customer Carbon Footprint Tool (CCFT) to include Scope 3 emissions data alongside updated Scope 1 and 2 emissions, giving customers more insight into their carbon impact. The CCFT now tracks emissions from fuel- and energy-related activities (FERA), IT hardware, buildings, equipment, and transportation. AWS customers can access this information and track changes over time through the AWS Billing console.

#now-available #update

Amazon CloudWatch now offers interactive incident report generation, enabling customers to create comprehensive post-incident analysis reports in minutes. The new capability, available within CloudWatch investigations, automatically gathers and correlates your telemetry data, along with your input and any actions taken during an investigation, and produces a streamlined incident report. Using the new feature, you can automatically capture critical operational telemetry, service configurations, and investigation findings to generate detailed reports. Reports include executive summaries, a timeline of events, impact assessments, and actionable recommendations. These reports help you better identify patterns, implement preventive measures, and continuously improve your operational posture through structured post-incident analysis. The incident report generation feature is available in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Spain), and Europe (Stockholm). You can create your first incident report by first creating a CloudWatch investigation and then choosing "Incident report". To learn more about this new feature, visit the CloudWatch incident reports documentation.

#cloudwatch #ga #new-feature #new-capability

Starting today, Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are now available in the US East (Ohio) region. U7i-6tb instances are part of AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-6tb instances offer 6TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-6tb instances offer 448 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.

#ec2 #now-available #support

This post was co-written with Frederic Haase and Julian Blau with BASF Digital Farming GmbH. At xarvio – BASF Digital Farming, our mission is to empower farmers around the world with cutting-edge digital agronomic decision-making tools. Central to this mission is our crop optimization platform, xarvio FIELD MANAGER, which delivers actionable insights through a range […]

#eks

Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode is now available in the AWS GovCloud (US-East) and (US-West) Regions. This feature fully automates compute, storage, and networking management for Kubernetes clusters. Additionally, EKS Auto Mode now supports FIPS-validated cryptographic modules through its Amazon Machine Images (AMIs), helping customers meet FedRAMP compliance requirements and federal security standards for regulated workloads. EKS Auto Mode enables organizations to get Kubernetes-conformant managed compute, networking, and storage for any new or existing EKS cluster. It manages OS patching and updates, and strengthens security posture through ephemeral compute, making it ideal for workloads that require high security standards. It also dynamically scales EC2 instances based on demand, helping optimize compute costs while maintaining application availability. You can enable EKS Auto Mode in any EKS cluster running Kubernetes 1.29 and above with no upfront fees or commitments; you pay for the management of the compute resources provisioned, in addition to your regular EC2 costs. To get started with EKS Auto Mode, visit the Amazon EKS product page. For additional details, see the Amazon EKS User Guide and AWS GovCloud (US) documentation.

#ec2 #rds #eks #organizations #ga #now-available

Today, AWS announced enhanced map styling features for Amazon Location Service, enabling users to further customize maps with terrain visualization, contour lines, real-time traffic data, and transportation-specific routing information. Developers can create more detailed and informative maps tailored for various use cases, such as outdoor navigation, logistics planning, and traffic management, by leveraging parameters like terrain, contour-density, traffic, and travel-mode through the GetStyleDescriptor API. With these styling capabilities, users can overlay real-time traffic conditions, visualize transportation-specific routing information such as transit and trucks, and display topographic features through elevation shading. For instance, developers can display current traffic conditions for optimized route planning, show truck-specific routing restrictions for logistics applications, or create maps that highlight physical terrain details for hiking and outdoor activities. Each feature operates seamlessly, providing enhanced map visualization and reliable performance for diverse use cases. These new map styling features are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (SĂŁo Paulo). To learn more, please visit the Developer Guide.

#ga

Amazon CloudWatch Synthetics introduces multi-check blueprints, enabling customers to create comprehensive synthetic tests using simple JSON configuration files. This new feature addresses the challenge many customers face when developing custom scripts for basic endpoint monitoring, which often lack the depth needed for thorough synthetic testing across various check types like HTTP endpoints with different authentication methods, DNS record validation, SSL certificate monitoring, and TCP port checks. With multi-check blueprints, customers can now bundle up to 10 different monitoring steps, one step per endpoint, in a single canary, making API monitoring more cost-effective and easier to implement. The solution provides built-in support for complex assertions on response codes, latency, headers, and body content, along with seamless integration with AWS Secrets Manager for secure credential handling. Customers benefit from detailed step-by-step results and debugging capabilities through the existing CloudWatch Synthetics console, significantly simplifying the process of implementing comprehensive API monitoring compared to writing individual custom canaries for each check. This feature streamlines monitoring workflows, reduces costs, and enhances the overall efficiency of synthetic monitoring setups. Multi-check blueprints are available in all commercial AWS regions where Amazon CloudWatch Synthetics is offered. For pricing details, see Amazon CloudWatch pricing. To learn more about multi-check blueprints and how to get started, see the CloudWatch Synthetics Canaries Blueprints documentation.
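As a rough illustration, a multi-check configuration might bundle several check types in one JSON file. The field names below are hypothetical, not the documented blueprint schema; consult the CloudWatch Synthetics Canaries Blueprints documentation for the real format:

```json
{
  "checks": [
    {
      "name": "orders-api",
      "type": "http",
      "url": "https://api.example.com/orders",
      "assertions": [
        {"target": "statusCode", "operator": "equals", "value": 200},
        {"target": "latencyMs", "operator": "lessThan", "value": 500}
      ]
    },
    {
      "name": "site-certificate",
      "type": "ssl",
      "hostname": "example.com",
      "minDaysToExpiry": 30
    },
    {
      "name": "dns-record",
      "type": "dns",
      "recordName": "example.com",
      "recordType": "A"
    }
  ]
}
```

The point of the feature is that one canary (and one schedule) covers all of these steps, rather than one custom-scripted canary per endpoint.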

#lex #secrets manager #cloudwatch #new-feature #integration #support

Amazon CloudWatch agent has added support for configurable Windows Event log filters. This new feature allows customers to selectively collect and send system and application events to CloudWatch from Windows hosts running on Amazon EC2 or on-premises. The addition of customizable filters helps customers to focus on events that meet specific criteria, streamlining log management and analysis. Using this new functionality of the CloudWatch agent, you can define filter criteria for each Windows Event log stream in the agent configuration file. The filtering options include event levels, event IDs, and regular expressions to either "include" or "exclude" text within events. The agent evaluates each log event against your defined filter criteria to determine whether it should be sent to CloudWatch. Events that don't match your criteria are discarded. Windows event filters help you to manage your log ingestion by processing only the events you need, such as those containing specific error codes, while excluding verbose or unwanted log entries. Amazon CloudWatch Agent is available in all commercial AWS Regions, and the AWS GovCloud (US) Regions. To get started, see Create or Edit the CloudWatch Agent Configuration File in the Amazon CloudWatch User Guide.
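A sketch of what a filtered Windows Event log stream could look like in the agent configuration file. Treat the exact keys, especially "event_ids" and the "filters" entries, as illustrative assumptions and verify them against the agent configuration file reference:

```json
{
  "logs": {
    "logs_collected": {
      "windows_events": {
        "collect_list": [
          {
            "event_name": "System",
            "event_levels": ["ERROR", "CRITICAL"],
            "event_ids": [7000, 7040],
            "filters": [
              {"type": "exclude", "expression": "noisy-service-name"}
            ],
            "log_group_name": "windows-system-events",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

Events that fail the level, ID, or regex criteria are discarded on the host, so they never count toward CloudWatch Logs ingestion.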

#ec2 #cloudwatch #ga #new-feature #support

AWS announces Amazon DCV 2025.0, the latest version of the high-performance remote display protocol that enables customers to securely access remote desktops and application sessions. This release focuses on enhancing user productivity and security while expanding platform compatibility for diverse use cases. Amazon DCV 2025.0 includes the following key features and improvements:

- Enhanced WebAuthn redirection on Windows and standard browser-based WebAuthn support on Linux, enabling security key authentication (like YubiKeys and Windows Hello) in native Windows and SaaS applications within virtual desktop sessions
- Linux client support for ARM architecture, further broadening compatibility and performance
- Windows Server 2025 support, delivering the latest security standards and enhanced performance on DCV hosts
- Server-side keyboard layout support and layout alignment for Windows clients, enhancing input reliability and consistency
- Scroll wheel optimizations for smoother navigation

For more information about the new features and enhancements in Amazon DCV 2025.0, see the release notes or visit the Amazon DCV webpage to learn more and get started.

#rds #ga #new-feature #improvement #enhancement #support

Amazon S3 adds AWS CloudTrail events for table maintenance activities in Amazon S3 Tables. You can now use AWS CloudTrail to track compaction and snapshot expiration operations performed by S3 Tables on your tables. S3 Tables automatically performs maintenance to optimize query performance and lower costs of your tables stored in S3 table buckets. You can monitor and audit S3 Tables maintenance activities such as compaction and snapshot expiration as management events in AWS CloudTrail. To get started with monitoring, create a trail in the AWS CloudTrail console and filter for 'AwsServiceEvents' as the eventType and 'TablesMaintenanceEvent' as the eventName. AWS CloudTrail events for S3 Tables maintenance are now available in all AWS Regions where S3 Tables are available. To learn more, visit Amazon S3 Tables product page and documentation.
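The same eventType/eventName filter can be applied when post-processing delivered CloudTrail log files. A self-contained sketch with a hand-written sample record; the field values are illustrative, not captured from a real trail:

```python
import json

# A hand-written sample shaped like a CloudTrail log file; values are made up.
sample = json.dumps({
    "Records": [
        {
            "eventType": "AwsServiceEvents",
            "eventName": "TablesMaintenanceEvent",
            "eventSource": "s3tables.amazonaws.com",
            "eventTime": "2025-06-01T12:00:00Z",
            "serviceEventDetails": {"maintenanceType": "compaction"}
        },
        {"eventType": "AwsApiCall", "eventName": "GetObject"}
    ]
})

def maintenance_events(trail_json):
    """Keep only the S3 Tables maintenance events from a CloudTrail log file."""
    records = json.loads(trail_json)["Records"]
    return [
        r for r in records
        if r.get("eventType") == "AwsServiceEvents"
        and r.get("eventName") == "TablesMaintenanceEvent"
    ]

print(len(maintenance_events(sample)))  # 1
```

The same two fields are what you would filter on in the CloudTrail console, as described above.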

#s3 #now-available

Amazon Redshift auto-copy is now available in the AWS Asia Pacific (Malaysia), Asia Pacific (Thailand), Mexico (Central), and Asia Pacific (Taipei) Regions. With auto-copy, you can set up continuous file ingestion from an Amazon S3 prefix and automatically load new files into tables in your Amazon Redshift data warehouse without the need for additional tools or custom solutions. Previously, Amazon Redshift customers had to build their data pipelines using COPY commands to automate continuous loading of data from S3 to Amazon Redshift tables. With auto-copy, you can now set up an integration that automatically detects and loads new files in a specified S3 prefix to Redshift tables. Auto-copy jobs keep track of previously loaded files and exclude them from the ingestion process. You can monitor auto-copy jobs using system tables. To learn more, see the documentation or check out the AWS Blog.
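An auto-copy job is defined by attaching a job clause to an ordinary COPY command. A hedged sketch; the bucket, role ARN, and table are placeholders, and the exact clause syntax should be checked against the Redshift COPY JOB documentation:

```sql
-- Sketch: create an auto-copy job that watches an S3 prefix for new files.
COPY public.sales
FROM 's3://my-bucket/sales-data/'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
FORMAT AS CSV
JOB CREATE sales_autocopy_job
AUTO ON;
```

Once created, the job tracks which files it has already loaded, so reruns do not duplicate rows.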

#s3 #redshift #now-available #integration

In this post, we detail how Metagenomi partnered with AWS to implement the Progen2 protein language model on AWS Inferentia, achieving up to 56% cost reduction for high-throughput enzyme generation workflows. The implementation enabled cost-effective generation of millions of novel enzyme variants using EC2 Inf2 Spot Instances and AWS Batch, demonstrating how cloud-based generative AI can make large-scale protein design more accessible for biotechnology applications.

#inferentia #ec2

Amazon Relational Database Service (Amazon RDS) for SQL Server now allows maintaining Change Data Capture (CDC) settings and metadata when restoring native database backups. CDC is a Microsoft SQL Server feature that customers can use to record insert, update, and delete operations occurring in a database table, and make these changes accessible to applications. When a database is restored from a backup, CDC configurations and data are not preserved by default, which can result in gaps in data capture. With this new feature, customers can preserve their database CDC settings when restoring a database backup to a new instance, or a different database name. To retain CDC configurations, customers can specify the KEEP_CDC option when restoring a database backup. This option ensures that the CDC metadata and any captured change data are kept intact. Refer to the Amazon RDS for SQL Server User Guide to learn more about KEEP_CDC. This feature is available in all AWS Regions where Amazon RDS for SQL Server is available.

#rds #ga #new-feature #update #support

Today, AWS’ Customer Carbon Footprint Tool (CCFT) has been updated to include Scope 3 emissions data and Scope 1 natural gas and refrigerants, providing AWS customers more complete visibility into their cloud carbon footprint. This update expands the CCFT to cover all three industry-standard emission scopes as defined by the Greenhouse Gas Protocol. The CCFT Scope 3 update gives AWS customers full visibility into the lifecycle carbon impact of their AWS usage, including emissions from manufacturing the servers that run their workloads, powering AWS facilities, and transporting equipment to data centers. Historical data is available back to January 2022, allowing organizations to track their progress over time and make informed decisions about their cloud strategy to meet their sustainability goals. This data is available through the CCFT dashboard and AWS Billing and Cost Management Data Exports, enabling customers to easily incorporate carbon insights into their operational workflows, sustainability planning, and reporting processes. To learn more about the enhanced Customer Carbon Footprint Tool, visit the CCFT Website, AWS Billing and Cost Management console or read the updated methodology documentation and release notes.

#organizations #ga #update

AWS Graviton4-based R8g database instances are now generally available for Amazon DocumentDB (with MongoDB compatibility). R8g instances are powered by AWS Graviton4 processors and feature the latest DDR5 memory, making them ideal for memory-intensive workloads. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. Customers can get started with R8g instances through the AWS Management Console, CLI, and SDK by modifying their existing Amazon DocumentDB database cluster or creating a new one. R8g instances are available for Amazon DocumentDB 5.0 on both Standard and IO-Optimized cluster storage configurations. For more information, including Region availability, visit our pricing page and documentation.

#graviton #generally-available #support

Amazon S3 Metadata is now available in three additional AWS Regions: Europe (Frankfurt), Europe (Ireland), and Asia Pacific (Tokyo). Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily-queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like size and source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating. S3 Metadata automatically populates metadata for both new and existing objects, providing you with a comprehensive, queryable view of your data. With this expansion, S3 Metadata is now generally available in six AWS Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS Storage Blog.

#s3 #generally-available #now-available #update #support #expansion

AWS Parallel Computing Service (PCS) now supports rotation of cluster secret keys using AWS Secrets Manager, enabling you to update the secure credentials used for authentication between Slurm controller and compute nodes without creating a new cluster. Regularly rotating your Slurm cluster secret keys strengthens your security posture by reducing the risk of credential compromise and ensuring compliance with best practices. This helps keep your HPC workloads and accounting data safe from unauthorized access. PCS is a managed service that makes it easier to run and scale high performance computing (HPC) workloads on AWS using Slurm. With the support of cluster secret rotation in PCS, you can strengthen your security controls and maintain operational efficiency. You can now implement secret rotation as part of your security best practices while maintaining cluster continuity. This feature is available in all AWS Regions where PCS is available. You can rotate cluster secrets using either the AWS Secrets Manager console or API after preparing your cluster for the rotation process. Read more about PCS support for cluster secret rotation in the PCS User Guide.

#secrets manager #update #support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex instances, which deliver up to 19% better price performance than C6i instances, are available in the Asia Pacific (Jakarta) Region. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price performance benefits for a majority of compute-intensive workloads. The new instances are powered by the 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids) that are available only on AWS, and offer 5% lower prices compared to C7i. C7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. For compute-intensive workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage, you can leverage C7i instances. To learn more, visit Amazon EC2 C7i-flex instances. To get started, see the AWS Management Console.

#lex #ec2 #kafka #now-available

Amazon MQ is now available in the AWS Asia Pacific (New Zealand) Region with three Availability Zones and API name ap-southeast-6. With this launch, Amazon MQ is now available in a total of 38 regions. Amazon MQ is a managed message broker service for open-source Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite code. For more information, please visit the Amazon MQ product page, and see the AWS Region Table for complete regional availability.

#launch #now-available

AWS Nitro Enclaves is an Amazon EC2 capability that enables customers to create isolated compute environments (enclaves) to further protect and securely process highly sensitive data within their EC2 instances. Nitro Enclaves helps customers reduce the attack surface area for their most sensitive data processing applications. There is no additional cost beyond the cost of the Amazon EC2 instances and any other AWS services used with Nitro Enclaves. Nitro Enclaves is now available across all AWS Regions, expanding to include new Regions in Asia Pacific (New Zealand, Thailand, Jakarta, Hyderabad, Malaysia, Melbourne, and Taipei), Europe (Spain and Zurich), Middle East (UAE and Tel Aviv), and North America (Central Mexico and Calgary). To learn more about AWS Nitro Enclaves and how to get started, visit the AWS Nitro Enclaves page.

#ec2 #ga #now-available #new-region

Stifel Financial Corp, a diversified financial services holding company, is expanding a data landscape that requires an orchestration solution capable of managing increasingly complex data pipeline operations across multiple business domains. Traditional time-based scheduling systems fall short in addressing the dynamic interdependencies between data products, which calls for event-driven orchestration. Key challenges include coordinating cross-domain dependencies, maintaining data consistency across business units, meeting stringent SLAs, and scaling effectively as data volumes grow. Without a flexible orchestration solution, these issues can lead to delayed business operations and insights, increased operational overhead, and heightened compliance risks due to manual interventions and rigid scheduling mechanisms that cannot adapt to evolving business needs. In this post, we walk through how Stifel Financial Corp, in collaboration with AWS ProServe, addressed these challenges by building a modular, event-driven orchestration solution using AWS native services that enables precise triggering of data pipelines based on dependency satisfaction, supporting near real-time responsiveness and cross-domain coordination.
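Dependency-satisfaction triggering, at its core, means a pipeline fires only when every upstream dataset has reported completion. A toy in-memory sketch; the solution in the post implements this idea with AWS-native, event-driven services, and the dataset and pipeline names here are hypothetical:

```python
class PipelineTrigger:
    """Fire a pipeline only once every upstream dependency has reported success.

    An in-memory sketch of dependency-satisfaction triggering, not the
    production design described in the post.
    """

    def __init__(self, pipeline, dependencies, on_ready):
        self.pipeline = pipeline
        self.pending = set(dependencies)   # upstream datasets not yet produced
        self.on_ready = on_ready

    def handle_event(self, completed_dataset):
        """Process a 'dataset ready' event; trigger when nothing is pending."""
        self.pending.discard(completed_dataset)
        if not self.pending:
            self.on_ready(self.pipeline)

fired = []
trigger = PipelineTrigger("risk-report", {"trades", "positions"}, fired.append)
trigger.handle_event("trades")      # still waiting on "positions"
trigger.handle_event("positions")   # all dependencies satisfied -> fire
print(fired)  # ['risk-report']
```

Contrast this with a fixed schedule: the pipeline runs as soon as its inputs exist, neither earlier (risking incomplete data) nor later (wasting SLA headroom).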

#lex #support

At Twilio, we manage a 20 petabyte-scale Amazon S3 data lake that serves the analytics needs of over 1,500 users, processing 2.5 million queries monthly and scanning an average of 85 PB of data. To meet our growing demands for scalability, emerging technology support, and data mesh architecture adoption, we built Odin, a multi-engine query platform that provides an abstraction layer built on top of Presto Gateway. In this post, we discuss how we designed and built Odin, combining Amazon Athena with open-source Presto to create a flexible, scalable data querying solution.
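The abstraction-layer idea behind a multi-engine gateway can be sketched as a routing function that inspects a query and picks an engine. This is a toy illustration only; Odin's real routing considers richer signals (user, workload, table metadata), and the rules and engine names here are hypothetical:

```python
def route_query(sql, engines):
    """Choose a query engine for a SQL statement using simple rules.

    A toy sketch of gateway-style routing; the rules and engine names
    are made up for illustration.
    """
    q = sql.lower()
    if "approx_" in q:
        return engines["presto"]   # interactive, approximate analytics
    if q.lstrip().startswith("create table") or "unload" in q:
        return engines["athena"]   # batch-style output jobs
    return engines["default"]

engines = {
    "presto": "presto-cluster-1",
    "athena": "athena-workgroup-a",
    "default": "athena-workgroup-a",
}
print(route_query("SELECT approx_distinct(user_id) FROM events", engines))
```

Because clients talk to the router rather than to any engine directly, new engines can be added behind the same interface without changing user code.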

#lex #s3 #athena #ga #support

In this post, we walk through how to take an ML model built in SageMaker Canvas and deploy it using SageMaker Serverless Inference, helping you go from model creation to production-ready predictions quickly and efficiently without managing any infrastructure. This solution demonstrates a complete workflow from adding your trained model to the SageMaker Model Registry through creating serverless endpoint configurations and deploying endpoints that automatically scale based on demand.

#sagemaker #canvas

In this post, we explore how Amazon Nova Sonic's speech-to-speech capabilities can be combined with Amazon Bedrock AgentCore to create sophisticated multi-agent voice assistants that break complex tasks into specialized, manageable components. The approach demonstrates how to build modular, scalable voice applications using a banking assistant example with dedicated sub-agents for authentication, banking inquiries, and mortgage services, offering a more maintainable alternative to monolithic voice assistant designs.

#bedrock #agentcore #nova #lex #ga

In this post, we demonstrate how to deploy and manage machine learning training workloads using the Amazon SageMaker HyperPod training operator, which enhances training resilience for Kubernetes workloads through pinpoint recovery and customizable monitoring capabilities. The Amazon SageMaker HyperPod training operator helps accelerate generative AI model development by efficiently managing distributed training across large GPU clusters, offering benefits like centralized training process monitoring, granular process recovery, and hanging job detection that can reduce recovery times from tens of minutes to seconds.

#sagemaker #hyperpod

Today, Amazon Simple Email Service (SES) added visibility into the IP addresses used by Dedicated IP Addresses - Managed (DIP-M) pools. Customers can now find the exact addresses in use when sending emails through DIP-M pools to mailbox providers. Customers can also see Microsoft Smart Network Data Services (SNDS) metrics for these IP addresses, giving them more insight into their sending reputation with Microsoft mailbox providers. This gives customers more transparency into the IP activities in DIP-M pools. Previously, customers could configure DIP-M pools to perform automatic IP allocation and warm-up in response to changes in email sending volumes. This reduced the operational overhead of managing dedicated sending channels, but customers could not easily see which IP addresses were in use by DIP-M pools. This also made it difficult to find SNDS feedback, which customers use to improve their reputation. Now, customers can see the IPs in DIP-M pools through the console, CLI, or SES API. SES also automatically creates CloudWatch metrics for SNDS information on each IP address, which customers can access through the CloudWatch console or APIs. This gives customers more tools to monitor their sending reputation. SES supports DIP-M IP observability in all AWS Regions where SES is available. For more information about DIP-M pools, see the documentation.

#cloudwatch #support

On October 21, 2025, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) versions of OpenJDK. Corretto 25.0.1, 21.0.9, 17.0.17, 11.0.29, and 8u472 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. This release of Corretto JDK binaries for generic Linux, Alpine, and macOS includes Async-Profiler, a low-overhead sampling profiler for Java supported by the Amazon Corretto team. Async-Profiler is designed to provide profiling data for CPU time, allocations in the Java heap, native memory allocations and leaks, contended locks, hardware and software performance counters like cache misses, page faults, and context switches, Java method profiling, and much more. Visit the Corretto home page to download Corretto 25, Corretto 21, Corretto 17, Corretto 11, or Corretto 8. You can also get the updates on your Linux system by configuring a Corretto Apt, Yum, or Apk repository. Feedback is welcome!

#now-available #update #support

Amazon Bedrock Data Automation (BDA) now supports AVI, MKV, and WEBM file formats along with the AV1 and MPEG-4 Visual (Part 2) codecs, enabling you to generate structured insights across a broader range of video content. Additionally, BDA delivers up to 50% faster image processing. BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With support for AVI, MKV, and WEBM formats, you can now analyze content from archival footage, high-quality video archives with multiple audio tracks and subtitles, and web-based and open-source video content. This expanded video format and codec support enables you to process video content directly in the formats your organization uses, streamlining your workflows and accelerating time-to-insight. With faster image processing on BDA, you can extract insights from visual content faster than ever before. You can now analyze larger volumes of images in less time, helping you scale your AI applications and deliver value to your customers more quickly. Amazon Bedrock Data Automation is available in 8 AWS Regions: Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), Asia Pacific (Sydney), US West (Oregon), US East (N. Virginia), and AWS GovCloud (US-West). To learn more, see the Bedrock Data Automation User Guide and the Amazon Bedrock Pricing page. To get started with using Bedrock Data Automation, visit the Amazon Bedrock console.

#bedrock #ecs #ga #support

Amazon CloudWatch Database Insights expands the availability of its on-demand analysis experience to the RDS for SQL Server database engine. CloudWatch Database Insights is a monitoring and diagnostics solution that helps database administrators and developers optimize database performance by providing comprehensive visibility into database metrics, query analysis, and resource utilization patterns. This feature leverages machine learning models to help identify performance bottlenecks during the selected time period, and gives advice on what to do next. Previously, database administrators had to manually analyze performance data, correlate metrics, and investigate root cause. This process is time-consuming and requires deep database expertise. With this launch, you can now analyze database performance monitoring data for any time period with automated intelligence. The feature automatically compares your selected time period against normal baseline performance, identifies anomalies, and provides specific remediation advice. Through intuitive visualizations and clear explanations, you can quickly identify performance issues and receive step-by-step guidance for resolution. This automated analysis and recommendation system reduces mean-time-to-diagnosis from hours to minutes. You can get started with this feature by enabling the Advanced mode of CloudWatch Database Insights on your RDS for SQL Server databases using the RDS service console, AWS APIs, the AWS SDK, or AWS CloudFormation. Please refer to RDS documentation and Aurora documentation for information regarding the availability of Database Insights across different regions, engines and instance classes.

rdscloudformationcloudwatch
#rds#cloudformation#cloudwatch#launch#ga

Amazon Nova models now support the customization of content moderation settings for approved business use cases that require processing or generating sensitive content. Organizations with approved business use cases can adjust content moderation settings across four domains: safety, sensitive content, fairness, and security. Customers can tune the specific controls relevant to their business requirements. Amazon Nova enforces essential, non-configurable controls to ensure responsible use of AI, such as controls to prevent harm to children and preserve privacy. Customization of content moderation settings is available for Amazon Nova Lite and Amazon Nova Pro in the US East (N. Virginia) Region. To learn more about Amazon Nova, visit the Amazon Nova product page, and to learn about Amazon Nova responsible use of AI, visit the AWS AI Service Cards or see the User Guide. To see if your business model is appropriate to customize content moderation settings, contact your AWS Account Manager.

novardsorganizations
#nova#rds#organizations#ga#support

Starting today, Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are now available in the Europe (London) Region. U7i-6tb instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-6tb instances offer 6TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-6tb instances offer 448 vCPUs, support up to 100 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.

ec2
#ec2#now-available#support

Amazon Elastic Container Service (Amazon ECS) now supports AWS CloudTrail data events, providing detailed visibility into Amazon ECS Agent API activities. This new capability enables customers to monitor, audit, and troubleshoot container instance operations. With CloudTrail data event support, security and operations teams can now maintain comprehensive audit trails of ECS Agent API activities, detect unusual access patterns, and troubleshoot agent communication issues more effectively. Customers can opt in to receive detailed logging through the new data event resource type AWS::ECS::ContainerInstance for ECS agent activities, including when the ECS agent polls for work (ecs:Poll), starts telemetry sessions (ecs:StartTelemetrySession), and submits ECS Managed Instances logs (ecs:PutSystemLogEvents). This enhanced visibility enables teams to better understand how container instance roles are utilized, meet compliance requirements for API activity monitoring, and quickly diagnose operational issues related to agent communications. This new feature is available for Amazon ECS on EC2 in all AWS Regions and ECS Managed Instances in select regions. Standard CloudTrail data event charges apply. To learn more, visit the Developer Guide.
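The opt-in is expressed through CloudTrail advanced event selectors. As a minimal sketch (assuming the standard CloudTrail PutEventSelectors field names; the selector name is a placeholder and the trail wiring is left as a comment), targeting the new resource type might look like this:

```python
import json

# Sketch of a CloudTrail advanced event selector that opts in to the new
# AWS::ECS::ContainerInstance data events. Field names follow the CloudTrail
# PutEventSelectors API shape; verify against your trail configuration.
selector = {
    "Name": "ECS agent data events",  # placeholder selector name
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::ECS::ContainerInstance"]},
    ],
}

# With boto3, this would be applied roughly as:
#   boto3.client("cloudtrail").put_event_selectors(
#       TrailName="my-trail", AdvancedEventSelectors=[selector])
print(json.dumps(selector, indent=2))
```

Once enabled, the trail records the ecs:Poll, ecs:StartTelemetrySession, and ecs:PutSystemLogEvents activities described above as data events.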

ec2ecs
#ec2#ecs#new-feature#support#new-capability

AWS Parallel Computing Service (PCS) now supports Slurm v25.05. You can now create AWS PCS clusters running the newer Slurm v25.05. The release of Slurm v25.05 in PCS provides new Slurm functionality, including enhanced multi-cluster sackd configuration and improved requeue behavior for instance launch failures. With this release, login nodes can now control multiple clusters without requiring sackd reconfiguration or restart. This enables administrators to pre-configure access to multiple clusters for their users. The new requeue behavior enables more resilient job scheduling by automatically retrying failed instance launches during capacity shortages, thus increasing overall cluster reliability. AWS PCS is a managed service that makes it easier for you to run and scale your high performance computing (HPC) workloads on AWS using Slurm. To learn more about PCS, refer to the service documentation and AWS Region Table.

#launch#support

Amazon CloudWatch Database Insights now supports tag-based access control for database and per-query metrics powered by RDS Performance Insights. You can implement access controls across a logical grouping of database resources without managing individual resource-level permissions. Previously, tags defined on RDS and Aurora instances did not apply to metrics powered by Performance Insights, creating significant overhead in manually configuring metric-related permissions at the database resource level. With this launch, those instance tags are now automatically evaluated to authorize metrics powered by Performance Insights. This allows you to define IAM policies using tag-based access conditions, resulting in improved governance and security consistency. Please refer to RDS and Aurora documentation to get started with defining IAM policies with tag-based access control on database and per-query metrics. This feature is available in all AWS regions where CloudWatch Database Insights is available. CloudWatch Database Insights delivers database health monitoring aggregated at the fleet level, as well as instance-level dashboards for detailed database and SQL query analysis. It offers vCPU-based pricing – see the pricing page for details. For further information, visit the Database Insights User Guide.
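As a rough illustration of what such a tag-based policy might look like (the Performance Insights actions and the aws:ResourceTag condition key shown here are plausible examples rather than the definitive set; consult the RDS and Aurora documentation for the exact actions and keys your case requires):

```python
import json

# Hypothetical IAM policy sketch: allow Performance Insights metric reads
# only on database resources tagged Team=analytics. Action names and the
# condition key are illustrative assumptions for this sketch.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["pi:GetResourceMetrics", "pi:DescribeDimensionKeys"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/Team": "analytics"}
            },
        }
    ],
}
print(json.dumps(policy, indent=2))
```

With instance tags now evaluated for Performance Insights metrics, a single condition like this replaces per-resource permission management across the tagged fleet.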

rdsiamcloudwatch
#rds#iam#cloudwatch#launch#ga#support

In this post, we show how Splash Music is setting a new standard for AI-powered music creation by using its advanced HummingLM model with AWS Trainium on Amazon SageMaker HyperPod. As a selected startup in the 2024 AWS Generative AI Accelerator, Splash Music collaborated closely with AWS Startups and the AWS Generative AI Innovation Center (GenAIIC) to fast-track innovation and accelerate their music generation FM development lifecycle.

novasagemakerhyperpodtrainium
#nova#sagemaker#hyperpod#trainium

Today, AWS is announcing the general availability of Amazon EC2 Capacity Manager, a new capability that enables customers to monitor, analyze, and manage EC2 capacity across all of their accounts and Regions. This new capability simplifies resource management using a single interface. EC2 Capacity Manager offers customers a comprehensive view of On-Demand, Spot, and Capacity Reservation usage across their accounts and Regions. The new service features dashboards and charts that present high-level insights while allowing customers to drill down into specific details where needed. These details include historical usage trends to help customers gain a better understanding of their capacity patterns over time, as well as optimization opportunities to guide informed capacity decisions, complete with workflows for implementing these insights. In addition to the updated user interface and APIs, EC2 Capacity Manager allows customers to export data, enabling integration with their existing systems. EC2 Capacity Manager is available in all commercial AWS Regions, enabled by default at no additional cost. To learn more, visit the EC2 Capacity Manager user guide, read the AWS News Blog, or get started using EC2 Capacity Manager in the AWS console.

ec2rds
#ec2#rds#ga#update#integration#new-capability

Amazon OpenSearch Service now supports latest generation Graviton4-based Amazon EC2 instance families. These new instance types are compute optimized (C8g), general purpose (M8g), and memory optimized (R8g, R8gd) instances. AWS Graviton4 processors provide up to 30% better performance than AWS Graviton3 processors, with C8g, M8g, and R8g/R8gd offering the best price performance for compute-intensive, general-purpose, and memory-intensive workloads, respectively. To learn more about Graviton4 improvements, please see the blog on R8g instances and the blog on C8g and M8g instances. Amazon OpenSearch Service Graviton4 instances are supported on all OpenSearch versions, and Elasticsearch (open source) versions 7.9 and 7.10. One or more Graviton4 instance types are now available on Amazon OpenSearch Service across 23 Regions globally: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Mumbai), Asia Pacific (Malaysia), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Thailand), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Spain), Europe (Stockholm), South America (Sao Paulo), and AWS GovCloud (US-West). For Region-specific availability and pricing, visit our pricing page. To learn more about Amazon OpenSearch Service and its capabilities, visit our product page.

ec2opensearchopensearch servicegraviton
#ec2#opensearch#opensearch service#graviton#ga#now-available

AWS Systems Manager announces the launch of security updates notification for Windows patching compliance, which helps customers identify security updates that are available but not approved by their patch baseline configuration. This feature introduces a new patch state called "AvailableSecurityUpdate" that reports security patches of all severity levels that are available to install on Windows instances but do not meet the approval rules in your patch baseline. As organizations grow, administrators need to maintain secure systems while controlling when patches are applied. The security updates notification helps prevent situations where customers could unintentionally leave instances unpatched when using features like ApprovalDelay with large values. By default, instances with available security updates are marked as Non-Compliant, providing a clear signal that security patches require attention. Customers can also configure this behavior through their patch baseline settings to maintain existing compliance reporting if preferred. This feature is available in all AWS Regions where AWS Systems Manager is available. To get started with security updates notification for Windows patching compliance, visit the AWS Systems Manager Patch Manager console. For more information about this feature, refer to our user documentation or update your patch baseline with the details here. There are no additional charges for using this feature beyond standard AWS Systems Manager pricing.
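As a mental model of the default behavior, the sketch below flags instances that report any available-but-unapproved security updates; the field names are illustrative simplifications of what the Patch Manager APIs return, not exact response shapes:

```python
# Illustrative sketch: flag instances reporting the new
# "AvailableSecurityUpdate" patch state as needing attention. The input
# mimics (and simplifies) per-instance patch-state summaries; the
# "AvailableSecurityUpdateCount" field name is an assumption for this sketch.
patch_states = [
    {"InstanceId": "i-0aaa", "AvailableSecurityUpdateCount": 0},
    {"InstanceId": "i-0bbb", "AvailableSecurityUpdateCount": 3},
]

# By default, any available-but-unapproved security update marks the
# instance Non-Compliant, giving a clear signal without auto-installing.
non_compliant = [
    s["InstanceId"] for s in patch_states
    if s["AvailableSecurityUpdateCount"] > 0
]
print(non_compliant)
```

This mirrors the announcement's default: visibility into pending security patches without overriding the approval rules (such as ApprovalDelay) in your baseline.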

organizations
#organizations#launch#ga#update

AWS announces support for customer managed AWS Key Management Service (KMS) keys in Automated Reasoning checks in Amazon Bedrock Guardrails. This enhancement enables you to use your own encryption keys to protect policy content and tests, giving you full control over key management. Automated Reasoning checks in Amazon Bedrock Guardrails is the first and only generative AI safeguard that helps correct factual errors from hallucinations using logically accurate and verifiable reasoning that explains why responses are correct. This feature enables organizations in regulated industries like healthcare, financial services, and government to adopt Automated Reasoning checks while meeting compliance requirements for customer-owned encryption keys. For example, a financial institution can now use Automated Reasoning checks to validate loan processing guidelines while maintaining full control over the encryption keys protecting their policy content. When creating an Automated Reasoning policy, you can now select a customer managed KMS key to encrypt your content rather than using the default key. Customer managed KMS key support for Automated Reasoning checks is available in all AWS Regions where Amazon Bedrock Guardrails is offered: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), and Europe (Paris). To get started, see the Automated Reasoning checks user guide, the Amazon Bedrock Guardrails product page, and the AWS Key Management Service developer guide, or create an Automated Reasoning policy in the Bedrock console.

bedrockorganizations
#bedrock#organizations#ga#now-available#enhancement#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g instances are available in AWS Europe (Milan), and AWS Asia Pacific (Hong Kong, Osaka, Melbourne) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 C8g instances are built for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C8g Instances. To get started, see the AWS Management Console.

ec2graviton
#ec2#graviton#ga#now-available

Organizations often face challenges when implementing single-shot fine-tuning approaches for their generative AI models. The single-shot fine-tuning method involves selecting training data, configuring hyperparameters, and hoping the results meet expectations without the ability to make incremental adjustments. Single-shot fine-tuning frequently leads to suboptimal results and requires starting the entire process from scratch when improvements are […]

bedrockorganizations
#bedrock#organizations#ga#improvement

In this post, you learn how to implement blue/green deployments by using Amazon API Gateway for your APIs. For this post, we use AWS Lambda functions on the backend. However, you can follow the same strategy for other backend implementations of the APIs. All the required infrastructure is deployed by using AWS Serverless Application Model (AWS SAM).

lambdaapi gateway
#lambda#api gateway#ga

In this post, we introduce an alternative architecture to synchronize mainframe data to the cloud using Amazon Managed Streaming for Apache Kafka (Amazon MSK) for greater flexibility and scalability. This event-driven approach provides additional possibilities for mainframe data integration and modernization strategies.

lexkafkamsk
#lex#kafka#msk#integration

AWS Marketplace now supports purchase order line numbers for AWS Marketplace purchases, simplifying cost allocation and payment processing. This launch makes it easier for customers to process and pay invoices. AWS purchase order support allows customers to provide purchase orders per transaction, which reflect on invoices related to that purchase. Now, customers can associate transaction charges not only to purchase orders, but also to a specific PO line number for AWS Marketplace purchases. This capability is supported during procurement and, for future charges, post-procurement in the AWS Marketplace console. You can also view the purchase order and purchase order line number associated to an AWS invoice in the AWS Billing and Cost Management console. Streamline your invoice processing by accurately matching AWS invoices with your purchase order and purchase order line number. This capability is available today in all AWS Regions where AWS Marketplace is supported. To learn about transaction purchase orders for AWS Marketplace, view the AWS Marketplace buyer guide. For information on using blanket purchase orders with AWS, refer to the AWS Billing Documentation.

#launch#support

Amazon Timestream for InfluxDB now offers support for InfluxDB 3. Now application developers and DevOps teams can run InfluxDB 3 databases as a managed service. InfluxDB 3 uses a new architecture for the InfluxDB database engine, built on Apache Arrow for in-memory data processing, Apache DataFusion for query execution, and the columnar Parquet storage format with data persistence in Amazon S3, delivering fast performance for high-cardinality data and large-scale processing for analytical workloads. With Amazon Timestream for InfluxDB 3, customers can leverage improved query performance and resource utilization for data-intensive use cases while benefiting from virtually unlimited storage capacity through S3-based object storage. The service is available in two editions: Core, the open source version of InfluxDB 3, for near real-time workloads focused on recent data, and Enterprise for production workloads requiring high availability, multi-node deployments, and essential compaction capabilities for long-term storage. The Enterprise edition supports multi-node cluster configurations with up to 3 nodes initially, providing enhanced availability, improved performance for concurrent queries, and greater system resilience. Amazon Timestream for InfluxDB 3 is available in all Regions where Timestream for InfluxDB is available. See here for a full listing of our Regions. To get started with Amazon Timestream for InfluxDB 3, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.

s3
#s3#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the AWS Asia Pacific (Malaysia, Sydney, Thailand) Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards, and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances. Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference. For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. C8gn instances are available in the following AWS Regions: US East (N. Virginia), US West (Oregon, N. California), Europe (Frankfurt, Stockholm), and Asia Pacific (Singapore, Malaysia, Sydney, Thailand). To learn more, see Amazon C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2rdsgraviton
#ec2#rds#graviton#ga#now-available#support

AWS Security Hub Cloud Security Posture Management (CSPM) now supports the Center for Internet Security (CIS) AWS Foundations Benchmark v5.0. This industry-standard benchmark provides security configuration best practices for AWS with clear implementation and assessment procedures. The new standard includes 40 controls that perform automated checks against AWS resources to evaluate compliance with the latest version 5.0 requirements. The standard is now available in all AWS Regions where Security Hub CSPM is currently available, including the AWS GovCloud (US) and the China Regions. To quickly enable the standard across your AWS environment, we recommend that you use Security Hub CSPM central configuration. With this approach, you can enable the standard in all or only some of your organization's accounts and across all AWS Regions that are linked to Security Hub CSPM with a single action. To learn more, see CIS v5.0 in the AWS Security Hub CSPM User Guide. To receive notifications about new Security Hub CSPM features and controls, subscribe to the Security Hub CSPM SNS topic. You can also try Security Hub at no cost for 30 days with the AWS Free Tier offering.

sns
#sns#ga#now-available#support

Amazon DocumentDB (with MongoDB compatibility) now offers customers the option to use Internet Protocol version 6 (IPv6) addresses on new and existing clusters. Customers moving to IPv6 can simplify their network stack by running their databases on a dual-stack network that supports both IPv4 and IPv6. IPv6 increases the number of available addresses and customers no longer need to manage overlapping IPv4 address spaces in their VPCs (Virtual Private Cloud). Customers can standardize their applications on the new version of Internet Protocol by moving to dual-stack mode (supporting both IPv4 and IPv6) with a few clicks in the AWS Management Console or directly using the AWS CLI. Amazon DocumentDB (with MongoDB compatibility) is a fully managed, native JSON database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. Amazon DocumentDB support for IPv6 is generally available on version 4.0 and 5.0 in AWS Regions listed in Dual-stack mode Region and version availability. To learn more about configuring your environment for IPv6, please refer to Amazon VPC and Amazon DocumentDB.

#generally-available#support

As analytical demands grow, many customers are upgrading from DC2 to RA3 or Amazon Redshift Serverless, which offer independent compute and storage scaling, along with advanced capabilities such as data sharing, zero-ETL integration, and built-in artificial intelligence and machine learning (AI/ML) support with Amazon Redshift ML. This post provides a practical guide to plan your target architecture and migration strategy, covering upgrade options, key considerations, and best practices to facilitate a successful and seamless transition.

redshift
#redshift#integration#support

In this post, we share four high-impact, widely adopted use cases built with Nova in Amazon Bedrock, supported by real-world customer deployments, offerings available from AWS partners, and hands-on experience. These examples are ideal for organizations researching their own AI adoption strategies and use cases across industries.

bedrocknovaorganizations
#bedrock#nova#organizations#ga#support

In this post, we explore how Amazon Bedrock AgentCore Memory transforms raw conversational data into persistent, actionable knowledge through sophisticated extraction, consolidation, and retrieval mechanisms that mirror human cognitive processes. The system tackles the complex challenge of building AI agents that don't just store conversations but extract meaningful insights, merge related information across time, and maintain coherent memory stores that enable truly context-aware interactions.

bedrockagentcorelex
#bedrock#agentcore#lex

AWS today announced Amazon WorkSpaces Core Managed Instances availability in US East (Ohio), Asia Pacific (Malaysia), Asia Pacific (Hong Kong), Middle East (UAE), and Europe (Spain), bringing Amazon WorkSpaces capabilities to these AWS Regions for the first time. WorkSpaces Core Managed Instances in these Regions is supported by partners including Citrix, Workspot, Leostream, and Dizzion. Amazon WorkSpaces Core Managed Instances simplifies virtual desktop infrastructure (VDI) migrations with highly customizable instance configurations. WorkSpaces Core Managed Instances provisions resources in your AWS account, handling infrastructure lifecycle management for both persistent and non-persistent workloads. Managed Instances provide flexibility for organizations requiring specific compute, memory, or graphics configurations. With WorkSpaces Core Managed Instances, you can use existing discounts, Savings Plans, and other features like On-Demand Capacity Reservations (ODCRs), with the operational simplicity of WorkSpaces - all within the security and governance boundaries of your AWS account. This solution is ideal for organizations migrating from on-premises VDI environments or existing AWS customers seeking enhanced cost optimization without sacrificing control over their infrastructure configurations. You can use a broad selection of instance types, including accelerated graphics instances, while your Core partner solution handles desktop and application provisioning and session management through familiar administrative tools. Customers will incur standard compute costs along with an hourly fee for WorkSpaces Core. See the WorkSpaces Core pricing page for more information. To learn more about Amazon WorkSpaces Core Managed Instances, visit the product page. For technical documentation and getting started guides, see the Amazon WorkSpaces Core Documentation.

lexorganizations
#lex#organizations#ga#now-available#support

Claude Haiku 4.5 is now available in Amazon Bedrock. Claude Haiku 4.5 delivers near-frontier performance matching Claude Sonnet 4's capabilities in coding, computer use, and agent tasks at substantially lower cost and faster speeds, making state-of-the-art AI accessible for scaled deployments and budget-conscious applications. The model's enhanced speed makes it ideal for latency-sensitive applications like real-time customer service agents and chatbots where response time is critical. For computer use tasks, Haiku 4.5 delivers significant performance improvements over previous models, enabling faster and more responsive applications. This model supports vision and unlocks new use cases where customers previously had to choose between performance and cost. It enables economically viable agent experiences, supports multi-agent systems for complex coding projects, and powers large-scale financial analysis and research applications. Haiku 4.5 maintains Claude's unique character while delivering the performance and efficiency needed for production deployments. Claude Haiku 4.5 is now available in Amazon Bedrock via global cross region inference in multiple locations. To view the full list of available regions, refer to the documentation. To get started with Haiku 4.5 in Amazon Bedrock visit the Amazon Bedrock console, Anthropic's Claude in Amazon Bedrock product page, and the Amazon Bedrock pricing page.

bedrocklex
#bedrock#lex#now-available#improvement#support

Second-generation AWS Outposts racks are now supported in the AWS Europe (Ireland) Region. Outposts racks extend AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience. Organizations from startups to enterprises and the public sector in and outside of Europe can now order their Outposts racks connected to this new supported region, optimizing for their latency and data residency needs. Outposts allows customers to run workloads that need low latency access to on-premises systems locally while connecting back to their home Region for application management. Customers can also use Outposts and AWS services to manage and process data that needs to remain on-premises to meet data residency requirements. This regional expansion provides additional flexibility in the AWS Regions that customers’ Outposts can connect to. To learn more about second-generation Outposts racks, read this blog post and user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.

lexorganizationsoutposts
#lex#organizations#outposts#ga#update#support

AWS Backup now provides schedule preview for backup plans, helping you validate when your backups are scheduled to run. Schedule preview shows the next ten scheduled backup runs, including when continuous backup, indexing, or copy settings take effect. Backup plan schedule preview consolidates all backup rules into a single timeline, showing how they work together. You can see when each backup occurs across all backup rules, along with settings like lifecycle to cold storage, point-in-time recovery, and indexing. This unified view helps you quickly identify and resolve conflicts or gaps between your backup strategy and actual configuration. Backup plan schedule preview is available in all AWS Regions where AWS Backup is available. You can start using this feature automatically from the AWS Backup console, API, or CLI without any additional settings. For more information, visit our documentation.
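Schedule preview performs this evaluation for you in the console and API; as a simplified mental model, expanding the next ten runs of a hypothetical "daily at 05:00 UTC" backup rule looks roughly like:

```python
from datetime import datetime, timedelta, timezone

# Mental model only: what schedule preview shows is the next ten runs of a
# backup rule. Here we expand a simple "daily at 05:00 UTC" rule by hand;
# the real feature evaluates your plan's actual cron expressions along with
# continuous backup, indexing, and copy settings.
def next_runs(start, hour, count=10):
    run = start.replace(hour=hour, minute=0, second=0, microsecond=0)
    if run <= start:          # today's run already passed; start tomorrow
        run += timedelta(days=1)
    return [run + timedelta(days=i) for i in range(count)]

runs = next_runs(datetime(2025, 1, 1, 12, tzinfo=timezone.utc), hour=5)
print(runs[0], runs[-1])  # first and tenth scheduled backup times
```

The value of the actual feature is that it overlays all rules in a plan on one timeline, so conflicts and gaps between rules become visible at a glance.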

#preview#ga

AWS announces AI-powered troubleshooting capabilities with Amazon Q integration in the AWS Step Functions console. AWS Step Functions is a visual workflow service that enables customers to build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services. This integration brings Amazon Q's intelligent error analysis directly into the AWS Step Functions console, helping you quickly identify and resolve workflow issues. When errors occur in your AWS Step Functions workflows, you can now click the "Diagnose with Amazon Q" button that appears in error alerts and the console error notification area to receive AI-assisted troubleshooting guidance. This feature helps you resolve common types of issues, including state machine execution failures as well as Amazon States Language (ASL) syntax errors and warnings. The troubleshooting recommendations appear in a dedicated window with remediation steps tailored to your error context, enabling faster resolution and improved operational efficiency. Diagnose with Amazon Q for AWS Step Functions is available in all commercial AWS Regions where Amazon Q is available. The feature is automatically enabled for customers who have access to Amazon Q in their Region. To learn more about Diagnose with Amazon Q, see Diagnosing and troubleshooting console errors with Amazon Q or get started by visiting the AWS Step Functions console.

amazon qstep functions
#amazon q#step functions#integration#support

AWS Serverless Application Model Command Line Interface (SAM CLI) now supports Finch as an alternative to Docker for local development and testing of serverless applications. This gives developers greater flexibility in choosing their preferred local development environment when working with SAM CLI to build and test their serverless applications. Developers building serverless applications spend significant time in their local development environments. SAM CLI is a command-line tool for local development and testing of serverless applications. It allows you to build, test, debug, and package your serverless applications locally before deploying to AWS Cloud. To provide the local development and testing environment for your applications, SAM CLI uses a tool that can run containers on your local device. Previously, SAM CLI only supported Docker as the tool for running containers locally. Starting today, SAM CLI also supports Finch as a container development tool. Finch is an open-source tool, developed and supported by AWS, for local container development. This means you can now choose between Docker and Finch as your preferred container tool for local development when working with SAM CLI. You can use SAM CLI to invoke Lambda functions locally, test API endpoints, and debug your serverless applications with the same experience you would have in the AWS Cloud. With Finch support, SAM CLI now automatically detects and uses Finch as the container development tool when Docker is not available. You can also set Finch as your preferred container tool for SAM CLI. This new feature supports all core SAM CLI commands including sam build, sam local invoke, sam local start-api, and sam local start-lambda. To learn more about using SAM CLI with Finch, visit the SAM CLI developer guide.

lexlambda
#lex#lambda#new-feature#support

Amazon Bedrock now provides immediate access to all serverless foundation models by default for users in all commercial AWS Regions. This update eliminates the need to manually activate model access, allowing you to instantly start using these models through the Amazon Bedrock console playground, AWS SDK, and Amazon Bedrock features including Agents, Flows, Guardrails, Knowledge Bases, Prompt Management, and Evaluations. You can quickly begin using serverless foundation models from most providers; Anthropic models, although enabled by default, still require you to submit a one-time usage form before first use. You can complete this form either through the API or through the Amazon Bedrock console by selecting an Anthropic model from the playground. When completed through the AWS organization management account, the form submission automatically enables Anthropic models across all member accounts in the organization. This simplified access is available across all commercial AWS Regions where Amazon Bedrock is supported. Account administrators retain full control over model access and can use IAM policies and Service Control Policies (SCPs) to restrict access as needed. For implementation guidance and examples on access controls, please refer to our blog.
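
For instance, an SCP that denies invocation of every Bedrock model outside an allow-listed family could be sketched as follows. This is a minimal, illustrative policy: the model ARN pattern is a placeholder, not a recommendation.

```python
import json

# Placeholder ARN pattern for an allow-listed model family (illustrative only).
ALLOWED_MODEL_ARN = "arn:aws:bedrock:*::foundation-model/anthropic.claude-*"

# Deny invoking any foundation model that is NOT in the allow list.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonAllowListedBedrockModels",
            "Effect": "Deny",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "NotResource": [ALLOWED_MODEL_ARN],
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attached at the organization root or an OU, a policy shaped like this keeps the new default-on access while limiting which models member accounts can actually invoke.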

bedrockiam
#bedrock#iam#ga#update#support

Amazon Aurora PostgreSQL-Compatible Edition now supports zero-ETL integration with Amazon SageMaker, enabling near real-time data availability for analytics workloads. This integration automatically extracts and loads data from PostgreSQL tables into your lakehouse where it's immediately accessible through various analytics engines and machine learning tools. The data synced into the lakehouse is compatible with Apache Iceberg open standards, enabling you to use your preferred analytics tools and query engines, including SQL-based engines, Apache Spark, and BI and AI/ML tools. Through a simple no-code interface, you can create and maintain an up-to-date replica of your PostgreSQL data in your lakehouse without impacting production workloads. The integration features comprehensive, fine-grained access controls that are consistently enforced across all analytics tools and engines, ensuring secure data sharing throughout your organization. As a complement to the existing zero-ETL integrations with Amazon Redshift, this solution reduces operational complexity while enabling you to derive immediate insights from your operational data. Amazon Aurora PostgreSQL zero-ETL integration with Amazon SageMaker is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), South America (Sao Paulo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Frankfurt), Europe (Ireland), Europe (London), and Europe (Stockholm) AWS Regions. To learn more, visit What is zero-ETL. To begin using this new integration, visit the zero-ETL documentation for Aurora PostgreSQL.

sagemakerlexredshiftrds
#sagemaker#lex#redshift#rds#ga#now-available

Amazon Bedrock is bringing DeepSeek-V3.1, OpenAI open-weight models, and Qwen3 models to more AWS Regions worldwide, expanding access to cutting-edge AI for customers across the globe. This regional expansion enables organizations in more countries and territories to deploy these powerful foundation models locally, ensuring compliance with data residency requirements, reducing network latency, and delivering faster AI-powered experiences to their users. DeepSeek-V3.1 and Qwen3 Coder-480B are now available in the US East (Ohio) and Asia Pacific (Jakarta) AWS Regions. OpenAI open-weight models (20B, 120B) and Qwen3 models (32B, 235B, Coder-30B) are now available in the US East (Ohio), Europe (Frankfurt), and Asia Pacific (Jakarta) AWS Regions. Check out the full Region list for future updates. To learn more about these models visit the Amazon Bedrock product page. To get started, access the Amazon Bedrock console and view the documentation.

bedrockorganizations
#bedrock#organizations#ga#now-available#update#expansion

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in South America (Sao Paulo), Europe (London), and Asia Pacific (Melbourne) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

ec2graviton
#ec2#graviton#now-available

Amazon Elastic Container Service (Amazon ECS) now allows you to run FireLens containers as a non-root user by specifying a user ID in your task definition. Running the log router as a non-root user with a specific user ID reduces the potential attack surface if the software is compromised, which is a security best practice and a compliance requirement in some industries and for security services such as AWS Security Hub. With this release, Amazon ECS allows you to specify a user ID in the "user" field of the FireLens containerDefinition element of your task definition, instead of only allowing "user": "0" (the root user). The new capability is supported in all AWS Regions. See the documentation for using FireLens for more details on how to set up your FireLens container to run as non-root.
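
As an illustration, the relevant fragment of a task definition might look like the following, sketched here as a Python dict; the image tag and the UID 1000 are examples, not requirements.

```python
import json

# Illustrative FireLens log-router container definition running as a non-root user.
firelens_container = {
    "name": "log_router",
    "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
    "essential": True,
    "user": "1000",  # previously only "0" (root) was accepted here
    "firelensConfiguration": {"type": "fluentbit"},
}

print(json.dumps(firelens_container, indent=2))
```

The only change from a pre-existing FireLens container definition is the "user" field, which now accepts any UID rather than only "0".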

ecs
#ecs#ga#support#new-capability

Amazon Web Services (AWS) announces URL and Host Header rewrite capabilities for Application Load Balancer (ALB). This feature enables customers to modify request URLs and Host Headers using regex-based pattern matching before routing requests to targets. With URL and Host Header rewrites, you can transform URLs using regex patterns (e.g., rewrite "/api/v1/users" to "/users"), standardize URL patterns across different applications, modify Host Headers for internal service routing, remove or add URL path prefixes, and redirect legacy URL structures to new formats. This capability eliminates the need for additional proxy layers and simplifies application architectures. The feature is valuable for microservices deployments where maintaining a single external hostname while routing to different internal services is critical. You can configure URL and Host Header rewrites through the AWS Management Console, AWS CLI, AWS SDKs, and AWS APIs. There are no additional charges for using URL and Host Header rewrites. You pay only for your use of Application Load Balancer based on Application Load Balancer pricing. This feature is now available in all commercial AWS Regions. To learn more, visit the ALB Documentation, and the AWS Blog post on URL and Host Header rewrites with Application Load Balancer.
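
The prefix-stripping example above can be sketched with an ordinary regex substitution. The pattern and helper name below are illustrative, not ALB configuration syntax:

```python
import re

def rewrite_path(path: str) -> str:
    """Strip the "/api/v1" prefix so internal services see the bare resource path.

    Paths that don't match the pattern pass through unchanged, mirroring how a
    rewrite rule only applies to matching requests.
    """
    return re.sub(r"^/api/v1(/.*)$", r"\1", path)

print(rewrite_path("/api/v1/users"))  # -> /users
print(rewrite_path("/health"))        # -> /health (no match, unchanged)
```

In ALB the equivalent rule is configured on a listener rule, so the load balancer performs this transformation before forwarding to the target group.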

#launch#ga#now-available

Amazon RDS for MySQL and Amazon RDS for PostgreSQL zero-ETL integration with Amazon Redshift is now available in the Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Canada West (Calgary), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), and Middle East (UAE) regions. Zero-ETL integrations enable near real-time analytics and machine learning (ML) on petabytes of transactional data using Amazon Redshift. Within seconds of data being written to Amazon RDS for MySQL or Amazon RDS for PostgreSQL, the data is replicated to Amazon Redshift. You can create multiple zero-ETL integrations from a single Amazon RDS database, and you can apply data filtering for each integration to include or exclude specific databases and tables, tailoring the zero-ETL integration to your needs. You can also use AWS CloudFormation to automate the configuration and deployment of resources needed for zero-ETL integrations. To learn more about zero-ETL and how to get started, visit the documentation for Amazon RDS and Amazon Redshift.

redshiftrdscloudformation
#redshift#rds#cloudformation#ga#now-available#integration

AWS Backup now provides more details in backup job API responses and Backup Audit Manager reports to give you better visibility into backup configurations and compliance settings. You can verify your backup policies with a single API call. List and Describe APIs for backup, copy, and restore jobs now return fields that required multiple API calls before. Delegated administrators can now view backup job details across their organization. Backup jobs APIs include retention settings, vault lock status, encryption details, and backup plan information like plan names, rule names, and schedules. Copy job APIs return destination vault configurations, vault type, lock state, and encryption settings. Restore job APIs show source resource details and vault access policies. Backup Audit Manager reports include new columns with vault type, lock status, encryption details, archive settings, and retention periods. You can use this information to enhance audit trails and verify compliance with data protection policies. These expanded information fields are available today in all AWS Regions where AWS Backup and AWS Backup Audit Manager are supported, with no additional charges. To learn more about AWS Backup Audit Manager, visit the product page and documentation. To get started, visit the AWS Backup console.

#ga#support

Amazon Kinesis Data Streams now supports AWS Fault Injection Service (AWS FIS) actions for Kinesis API errors. Customers can now test their application's error handling capabilities, retry mechanisms (such as exponential backoff patterns), and CloudWatch alarms in a controlled environment. This allows customers to validate their monitoring systems and recovery processes before encountering real-world failures, ultimately improving application resilience and availability. This integration supports Kinesis Data Streams API errors including throttling, internal errors, service unavailable, and expired iterator exceptions. Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store real-time data streams at any scale. Customers can now inject realistic Kinesis Data Streams API errors (including 500, 503, and 400 errors for GET and PUT operations) to test application resilience. This feature eliminates the previous need for custom implementations or waiting for actual production failures to verify error handling mechanisms. To get started, customers can create experiment templates through the FIS console to run tests directly or integrate them into their continuous integration pipeline. For additional safety, FIS experiments include automatic stop mechanisms that trigger when customer-defined thresholds are reached, ensuring controlled testing without risking application stability. These actions are generally available in all AWS Regions where FIS is available, including the AWS GovCloud (US) Regions. To learn more about using these actions, please see the Kinesis Data Streams User Guide and FIS User Guide.
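
The retry behavior such an experiment exercises can be sketched as a backoff loop. Everything below is a toy stand-in: the flaky operation simulates a throttling error rather than calling Kinesis, and the function names are hypothetical.

```python
import random

class ThrottlingError(Exception):
    """Simulated stand-in for a Kinesis throttling error (e.g. a 400
    ProvisionedThroughputExceededException)."""

def call_with_backoff(operation, max_attempts=5, base_delay=0.1, sleep=lambda s: None):
    """Retry `operation` with exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ThrottlingError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # full jitter: wait a random fraction of the doubling backoff window
            sleep(random.uniform(0, base_delay * 2 ** attempt))

attempts = {"n": 0}

def flaky_put_record():
    # fails twice, then succeeds, simulating an FIS throttling experiment
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ThrottlingError("simulated throttling")
    return "ok"

result = call_with_backoff(flaky_put_record)
print(result)  # succeeds on the third attempt
```

An FIS experiment against the real API lets you confirm that logic like this, plus the CloudWatch alarms around it, behaves as intended under injected 400/500/503 responses.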

kinesiscloudwatch
#kinesis#cloudwatch#generally-available#integration#support

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 4.1, introducing Queues as a preview feature, a new Streams Rebalance Protocol in early access, and Eligible Leader Replicas (ELR). Along with these features, Apache Kafka version 4.1 includes various bug fixes and improvements. For more details, please refer to the Apache Kafka release notes for version 4.1. A key highlight of Kafka 4.1 is the introduction of Queues as a preview feature. Customers can use multiple consumers to process messages from the same topic partitions, improving parallelism and throughput for workloads that need point-to-point message delivery. The new Streams Rebalance Protocol builds upon Kafka 4.0's consumer rebalance protocol, extending broker coordination capabilities to Kafka Streams for optimized task assignments and rebalancing. Additionally, ELR is now enabled by default to strengthen availability. To start using Apache Kafka 4.1 on Amazon MSK, simply select version 4.1.x when creating a new cluster via the AWS Management Console, AWS CLI, or AWS SDKs. You can also upgrade existing MSK provisioned clusters with an in-place rolling update. Amazon MSK orchestrates broker restarts to maintain availability and protect your data during the upgrade. Kafka version 4.1 support is available today across all AWS regions where Amazon MSK is offered. To learn how to get started, see the Amazon MSK Developer Guide.

kafkamsk
#kafka#msk#preview#early-access#update#improvement

Amazon RDS for Oracle zero-ETL integration with Amazon Redshift is now available in the Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Canada West (Calgary), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), and Middle East (UAE) Regions. Amazon RDS for Oracle zero-ETL integration with Amazon Redshift enables near real-time analytics and machine learning (ML) to analyze petabytes of transactional data in Amazon Redshift without complex data pipelines for extract-transform-load (ETL) operations. Within seconds of data being written to an Amazon RDS for Oracle database instance, the data is replicated to Amazon Redshift. Zero-ETL integrations simplify the process of analyzing data from Amazon RDS for Oracle database instances, enabling you to derive holistic insights across multiple applications with ease. You can use the AWS Management Console, API, CLI, and AWS CloudFormation to create and manage zero-ETL integrations between RDS for Oracle and Amazon Redshift. If you use Oracle multitenant architecture, you can choose specific pluggable databases (PDBs) to selectively replicate them. In addition, you can choose specific tables and tailor replication to your needs. RDS for Oracle zero-ETL integration with Redshift is available with Oracle Database version 19c. To learn more, refer to the Amazon RDS and Amazon Redshift documentation.

lexredshiftrdscloudformation
#lex#redshift#rds#cloudformation#ga#now-available

In this post, we explore how to build a conversational device management system using Amazon Bedrock AgentCore. With this solution, users can manage their IoT devices through natural language, using a UI for tasks like checking device status, configuring WiFi networks, and monitoring user activity.

bedrockagentcore
#bedrock#agentcore

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Europe (Milan) region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers. M7i instances deliver up to 15% better price-performance compared to M6i instances. M7i instances are a great choice for workloads that need the largest instance sizes or continuous high CPU usage, such as gaming servers, CPU-based machine learning (ML), and video streaming. M7i instances offer larger sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for these workloads. To learn more, visit Amazon EC2 M7i Instances. To get started, see the AWS Management Console.

ec2
#ec2#ga#now-available#support

Amazon AppStream 2.0 now offers Microsoft applications with licenses included, providing customers with the flexibility to run these applications on AppStream 2.0 fleets. As part of this launch, AppStream 2.0 provides Microsoft Office, Visio, and Project 2021/2024 in both Standard and Professional editions. Each is available in both 32-bit and 64-bit versions for On-Demand and Always-On fleets. Administrators can dynamically control application availability by adding or removing applications from AppStream 2.0 images and fleets. End users benefit from a seamless experience, accessing Microsoft applications that are fully integrated with their business applications within their AppStream 2.0 sessions. This helps ensure that users can work efficiently with both Microsoft and business applications in a unified environment, eliminating the need to switch between different platforms or services. To get started, create an AppStream custom image by launching an image builder with a Windows Server operating system image. Select the desired set of applications to be installed. Then connect to the image builder and complete image creation by following the Amazon AppStream 2.0 Administration Guide. You must use an AppStream 2.0 image builder that uses an AppStream 2.0 agent released on or after October 2, 2025. Alternatively, your image must use managed AppStream 2.0 image updates released on or after October 3, 2025. This functionality is generally available in all AWS Regions where AppStream 2.0 is offered. Customers are billed per hour for the AppStream streaming resources, and per-user per-month (non-prorated) for Microsoft applications. Please see Amazon AppStream 2.0 Pricing for more information.

lex
#lex#launch#generally-available#update

AWS for Fluent Bit announces version 3.0.0, based on Fluent Bit version 4.1.0 and Amazon Linux 2023. Container logging using AWS for Fluent Bit is now more performant and more feature-rich for AWS customers, including those using Amazon Elastic Container Services (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS). AWS for Fluent Bit enables Amazon ECS and Amazon EKS customers to collect, process, and route container logs to destinations including Amazon CloudWatch Logs, Amazon Data Firehose, Amazon Kinesis Data Streams, and Amazon S3 without changing application code. AWS for Fluent Bit 3.0.0 upgrades the Fluent Bit version to 4.1.0, and upgrades the base image to Amazon Linux 2023. These updates deliver access to the latest Fluent Bit features, significant performance improvements, and enhanced security. New features include native OpenTelemetry (OTel) support for ingesting and forwarding OTLP logs, metrics, and traces with AWS SigV4 authentication—eliminating the need for additional sidecars. Performance improvements include faster JSON parsing, processing more logs per vCPU with lower latency. Security enhancements include TLS min version and cipher controls, which enforce your TLS policy on outputs from AWS for Fluent Bit for stronger protocol posture. You can use AWS for Fluent Bit 3.0.0 on both ECS and EKS. On ECS, update the FireLens log-router container image in your task definition to the 3.0.0 tag from the Amazon ECR Public Gallery. On EKS, upgrade by either updating the Helm release or setting the DaemonSet image to the 3.0.0 version. The AWS for Fluent Bit image is available in the Amazon ECR Public Gallery and in the Amazon ECR repository. You can also find it on GitHub for source code and additional guidance.

s3ecsekskinesiscloudwatch
#s3#ecs#eks#kinesis#cloudwatch#ga

Today, Amazon Web Services (AWS) announces the general availability of Volume Clones for Amazon Elastic Block Store (Amazon EBS), our high-performance block storage service. This new capability allows you to instantly create and access point-in-time copies of EBS volumes within the same Availability Zone (AZ), accelerating software development workflows and enhancing operational agility. Customers use Amazon EBS volumes as durable block storage attached to Amazon EC2 instances. With Amazon EBS Volume Clones, you can instantly create copies of volumes and access the copied volumes with single-digit millisecond latency. Amazon EBS Volume Clones enables rapid creation of test and development environments from production volumes, eliminating manual copy workflows. Additionally, Volume Clones integrates with the Amazon EBS Container Storage Interface (CSI) driver, simplifying storage management for containerized applications. Amazon EBS Volume Clones is available in all AWS Commercial Regions and AWS GovCloud (US) Regions. You can access Volume Clones through the AWS Console, AWS Command Line Interface (CLI), AWS SDKs, and AWS CloudFormation. This capability supports all EBS volume types and works for volume copies within the same account and AZ. For detailed pricing information, please visit the EBS pricing page. To explore how Volume Clones can accelerate your software development processes and improve operational efficiency, visit the AWS documentation.

ec2cloudformation
#ec2#cloudformation#support#new-capability

Version 2.0 of the AWS Deploy Tool for .NET is now available. This new major version introduces several foundational upgrades to improve the deployment experience for .NET applications on AWS. The tool comes with new minimum runtime requirements. We have upgraded it to require .NET 8 because the predecessor, .NET 6, is now out of […]

#now-available

Amazon Route 53 Profiles now supports AWS PrivateLink. Customers can now access and manage their Profiles privately, without going through the public internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely over the Amazon network. When Route 53 Profiles is accessed via AWS PrivateLink, all operations, such as creating, deleting, editing, and listing of Profiles, can be handled via the Amazon private network.  Route 53 Profiles allows you to define a standard DNS configuration, in the form of a Profile, that may include Route 53 private hosted zone (PHZ) associations, Route 53 Resolver rules, and Route 53 Resolver DNS Firewall rule groups, and apply this configuration to multiple VPCs in your account. Profiles can also be used to enforce DNS settings for your VPCs, with configurations for DNSSEC validations, Resolver reverse DNS lookups, and the DNS Firewall failure mode. You can share Profiles with AWS accounts in your organization using AWS Resource Access Manager (RAM). Customers can use Profiles with AWS PrivateLink in regions where Route 53 Profiles is available today, including the AWS GovCloud (US) Regions. For more information about the AWS Regions where Profiles is available, see here. To learn more about configuring Route 53 Profiles, please refer to the service documentation.

#ga#support

AWS Transfer Family SFTP connectors can now connect to remote SFTP servers through your Amazon Virtual Private Cloud (VPC). This enables you to transfer files between Amazon S3 and any SFTP server, whether privately or publicly hosted, while leveraging the security controls and network configurations already defined in your VPC. By utilizing your NAT Gateways' bandwidth for file transfers over SFTP, you can achieve improved transfer performance and ensure compatibility with remote firewalls. AWS Transfer Family provides fully managed file transfers over SFTP, FTP, FTPS, AS2 and web-browser based interfaces. You can now use Transfer Family SFTP connectors to connect with SFTP servers that are only accessible from your VPC, including on-premises systems, external servers shared over private networks, or in-VPC servers. You can present the IP addresses from your VPC’s CIDR range for compatibility with IP controls, and achieve higher bandwidth for large-scale transfers via your NAT gateways when connecting over the internet. All connections are routed through your VPC’s existing networking and security controls, such as AWS Transit Gateway, centralized firewalls and traffic inspection points, helping you meet data security mandates. SFTP connectors support for VPC-based connectivity is available in select AWS Regions. To get started, visit the AWS Transfer Family console, or use AWS CLI/SDK. To learn more, read the AWS News Blog or visit the Transfer Family User Guide.

s3
#s3#ga#support

Amazon MSK Connect is now available in ten additional AWS Regions: Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Melbourne), Europe (Milan), Europe (Zurich), Middle East (Bahrain), Middle East (UAE), Africa (Cape Town), and Israel (Tel Aviv). MSK Connect enables you to run fully managed Kafka Connect clusters with Amazon Managed Streaming for Apache Kafka (Amazon MSK). With a few clicks, MSK Connect allows you to easily deploy, monitor, and scale connectors that move data in and out of Apache Kafka and Amazon MSK clusters from external systems such as databases, file systems, and search indices. MSK Connect eliminates the need to provision and maintain cluster infrastructure. Connectors scale automatically in response to increases in usage, and you pay only for the resources you use. With full compatibility with Kafka Connect, it is easy to migrate workloads without code changes. MSK Connect supports both Amazon MSK-managed and self-managed Apache Kafka clusters. You can get started with MSK Connect from the Amazon MSK console or the AWS CLI. Visit the AWS Regions page for all the Regions where Amazon MSK is available. To get started, visit the MSK Connect product page, pricing page, and the Amazon MSK Developer Guide.

kafkamsk
#kafka#msk#now-available#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in AWS Europe (Paris), Asia Pacific (Osaka), AWS Canada (Central), and AWS Middle East (Bahrain) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 M8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

ec2graviton
#ec2#graviton#ga#now-available

Amazon Connect now provides configurable thresholds for schedule adherence, giving you more flexibility in how you track agent performance. You can define thresholds for how early or late agents start or end their shifts, as well as for individual activities. For example, agents can start their shift 5 minutes early and end 10 minutes late, or end their breaks 3 minutes late, without negatively impacting their adherence scores. You can further customize these thresholds for individual teams. For example, teams that handle contacts with long handle times can be given more flexibility in when they start their breaks. This launch enables managers to focus on true adherence violations and eliminates the impact of minor schedule deviations on agent performance, thus improving manager productivity and agent satisfaction. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
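
The grace-window idea can be illustrated with a small, hypothetical adherence check; the function and the 5-minute/10-minute values below echo the example above and are not the Amazon Connect API.

```python
from datetime import datetime, timedelta

def within_threshold(scheduled: datetime, actual: datetime,
                     early_grace: timedelta, late_grace: timedelta) -> bool:
    """Return True when the deviation from the scheduled time falls inside the
    configured grace window, so it does not count against adherence."""
    deviation = actual - scheduled  # negative when the agent is early
    return -early_grace <= deviation <= late_grace

shift_start = datetime(2025, 10, 6, 9, 0)

# Agent logs in 4 minutes early: inside a 5-minutes-early / 10-minutes-late window.
ok = within_threshold(
    shift_start,
    shift_start - timedelta(minutes=4),
    early_grace=timedelta(minutes=5),
    late_grace=timedelta(minutes=10),
)
print(ok)  # True: within the grace window, adherence score unaffected
```

Only deviations outside the window (say, logging in 6 minutes early or 11 minutes late in this configuration) would register as adherence violations.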

lex
#lex#launch#ga

Amazon SageMaker offers a comprehensive hub that integrates data, analytics, and AI capabilities, providing a unified experience for users to access and work with their data. Through Amazon SageMaker Unified Studio, a single and unified environment, you can use a wide range of tools and features to support your data and AI development needs, including […]

sagemakerunified studioemrredshiftglue
#sagemaker#unified studio#emr#redshift#glue#support

Amazon Quick Sight now supports font customization for data labels and axes. Authors can now customize fonts for data labels and axes in supported charts, in addition to the previously supported font customization for visual titles, subtitles, and legends, as well as table and pivot table headers. Authors can set the font size (in pixels), font family, color, and styling options like bold, italics, and underline across an analysis, including dashboards, reports, and embedded scenarios. With this update, you can further align your dashboards' fonts with your organization's branding guidelines, creating a more cohesive and visually appealing experience. Additionally, the expanded font customization options help improve readability, especially when viewing visualizations on large screens. This feature is now available in all supported Amazon Quick Suite Regions. To learn more, visit the Amazon Quick Suite visual formatting guide.

amazon qrds
#amazon q#rds#ga#now-available#update#support

Amazon SageMaker AI Projects now supports provisioning custom machine learning (ML) project templates from Amazon S3. Administrators can now manage ML templates in SageMaker AI Studio so data scientists can create standardized ML projects that meet organizational requirements and automate ML development workflows. Administrators define standardized project templates that include end-to-end development patterns; by provisioning custom templates from Amazon S3, they can make these templates available directly in SageMaker AI Studio for data scientists, ensuring all ML projects follow organizational standards. SageMaker AI Projects custom template S3 provisioning is available in all AWS Regions where SageMaker AI Projects is available. To learn more, visit the SageMaker AI Projects documentation and SageMaker AI Studio.

sagemakers3rds
#sagemaker#s3#rds#ga#support

Starting today, Amazon EC2 High Memory U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in the Asia Pacific (Mumbai) region. U7i-12tb instances are part of the AWS 7th-generation High Memory instances and are powered by custom fourth-generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-12tb instances offer 12TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-12tb instances offer 896 vCPUs, support up to 100 Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100 Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers running mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.

ec2
#ec2#now-available#support

Amazon Bedrock AgentCore is an agentic platform to build, deploy, and operate highly capable agents securely at scale using any framework, model, or protocol. AgentCore lets you build agents faster, enable agents to take actions across tools and data, run agents securely with low latency and extended runtimes, and monitor agents in production, all without any infrastructure management. With general availability, all AgentCore services now support Virtual Private Cloud (VPC), AWS PrivateLink, AWS CloudFormation, and resource tagging, enabling developers to deploy AI agents with enhanced enterprise security and infrastructure automation capabilities. AgentCore Runtime builds on its preview capabilities of industry-leading eight-hour execution windows and complete session isolation by adding support for the Agent-to-Agent (A2A) protocol, with broader A2A support coming soon across all AgentCore services. AgentCore Memory now offers a self-managed strategy that gives you complete control over your memory extraction and consolidation pipelines. AgentCore Gateway now connects to existing Model Context Protocol (MCP) servers in addition to transforming APIs and Lambda functions into agent-compatible tools. It also supports AWS Identity and Access Management (IAM) authorization, enabling customers to use IAM in addition to OAuth for secure agent-to-tool interactions over MCP, and acts as a single, secure endpoint for agents to discover and use tools without the need for custom integrations. AgentCore Identity now offers identity-aware authorization, secure vault storage for refresh tokens, and native integration with additional OAuth-enabled services so agents can securely act on behalf of users or by themselves with enhanced access controls.
AgentCore Observability now delivers complete visibility into end-to-end agent execution and operational metrics across all AgentCore services through dashboards powered by Amazon CloudWatch. It is OpenTelemetry (OTel) compatible, offering seamless integration with Amazon CloudWatch and external observability providers like Dynatrace, Datadog, Arize Phoenix, LangSmith, and Langfuse. AgentCore works with any open source framework (CrewAI, LangGraph, LlamaIndex, Google ADK, OpenAI Agents SDK) and any model in or outside Amazon Bedrock, giving you the freedom to use your preferred frameworks and models and innovate with confidence. Amazon Bedrock AgentCore is available in nine AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). Learn more about AgentCore in the launch blog, dive deeper with the AgentCore resources, and get started with the AgentCore Starter Toolkit. AgentCore offers consumption-based pricing with no upfront costs.

bedrock, agentcore, nova, lambda, rds, +3 more
#bedrock#agentcore#nova#lambda#rds#cloudformation

Vector search for Amazon ElastiCache is now generally available. Customers can now use ElastiCache to index, search, and update billions of high-dimensional vector embeddings from popular providers like Amazon Bedrock, Amazon SageMaker, Anthropic, and OpenAI with latency as low as microseconds and up to 99% recall. Key use cases include semantic caching for large language models (LLMs) and multi-turn conversational agents, which significantly reduces latency and cost by caching semantically similar queries. Vector search for ElastiCache also powers agentic AI systems with Retrieval Augmented Generation (RAG), ensuring highly relevant results and consistently low latency across multiple retrieval steps. Additional use cases include recommendation engines, anomaly detection, and other applications that require efficient search across multiple data modalities. Vector search for ElastiCache is available with Valkey version 8.2 on node-based clusters in all AWS Regions at no additional cost. To get started, create a Valkey 8.2 cluster using the AWS Management Console, AWS Software Development Kit (SDK), or AWS Command Line Interface (CLI). You can also use vector search on your existing clusters by upgrading from any version of Valkey or Redis OSS to Valkey 8.2 in a few clicks with no downtime. To learn more about vector search for ElastiCache for Valkey, read this blog; for a list of supported commands, see the ElastiCache documentation.
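The semantic-caching pattern described above can be sketched framework-agnostically: embed each query, and when a new query's embedding is close enough to a cached one, return the cached LLM response instead of calling the model again. The helper names, vectors, and threshold below are illustrative, not ElastiCache APIs; this is a minimal in-process stand-in:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class SemanticCache:
    """Toy stand-in for a vector-search cache such as ElastiCache for Valkey:
    store (embedding, response) pairs and serve a cached response when a
    query is semantically close enough."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, response) pairs

    def put(self, embedding, response):
        self.entries.append((embedding, response))

    def get(self, embedding):
        # Linear scan; a real vector index does this with approximate search.
        best_score, best_response = 0.0, None
        for cached_embedding, response in self.entries:
            score = cosine_similarity(embedding, cached_embedding)
            if score > best_score:
                best_score, best_response = score, response
        return best_response if best_score >= self.threshold else None

cache = SemanticCache(threshold=0.9)
cache.put([1.0, 0.0, 0.0], "cached answer")

# A near-duplicate query hits the cache; an unrelated one misses.
hit = cache.get([0.98, 0.1, 0.0])
miss = cache.get([0.0, 1.0, 0.0])
```

In production the index, similarity search, and eviction run inside ElastiCache via its vector search commands; only the call pattern (embed, look up, fall back to the LLM on a miss, then cache the fresh response) carries over.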

bedrock, sagemaker
#bedrock#sagemaker#generally-available#update#support

Amazon CloudWatch announces the general availability of generative AI observability, helping you monitor all components of AI applications and workloads, including agents deployed and operated with Amazon Bedrock AgentCore. This release expands beyond runtime monitoring to include complete observability across AgentCore's Built-in Tools, Gateways, Memory, and Identity capabilities. DevOps teams and developers can now get an out-of-the-box view into latency, token usage, errors, and performance across all components of their AI workloads, from model invocations to agent operations. This feature is compatible with popular generative AI orchestration frameworks such as Strands Agents, LangChain, and LangGraph, offering flexibility with your choice of framework. With this new feature, CloudWatch enables developers to analyze telemetry data across the components of a generative AI application. Customers can monitor code execution patterns in Built-in Tools, track API transformation success rates through Gateways, analyze memory storage and retrieval patterns, and ensure secure agent behavior through Identity observability. The connected view helps developers quickly identify issues, from vector database gaps to authentication failures, using end-to-end prompt tracing, curated metrics, and logs. Developers can monitor their entire agent fleet through the "AgentCore" section in the CloudWatch console, which integrates seamlessly with other CloudWatch capabilities including Application Signals, Alarms, Sensitive Data Protection, and Logs Insights. This feature is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, visit the documentation. There is no additional pricing for generative AI observability; existing CloudWatch pricing for the underlying telemetry data applies.

bedrock, agentcore, lex, cloudwatch
#bedrock#agentcore#lex#cloudwatch#generally-available#ga

AWS Config now supports three additional AWS resource types. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources. With this launch, if you have enabled recording for all resource types, AWS Config will automatically track these new additions. The newly supported resource types are also available in Config rules and Config aggregators. You can now use AWS Config to monitor the following newly supported resource types in all AWS Regions where the supported resources are available: AWS::ApiGatewayV2::Integration, AWS::CloudTrail::EventDataStore, and AWS::Config::StoredQuery.

#launch#ga#integration#support#expansion

After careful consideration, we’re announcing availability changes for a select group of AWS services and features. These changes fall into three lifecycle categories.

Services and capabilities moving to maintenance: services moving to maintenance will no longer be accessible to new customers starting Nov 7, 2025. Current customers can continue using the service or feature while exploring alternative solutions. Affected: Amazon Cloud Directory; Amazon CodeCatalyst; Amazon CodeGuru Reviewer; Amazon Fraud Detector; Amazon Glacier; Amazon S3 Object Lambda; Amazon WorkSpaces Web Access Client for PCoIP (STXHD); AWS Application Discovery Service; AWS HealthOmics - Variant and Annotation Store; AWS IoT SiteWise Edge Data Processing Pack; AWS IoT SiteWise Monitor; AWS Mainframe Modernization Service; AWS Migration Hub; AWS Snowball Edge Compute Optimized; AWS Snowball Edge Storage Optimized; AWS Systems Manager - Change Manager; AWS Systems Manager - Incident Manager; AWS Thinkbox Deadline 10; .NET Modernization Tools.

Services entering sunset: the following services are entering sunset, and we are announcing the date upon which we will end operations and support of the service. Customers using these services should click the links below to understand the sunset timeline (typically 12 months) and begin planning migration to alternatives as recommended in the updated service web pages and documentation. Affected: Amazon FinSpace; Amazon Lookout for Equipment; AWS IoT Greengrass v1; AWS Proton.

Services reaching end of support: the following services have reached end of support and are no longer available as of October 7, 2025. Affected: AWS Mainframe Modernization App Testing.

For customers affected by these changes, we've prepared comprehensive migration guides, and our support teams are ready to assist with your transition. Visit the AWS Product Lifecycle page to learn more, or contact AWS Support.

codeguru, fraud detector, lookout for equipment, lambda, s3
#codeguru#fraud detector#lookout for equipment#lambda#s3#update

AWS Graviton4-based R8g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility in the AWS Canada (Central), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions. R8g instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. Graviton4-based instances provide up to a 40% performance improvement and up to a 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon Aurora PostgreSQL databases, depending on database engine, version, and workload. AWS Graviton4 processors are the latest generation of custom-designed AWS Graviton processors built on the AWS Nitro System. R8g DB instances are available with new 24xlarge and 48xlarge sizes. With these new sizes, R8g DB instances offer up to 192 vCPUs, up to 50Gbps of enhanced networking bandwidth, and up to 40Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). You can launch Graviton4 R8g database instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to Graviton4 requires a simple instance type modification. For more details, refer to the Aurora documentation. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
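The "simple instance type modification" mentioned above amounts to one ModifyDBInstance call. A minimal sketch of the request, assuming a hypothetical instance identifier and target class (in practice you would pass this dict to boto3's rds.modify_db_instance):

```python
# Hypothetical identifiers; the keys match the RDS ModifyDBInstance API shape.
modify_request = {
    "DBInstanceIdentifier": "my-aurora-writer",  # existing Aurora PostgreSQL instance
    "DBInstanceClass": "db.r8g.2xlarge",         # target Graviton4 R8g instance class
    "ApplyImmediately": True,                    # apply now instead of the next maintenance window
}

# e.g. boto3.client("rds").modify_db_instance(**modify_request)
```

With ApplyImmediately set to False, the class change would instead be queued for the next maintenance window.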

rds, graviton
#rds#graviton#launch#generally-available#ga#improvement

Amazon Connect now supports agent schedule adherence notifications, making it easier for you to proactively identify when agents aren't adhering to their scheduled activities. You can define rules to automatically send email or text notifications (via EventBridge) to supervisors when agents exceed adherence thresholds. For example, if agent adherence drops below 85% in a trailing 15-minute window, supervisors can receive an email alert. These automated notifications eliminate the need for continuous dashboard monitoring and enable proactive intervention before service levels decline, improving both supervisor productivity and customer satisfaction. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
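One way to wire up the EventBridge path is a rule whose event pattern matches low-adherence events and routes them to an SNS email topic. The detail-type and field names below are hypothetical placeholders, not the documented Amazon Connect event schema; only the EventBridge numeric-matching syntax is standard:

```python
import json

# Hypothetical event pattern: match Connect events where adherence drops below 85%.
event_pattern = {
    "source": ["aws.connect"],
    "detail-type": ["Agent Schedule Adherence Change"],   # placeholder name
    "detail": {
        "adherencePercentage": [{"numeric": ["<", 85]}],  # EventBridge numeric matching
    },
}

# e.g. boto3.client("events").put_rule(
#     Name="low-adherence-alert", EventPattern=json.dumps(event_pattern))
pattern_json = json.dumps(event_pattern)
```

A target (such as an SNS topic that emails supervisors) would then be attached to the rule with put_targets; consult the Amazon Connect documentation for the actual event shape.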

eventbridge
#eventbridge#support

Amazon Neptune Analytics is now available in the AWS Canada (Central) and Australia (Sydney) Regions. You can now create and manage Neptune Analytics graphs in the AWS Canada (Central) and Australia (Sydney) Regions and run advanced graph analytics and vector similarity search. Neptune Analytics is a memory-optimized graph database engine for analytics. With Neptune Analytics, you can get insights and find trends by processing large amounts of graph data in seconds. To analyze graph data quickly and easily, Neptune Analytics stores large graph datasets in memory. It supports a library of optimized graph analytic algorithms, low-latency graph queries, and vector search capabilities within graph traversals. Neptune Analytics is an ideal choice for investigatory, exploratory, or data-science workloads that require fast iteration for data, analytical and algorithmic processing, or vector search on graph data. It complements Amazon Neptune Database, a popular managed graph database. To perform intensive analysis, you can load the data from a Neptune Database graph or snapshot into Neptune Analytics. You can also load graph data that's stored in Amazon S3. To get started, you can create a new Neptune Analytics graphs using the AWS Management Console, or AWS CLI. For more information on pricing and region availability, refer to the Neptune pricing page and AWS Region Table.

s3
#s3#ga#now-available#support

Amazon EBS io2 Block Express volumes are now available in the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. io2 Block Express volumes leverage the latest generation of EBS storage server architecture, designed to deliver consistent sub-millisecond latency and 99.999% durability. With a single io2 Block Express volume, you can achieve 256,000 IOPS, 4GiB/s throughput, and 64TiB storage capacity. You can also attach an io2 Block Express volume to multiple instances in the same Availability Zone, supporting shared storage fencing through NVMe reservations for improved application availability and scalability. With the lowest p99.9 I/O latency among major cloud providers, io2 Block Express is the ideal choice for the most I/O-intensive, mission-critical deployments such as SAP HANA, Oracle, SQL Server, and IBM DB2. Customers using io1 volumes can upgrade to io2 Block Express without any downtime using the ModifyVolume API to achieve 100x durability, consistent sub-millisecond latency, and significantly higher performance at the same or lower cost than io1. With io2 Block Express, you can drive up to 4x the IOPS and 4x the throughput at the same storage price as io1, and up to 50% cheaper IOPS cost for volumes over 32,000 IOPS. io2 Block Express is now available in all Amazon Web Services Regions. You can create and manage io2 Block Express volumes using the Amazon Web Services Management Console, Amazon Command Line Interface (CLI), or Amazon SDKs. For more information on io2 Block Express, see our tech documentation.
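The zero-downtime io1-to-io2 upgrade mentioned above is a single ModifyVolume call. A minimal sketch of the request parameters, with a hypothetical volume ID (in practice you would pass this dict to the EC2 ModifyVolume API, e.g. via boto3):

```python
# Hypothetical volume ID; the keys match the EC2 ModifyVolume API shape.
modify_volume_request = {
    "VolumeId": "vol-0123456789abcdef0",  # existing io1 volume
    "VolumeType": "io2",                  # upgrade target; the volume stays attached
    "Iops": 64000,                        # optionally raise provisioned IOPS at the same time
}

# e.g. boto3.client("ec2").modify_volume(**modify_volume_request)
```

The volume remains in use while the modification progresses; its state can be tracked with describe-volumes-modifications.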

#now-available#support

Today, we’re announcing the general availability of Amazon Quick Suite—a new set of agentic teammates that helps you get the answers you need using all of your business data and move instantly from insights to action. Quick Suite retrieves insights across the public internet and all your documents, including information in popular third-party applications, databases, and other places your company keeps important data. Whether you need a single data point, a PhD-level research project, an entire strategy tailored to your context, or anything in between, Quick Suite quickly gets you all the relevant information. Quick Suite helps you seamlessly transition from getting answers to taking action in popular applications (like creating or updating Jira tickets or ServiceNow incidents). Quick Suite can also help you automate tasks—from routine, daily tasks like responding to RFPs and preparing for customer meetings to the most complex business processes such as invoice processing and account reconciliation. All of your data is safe and private: your queries and data are never used to train models, and you can tailor the Quick Suite experience to you. Your AWS administrator can turn on Quick Suite in only a few steps, and your new agentic teammate will be ready to go. New Quick Suite customers receive a 30-day free trial for up to 25 users. You can experience the full breadth of Quick Suite capabilities for chat, research, business intelligence, and automation in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland), and we'll expand availability to additional AWS Regions over the coming months. To learn more about Quick Suite and its capabilities, read our deep-dive blog.

amazon q, lex
#amazon q#lex

Today, AWS announced the expansion of 10 Gbps and 100 Gbps dedicated connections with MACsec encryption capabilities at the existing AWS Direct Connect location in the Netrality KC1 data center near Kansas City, MO. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.  For more information on the over 146 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.

#expansion

Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the AWS Europe (Spain) Region. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these new instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances. I7i instances offer the best compute and storage performance for x86-based storage optimized instances in Amazon EC2, ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size datasets (multi-TBs). Additionally, the torn write prevention feature supports up to 16KB block sizes, enabling customers to eliminate database performance bottlenecks. I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth. To learn more, visit the I7i instances page.

ec2
#ec2#now-available#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6in instances are available in AWS Region Mexico (Central). These sixth-generation network optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200Gbps network bandwidth, for 2x more network bandwidth over comparable fifth-generation instances. Customers can use C6in instances to scale the performance of applications such as network virtual appliances (firewalls, virtual routers, load balancers), Telco 5G User Plane Function (UPF), data analytics, high performance computing (HPC), and CPU based AI/ML workloads. C6in instances are available in 10 different sizes with up to 128 vCPUs, including bare metal size. Amazon EC2 sixth-generation x86-based network optimized EC2 instances deliver up to 100Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth, and up to 400K IOPS. C6in instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes. C6in instances are available in these AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Middle East (Bahrain, UAE), Israel (Tel Aviv), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo, Thailand), Africa (Cape Town), South America (Sao Paulo), Canada (Central), Canada West (Calgary), AWS GovCloud (US-West, US-East), and Mexico (Central). To learn more, see the Amazon EC2 C6in instances. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2
#ec2#ga#now-available#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6in and M6idn instances are available in AWS Asia Pacific (Seoul) region. These sixth-generation network optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200Gbps network bandwidth, for 2x more network bandwidth over comparable fifth-generation instances. Customers can use M6in and M6idn instances to scale their performance and throughput of network-intensive workloads such as high-performance file systems, distributed web scale in-memory caches, caching fleets, real-time big data analytics, and Telco applications such as 5G User Plane Function. M6in and M6idn instances are available in 10 different instance sizes including metal, offering up to 128 vCPUs and 512 GiB of memory. They deliver up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth, and up to 400K IOPS. M6in and M6idn instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes. M6idn instances offer up to 7.6 TB of high-speed, low-latency instance storage. With this regional expansion, M6in and M6idn instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Ireland, Frankfurt, Spain, Stockholm, Zurich), Asia Pacific (Mumbai, Singapore, Tokyo, Sydney, Seoul), Canada (Central), and AWS GovCloud (US-West). Customers can purchase the new instances through Savings Plans, On-Demand, and Spot instances. To learn more, see M6in and M6idn instances page.

ec2
#ec2#ga#now-available#support#expansion

Amazon DynamoDB now offers customers the option to use Internet Protocol version 6 (IPv6) addresses in their Amazon Virtual Private Cloud (VPC) when connecting to DynamoDB tables, streams, and DynamoDB Accelerator (DAX), including with AWS PrivateLink Gateway and Interface endpoints. Customers moving to IPv6 can simplify their network stack and meet compliance requirements by using a network that supports both IPv4 and IPv6. The continued growth of the internet is exhausting available Internet Protocol version 4 (IPv4) addresses. IPv6 increases the number of available addresses by several orders of magnitude and customers no longer need to manage overlapping address spaces in their VPCs. Customers can standardize their applications on the new version of Internet Protocol by moving to IPv6 with a few clicks in the AWS Management Console. Support for IPv6 in Amazon DynamoDB is now available in all commercial AWS Regions in the United States and the AWS GovCloud (US) Regions. It will deploy to the remaining global AWS Regions where Amazon DynamoDB is available over the next few weeks. To connect to DynamoDB using IPv6 addresses and check regional availability, please see the DynamoDB developer guide and the DynamoDB Accelerator user guide.
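To reach DynamoDB over IPv6, clients use the service's dual-stack endpoint instead of the default IPv4-only one. A short sketch that builds the endpoint URL (the us-east-1 Region is illustrative; verify the endpoint format for your Region in the DynamoDB developer guide):

```python
def dynamodb_dualstack_endpoint(region: str) -> str:
    """Return the dual-stack (IPv4 + IPv6) DynamoDB endpoint for a Region."""
    return f"https://dynamodb.{region}.api.aws"

# e.g. boto3.resource("dynamodb",
#          endpoint_url=dynamodb_dualstack_endpoint("us-east-1"))
endpoint = dynamodb_dualstack_endpoint("us-east-1")
```

With boto3 you can alternatively enable the use_dualstack_endpoint client configuration option rather than hard-coding an endpoint URL.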

dynamodb, eks
#dynamodb#eks#ga#now-available#support

Today, we are excited to announce that Amazon SageMaker notebook instances support Amazon Linux 2023. You can now choose Amazon Linux 2023 for a new Amazon SageMaker notebook instance to take advantage of the latest innovations and enhanced security features. Amazon SageMaker notebook instances are fully managed Jupyter notebooks with pre-configured development environments for data science and machine learning. Data scientists and developers can use SageMaker notebook instances to interactively explore, visualize, and prepare data, and build and deploy machine learning models on SageMaker. Amazon Linux 2023 (AL2023) is a general-purpose rpm-based Linux distribution and the successor to Amazon Linux 2 (AL2). Amazon Linux 2023 simplifies operating system management through its secure, stable, and high-performance runtime environment. This Linux distribution follows a predictable two-year major release cycle with five years of long-term support: the first two years provide standard support with quarterly security patches, bug fixes, and new features, followed by three years of maintenance. Enhanced security features include SELinux support and FIPS 140-3 validation for cryptographic modules. With this launch, you have the option to launch a notebook instance with either AL2023 or AL2. For more details about this launch and instructions on how to get started with AL2023 notebook instances, please refer to the Amazon Linux 2023 documentation.

nova, sagemaker
#nova#sagemaker#launch#new-feature#support

Amazon EC2 has launched new M8a instances powered by 5th Generation AMD EPYC processors, offering up to 30% better performance and 19% better price performance compared to M7a instances, along with improved memory bandwidth, networking, and storage capabilities for various general-purpose workloads.

ec2
#ec2#launch#now-available

AWS is announcing that Amazon EC2 I7ie instances are now available in the AWS South America (São Paulo) Region. Designed for large storage I/O intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances offer up to 120TB local NVMe storage density (the highest in the cloud) for storage optimized instances and offer up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances. I7ie instances are high-density storage optimized instances, ideal for workloads requiring fast local storage with high random read/write performance at very low latency consistency to access large data sets. These instances are available in 9 different virtual sizes and deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS). To learn more, visit the I7ie instances page.

ec2
#ec2#now-available

AWS announces the general availability of new general-purpose Amazon EC2 M8a instances. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz and deliver up to 30% higher performance and up to 19% better price-performance compared to M7a instances. M8a instances deliver 45% more memory bandwidth compared to M7a instances, making these instances ideal even for latency-sensitive workloads. M8a instances deliver even higher performance gains for specific workloads: they are 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances. M8a instances are SAP-certified and offer 12 sizes, including 2 bare metal sizes. This range of instance sizes allows customers to precisely match their workload requirements. M8a instances are built on the AWS Nitro System and are ideal for applications that benefit from high performance and high throughput such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets. M8a instances are available in the following AWS Regions: US East (Ohio), US West (Oregon), and Europe (Spain). To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information visit the Amazon EC2 M8a instance page or the AWS News blog.

ec2
#ec2#ga

Amazon Elastic Compute Cloud (Amazon EC2) R8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in Europe (Ireland), Asia Pacific (Sydney, Malaysia), South America (São Paulo), and Canada (Central) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance over Graviton3-based instances. They have up to 40% higher performance for I/O intensive database workloads, and up to 20% faster query results for I/O intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low latency local storage. Each instance is available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using EC2 instance bandwidth weighting configuration, providing greater flexibility with the allocation of bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes. To learn more, see Amazon R8gd Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

ec2, graviton
#ec2#graviton#now-available

Amazon Elastic Compute Cloud (Amazon EC2) M8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in Europe (London), Asia Pacific (Sydney, Malaysia), and Canada (Central) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance over Graviton3-based instances. They have up to 40% higher performance for I/O intensive database workloads, and up to 20% faster query results for I/O intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low latency local storage. Each instance is available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using EC2 instance bandwidth weighting configuration, providing greater flexibility with the allocation of bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes. To learn more, see Amazon M8gd Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

ec2, graviton
#ec2#graviton#now-available

Today, AWS announces a new pricing and cost estimation capability in Amazon Q Developer. Amazon Q Developer is the most capable generative AI-powered assistant for software development. With this launch, customers can now use Amazon Q Developer to get information about AWS product and service pricing, availability, and attributes, helping them select the right resources and estimate workload costs using natural language. When architecting new workloads on AWS, customers need to estimate costs so they can evaluate cost/performance tradeoffs, set budgets, and plan future spending. Customers can now use Amazon Q Developer to retrieve detailed product attribute and pricing information using natural language, making it easier to estimate the cost of new workloads without having to review multiple pricing pages or specify detailed API request parameters. Customers can now ask questions about service pricing (e.g., “How much does RDS extended support cost?”), the cost of a planned workload (e.g., “I need to send 1 million notifications per month to email, and 1 million to HTTP/S endpoints. Estimate the monthly cost using SNS.”), or the relative costs of different resources (e.g., “What is the cost difference between an Application Load Balancer and a Network Load Balancer?”). To answer these questions, Amazon Q Developer retrieves information from the AWS Price List APIs. To learn more, see Managing your costs using generative AI with Amazon Q Developer. To get started, open the Amazon Q chat panel in the AWS Management Console and ask a question about pricing.
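The AWS Price List APIs that back these answers can also be queried directly. A hedged sketch of a GetProducts request for SNS pricing, with illustrative filter values (pass the dict to boto3's pricing.get_products; the Price List API is served from selected Regions such as us-east-1):

```python
# The keys match the Price List GetProducts API shape; filter values are examples.
get_products_request = {
    "ServiceCode": "AmazonSNS",
    "Filters": [
        {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
    ],
    "MaxResults": 10,
}

# e.g. boto3.client("pricing", region_name="us-east-1").get_products(**get_products_request)
```

Each result is a JSON product document containing the SKU's attributes and on-demand price dimensions, which you can multiply by your expected usage to build an estimate.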

amazon qq developerrdssns
#amazon q#q developer#rds#sns#launch#support
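Under the hood, questions like these resolve to AWS Price List API lookups. The sketch below shows what such a `get_products` call looks like; the filter field name is illustrative, and a stub client stands in for `boto3.client("pricing", region_name="us-east-1")` so the example runs offline.

```python
import json

def lookup_price(pricing, service_code: str, **term_filters) -> dict:
    """Query the Price List API and parse the first matching product."""
    filters = [{"Type": "TERM_MATCH", "Field": k, "Value": v}
               for k, v in term_filters.items()]
    resp = pricing.get_products(ServiceCode=service_code, Filters=filters)
    return json.loads(resp["PriceList"][0])  # each entry is a JSON document

class FakePricing:
    """Stand-in for the boto3 'pricing' client so the sketch is runnable."""
    def get_products(self, **kwargs):
        self.last = kwargs
        return {"PriceList": [json.dumps({"product": {"sku": "EXAMPLESKU"}})]}

pricing = FakePricing()
# 'tierType' is a hypothetical filter field, shown only to illustrate the shape.
product = lookup_price(pricing, "AmazonSNS", tierType="notifications")
print(product["product"]["sku"])
```

In practice the response documents carry the pricing dimensions that a tool like Amazon Q Developer would aggregate into a monthly estimate.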

Amazon Location Service has updated its mapping data to reflect Vietnam's recent administrative reorganization, which consolidated the country's provinces from 63 to 34 administrative units. This update enables customers in Vietnam to seamlessly align their operations with the new administrative structure that took effect July 1, 2025. The update includes changes to Vietnam's administrative boundaries, names, and hierarchical structure across all levels. The refresh incorporates the new structure of 34 provincial-level administrative units, consisting of 28 provinces and 6 centrally managed cities, along with consolidated commune-level administrative boundaries from 10,310 to 3,321 units. Place names and administrative components in Points of Interest (POI) have been updated while preserving street-level address accuracy. This update supports use cases across industries such as logistics, e-commerce, and public services where accurate administrative boundary data is essential for operations like delivery zone planning, service area management, and address validation. The updated data is automatically available to customers querying Vietnam address data through Amazon Location Service. Amazon Location Service enables developers to easily and securely add location data and mapping functionalities into applications. Amazon Location Service with GrabMaps service is available in Singapore and Malaysia regions. To learn more, check out our developer guide.

#ga#update#support

Amazon Elastic Compute Cloud (Amazon EC2) C8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in Europe (Ireland) and Asia Pacific (Sydney, Malaysia) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance over Graviton3-based instances. They have up to 40% higher performance for I/O intensive database workloads, and up to 20% faster query results for I/O intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low latency local storage.  Each instance is available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using EC2 instance bandwidth weighting configuration, providing greater flexibility with the allocation of bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes. To learn more, see Amazon C8gd instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

ec2graviton
#ec2#graviton#now-available

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7gd instances with up to 3.8 TB of local NVMe-based SSD block-level storage are available in the Europe (Zurich) Region. These Graviton3-based instances with DDR5 memory are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low latency local storage, including those that need temporary storage of data for scratch space, temporary files, and caches. They have up to 45% improved real-time NVMe storage performance than comparable Graviton2-based instances. Graviton3-based instances also use up to 60% less energy for the same performance than comparable EC2 instances, enabling you to reduce your carbon footprint in the cloud. To learn more, see Amazon C7gd Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

ec2graviton
#ec2#graviton#now-available

Amazon DocumentDB (with MongoDB compatibility), a fully managed, native JSON database that makes it simple and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure, is now available in the AWS Asia Pacific (Osaka), Asia Pacific (Thailand), Asia Pacific (Malaysia) and Mexico (Central) Regions. Amazon DocumentDB provides scalability and durability for mission-critical MongoDB workloads, supporting millions of requests per second and scaling to 15 low-latency read replicas in minutes without application downtime. Storage scales automatically up to 128 TiB without any impact to your application. Amazon DocumentDB also natively integrates with AWS Database Migration Service (DMS), Amazon CloudWatch, AWS CloudTrail, AWS Lambda, AWS Backup and more. To learn more about Amazon DocumentDB, please visit the Amazon DocumentDB product page, and see the AWS Region Table for complete regional availability. You can create an Amazon DocumentDB cluster from the AWS Management Console, AWS Command Line Interface (CLI), or SDK.

lambdacloudwatch
#lambda#cloudwatch#now-available#support#new-region

Apache Airflow 3.x on Amazon MWAA introduces architectural improvements such as API-based task execution that provides enhanced security and isolation. This migration presents an opportunity to embrace next-generation workflow orchestration capabilities while maintaining business continuity. This post provides best practices and a streamlined approach to successfully navigate this critical migration, minimizing disruption to your mission-critical data pipelines while maximizing the enhanced capabilities of Airflow 3.

#ga#improvement

Starting today, Amazon VPC Lattice lets you configure the number of IPv4 addresses assigned to resource gateway elastic network interfaces (ENIs). This enhancement builds on VPC Lattice's capability of providing access to Layer 4 resources, such as databases, clusters, and domain names, across multiple VPCs and accounts. When configuring a resource gateway, you can now specify the number of IPv4 addresses per ENI; this value cannot be changed after the gateway is created. The IPv4 addresses are used for network address translation and determine the maximum number of concurrent IPv4 connections to a resource, so consider your expected connection volume when configuring the IPv4 address count. By default, VPC Lattice assigns 16 IPv4 addresses per ENI. For IPv6, VPC Lattice always assigns a /80 CIDR per ENI. This feature is available at no additional cost in all AWS Regions where VPC Lattice is offered. For more information, visit the Amazon VPC Lattice product detail page and Amazon VPC Lattice documentation.

#ga#enhancement#support
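Since the IPv4 address count bounds concurrent NAT connections, it helps to sanity-check capacity before creating the gateway. A back-of-the-envelope sketch follows; the ports-per-address constant is an assumption for illustration, not a documented VPC Lattice figure.

```python
# Illustrative estimate: concurrent connection capacity of a resource
# gateway ENI grows linearly with the IPv4 addresses assigned to it.
PORTS_PER_ADDRESS = 55_000  # assumed usable ephemeral ports per NAT address

def estimated_max_connections(ipv4_addresses_per_eni: int, enis: int = 1) -> int:
    """Rough upper bound on concurrent IPv4 connections through the gateway."""
    return ipv4_addresses_per_eni * PORTS_PER_ADDRESS * enis

# Default of 16 addresses per ENI vs. a larger, connection-heavy configuration:
default_capacity = estimated_max_connections(16)
scaled_capacity = estimated_max_connections(64)
print(default_capacity, scaled_capacity)
```

Because the setting is fixed at creation time, erring on the high side for connection-heavy workloads avoids having to recreate the gateway later.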

Amazon Relational Database Service (Amazon RDS) for Db2 now enables customers to perform native database-level backups, offering greater flexibility in database management and migration. With this feature, customers can selectively back up individual databases within a multi-database RDS for Db2 instance, enabling efficient migration of specific databases to another instance or on-premises environment. Using a simple backup command, customers can easily create database copies for development and testing environments, while also meeting their compliance requirements through separate backup copies. By backing up specific databases instead of full instance snapshots, customers can reduce their storage costs. This feature is now available in all AWS Regions where Amazon RDS for Db2 is offered. For detailed information about configuring and using native database backups, visit the Amazon RDS for Db2 documentation. For pricing details, see the Amazon RDS pricing page.

lexrds
#lex#rds#launch#now-available#support

At Vanguard, we faced significant challenges with our legacy mainframe system that limited our ability to deliver modern, personalized customer experiences. Our centralized database architecture created performance bottlenecks and made it difficult to scale services independently for our millions of personal and institutional investors. In this post, we show you how we modernized our data architecture using Amazon Redshift as our Operational Read-only Data Store (ORDS).

personalizeredshiftrds
#personalize#redshift#rds#ga

Migrating from Google Cloud’s BigQuery to ClickHouse Cloud on AWS allows businesses to leverage the speed and efficiency of ClickHouse for real-time analytics while benefiting from AWS’s scalable and secure environment. This article provides a comprehensive guide to executing a direct data migration using AWS Glue ETL, highlighting the advantages and best practices for a […]

glue
#glue

Last week, Anthropic’s Claude Sonnet 4.5, the world’s best coding model according to SWE-Bench, became available in the Amazon Q command line interface (CLI) and Kiro. I’m excited about this for two reasons: First, a few weeks ago I spent 4 intensive days with a global customer delivering an AI-assisted development workshop, where I experienced firsthand […]

bedrockamazon qecseksoutposts
#bedrock#amazon q#ecs#eks#outposts

Organizations manage content across multiple languages as they expand globally. Ecommerce platforms, customer support systems, and knowledge bases require efficient multilingual search capabilities to serve diverse user bases effectively. This unified search approach helps multinational organizations maintain centralized content repositories while making sure users, regardless of their preferred language, can effectively find and access relevant […]

opensearchopensearch serviceorganizations
#opensearch#opensearch service#organizations#ga#support

Users usually package their function code as container images when using machine learning (ML) models that are larger than 250 MB, which is the Lambda deployment package size limit for zip files. In this post, we demonstrate an approach that downloads ML models directly from Amazon S3 into your function’s memory so that you can continue packaging your function code using zip files.

lambdas3
#lambda#s3
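The pattern above can be sketched as a loader that streams the object into memory at first use and caches it across warm invocations. The bucket and key names are illustrative, and a small stub replaces the boto3 S3 client (`boto3.client("s3")`) so the sketch runs anywhere.

```python
import io

_model_bytes = None  # module-level cache, reused across warm invocations

def load_model_bytes(s3_client, bucket: str, key: str) -> bytes:
    """Download the model into memory once; later calls hit the cache."""
    global _model_bytes
    if _model_bytes is None:
        buf = io.BytesIO()
        # download_fileobj streams the object without touching /tmp,
        # sidestepping the zip deployment package size limit entirely.
        s3_client.download_fileobj(bucket, key, buf)
        _model_bytes = buf.getvalue()
    return _model_bytes

class FakeS3:
    """Stand-in for a boto3 S3 client, used here so the sketch is runnable."""
    def __init__(self, blob: bytes):
        self.blob, self.calls = blob, 0
    def download_fileobj(self, bucket, key, fileobj):
        self.calls += 1
        fileobj.write(self.blob)

s3 = FakeS3(b"model-weights")
first = load_model_bytes(s3, "my-model-bucket", "model.bin")
second = load_model_bytes(s3, "my-model-bucket", "model.bin")
print(len(first), s3.calls)  # the object is downloaded exactly once
```

In a real function, `load_model_bytes` would be called from the handler (or at init) with a genuine S3 client, keeping cold-start downloads to one per execution environment.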

The global real-time payments market is experiencing significant growth. According to Fortune Business Insights, the market was valued at USD 24.91 billion in 2024 and is projected to grow to USD 284.49 billion by 2032, with a CAGR of 35.4%. Similarly, Grand View Research reports that the global mobile payment market, valued at USD 88.50 […]

Laravel, one of the world’s most popular web frameworks, launched its first-party observability platform, Laravel Nightwatch, to provide developers with real-time insights into application performance. Built entirely on AWS managed services and ClickHouse Cloud, the service already processes over one billion events per day while maintaining sub-second query latency, giving developers instant visibility into the health of their applications.

msk
#msk#launch

AWS announced the general availability of Apache Airflow 3 on Amazon Managed Workflows for Apache Airflow (Amazon MWAA). This release transforms how organizations use Apache Airflow to orchestrate data pipelines and business processes in the cloud, bringing enhanced security, improved performance, and modern workflow orchestration capabilities to Amazon MWAA customers. This post explores the features of Airflow 3 on Amazon MWAA and outlines enhancements that improve your workflow orchestration capabilities.

organizations
#organizations#ga#new-feature#enhancement

Amazon ECS Managed Instances is a new compute option that eliminates infrastructure management overhead while giving you access to the broad suite of EC2 capabilities, including the flexibility to select instance types, access reserved capacity, and apply advanced security and observability configurations.

lexec2ecs
#lex#ec2#ecs

Generative AI agents in production environments demand resilience strategies that go beyond traditional software patterns. AI agents make autonomous decisions, consume substantial computational resources, and interact with external systems in unpredictable ways. These characteristics create failure modes that conventional resilience approaches might not address. This post presents a framework for AI agent resilience risk analysis […]

The AWS SDK for Java 1.x (v1) entered maintenance mode on July 31, 2024, and will reach end-of-support on December 31, 2025. We recommend that you migrate to the AWS SDK for Java 2.x (v2) to access new features, enhanced performance, and continued support from AWS. To help you migrate efficiently, we’ve created a migration […]

#new-feature#support

With the Amazon EMR 7.10 runtime, Amazon EMR has introduced EMR S3A, an improved implementation of the open source S3A file system connector. In this post, we showcase the enhanced read and write performance advantages of using Amazon EMR 7.10.0 runtime for Apache Spark with EMR S3A as compared to EMRFS and the open source S3A file system connector.

s3emr
#s3#emr

In this post, we explore commonly used Amazon CloudWatch metrics and alarms for OpenSearch Serverless, walking through the process of selecting relevant metrics, setting appropriate thresholds, and configuring alerts. This guide will provide you with a comprehensive monitoring strategy that complements the serverless nature of your OpenSearch deployment while maintaining full operational visibility.

opensearchcloudwatch
#opensearch#cloudwatch
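An alarm of the kind the post describes can also be created programmatically. The sketch below is hedged: the `AWS/AOSS` namespace, `SearchOCU` metric, and threshold are assumptions to illustrate the shape of a `put_metric_alarm` call, and a stub client replaces `boto3.client("cloudwatch")` so the example runs offline.

```python
def put_capacity_alarm(cloudwatch, collection_id: str, threshold_ocu: float):
    """Alarm when search capacity for a collection stays above a threshold."""
    return cloudwatch.put_metric_alarm(
        AlarmName=f"aoss-{collection_id}-search-ocu-high",
        Namespace="AWS/AOSS",                    # assumed namespace
        MetricName="SearchOCU",                  # assumed metric name
        Dimensions=[{"Name": "CollectionId", "Value": collection_id}],
        Statistic="Maximum",
        Period=300,                              # 5-minute evaluation window
        EvaluationPeriods=3,                     # sustained, not transient, usage
        Threshold=threshold_ocu,
        ComparisonOperator="GreaterThanThreshold",
    )

class FakeCloudWatch:
    """Stand-in for a boto3 CloudWatch client so the sketch is runnable."""
    def put_metric_alarm(self, **kwargs):
        self.last = kwargs
        return {"ResponseMetadata": {"HTTPStatusCode": 200}}

cw = FakeCloudWatch()
resp = put_capacity_alarm(cw, "abc123", threshold_ocu=8.0)
print(cw.last["AlarmName"])
```

Verify the metric names your collection actually emits in the CloudWatch console before wiring alarms like this into alerting.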

The Nova Act extension is a new IDE-integrated tool that enables developers to create browser automation agents using natural language through the Nova Act model. It offers features like Builder Mode, chat capabilities, and predefined templates, streamlining the development process without requiring developers to leave their preferred development environment.

nova
#nova

In this post, we explore how Metagenomi built a scalable database and search solution for over 1 billion protein vectors using LanceDB and Amazon S3. The solution enables rapid enzyme discovery by transforming proteins into vector embeddings and implementing a serverless architecture that combines AWS Lambda, AWS Step Functions, and Amazon S3 for efficient nearest neighbor searches.

lambdas3step functions
#lambda#s3#step functions
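The nearest-neighbor step at the heart of such a pipeline can be reduced to a runnable brute-force sketch. The real solution stores embeddings in LanceDB on Amazon S3 and searches at a scale where an ANN index is essential; the tiny vectors and enzyme names below are made up purely for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, catalog):
    """Return the id of the catalog embedding most similar to the query."""
    return max(catalog, key=lambda pid: cosine(query, catalog[pid]))

catalog = {
    "enzymeA": [0.9, 0.1, 0.0],
    "enzymeB": [0.1, 0.9, 0.2],
    "enzymeC": [0.0, 0.2, 0.9],
}
print(nearest([0.85, 0.2, 0.05], catalog))  # → enzymeA
```

At a billion vectors, the same similarity computation runs against an index rather than a linear scan, which is exactly the gap LanceDB's storage and indexing close.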

Three weeks ago, I published a post about the new AWS Region in New Zealand (ap-southeast-6). This led to an incredible opportunity to visit New Zealand, where I met passionate builders and presented at several events including Serverless and Platform Engineering meetup, AWS Tools and Programming meetup, AWS Cloud Clubs in Auckland, and AWS Community […]

amazon qq developereksstep functions
#amazon q#q developer#eks#step functions

Amazon Bedrock has expanded its model offerings with the addition of Qwen 3 foundation models enabling users to access and deploy them in a fully managed, serverless environment. These models feature both mixture-of-experts (MoE) and dense architectures to support diverse use cases including advanced code generation, multi-tool business automation, and cost-optimized AI reasoning.

bedrock
#bedrock#now-available#support

AWS launches DeepSeek-V3.1 as a fully managed model in Amazon Bedrock. DeepSeek-V3.1 is a hybrid open weight model that switches between a thinking mode for detailed step-by-step analysis and a non-thinking mode for faster responses.

bedrock
#bedrock#launch#now-available
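Invoking a serverless Bedrock model typically goes through the Converse API. A minimal sketch follows; the model ID is a placeholder (check the Bedrock console for the exact DeepSeek-V3.1 identifier in your Region), and the stub client stands in for `boto3.client("bedrock-runtime")` so the sketch runs anywhere.

```python
MODEL_ID = "deepseek.v3-1"  # placeholder, not a verified Bedrock model ID

def ask(client, prompt: str) -> str:
    """Send one user turn through the Converse API and return the reply text."""
    resp = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

class FakeBedrockRuntime:
    """Stand-in for a bedrock-runtime client so the sketch is runnable."""
    def converse(self, **kwargs):
        self.last = kwargs
        return {"output": {"message": {"content": [{"text": "stubbed reply"}]}}}

client = FakeBedrockRuntime()
print(ask(client, "Summarize thinking mode vs. non-thinking mode."))
```

With the real client, the same call shape works across Bedrock's serverless models, so switching between thinking and non-thinking variants is a matter of the model ID and any model-specific inference parameters.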

Today, we’re excited to announce new capabilities that further simplify the local testing experience for Lambda functions and serverless applications through integration with LocalStack, an AWS Partner, in the AWS Toolkit for Visual Studio Code. In this post, we will show you how you can enhance your local testing experience for serverless applications with LocalStack using AWS Toolkit.

lambdalocalstack
#lambda#localstack#integration

Today, we announce the availability of a Security Technical Implementation Guide (STIG) for Amazon Linux 2023 (AL2023), developed through collaboration between Amazon Web Services (AWS) and the Defense Information Systems Agency (DISA). The STIG guidelines are important for U.S. Department of Defense (DOD) and Federal customers needing strict security compliance derived from the National Institute […]

#now-available

Delightful developer experience is an important part of building serverless applications efficiently, whether you’re creating an automation script or developing a complex enterprise application. While AWS Lambda has transformed modern application development in the cloud with its serverless computing model, developers spend significant time working in their local environments. They rely on familiar IDEs, debugging […]

lexlambda
#lex#lambda

This two-part series explores the different architectural patterns, best practices, code implementations, and design considerations essential for successfully integrating generative AI solutions into both new and existing applications. In this post, we focus on patterns applicable for architecting real-time generative AI applications.

When expanding your Graviton deployment across multiple AWS Regions, careful planning helps you navigate considerations around regional instance type availability and capacity optimization. This post shows how to implement advanced configuration strategies for Graviton-enabled EC2 Auto Scaling groups across multiple Regions, helping you maximize instance availability, reduce costs, and maintain consistent application performance even in AWS Regions with limited Graviton instance type availability.

ec2graviton
#ec2#graviton#ga

In this post, we explore an efficient approach to managing encryption keys in a multi-tenant SaaS environment through centralization, addressing challenges like key proliferation, rising costs, and operational complexity across multiple AWS accounts and services. We demonstrate how implementing a centralized key management strategy using a single AWS KMS key per tenant can maintain security and compliance while reducing operational overhead as organizations scale.

lexorganizations
#lex#organizations#ga
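The key-per-tenant pattern described above can be sketched as a registry mapping tenant IDs to KMS key ARNs, with the tenant ID bound into the encryption context so every decrypt is attributable in CloudTrail. The ARNs and registry are illustrative placeholders, and a stub replaces `boto3.client("kms")` so the sketch runs offline.

```python
TENANT_KEYS = {  # illustrative registry; in practice often backed by DynamoDB
    "tenant-a": "arn:aws:kms:us-east-1:111122223333:key/example-key-a",
    "tenant-b": "arn:aws:kms:us-east-1:111122223333:key/example-key-b",
}

def encrypt_for_tenant(kms, tenant_id: str, plaintext: bytes) -> bytes:
    """Encrypt with the tenant's dedicated key, tagging the tenant in context."""
    resp = kms.encrypt(
        KeyId=TENANT_KEYS[tenant_id],
        Plaintext=plaintext,
        # The encryption context must match on decrypt and is logged,
        # giving a per-tenant audit trail on a per-tenant key.
        EncryptionContext={"tenant": tenant_id},
    )
    return resp["CiphertextBlob"]

class FakeKMS:
    """Stand-in for a boto3 KMS client so the sketch is runnable."""
    def encrypt(self, **kwargs):
        self.last = kwargs
        return {"CiphertextBlob": b"ciphertext:" + kwargs["Plaintext"]}

kms = FakeKMS()
blob = encrypt_for_tenant(kms, "tenant-a", b"secret")
print(kms.last["EncryptionContext"])
```

Centralizing the registry is what keeps this at one key per tenant rather than one key per tenant per service, which is where key proliferation and cost growth come from.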

AWS Lambda cold start latency can impact performance for latency-sensitive applications, with function initialization being the primary contributor to startup delays. Lambda SnapStart addresses this challenge by reducing cold start times from several seconds to sub-second performance for Java, Python, and .NET runtimes with minimal code changes. This post explains SnapStart's underlying mechanisms and provides performance optimization recommendations for applications using this feature.

lambda
#lambda
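A toy illustration of the code-level optimization SnapStart rewards: move expensive initialization to module load, where it is captured in the snapshot, so restored invocations only do cheap work in the handler. The "expensive" work and names here are simulated stand-ins, not SnapStart APIs.

```python
def expensive_init():
    """Stand-in for slow startup work, e.g. loading a model or large config."""
    return {i: i * i for i in range(1000)}

# Runs once at module load, i.e. before the SnapStart snapshot is taken,
# so its cost is paid at deploy time rather than on each cold start.
SQUARES = expensive_init()

def handler(event, context=None):
    # Restored/warm invocations only perform a cheap lookup.
    return {"square": SQUARES[event["n"]]}

print(handler({"n": 12}))  # → {'square': 144}
```

One caveat worth noting: state captured in the snapshot is shared across restored environments, so anything unique per invocation (random seeds, connections, credentials) should be re-established inside the handler or via runtime hooks rather than at module load.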

Imagine an AI assistant that doesn’t just respond to prompts – it reasons through goals, acts, and integrates with real-time systems. This is the promise of agentic AI. According to Gartner, by 2028 over 33% of enterprise applications will embed agentic capabilities – up from less than 1% today. While early generative AI efforts focused […]

#ga

This two-part series shows how Karrot developed a new feature platform, which consists of three main components: feature serving, a stream ingestion pipeline, and a batch ingestion pipeline. This post covers the process of collecting features in real-time and batch ingestion into an online store, and the technical approaches for stable operation.

#new-feature

In this post, we demonstrate how to deploy the DeepSeek-R1-Distill-Qwen-32B model using AWS DLCs for vLLMs on Amazon EKS, showcasing how these purpose-built containers simplify deployment of this powerful open source inference engine. This solution can help you solve the complex infrastructure challenges of deploying LLMs while maintaining performance and cost-efficiency.

lexeks
#lex#eks

Cold starts are an important consideration when building applications on serverless platforms. In AWS Lambda, they refer to the initialization steps that occur when a function is invoked after a period of inactivity or during rapid scale-up. While typically brief and infrequent, cold starts can introduce additional latency, making it essential to understand them, especially […]

lambda
#lambda

As cloud spending continues to surge, organizations must focus on strategic cloud optimization to maximize business value. This blog post explores key insights from MIT Technology Review's publication on cloud optimization, highlighting the importance of viewing optimization as a continuous process that encompasses all six AWS Well-Architected pillars.

organizations
#organizations#ga

In this post, you’ll learn how Zapier has built their serverless architecture focusing on three key aspects: using Lambda functions to build isolated Zaps, operating over a hundred thousand Lambda functions through Zapier's control plane infrastructure, and enhancing security posture while reducing maintenance efforts by introducing automated function upgrades and cleanup workflows into their platform architecture.

lambda
#lambda

In this post, we show you how to implement comprehensive monitoring for Amazon Elastic Kubernetes Service (Amazon EKS) workloads using AWS managed services. This solution demonstrates building an EKS platform that combines flexible compute options with enterprise-grade observability using AWS native services and OpenTelemetry.

lexeks
#lex#eks

In this post, you'll learn how Scale to Win configured their network topology and AWS WAF to protect against DDoS events that reached peaks of over 2 million requests per second during the 2024 US presidential election campaign season. The post details how they implemented comprehensive DDoS protection by segmenting human and machine traffic, using tiered rate limits with CAPTCHA, and preventing CAPTCHA token reuse through AWS WAF Bot Control.

waf
#waf#ga

AWS Transform for VMware is a service that tackles cloud migration challenges by significantly reducing manual effort and accelerating the migration of critical VMware workloads to AWS Cloud. In this post, we highlight its comprehensive capabilities, including streamlined discovery and assessment, intelligent network conversion, enhanced security and compliance, and orchestrated migration execution.

transform for vmware
#transform for vmware

Today, we are excited to announce the general availability of the AWS .NET Distributed Cache Provider for Amazon DynamoDB. This is a seamless, serverless caching solution that enables .NET developers to efficiently manage their caching needs across distributed systems. Consistent caching is a difficult problem in distributed architectures, where maintaining data integrity and performance across […]

dynamodb
#dynamodb#generally-available

This blog was co-authored by Afroz Mohammed and Jonathan Nunn, Software Developers on the AWS PowerShell team. We’re excited to announce the general availability of the AWS Tools for PowerShell version 5, a major update that brings new features and improvements in security, along with a few breaking changes. New features: You can now cancel […]

#generally-available#new-feature#update#improvement

In this post, we explore the Amazon Bedrock baseline architecture and how you can secure and control network access to your various Amazon Bedrock capabilities within AWS network services and tools. We discuss key design considerations, such as using Amazon VPC Lattice auth policies, Amazon Virtual Private Cloud (Amazon VPC) endpoints, and AWS Identity and Access Management (IAM) to restrict and monitor access to your Amazon Bedrock capabilities.

bedrockiam
#bedrock#iam

Software development is far more than just writing code. In reality, a developer spends a large amount of time maintaining existing applications and fixing bugs. For example, migrating a Go application from the older AWS SDK for Go v1 to the newer v2 can be a significant undertaking, but it’s a crucial step to future-proof […]

amazon qq developer
#amazon q#q developer

Organizations managing large audio and video archives face significant challenges in extracting value from their media content. Consider a radio network with thousands of broadcast hours across multiple stations and the challenges they face to efficiently verify ad placements, identify interview segments, and analyze programming patterns. In this post, we demonstrate how you can automatically transform unstructured media files into searchable, analyzable content.

organizations
#organizations#ga

We’re excited to announce that the AWS Deploy Tool for .NET now supports deploying .NET applications to select ARM-based compute platforms on AWS! Whether you’re deploying from Visual Studio or using the .NET CLI, you can now target cost-effective ARM infrastructure like AWS Graviton with the same streamlined experience you’re used to. Why deploy to […]

graviton
#graviton#support

Version 4.0 of the AWS SDK for .NET has been released for general availability (GA). V4 has been in development for a little over a year in our SDK’s public GitHub repository with 13 previews being released. This new version contains performance improvements, consistency with other AWS SDKs, and bug and usability fixes that required […]

#preview#ga#improvement

Today, AWS launches the developer preview of the AWS IoT Device SDK for Swift. The IoT Device SDK for Swift empowers Swift developers to create IoT applications for Linux and Apple macOS, iOS, and tvOS platforms using the MQTT 5 protocol. The SDK supports Swift 5.10+ and is designed to help developers easily integrate with […]

#launch#preview#support

We are excited to announce the Developer Preview of the Amazon S3 Transfer Manager for Rust, a high-level utility that speeds up and simplifies uploads and downloads with Amazon Simple Storage Service (Amazon S3). Using this new library, developers can efficiently transfer data between Amazon S3 and various sources, including files, in-memory buffers, memory streams, […]

s3
#s3#preview

In Part 1 of our blog posts for .NET Aspire and AWS Lambda, we showed you how .NET Aspire can be used for running and debugging .NET Lambda functions. In this part, Part 2, we’ll show you how to take advantage of the .NET Aspire programming model for best practices and for connecting dependent resources […]

lambda
#lambda

In a recent post we gave some background on .NET Aspire and introduced our AWS integrations with .NET Aspire that integrate AWS into the .NET dev inner loop for building applications. The integrations included how to provision application resources with AWS CloudFormation or AWS Cloud Development Kit (AWS CDK) and using Amazon DynamoDB local for […]

lambdadynamodbcloudformation
#lambda#dynamodb#cloudformation#ga#integration

.NET Aspire is a new way of building cloud-ready applications. In particular, it provides an orchestration for local environments in which to run, connect, and debug the components of distributed applications. Those components can be .NET projects, databases, containers, or executables. .NET Aspire is designed to have integrations with common components used in distributed applications. […]

#integration

AWS announces important configuration updates coming July 31st, 2025, affecting default settings in AWS SDKs and CLIs. Two key changes include switching the AWS Security Token Service (STS) endpoint to regional and updating the default retry strategy to standard. These updates aim to improve service availability and reliability by implementing regional endpoints to reduce cross-regional dependencies and introducing token-bucket throttling for standardized retry behavior. Organizations should test their applications before the release date and can opt in early or temporarily opt out of these changes. These updates align with AWS best practices for optimal service performance and security.

organizations
#organizations#ga#update
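For SDKs and CLIs that read the shared config file, the new defaults can be adopted ahead of the cutover with two existing settings. A sketch of the relevant keys (both are established shared-config options; the values shown are the new defaults described above):

```ini
# ~/.aws/config — opting in early to the July 31st, 2025 defaults
[default]
sts_regional_endpoints = regional
retry_mode = standard
```

The same settings can also be supplied via the `AWS_STS_REGIONAL_ENDPOINTS` and `AWS_RETRY_MODE` environment variables, which is convenient for testing one application at a time before changing the shared file.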