AWS AI News Hub

Your central source for the latest AWS artificial intelligence and machine learning service announcements, features, and updates

Amazon Application Recovery Controller (ARC) Region switch allows you to orchestrate the specific steps to switch your multi-Region applications to operate out of another AWS Region and achieve a bounded recovery time in the event of a Regional impairment to your applications. Region switch saves hours of engineering effort and eliminates the operational overhead previously required to complete failover steps, create custom dashboards, and manually gather evidence of a successful recovery for applications across your organization and hosted in multiple AWS accounts. Today, we are announcing three new Region switch capabilities:

- AWS GovCloud (US) support: ARC Region switch is now generally available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.
- Plan execution reports: Region switch now automatically generates a comprehensive report from each plan execution and saves it to an Amazon S3 bucket of your choice. Each report includes a detailed timeline of events for the recovery operation, resources in scope for the Region switch, alarm states for optional application status alarms, and recovery time objective (RTO) calculations. This eliminates the manual effort previously required to compile evidence and documentation for compliance officers and auditors.
- DocumentDB global cluster execution blocks: Adding to the catalog of 9 execution blocks, Region switch now supports Amazon DocumentDB global cluster execution blocks for automated multi-Region database recovery. This feature allows you to orchestrate DocumentDB global cluster failover and switchover operations within your Region switch plans.

To get started, build a Region switch plan using the ARC console, API, or CLI. See the AWS Regional Services List for availability information. Visit our home page or read the documentation.
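
Because each plan execution report is written to an S3 bucket you configure, retrieving it for auditors is a plain S3 operation. A minimal sketch; the bucket name, prefix, and report file name below are illustrative placeholders:

```shell
# List the Region switch execution reports in the configured bucket
# (bucket name and prefix are placeholders for your own configuration)
aws s3 ls s3://example-arc-reports/region-switch/ --recursive

# Download one execution report to keep as recovery evidence
aws s3 cp s3://example-arc-reports/region-switch/plan-execution-report.json .
```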

AWS Private Certificate Authority (AWS Private CA) now supports Online Certificate Status Protocol (OCSP) in China and AWS GovCloud (US) Regions. AWS Private CA is a fully managed certificate authority service that makes it easy to create and manage private certificates for your organization without the operational overhead of running your own CA infrastructure. OCSP enables real-time certificate validation, allowing applications to check the revocation status of individual certificates on demand rather than downloading Certificate Revocation List (CRL) files. With OCSP support, customers in these Regions can implement more efficient certificate validation with minimal bandwidth, typically requiring a few hundred bytes per query, versus downloading large CRLs that can be hundreds of kilobytes or larger. This enables real-time revocation checks for use cases such as validating internal microservices communications, implementing zero trust security architectures, and authenticating IoT devices. AWS Private CA fully manages the OCSP responder infrastructure, providing high availability without requiring you to deploy or maintain OCSP servers. OCSP is now also available in the following AWS Regions: China (Beijing), China (Ningxia), AWS GovCloud (US-East), and AWS GovCloud (US-West). To enable OCSP for your certificate authorities, use the AWS Private CA console, AWS CLI, or API. To learn more about OCSP, see Certificate Revocation in the AWS Private CA User Guide. For pricing information, visit the AWS Private CA pricing page.
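
Enabling OCSP on an existing CA can be done with a single CLI call; a minimal sketch, where the certificate authority ARN is a placeholder:

```shell
# Turn on the managed OCSP responder for an existing private CA
# (the ARN is a placeholder; substitute your CA's ARN)
aws acm-pca update-certificate-authority \
  --certificate-authority-arn arn:aws:acm-pca:us-gov-west-1:111122223333:certificate-authority/EXAMPLE \
  --revocation-configuration '{"OcspConfiguration":{"Enabled":true}}'
```

Certificates issued after this change embed the OCSP responder URL; existing certificates are unaffected until reissued.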

Today, AWS announces SOCI (Seekable Open Container Initiative) indexing support for Amazon SageMaker Studio, reducing container startup times by 30-50% when using custom images. Amazon SageMaker Studio is a fully integrated, browser-based environment for end-to-end machine learning development. SageMaker Studio provides pre-built container images for popular ML frameworks like TensorFlow, PyTorch, and Scikit-learn that enable quick environment setup. However, when data scientists need to tailor environments for specific use cases with additional libraries, dependencies, or configurations, they can build and register custom container images with pre-configured components to ensure consistency across projects. As ML workloads become increasingly complex, these custom container images have grown in size, leading to startup times of several minutes that create bottlenecks in iterative ML development where quick experimentation and rapid prototyping are essential. SOCI indexing addresses this challenge by enabling lazy loading of container images, downloading only the necessary components to start applications with additional files loaded on-demand as needed. Instead of waiting several minutes for complete custom image downloads, users can begin productive work in seconds while the environment completes initialization in the background. To use SOCI indexing, create a SOCI index for your custom container image using tools like Finch CLI, nerdctl, or Docker with SOCI CLI, push the indexed image to Amazon Elastic Container Registry (ECR), and reference the image index URI when creating SageMaker Image resources. SOCI indexing is available in all AWS Regions where Amazon SageMaker Studio is available. To learn more about implementing SOCI indexing for your SageMaker Studio custom images, see Bring your own SageMaker image in the Amazon SageMaker Developer Guide.
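
The index-and-push flow might look like the following with nerdctl and the soci CLI; the image URI is a placeholder, and exact flags can vary by soci version:

```shell
# Pull the custom Studio image into the local containerd content store
nerdctl pull 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-studio-image:latest

# Build a SOCI index for the image
soci create 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-studio-image:latest

# Push the SOCI index artifacts to ECR alongside the image
soci push 111122223333.dkr.ecr.us-east-1.amazonaws.com/my-studio-image:latest
```

After the push, reference the image URI as usual when creating the SageMaker Image resource; the Studio runtime discovers the index in the registry.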

Amazon Relational Database Service (RDS) now offers enhanced observability for your snapshot exports to Amazon S3, providing detailed insights into export progress, failures, and performance for each task. These notifications enable you to monitor your exports with greater granularity and predictability. With snapshot export to S3, you can export data from your RDS database snapshots to Apache Parquet format in your Amazon S3 bucket. This launch introduces four new event types, including current export progress and table-level notifications for long-running tables, providing more granular visibility into your snapshot export performance and recommendations for troubleshooting export operation issues. Additionally, you can view export progress, such as the number of tables exported and pending, along with exported data sizes, enabling you to better plan your operations and workflows. You can subscribe to these events through Amazon Simple Notification Service (SNS) to receive notifications and view the export events through the AWS Management Console, AWS CLI, or SDK. This feature is available for RDS PostgreSQL, RDS MySQL, and RDS MariaDB engines in all Commercial Regions where RDS is generally available. To learn more about the new event types, see Event categories in RDS.
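
Subscribing an SNS topic to snapshot events is one way to receive the new export notifications; a hedged sketch, where the subscription name and topic ARN are placeholders and the exact source type and categories for the new export events should be confirmed in the Event categories documentation:

```shell
# Create an RDS event subscription that delivers snapshot events,
# including the new export progress and failure notifications, to SNS
# (subscription name and topic ARN are placeholders; check the
# Event categories documentation for the export event categories)
aws rds create-event-subscription \
  --subscription-name snapshot-export-events \
  --sns-topic-arn arn:aws:sns:us-east-1:111122223333:rds-export-alerts \
  --source-type db-snapshot
```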

Amazon Bedrock Data Automation (BDA) now supports blueprint instruction optimization, enabling you to improve the accuracy of your custom field extraction using just a few example document assets with ground truth labels. BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. Blueprint instruction optimization automatically refines the natural language instructions in your blueprints, helping you achieve production-ready accuracy in minutes without model training or fine-tuning. With blueprint instruction optimization, you can now bring up to 10 representative document assets from your production workload and provide the correct, expected values for each field. Blueprint instruction optimization analyzes the differences between your expected results and the Data Automation inference results, and then refines the natural language instructions to improve extraction accuracy across your examples. For your intelligent document processing applications, you can now improve the accuracy of extracting insights such as invoice line items, contract terms, tax form fields, or medical billing codes. After optimization completes, you receive detailed evaluation metrics including exact match rates and F1 scores measured against your ground truth, giving you confidence that your blueprint is ready for production deployment. Data Automation blueprint instruction optimization for documents is available in all AWS Regions where Amazon Bedrock Data Automation is supported. To learn more, see the Bedrock Data Automation User Guide and the Amazon Bedrock Pricing page. To get started with blueprint instruction optimization, navigate to your blueprint in the Amazon Bedrock console, go to Data Automation, select your custom outputs for documents, and select Start Optimization.

Amazon Timestream for InfluxDB now offers a restart API for both InfluxDB versions 2 and 3. This new capability enables customers to trigger system restarts on their database instances directly through the AWS Management Console, API, or CLI, to streamline operational management of their time-series database environments. With the restart API, customers can perform resilience testing to validate their application's behavior during database restarts and address health-related issues without requiring support intervention. This feature enhances operational flexibility for DevOps teams managing mission-critical workloads, allowing them to implement more comprehensive testing strategies and respond faster to performance concerns by providing direct control over database instance lifecycle operations. Amazon Timestream for InfluxDB restart capability is available in all Regions where Timestream for InfluxDB is offered. To get started with Amazon Timestream for InfluxDB 3, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.

AWS announces Cost Allocation tags support for account tags across AWS Cost Management products, enabling customers with multiple member accounts to utilize their existing AWS Organizations account tags directly in cost management tools. Account tags are applied at the account level in AWS Organizations and automatically apply to all metered usage within tagged accounts, eliminating the need to manually configure and maintain separate account groupings in AWS Cost Explorer, Cost and Usage Reports, AWS Budgets, and Cost Categories. With account tag support, customers can analyze costs by account tag directly in Cost Explorer and Cost and Usage Reports (CUR 2.0 and FOCUS). Customers can set up AWS Budgets and AWS Cost Anomaly Detection alerts on groups of accounts without configuring lists of account IDs. Customers can also build complex cost categories on top of account tags for further categorization. Account tags enable cost allocation for untaggable resources including refunds, credits, and certain service charges that cannot be tagged at the resource level. When new accounts join the organization or existing accounts are removed, customers simply add or update relevant tags, and the changes automatically apply across all cost management products. To get started, customers apply tags to accounts in the AWS Organizations console, then activate those account tags from the Cost Allocation Tags page in the Billing and Cost Management console. This feature is generally available in all AWS Regions, excluding GovCloud (US) Regions and China (Beijing) and China (Ningxia) Regions. To learn more, see organizing and tracking costs using AWS cost allocation tags.
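
The tag-then-activate flow can be sketched with the CLI; the account ID and tag key below are placeholders:

```shell
# Tag a member account in AWS Organizations
# (run from the management account; account ID and tag are placeholders)
aws organizations tag-resource \
  --resource-id 111122223333 \
  --tags Key=cost-center,Value=analytics

# Activate the account tag as a cost allocation tag so it appears in
# Cost Explorer, CUR, Budgets, and Cost Categories
aws ce update-cost-allocation-tags-status \
  --cost-allocation-tags-status TagKey=cost-center,Status=Active
```

When accounts join or leave the organization, only the `tag-resource`/`untag-resource` step needs repeating; the activation carries over by tag key.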

Amazon Elastic Container Registry (ECR) now supports automatic repository creation on image push. This new capability simplifies container workflows by having ECR automatically create repositories if they don't exist when an image is pushed, without customers having to pre-create repositories before pushing container images. Now when customers push images, ECR will automatically create repositories according to defined repository creation template settings. Create on push is available in all AWS commercial and AWS GovCloud (US) Regions. To learn more about repository creation templates, please visit our documentation. You can learn more about storing, managing and deploying container images and artifacts with Amazon ECR, including how to get started, from our product page and user guide.
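
With a matching repository creation template in place, a plain push to a not-yet-existing repository now succeeds; a sketch, where the account ID, Region, and repository name are placeholders:

```shell
# Authenticate Docker to the private registry (account ID is a placeholder)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 111122223333.dkr.ecr.us-east-1.amazonaws.com

# Tag and push; ECR creates team/myapp on push using the settings
# from the matching repository creation template
docker tag myapp:latest 111122223333.dkr.ecr.us-east-1.amazonaws.com/team/myapp:latest
docker push 111122223333.dkr.ecr.us-east-1.amazonaws.com/team/myapp:latest
```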

AWS IoT now supports event-based logging, a new capability that helps developers reduce Amazon CloudWatch costs while improving log management efficiency. This feature enables targeted logging for individual events with customizable log levels and Amazon CloudWatch log group destinations. With event-based logging, you can set different log levels for different types of IoT events based on their operational importance. For example, you can configure INFO-level logging for certificateProvider events while maintaining ERROR-level logging for less critical activities like connectivity events. This granularity allows you to maintain comprehensive visibility into your IoT operations without the overhead of logging every activity at the same verbosity level, improving log searchability and analysis efficiency while helping to reduce costs. Event-based logging is now available for configuration through the AWS IoT console, CLI, and API in all AWS Regions where AWS IoT is supported. To learn more about configuring event-based logging, visit the AWS IoT Developer Guide.

Amazon WorkSpaces Applications now offers images powered by Microsoft Windows Server 2025, enabling customers to launch streaming instances with the latest features and enhancements from Microsoft’s newest server operating system. This update ensures your application streaming environment benefits from improved security, performance, and modern capabilities. With Windows Server 2025 support, you can deliver the Microsoft Windows 11 desktop experience to your end users, giving you greater flexibility in choosing the right operating system for your specific application and desktop streaming needs. Whether you're running business-critical applications or providing remote access to specialized software, you now have expanded options to align your infrastructure decisions with your unique workload requirements and organizational standards. You can select from AWS-provided public images or create custom images tailored to your requirements using Image Builder. Support for Microsoft Windows Server 2025 is now generally available in all AWS Regions where Amazon WorkSpaces Applications is offered. To get started with Microsoft Windows Server 2025 images, visit the Amazon WorkSpaces Applications documentation. For pricing details, see the Amazon WorkSpaces Applications Pricing page.

Amazon Redshift ODBC 2.x driver now supports Apple macOS, expanding platform compatibility for developers and analysts. This enhancement allows Apple macOS users to connect to Amazon Redshift clusters using the latest Amazon Redshift ODBC 2.x driver version. You can use an ODBC connection to connect to your Amazon Redshift cluster from many third-party SQL client tools and applications. The Amazon Redshift ODBC 2.x native driver support enables you to access Amazon Redshift features such as data sharing write capabilities and AWS IAM Identity Center integration, features that are only available through Amazon Redshift drivers. This native Apple macOS support enables seamless integration with Extract, Transform, Load (ETL) and Business Intelligence (BI) tools, allowing you to use Apple macOS while accessing the full suite of Amazon Redshift capabilities. We recommend that you upgrade to the latest Amazon Redshift ODBC 2.x driver version to access new features. For installation instructions and system requirements, please see the Amazon Redshift ODBC 2.x driver documentation.

AWS Glue now supports zero-ETL for self-managed database sources in seven additional AWS Regions. Using Glue zero-ETL, you can set up an integration to replicate data from Oracle, SQL Server, MySQL, or PostgreSQL databases located on premises or on Amazon EC2 to Redshift with a simple experience that eliminates configuration complexity. AWS Glue zero-ETL for self-managed database sources automatically creates an integration for ongoing replication of data from your on-premises or EC2 databases through a simple, no-code interface. This feature further reduces users' operational burden and saves weeks of engineering effort needed to design, build, and test data pipelines to ingest data from self-managed databases to Redshift. AWS Glue zero-ETL for self-managed database sources is available in the following additional AWS Regions: Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (London), South America (São Paulo), and US East (N. Virginia). To get started, sign in to the AWS Management Console. For more information, visit the AWS Glue page or review the AWS Glue zero-ETL documentation.

Amazon EC2 now supports an Availability Zone ID (AZ ID) parameter, enabling you to create and manage resources such as instances, volumes, and subnets using consistent zone identifiers. AZ IDs are consistent, static identifiers that represent the same physical location across all AWS accounts, helping you optimize resource placement. Prior to this launch, you had to use an AZ name when creating a resource, but these names could map to different physical locations in different accounts. This mapping made it difficult to ensure resources were always co-located, especially when operating with multiple accounts. Now, you can specify the AZ ID parameter directly in your EC2 APIs to guarantee consistent placement of resources. AZ IDs always refer to the same physical location across all accounts, which means you no longer need to manually map AZ names across your accounts or deal with the complexity of tracking and aligning zones. This capability is now available for resources including instances, launch templates, hosts, reserved instances, fleet, spot instances, volumes, capacity reservations, network insights, VPC endpoints and subnets, network interfaces, fast snapshot restore, and instance connect. This feature is available in all AWS Regions, including the China and AWS GovCloud (US) Regions. To learn more about Availability Zone IDs, visit the documentation.
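
You can inspect your account's AZ-name-to-AZ-ID mapping and then target the AZ ID directly, for example when creating a subnet; the VPC ID and CIDR block below are placeholders:

```shell
# Show how this account's AZ names map to the static AZ IDs
aws ec2 describe-availability-zones \
  --query 'AvailabilityZones[].[ZoneName,ZoneId]' --output table

# Create a subnet pinned to a physical location by AZ ID, so every
# account using use1-az1 lands in the same place
# (VPC ID and CIDR block are placeholders)
aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.1.0/24 \
  --availability-zone-id use1-az1
```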

Amazon WorkSpaces Applications now offers support for Ubuntu Pro 24.04 LTS on Elastic fleets, enabling Independent Software Vendors (ISVs) and central IT organizations to stream Ubuntu desktop applications to users while leveraging the flexibility, scalability, and cost-effectiveness of the AWS Cloud. Amazon WorkSpaces Applications is a fully managed, secure desktop and application streaming service that provides users with instant access to their desktops and applications from anywhere. Within Amazon WorkSpaces Applications, Elastic fleet is a serverless fleet type that lets you stream desktop applications to your end users from an AWS-managed pool of streaming instances without needing to predict usage, create and manage scaling policies, or create an image. The Elastic fleet type is designed for customers that want to stream applications to users without managing any capacity or creating WorkSpaces Applications images. To get started, sign in to the WorkSpaces Applications management console and select the AWS Region of your choice. For the full list of Regions where WorkSpaces Applications is available, see the AWS Region Table. Amazon WorkSpaces Applications offers pay-as-you-go pricing. For more information, see Amazon WorkSpaces Applications Pricing.

Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Europe (Spain) Region. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances. C8a instances deliver 33% more memory bandwidth compared to C7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 C7a instances, they are up to 57% faster for GroovyJVM, allowing better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes. This range of instance sizes allows customers to precisely match their workload requirements. C8a instances are built on the AWS Nitro System and are ideal for high performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the Amazon EC2 C8a instance page.

AWS IoT Core now lets you batch multiple IoT messages into a single HTTP rule action before routing the messages to downstream HTTP endpoints. This enhancement helps you reduce cost and throughput overhead when ingesting telemetry from your Internet of Things (IoT) workloads. AWS IoT Core is a fully managed service that securely connects millions of IoT devices to the AWS cloud. Using rules for AWS IoT, you can filter, process, and decode device data, and route that data to AWS services or third-party endpoints via 20+ AWS IoT rule actions, such as the HTTP rule action, which routes data to HTTP endpoints. With the new feature, you can now batch messages together before routing that data set to downstream HTTP endpoints. To efficiently process IoT messages using the new batching capability, connect your IoT devices to AWS IoT Core and define an HTTP rule action with your desired batch parameters. AWS IoT Core will then process incoming messages according to these specifications and route the messages to your designated HTTP endpoints. For example, you can now combine IoT messages published from multiple smart home devices in a single batch and route it to an HTTP endpoint in your smart home platform. This new feature is available in all AWS Regions where AWS IoT Core is available, including the AWS GovCloud (US) and Amazon China Regions. To learn more, visit our developer guide, pricing page, and API documentation.

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports dual-stack connectivity (IPv4 and IPv6) for new connectors on Amazon MSK Connect. This capability enables customers to create connectors on MSK Connect using both IPv4 and IPv6 protocols, in addition to the existing IPv4-only option. It helps customers modernize applications for IPv6 environments while maintaining IPv4 compatibility, making it easier to meet compliance requirements and prepare for future network architectures. Amazon MSK Connect is a fully managed service that allows you to deploy and operate Apache Kafka Connect connectors in a fully managed environment. Previously, connectors on MSK Connect only supported IPv4 addressing for all connectivity options. With this new capability, customers can now enable dual-stack connectivity (IPv4 and IPv6) on new connectors using the Amazon MSK Console, AWS CLI, SDK, or CloudFormation by setting the Network Type parameter during connector creation. All connectors on MSK Connect use IPv4-only connectivity by default unless you explicitly opt in to dual-stack when creating them. Existing connectors will continue using IPv4 connectivity; to change this, you must delete and recreate the connector. Dual-stack connectivity for new connectors on MSK Connect is now available in all AWS Regions where Amazon MSK Connect is available, at no additional cost. To learn more about Amazon MSK dual-stack support, refer to the Amazon MSK developer guide.

Amazon WorkSpaces now supports IPv6 for WorkSpaces domains and external endpoints, enabling users to connect through an IPv4/IPv6 dual-stack configuration from compatible clients (excluding SAML authentication). This helps customers meet IPv6 compliance requirements and eliminates the need for costly networking equipment to handle address translation between IPv4 and IPv6. Dual-stack support for WorkSpaces addresses the Internet's growing demand for IP addresses by offering a vastly larger address space than IPv4. This eliminates the need to manage overlapping address ranges within your Virtual Private Cloud (VPC). Customers can deploy WorkSpaces through dual-stack that supports both IPv4 and IPv6 protocols while maintaining backward compatibility with existing IPv4 systems. Customers can also connect to their WorkSpaces through PrivateLink VPC endpoints over IPv6, enabling them to access the service privately without routing traffic over the public internet. Connecting to Amazon WorkSpaces over IPv4/IPv6 dual-stack configuration is supported in all AWS Regions where Amazon WorkSpaces is available, including the AWS GovCloud (US East & US West) Regions. There is no additional cost for this feature. To enable IPv6, you must use the latest WorkSpaces client application for Windows, macOS, Linux, PCoIP zero clients, or web access. To learn more about IPv6 support on Amazon WorkSpaces, refer to the Amazon WorkSpaces administration guide.

Amazon ECS Managed Instances now supports Amazon EC2 Spot Instances, extending the range of capabilities available with AWS-managed infrastructure. With this launch, you can leverage spare EC2 capacity at up to 90% discount compared to On-Demand prices for fault-tolerant workloads, while AWS handles infrastructure management. ECS Managed Instances is a fully managed compute option designed to eliminate infrastructure management overhead, dynamically scale EC2 instances to match your workload requirements, and continuously optimize task placement to reduce infrastructure costs. You simply define your task requirements, such as the number of vCPUs, memory size, and CPU architecture, and Amazon ECS automatically provisions, configures, and operates the optimal EC2 instances within your AWS account using AWS-controlled access. You can also specify desired instance types in the Managed Instances capacity provider configuration, including GPU-accelerated, network-optimized, and burstable performance instances, to run your workloads on the instance families you prefer. With today's launch, you can additionally configure a new parameter, capacityOptionType, as spot or on-demand in your capacity provider configuration. Support for EC2 Spot Instances is available in all AWS Regions where Amazon ECS Managed Instances is available. You will be charged for the management of compute provisioned, in addition to your Amazon EC2 Spot costs. To learn more about ECS Managed Instances, visit the feature page, documentation, and AWS News launch blog.
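
The new parameter is set in the capacity provider configuration. A heavily abbreviated, illustrative sketch; only capacityOptionType comes from the announcement, and the surrounding JSON shape and provider name are simplified placeholders that should be checked against the ECS documentation:

```shell
# Create a Managed Instances capacity provider that uses Spot capacity.
# The JSON is abbreviated and illustrative; only the capacityOptionType
# field is taken from the announcement.
cat > mi-provider.json <<'EOF'
{
  "name": "mi-spot-provider",
  "managedInstancesProvider": {
    "capacityOptionType": "spot"
  }
}
EOF
aws ecs create-capacity-provider --cli-input-json file://mi-provider.json
```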

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the Asia Pacific (Seoul), South America (São Paulo), and Asia Pacific (Tokyo) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads. They are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to R7i. R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. R8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. R8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, the highest among all comparable machines in on-premises and cloud environments, delivering exceptional performance for mission-critical SAP workloads. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. For more information about the new R8i and R8i-flex instances, visit the AWS News blog.

AWS Lambda durable functions enable developers to build reliable multi-step applications and AI workflows within the Lambda developer experience. Starting today, durable functions are available in 14 additional AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Milan), Europe (Stockholm), Europe (Spain), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Malaysia), and Asia Pacific (Thailand). Lambda durable functions extend the Lambda programming model with new primitives in your event handler, such as "steps" and "waits", allowing you to checkpoint progress, automatically recover from failures, and pause execution without incurring compute charges for on-demand functions. With this Region expansion, you can orchestrate complex processes such as order workflows, user onboarding, and AI-assisted tasks closer to your users and data, helping you to meet low-latency and data residency requirements while standardizing on a single serverless programming model. You can activate durable functions for new Python (versions 3.13 and 3.14) or Node.js (versions 22 and 24) based Lambda functions using the AWS Lambda API, AWS Management Console, or AWS SDK. You can also use infrastructure as code tools such as AWS CloudFormation, AWS Serverless Application Model (AWS SAM), and the AWS Cloud Development Kit (AWS CDK). For more information on durable functions, visit the AWS Lambda Developer Guide. To learn about pricing, visit AWS Lambda pricing. For the latest Region availability, visit the AWS Capabilities by Region page.

AWS Direct Connect now supports resilience testing with AWS Fault Injection Service (FIS), a fully managed service for running controlled fault injection experiments to improve application performance, observability, and resilience. With this new capability, you can test and observe how your applications respond when Border Gateway Protocol (BGP) sessions over your Virtual Interfaces are disrupted, and validate your resilience mechanisms in a controlled environment. For example, you can validate that traffic routes to redundant Virtual Interfaces when a primary Virtual Interface's BGP session is disrupted and that your applications continue to function as expected. This capability is particularly valuable for proactively testing Direct Connect architectures where failover is critical to maintaining network connectivity. This new action is available in all AWS Commercial Regions where AWS FIS is offered. To learn more, visit the AWS FIS product page and the Direct Connect FIS actions user guide.
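
Since the announcement doesn't name the action identifier, you can discover the new Direct Connect action and its parameters from the FIS action catalog before building an experiment template:

```shell
# List FIS actions and filter for Direct Connect entries to find the
# new BGP disruption action's id and parameter names
aws fis list-actions \
  --query "actions[?contains(id, 'directconnect')].{id:id,description:description}"
```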


AWS Clean Rooms now supports change requests to modify existing collaboration settings, offering customers greater flexibility in managing collaborations and developing new use cases with their partners. With this new capability, you can submit a change request for a collaboration, including adding new members, updating member abilities, and modifying collaboration auto-approval settings. To maintain security, all collaboration members must approve change requests before updates take effect, ensuring that existing privacy controls remain protected. For transparency, all change requests are logged in the change history for member review. For example, when a publisher creates a Clean Rooms collaboration with an advertiser, the publisher can add the advertiser’s marketing agency as a new member that can receive the analysis results directly in their account, enabling faster time-to-insights and streamlined campaign optimizations with the publisher. This approach reduces onboarding time while maintaining the existing privacy controls for you and your partners. With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.


Amazon ECS now enables you to define weekly event windows for scheduling task retirements on AWS Fargate. This capability provides precise control over when infrastructure updates and task replacements occur, helping prevent disruption to mission-critical workloads during peak business hours. AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. As part of the AWS shared responsibility model, Fargate maintains the underlying infrastructure with periodic platform updates. Fargate automatically retires your tasks for these updates and notifies you about upcoming task retirements via email and the AWS Health Dashboard. By default, tasks are retired 7 days after notification, but you can configure the fargateTaskRetirementWaitPeriod account setting to extend the retirement period to 14 days or initiate immediate retirement (0 days). Previously, you could build automation using the task retirement notification and wait period to perform service updates or task replacements on your own cadence. With today's launch, you can now use the Amazon EC2 event windows interface to define weekly event windows for precise control over the timing of Fargate task retirements. For example, you can schedule task retirements for a mission-critical service that requires high uptime during weekdays by configuring retirements to occur only on weekends. To get started, configure the AWS account setting fargateEventWindows to enabled as a one-time setup. Once enabled, configure Amazon EC2 event window(s) by specifying time ranges, and associate the event window(s) with your ECS tasks by selecting Amazon ECS-managed tags as the association target. Use the aws:ecs:clusterArn tag for targeting your tasks in an ECS cluster, the aws:ecs:serviceArn tag for ECS services, or aws:ecs:fargateTask with a value of true to apply the window to all Fargate tasks. This feature is now available in all commercial AWS Regions. 
To learn more, visit our documentation.
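The setup flow above can be sketched as boto3-style request shapes. The account-setting name and the ECS-managed tag key come from the announcement; the event-window fields follow the existing EC2 CreateInstanceEventWindow API and may differ slightly for the ECS integration, so treat this as an assumption to verify against the documentation.

```python
# Sketch of the three setup steps, as request payloads (not live API calls).
# Step 1: one-time account setting (ecs.put_account_setting_default).
enable_setting = {
    "name": "fargateEventWindows",
    "value": "enabled",
}

# Step 2: a weekly event window (ec2.create_instance_event_window),
# here restricting retirements to Saturday 00:00 through Sunday 23:00 UTC.
create_window = {
    "Name": "weekend-retirements",
    "TimeRanges": [
        {"StartWeekDay": "saturday", "StartHour": 0,
         "EndWeekDay": "sunday", "EndHour": 23},
    ],
}

# Step 3: associate the window with Fargate tasks via an ECS-managed tag
# (ec2.associate_instance_event_window); the window ID is a placeholder.
associate_window = {
    "InstanceEventWindowId": "iew-0123456789abcdef0",
    "AssociationTarget": {
        "InstanceTags": [{"Key": "aws:ecs:fargateTask", "Value": "true"}],
    },
}
```

Swapping the tag for aws:ecs:clusterArn or aws:ecs:serviceArn narrows the window to one cluster or service, as described above.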


Amazon Neptune Database is now available in the Europe (Zurich) Region on engine versions 1.4.5.0 and later. You can now create Neptune clusters using R5, R5d, R6g, R6i, X2iedn, T4g, and T3 instance types in the AWS Europe (Zurich) Region. Amazon Neptune Database is a fast, reliable, and fully managed graph database as a service that makes it easy to build and run applications that work with highly connected datasets. You can build applications using Apache TinkerPop Gremlin or openCypher on the Property Graph model, or using the SPARQL query language on W3C Resource Description Framework (RDF). Neptune also offers enterprise features such as high availability, automated backups, and network isolation to help customers quickly deploy applications to production. To get started, you can create a new Neptune cluster using the AWS Management Console, AWS CLI, or a quickstart AWS CloudFormation template. For more information on pricing and region availability, refer to the Neptune pricing page and AWS Region Table.


Today, Amazon Simple Email Service (SES) announces email validation, a new capability that helps customers reduce bounce rates and protect sender reputation by validating email addresses before sending. Customers can validate individual addresses via API calls or enable automatic validation across all outbound emails. Email validation helps customers maintain list hygiene, reduce bounces, and improve delivery by identifying invalid addresses that could damage sender reputation. The API provides detailed validation insights such as syntax checks and DNS records. With Auto-Validation enabled, SES automatically reviews every outbound email address without requiring any code changes. Auto-Validation can be configured at the account level or at the configuration set level using simple toggles in the AWS console, enabling seamless integration with existing workflows. Email validation is available in all AWS Regions where Amazon SES is available. To learn more, see the documentation on Email Validation in the Amazon SES Developer Guide. To start using Email Validation, visit the Amazon SES console.
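To illustrate just the syntax portion of what such validation covers, here is a minimal local check. This is a hypothetical helper, not the SES API; SES additionally inspects DNS records and returns much richer insights.

```python
# Local illustration of an email syntax check (not the SES validation API).
import re

# Rough shape check: one "@", no whitespace, and a dotted domain.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def basic_syntax_check(address):
    """Returns True if the address has a plausible mailbox@domain.tld shape."""
    return bool(EMAIL_RE.match(address))

results = {a: basic_syntax_check(a) for a in ["user@example.com", "not-an-email"]}
```

A managed validator goes much further (MX lookups, disposable-domain detection), which is why offloading this to the service keeps lists clean without extra code.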


Amazon Managed Streaming for Apache Kafka (MSK) now supports Apache Kafka version 3.9 for Express Brokers. This release introduces support for KRaft (Kafka Raft), Apache Kafka's new consensus protocol that eliminates the dependency on Apache ZooKeeper for metadata management. KRaft shifts metadata management in Kafka clusters from external Apache ZooKeeper nodes to a group of controllers within Kafka. This change allows metadata to be stored and replicated as topics within Kafka brokers, resulting in faster propagation of metadata. New Express Broker clusters created using Kafka v3.9 will automatically use KRaft as the metadata management mode, giving you the benefits of this modern architecture from the start. The ability to upgrade existing clusters to v3.9 will be available in a future release. Amazon MSK Express Brokers with Kafka v3.9 are available in all AWS regions where MSK Express is supported. To get started, create a new Express Broker cluster and select Kafka version 3.9 in the AWS Management Console or via the AWS CLI or AWS SDKs.
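As a sketch, creating an Express Broker cluster on Kafka 3.9 might look like the following CreateClusterV2 request. The field names follow the boto3 MSK API; the version string, instance type, and subnet IDs are placeholders to adapt.

```python
# Sketch of an MSK CreateClusterV2 request for an Express Broker cluster on
# Kafka 3.9 (placeholder values; verify exact version string in the console).
create_cluster_request = {
    "ClusterName": "express-kraft-demo",
    "Provisioned": {
        "KafkaVersion": "3.9.x",        # new clusters on 3.9 use KRaft metadata mode
        "NumberOfBrokerNodes": 3,
        "BrokerNodeGroupInfo": {
            "InstanceType": "express.m7g.large",   # an Express Broker instance type
            "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
        },
    },
}
```

No KRaft-specific configuration is needed: per the announcement, new v3.9 Express clusters adopt KRaft automatically.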


With Kinesis Data Firehose, customers can use a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Kinesis Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Kinesis Data Firehose.


Today, AWS Databases including Amazon Aurora PostgreSQL, Amazon Aurora DSQL, and Amazon DynamoDB are generally available on the Vercel Marketplace, enabling you to create and connect to an AWS database directly from Vercel in seconds. To get started, you can create a new AWS Account from Vercel that includes access to the three databases and $100 USD in credits. These credits can be used with any of these database options for up to six months. Once your account is set up, you can have a production-ready Aurora database or DynamoDB table powering your Vercel projects within seconds. You can also manage your plan, add payment information, and view usage details anytime by visiting the AWS settings portal from the Vercel dashboard. To learn more, visit the AWS landing page on the Vercel Marketplace. The integration includes serverless options for Amazon Aurora PostgreSQL, Amazon Aurora DSQL, and Amazon DynamoDB to simplify your application needs and reduce costs by scaling to zero when not in use. You can create a database in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Mumbai), with more Regions coming soon. AWS Databases deliver security, reliability, and price performance without the operational overhead, whether you're prototyping your next big idea or running production AI and data-driven applications. For more information, visit the AWS Databases webpage.


Today, AWS announces the general availability of the new Amazon Elastic Compute Cloud (Amazon EC2) M8gn and M8gb instances. These instances are powered by AWS Graviton4 processors to deliver up to 30% better compute performance than AWS Graviton3 processors. M8gn instances feature the latest 6th generation AWS Nitro Cards, and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances. M8gb instances offer up to 150 Gbps of EBS bandwidth, providing higher EBS performance than same-sized Graviton4-based instances. M8gn instances are ideal for network-intensive workloads such as high-performance file systems, distributed web scale in-memory caches, caching fleets, real-time big data analytics, and Telco applications such as 5G User Plane Function (UPF). M8gb instances are ideal for workloads requiring high block storage performance, such as high performance databases and NoSQL databases. M8gn instances offer instance sizes up to 48xlarge, up to 768 GiB of memory, up to 600 Gbps of networking bandwidth, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). They also support EFA networking on the 16xlarge, 24xlarge, and 48xlarge sizes. M8gb instances offer sizes up to 24xlarge, up to 768 GiB of memory, up to 150 Gbps of EBS bandwidth, and up to 200 Gbps of networking bandwidth. They support Elastic Fabric Adapter (EFA) networking on the 16xlarge and 24xlarge sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. The new instances are available in the following AWS Regions: US East (N. Virginia) and US West (Oregon). To learn more, see Amazon EC2 M8gn and M8gb Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page.


Today, Amazon WorkSpaces Applications announced a new set of Amazon CloudWatch metrics for monitoring the health and performance of fleets, sessions, instances, and users. Administrators and support operations personnel can conveniently enable monitoring across fleets from the Amazon CloudWatch console. These metrics simplify troubleshooting and dynamically update to reflect the latest state of important performance metrics. Administrators can make informed sizing decisions for end users' streaming instances by setting performance thresholds on available metrics to meet performance and budgeting criteria. They can view performance metrics to troubleshoot end-user streaming session issues. To enable this feature for your fleet instances, you must use a WorkSpaces Applications image that uses the latest agent released on or after December 06, 2025, or that has been updated using Managed WorkSpaces Applications image updates released on or after December 05, 2025. These CloudWatch metrics are available in all AWS commercial and AWS GovCloud (US) Regions where Amazon WorkSpaces Applications is currently available. To get started or learn more, you can visit Amazon WorkSpaces Applications Metrics and Dimensions documentation.


Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL version 18.1 in the Amazon RDS Database Preview Environment, allowing you to evaluate PostgreSQL 18.1 on Amazon Aurora PostgreSQL. PostgreSQL 18.1 was released by the PostgreSQL community on September 9, 2025. PostgreSQL 18.1 includes "skip scan" support for multicolumn B-tree indexes and improves WHERE clause handling for OR and IN conditions. It introduces parallel GIN index builds and updates join operations. Observability improvements show buffer usage counts and index lookups during query execution, along with a per-connection I/O utilization metric. To learn more about PostgreSQL 18.1, read here. Database instances in the RDS Database Preview Environment allow testing of a new database engine without the hassle of having to self-install, provision, and manage a preview version of the Aurora PostgreSQL database software. Clusters are retained for a maximum period of 60 days and are automatically deleted after this retention period. Amazon RDS Database Preview Environment database instances are priced the same as production Aurora instances created in the US East (Ohio) Region. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.


In this post, we'll explore the new capabilities and core concepts that help organizations track and manage model development and deployment lifecycles. We will show you how the features are configured to train models with automatic end-to-end lineage, from dataset upload and versioning to model fine-tuning, evaluation, and seamless endpoint deployment.


Starting today, AWS Billing Conductor customers gain greater flexibility when using custom line items. Customers can now create service-specific custom line items scoped to either a single AWS service or a set of selected AWS services, and can choose how these line items are presented in the pro forma billing artifacts, such as the Bills page, Cost Explorer, and Cost and Usage Records. These enhancements enable customers to create more precise and tailored charge-back and re-billing strategies that better reflect their pricing structures and improve the traceability experience for pro forma users. Customers can use this functionality to apply percentage discounts on Savings Plans fees or allocate shared flat support charges under the AWS Support service. Service-specific custom line items are available for standard billing groups regardless of the type of pricing plan selected, and for billing-transfer billing groups exclusively when customer-managed pricing plans are selected. To start, use the AWS Billing Conductor console or APIs to create a custom line item and specify the cost reference value (one or multiple AWS services) and the display setting options (itemized or consolidated under your service of choice). To learn more about custom line items visit AWS Billing Conductor documentation. This feature is available now in all AWS commercial Regions, excluding AWS China (Beijing) Region, operated by Sinnet, and AWS China (Ningxia) Region, operated by NWCD.
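A service-scoped custom line item might be shaped like the following sketch. The ChargeDetails structure follows the existing Billing Conductor CreateCustomLineItem API; the service-scoping and display-setting fields are new, so their names here are assumptions to verify against the current API documentation.

```python
# Sketch of a service-scoped percentage custom line item (request payload only).
custom_line_item = {
    "Name": "SavingsPlansDiscount",
    "BillingGroupArn": "arn:aws:billingconductor::123456789012:billinggroup/example",
    "ChargeDetails": {
        "Type": "CREDIT",
        "Percentage": {"PercentageValue": 5.0},  # 5% discount on the scoped charges
    },
    # Assumed fields: scope the charge to one service and itemize it in
    # pro forma billing artifacts (names are illustrative, not confirmed).
    "ServiceScope": ["Savings Plans"],
    "DisplaySetting": "ITEMIZED",
}
```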


Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g and M8g instances are available in AWS GovCloud (US-West) and R8g and M8g instances are available in AWS GovCloud (US-East) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. They are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon C7g, M7g and R7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g and R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C8g Instances, Amazon EC2 M8g Instances, and Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS GovCloud (US) Console.


Amazon Elastic Container Registry (ECR) Public now supports PrivateLink for the US East (N. Virginia) SDK endpoint, providing enhanced network security and private connectivity for customers. This update allows customers to access this ECR Public SDK endpoint through a private network connection, reducing exposure to the public internet. With this enhancement, customers can now establish a private connection from their Amazon Virtual Private Cloud (VPC) to their ECR Public SDK endpoint while creating and maintaining their ECR Public repositories. This means organizations can maintain network privacy and security, reduce exposure of sensitive network traffic, comply with stricter network security requirements, and simplify network architecture for accessing ECR Public resources. Get started today with the US East (N. Virginia) ECR Public SDK endpoint. To learn more, visit ECR documentation.


Today, EC2 Auto Scaling is launching a new API, LaunchInstances, which gives customers more control and flexibility over how EC2 Auto Scaling provisions instances while providing instant feedback on capacity availability. Customers use EC2 Auto Scaling for automated fleet management. With scaling policies, EC2 Auto Scaling can automatically add instances when demand spikes and remove them when traffic drops, ensuring customers' applications always have the right amount of compute. EC2 Auto Scaling also offers the ability to monitor and replace unhealthy instances. In certain use cases, customers may want to specify exactly where EC2 Auto Scaling should launch additional instances and need immediate feedback on capacity availability. The new LaunchInstances API allows customers to precisely control where instances are launched by specifying an override for any Availability Zone and/or subnet in an Auto Scaling group, while providing immediate feedback on capacity availability. This synchronous operation gives customers real-time insight into scaling operations, enabling them to quickly implement alternative strategies if needed. For additional flexibility, the API includes optional asynchronous retries to help reach the desired capacity. This feature is now available in all AWS Regions and AWS GovCloud (US) Regions, at no additional cost beyond standard EC2 and EBS usage. To get started, visit the AWS Command Line Interface (CLI) and the AWS SDKs. To learn more about this feature, visit the AWS documentation.
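The value of a synchronous launch call is that a caller can react immediately when capacity is unavailable. The request shape below is a hypothetical sketch based on the description (the real parameter names belong in the EC2 Auto Scaling API reference), paired with a small local fallback routine to show the "alternative strategy" pattern.

```python
# Hypothetical LaunchInstances request: target a specific AZ in the ASG and
# optionally allow async retries toward the desired capacity.
launch_request = {
    "AutoScalingGroupName": "web-asg",
    "Count": 4,                               # instances to add
    "AvailabilityZoneOverride": "us-east-1b", # assumed override field name
    "AllowAsyncRetry": True,                  # assumed optional-retry flag
}

def fallback_zone(response, alternatives):
    """Pick the next zone to try if the synchronous call came up short."""
    if response.get("LaunchedCount", 0) < launch_request["Count"]:
        return alternatives[0]
    return None

# A simulated partial-capacity response drives an immediate alternative zone.
next_zone = fallback_zone({"LaunchedCount": 1}, ["us-east-1c", "us-east-1d"])
```

The synchronous feedback is the design point: with scaling policies alone, a capacity shortfall would only surface later through activity history.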


Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g instances are available in AWS Europe (Zurich) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 C8g instances are built for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C8g Instances. To get started, see the AWS Management Console.


Amazon OpenSearch Service introduces OI2 instances, expanding the OpenSearch Optimized Instance family. The new OI2 instances deliver up to 9% higher indexing throughput compared to OR2 instances and up to 33% higher compared to I8g instances in our internal benchmarks. The new OI2 OpenSearch Optimized instances use the same architecture as the OR2 instances, leveraging best-in-class cloud technologies like Amazon S3 to provide high durability and improved price-performance for indexing-heavy workloads. Each OpenSearch Optimized instance is provisioned with compute, 3rd generation AWS Nitro SSDs for caching, and remote Amazon S3-based managed storage. OI2 offers pay-as-you-go pricing and reserved instances, with a simple hourly rate for the instance including the NVMe storage, plus the managed storage provisioned. OI2 instances come in sizes ‘large’ through ‘24xlarge’ and offer compute, memory, and up to 22.5 TB of storage. Please refer to the Amazon OpenSearch Service pricing page for pricing details. The OI2 instance family is now available on Amazon OpenSearch Service across 12 Regions globally: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Spain).


Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in Asia Pacific (Thailand, Jakarta, Melbourne), and AWS Middle East (UAE) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 M8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.


Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in AWS Europe (Paris), and Asia Pacific (Hyderabad) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.


Amazon OpenSearch Service now offers a new multi-tier storage option powered by OpenSearch Optimized Instances. This new architecture combines Amazon S3 cloud technology with local instance storage to deliver improved durability and performance. The new multi-tier architecture features two tiers: hot and warm. The hot tier handles frequently accessed data, while the warm tier leverages Amazon S3 for cost-effective storage of less frequently accessed data. Until now, Amazon OpenSearch Service supported a warm tier through UltraWarm, which provided cost-effective storage for read-only data. The new warm tier powered by OpenSearch Optimized instances supports write operations, providing greater flexibility for data management. You can automate rotating data from hot to warm as it ages using the Index State Management feature. For warm tier deployments, customers can use OpenSearch Optimized (OI2) instances (sizes ‘large’ to ‘8xlarge’), with addressable warm storage of up to five times the local cache size. Standard Managed Storage charges apply for warm data. The new multi-tier experience is available on OpenSearch 3.3 and above. For more information, please refer to the documentation. The new multi-tier experience on the OI2 instance family is now available on Amazon OpenSearch Service across 12 Regions globally: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Spain). Please refer to the Amazon OpenSearch Service pricing page for pricing details.
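Automated hot-to-warm rotation with Index State Management might look like the policy sketched below. The warm_migration action name is the one used by existing OpenSearch warm tiers; confirm the action and thresholds for the new OI2-backed tier in the service documentation before relying on them.

```python
# Sketch of an ISM policy (expressed as the JSON body you would PUT to the
# _plugins/_ism/policies endpoint) that moves indexes to warm after 7 days.
ism_policy = {
    "policy": {
        "description": "Move indexes to the warm tier after 7 days",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [
                    # Age-based condition driving the hot -> warm rotation.
                    {"state_name": "warm",
                     "conditions": {"min_index_age": "7d"}},
                ],
            },
            {
                "name": "warm",
                # Assumed action name, matching existing warm-tier migration.
                "actions": [{"warm_migration": {}}],
                "transitions": [],
            },
        ],
    }
}
```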


AWS Security Incident Response is now available to customers in ten additional opt-in AWS Regions: Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Melbourne), Europe (Zurich, Milan, Spain), Middle East (UAE, Bahrain). You can now use these additional regions to prepare for, respond to, and recover from security events faster and more effectively. AWS Security Incident Response streamlines every step of the security incident response lifecycle through automated security finding monitoring and triage, AI-powered investigation, and containment capabilities. When specialized expertise is required, Security Incident Response gives you direct 24/7 access to a dedicated group of AWS security experts who respond to your request within minutes. This powerful combination of automation and expertise enables you to confidently scale your security operations, so you can focus on innovation and growth. For more information, please visit the AWS Security Incident Response page and documentation. See the Supported Configurations page for regional and language support.


AWS Payment Cryptography has expanded its global presence with availability in two new regions: Asia Pacific (Hyderabad) and Europe (Paris). This expansion enables customers with latency-sensitive payment applications to build, deploy, or migrate into additional AWS Regions without depending on cross-region support. These Regions offer additional options for multi-region high availability in Europe and India. AWS Payment Cryptography is a fully managed service that simplifies payment-specific cryptographic operations and key management for cloud-hosted payment applications. The service scales elastically with your business needs and is assessed as compliant with PCI PIN and PCI P2PE requirements, eliminating the need to maintain dedicated payment HSM instances. Organizations performing payment functions, including acquirers, payment facilitators, networks, switches, processors, and banks, can now position their payment cryptographic operations closer to their applications while reducing dependencies on auxiliary data centers with dedicated payment HSMs. AWS Payment Cryptography is available in the following AWS Regions: Canada (Montreal), US East (Ohio, N. Virginia), US West (Oregon), Europe (Ireland, Frankfurt, London, Paris), Africa (Cape Town), and Asia Pacific (Singapore, Tokyo, Osaka, Mumbai, Hyderabad). To start using the service, please download the latest AWS CLI/SDK and see the AWS Payment Cryptography user guide for more information.


AWS Marketplace now supports mandatory purchase order requirements and custom messaging at the time of purchase, allowing organizations to strengthen their procurement governance. These capabilities help procurement, software asset management, and cloud governance teams enforce compliance policies while maintaining purchasing agility. Administrators can now enforce their mandatory purchase order policy by requiring buyers to provide purchase orders when subscribing to products through AWS Marketplace. These requirements can be applied to purchases through private and public offers across various pricing types. Additionally, administrators can add a custom message on the procurement page, providing guidance on policy requirements and support contacts. Organizations can implement purchase order requirements without custom messaging, use custom messaging to guide buyers through the procurement process, or combine both features for more comprehensive governance. These capabilities can also be used with Private Marketplace, which allows customers to create a curated catalog of approved products for specific users and groups within an AWS organization. This flexibility helps finance and procurement teams enforce compliance at the time of purchase, improve cost allocation accuracy, and streamline procurement-to-pay cycles. These capabilities are available today in all AWS Regions where AWS Marketplace is supported. For information on configuring purchase order requirements and custom messaging, refer to the AWS Marketplace Buyer Guide.


Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8i and C8i-flex instances are available in the Asia Pacific (Singapore) region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. These C8i and C8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% higher performance than C7i and C7i-flex instances, with even higher gains for specific workloads. The C8i and C8i-flex are up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores compared to C7i and C7i-flex. C8i-flex are the easiest way to get price performance benefits for a majority of compute intensive workloads like web and application servers, databases, caches, Apache Kafka, Elasticsearch, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. C8i instances are a great choice for all memory-intensive workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new C8i and C8i-flex instances visit the AWS News blog.

Amazon Redshift Serverless announces the general availability of dual-stack mode that supports Internet Protocol version 6 (IPv6). This enhancement enables you to modernize your network infrastructure and meet the growing demands of internet connectivity. Redshift Serverless supports configuring your Redshift workgroups with both IPv4 and IPv6 addresses (dual-stack) or IPv4-only configurations within your AWS Virtual Private Clouds (VPCs). You can enable IPv6 support when creating new Redshift Serverless workgroups or modify existing workgroups to support IPv6 addressing. With this capability, you can deploy Redshift warehouses in IPv6-enabled VPC subnets and configure network settings to support the expanding address space requirements of your applications. Your applications can now communicate with Redshift warehouses using either IPv4 or IPv6 protocols, ensuring compatibility with both existing and future network architectures. This feature is available in all AWS commercial regions where Redshift Serverless is available. To get started, read the documentation and blog.
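Enabling dual-stack on an existing workgroup can be sketched as follows. This is a minimal illustration only: the workgroup name is a placeholder, and the `ipAddressType` field name and `"dualstack"` value are assumptions based on this announcement, so verify them against the Redshift Serverless `UpdateWorkgroup` API reference before use.

```python
# Sketch: switch a Redshift Serverless workgroup from IPv4-only to dual-stack.
# The "ipAddressType" field name and value are assumptions; confirm against
# the UpdateWorkgroup API reference.

def dual_stack_update_params(workgroup_name: str) -> dict:
    """Build UpdateWorkgroup parameters that enable IPv4 + IPv6 addressing."""
    return {
        "workgroupName": workgroup_name,
        "ipAddressType": "dualstack",  # assumed value; "ipv4" would keep the default
    }

# With credentials configured, the call would look roughly like:
#   import boto3
#   boto3.client("redshift-serverless").update_workgroup(
#       **dual_stack_update_params("analytics-wg"))
```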

You can now deploy AWS IAM Identity Center in 37 AWS Regions, including Asia Pacific (Taipei). IAM Identity Center is the recommended service for managing workforce access to AWS applications. It enables you to connect your existing source of workforce identities to AWS once and offer your users a single sign-on experience across AWS. It powers the personalized experiences offered by AWS applications, such as Amazon Q, and the ability to define and audit user-aware access to data in AWS services, such as Amazon Redshift. It can also help you manage access to multiple AWS accounts from a central place. IAM Identity Center is available at no additional cost in these AWS Regions. To learn more about IAM Identity Center, visit the product detail page. To get started, see the IAM Identity Center user guide.

Today, AWS Payment Cryptography has expanded to the Asia Pacific (Sydney) Region and now supports AS2805 functionality. This expansion represents the service's thirteenth AWS Region worldwide. With AS2805 capabilities, customers can migrate more payment workloads to AWS while maintaining interoperability with other companies utilizing this standard. Australia, New Zealand, and several other countries rely on Australian Standard 2805 (AS2805) as a consistent approach for managing cryptography between organizations for card payments. Historically, these companies required Hardware Security Modules (HSMs) to perform these operations in a compliant, compatible manner. AWS Payment Cryptography now provides equivalent functionality for node-to-node use cases in an elastic, scalable service, eliminating the operational burden of procuring and managing standalone hardware. Customers can leverage the service’s use of PCI-certified HSMs as part of their overall compliance programs while using APIs that integrate with AWS IAM and AWS CloudTrail. AWS Payment Cryptography is available in the following AWS Regions: Canada (Montreal), US East (Ohio, N. Virginia), US West (Oregon), Europe (Ireland, Frankfurt, London), Africa (Cape Town), and Asia Pacific (Singapore, Tokyo, Osaka, Mumbai, Sydney). To start using the service, please download the latest AWS CLI/SDK and see the AWS Payment Cryptography user guide for more information, including further compliance details.

In this post, we'll show how the TwelveLabs Marengo embedding model, available on Amazon Bedrock, enhances video understanding through multimodal AI. We'll build a video semantic search and analysis solution using embeddings from the Marengo model with Amazon OpenSearch Serverless as the vector database, for semantic search capabilities that go beyond simple metadata matching to deliver intelligent content discovery.

Amazon Connect alerts on real-time metrics now provide the specific agents, queues, flows, or routing profiles that exceeded thresholds and triggered the alert. This enables managers to respond faster to customer experience and operational issues by eliminating the need to manually investigate the root cause of the alert. For example, alerts on elevated queue wait times now include the exact queues affected, so managers can reassign agents to those queues. These detailed alerts can be sent through email, tasks, and Amazon EventBridge. This feature is available in all regions where Amazon Connect is offered. To learn more, please visit our documentation and our webpage.
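Since these enriched alerts can be delivered via Amazon EventBridge, a consumer can pull the affected resources straight out of the event. The handler below is a hedged sketch: the `detail` payload shape shown is a hypothetical illustration, not the documented Amazon Connect event schema.

```python
# Sketch: extract the queues that breached a threshold from an Amazon Connect
# real-time metric alert delivered via EventBridge. The "detail" shape here is
# a hypothetical illustration, not the documented event schema.

def affected_queues(event: dict) -> list[str]:
    """Return the names of queues flagged as affected in an alert event."""
    resources = event.get("detail", {}).get("affectedResources", [])
    return [r["name"] for r in resources if r.get("type") == "QUEUE"]

# A made-up example event for illustration:
sample_event = {
    "source": "aws.connect",
    "detail": {
        "affectedResources": [
            {"type": "QUEUE", "name": "BasicSupport"},
            {"type": "QUEUE", "name": "Billing"},
        ]
    },
}
```

A manager-notification Lambda could use such a helper to name the exact queues to reassign agents to, instead of investigating manually.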

AWS Artifact now enables direct access to previous versions of AWS compliance reports, eliminating the need to contact AWS Support or account representatives. This self-service capability helps customers efficiently manage their compliance documentation requirements, particularly during audits and vendor assessments that require historical compliance evidence. To access previous report versions, you need the "artifact:ListReportVersions" IAM permission, which is included in the AWS managed policy "AWSArtifactReportsReadOnlyAccess". If you're unable to view previous versions of reports in the AWS Artifact console, please contact your AWS account administrator to request this permission. Once authorized, you can access previous versions of compliance reports (such as SOC, ISO, and C5) directly through the AWS Artifact console. Simply navigate to the reports page and select any report to view its available versions. The availability of previous report versions varies by compliance program, with some reports offering versions from multiple prior years while others may have more limited historical coverage. This feature is now generally available in US East (N. Virginia) and AWS GovCloud (US-West) Regions. To learn more about accessing previous versions of compliance reports, visit the AWS Artifact documentation. For general information about AWS Artifact, see the AWS Artifact product page.
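Where the managed policy cannot be used, an administrator could grant the quoted permission in a customer-managed policy. This is a minimal sketch: only `artifact:ListReportVersions` is taken from the announcement; the companion read action is an assumption, and in practice the managed policy `AWSArtifactReportsReadOnlyAccess` is the safer choice.

```python
# Sketch: a minimal IAM policy granting the permission named in the
# announcement. "artifact:GetReport" is an assumed companion read action;
# prefer the managed policy AWSArtifactReportsReadOnlyAccess where possible.
import json

def artifact_versions_policy() -> str:
    """Return a JSON IAM policy allowing listing of previous report versions."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "artifact:ListReportVersions",  # required to see previous versions
                "artifact:GetReport",           # assumed companion read action
            ],
            "Resource": "*",
        }],
    }
    return json.dumps(policy, indent=2)
```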

AWS Security Incident Response now offers integration with the cloud-based team collaboration platform Slack, enabling you to prepare for, respond to, and recover from security events faster and more effectively while maintaining your existing notification and communication workflows. With the bidirectional integration, you can create and update cases in both the Security Incident Response console and Slack with automatic data replication. Each Security Incident Response case is represented as a dedicated Slack channel, while comments and attachments sync instantly. This gives responders immediate access to critical case information and enables more efficient collaboration regardless of tool preference. The integration helps security teams engage faster and accelerate response times by automatically adding Security Incident Response watchers to the corresponding Slack channel. This integration is available as an open-source solution on GitHub, providing customers and partners the opportunity to customize and extend the functionality. The integration leverages EventBridge which allows customers to continue using their existing security incident management and notification tooling, while leveraging AWS Security Incident Response capabilities. The solution features a modular architecture, and includes guidance on how to use Amazon Q Developer, Kiro, or similar AI assistants that help make it easy to add new integration targets beyond Slack. To get started with the AWS Security Incident Response Slack integration, visit our GitHub repository. Visit our technical documentation for Slack for implementation details. Learn more about AWS Security Incident Response in the service’s User Guide.

Starting today, Amazon EC2 M8i instances are now available in the Asia Pacific (Seoul), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Canada (Central) Regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. M8i instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% better performance than M7i instances, with even higher gains for specific workloads. M8i instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i instances. M8i instances are a great choice for all general-purpose workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i instance page or the AWS News blog.

AWS is now publishing your carbon footprint data in 21 days or less. Previously, the carbon footprint data was published with up to a three-month data lag. Now, you have access to your carbon footprint data with estimates published between the 15th and the 21st of the month following your usage. With carbon footprint data available sooner, you have the insights needed to make more timely decisions about how and where your applications are running and identify opportunities to reduce emissions and costs through improved resource efficiency. Also, the AWS Customer Carbon Footprint Tool (CCFT) dashboard maintains 38 months of data so you can view your carbon usage trends over time. To view your carbon footprint data, navigate to your carbon emissions data through the AWS Billing and Cost Management console. For more information about CCFT, visit the CCFT capabilities and features page, review the CCFT user guides, and learn more by visiting the CCFT webpage.

Starting today, you can build, train, and deploy machine learning (ML) models with Amazon SageMaker AI in the Asia Pacific (New Zealand) Region. Amazon SageMaker AI is a fully managed platform that provides every developer and data scientist with the ability to build, train, and deploy ML models quickly. SageMaker AI removes the heavy lifting from each step of the machine learning process to make it easier to develop high-quality models. To learn more and get started, see the SageMaker AI documentation and pricing page.

We are announcing memory for chat agents in Amazon Quick Suite – a feature that allows users to get personalized responses based on their previous conversations. With this feature, Quick Suite remembers the preferences users specify in chat and generates responses that are tailored to them. Users can also view their inferred preferences and remove any memory they don’t want Quick chat agents to use. Previously, chat users needed to repeat their preferences around response format, acronyms, dashboards, and integrations in every conversation. They also had to clarify ambiguous topics and entities in chat, increasing the tedious back-and-forth needed to get accurate and insightful responses. Memory addresses this pain point by remembering facts and details about users so that the responses they receive continuously improve. Users also control what Quick Suite remembers about them – all memories are viewable and removable, and users can choose to start a chat in Private Mode, in which conversations are not used to infer memories. Memory in Quick Suite chat agents is available in US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Quick Suite User Guide.

Starting today, Amazon EC2 M8i-flex instances are now available in Asia Pacific (Sydney) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% better performance than M7i-flex instances, with even higher gains for specific workloads. The M8i-flex instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i-flex instances. M8i-flex instances are the easiest way to get price performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i-flex instance page or visit the AWS News blog.

Amazon Quick Suite browser extension now supports Amazon Quick Flows, enabling you to run workflows directly within your web browser, eliminating the need to manually extract information from each web page. You can invoke workflows that you've created or that have been shared with you, and pass web page content as input—all without leaving your browser. This capability is great for completing routine tasks such as analyzing contract documents to extract key terms, or generating weekly reports from project dashboards that automatically notify stakeholders. Quick Flows in browser extension is available now in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). There are no additional charges for using the browser extension beyond standard Quick Flows usage. To get started, visit your Chrome, Firefox or Edge store page to install browser extension and sign in with your Quick Suite account. Once you sign in, look for the Flows icon below the chat box to invoke your flows. To learn more about invoking Quick Flows in browser extension, please visit our documentation.

AWS Clean Rooms now publishes events to Amazon EventBridge for new member invitations and table readiness, delivering real-time insights and increasing transparency to collaboration members. Invited members to a collaboration now receive an EventBridge notification when invited to a Clean Rooms collaboration, making it easier for members to review new invitations and join collaborations. Collaboration members are also notified when AWS Entity Resolution resources are associated to a collaboration, such as ID mapping tables and ID namespaces, enabling you to automatically start analysis that uses related records across collaborators’ datasets. For example, when a publisher invites an advertiser to a collaboration, the publisher can automatically run their media planning analyses as soon as the advertiser has created their ID mapping table in the collaboration, reducing time-to-action from hours to minutes and increasing transparency between collaboration members. With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms or AWS Entity Resolution.
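An EventBridge rule to catch these notifications might use a pattern like the one below. This is a hedged sketch: the `aws.cleanrooms` source value and both detail-type strings are assumptions for illustration; confirm the actual values in the AWS Clean Rooms EventBridge documentation before creating a rule.

```python
# Sketch: an EventBridge rule pattern matching Clean Rooms collaboration
# events. The source and detail-type values are hypothetical placeholders.
clean_rooms_pattern = {
    "source": ["aws.cleanrooms"],          # assumed source name
    "detail-type": [
        "Collaboration Member Invited",    # hypothetical detail-type
        "ID Mapping Table Ready",          # hypothetical detail-type
    ],
}

# With boto3, the pattern would be attached roughly like:
#   import boto3, json
#   boto3.client("events").put_rule(
#       Name="clean-rooms-notifications",
#       EventPattern=json.dumps(clean_rooms_pattern))
```

A rule like this could target a Lambda function that kicks off the media planning analysis as soon as the advertiser's ID mapping table becomes available.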

In this post, you learn how to assess your existing Amazon EMR Spark applications, use the Spark upgrade agent directly from the Kiro IDE, upgrade a sample e-commerce order analytics Spark application project (including build configs, source code, tests, and data quality validation), and review code changes before rolling them out through your CI/CD pipeline.

Starting with version 7.10, Amazon EMR is transitioning from EMR File System (EMRFS) to EMR S3A as the default file system connector for Amazon S3 access. This transition brings HBase on Amazon S3 to a new level, offering performance parity with EMRFS while delivering substantial improvements, including better standardization, improved portability, stronger community support, improved performance through non-blocking I/O, asynchronous clients, and better credential management with AWS SDK V2 integration. In this post, we discuss this transition and its benefits.

In this post, we introduce checkpointless training on Amazon SageMaker HyperPod, a paradigm shift in model training that reduces the need for traditional checkpointing by enabling peer-to-peer state recovery. Results from production-scale validation show an 80–93% reduction in recovery time (from 15–30 minutes or more to under 2 minutes) and up to 95% training goodput on cluster sizes with thousands of AI accelerators.

Socure is one of the leading providers of digital identity verification and fraud solutions. Socure’s data science environment includes a streaming pipeline called Transaction ETL (TETL), built on open source Apache Spark running on Amazon EKS. TETL ingests and processes data volumes ranging from small to large datasets while maintaining high-throughput performance. In this post, we show how Socure achieved a 50% cost reduction by migrating the TETL streaming pipeline from self-managed Spark to Amazon EMR Serverless.

Amazon SageMaker HyperPod now supports elastic training, enabling your machine learning (ML) workloads to automatically scale based on resource availability. In this post, we demonstrate how elastic training helps you maximize GPU utilization, reduce costs, and accelerate model development through dynamic resource adaptation, while maintaining training quality and minimizing manual intervention.

Today, we're announcing that Amazon Elastic VMware Service (Amazon EVS) is now available in all Availability Zones in the US West (N. California), Asia Pacific (Hyderabad), Asia Pacific (Malaysia), Canada West (Calgary), Europe (Milan), Mexico (Central), and South America (São Paulo) Regions. This expansion provides more options to leverage the scale and flexibility of AWS for running your VMware workloads in the cloud. Amazon EVS lets you run VMware Cloud Foundation (VCF) directly within your Amazon Virtual Private Cloud (VPC) on EC2 bare-metal instances, powered by AWS Nitro. Using either our step-by-step configuration workflow or the AWS Command Line Interface (CLI) with automated deployment capabilities, you can set up a complete VCF environment in just a few hours. This rapid deployment enables faster workload migration to AWS, helping you eliminate aging infrastructure, reduce operational risks, and meet critical timelines for exiting your data center. The added availability in these Regions gives your VMware workloads lower latency through closer proximity to your end users, compliance with data residency or sovereignty requirements, and additional high availability and resiliency options for your enhanced redundancy strategy. To get started, visit the Amazon EVS product detail page and user guide.

Starting today, customers can use Amazon Managed Service for Apache Flink in Asia Pacific (Auckland) Region to build real-time stream processing applications. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors. You can learn more about Amazon Managed Service for Apache Flink here. For Amazon Managed Service for Apache Flink region availability, refer to the AWS Region Table.

AWS announces a new cost allocation feature that uses existing workforce user attributes like cost center, division, organization, and department to track and analyze AWS application usage and cost. This new capability enables customers to allocate per-user monthly subscription and on-demand fees of AWS applications, such as Amazon Q Business, Amazon Q Developer, and Amazon QuickSight, to respective internal business units. Customers should import their workforce users’ attributes to IAM Identity Center, the recommended service for managing workforce access to AWS applications. After importing the attributes, customers can enable one or more of these attributes as cost allocation tags from the AWS Billing and Cost Management console. When users access AWS applications, their usage and cost are automatically recorded with selected attributes. Cloud Financial Operations (FinOps) professionals can view and analyze costs in AWS Cost Explorer and AWS CUR 2.0, gaining visibility into how different teams drive AWS usage and costs. Support for cost allocation using user attributes is generally available in all AWS Regions, excluding GovCloud (US) Regions and China (Beijing) and China (Ningxia) Regions. To learn more, see organizing and tracking cost using AWS cost allocation tags.
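Once an attribute is enabled as a cost allocation tag, FinOps teams can group spend by it in Cost Explorer. The sketch below builds `GetCostAndUsage` parameters; the tag key `costCenter` is an assumed example of an imported Identity Center attribute, and the date range is a placeholder.

```python
# Sketch: group AWS costs by a workforce-attribute cost allocation tag using
# Cost Explorer. The tag key "costCenter" is an assumed example attribute.

def cost_by_attribute_params(start: str, end: str, tag_key: str = "costCenter") -> dict:
    """Build GetCostAndUsage parameters grouping monthly spend by a user attribute tag."""
    return {
        "TimePeriod": {"Start": start, "End": end},  # ISO dates, e.g. "2025-01-01"
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "TAG", "Key": tag_key}],
    }

# With credentials configured:
#   import boto3
#   boto3.client("ce").get_cost_and_usage(
#       **cost_by_attribute_params("2025-01-01", "2025-02-01"))
```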

Today, we’re announcing enhanced network policy capabilities in Amazon Elastic Kubernetes Service (EKS), allowing customers to improve the network security posture for their Kubernetes workloads and their integrations with cluster-external destinations. This enhancement builds on network segmentation features previously supported in EKS. Now you can centrally enforce network access filters across the entire cluster, as well as leverage Domain Name System (DNS) based policies to secure egress traffic from your cluster’s environment. As customers continue to scale their application environments using EKS, network traffic isolation is increasingly fundamental for preventing unauthorized access to resources inside and outside the cluster. To address this, EKS introduced support for Kubernetes NetworkPolicies in the Amazon VPC Container Network Interface (VPC CNI) plugin, allowing you to segment pod-to-pod communication at a namespace level. Now you can further strengthen the defensive posture for your Kubernetes network environment by centrally managing network filters for the whole cluster. Also, cluster admins now have a more stable and predictable approach for preventing unauthorized access to cluster-external resources in the cloud or on-prem using egress rules that filter traffic to external endpoints based on their Fully Qualified Domain Name (FQDN). These new network security features are available in all commercial AWS Regions for new EKS clusters running Kubernetes version 1.29 or later, with support for existing clusters to follow in the coming weeks. ClusterNetworkPolicy is available in all EKS cluster launch modes using VPC CNI v1.21.0 or later. DNS-based policies are only supported in EKS Auto Mode-launched EC2 instances. To learn more, visit the Amazon EKS documentation or read the launch blog post here.
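For context, a standard namespace-scoped Kubernetes NetworkPolicy of the kind the VPC CNI already enforces looks like the manifest below (expressed as a Python dict; apply it with the Kubernetes client of your choice). The new cluster-wide and FQDN-based egress policies use different, EKS-specific resource kinds whose schemas are not shown here; the namespace and policy name are placeholders.

```python
# Sketch: a namespace-scoped default-deny-ingress NetworkPolicy, the kind of
# pod-to-pod segmentation the VPC CNI plugin already supports. Cluster-wide
# (ClusterNetworkPolicy) and DNS/FQDN-based policies use separate EKS-specific
# resource kinds not shown here.
default_deny_ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "payments"},
    "spec": {
        "podSelector": {},           # empty selector: applies to all pods in the namespace
        "policyTypes": ["Ingress"],  # deny all ingress unless another policy allows it
    },
}
```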

Today, AWS announces PDF export and CSV data download capabilities for AWS Billing and Cost Management Dashboards. These new features enable you to export your customized dashboards as PDF files for offline analysis and sharing, and download individual widget data in CSV format for detailed examination in spreadsheet applications. With these capabilities, you now have more ways to distribute AWS cost insights across your organization, in addition to sharing dashboards with can-view or can-edit access. Billing and Cost Management Dashboards allows you to export entire dashboards or individual widgets as PDF files directly from the console, eliminating the need for screenshots or manual formatting. The PDF export feature provides formatted reports that maintain consistent appearance and preserve dashboard layouts, making them ideal for sharing with stakeholders during board meetings, reviews, or strategic planning sessions. For detailed data analysis needs, you can export individual widget data in CSV format, enabling analysts to perform granular examination of specific cost metrics in their preferred spreadsheet tools. AWS Billing and Cost Management Dashboards PDF and CSV export features are available at no additional cost in all AWS commercial Regions, excluding AWS China Regions. To get started, visit the AWS Billing and Cost Management console and select "Dashboards" from the left navigation menu. For more information, see the AWS Billing and Cost Management Dashboards export user guide.

AWS Certificate Manager (ACM) now automates certificate provisioning and distribution for Kubernetes workloads through AWS Controllers for Kubernetes (ACK). Previously, ACM automated certificate management for AWS-integrated services like Application Load Balancers and CloudFront. However, using ACM certificates with applications terminating TLS in Kubernetes required manual steps: exporting certificates and private keys via API, creating Kubernetes Secrets, and updating them at renewal. This integration extends ACM's automation to any Kubernetes workload for both public and private certificates, enabling you to manage certificates using native Kubernetes APIs. With ACK, you define certificates as Kubernetes resources, and the ACK controller automates the complete certificate lifecycle: requesting certificates from ACM, exporting them after validation, updating Kubernetes Secrets with the certificate and private key, and automatically updating those Secrets at renewal. This enables you to use ACM exportable public certificates (launched in June 2025) for internet-facing workloads or AWS Private CA private certificates for internal services in Amazon EKS or other Kubernetes environments. Use cases include terminating TLS in application pods (NGINX, custom applications), securing service mesh communication (Istio, Linkerd), and managing certificates for third-party ingress controllers (NGINX Ingress, Traefik). You can also distribute certificates to hybrid and edge Kubernetes environments. This feature is available in all commercial, AWS GovCloud (US), and AWS China regions where ACM is available. To learn more, visit the GitHub link or read our documentation and our pricing page.

Starting today, the general-purpose Amazon EC2 M7a instances are now available in AWS Europe (London) Region. M7a instances, powered by 4th Gen AMD EPYC processors (code-named Genoa) with a maximum frequency of 3.7 GHz, deliver up to 50% higher performance compared to M6a instances. With this additional region, M7a instances are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney, Tokyo), and Europe (Frankfurt, Ireland, Spain, Stockholm, London). These instances can be purchased as Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the M7a instances page.

In healthcare, generative AI is transforming how medical professionals analyze data, summarize clinical notes, and generate insights to improve patient outcomes. From automating medical documentation to assisting in diagnostic reasoning, large language models (LLMs) have the potential to augment clinical workflows and accelerate research. However, these innovations also introduce significant privacy, security, and intellectual property challenges.

In this post, we explore how to build a sophisticated voice-powered AWS operations assistant using Amazon Nova Sonic for speech processing and Strands Agents for multi-agent orchestration. This solution demonstrates how natural language voice interactions can transform cloud operations, making AWS services more accessible and operations more efficient.

AWS DataSync Enhanced mode now supports data transfers between on-premises file servers and Amazon S3, enabling customers to transfer datasets that scale to virtually unlimited numbers of files at higher levels of performance than DataSync Basic mode. AWS DataSync is a secure, high-speed file transfer service that optimizes data movement over a network. Enhanced mode uses parallel processing to deliver higher performance and scalability for datasets of any size, while removing file count limitations and providing detailed transfer metrics for better monitoring and management. Previously, Enhanced mode was available for data transfers between Amazon S3 locations and for multicloud transfers. This launch extends the capabilities of Enhanced mode to support transfers between on-premises NFS or SMB file servers, and Amazon S3. Using Enhanced mode, customers can accelerate generative AI workloads by rapidly moving training datasets to AWS, power data lake analytics by synchronizing on-premises data with cloud-based pipelines, and drive large-scale migrations for archival and cloud modernization. This new capability is available in all AWS Regions where AWS DataSync is offered. To get started, visit the AWS DataSync console. For more information, see the AWS DataSync documentation.

Today, AWS Shield announces multi-account network security management and automated network analysis for network security director, which is currently in preview. AWS Shield network security director provides visibility into the AWS resources in your AWS organization, identifies missing or misconfigured network security services, and recommends remediation steps. With network security director, you can specify a delegated administrator account from which you can start continuous network analysis for multiple accounts or organizational units in your AWS organization. You can then centrally view each account’s network topology, network security findings, and recommended remediations for missing or misconfigured network security services. You can also easily summarize and report on the network security misconfigurations identified by AWS Shield network security director from within Amazon Q Developer in the AWS Management Console and chat applications. AWS Shield network security director is also now available in five additional AWS Regions: Europe (Ireland), Europe (Frankfurt), Asia Pacific (Hong Kong), Asia Pacific (Singapore), and Asia Pacific (Sydney). To learn more, visit the overview page.

You can now use Amazon MSK Replicator to replicate streaming data across Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters in ten additional AWS Regions: Middle East (Bahrain), Middle East (UAE), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Melbourne), Africa (Cape Town), Europe (Milan), Europe (Zurich), and Israel (Tel Aviv). MSK Replicator is a feature of Amazon MSK that enables you to reliably replicate data across Amazon MSK clusters in different or the same AWS Region(s) in a few clicks. With MSK Replicator, you can easily build regionally resilient streaming applications for increased availability and business continuity. MSK Replicator provides automatic asynchronous replication across MSK clusters, eliminating the need to write custom code, manage infrastructure, or set up cross-Region networking. MSK Replicator automatically scales the underlying resources so that you can replicate data on demand without having to monitor or scale capacity. MSK Replicator also replicates the necessary Kafka metadata, including topic configurations, Access Control Lists (ACLs), and consumer group offsets. If an unexpected event occurs in a Region, you can fail over to the other AWS Region and seamlessly resume processing. You can get started with MSK Replicator from the Amazon MSK console or the AWS CLI. With this launch, MSK Replicator is now available in thirty-five AWS Regions. To learn more, visit the MSK Replicator documentation, product page, and pricing page.

kafkamsk
#kafka#msk#launch#now-available

We are excited to announce that Amazon EMR Managed Scaling is now available for EMR on EC2 customers in the Asia Pacific (Malaysia, Melbourne, New Zealand, Taipei, Thailand), Canada West (Calgary), and Mexico (Central) AWS Regions. Amazon EMR Managed Scaling automatically resizes the EC2 instances in your EMR cluster for the best performance at the lowest possible cost. With Amazon EMR Managed Scaling, you simply specify the minimum and maximum compute limits for your clusters, and Amazon EMR on EC2 automatically resizes your cluster for optimal performance and resource utilization. Amazon EMR Managed Scaling constantly monitors key workload-related metrics and uses an algorithm that optimizes the cluster size for the best resource utilization. Using this algorithm, Amazon EMR can scale the EC2 cluster up during peaks and scale it down during idle periods, reducing your costs and optimizing cluster capacity for the best performance. Amazon EMR Managed Scaling can also be used with Amazon EC2 Spot Instances, which let you take advantage of unused EC2 capacity at a discount compared to On-Demand prices. With this launch, Amazon EMR Managed Scaling is now available in all AWS commercial Regions. Amazon EMR Managed Scaling is supported for Apache Spark, Apache Hive, and YARN-based workloads on Amazon EMR on EC2 versions 6.14 and above. To learn more and to get started, visit the Amazon EMR Managed Scaling user guide.

ec2emr
#ec2#emr#ga#now-available#support

AWS Systems Manager (AWS SSM) Configuration Manager now allows you to automatically test SAP ABAP-based applications on AWS against best practices defined in the AWS Well-Architected Framework SAP Lens. Keeping SAP applications optimally configured requires SAP administrators to stay current with best practices from multiple sources, including AWS, SAP, and operating system vendors, and to manually check their configurations to validate adherence. AWS SSM Configuration Manager automatically assesses SAP applications running on AWS against these standards, proactively identifying misconfigurations and recommending specific remediation steps, allowing you to make the necessary changes before potential impacts to business operations. With this launch, configuration checks can be scheduled or run on demand for SAP HANA and ABAP applications. SSM for SAP Configuration Manager is available in the AWS Regions where AWS Systems Manager for SAP is available. To learn more, read the launch blog, or refer to the AWS Systems Manager for SAP documentation.

rds
#rds#launch#ga

Starting today, memory-optimized Amazon Elastic Compute Cloud (Amazon EC2) X2iedn instances are available in the AWS Europe (Zurich) Region. These instances, powered by 3rd generation Intel Xeon Scalable Processors and built on the AWS Nitro System, are designed for memory-intensive workloads. They deliver improvements in performance, price performance, and cost per GiB of memory compared to previous generation X1e instances. These instances are SAP-certified for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, SAP BW/4HANA, and SAP NetWeaver workloads on any database. To learn more, visit the EC2 X2i Instances Page, or connect with your AWS Support contacts.

ec2
#ec2#now-available#improvement#support

The AWS DataSync Terraform module now supports Enhanced mode for transfers between S3 locations, making it easier for you to set up high-performance data transfers at scale. AWS DataSync is a secure, high-speed file transfer service that optimizes data movement over a network. Enhanced mode uses parallel processing to deliver higher performance and scalability for datasets of any size, while removing file count limitations and providing detailed transfer metrics for better monitoring and management. You can now use Terraform to automatically provision DataSync tasks configured for Enhanced mode. This eliminates manual configuration steps that can be time-consuming and error-prone, while giving you a consistent, repeatable, version-controlled deployment process that can scale across your organization. You can access the AWS DataSync Terraform module on GitHub or through the Terraform Registry. To learn more about DataSync, see the AWS DataSync documentation. To see all Regions where DataSync is available, visit the AWS Region table.
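The Terraform workflow described above can be sketched as follows. This is a hypothetical fragment: the module source points at the AWS DataSync module on the Terraform Registry, but the exact input names (`datasync_tasks`, `task_mode`, and the referenced location resources) are illustrative assumptions, so check the module's README for the current interface.

```hcl
# Hypothetical sketch: provisioning an Enhanced mode DataSync task between
# two S3 locations via the AWS DataSync Terraform module. Input names are
# assumptions for illustration; verify against the module documentation.
module "s3_to_s3_transfer" {
  source = "aws-ia/datasync/aws"

  datasync_tasks = [
    {
      name                     = "s3-enhanced-transfer"
      source_location_arn      = aws_datasync_location_s3.source.arn
      destination_location_arn = aws_datasync_location_s3.dest.arn
      task_mode                = "ENHANCED" # parallel processing, no file count limits
    }
  ]
}
```

Because the task definition lives in version control, the same Enhanced mode configuration can be reviewed, reused, and rolled out consistently across accounts.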

s3
#s3#ga#support

Starting today, memory-optimized Amazon Elastic Compute Cloud (Amazon EC2) X2iedn instances are available in the AWS Asia Pacific (Thailand) Region. These instances, powered by 3rd generation Intel Xeon Scalable Processors and built on the AWS Nitro System, are designed for memory-intensive workloads. They deliver improvements in performance, price performance, and cost per GiB of memory compared to previous generation X1e instances. These instances are SAP-certified for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, SAP BW/4HANA, and SAP NetWeaver workloads on any database. To learn more, visit the EC2 X2i Instances Page, or connect with your AWS Support contacts.

ec2
#ec2#now-available#improvement#support

We are excited to announce the general availability of AWS Elastic Beanstalk in Asia Pacific (New Zealand), Asia Pacific (Melbourne), Asia Pacific (Malaysia), Asia Pacific (Hyderabad), Canada West (Calgary), and Europe (Zurich). AWS Elastic Beanstalk is a service that simplifies application deployment and management on AWS. The service automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring, allowing developers to focus on writing code. For a complete list of Regions and service offerings, see AWS Regions. To get started on AWS Elastic Beanstalk, see the AWS Elastic Beanstalk Developer Guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

#ga#now-available

Amazon Aurora DSQL now supports faster cluster creation, reducing setup time from minutes to seconds. With cluster creation now in seconds, developers can instantly provision Aurora DSQL databases to rapidly prototype new ideas. Developers can use the integrated query editor in the AWS console to immediately start building without needing to configure external clients or connect through the Aurora DSQL Model Context Protocol (MCP) server to enable AI-powered development tools. Whether prototyping or running production workloads, Aurora DSQL delivers virtually unlimited scalability, active-active high availability, zero infrastructure management, and pay-for-what-you-use pricing, ensuring your database effortlessly scales alongside your application needs. This enhancement is available in all Regions where Aurora DSQL is offered. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more, visit the Aurora DSQL webpage and documentation.

#enhancement#support

In this post, we'll show how Swisscom implemented Amazon Bedrock AgentCore to build and scale their enterprise AI agents for customer support and sales operations. As an early adopter of Amazon Bedrock in the AWS Europe (Zurich) Region, Swisscom leads in enterprise AI implementation with their Chatbot Builder system and various AI initiatives. Their successful deployments include Conversational AI powered by Rasa and fine-tuned LLMs on Amazon SageMaker, and the Swisscom myAI assistant, built to meet Swiss data protection standards.

bedrockagentcoresagemakerrds
#bedrock#agentcore#sagemaker#rds#support

Today we’re announcing Amazon SageMaker AI with MLflow, now including a serverless capability that dynamically manages infrastructure provisioning, scaling, and operations for artificial intelligence and machine learning (AI/ML) development tasks. In this post, we explore how these new capabilities help you run large MLflow workloads—from generative AI agents to large language model (LLM) experimentation—with improved performance, automation, and security using SageMaker AI with MLflow.

sagemaker
#sagemaker

In this post, we explain how to integrate Langfuse observability with Amazon Bedrock AgentCore to gain deep visibility into an AI agent's performance, debug issues faster, and optimize costs. We walk through a complete implementation using Strands agents deployed on AgentCore Runtime followed by step-by-step code examples.

bedrockagentcore
#bedrock#agentcore#ga

Amazon WorkSpaces Secure Browser now includes Web Content Filtering, a comprehensive security and compliance feature that enables organizations to control and monitor web content access. This new capability allows administrators to define granular access policies, block specific URLs or entire domain categories using 25+ predefined categories, and seamlessly integrate with Session Logger for enhanced monitoring and compliance reporting. While existing Chrome policies for domain control remain supported, Web Content Filtering provides a more comprehensive way to control web access through category-based filtering and improved logging capabilities. Organizations can better manage their remote work security and compliance requirements through centralized policy management that scales across the enterprise. IT security teams can implement default-deny policies for high-security environments, while compliance officers benefit from detailed logging and monitoring capabilities. The feature maintains flexibility by allowing customized policies and exceptions based on specific business needs. This feature is available at no additional cost in 10 AWS Regions: US East (N. Virginia), US West (Oregon), Canada (Central), Europe (Frankfurt, London, Ireland), and Asia Pacific (Tokyo, Mumbai, Sydney, Singapore). WorkSpaces Secure Browser offers pay-as-you-go pricing. To get started with WorkSpaces Secure Browser, see Getting Started with Amazon WorkSpaces Secure Browser. You can enable this feature in your AWS console and automatically migrate any browser policies for URL Blocklists or URL Allowlists. To learn more about the feature, please refer to the feature documentation.

lexorganizations
#lex#organizations#ga#support#new-capability

In this post, we walk through building a generative AI–powered troubleshooting assistant for Kubernetes. The goal is to give engineers a faster, self-service way to diagnose and resolve cluster issues, cut down Mean Time to Recovery (MTTR), and reduce the cycles experts spend finding the root cause of issues in complex distributed systems.

lex
#lex

Amazon Cognito identity pools now support AWS PrivateLink, enabling you to securely exchange federated identities for AWS credentials through private connectivity between your virtual private cloud (VPC) and Cognito. This eliminates the need to route authentication traffic over the public internet, providing enhanced security for your workloads. Identity pools map authenticated and guest identities to your AWS Identity and Access Management (IAM) roles and provide temporary AWS credentials; with this new feature, they can do so over a secure, private connection. You can use PrivateLink connections in all AWS Regions where Amazon Cognito identity pools are available, except the AWS China (Beijing) Region, operated by Sinnet, and the AWS GovCloud (US) Regions. Creating VPC endpoints on AWS PrivateLink will incur additional charges; refer to the AWS PrivateLink pricing page for details. You can get started by creating an AWS PrivateLink VPC interface endpoint for Amazon Cognito identity pools using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (CDK), or AWS CloudFormation. To learn more, refer to the documentation on creating a VPC interface endpoint and Amazon Cognito’s developer guide.
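A minimal sketch of the VPC interface endpoint described above, expressed as an AWS CloudFormation fragment. The `cognito-identity` service name follows the standard PrivateLink naming pattern; the VPC, subnet, and security group IDs are placeholders you would replace with your own.

```yaml
# Sketch (assumptions noted): interface VPC endpoint for Amazon Cognito
# identity pools. Resource IDs below are placeholders, not real resources.
Resources:
  CognitoIdentityEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: !Sub com.amazonaws.${AWS::Region}.cognito-identity
      VpcId: vpc-0123456789abcdef0          # placeholder
      SubnetIds:
        - subnet-0123456789abcdef0          # placeholder
      SecurityGroupIds:
        - sg-0123456789abcdef0              # placeholder
      PrivateDnsEnabled: true
```

With private DNS enabled, SDK and CLI calls to the identity pools endpoint resolve to the interface endpoint inside your VPC rather than traversing the public internet.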

cloudformationiam
#cloudformation#iam#new-feature#support

Today, AWS announces Amazon Aurora PostgreSQL-Compatible Edition integration with Kiro powers, enabling developers to build Aurora PostgreSQL-backed applications faster with AI agent-assisted development using Kiro. Kiro powers is a repository of curated and pre-packaged Model Context Protocol (MCP) servers, steering files, and hooks validated by Kiro partners to accelerate specialized software development and deployment use cases. The Kiro power for Aurora PostgreSQL packages the MCP server with targeted database development guidance, giving the Kiro agent instant expertise in Aurora PostgreSQL operations and schema design. It bundles direct database connectivity through the Aurora PostgreSQL MCP server for data plane operations (queries, table creation, schema management) and control plane operations (cluster creation), along with a steering file containing Aurora PostgreSQL-specific best practices. When developers work on database tasks, the power dynamically loads relevant guidance, whether creating new Aurora clusters, designing schemas, or optimizing queries, so AI agents receive only the context needed for the specific task at hand. The Aurora PostgreSQL power is available within the Kiro IDE and on the Kiro powers webpage for one-click installation, and can create and manage Aurora PostgreSQL clusters in all AWS Regions. For more information about development use cases, read this blog post. To learn more about the Aurora PostgreSQL MCP server, visit our documentation. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, and automated multi-Region replication. To get started with Amazon Aurora, visit our getting started page.

#integration#support

Amazon CloudWatch announces support for both the JSON and Concise Binary Object Representation (CBOR) protocols in the CloudWatch SDK, enabling lower latency and improved performance for CloudWatch customers. The SDK will automatically use JSON or CBOR as its new default communication protocol, offering customers lower end-to-end processing latency as well as reduced payload sizes, application client-side CPU, and memory usage. Customers use the CloudWatch SDK either directly or through Infrastructure as Code solutions to manage their monitoring resources. Reducing control plane operation latency and payload size helps customers optimize their operational maintenance, resource usage, and costs. The JSON and CBOR data formats are standards designed to enable better performance than the traditional AWS Query protocol. CloudWatch SDK support for the JSON and CBOR protocols is available in all AWS Regions where Amazon CloudWatch is available and for all generally available AWS SDK language variants. To leverage the performance improvements, customers can install the latest SDK version here. To learn more about the AWS SDK, see Amazon Developer tools.

rdscloudwatch
#rds#cloudwatch#generally-available#improvement#support

AWS Application Migration Service (MGN) now supports Internet Protocol version 6 (IPv6) for both service communication and application migrations. Organizations can migrate applications that use IPv6 addressing, enabling transitions to modern network infrastructures. You can connect to AWS MGN using new dual-stack service endpoints that support both IPv4 and IPv6 communications. When migrating applications, you can transfer replication data using IPv4 or IPv6 while maintaining network connections and security. Then, during testing and cutover phases, you can use your chosen network configuration (IPv4, IPv6, or dual-stack) to launch servers in your target environment. This feature is available in every AWS Region that supports AWS MGN and Amazon Elastic Compute Cloud (Amazon EC2) dual-stack endpoints. For supported regions, see the AWS MGN Supported AWS Regions and Amazon EC2 Endpoints documentation. To learn more about AWS MGN, visit our product page or documentation. To get started, sign in to the AWS Application Migration Service Console.

ec2organizations
#ec2#organizations#launch#ga#support

Amazon EC2 High Memory U7i instances with 24TB of memory (u7in-24tb.224xlarge) are now available in AWS Europe (Frankfurt), U7i instances with 16TB of memory (u7in-16tb.224xlarge) are now available in AWS Asia Pacific (Mumbai), and U7i instances with 6TB of memory (u7i-6tb.112xlarge) are now available in the AWS Europe (Paris) Region. U7i instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7in-24tb instances offer 24TiB of DDR5 memory, U7in-16tb instances offer 16TiB of DDR5 memory, and U7i-6tb instances offer 6TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-6tb instances offer 448 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7in-16tb instances offer 896 vCPUs, support up to 100Gbps of EBS bandwidth for faster data loading and backups, deliver up to 200Gbps of network bandwidth, and support ENA Express. U7in-24tb instances offer 896 vCPUs, support up to 100Gbps of EBS bandwidth for faster data loading and backups, deliver up to 200Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.

ec2
#ec2#now-available#support

Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the AWS Asia Pacific (Singapore, Jakarta) and Europe (Stockholm) Regions. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances. I7i instances are ideal for I/O intensive and latency-sensitive workloads that demand very high random IOPS performance with real-time latency to access small to medium size datasets (multi-TBs). I7i instances support the torn write prevention feature with up to 16KB block sizes, enabling customers to eliminate database performance bottlenecks. I7i instances are available in eleven sizes - nine virtual sizes up to 48xlarge and two bare metal sizes - delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth. To learn more, visit the I7i instances page.

ec2
#ec2#ga#now-available#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Asia Pacific (Hyderabad) Region. C7i instances use custom Intel processors, available only on AWS. C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad-serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which facilitate efficient offload and acceleration of data operations and optimize performance for workloads. C7i instances support new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. Customers can attach up to 128 EBS volumes to a C7i instance vs. up to 28 EBS volumes to a C6i instance. This allows you to process larger amounts of data, scale workloads, and achieve improved performance over C6i instances. To learn more, visit Amazon EC2 C7i Instances. To get started, see the AWS Management Console.

ec2
#ec2#now-available#support

Amazon Elastic Container Service (Amazon ECS) now supports custom container stop signals for Linux tasks running on AWS Fargate, honoring the stop signal configured in Open Container Initiative (OCI) images when tasks are stopped. The enhancement improves graceful shutdown behavior by aligning Fargate task termination with each container's preferred termination signal. Previously, when an Amazon ECS task running on AWS Fargate was stopped, each Linux container always received SIGTERM followed by SIGKILL after the configured timeout. With the new behavior, the Amazon ECS container agent reads the stop signal from the container image configuration and sends that signal when stopping the task. Containers that rely on signals such as SIGQUIT or SIGINT for graceful shutdown can now run on Fargate with their intended termination semantics. If no STOPSIGNAL is configured, Amazon ECS continues to send SIGTERM by default. Customers can use custom stop signals on Amazon ECS with AWS Fargate by adding a STOPSIGNAL instruction (for example, STOPSIGNAL SIGQUIT) to their OCI-compliant container images. Support for container-defined stop signals is available in all AWS Regions. To learn more, refer to the ECS Developer Guide.
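The image-level configuration described above is a single Dockerfile instruction. For example, nginx uses SIGQUIT for graceful shutdown, so declaring it as the image's stop signal lets Fargate honor that preference (the base image tag is illustrative):

```dockerfile
# nginx drains in-flight connections on SIGQUIT, so declare it as this
# image's stop signal. With this launch, ECS on Fargate reads STOPSIGNAL
# from the image config and sends SIGQUIT instead of the default SIGTERM
# when the task is stopped.
FROM nginx:1.27
STOPSIGNAL SIGQUIT
```

No task definition changes are needed; the ECS container agent picks the signal up from the image configuration at stop time.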

ecsfargate
#ecs#fargate#ga#enhancement#support

This post shows how to implement automated smoke testing using Amazon Nova Act headless mode in CI/CD pipelines. We use SauceDemo, a sample ecommerce application, as our target for demonstration. We demonstrate setting up Amazon Nova Act for headless browser automation in CI/CD environments and creating smoke tests that validate key user workflows. We then show how to implement parallel execution to maximize testing efficiency, configure GitLab CI/CD for automatic test execution on every deployment, and apply best practices for maintainable and scalable test automation.

nova
#nova

Today, AWS announces the general availability of the new Amazon Elastic Block Store (Amazon EBS) optimized Amazon Elastic Compute Cloud (Amazon EC2) C8gb instances. These instances are powered by AWS Graviton4 processors to deliver up to 30% better compute performance than AWS Graviton3 processors. At up to 150 Gbps of EBS bandwidth, these instances offer higher EBS performance compared to same-sized Graviton4-based instances. Take advantage of the higher block storage performance offered by these new EBS optimized EC2 instances to scale the performance and throughput of workloads such as high-performance file systems, while optimizing the cost of running your workloads. For increased scalability, these instances offer instance sizes up to 24xlarge, including a metal-24xl size, up to 192 GiB of memory, up to 150 Gbps of EBS bandwidth, and up to 200 Gbps of networking bandwidth. These instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, and metal-24xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. The new C8gb instances are available in the US East (N. Virginia) and US West (Oregon) Regions. Metal sizes are only available in the US East (N. Virginia) Region. To learn more, see Amazon EC2 C8gb Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2graviton
#ec2#graviton#generally-available#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) X8g instances are available in the Asia Pacific (Sydney) Region. These instances are powered by AWS Graviton4 processors and deliver up to 60% better performance than AWS Graviton2-based Amazon EC2 X2gd instances. X8g instances offer up to 3 TiB of total memory and increased memory per vCPU compared to other Graviton4-based instances. They have the best price performance among EC2 X-series instances, and are ideal for memory-intensive workloads such as electronic design automation (EDA) workloads, in-memory databases (Redis, Memcached), relational databases (MySQL, PostgreSQL), real-time big data analytics, real-time caching servers, and memory-intensive containerized applications. X8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 3TiB) than Graviton2-based X2gd instances. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Elastic Fabric Adapter (EFA) networking support is offered on 24xlarge, 48xlarge, and bare metal sizes, and Elastic Network Adapter (ENA) Express support is available on instance sizes larger than 12xlarge. To learn more, see Amazon EC2 X8g Instances. To quickly migrate your workloads to Graviton-based instances, see the AWS Graviton Fast Start program. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2graviton
#ec2#graviton#now-available#support

Amazon Braket now supports Qiskit 2.0, enabling quantum developers to use the latest version of the most popular quantum software framework with native primitives and client-side compilation capabilities. With this release, Braket provides native implementations of Qiskit's Sampler and Estimator primitives that leverage Braket's program sets for optimized batching, reducing execution time and costs compared to generic wrapper approaches. The native primitives handle parameter sweeps and observable measurements service-side, eliminating the need for customers to implement this logic manually. Additionally, the bidirectional circuit conversion capability enables customers to use Qiskit's extensive compilation framework for client-side transpilation before submitting to Braket devices, providing the control and reproducibility that enterprise users and researchers require for device characterization experiments and custom compilation passes. Qiskit 2.0 support is available in all AWS Regions where Amazon Braket is available. To get started, see the Qiskit-Braket provider documentation and the Amazon Braket Developer Guide.

#support

Today, AWS announces that the AWS Support Center Console now supports screen sharing for troubleshooting support cases. With this new feature, you can request a virtual meeting while in an active chat or call and join support calls with one click through a meeting bridge link. During the virtual meeting, you can share your screen while maintaining seamless access to case details for efficient troubleshooting. This enhancement simplifies your support experience by keeping all support interactions within the AWS Support Center console. To learn more, visit the AWS Support page.

#new-feature#enhancement#support

Hundreds of thousands of customers build artificial intelligence and machine learning (AI/ML) and analytics applications on AWS, frequently transforming data through multiple stages for improved query performance—from raw data to processed datasets to final analytical tables. Data engineers must solve complex problems, including detecting what data has changed in base tables, writing and maintaining transformation […]

lexglue
#lex#glue

This post evaluates the reasoning capabilities of our latest offering in the Nova family, Amazon Nova 2 Lite, using practical scenarios that test these critical dimensions. We compare its performance against other models in the Nova family—Lite 1.0, Micro, Pro 1.0, and Premier—to elucidate how the latest version advances reasoning quality and consistency.

novalex
#nova#lex#ga#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the AWS US East (Ohio) and Middle East (UAE) Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards, and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances. Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, and CPU-based artificial intelligence and machine learning (AI/ML) inference. For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters. C8gn instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Stockholm), Asia Pacific (Singapore, Malaysia, Sydney, Thailand), and Middle East (UAE). To learn more, see Amazon EC2 C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2rdsgraviton
#ec2#rds#graviton#ga#now-available#support

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) X8g instances are available in Europe (Stockholm) region. These instances are powered by AWS Graviton4 processors, and they offer up to 3 TiB of total memory and increased memory per vCPU compared to other Graviton4-based instances. X8g instances are ideal for memory-intensive workloads, such as electronic design automation (EDA) workloads, in-memory databases (Redis, Memcached), relational databases (MySQL, PostgreSQL), real-time big data analytics, real-time caching servers, and memory-intensive containerized applications. X8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 3TiB) than Graviton2-based X2gd instances. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Elastic Fabric Adapter (EFA) networking support is offered on 24xlarge, 48xlarge, and bare metal sizes, and Elastic Network Adapter (ENA) Express support is available on instance sizes larger than 12xlarge. X8g instances are currently available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), and Europe (Frankfurt, Stockholm). To learn more, see Amazon EC2 X8g Instances. To quickly migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

ec2graviton
#ec2#graviton#now-available#support

Today, AWS announces a deal sizing capability in AWS Partner Central. This new feature, available within APN Customer Engagements (ACE) Opportunities, uses AI to provide deal size estimates and AWS service recommendations. The deal sizing capability allows Partners to save time on deal management by simplifying the process of estimating AWS monthly recurring revenue (MRR) when creating or updating opportunities. Partners can optionally import AWS Pricing Calculator URLs to automatically populate AWS service selections and corresponding spend estimates into their opportunities, reducing the need for manual re-entry. When a Pricing Calculator URL is provided, deal sizing delivers enhanced insights including pricing strategy optimization recommendations, potential cost savings analysis, Migration Acceleration Program (MAP) eligibility indicators, and modernization pathway analysis. These enhanced insights help Partners refine their technical approach and strengthen funding applications, accelerating the funding approval process. Deal sizing is now available in AWS Partner Central worldwide. The feature is accessible through both AWS Partner Central and the AWS Partner Central API for Selling, which is available in the US East (N. Virginia) Region. To get started, log in to AWS Partner Central in the console to create or update opportunities and view deal sizing insights. For API integration with your CRM system, see the AWS Partner Central API Documentation. To learn more about deal sizing, visit the Partner Central Sales Guide.

#ga#now-available#new-feature#update#integration

Amazon RDS and Aurora now support resource tagging for automated backups and cluster automated backups. You can now tag your automated backups separately from the parent DB instance or DB cluster, enabling Attribute-Based Access Control (ABAC) and simplifying resource management and cost tracking. With this launch, you can tag automated backups in the same way as other RDS resources using the AWS Management Console, API, or SDK. Use these tags with IAM policies to control access and permissions to automated backups. Additionally, these tags can help you categorize your resources by application, project, department, environment, and more, as well as manage, organize, and track costs of your automated backups. For example, create application specific tags to control permissions for describing, deleting, or restoring automated backups and to organize and track backup costs of the application. This capability is available in all AWS Regions, including the AWS GovCloud (US) Regions where Aurora and RDS are available. To learn more about tagging Aurora and RDS automated backups, see the Amazon documentation on Tagging Amazon Aurora resources, Tagging Amazon RDS resources, and Using tags for attribute-based access control.
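Because these tags work with standard IAM condition keys, an ABAC policy can scope backup operations to a tag. The following is a minimal Python sketch that emits such a policy document; the tag key, tag value, and the exact set of RDS actions shown are illustrative, not a complete permission set.

```python
import json

def backup_abac_policy(tag_key: str, tag_value: str) -> str:
    """Build an IAM policy allowing describe/delete of RDS automated
    backups only when the backup carries the given resource tag."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "rds:DescribeDBInstanceAutomatedBackups",
                "rds:DeleteDBInstanceAutomatedBackup",
            ],
            "Resource": "*",
            "Condition": {
                # Grant access only to backups tagged tag_key=tag_value
                "StringEquals": {f"aws:ResourceTag/{tag_key}": tag_value}
            },
        }],
    }
    return json.dumps(policy, indent=2)

print(backup_abac_policy("project", "checkout"))
```

Attaching a policy like this to a role restricts it to the automated backups of a single application, while other teams' backups remain inaccessible.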

#rds#iam#launch#ga#support

Today, Amazon GameLift Servers is launching AI-powered assistance in the AWS Console, leveraging Amazon Q Developer to provide tailored guidance for game developers. This new feature integrates specialized GameLift Servers knowledge to help customers navigate complex workflows, troubleshoot issues, and optimize their game server deployments more efficiently. Developers can now access AI-assisted recommendations for game server integration, fleet configuration, and performance optimization directly within the AWS Console via Amazon GameLift Servers. This enhancement aims to streamline decision-making, reduce troubleshooting time, and improve overall resource utilization, leading to cost savings and better player experiences. AI-powered assistance is now available in all AWS Regions supported by Amazon GameLift Servers, except AWS China. To learn more about this new feature, visit the Amazon GameLift Servers documentation.

#amazon q#q developer#lex#launch#ga#now-available

AWS recently announced the general availability of auto-optimize for the Amazon OpenSearch Service vector engine. This feature streamlines vector index optimization by automatically evaluating configuration trade-offs across search quality, speed, and cost savings. You can then run a vector ingestion pipeline to build an optimized index on your desired collection or domain. Previously, optimizing index […]

#opensearch#opensearch service

AWS recently announced the general availability of GPU-accelerated vector (k-NN) indexing on Amazon OpenSearch Service. You can now build billion-scale vector databases in under an hour and index vectors up to 10 times faster at a quarter of the cost. This feature dynamically attaches serverless GPUs to boost domains and collections running CPU-based instances. With […]

#opensearch#opensearch service

AWS Glue zero-ETL with SAP now supports data ingestion and replication from SAP data sources such as Operational Data Provisioning (ODP) managed SAP Business Warehouse (BW) extractors, Advanced Business Application Programming (ABAP), Core Data Services (CDS) views, and other non-ODP data sources. Zero-ETL data replication and schema synchronization writes extracted data to AWS services like Amazon Redshift, Amazon SageMaker lakehouse, and Amazon S3 Tables, alleviating the need for manual pipeline development. In this post, we show how to create and monitor a zero-ETL integration with various ODP and non-ODP SAP sources.

#sagemaker#s3#redshift#glue#integration#support

This post is about AWS SDK for JavaScript v3 announcing end of support for Node.js versions based on Node.js release schedule, and it is not about AWS Lambda. For the latter, refer to the Lambda runtime deprecation policy. In the second week of January 2026, the AWS SDK for JavaScript v3 (JS SDK) will start […]

#lambda#support

Have you ever wondered what it is really like to be a woman in tech at one of the world's leading cloud companies? Or maybe you are curious about how diverse perspectives drive innovation beyond the buzzwords? Today, we are providing an insider's perspective on the role of a solutions architect (SA) at Amazon Web Services (AWS). However, this is not a typical corporate success story. We are three women who have navigated challenges, celebrated wins, and found our unique paths in the world of cloud architecture, and we want to share our real stories with you.

#nova#rds#ga

Organizations often have large volumes of documents containing valuable information that remains locked away and unsearchable. This solution addresses the need for a scalable, automated text extraction and knowledge base pipeline that transforms static document collections into intelligent, searchable repositories for generative AI applications.

#bedrock#step functions#organizations#ga

In this post, we demonstrate how to utilize AWS Network Firewall to secure an Amazon EVS environment, using a centralized inspection architecture across an EVS cluster, VPCs, on-premises data centers and the internet. We walk through the implementation steps to deploy this architecture using AWS Network Firewall and AWS Transit Gateway.

#ga

You can now develop AWS Lambda functions using Node.js 24, either as a managed runtime or using the container base image. Node.js 24 is in active LTS status and ready for production use. It is expected to be supported with security patches and bugfixes until April 2028. The Lambda runtime for Node.js 24 includes a new implementation of the […]

#lambda#now-available#support

Organizations running critical workloads on Amazon Elastic Compute Cloud (Amazon EC2) reserve compute capacity using On-Demand Capacity Reservations (ODCRs) to ensure capacity is available when needed. However, reserved capacity can sit idle intermittently during off-peak periods, between deployments, or when workloads scale down. This unused capacity represents a missed opportunity for cost optimization and resource efficiency across the organization.

#ec2#organizations#ga

Amazon Web Services (AWS) provides many mechanisms to optimize the price performance of workloads running on Amazon Elastic Compute Cloud (Amazon EC2), and the selection of the optimal infrastructure to run on can be one of the most impactful levers. When we started building the AWS Graviton processor, our goal was to optimize AWS Graviton […]

#ec2#graviton

In this post, you will learn how the new Amazon API Gateway’s enhanced TLS security policies help you meet standards such as PCI DSS, Open Banking, and FIPS, while strengthening how your APIs handle TLS negotiation. This new capability increases your security posture without adding operational complexity, and provides you with a single, consistent way to standardize TLS configuration across your API Gateway infrastructure.

#lex#rds#api gateway#ga#new-capability

Event-driven applications often need to process data in real time. When you use AWS Lambda to process records from Apache Kafka topics, you frequently encounter two typical requirements: you need to process very high volumes of records in close to real time, and you want your consumers to scale rapidly to handle traffic spikes. Achieving both requires understanding how Lambda consumes Kafka streams, where the potential bottlenecks are, and how to optimize configurations for high throughput and best performance.
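As a rough sketch of the consumer side: Lambda delivers Kafka records to the handler in batches, grouped by topic-partition, with base64-encoded payloads. The handler below is a minimal illustration (the topic name and payload shape are invented, and the partial-batch response field is shown only schematically).

```python
import base64
import json

def handler(event, context):
    """Decode and process a batch of Kafka records delivered by Lambda.
    Records arrive grouped by topic-partition under event['records'],
    with base64-encoded values."""
    processed = 0
    for partition_key, records in event["records"].items():
        for record in records:
            payload = base64.b64decode(record["value"]).decode("utf-8")
            message = json.loads(payload)
            # Business logic would go here; this sketch just counts records.
            processed += 1
    # Empty batchItemFailures signals the whole batch succeeded.
    return {"batchItemFailures": [], "processed": processed}

# Local smoke test with a minimal synthetic event
sample = {"records": {"orders-0": [
    {"topic": "orders", "partition": 0, "offset": 1,
     "value": base64.b64encode(json.dumps({"id": 1}).encode()).decode()}
]}}
print(handler(sample, None))  # → {'batchItemFailures': [], 'processed': 1}
```

Batch size, concurrency, and pollers are where the tuning for throughput actually happens; the handler itself should stay fast and idempotent.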

#lambda#rds#kafka

Modern generative AI applications often need to stream large language model (LLM) outputs to users in real-time. Instead of waiting for a complete response, streaming delivers partial results as they become available, which significantly improves the user experience for chat interfaces and long-running AI tasks. This post compares three serverless approaches to handle Amazon Bedrock LLM streaming on Amazon Web Services (AWS), which helps you choose the best fit for your application.

#bedrock

Today, AWS is announcing tenant isolation for AWS Lambda, enabling you to process function invocations in separate execution environments for each end-user or tenant invoking your Lambda function. This capability simplifies building secure multi-tenant SaaS applications by managing tenant-level compute environment isolation and request routing, allowing you to focus on core business logic rather than implementing tenant-aware compute environment isolation.

#lambda

In this post, we'll explore a reference architecture that helps enterprises govern their Amazon Bedrock implementations using Amazon API Gateway. This pattern enables key capabilities like authorization controls, usage quotas, and real-time response streaming. We'll examine the architecture, provide deployment steps, and discuss potential enhancements to help you implement AI governance at scale.

#bedrock#api gateway#ga#enhancement

Today, AWS announced support for response streaming in Amazon API Gateway to significantly improve the responsiveness of your REST APIs by progressively streaming response payloads back to the client. With this new capability, you can use streamed responses to enhance user experience when building LLM-driven applications (such as AI agents and chatbots), improve time-to-first-byte (TTFB) performance for web and mobile applications, stream large files, and perform long-running operations while reporting incremental progress using protocols such as server-sent events (SSE).

#api gateway#ga#support#new-capability

Amazon Elastic Compute Cloud (Amazon EC2) instances with locally attached NVMe storage can provide the performance needed for workloads demanding ultra-low latency and high I/O throughput. High-performance workloads, from high-frequency trading applications and in-memory databases to real-time analytics engines and AI/ML inference, need comprehensive performance tracking. Operating system tools like iostat and sar provide valuable system-level insights, and Amazon CloudWatch offers important disk IOPS and throughput measurements, but high-performance workloads can benefit from even more detailed visibility into instance store performance.
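To illustrate the kind of detail OS-level counters provide: tools like iostat derive rates from two samples of cumulative counters (such as those in /proc/diskstats). A toy sketch of that calculation follows; the field names and numbers are invented, and real diskstats parsing involves more fields.

```python
def io_rates(sample_a: dict, sample_b: dict, interval_s: float) -> dict:
    """Compute IOPS and throughput from two cumulative counter samples,
    in the style of /proc/diskstats (ops completed, sectors transferred)."""
    ops = (sample_b["ops"] - sample_a["ops"]) / interval_s
    # Sector counts in /proc/diskstats are in units of 512 bytes
    mbps = (sample_b["sectors"] - sample_a["sectors"]) * 512 / interval_s / 1e6
    return {"iops": ops, "throughput_mb_s": mbps}

# Two samples taken 10 seconds apart (synthetic values)
a = {"ops": 1_000_000, "sectors": 80_000_000}
b = {"ops": 1_050_000, "sectors": 80_800_000}
print(io_rates(a, b, 10))  # → {'iops': 5000.0, 'throughput_mb_s': 40.96}
```

Sampling at a fine interval and per device is what surfaces short latency spikes that coarse-grained averages hide.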

#ec2#cloudwatch

At re:Invent 2025, we introduced one new lens and two significant updates to the AWS Well-Architected Lenses focused specifically on AI workloads: the Responsible AI Lens, the Machine Learning (ML) Lens, and the Generative AI Lens. Together, these lenses provide comprehensive guidance for organizations at different stages of their AI journey, whether you're just starting to experiment with machine learning or already deploying complex AI applications at scale.

#lex#organizations#launch#ga#update

We are delighted to announce an update to the AWS Well-Architected Generative AI Lens. This update adds several new sections, including new best practices, advanced scenario guidance, and improved preambles on responsible AI, data architecture, and agentic workflows.

#update

AWS Lambda now supports Python 3.14 as both a managed runtime and container base image. Python is a popular language for building serverless applications. Developers can now take advantage of new features and enhancements when creating serverless applications on Lambda.

#lambda#now-available#new-feature#enhancement#support

Today, AWS Lambda is promoting Rust support from Experimental to Generally Available. This means you can now use Rust to build business-critical serverless applications, backed by AWS Support and the Lambda availability SLA.

#lambda#experimental#generally-available#support

You can now develop AWS Lambda functions using Java 25 either as a managed runtime or using the container base image. This blog post highlights notable Java language features, Java Lambda runtime updates, and how you can use the new Java 25 runtime in your serverless applications.

#lambda#update#support

From December 1st to December 5th, Amazon Web Services (AWS) will hold its annual premier learning event: re:Invent. There are more than 2,000 learning sessions that focus on specific topics at various skill levels, and the compute team has created 76 unique sessions for you to choose from. With so many options, we are here to help you find the sessions that best fit your needs. Even if you cannot join in person, you can catch up on many of the sessions on-demand and even watch the keynote and innovation sessions live.

#nova

This post was co-written with Frederic Haase and Julian Blau with BASF Digital Farming GmbH. At xarvio – BASF Digital Farming, our mission is to empower farmers around the world with cutting-edge digital agronomic decision-making tools. Central to this mission is our crop optimization platform, xarvio FIELD MANAGER, which delivers actionable insights through a range […]

#eks

Version 2.0 of the AWS Deploy Tool for .NET is now available. This new major version introduces several foundational upgrades to improve the deployment experience for .NET applications on AWS. The tool comes with new minimum runtime requirements. We have upgraded it to require .NET 8 because the predecessor, .NET 6, is now out of […]

#now-available

The global real-time payments market is experiencing significant growth. According to Fortune Business Insights, the market was valued at USD 24.91 billion in 2024 and is projected to grow to USD 284.49 billion by 2032, with a CAGR of 35.4%. Similarly, Grand View Research reports that the global mobile payment market, valued at USD 88.50 […]

Generative AI agents in production environments demand resilience strategies that go beyond traditional software patterns. AI agents make autonomous decisions, consume substantial computational resources, and interact with external systems in unpredictable ways. These characteristics create failure modes that conventional resilience approaches might not address. This post presents a framework for AI agent resilience risk analysis […]

The AWS SDK for Java 1.x (v1) entered maintenance mode on July 31, 2024, and will reach end-of-support on December 31, 2025. We recommend that you migrate to the AWS SDK for Java 2.x (v2) to access new features, enhanced performance, and continued support from AWS. To help you migrate efficiently, we’ve created a migration […]

#new-feature#support

In this post, we explore how Metagenomi built a scalable database and search solution for over 1 billion protein vectors using LanceDB and Amazon S3. The solution enables rapid enzyme discovery by transforming proteins into vector embeddings and implementing a serverless architecture that combines AWS Lambda, AWS Step Functions, and Amazon S3 for efficient nearest neighbor searches.
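Conceptually, vector search boils down to ranking stored embeddings by their similarity to a query embedding. The toy brute-force sketch below shows the idea with cosine similarity; production systems such as LanceDB use approximate nearest-neighbor indexes instead, and the protein names and vectors here are made up.

```python
import math

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(query: list[float], index: dict, k: int = 3) -> list[str]:
    """Brute-force k-nearest-neighbor search over an in-memory index.
    At billion-vector scale this linear scan is replaced by an ANN index."""
    scored = sorted(index.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Tiny synthetic "protein embedding" index
index = {
    "enzymeA": [0.9, 0.1, 0.0],
    "enzymeB": [0.0, 1.0, 0.0],
    "enzymeC": [0.7, 0.7, 0.1],
}
print(nearest([1.0, 0.2, 0.0], index, k=2))  # → ['enzymeA', 'enzymeC']
```

The serverless architecture in the post essentially shards this ranking problem: embeddings live in S3, and Lambda functions fan out over partitions before Step Functions merges the candidates.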

#lambda#s3#step functions

In this post, we explore an efficient approach to managing encryption keys in a multi-tenant SaaS environment through centralization, addressing challenges like key proliferation, rising costs, and operational complexity across multiple AWS accounts and services. We demonstrate how implementing a centralized key management strategy using a single AWS KMS key per tenant can maintain security and compliance while reducing operational overhead as organizations scale.

#lex#organizations#ga

This two-part series shows how Karrot developed a new feature platform, which consists of three main components: feature serving, a stream ingestion pipeline, and a batch ingestion pipeline. This post covers the process of collecting features in real-time and batch ingestion into an online store, and the technical approaches for stable operation.

#new-feature

In this post, we demonstrate how to deploy the DeepSeek-R1-Distill-Qwen-32B model using AWS DLCs for vLLMs on Amazon EKS, showcasing how these purpose-built containers simplify deployment of this powerful open source inference engine. This solution can help you solve the complex infrastructure challenges of deploying LLMs while maintaining performance and cost-efficiency.

#lex#eks

As cloud spending continues to surge, organizations must focus on strategic cloud optimization to maximize business value. This blog post explores key insights from MIT Technology Review's publication on cloud optimization, highlighting the importance of viewing optimization as a continuous process that encompasses all six AWS Well-Architected pillars.

#organizations#ga

Today, we are excited to announce the general availability of the AWS .NET Distributed Cache Provider for Amazon DynamoDB. This is a seamless, serverless caching solution that enables .NET developers to efficiently manage their caching needs across distributed systems. Consistent caching is a difficult problem in distributed architectures, where maintaining data integrity and performance across […]

#dynamodb#generally-available

This blog was co-authored by Afroz Mohammed and Jonathan Nunn, Software Developers on the AWS PowerShell team. We’re excited to announce the general availability of the AWS Tools for PowerShell version 5, a major update that brings new features and improvements in security, along with a few breaking changes. New Features You can now cancel […]

#generally-available#new-feature#update#improvement

Software development is far more than just writing code. In reality, a developer spends a large amount of time maintaining existing applications and fixing bugs. For example, migrating a Go application from the older AWS SDK for Go v1 to the newer v2 can be a significant undertaking, but it’s a crucial step to future-proof […]

#amazon q#q developer

We’re excited to announce that the AWS Deploy Tool for .NET now supports deploying .NET applications to select ARM-based compute platforms on AWS! Whether you’re deploying from Visual Studio or using the .NET CLI, you can now target cost-effective ARM infrastructure like AWS Graviton with the same streamlined experience you’re used to. Why deploy to […]

#graviton#support

Version 4.0 of the AWS SDK for .NET has been released for general availability (GA). V4 has been in development for a little over a year in our SDK’s public GitHub repository with 13 previews being released. This new version contains performance improvements, consistency with other AWS SDKs, and bug and usability fixes that required […]

#preview#ga#improvement

Today, AWS launches the developer preview of the AWS IoT Device SDK for Swift. The IoT Device SDK for Swift empowers Swift developers to create IoT applications for Linux and Apple macOS, iOS, and tvOS platforms using the MQTT 5 protocol. The SDK supports Swift 5.10+ and is designed to help developers easily integrate with […]

#launch#preview#support

We are excited to announce the Developer Preview of the Amazon S3 Transfer Manager for Rust, a high-level utility that speeds up and simplifies uploads and downloads with Amazon Simple Storage Service (Amazon S3). Using this new library, developers can efficiently transfer data between Amazon S3 and various sources, including files, in-memory buffers, memory streams, […]

#s3#preview

In a recent post we gave some background on .NET Aspire and introduced our AWS integrations with .NET Aspire that integrate AWS into the .NET dev inner loop for building applications. The integrations included how to provision application resources with AWS CloudFormation or AWS Cloud Development Kit (AWS CDK) and using Amazon DynamoDB local for […]

#lambda#dynamodb#cloudformation#ga#integration

.NET Aspire is a new way of building cloud-ready applications. In particular, it provides an orchestration for local environments in which to run, connect, and debug the components of distributed applications. Those components can be .NET projects, databases, containers, or executables. .NET Aspire is designed to have integrations with common components used in distributed applications. […]

#integration

AWS announces important configuration updates coming July 31st, 2025, affecting AWS SDKs and CLIs default settings. Two key changes include switching the AWS Security Token Service (STS) endpoint to regional and updating the default retry strategy to standard. These updates aim to improve service availability and reliability by implementing regional endpoints to reduce cross-regional dependencies and introducing token-bucket throttling for standardized retry behavior. Organizations should test their applications before the release date and can opt-in early or temporarily opt-out of these changes. These updates align with AWS best practices for optimal service performance and security.
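The token-bucket idea behind the standard retry mode can be sketched as follows: retry attempts spend tokens, successful responses slowly refill the bucket, and an empty bucket suppresses further retries so a degraded dependency is not hammered. The capacity, cost, and refund values below are illustrative, not the SDKs' actual constants.

```python
class RetryQuota:
    """Simplified client-side token bucket in the spirit of the SDKs'
    'standard' retry mode."""

    def __init__(self, capacity: int = 500, retry_cost: int = 5,
                 refund: int = 1):
        self.capacity = capacity
        self.tokens = capacity
        self.retry_cost = retry_cost
        self.refund = refund

    def acquire_retry(self) -> bool:
        """Spend tokens for one retry attempt; False means fail fast."""
        if self.tokens < self.retry_cost:
            return False
        self.tokens -= self.retry_cost
        return True

    def record_success(self) -> None:
        """Refill the bucket slightly after a successful response."""
        self.tokens = min(self.capacity, self.tokens + self.refund)

quota = RetryQuota(capacity=10)
print([quota.acquire_retry() for _ in range(3)])  # → [True, True, False]
```

The effect is that sustained failures drain the quota and retries stop quickly, while healthy traffic keeps the bucket topped up and occasional errors still get retried.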

#organizations#ga#update