SOCI enables containers to launch faster by lazily loading files from the container image. Ordinarily, a container image has to be fully downloaded and decompressed before it can start. With SOCI, an index of the container image is created, stored in the registry as another OCI artifact, and then linked back to the container image by OCI reference types. This avoids having to convert container images or update image signatures, as the digest of the image does not change.
SOCI and the soci-snapshotter are open sourced under Apache 2.0, and you can learn more about the project on GitHub.
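As an illustrative sketch (not taken from the announcement), building and pushing a SOCI index with the soci CLI looks roughly like this. The image reference is a placeholder, and the commands assume containerd and a registry that supports OCI reference types:

```shell
# Pull the image into containerd's content store (soci reads it from there)
sudo ctr image pull public.ecr.aws/example/my-app:latest

# Build a SOCI index for the image; the index is stored locally as OCI artifacts
sudo soci create public.ecr.aws/example/my-app:latest

# Push the index to the registry, linked back to the image via OCI reference types
sudo soci push public.ecr.aws/example/my-app:latest
```

Because only the index is added, the image digest (and any signatures over it) stays unchanged.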
Fargate quotas are moving from pod count to vCPU-based quotas starting October 3rd.
You can now opt in to the new vCPU-based quotas, giving you additional time to get familiar with them and make modifications to your quota management tools. If you run into issues with vCPU-based quotas, you can temporarily opt out until October 31, 2022, and remediate your systems.
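As a hedged sketch, you can inspect the Fargate quotas applied to your account with the Service Quotas CLI; the quota code below is a placeholder to replace with the one you care about:

```shell
# List the quotas that apply to Fargate in the current account and Region
aws service-quotas list-service-quotas --service-code fargate

# Inspect a single quota by its code (quota code shown is a placeholder)
aws service-quotas get-service-quota \
  --service-code fargate \
  --quota-code L-XXXXXXXX
```

Comparing the output before and after opting in is a quick way to confirm which quota model your account is on.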
AMP now provides Alert Manager and Ruler logs in Amazon CloudWatch Logs to help customers troubleshoot their alerting pipeline and configuration. AMP includes the ability to filter alerts before they are sent to Amazon SNS and to define recording and alerting rules.
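To make the "filter before SNS" part concrete, here is a minimal sketch of an AMP alert manager definition that routes low-severity alerts to a receiver with no configuration instead of SNS; the topic ARN, Region, and label values are hypothetical:

```yaml
alertmanager_config: |
  route:
    receiver: 'sns'
    routes:
      # Drop informational alerts before they reach SNS
      - match:
          severity: 'info'
        receiver: 'discard'
  receivers:
    - name: 'sns'
      sns_configs:
        - topic_arn: 'arn:aws:sns:us-west-2:111122223333:my-alerts'  # hypothetical topic
          sigv4:
            region: 'us-west-2'
    - name: 'discard'  # no configuration: alerts routed here go nowhere
```

If a definition like this is invalid, the new Alert Manager logs are where the resulting errors surface.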
After describing the different information captured in the control plane logs, this blog proposes several ways you can optimize the cost of retaining and analyzing those logs.
Disable or filter control plane logging in non-production environments.
CloudWatch Logs retains logs indefinitely by default. To lower your costs, set a retention policy on each log group.
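For example (a sketch; the log group name assumes an EKS cluster called `my-cluster`), retention can be capped with a single CLI call:

```shell
# Keep EKS control plane logs for 90 days instead of forever
aws logs put-retention-policy \
  --log-group-name /aws/eks/my-cluster/cluster \
  --retention-in-days 90
```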
Export your logs to S3 for long-term archival purposes and/or use Amazon Athena to analyze the logs in S3.
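As a sketch of the export step, CloudWatch Logs can write a time window of a log group to an S3 bucket; the log group, bucket name, and timestamps below are placeholders, and the bucket policy must allow CloudWatch Logs to write to it:

```shell
# Export a time window of control plane logs to an S3 bucket
# (--from and --to are epoch timestamps in milliseconds)
aws logs create-export-task \
  --task-name eks-log-archive \
  --log-group-name /aws/eks/my-cluster/cluster \
  --from 1660000000000 \
  --to 1661000000000 \
  --destination my-log-archive-bucket
```

Once the objects land in S3, Athena can query them in place without re-ingesting them.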
This blog demonstrates how you can troubleshoot your alert manager pipeline via vended logs to correct common misconfigurations, such as missing permissions, an invalid alert manager template, and rule evaluation failures.
Vended logs are Prometheus logs that are routed to CloudWatch Logs on your behalf.
Walks through how to use a GitOps approach to vend workload clusters and then deploy composite applications (e.g., an application that interfaces with an Amazon RDS database) onto those clusters using Crossplane and Flux.
Describes the AWS contributions to the Ray community to enable enterprise-scale AI and machine learning deployments with Ray on AWS. These contributions and AWS service integrations allow AWS customers to scale their Ray-based workloads using secure, cost-efficient, and enterprise-ready AWS services across the complete end-to-end AI and machine learning pipeline with both CPUs and GPUs.
EKS supports Ray on Kubernetes through the KubeRay EKS Blueprint, contributed by the Amazon EKS team, which quickly deploys a scalable and observable Ray cluster on your Amazon EKS cluster. As compute demand increases or decreases, Ray works with the Kubernetes Cluster Autoscaler (CAS) to resize the Amazon EKS cluster as needed.
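As a sketch of how the operator underlying this is typically installed (the chart names come from the upstream KubeRay project, not from this post, and the commands assume kubectl/helm access to an existing EKS cluster):

```shell
# Add the upstream KubeRay Helm repository
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm repo update

# Install the KubeRay operator into the cluster; it manages Ray clusters as CRDs
helm install kuberay-operator kuberay/kuberay-operator
```

The EKS Blueprint wraps this kind of setup together with the autoscaling and observability pieces.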