Pablo Rodriguez

Building Microservice Applications with AWS Container Services

Choosing Containers Over AWS Lambda Functions

  • Duration: Lambda functions time out after a maximum of 15 minutes, so longer-running services aren't suitable
  • Memory: Lambda functions are limited to 10 GB of memory, which some workloads exceed
  • Container Solution: Containers can be sized to accommodate applications exceeding Lambda limitations
  • Containers Benefit: Help migrate legacy applications from on-premises or EC2 without refactoring
  • Minimal Changes: Port applications to containers with minimum code and configuration changes
  • Custom Runtimes: Simpler solution for different runtimes than porting to Lambda runtime
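
The first two criteria above can be sketched as a simple check against Lambda's documented service limits (a 15-minute execution timeout and a 10 GB memory ceiling):

```python
# Sketch: decide whether a workload fits within AWS Lambda's service limits.
# The constants mirror Lambda's documented maximums.
LAMBDA_MAX_DURATION_SECONDS = 15 * 60   # 15-minute execution timeout
LAMBDA_MAX_MEMORY_MB = 10_240           # 10 GB memory ceiling

def fits_lambda(duration_seconds: int, memory_mb: int) -> bool:
    """Return True if the workload fits Lambda's duration and memory limits."""
    return (duration_seconds <= LAMBDA_MAX_DURATION_SECONDS
            and memory_mb <= LAMBDA_MAX_MEMORY_MB)

# A 45-minute ETL job using 16 GB of memory needs containers instead:
print(fits_lambda(45 * 60, 16_384))   # → False
```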

Lambda functions can become expensive for:

  • Extended duration applications
  • High memory usage applications
  • Continuous high-traffic applications with minimal idle time
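
To illustrate the cost point, here is a back-of-the-envelope monthly comparison for a continuously busy service. The per-unit prices are assumed example figures (they vary by region and change over time), so treat the numbers as illustrative only:

```python
# Back-of-the-envelope monthly cost comparison (assumed example prices).
LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667  # USD, assumed example rate
FARGATE_PRICE_PER_VCPU_HOUR = 0.04048      # USD, assumed example rate
FARGATE_PRICE_PER_GB_HOUR = 0.004445       # USD, assumed example rate

HOURS_PER_MONTH = 730

def lambda_cost(memory_gb: float, busy_seconds: float) -> float:
    # Lambda bills per GB-second of execution (request charges omitted).
    return memory_gb * busy_seconds * LAMBDA_PRICE_PER_GB_SECOND

def fargate_cost(vcpu: float, memory_gb: float, hours: float) -> float:
    # Fargate bills per vCPU-hour and per GB-hour while the task runs.
    return hours * (vcpu * FARGATE_PRICE_PER_VCPU_HOUR
                    + memory_gb * FARGATE_PRICE_PER_GB_HOUR)

# A service that is busy around the clock with 4 GB of memory:
lam = lambda_cost(4, HOURS_PER_MONTH * 3600)
far = fargate_cost(2, 4, HOURS_PER_MONTH)
print(f"Lambda: ${lam:.0f}/month, Fargate: ${far:.0f}/month")
```

With these example rates, the always-busy Lambda workload costs roughly twice the equivalent Fargate task, which is why continuous high-traffic services favor containers.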

Virtual Machines:

  • Multiple applications on different guest operating systems
  • Large packages including app code, dependencies, and guest OS
  • Deployed on hypervisor allocating host resources

Containers:

  • Standardized application packages containing code and dependencies
  • Run on a container engine with read-only access to parts of the host OS
  • Notably smaller than VM packages
  • Lightweight, rapid startup, quick scaling
  • Self-contained and consistent across OS and hardware platforms

Container engines replace the hypervisor and guest OS layers:

  • Use fewer host system resources (no guest OS overhead)
  • Run many more containers than VMs on a same-sized host
  • Popular engines: Docker Engine, containerd, Red Hat OpenShift

Microservices in Containers:

  • Package each microservice as a container application
  • Run microservices in isolated processes without affecting one another
  • Scale and deploy each microservice independently
  • Suitable for prolonged processing jobs, such as ETL jobs that scale dynamically based on demand and run as long as the transformation takes
  • AWS Usage: Amazon SageMaker uses containers for packaged algorithms, letting customers scale and run ML models close to large training datasets and deploy and train models efficiently

Multi-Environment Consistency:

  • Standardize applications spanning the AWS Cloud and on-premises environments
  • Maintenance Reduction: Lighten the administration load of maintaining separate environments
  • Consistency: Same application packages across environments
  • Rapid Migration: Move applications from on-premises to the cloud without code changes; container packages deploy directly in the cloud with no refactoring required

Container build and deployment workflow:

  1. Dockerfile Creation: Text file with build instructions for the container image
  2. Image Building: Run container engine build command
  3. Local Testing: Verify container image on local host
  4. Repository Upload: Push container image to container repository
  5. Deployment: Deploy multiple containers from single image
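
The steps above can be sketched as the docker commands a build script would run; the image name and ECR registry URI are hypothetical placeholders. The commands are assembled as argument lists here, and a real script would pass each one to subprocess.run:

```python
import shlex

# Hypothetical image name and ECR repository URI for illustration.
IMAGE = "my-service"
TAG = "1.0"
ECR_REPO = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-service"

workflow = [
    # 2. Build the image from the Dockerfile in the current directory.
    f"docker build -t {IMAGE}:{TAG} .",
    # 3. Test the image on the local host before publishing it.
    f"docker run --rm -p 8080:8080 {IMAGE}:{TAG}",
    # 4. Tag and push the image to the container repository.
    f"docker tag {IMAGE}:{TAG} {ECR_REPO}:{TAG}",
    f"docker push {ECR_REPO}:{TAG}",
]
commands = [shlex.split(step) for step in workflow]
# e.g. subprocess.run(commands[0], check=True) would execute the build step.
```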

Companies prefer orchestration services over manual deployment:

  • Management: Deploy and scale hundreds or thousands of containers
  • Multi-Platform: Use one or multiple compute platforms
  • Health Monitoring: Track health of compute platform and containers
  • Image Management: Pull required images from repositories

The Open Container Initiative (OCI) provides industry standards for container image, runtime, and distribution specifications, enabling portability across platforms.

Amazon Elastic Container Registry (Amazon ECR):

  • AWS managed container image registry service
  • Secure, scalable, and reliable
  • Supports private and public repositories
  • Resource-based permissions using IAM

Amazon Elastic Container Service (Amazon ECS):

  • Developed by AWS with built-in configuration and operational best practices
  • Integrated with AWS and third-party tools (ECR, Docker)
  • Accelerates development time for teams

Amazon Elastic Kubernetes Service (Amazon EKS):

  • Uses Kubernetes system for container management
  • Open-source system for automated management, scaling, and deployment
  • Suitable for customers running Kubernetes clusters on-premises

Both Amazon ECS and Amazon EKS support:

  • Amazon EC2 instances: Customer VPC nodes
  • AWS Fargate nodes: AWS managed VPC nodes
  • Hybrid Clusters: Both EC2 and Fargate nodes in same cluster
  • Anywhere Features: Deploy clusters in on-premises environments

Alternatively, AWS Lambda can also run container images with up to 10 GB of memory, but Lambda functions aren't managed by container orchestration services.

AWS Fargate benefits:

  • No Server Management: No provisioning, configuration, or maintenance
  • No Cluster Optimization: No cluster packing optimization required
  • Automatic Operations: AWS handles all infrastructure management
  • Per-Second Billing: Pay only for compute resources used
  • Automatic Scaling: Dynamically scale tasks based on CPU, memory, or other metrics
  • Capacity Providers: Control scaling with Amazon ECS capacity providers
  • Beginner-Friendly: Suitable for teams new to container technology
  • Built-in Features: Scaling and security built-in
  • Reduced Complexity: No need for comprehensive container technology knowledge
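
As a sketch of the automatic-scaling point, a target-tracking policy for an ECS service is typically expressed as a configuration like the one below, built here as plain dicts in the shape Application Auto Scaling expects (the cluster and service names are hypothetical). It adds or removes tasks to keep average CPU utilization near the target:

```python
# Sketch: target-tracking scaling configuration for an ECS service,
# in the shape used by Application Auto Scaling (names are hypothetical).
scaling_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/demo-cluster/demo-service",  # hypothetical names
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 2,                                   # never fewer than 2 tasks
    "MaxCapacity": 10,                                  # never more than 10 tasks
}

scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # Add tasks when average CPU rises above 60%, remove them when it falls.
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization",
        },
    },
}
```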

Create an Amazon ECS cluster with compute nodes on platforms such as AWS Fargate and Amazon EC2.

Task Definitions:

  • Blueprint: JSON-format text file describing application parameters
  • Container Information: Image, CPU, memory, and launch type specifications
  • Smallest Unit: Tasks are the smallest compute unit in Amazon ECS
  • Container Grouping: A set of containers placed together with defined properties
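
A minimal Fargate task definition blueprint might look like the following, built as a Python dict and serialized to the JSON form Amazon ECS expects (the family name and image URI are hypothetical):

```python
import json

# Sketch of a minimal Fargate task definition (names and URIs are hypothetical).
task_definition = {
    "family": "login-service",              # hypothetical task family name
    "networkMode": "awsvpc",                # required network mode for Fargate
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                           # 0.25 vCPU
    "memory": "512",                        # 512 MiB
    "containerDefinitions": [
        {
            "name": "login",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/login:1.0",
            "essential": True,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        }
    ],
}

blueprint = json.dumps(task_definition, indent=2)  # the JSON blueprint text
```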

Running Tasks:

  • Single execution of task definition
  • Results in deployed containers within cluster

ECS Services:

  • Manage multiple tasks simultaneously
  • Maintain the desired number of tasks for high availability
  • Launch replacement tasks when failures occur
  • Load Balancing: An Application Load Balancer distributes requests

Example:

  • Task Definition 1: Single container image → Fargate container deployment
  • Task Definition 2: Multiple container images → Service with load balancer
  • Service Routing: Application Load Balancer listener rules (e.g., /api/login → Login microservice)
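
The service-and-routing idea above can be sketched as the parameters an ECS service carries, plus the path-based listener rule that forwards /api/login to it. All names, ARNs, and paths below are hypothetical placeholders:

```python
# Sketch: ECS service parameters and an ALB path-based routing rule.
# All names, ARNs, and paths are hypothetical placeholders.
service = {
    "serviceName": "login-service",
    "cluster": "demo-cluster",
    "taskDefinition": "login-service:1",
    "desiredCount": 3,                  # the service keeps 3 tasks running
    "launchType": "FARGATE",
    "loadBalancers": [
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:region:acct:targetgroup/login/abc",
            "containerName": "login",
            "containerPort": 8080,
        }
    ],
}

listener_rule = {
    # The ALB forwards /api/login requests to the Login microservice's target group.
    "Conditions": [{"Field": "path-pattern", "Values": ["/api/login*"]}],
    "Actions": [{"Type": "forward",
                 "TargetGroupArn": service["loadBalancers"][0]["targetGroupArn"]}],
}
```
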
Amazon EKS Clusters:

  • Create an Amazon EKS cluster with worker machine nodes
  • Nodes can run on AWS Fargate or Amazon EC2 instances

Pods:

  • Smallest unit of deployment in Kubernetes
  • Can deploy single container or multiple dependent containers
  • PodSpec: Specifies container image to use

Deployments and ReplicaSets:

  • Deployment: Manages multiple highly available Pods
  • ReplicaSet: Owned by Deployment, maintains specified number of running Pods

Services:

  • Enable Pod communication and external access
  • Use Pod labels to route network requests
  • Route to available Pods with matching labels

Example:

  • PodSpec: Container image 1 → Pod 1 with container 1
  • Deployment: Container images 1 and 2 → Pod 2 with containers 2 and 3
  • Service: Accepts and routes requests between the front-end application and Pods
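
The Pod, Deployment, and Service concepts above are normally written as YAML Kubernetes manifests; the sketch below builds equivalent structures as Python dicts (the app label and container image are hypothetical). The Service's selector matches the labels the Deployment stamps onto every Pod replica:

```python
# Sketch: Kubernetes Deployment and Service manifests as Python dicts.
# The app label and container image are hypothetical placeholders.
labels = {"app": "login"}

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "login"},
    "spec": {
        "replicas": 3,                      # the ReplicaSet keeps 3 Pods running
        "selector": {"matchLabels": labels},
        "template": {                       # Pod template (includes the PodSpec)
            "metadata": {"labels": labels},
            "spec": {
                "containers": [
                    {"name": "login",
                     "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/login:1.0",
                     "ports": [{"containerPort": 8080}]}
                ]
            },
        },
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "login"},
    "spec": {
        "selector": labels,                 # routes to Pods with matching labels
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```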

Choosing Between Amazon ECS and Amazon EKS

Simplicity Focus (Amazon ECS):

  • Simplifies cluster creation and maintenance
  • Automated scaling based on demand
  • Amazon ECS toolset
  • Ideal for teams new to container cluster architecture

AWS Integration (Amazon ECS):

  • Native AWS service integration
  • Reduces decisions around compute, network, and security configurations
  • AWS-opinionated solution for running containers at scale

Choose container solutions over Lambda functions when applications require resources that exceed Lambda service limits. Amazon ECR provides container image repository services, while Amazon ECS and Amazon EKS offer different orchestration approaches. AWS Fargate provides serverless cluster nodes and can be used in both ECS and EKS clusters.