Building Microservices Containers
Building Microservice Applications with AWS Container Services
Choosing Containers Over AWS Lambda Functions
Runtime Limitations
- Duration: Services running longer than 15 minutes aren’t suitable for Lambda
- Memory: Workloads using more than 10 GB memory exceed Lambda limits
- Container Solution: Containers can be sized to accommodate applications that exceed these Lambda limits (see the sketch below)
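For scale, an Amazon ECS task running on AWS Fargate can request up to 16 vCPU and 120 GB of memory, well beyond the Lambda ceilings above. A minimal sketch of the size-related fields in a hypothetical Fargate task definition (names, account ID, and region are placeholders):

```json
{
  "family": "long-running-etl",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "8192",
  "memory": "32768",
  "containerDefinitions": [
    {
      "name": "etl-worker",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/etl-worker:latest",
      "essential": true
    }
  ]
}
```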
Legacy Application Migration
- Containers Benefit: Help migrate legacy applications from on-premises or EC2 without refactoring
- Minimal Changes: Port applications to containers with minimum code and configuration changes
- Custom Runtimes: Keeping an existing runtime inside a container is simpler than porting the application to a Lambda runtime
Cost Considerations
Lambda functions can become expensive for:
- Extended duration applications
- High memory usage applications
- Continuous high-traffic applications with minimal idle time
Benefits of Containers
Virtual Machines vs Containers
Virtual Machines:
- Multiple applications on different guest operating systems
- Large packages including app code, dependencies, and guest OS
- Deployed on hypervisor allocating host resources
Containers:
- Standardized application packages containing code and dependencies
- Run on container engine with read-only access to host OS parts
- Notably smaller than VM packages
- Lightweight, rapid startup, quick scaling
- Self-contained and consistent across OS and hardware platforms
Resource Efficiency
Container engines replace hypervisor and guest OS layers:
- Use fewer host system resources (no guest OS overhead)
- Run many more containers than VMs on same-sized host
- Popular options: Docker Engine, containerd, and container platforms such as Red Hat OpenShift
Container Use Cases
Microservice Applications
- Package microservices as container applications
- Run in isolated processes without affecting other microservices
- Independent scaling and deployment
Batch Processing
- Suitable for prolonged processing jobs
- Example: ETL jobs that scale dynamically based on demand
- Run for as long as the transformation takes
Machine Learning Models
- AWS Usage: Amazon SageMaker uses containers for packaged algorithms
- Customer Benefits: Scale and run ML models close to large training datasets
- Deploy and train ML models efficiently
Hybrid Architecture Standardization
- Multi-Environment: Standardize applications spanning AWS cloud and on-premises
- Maintenance Reduction: Lighten the administration load of maintaining separate environments
- Consistency: Same application packages across environments
Cloud Migration
- Rapid Migration: Move applications from on-premises to cloud without code changes
- Packaging: Container packages deploy directly in cloud
- Minimal Disruption: No application refactoring required
Creating and Deploying Docker Containers
Development Process
- Dockerfile Creation: Text file with build instructions for container image
- Image Building: Run container engine build command
- Local Testing: Verify container image on local host
- Repository Upload: Push container image to container repository
- Deployment: Deploy multiple containers from single image
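As a concrete sketch of these steps, assume a hypothetical Node.js microservice that listens on port 8080; the image name, base image tag, and file names are illustrative:

```dockerfile
# Build instructions for the container image
FROM public.ecr.aws/docker/library/node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Building and testing the image locally before pushing it to a repository:

```bash
# Build the image from the Dockerfile in the current directory, then run it locally
docker build -t login-service:1.0 .
docker run --rm -p 8080:8080 login-service:1.0
```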
Container Orchestration
Companies prefer orchestration services over manual deployment:
- Management: Deploy and scale hundreds or thousands of containers
- Multi-Platform: Use one or multiple compute platforms
- Health Monitoring: Track health of compute platform and containers
- Image Management: Pull required images from repositories
The Open Container Initiative (OCI) provides industry standards for container image, runtime, and distribution specifications, enabling portability across platforms.
AWS Container Services
Registry
Amazon Elastic Container Registry (Amazon ECR):
- AWS managed container image registry service
- Secure, scalable, and reliable
- Supports private and public repositories
- Resource-based permissions using IAM
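A hedged sketch of pushing the image built earlier to a private ECR repository; the account ID, region, and repository name are placeholders:

```bash
# Create a private repository and authenticate the local Docker client to ECR
aws ecr create-repository --repository-name login-service
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the repository URI and push it
docker tag login-service:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/login-service:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/login-service:1.0
```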
Orchestration Services
Amazon Elastic Container Service (Amazon ECS):
- Developed by AWS with built-in configuration and operational best practices
- Integrated with AWS and third-party tools (ECR, Docker)
- Accelerates development time for teams
Amazon Elastic Kubernetes Service (Amazon EKS):
- Uses Kubernetes system for container management
- Open-source system for automated management, scaling, and deployment
- Suitable for customers running Kubernetes clusters on-premises
Compute Options
Both Amazon ECS and Amazon EKS support:
- Amazon EC2 instances: Nodes that run in the customer’s VPC
- AWS Fargate: Serverless nodes that run in an AWS-managed VPC
- Hybrid Clusters: Both EC2 and Fargate nodes in same cluster
- Anywhere Features: Deploy clusters in on-premises environments
Alternative: AWS Lambda can also launch containers with up to 10 GB of memory, but those containers aren’t managed by container orchestration services.
Benefits of AWS Fargate
Simplified Management
- No Server Management: No provisioning, configuration, or maintenance
- No Cluster Optimization: No cluster packing optimization required
- Automatic Operations: AWS handles all infrastructure management
Flexible Billing and Scaling
- Per-Second Billing: Pay only for compute resources used
- Automatic Scaling: Dynamically scale tasks based on CPU, memory, or other metrics
- Capacity Providers: Control scaling with Amazon ECS capacity providers (see the sketch below)
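A minimal sketch of attaching the Fargate capacity providers to a cluster and weighting them; the cluster name is a placeholder, and CPU- or memory-based task scaling would be configured separately through Application Auto Scaling on the service:

```bash
# Associate the Fargate capacity providers with a cluster and set a default strategy
aws ecs put-cluster-capacity-providers \
  --cluster demo-cluster \
  --capacity-providers FARGATE FARGATE_SPOT \
  --default-capacity-provider-strategy capacityProvider=FARGATE,weight=1
```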
Team Accessibility
- Beginner-Friendly: Suitable for teams new to container technology
- Built-in Features: Scaling and security built-in
- Reduced Complexity: No need for comprehensive container technology knowledge
Deploying Containers on Amazon ECS
Cluster Creation
Create an Amazon ECS cluster with compute nodes on platforms such as AWS Fargate or Amazon EC2.
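For example, the cluster itself can be created with a single CLI call (the name is illustrative); Fargate or EC2 capacity is then attached to it:

```bash
# Create an empty ECS cluster that tasks and services can be deployed into
aws ecs create-cluster --cluster-name demo-cluster
```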
Task Definitions
- Blueprint: JSON format text file describing application parameters
- Container Information: Image, CPU, memory, launch type specifications
- Smallest Unit: Tasks are the smallest compute unit in Amazon ECS
- Container Grouping: Set of containers placed together with defined properties
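A hedged sketch of a Fargate task definition for the hypothetical login microservice used earlier (account ID, region, role, and names are placeholders):

```json
{
  "family": "login-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "login",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/login-service:1.0",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```

It can then be registered with `aws ecs register-task-definition --cli-input-json file://login-service.json`.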
Services and Tasks
Running Tasks:
- Single execution of task definition
- Results in deployed containers within cluster
ECS Services:
- Manages multiple tasks simultaneously
- Maintains desired number of tasks for high availability
- Launches replacement tasks when failures occur
- Load Balancing: Application Load Balancer distributes requests
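A sketch of creating a load-balanced Fargate service from that task definition; the subnet, security group, and target group values are placeholders:

```bash
# Run and maintain two copies of the task behind an Application Load Balancer target group
aws ecs create-service \
  --cluster demo-cluster \
  --service-name login-service \
  --task-definition login-service:1 \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=DISABLED}' \
  --load-balancers 'targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/login-tg/0123456789abcdef,containerName=login,containerPort=8080'
```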
Example Configuration
- Task Definition 1: Single container image → Fargate container deployment
- Task Definition 2: Multiple container images → Service with load balancer
- Service Routing: Application Load Balancer listener rules (e.g., /api/login → Login microservice)
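Such a path-based rule might be added to an existing ALB listener as follows; the listener and target group ARNs are placeholders:

```bash
# Forward requests whose path matches /api/login* to the login service's target group
aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo-alb/0123456789abcdef/fedcba9876543210 \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/login*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/login-tg/0123456789abcdef
```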
Deploying Containers on Amazon EKS
Cluster and Nodes
- Create Amazon EKS cluster with worker machine nodes
- Support for AWS Fargate and Amazon EC2 instance nodes
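One common way to do this is with the eksctl CLI; a sketch with illustrative names and sizes (eksctl also accepts a --fargate flag to create a Fargate profile instead of EC2 nodes):

```bash
# Create an EKS cluster with a managed node group of EC2 worker nodes
eksctl create cluster \
  --name demo-eks \
  --region us-east-1 \
  --nodegroup-name default-nodes \
  --node-type t3.medium \
  --nodes 2
```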
Kubernetes Concepts
Pods:
- Smallest unit of deployment in Kubernetes
- Can deploy single container or multiple dependent containers
- PodSpec: Specifies container image to use
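A minimal PodSpec for the hypothetical login container (in practice Pods are usually created through a Deployment, shown next):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: login-pod
  labels:
    app: login
spec:
  containers:
    - name: login
      # Image reference is a placeholder for an image pushed to ECR
      image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/login-service:1.0
      ports:
        - containerPort: 8080
```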
Deployments and ReplicaSets:
- Deployment: Manages multiple highly available Pods
- ReplicaSet: Owned by Deployment, maintains specified number of running Pods
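A sketch of a Deployment whose ReplicaSet keeps three replicas of that Pod template running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: login-deployment
spec:
  replicas: 3              # ReplicaSet maintains this many Pods
  selector:
    matchLabels:
      app: login
  template:
    metadata:
      labels:
        app: login
    spec:
      containers:
        - name: login
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/login-service:1.0
          ports:
            - containerPort: 8080
```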
Services:
- Enable Pod communication and external access
- Use Pod labels to route network requests
- Route to available Pods with matching labels
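A Service that selects Pods by the app: login label and routes traffic to them; like the other manifests, it is created with kubectl apply -f:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: login-service
spec:
  selector:
    app: login            # Routes to any available Pod carrying this label
  ports:
    - port: 80            # Port exposed by the Service
      targetPort: 8080    # Port the container listens on
  type: ClusterIP
```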
Example Deployment
- PodSpec: Container image 1 → Pod 1 with container 1
- Deployment: Container images 1 and 2 → Pod 2 with containers 2 and 3
- Service: Accept and route requests between front-end application and Pods
Choosing Between Amazon ECS and Amazon EKS
Simplicity Focus (Amazon ECS):
- Simplifies cluster creation and maintenance
- Automated scaling based on demand
- Amazon ECS toolset
- Ideal for teams new to container cluster architecture
AWS Integration (Amazon ECS):
- Native AWS service integration
- Reduces decisions around compute, network, and security configurations
- AWS-opinionated solution for running containers at scale
Kubernetes Control (Amazon EKS):
- More control over the cluster, through a more complex interface
- Manual configuration of autoscaling groups
- Kubernetes toolset
- Suitable for teams familiar with Kubernetes
Flexibility (Amazon EKS):
- Provides flexibility of Kubernetes with AWS security and resiliency
- Secure, reliable, scalable Kubernetes environment
- Optimized for building highly available services
Choose container solutions over Lambda functions when applications require resources exceeding Lambda service limits. Amazon ECR provides container image repository services, while Amazon ECS and Amazon EKS offer different orchestration approaches. AWS Fargate manages serverless cluster nodes and can be deployed in both ECS and EKS clusters.