Effective logging is crucial for debugging, monitoring, and maintaining containerized applications. AWS Fargate offers multiple logging options, each with its own strengths. This guide covers everything you need to know about logging in Fargate.
> Key Takeaways
>
> - AWS Fargate supports three primary log drivers: awslogs (CloudWatch), FireLens (Fluent Bit/Fluentd), and Splunk direct integration
> - FireLens enables multi-destination routing, log filtering, and transformation without modifying application code
> - Structured JSON logging with correlation IDs is essential for effective debugging in distributed containerized environments
> - Cost optimization through retention policies, log filtering, and S3 archival can reduce CloudWatch spending by 40-60%
What Logging Options Are Available in Fargate?
AWS Fargate supports three log driver options: awslogs for native CloudWatch integration, awsfirelens (FireLens) for flexible routing and custom processing via Fluent Bit or Fluentd, and splunk for direct Splunk ingestion. Each option is covered in detail below.
CloudWatch Logs (awslogs)
The simplest approach for Fargate logging.
Basic Configuration
{
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "my-app:latest",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/my-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ]
}
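To apply this configuration, register the task definition with the AWS CLI. A minimal sketch, assuming the definition is saved as task-def.json with the remaining required fields (family, CPU, memory, executionRoleArn) filled in:
# Register (or update) the task definition from a local JSON file
aws ecs register-task-definition \
  --cli-input-json file://task-def.json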
Creating Log Groups
Create log groups before deployment:
aws logs create-log-group \
--log-group-name /ecs/my-app
aws logs put-retention-policy \
--log-group-name /ecs/my-app \
--retention-in-days 30
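You can confirm the group exists and the retention policy took effect by describing it:
aws logs describe-log-groups \
  --log-group-name-prefix /ecs/my-app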
IAM Permissions
Ensure the task execution role has logging permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/ecs/*"
    }
  ]
}
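To attach this as an inline policy on the task execution role, you can use the AWS CLI. A quick sketch, assuming the document is saved as logging-policy.json and the role uses the common ecsTaskExecutionRole name:
aws iam put-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-name fargate-logging \
  --policy-document file://logging-policy.json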
How Does FireLens Enable Advanced Logging?
FireLens is an AWS-native log router that runs as a sidecar container in your Fargate task, using Fluent Bit or Fluentd to intercept stdout/stderr from application containers and route, filter, parse, and transform logs before sending them to one or more destinations.
Why FireLens?
- Route logs to multiple destinations
- Filter and transform logs
- Parse structured logs
- Lower CloudWatch costs with log aggregation
Basic FireLens Configuration
{
  "containerDefinitions": [
    {
      "name": "log_router",
      "image": "amazon/aws-for-fluent-bit:latest",
      "essential": true,
      "firelensConfiguration": {
        "type": "fluentbit",
        "options": {
          "enable-ecs-log-metadata": "true"
        }
      }
    },
    {
      "name": "my-app",
      "image": "my-app:latest",
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "cloudwatch_logs",
          "region": "us-east-1",
          "log_group_name": "/ecs/my-app",
          "log_stream_prefix": "app-",
          "auto_create_group": "true"
        }
      }
    }
  ]
}
Multi-Destination Routing
Send logs to multiple destinations:
# fluent-bit.conf
[OUTPUT]
    Name              cloudwatch_logs
    Match             *
    region            us-east-1
    log_group_name    /ecs/my-app
    log_stream_prefix app-
    auto_create_group true

[OUTPUT]
    Name              s3
    Match             *
    region            us-east-1
    bucket            my-logs-bucket
    total_file_size   10M
    upload_timeout    1m

[OUTPUT]
    Name    datadog
    Match   *
    Host    http-intake.logs.datadoghq.com
    TLS     on
    apikey  ${DD_API_KEY}
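Fargate tasks only support the file config-file-type, so a custom configuration like this has to be baked into the log router image. A minimal sketch, assuming the config above is saved as extra.conf:
# Build a custom log router image that includes the extra config
cat > Dockerfile <<'EOF'
FROM amazon/aws-for-fluent-bit:latest
COPY extra.conf /fluent-bit/etc/extra.conf
EOF
docker build -t my-fluent-bit-custom .
The firelensConfiguration options would then reference it with "config-file-type": "file" and "config-file-value": "/fluent-bit/etc/extra.conf".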
Log Filtering
Filter logs before sending:
[FILTER]
    Name    grep
    Match   *
    Exclude log healthcheck

[FILTER]
    Name    modify
    Match   *
    Remove_wildcard password
    Remove_wildcard secret
Structured Logging
JSON Logging Best Practices
Configure your application to output JSON logs:
import json
import logging
from datetime import datetime

class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_record = {
            "timestamp": datetime.utcnow().isoformat(),
            "level": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
            "module": record.module,
            "function": record.funcName
        }
        if record.exc_info:
            log_record["exception"] = self.formatException(record.exc_info)
        return json.dumps(log_record)

# Configure logging
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logging.root.addHandler(handler)
logging.root.setLevel(logging.INFO)
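With this formatter attached, a call like logging.info("order created") is emitted as a single JSON line, roughly {"timestamp": "2024-05-01T12:00:00", "level": "INFO", "message": "order created", "logger": "root", ...}; one line per event, which is the shape the FireLens JSON parser below expects.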
Parsing JSON in FireLens
[FILTER]
    Name         parser
    Match        *
    Key_Name     log
    Parser       json
    Reserve_Data true
How Do You Query and Analyze Fargate Logs?
CloudWatch Logs Insights provides a purpose-built query language for searching, filtering, and aggregating log data across multiple log groups, enabling you to find errors, measure latency, and identify trends without exporting data to external tools.
CloudWatch Logs Insights
Query your logs effectively:
# Find errors in the last hour
fields @timestamp, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 100

# Aggregate by log level
fields @message
| parse @message '"level":"*"' as level
| stats count(*) by level

# Latency analysis
fields @timestamp, @message
| parse @message '"duration":*,' as duration
| stats avg(duration), max(duration), min(duration) by bin(5m)
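These queries can also be run from the CLI, for example (log group and time range are illustrative; GNU date syntax shown):
aws logs start-query \
  --log-group-name /ecs/my-app \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 100'

# Fetch results using the queryId returned above
aws logs get-query-results --query-id <query-id>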
Creating CloudWatch Dashboards
# CloudFormation dashboard
LogsDashboard:
  Type: AWS::CloudWatch::Dashboard
  Properties:
    DashboardName: fargate-logs
    DashboardBody: !Sub |
      {
        "widgets": [
          {
            "type": "log",
            "properties": {
              "query": "SOURCE '/ecs/my-app' | fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc",
              "region": "${AWS::Region}",
              "title": "Error Logs",
              "view": "table"
            }
          }
        ]
      }
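A dashboard resource like this would normally live in a larger template; deploying it might look like the following, assuming the template file is named logging-dashboard.yaml:
aws cloudformation deploy \
  --template-file logging-dashboard.yaml \
  --stack-name fargate-logging-dashboard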
Third-Party Integration
Datadog Integration
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "datadog",
      "Host": "http-intake.logs.datadoghq.com",
      "dd_service": "my-app",
      "dd_source": "fargate",
      "dd_tags": "env:production",
      "TLS": "on",
      "provider": "ecs"
    },
    "secretOptions": [
      {
        "name": "apikey",
        "valueFrom": "arn:aws:secretsmanager:region:account:secret:datadog-api-key"
      }
    ]
  }
}
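The secretOptions entry expects the Datadog API key to already exist in Secrets Manager, and the task execution role needs permission to read it. Creating the secret might look like this (secret name is illustrative):
aws secretsmanager create-secret \
  --name datadog-api-key \
  --secret-string '<your-datadog-api-key>'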
Splunk Integration
{
  "logConfiguration": {
    "logDriver": "splunk",
    "options": {
      "splunk-url": "https://your-splunk-hec.com:8088",
      "splunk-source": "fargate",
      "splunk-sourcetype": "docker",
      "splunk-index": "main",
      "splunk-format": "json"
    },
    "secretOptions": [
      {
        "name": "splunk-token",
        "valueFrom": "arn:aws:secretsmanager:region:account:secret:splunk-token"
      }
    ]
  }
}
How Can You Optimize Fargate Logging Costs?
Fargate logging costs are driven primarily by CloudWatch Logs ingestion and storage. You can reduce costs by 40-60% through shorter retention periods, filtering out verbose logs at the FireLens level, archiving to S3 for long-term storage, and compressing log data.
AWS reports that CloudWatch Logs ingestion costs $0.50 per GB in US East, and storage costs $0.03 per GB per month (source: AWS, "CloudWatch Pricing," 2024). For high-volume Fargate deployments, these costs add up quickly without proper optimization.
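As a rough illustration, a service emitting 5 GB of logs per day ingests about 150 GB per month, or roughly $75 in ingestion charges before storage; halving that volume with filtering and shifting long-term copies to S3 compounds quickly across dozens of services.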
Log Retention Strategies
# Set appropriate retention
aws logs put-retention-policy \
--log-group-name /ecs/production \
--retention-in-days 90
aws logs put-retention-policy \
--log-group-name /ecs/development \
--retention-in-days 7
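To keep retention consistent, the same policy can be applied across every /ecs/ log group in a loop (the 30-day value here is illustrative):
# Apply a retention policy to all /ecs/ log groups
for lg in $(aws logs describe-log-groups \
  --log-group-name-prefix /ecs/ \
  --query 'logGroups[].logGroupName' --output text); do
  aws logs put-retention-policy --log-group-name "$lg" --retention-in-days 30
done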
Log Filtering to Reduce Volume
# Drop debug logs in production
[FILTER]
    Name    grep
    Match   *
    Exclude level DEBUG

# Sample high-volume logs
[FILTER]
    Name         throttle
    Match        *
    Rate         1000
    Window       30
    Print_Status true
Archive to S3
[OUTPUT]
    Name            s3
    Match           *
    region          us-east-1
    bucket          logs-archive
    total_file_size 100M
    upload_timeout  10m
    s3_key_format   /$TAG[2]/$TAG[0]/%Y/%m/%d/%H/%M/%S
    compression     gzip
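Once logs land in S3, a lifecycle rule can move older objects to cheaper storage classes and eventually expire them. A sketch, assuming the logs-archive bucket above and illustrative transition windows:
aws s3api put-bucket-lifecycle-configuration \
  --bucket logs-archive \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 730}
    }]
  }'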
When choosing between EC2 and Fargate for your container workloads, logging cost and flexibility are important factors in the decision.
Troubleshooting
Common Issues
No logs appearing: check that the task execution role has the CloudWatch Logs permissions shown earlier, that the log group exists (or auto-creation is enabled), and that the FireLens sidecar is configured correctly:
{
  "firelensConfiguration": {
    "type": "fluentbit",
    "options": {
      "enable-ecs-log-metadata": "true"
    }
  }
}
Debugging FireLens
Enable debug logging:
[SERVICE]
    Log_Level debug

[OUTPUT]
    Name  stdout
    Match *
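If the whole task is failing rather than just missing logs, the stopped reason often points at the culprit (cluster name and task ID are placeholders):
aws ecs describe-tasks \
  --cluster my-cluster \
  --tasks <task-id> \
  --query 'tasks[0].{status: lastStatus, reason: stoppedReason}'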
Best Practices
Application Logging
- Emit structured JSON logs with correlation IDs so requests can be traced across services
- Keep secrets and credentials out of logs, and strip sensitive fields at the log router
Infrastructure Configuration
- Set a retention policy on every log group and scope the execution role's permissions to the log groups your tasks actually write to
- Filter health check and debug noise before it reaches CloudWatch
Teams managing container workloads should also consider how logging integrates with their broader container orchestration strategy across ECS and EKS clusters. Additionally, automating log infrastructure setup through CI/CD pipelines with GitHub Actions ensures consistent logging configuration across all environments.
Operations
- Monitor the health of the logging pipeline itself: watch the FireLens sidecar's own output and alert when log delivery stops
According to a 2024 CNCF survey, 78% of organizations running containers in production identified observability (including logging) as their top operational challenge, ahead of security and networking (source: CNCF, "Annual Survey 2024").
How BeyondScale Can Help
At BeyondScale, we specialize in container observability and cloud-native infrastructure implementation. Whether you're setting up Fargate logging for the first time, optimizing an existing log pipeline to reduce costs, or building a multi-destination observability platform, our team can help you design and implement the right logging strategy.
Explore our Implementation Services to learn more.
Conclusion
Effective logging in Fargate requires choosing the right log driver and configuration for your needs. CloudWatch Logs provides simplicity, while FireLens offers flexibility for complex requirements.
Key takeaways:
- Start with awslogs for simplicity
- Use FireLens when you need routing or transformation
- Implement structured logging in applications
- Optimize costs with retention and filtering
- Monitor logging infrastructure health
Frequently Asked Questions
What are the best practices for logging in AWS Fargate?
Best practices include using structured JSON logging in your applications, setting appropriate log retention policies to control costs, choosing the right log driver (awslogs for simplicity, FireLens for advanced routing), filtering out debug and health check logs in production, including correlation IDs for distributed tracing, and encrypting sensitive log data.
Should I use CloudWatch Logs or a third-party logging solution with Fargate?
CloudWatch Logs is the simplest option with native AWS integration and is ideal for smaller deployments. Third-party solutions like Datadog, Splunk, or the ELK stack offer richer querying, visualization, and cross-platform correlation capabilities. Use FireLens to route logs to both CloudWatch and a third-party tool simultaneously for the best of both worlds.
How do you configure FireLens for Fargate log routing?
FireLens is configured by adding a sidecar container running Fluent Bit or Fluentd to your Fargate task definition, then setting the logDriver to awsfirelens in your application container's log configuration. You can customize routing rules, filters, and output destinations via a custom Fluent Bit configuration file stored in S3 or embedded in the container image.
How can I reduce CloudWatch Logs costs for Fargate?
Reduce CloudWatch Logs costs by setting shorter retention periods for non-production environments, filtering out verbose debug and health check logs using FireLens grep filters, archiving older logs to S3 with lifecycle policies, compressing log data before storage, sampling high-volume logs using throttle filters, and batching log entries where possible.
What is the recommended log retention period for Fargate containers?
Log retention depends on your compliance and operational needs. A common approach is 7 days for development, 30 days for staging, and 90 days for production in CloudWatch, with longer-term archival to S3 for compliance. Regulatory requirements such as HIPAA or SOX may mandate retention of 1-7 years for audit logs.
DevOps Team at BeyondScale Technologies, an ISO 27001 certified AI consulting firm and AWS Partner. Specializing in enterprise AI agents, multi-agent systems, and cloud architecture.

