Asked: 1/8/2019 · Updated: 9/26/2023 · Views: 3502
AWS Fargate Task - awslogs driver - Intermittent Logs
Q:
I'm running a one-off Fargate Task that runs a small python script. The task definition is configured to use the awslogs driver to send logs to Cloudwatch, but I'm running into a very strange, intermittent issue.
Logs will sometimes appear in the newly created Cloudwatch stream, and sometimes they won't. I've tried stripping parts of my code out, and here's what I have now.
When I strip out the asyncio/aiohttp fetching logic, the print statements appear normally in Cloudwatch Logs. Though since the issue is intermittent, I can't be 100% sure this will always happen.
With the fetching logic included, however, I sometimes get log streams that are completely empty after the Fargate task exits. No logs saying "Job starting", "Job complete", or "Putting file into S3". No error logs either. Despite this, when I check the S3 bucket, the file with the corresponding timestamp was created, showing that the script did run to completion. I can't fathom how this is possible.
dostuff.py
#!/usr/bin/env python3.6
import asyncio
import datetime
import time

from aiohttp import ClientSession
import boto3


def s3_put(bucket, key, body):
    try:
        print(f"Putting file into {bucket}/{key}")
        client = boto3.client("s3")
        client.put_object(Bucket=bucket, Key=key, Body=body)
    except Exception:
        print(f"Error putting object into S3 Bucket: {bucket}/{key}")
        raise


async def fetch(session, number):
    url = f'https://jsonplaceholder.typicode.com/todos/{number}'
    try:
        async with session.get(url) as response:
            return await response.json()
    except Exception as e:
        print(f"Failed to fetch {url}")
        print(e)
        return None


async def fetch_all():
    tasks = []
    async with ClientSession() as session:
        for x in range(1, 6):
            for number in range(1, 200):
                task = asyncio.ensure_future(fetch(session=session, number=number))
                tasks.append(task)
        responses = await asyncio.gather(*tasks)
    return responses


def main():
    try:
        loop = asyncio.get_event_loop()
        future = asyncio.ensure_future(fetch_all())
        responses = list(filter(None, loop.run_until_complete(future)))
    except Exception:
        print("uh oh")
        raise

    # do stuff with responses
    body = "whatever"
    key = f"{datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d_%H-%M-%S')}_test"
    s3_put(bucket="my-s3-bucket", key=key, body=body)


if __name__ == "__main__":
    print("Job starting")
    main()
    print("Job complete")
Dockerfile
FROM python:3.6-alpine
COPY docker/test_fargate_logging/requirements.txt /
COPY docker/test_fargate_logging/dostuff.py /
WORKDIR /
RUN pip install --upgrade pip && \
pip install -r requirements.txt
ENTRYPOINT python dostuff.py
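(One variable worth controlling while troubleshooting: with this ENTRYPOINT, Python's stdout is block-buffered, since it isn't attached to a TTY inside the container. A sketch of running it unbuffered instead, assuming buffering contributes to the missing lines:)

# Either ask the interpreter to stop buffering stdout/stderr entirely...
ENV PYTHONUNBUFFERED=1
# ...or, equivalently, pass -u when invoking it:
ENTRYPOINT python -u dostuff.py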
Task Definition
{
    "ipcMode": null,
    "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsInstanceRole",
    "containerDefinitions": [
        {
            "dnsSearchDomains": null,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "test-fargate-logging-stg-log-group",
                    "awslogs-region": "ap-northeast-1",
                    "awslogs-stream-prefix": "ecs"
                }
            },
            "entryPoint": null,
            "portMappings": [],
            "command": null,
            "linuxParameters": null,
            "cpu": 256,
            "environment": [],
            "ulimits": null,
            "dnsServers": null,
            "mountPoints": [],
            "workingDirectory": null,
            "secrets": null,
            "dockerSecurityOptions": null,
            "memory": 512,
            "memoryReservation": null,
            "volumesFrom": [],
            "image": "xxxxxxxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/test-fargate-logging-stg-ecr-repository:xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
            "disableNetworking": null,
            "interactive": null,
            "healthCheck": null,
            "essential": true,
            "links": null,
            "hostname": null,
            "extraHosts": null,
            "pseudoTerminal": null,
            "user": null,
            "readonlyRootFilesystem": null,
            "dockerLabels": null,
            "systemControls": null,
            "privileged": null,
            "name": "test_fargate_logging"
        }
    ],
    "placementConstraints": [],
    "memory": "512",
    "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsInstanceRole",
    "compatibilities": [
        "EC2",
        "FARGATE"
    ],
    "taskDefinitionArn": "arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task-definition/test-fargate-logging-stg-task-definition:2",
    "family": "test-fargate-logging-stg-task-definition",
    "requiresAttributes": [
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "ecs.capability.execution-role-ecr-pull"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "ecs.capability.task-eni"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.ecr-auth"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.task-iam-role"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "ecs.capability.execution-role-awslogs"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
        }
    ],
    "pidMode": null,
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "256",
    "revision": 2,
    "status": "ACTIVE",
    "volumes": []
}
Observations
- When I reduce the number of tasks (urls to fetch) to 10 instead of ~1000, the logs appear most of the time / all(?) of the time. Again, the issue is intermittent, so I can't be 100% sure.
- My original script had additional logic to retry fetches on failure, plus parsing logic, which I removed while troubleshooting. The logging behavior back then at least produced the "Job starting" log and logs during the async aiohttp requests, but the logs for writing to S3 and the final "Job complete" log appeared only intermittently. With the simplified script above, I seem to get either all of the logs or none at all.
- The issue also occurred with python's logging library, which I switched to print in order to rule logging out.
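(As a quick way to inspect what actually reached CloudWatch for a given run, a minimal boto3 sketch; the group name and region are taken from the task definition above, and the limits are illustrative:)

import boto3

# Look at the most recently written streams in the task's log group.
logs = boto3.client("logs", region_name="ap-northeast-1")
streams = logs.describe_log_streams(
    logGroupName="test-fargate-logging-stg-log-group",
    orderBy="LastEventTime",
    descending=True,
    limit=5,
)

for stream in streams["logStreams"]:
    events = logs.get_log_events(
        logGroupName="test-fargate-logging-stg-log-group",
        logStreamName=stream["logStreamName"],
        limit=10,
    )
    # An empty events list here matches the "completely empty stream" symptom.
    print(stream["logStreamName"], f"({len(events['events'])} events)")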
A:
The issue
I've been experiencing the same thing: intermittently missing logs in CloudWatch for ECS Fargate tasks.
While I can't answer why this happens, I can offer a workaround I've just tested.
What worked for me:
Upgrading to a Python 3.7 version (I see you're using 3.6, as I was when I ran into the same problem).
I now see all of my logs, and I get the benefits of the latest version of Python as well.
I hope this helps you, as it helped me.
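(Applied to the Dockerfile above, the upgrade is a one-line base-image swap, followed by rebuilding and pushing the image to ECR:)

# Swap the base image; the rest of the Dockerfile stays unchanged. (The
# python3.6 shebang in dostuff.py is ignored, since ENTRYPOINT invokes
# python directly.)
FROM python:3.7-alpine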
Comments
According to this AWS forum link, the issue now appears to be resolved.
I was running into a similar problem, and there are some useful workarounds in the answers to this question: Missing log lines when writing to cloudwatch from ECS Docker containers
You shouldn't be running into this issue anymore. If you are, try deploying a new revision of the task definition; that should fix it.
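(For reference, a minimal sketch of forcing a fresh revision with the AWS CLI; the cluster, subnet, and file names below are placeholders:)

# Register a new revision from a local copy of the task definition JSON,
# then launch the one-off task against it.
aws ecs register-task-definition \
    --cli-input-json file://task-definition.json
aws ecs run-task \
    --cluster my-cluster \
    --launch-type FARGATE \
    --task-definition test-fargate-logging-stg-task-definition \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-xxxxxxxx],assignPublicIp=ENABLED}'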
A:
Try printing to stderr:
print("Log message", file=sys.stderr)
Make sure sys is imported.
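(Relatedly, if the missing lines stem from Python block-buffering stdout when it isn't attached to a TTY, flushing explicitly is a common mitigation; a minimal sketch combining both ideas:)

import sys

print("Job starting", flush=True)        # flush stdout immediately rather than waiting on the buffer
print("Log message", file=sys.stderr)    # or write to stderr, as suggested above
sys.stdout.flush()                       # and/or flush once more before the process exits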
Comments