",t};e.buildCustomizationMenuUi=t;function n(e){let t='
",t}function s(e){let n=e.filter.currentValue||e.filter.defaultValue,t='${e.filter.label}
`,e.filter.options.forEach(s=>{let o=s.id===n;t+=``}),t+="${e.filter.label}
Use the Amazon S3 destination to send logs to Amazon S3. To send logs in Datadog-rehydratable format to Amazon S3 for archiving and rehydration, configure Log Archives and then set up the Amazon S3 destination in your pipeline.
You can also route logs to Snowflake using the Amazon S3 destination.
If you want to send logs to Amazon S3 in Datadog-rehydratable format for archiving and rehydration, you need to set up a Datadog Log Archive. If you already have a Datadog Log Archive configured for Observability Pipelines, skip to Set up the destination for your pipeline.
You need to have Datadog’s AWS integration installed to set up Datadog Log Archives.
Replace <MY_BUCKET_NAME> and <MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1> with the information for the S3 bucket you created earlier.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DatadogUploadAndRehydrateLogArchives",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
    },
    {
      "Sid": "DatadogRehydrateLogArchivesListBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
    }
  ]
}
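If you want to sanity-check these permissions from a principal that has the policy attached, a minimal boto3 sketch (Python; the bucket, path, and key names are placeholders standing in for the values you substituted above) could look like this:

```python
import boto3

# Placeholders: substitute the bucket name and optional path used in the policy above.
bucket = "<MY_BUCKET_NAME>"
key = "<MY_OPTIONAL_BUCKET_PATH>/sanity-check-object"

s3 = boto3.client("s3")

# Covered by DatadogUploadAndRehydrateLogArchives (s3:PutObject, s3:GetObject).
s3.put_object(Bucket=bucket, Key=key, Body=b"test")
print(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

# Covered by DatadogRehydrateLogArchivesListBucket (s3:ListBucket).
resp = s3.list_objects_v2(Bucket=bucket, Prefix="<MY_OPTIONAL_BUCKET_PATH>/")
print([obj["Key"] for obj in resp.get("Contents", [])])
```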
Add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added, so that none of those logs flow into this archive. See the Log Archives documentation for more information.
Set up the Amazon S3 destination and its environment variables when you set up an Archive Logs pipeline. The information below is configured in the pipelines UI.
If you use a prefix to store objects under a particular directory, it must end in / to act as a directory path; a trailing / is not automatically added.

There are no environment variables to configure.
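To make that trailing-slash behavior concrete, here is a tiny Python sketch; the prefix and object name are hypothetical examples, not a documented naming scheme:

```python
# Hypothetical object name written by the destination; the real naming scheme may differ.
object_name = "2024/06/01/archive-0001.json.gz"

for prefix in ("op-logs/", "op-logs"):
    # The prefix is prepended as-is; no "/" is inserted for you.
    print(prefix + object_name)

# op-logs/2024/06/01/archive-0001.json.gz  <- trailing "/" acts as a directory path
# op-logs2024/06/01/archive-0001.json.gz   <- without it, the key is simply concatenated
```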
You can route logs from Observability Pipelines to Snowflake using the Amazon S3 destination by configuring Snowpipe in Snowflake to automatically ingest those logs. To set this up, point Snowpipe at the S3 bucket and prefix that this destination writes to, as sketched below.
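The following is a rough sketch only, not the documented setup steps: the table, storage integration, stage, and pipe names are hypothetical, and a Snowflake storage integration with access to the bucket is assumed to already exist. It uses snowflake-connector-python to create an external stage and an auto-ingest pipe:

```python
import snowflake.connector

# Connection details are placeholders.
conn = snowflake.connector.connect(
    account="<ACCOUNT>",
    user="<USER>",
    password="<PASSWORD>",
    warehouse="<WAREHOUSE>",
    database="<DATABASE>",
    schema="<SCHEMA>",
)
cur = conn.cursor()

# External stage pointing at the bucket/prefix the Amazon S3 destination writes to.
cur.execute("""
    CREATE OR REPLACE STAGE op_logs_stage
      URL = 's3://<MY_BUCKET_NAME>/op-logs/'
      STORAGE_INTEGRATION = op_s3_integration
      FILE_FORMAT = (TYPE = 'JSON')
""")

# Auto-ingest pipe: Snowpipe copies new objects into the target table as they land.
# The bucket's S3 event notifications must point at this pipe's notification channel
# (visible via SHOW PIPES) for AUTO_INGEST to trigger.
cur.execute("""
    CREATE OR REPLACE PIPE op_logs_pipe AUTO_INGEST = TRUE AS
      COPY INTO op_logs FROM @op_logs_stage FILE_FORMAT = (TYPE = 'JSON')
""")
```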
The Observability Pipelines Worker uses the standard AWS credential provider chain for authentication. See AWS SDKs and Tools standardized credential providers for more information.
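The Worker itself is not a Python process, but the same standard chain is what the AWS SDKs implement; as an illustration only, this boto3 sketch reports which provider in the chain (environment variables, shared credentials file, or an attached role) ends up supplying credentials:

```python
import boto3

session = boto3.Session()
creds = session.get_credentials()
if creds is None:
    print("No AWS credentials found by the provider chain")
else:
    # 'method' names the provider that supplied the credentials,
    # for example 'env', 'shared-credentials-file', or 'iam-role'.
    print("Credentials resolved via:", creds.method)
```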
For Observability Pipelines to send logs to Amazon S3, the following policy permissions are required:
- s3:ListBucket
- s3:PutObject
A batch of events is flushed when one of these parameters is met. See event batching for more information.
| Max Events | Max Bytes | Timeout (seconds) |
|---|---|---|
| None | 100,000,000 | 900 |
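As a rough illustration of the "one of these parameters is met" logic (any single limit triggers a flush), here is a minimal Python sketch; the class and its simplified bookkeeping are assumptions for illustration, not the Worker's actual implementation:

```python
import time

MAX_BYTES = 100_000_000   # Max Bytes from the table above
TIMEOUT_SECONDS = 900     # Timeout from the table above; Max Events is None (no event-count limit)

class Batch:
    """Sketch: a batch is flushed when either the byte limit or the timeout is reached."""

    def __init__(self):
        self.events = []
        self.bytes = 0
        self.started_at = time.monotonic()

    def add(self, event: bytes):
        self.events.append(event)
        self.bytes += len(event)

    def should_flush(self) -> bool:
        age = time.monotonic() - self.started_at
        return self.bytes >= MAX_BYTES or age >= TIMEOUT_SECONDS
```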