
Use the Amazon S3 destination to send logs to Amazon S3. To send logs in Datadog-rehydratable format to Amazon S3 for archiving and rehydration, configure Log Archives and then set up the Amazon S3 destination in your pipeline.

You can also route logs to Snowflake using the Amazon S3 destination.

Configure Log Archives

If you want to send logs to Amazon S3 in Datadog-rehydratable format for archiving and rehydration, you need to set up a Datadog Log Archive. If you already have a Datadog Log Archive configured for Observability Pipelines, skip to Set up the destination for your pipeline.

You need to have Datadog’s AWS integration installed to set up Datadog Log Archives.

Create an Amazon S3 bucket

  1. Navigate to Amazon S3 buckets.
  2. Click Create bucket.
  3. Enter a descriptive name for your bucket.
  4. Do not make your bucket publicly readable.
  5. Optionally, add tags.
  6. Click Create bucket.
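
If you prefer to script the bucket creation, the Python (boto3) sketch below mirrors the console steps above; the bucket name and region are placeholders.

    import boto3

    BUCKET_NAME = "my-op-log-archive-bucket"  # placeholder; use your own bucket name

    s3 = boto3.client("s3", region_name="us-east-1")

    # Create the bucket. Outside us-east-1, also pass a CreateBucketConfiguration
    # with a LocationConstraint for your region.
    s3.create_bucket(Bucket=BUCKET_NAME)

    # Keep the bucket private by blocking all public access.
    s3.put_public_access_block(
        Bucket=BUCKET_NAME,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )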

Set up an IAM policy that allows Workers to write to the S3 bucket

  1. Navigate to the IAM console.
  2. Select Policies in the left side menu.
  3. Click Create policy.
  4. Click JSON in the Specify permissions section.
  5. Copy the below policy and paste it into the Policy editor. Replace <MY_BUCKET_NAME> and <MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1> with the information for the S3 bucket you created earlier.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DatadogUploadAndRehydrateLogArchives",
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject"],
                "Resource": "arn:aws:s3:::<MY_BUCKET_NAME_1_/_MY_OPTIONAL_BUCKET_PATH_1>/*"
            },
            {
                "Sid": "DatadogRehydrateLogArchivesListBucket",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::<MY_BUCKET_NAME>"
            }
        ]
    }
    
  6. Click Next.
  7. Enter a descriptive policy name.
  8. Optionally, add tags.
  9. Click Create policy.
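
If you prefer to create the policy programmatically, the Python (boto3) sketch below performs the same step; the bucket and policy names are placeholders.

    import json
    import boto3

    BUCKET_NAME = "my-op-log-archive-bucket"  # placeholder

    # The policy document from step 5, with the placeholders filled in.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DatadogUploadAndRehydrateLogArchives",
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject"],
                "Resource": f"arn:aws:s3:::{BUCKET_NAME}/*",
            },
            {
                "Sid": "DatadogRehydrateLogArchivesListBucket",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{BUCKET_NAME}",
            },
        ],
    }

    iam = boto3.client("iam")
    response = iam.create_policy(
        PolicyName="datadog-op-log-archive-policy",  # placeholder name
        PolicyDocument=json.dumps(policy_document),
    )
    print(response["Policy"]["Arn"])  # keep this ARN for the next step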

Create an IAM user or role

Create an IAM user or role and attach the policy to it.
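
For reference, the same step can be scripted with Python (boto3); the user name, account ID, and policy name are placeholders.

    import boto3

    iam = boto3.client("iam")

    # Create a dedicated IAM user for the Observability Pipelines Worker
    # (an IAM role attached to your compute environment works equally well).
    iam.create_user(UserName="observability-pipelines-worker")

    # Attach the policy created in the previous step.
    iam.attach_user_policy(
        UserName="observability-pipelines-worker",
        PolicyArn="arn:aws:iam::123456789012:policy/datadog-op-log-archive-policy",
    )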

Create a service account

Create a service account to use the policy you created above.

Connect the S3 bucket to Datadog Log Archives

  1. Navigate to Datadog Log Forwarding.
  2. Click New archive.
  3. Enter a descriptive archive name.
  4. Add a query that filters out all logs going through log pipelines so that none of those logs go into this archive. For example, add the query observability_pipelines_read_only_archive, assuming no logs going through the pipeline have that tag added.
  5. Select AWS S3.
  6. Select the AWS account that your bucket is in.
  7. Enter the name of the S3 bucket.
  8. Optionally, enter a path.
  9. Check the confirmation statement.
  10. Optionally, add tags and define the maximum scan size for rehydration. See Advanced settings for more information.
  11. Click Save.

See the Log Archives documentation for additional information.

Set up the destination for your pipeline

Set up the Amazon S3 destination and its environment variables when you set up an Archive Logs pipeline. The information below is configured in the pipelines UI.

  1. Enter the S3 bucket name for the S3 bucket you created earlier.
  2. Enter the AWS region the S3 bucket is in.
  3. Enter the key prefix.
    • Prefixes are useful for partitioning objects. For example, you can use a prefix as an object key to store objects under a particular directory. If using a prefix for this purpose, it must end in / to act as a directory path; a trailing / is not automatically added.
    • See template syntax if you want to route logs to different object keys based on specific fields in your logs.
  4. Select the storage class for your S3 bucket in the Storage Class dropdown menu.
    • Note: Rehydration only supports the following storage classes:
    • If you wish to rehydrate from archives in another storage class, you must first move them to one of the supported storage classes above.
  5. Optionally, select an AWS authentication option. If you are only using the user or role you created earlier for authentication, do not select Assume role. The Assume role option should only be used if the user or role you created earlier needs to assume a different role to access the specific AWS resource and that permission has to be explicitly defined.
    If you select Assume role:
    1. Enter the ARN of the IAM role you want to assume.
    2. Optionally, enter the assumed role session name and external ID.
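
For reference, the Assume role option corresponds to an STS AssumeRole call. The Python (boto3) sketch below shows the equivalent API call, not the Worker's internal code; the role ARN, session name, and external ID are placeholders.

    import boto3

    sts = boto3.client("sts")

    # Exchange the base credentials for temporary credentials of the target role.
    # The external ID is only needed if the role's trust policy requires one.
    response = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/op-s3-writer",  # placeholder
        RoleSessionName="observability-pipelines-worker",       # placeholder
        ExternalId="my-external-id",                             # placeholder
    )
    credentials = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken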

Set the environment variables

There are no environment variables to configure.

Route logs to Snowflake using the Amazon S3 destination

You can route logs from Observability Pipelines to Snowflake using the Amazon S3 destination by configuring Snowpipe in Snowflake to automatically ingest those logs. To set this up:

  1. Configure Log Archives if you want to archive and rehydrate your logs. If you only want to send logs to Amazon S3, skip to step 2.
  2. Set up a pipeline to use Amazon S3 as the log destination. When logs are collected by Observability Pipelines, they are written to an S3 bucket using the same configuration detailed in Set up the destination for your pipeline, which includes AWS authentication, region settings, and permissions.
  3. Set up Snowpipe in Snowflake. See Automating Snowpipe for Amazon S3 for instructions. Snowpipe continuously monitors your S3 bucket for new files and automatically ingests them into your Snowflake tables, ensuring near real-time data availability for analytics or further processing.
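
Part of the Snowpipe setup is pointing an S3 event notification at the SQS queue Snowflake creates for the pipe (its notification channel). As a minimal sketch, assuming you already have that queue ARN from Snowflake, the Python (boto3) call below wires the bucket to it; the bucket name and queue ARN are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Note: this call replaces any existing notification configuration on the bucket.
    s3.put_bucket_notification_configuration(
        Bucket="my-op-log-archive-bucket",  # placeholder
        NotificationConfiguration={
            "QueueConfigurations": [
                {
                    # SQS ARN reported by Snowflake as the pipe's notification channel
                    "QueueArn": "arn:aws:sqs:us-east-1:123456789012:sf-snowpipe-queue",
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )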

How the destination works

AWS Authentication

The Observability Pipelines Worker uses the standard AWS credential provider chain for authentication. See AWS SDKs and Tools standardized credential providers for more information.
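
The Worker resolves credentials through the AWS SDK itself; purely as an illustration, the Python (boto3) snippet below walks the same standard chain (environment variables, shared credentials file, then container or instance metadata).

    import boto3

    session = boto3.Session()
    credentials = session.get_credentials()

    if credentials is None:
        raise RuntimeError("No AWS credentials were found in the provider chain")

    # `method` reports which provider supplied the credentials, for example
    # "env", "shared-credentials-file", or "iam-role".
    print(credentials.method)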

Permissions

For Observability Pipelines to send logs to Amazon S3, the following policy permissions are required:

  • s3:ListBucket
  • s3:PutObject

Event batching

A batch of events is flushed when one of these parameters is met. See event batching for more information.

Max Events    Max Bytes        Timeout (seconds)
None          100,000,000      900
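
As a plain illustration of the batching rule, not the Worker's implementation, a batch is flushed as soon as any configured limit is reached:

    # Values from the table above; None means there is no event-count limit.
    MAX_EVENTS = None
    MAX_BYTES = 100_000_000
    TIMEOUT_SECONDS = 900

    def should_flush(event_count: int, byte_count: int, seconds_since_flush: float) -> bool:
        """Return True once any configured batching limit has been reached."""
        if MAX_EVENTS is not None and event_count >= MAX_EVENTS:
            return True
        if byte_count >= MAX_BYTES:
            return True
        return seconds_since_flush >= TIMEOUT_SECONDS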