S3 Data Destination

Set up an S3 data destination to export data from Singular automatically to an Amazon S3 bucket.

Note: Data destinations are an enterprise feature (learn more).


Setup Instructions

1. Create an S3 Bucket

Create an Amazon S3 bucket that Singular will export data to:

  1. In your AWS console, go to Services > S3 and click Create Bucket.
  2. Enter a name for your bucket. The name should start with the prefix "singular-s3-exports-", e.g., singular-s3-exports-mycompanyname.
  3. Select an AWS region for your bucket. The region should typically be the same as the region in which you use other AWS services, such as EC2 or Redshift.
  4. You can keep the default values for the rest of the settings.
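
If you prefer to script this step, the bucket can also be created with the AWS SDK. Below is a minimal sketch using Python and boto3; the bucket name and region are example values, so replace them with your own.

    import boto3

    BUCKET_NAME = "singular-s3-exports-mycompanyname"  # example name using the required prefix
    REGION = "us-east-1"                               # example region

    s3 = boto3.client("s3", region_name=REGION)

    # us-east-1 is the default location and must not be passed as a
    # LocationConstraint; every other region requires one.
    if REGION == "us-east-1":
        s3.create_bucket(Bucket=BUCKET_NAME)
    else:
        s3.create_bucket(
            Bucket=BUCKET_NAME,
            CreateBucketConfiguration={"LocationConstraint": REGION},
        )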

2. Give Singular Permissions to the Bucket

There are two ways to set up access to your bucket:

Option A (Recommended): Create a Bucket Policy

You can provide Singular's AWS account with direct access to your S3 bucket as follows:

  1. In the AWS console, go to Services > S3 and select the relevant S3 bucket.
  2. Select the Permissions tab and select Bucket Policy.
  3. Add the following to your Bucket Policy (replace <YOUR_S3_BUCKET_NAME> with the real bucket name):
    {
      "Version": "2012-10-17",
      "Id": "",
      "Statement": [
        {
          "Sid": "SingularS3Access",
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::623184213151:root"
            ]
          },
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:ListBucket",
            "s3:GetBucketLocation",
            "s3:DeleteObject"
          ],
          "Resource": [
            "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>",
            "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>/*"
          ]
        }
      ]
    }
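
If you manage your AWS setup in code rather than in the console, the same policy can be attached programmatically. Here is a minimal sketch using Python and boto3, assuming the policy document above; <YOUR_S3_BUCKET_NAME> is a placeholder for your real bucket name.

    import json

    import boto3

    BUCKET_NAME = "<YOUR_S3_BUCKET_NAME>"  # replace with your real bucket name

    # The same policy document shown above, expressed as a Python dict.
    policy = {
        "Version": "2012-10-17",
        "Id": "",
        "Statement": [
            {
                "Sid": "SingularS3Access",
                "Effect": "Allow",
                "Principal": {"AWS": ["arn:aws:iam::623184213151:root"]},
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:ListBucket",
                    "s3:GetBucketLocation",
                    "s3:DeleteObject",
                ],
                "Resource": [
                    "arn:aws:s3:::" + BUCKET_NAME,
                    "arn:aws:s3:::" + BUCKET_NAME + "/*",
                ],
            }
        ],
    }

    s3 = boto3.client("s3")
    s3.put_bucket_policy(Bucket=BUCKET_NAME, Policy=json.dumps(policy))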

Option B: Give Singular an Access Key ID + Secret Access Key

If you prefer, you can manage permissions by creating a dedicated AWS user that has access (only) to the relevant S3 bucket, and giving Singular an access key ID and secret access key.

Note: This will require contacting Singular support later to finish configuring your data destination.

To do so:

  1. In the AWS console, go to Services > IAM and in the menu on the left click Policies.
  2. Click Create Policy and select the JSON tab.
  3. Add the following policy (replace <YOUR_S3_BUCKET_NAME> with the real bucket name):
    {
      "Version": "2012-10-17",
      "Id": "",
      "Statement": [
        {
          "Sid": "SingularS3Access",
          "Effect": "Allow",
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:ListBucket",
            "s3:GetBucketLocation",
            "s3:DeleteObject"
          ],
          "Resource": [
            "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>",
            "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>/*"
          ]
        }
      ]
    }
  4. Click Review Policy and give the new policy a name (e.g. "singular-s3-exports").
  5. Click Users > Add User.
  6. Choose a name for the user and enable "Programmatic access" (but do not enable console access).

  7. Click Next: Permissions and under Set Permissions select Attach existing policies directly.
  8. Add the policy you just created.

  9. Finish creating the user and save the newly created access key ID and secret access key.
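
Before handing the keys over, you may want to confirm they actually grant access to the bucket. A quick check with Python and boto3 could look like the sketch below; the bucket name and both key values are placeholders for the ones you just saved.

    import boto3

    BUCKET_NAME = "<YOUR_S3_BUCKET_NAME>"      # replace with your bucket name
    ACCESS_KEY_ID = "<ACCESS_KEY_ID>"          # from the user you just created
    SECRET_ACCESS_KEY = "<SECRET_ACCESS_KEY>"  # from the user you just created

    s3 = boto3.client(
        "s3",
        aws_access_key_id=ACCESS_KEY_ID,
        aws_secret_access_key=SECRET_ACCESS_KEY,
    )

    # Write, list, and delete a small test object to exercise the policy's permissions.
    s3.put_object(Bucket=BUCKET_NAME, Key="singular_access_test.txt", Body=b"test")
    print(s3.list_objects_v2(Bucket=BUCKET_NAME, Prefix="singular_access_test")["KeyCount"])
    s3.delete_object(Bucket=BUCKET_NAME, Key="singular_access_test.txt")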

3. Add an S3 Data Destination

To add an S3 data destination in Singular:

  1. In your Singular account, go to Settings > Data Destinations and click Add a new destination.
  2. Type in either "S3 Destination" (to export aggregated marketing data) or "S3 User-Level Destination" (to export user-level data).
  3. In the window that opens, fill in the name of the bucket you created.

  4. If you created an access key ID and secret access key, select "Using AWS Access Key ID + AWS Secret Access Key" in the "Bucket Access Type" dropdown. If you gave Singular access using a bucket policy, keep the default "Using Bucket Policy".
  5. Select an output format: "CSV" or "Parquet".
  6. Choose an output path in your S3 bucket, for example, "singular_marketing_data/{date}{extension}".
  7. Choose the schema of the data loaded into the destination. For your schema options, see Data Destinations: Aggregated Marketing Data Schemas and Data Destinations: User-Level Data Schemas.

Note that Singular supports several placeholders (macros) in the output path that are expanded automatically:

  {date} - The date of the data being exported from Singular. Example: 2020-03-19
  {day} - The day part of the date being exported from Singular (zero-padded). Example: 19
  {month} - The month part of the date being exported from Singular. Example: 03
  {year} - The year part of the date being exported from Singular. Example: 2020
  {extension} - The output file extension. Example: .csv or .parquet
  {timestamp} - The exact time of the actual data (user-level data only). Example: 2020-03-19 15:01:30
  {job_timestamp} - The exact time the ETL job started running. Use this if you would like to have a new file every day (for example, a new folder for each run with all the dates that were pulled in that job). Example: 2020-03-20 16:12:34
  {job_date} - The date the ETL job started running. Similar to {job_timestamp}, but contains only the date of the job rather than the full timestamp. Example: 2020-03-20
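
As an illustration of how a path template resolves, the sketch below expands the example path from step 6 for a single export date. This only mimics the naming scheme described above (it is not Singular's implementation), and the template, date, and extension are example values.

    from datetime import date

    TEMPLATE = "singular_marketing_data/{date}{extension}"  # example path from step 6
    export_date = date(2020, 3, 19)

    resolved = TEMPLATE.format(
        date=export_date.isoformat(),  # {date}      -> 2020-03-19
        extension=".csv",              # {extension} -> .csv or .parquet
    )
    print(resolved)  # singular_marketing_data/2020-03-19.csv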