Set up an S3 data destination to export data from Singular automatically to an Amazon S3 bucket.
Data destinations are an enterprise feature (learn more).
Setup Instructions
1. Create an S3 Bucket
Create an Amazon S3 bucket to which Singular will export data.
- In your AWS console, go to Services > S3 and click Create Bucket.
- Enter a name for your bucket. The name must start with singular-s3-exports-, as Singular filters for buckets with this prefix. For example: singular-s3-exports-mycompanyname
- Select an AWS region for your bucket. The region should typically be the same as the region in which you use other AWS services, such as EC2 or Redshift.
- You can keep the default values for the rest of the settings.
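If you prefer to script this step, here is a minimal sketch using boto3 (assuming AWS credentials are already configured locally; the bucket name and region are placeholders, not values from this guide):

```python
import boto3

# Placeholders -- the name must keep the singular-s3-exports- prefix
# so Singular can find the bucket.
BUCKET_NAME = "singular-s3-exports-mycompanyname"
REGION = "us-east-1"

s3 = boto3.client("s3", region_name=REGION)

# us-east-1 is the S3 default and must NOT be passed as a
# LocationConstraint; every other region requires it.
if REGION == "us-east-1":
    s3.create_bucket(Bucket=BUCKET_NAME)
else:
    s3.create_bucket(
        Bucket=BUCKET_NAME,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )
```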
2. Give Singular Permissions to the Bucket
There are two ways to set up access to your bucket:
Option A (Recommended): Create a Bucket Policy
You can provide Singular's AWS account with direct access to your S3 bucket as follows:
- In the AWS console, go to Services > S3 and select the relevant S3 bucket.
- Select the Permissions tab and select Bucket Policy.
- Add the following to your Bucket Policy (replace <YOUR_S3_BUCKET_NAME> with the real bucket name):
{
    "Version": "2012-10-17",
    "Id": "",
    "Statement": [
        {
            "Sid": "SingularS3Access",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::623184213151:root"
                ]
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>",
                "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>/*"
            ]
        }
    ]
}

Important: Singular sets the ACL to bucket-owner-full-control to ensure the bucket owner has full access to uploaded files. Make sure your bucket policy does not deny the s3:PutObjectAcl permission to the Singular account!
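If you would rather apply the policy programmatically than through the console, the following is a minimal boto3 sketch of the same statement (the bucket name is a placeholder):

```python
import json
import boto3

BUCKET_NAME = "singular-s3-exports-mycompanyname"  # replace with your bucket

# Same statement as above: grants Singular's AWS account (623184213151)
# the object and bucket permissions it needs.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "SingularS3Access",
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam::623184213151:root"]},
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:DeleteObject",
            ],
            "Resource": [
                f"arn:aws:s3:::{BUCKET_NAME}",
                f"arn:aws:s3:::{BUCKET_NAME}/*",
            ],
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET_NAME, Policy=json.dumps(policy))
```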
Option B: Give Singular an Access Key ID + Secret Access Key
If you prefer, you can manage permissions by creating a dedicated AWS user that has access (only) to the relevant S3 bucket, and giving Singular an access key ID and secret access key.
Note: This will require contacting Singular support later to finish configuring your data connector.
- In the AWS console, go to Services > IAM. In the menu on the left, click Policies.
- Click Create Policy and select the JSON tab.
- Add the following policy (replace <YOUR_S3_BUCKET_NAME> with the real bucket name):
{
    "Version": "2012-10-17",
    "Id": "",
    "Statement": [
        {
            "Sid": "SingularS3Access",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>",
                "arn:aws:s3:::<YOUR_S3_BUCKET_NAME>/*"
            ]
        }
    ]
}

- Click Review Policy and give the new policy a name (e.g., "singular-s3-exports").
- Click Users > Add User.
- Choose a name for the user and enable "Programmatic Access" (but do not enable console access).
- Click Next: Permissions and under Set Permissions select Attach existing policies directly.
- Add the policy you just created.
- Finish creating the user and save the newly created access key ID and secret access key.
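Before contacting Singular support, you can optionally sanity-check that the new key pair can reach the bucket. A minimal boto3 sketch, with placeholder credentials and bucket name:

```python
import boto3

# Placeholders -- use the key pair generated for the dedicated user.
ACCESS_KEY_ID = "AKIA..."
SECRET_ACCESS_KEY = "..."
BUCKET_NAME = "singular-s3-exports-mycompanyname"

s3 = boto3.client(
    "s3",
    aws_access_key_id=ACCESS_KEY_ID,
    aws_secret_access_key=SECRET_ACCESS_KEY,
)

# Exercise the permissions granted by the policy: write, list, read, delete.
s3.put_object(Bucket=BUCKET_NAME, Key="singular-access-test.txt", Body=b"ok")
print(s3.list_objects_v2(Bucket=BUCKET_NAME).get("KeyCount"))
print(s3.get_object(Bucket=BUCKET_NAME, Key="singular-access-test.txt")["Body"].read())
s3.delete_object(Bucket=BUCKET_NAME, Key="singular-access-test.txt")
```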
3. Add an S3 Data Destination
To add an S3 data destination in Singular:
- In your Singular account, go to Settings > Data Destinations and click Add a new destination.
- Type in either "S3 Destination" (to export aggregated marketing data) or "S3 User-Level Destination" (to export user-level data).
- In the window that opens, enter the name of the bucket you created.
- If you created an access key ID and secret access key, select "Using AWS Access Key ID + AWS Secret Access Key" in the "Bucket Access Type" dropdown. If you gave Singular access through a bucket policy, leave the default, "Using Bucket Policy".
- Select an output format: "CSV" or "Parquet".
- Choose an output path in your S3 bucket, for example, "singular_marketing_data/{date}{extension}".
  Important: For the file partitions to be uploaded correctly, the final object in the output key pattern must be:
  - {date}{extension} for aggregate destinations
  - {timestamp}{extension} for user-level destinations
  (The sketch after this list shows one way to verify that partitions arrive as expected.)
- Choose the schema of the data loaded into the destination. For your schema options, see Data Destinations: Aggregated Marketing Data Schemas and Data Destinations: User-Level Data Schemas.
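After the destination runs, you can confirm that partitions are landing under the configured path. A minimal boto3 sketch; the bucket name is a placeholder, and the prefix assumes the example output path above:

```python
import boto3

BUCKET_NAME = "singular-s3-exports-mycompanyname"  # placeholder
PREFIX = "singular_marketing_data/"  # matches the example output path above

s3 = boto3.client("s3")

# Each {date}{extension} pattern yields one object per exported date,
# e.g. singular_marketing_data/2020-03-19.csv.
resp = s3.list_objects_v2(Bucket=BUCKET_NAME, Prefix=PREFIX)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```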
Placeholders
Singular supports the following placeholders (macros), which are expanded automatically:
| Placeholder | Description | Sample Value | Example Output Key Pattern |
|---|---|---|---|
| {date} | Date of the data being exported from Singular. | 2020-03-19 | s3://<your-bucket-name>/singular_marketing_data/{date}{extension} |
| {extension} | Output file extension. | .csv or .parquet | |
| {day} | The day part of the data being exported from Singular (zero-padded). | 19 | s3://<your-bucket-name>/singular_marketing_data/{year}/{month}/{day}/{date}{extension} |
| {month} | The month part of the data being exported from Singular (zero-padded). | 03 | |
| {year} | The year part of the data being exported from Singular. | 2020 | |
| {timestamp} | The exact time of the actual data. Only available for user-level data. | 2020-03-19 15:01:30 | s3://<your-bucket-name>/singular_marketing_data/{timestamp}{extension} |
| {job_timestamp} | The exact time the ETL job started running. Use this pattern if you want a new file for previous days' data, organized in objects ("folders") for each new ETL job timestamp. | 2020-03-20 16:12:34 | s3://<your-bucket-name>/singular_marketing_data/{job_timestamp}/{date}{extension} |
| {job_date} | The date the ETL job started running. Similar to {job_timestamp}, but contains only the date of the job rather than the full timestamp. | 2020-03-20 | s3://<your-bucket-name>/singular_marketing_data/{job_date}/{date}{extension} |