Singular Data Destinations (ETL) FAQ and Troubleshooting

Singular Data Destinations (ETL) is an enterprise feature. If you're interested in using this feature, reach out to your Customer Success Manager.

Singular can feed data directly into your data warehouse, storage service, or BI platform, where you can use your own BI and visualization tools to process and analyze the data.

By setting up data destinations, you can get your Singular data pushed into your storage service or platform at regular intervals - without requiring any engineering work on your side to implement Singular's Reporting API or process internal BI postbacks.

Troubleshooting

Why are network metrics and tracker metrics broken into separate rows?

If you are using data destinations with a custom schema that includes tracker metrics, they may appear in a separate row. This is because, by default, data destinations include fields that identify the data connector (integration) that the metrics were pulled through. Network metrics such as cost are pulled through the network data connector, while tracker metrics are pulled either internally from Singular (if Singular is your MMP) or from the data connector for your third-party attribution provider.

To merge network and tracker data into the same row, e.g., to see both cost and revenue data for a campaign, you can perform a query in your database that groups the data and excludes the data connector fields ("data_connector_XXX").

» For more information about how Singular works with tracker and network data, see Understanding Singular Reporting Data.
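The grouping described above can be sketched in a few lines of pandas. This is a minimal illustration with made-up rows and a hypothetical `data_connector_id` column standing in for the `data_connector_XXX` fields in your actual export; adapt the dimension list to your schema.

```python
import pandas as pd

# Hypothetical sample resembling an exported data destination table:
# cost (network metrics) and revenue (tracker metrics) arrive in
# separate rows because the data_connector_* fields differ per row.
rows = pd.DataFrame([
    {"date": "2024-01-01", "app": "MyApp", "source": "AdNetwork",
     "data_connector_id": "network_123", "cost": 150.0, "revenue": 0.0},
    {"date": "2024-01-01", "app": "MyApp", "source": "AdNetwork",
     "data_connector_id": "tracker_456", "cost": 0.0, "revenue": 80.0},
])

# Group by the reporting dimensions and leave out the data_connector_*
# fields so network and tracker metrics collapse into a single row.
dims = ["date", "app", "source"]
merged = rows.groupby(dims, as_index=False)[["cost", "revenue"]].sum()
print(merged)
```

The same idea expressed in SQL is a `GROUP BY` on your dimension columns with `SUM()` over the metrics, simply omitting the `data_connector_XXX` columns from the select list.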

How do I check the status of ETL updates? How do I know when the ETL was last updated?

There are a number of ways you can check ETL status:

    1. The ETL Availability API helps you check the status of your data destination updates.
    2. Go to Settings > Data Destinations and check the ETL Status column.
    3. In every data destination, there is a dimension called data_connector_timestamp_utc. This dimension shows the timestamp for when we pulled the data from the network.
    4. In every data destination there is a dimension called etl_query_timestamp_utc. This dimension shows the last time we updated data in the ETL.

Note: In continuous ETL, the timestamp indicates that no new data has come in for that row since that time. There may have been subsequent ETL runs, but this particular row did not need to be updated.
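As a quick freshness check, you can scan the timestamp dimensions mentioned above in your exported data. This sketch uses hypothetical hard-coded rows; in practice you would query your warehouse for these columns.

```python
from datetime import datetime

# Hypothetical rows from an exported data destination; the two
# timestamp dimensions exist in every data destination.
rows = [
    {"campaign": "A", "etl_query_timestamp_utc": "2024-01-02 06:00:00",
     "data_connector_timestamp_utc": "2024-01-02 04:30:00"},
    {"campaign": "B", "etl_query_timestamp_utc": "2024-01-02 12:00:00",
     "data_connector_timestamp_utc": "2024-01-02 10:15:00"},
]

fmt = "%Y-%m-%d %H:%M:%S"
# The most recent etl_query_timestamp_utc across all rows is the last
# time any row in the export was updated by the ETL.
last_update = max(datetime.strptime(r["etl_query_timestamp_utc"], fmt)
                  for r in rows)
print("ETL last updated:", last_update)
```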

FAQ

Which platforms are supported as destinations?
What data can be exported?
| Type of Data | Description | Fields Included |
|---|---|---|
| Aggregated Campaign Data | Ad spend (campaign cost), creative data, install and re-engagement stats, and other KPIs Singular pulls from ad networks and MMPs, broken down by source (network), app, OS, country, and other dimensions. | See Data Destinations: Aggregated Data Schemas |
| Aggregated Ad Monetization Data | Data about your ad revenue, ad requests, and ad impressions. For more information about Singular's ad monetization data, see the Ad Monetization Analytics FAQ. | |
| User-Level Data | User-level information from Singular's attribution logs, including attributions (conversions), in-app events, SKAN postbacks, and others. This data is available if you are using Singular's attribution service (for mobile, web, or cross-device). It resembles what you would see in the exported logs (see the Export Logs and User-Level Data FAQ). | See Data Destinations: User-Level Data Schemas |
| User-Level Ad Revenue Data | If you are collecting user-level ad monetization data in Singular from mediation partners (see Ad Revenue Attribution FAQ and Troubleshooting), you can export it using this type of data destination. You can export data from IronSource and AppLovin. | See Data Destinations: User-Level Ad Monetization Data Schema |
How often is the data refreshed?

Aggregated data is typically pushed every 6 hours, while user-level data is pushed every hour. Note that this does not necessarily represent the freshness of the data, which depends on other variables, including the data source used by Singular.

For user-level data, you can expect 95% of events to be exported to your system within 3 hours after they occurred. A small minority of events may take longer to be processed and exported.

What does the date column represent?

The date column shows the date for which the aggregated data was queried. The timezone set for the querying account determines which data is included within each query date.

How do I set up data destinations?

If data destinations are enabled for your account, you will be able to add new destinations through the Data Destinations page:

  1. Log into your Singular account and go to Settings > Data Destinations.
  2. Click Add a New Destination to display a list of the supported destinations for your account. The destinations marked "User-Level" are used to export user-level data. The others are used to export aggregated data.

However, before you add the destination in Singular, you usually need to perform some configuration steps in the partner platform, e.g., to give Singular permissions to push data to the platform. For instructions for setting up a specific data destination, find the desired destination in the Data Destinations section.

Note: Make sure to whitelist the Singular server IPs.

Which Singular IP addresses should I whitelist?

Depending on your platform's security settings, you may need to whitelist the following Singular server IPs before you can receive data exports:

54.183.135.179/32
54.183.113.72/32
13.52.189.144/32
How can I deduplicate the data?

Singular data destination exports don't have a unique key for each row of data. Instead, the data is partitioned, and whenever any information in one of the partitions is changed, Singular fully replaces the existing partition with the new one.

In S3 destinations, each partition is in a separate file. When you process the files, you should look for files that have changed and fully replace your existing data with the new data.

The partition key depends on the type of data:

| Export Type | Partition Key |
|---|---|
| Aggregate data (e.g., "S3 Destination") | Date |
| User-level data (e.g., "S3 User-Level Destination") | Hour (etl_record_processing_hour_utc) |
| Ad revenue user-level data (e.g., "S3 Ad Revenue User-Level Destination") | Date |
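The replace-the-whole-partition logic described above can be sketched as follows. This is a minimal in-memory illustration, not Singular code: the store, the `apply_partition` helper, and the sample rows are all hypothetical, but the rule they demonstrate is the one from this section — when a partition is re-delivered, drop your existing copy entirely instead of merging row by row.

```python
# Hypothetical in-memory store keyed by partition. For aggregate
# exports the partition key is the date; for user-level exports it
# would be the hour (etl_record_processing_hour_utc).
store = {}  # partition_key -> list of rows

def apply_partition(partition_key, new_rows):
    """Fully replace the partition: discard any rows previously held
    for this key and load the new export in their place."""
    store[partition_key] = list(new_rows)

# First export of 2024-01-01 arrives.
apply_partition("2024-01-01", [{"campaign": "A", "cost": 100.0}])

# A restated export for the same date arrives later. Singular resends
# the entire partition, so we replace it rather than append, which
# avoids duplicate rows without needing a per-row unique key.
apply_partition("2024-01-01", [{"campaign": "A", "cost": 120.0},
                               {"campaign": "B", "cost": 40.0}])

print(store["2024-01-01"])
```

For S3 destinations, the practical equivalent is to watch for changed files (e.g., by comparing ETag or LastModified on each object) and reload each changed file wholesale into its partition.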