The Data Warehouse S3 Connector automatically syncs all the data in your Funnel account to Amazon S3. All you have to do is connect your ad accounts to Funnel as usual, set up an S3 bucket with write access for Funnel, and enable the upload. This can be useful as a way to get programmatic access to the cost data from your ad platforms, or as a way of importing the data into Redshift or another part of your data warehouse pipeline.


The Data Warehouse connector allows you to pick and choose whichever dimensions and metrics you like. Any changes to your Funnel account, or to the fields available to the connected sources, will not be reflected in your exports until you manually change the set of selected fields. Once your export has run you'll find a number of files in your bucket. Most notably, you'll see one file per month for the date range you have selected, e.g. funnel_data_2018_01, funnel_data_2018_02 and so on. Along with these you'll find a file named schema.funnel_data_{startYYYY}_{startMM}.sql, containing the CREATE TABLE syntax for a table matching your configured export.
The files will contain the fields in the same order as they are configured in the export.
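The monthly file layout described above can be sketched programmatically. Below is a minimal sketch in Python; the funnel_data prefix matches the article's example names, but your configured file name template may differ:

```python
from datetime import date

def monthly_file_names(start: date, end: date, prefix: str = "funnel_data") -> list[str]:
    """List the per-month file names an export would produce for a date
    range, e.g. funnel_data_2018_01, funnel_data_2018_02, and so on.
    The prefix is an assumption; substitute your configured template."""
    names = []
    year, month = start.year, start.month
    while (year, month) <= (end.year, end.month):
        names.append(f"{prefix}_{year:04d}_{month:02d}")
        month += 1
        if month > 12:
            year, month = year + 1, 1
    return names
```

For example, an export covering January through March 2018 would yield funnel_data_2018_01, funnel_data_2018_02 and funnel_data_2018_03.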


The Data Warehouse connector offers some configurability of file format and data formatting, including settings for the type of header row within each file and for how metric values are formatted.


To get started you will need to prepare an S3 bucket in your AWS account. More information on this process can be found in the article Amazon S3 bucket configuration.

Export configuration

In Funnel, open your account, select "Data Warehouse" in the left-hand navigation, and click the "New export" button to set up a new export.

Choose Amazon S3 as the destination for your export and enter a name.

Enter the details of your bucket, the path where you wish to export your data, and a file name template to use when naming the export files. Once you have filled in your details, it may be useful to verify that everything is correctly set up by clicking the Test access button. If there are any connection errors, please refer to the article Amazon S3 bucket configuration.
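If you prefer to confirm write access outside of Funnel's Test access button, you can upload and delete a small probe object yourself. A minimal sketch assuming a boto3 S3 client; the bucket name, path and probe file name in the example are illustrative:

```python
def check_write_access(s3, bucket: str, prefix: str) -> bool:
    """Upload and immediately delete a small probe object to confirm
    that the credentials can write under the export path.
    `s3` is expected to be a boto3 S3 client, e.g. boto3.client("s3")."""
    key = f"{prefix.rstrip('/')}/funnel_access_test.txt"  # hypothetical probe key
    s3.put_object(Bucket=bucket, Key=key, Body=b"access test")
    s3.delete_object(Bucket=bucket, Key=key)
    return True

# Usage (assumed names):
# check_write_access(boto3.client("s3"), "my-export-bucket", "funnel-exports/")
```

If either call raises an access-denied error, revisit the bucket policy described in Amazon S3 bucket configuration.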

The next step is selecting a file format for your exports. If you are planning on importing the data into Redshift, suitable options for your S3 export would be:

  • gzip turned on

  • file format TSV

  • metric format Export

  • headers None
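A file produced with these settings (gzip-compressed TSV with no header row) can be read back for a sanity check. Because the file carries no header, the field list must come from your export configuration; the field names in the example below are assumptions:

```python
import csv
import gzip

def read_export(path: str, field_names: list[str]) -> list[dict]:
    """Read a gzipped, header-less TSV export into a list of dicts,
    keyed by the configured field order (with the 'None' headers
    setting the file itself contains no column names)."""
    with gzip.open(path, mode="rt", newline="") as f:
        reader = csv.reader(f, delimiter="\t")
        return [dict(zip(field_names, row)) for row in reader]
```

Remember that the fields appear in the same order as they are configured in the export, so the list you pass in must match that order.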

Once you have selected a file format you should click the Choose fields button to select which dimensions and metrics to include in your export. This step also lets you specify whether you want monetary values converted into a single currency or kept in the original currency they are reported in, as well as specify a schedule for when to run your export.

Verify the bucket

Once you have saved your configuration you will need to verify that the bucket is correct.
When you click the 'Verify...' button shown below, we will create a file containing a token and upload it to your bucket.

Follow the link in the dialog, open the file, and copy the contents into the box:

Enable the export

Once everything is set up you can review your configuration. If everything looks good, toggle the Enable switch in the top right-hand corner. The first export will run according to the schedule you specified. If you wish to verify that everything works as intended, you can click the Run now button.
