1. In Funnel

Start by setting up an export to S3 with the data you want to include; read more in this help article.

Make sure to:

  • Include a header row (don't select "None" as the header format)

  • Use "Export" as the metric format

  • Select CSV as the file format
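Before moving on, it can help to sanity-check one of the exported files against the settings above. This is a hedged sketch using made-up file contents; in practice you would open a file downloaded from your S3 bucket:

```python
import csv
import io

# Placeholder standing in for a file downloaded from the S3 export;
# replace with open("your_exported_file.csv") in practice.
sample_export = io.StringIO(
    "Date,Campaign,Cost,Clicks\n"
    "2021-01-01,Brand,12.50,40\n"
    "2021-01-02,Brand,9.75,31\n"
)

reader = csv.reader(sample_export)
header = next(reader)

# With a header row included, the first line holds column names,
# not data values.
print("header:", header)
```

If the first row comes back as data values instead of column names, the export was created with "None" as the header format.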

2. In MongoDB

2.1 Create a Data Lake

Follow MongoDB's instructions for creating a Data Lake.

When you click "Connect Data" you will be prompted to connect a Data Store so that you can connect to S3. Follow the steps and create an IAM role in your AWS account for MongoDB to use.
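The wizard gives you MongoDB's AWS principal and an external ID to put in the role's trust policy. As a rough sketch of what that trust policy document looks like (the principal ARN and external ID below are placeholders; use the exact values shown in the wizard):

```python
import json

# Placeholders: the wizard shows the real values when you add the Data Store.
MONGODB_AWS_PRINCIPAL = "arn:aws:iam::123456789012:root"
EXTERNAL_ID = "replace-with-external-id-from-the-wizard"

# A standard cross-account trust policy: it lets MongoDB's AWS account
# assume the role, but only when it presents the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": MONGODB_AWS_PRINCIPAL},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The external-ID condition is what stops anyone other than your Data Lake from assuming the role, so don't leave it out.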

2.2 Define Data Store path

Copy the path to one of the files exported in step 1 and enter it in the "Example S3 Path" field. Make sure to select "any value (*)" for the file name, and select "static" for any other folders you might have (in this example, testPath is a folder, but it was selected as a prefix in the bucket setup in the previous step, so it is not shown here).

Done!

The setup is now complete, and you can graph or query all the data in the S3 folder configured for the export. The pipeline automatically picks up new data and new fields.
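To make "add new fields" concrete: because the data is read on query, a column that appears in a later export simply shows up as an extra field on the newer documents, with no migration step. A stdlib sketch with made-up file contents:

```python
import csv
import io

# Two hypothetical exports; the second adds an Impressions column.
january = io.StringIO("Date,Cost\n2021-01-01,12.50\n")
february = io.StringIO("Date,Cost,Impressions\n2021-02-01,9.75,1200\n")

# Each CSV row becomes a document keyed by that file's own header row.
docs = [row for f in (january, february) for row in csv.DictReader(f)]

print(docs[0])  # {'Date': '2021-01-01', 'Cost': '12.50'}
print(docs[1])  # also carries the new Impressions field
```

Older documents simply lack the new field, which is normal for schema-on-read collections; queries should not assume every document has every column.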
