What is statistical significance (stat sig)?
The confidence level associated with behavioral lift. If a metric does not reach stat sig, its confidence is under 60% and it does not populate in the UI because the underlying data is too volatile. In other words, the metric's probability of being accurate is not strong enough, which is why we do not surface it to the client.
How are the displayed impressions in the UI constructed?
Raw pixel fires/ftp files * scale factor
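A minimal sketch of that construction, assuming the raw counts and the scale factor are already available (variable names and example values are illustrative, not Foursquare's):

```python
# Illustrative sketch: displayed impressions = raw pixel fires (or FTP-delivered rows) * scale factor.
def displayed_impressions(raw_pixel_fires: int, scale_factor: float) -> float:
    """Scale raw measured impressions up to the displayed (modeled) impression count."""
    return raw_pixel_fires * scale_factor

print(displayed_impressions(1_200_000, 1.15))  # -> 1,380,000 displayed impressions
```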
Define frequency
Average number of impressions served per exposed user
How would we evaluate impression index and behavioral lift index together?
If a demo is driving lift index, read it against the impression index: if the impression index is also over-indexing, it validates the targeting; if the impression index is under-indexing, it flags an area of optimization, since the demo converts well but is under-served by the media.
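A small illustrative decision rule for reading the two indexes together, assuming both are expressed against a baseline of 100 (the function name and thresholds are assumptions, not product logic):

```python
# Hypothetical sketch of evaluating lift index and impression index together (baseline = 100).
def read_demo(lift_index: float, impression_index: float) -> str:
    if lift_index > 100 and impression_index > 100:
        return "targeting validation: demo drives lift and is already over-served"
    if lift_index > 100 and impression_index < 100:
        return "optimization opportunity: demo drives lift but is under-served"
    return "demo is not driving lift"

print(read_demo(lift_index=125, impression_index=85))
```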
Name the three Marketers products
Targeting, Attribution, and Visits
What is a conversion window?
Bonus: Why does it vary by the locations being measured?
Number of days after exposure that Foursquare will attribute visits to the campaign. Our conversion window recommendation is based on each chain's organic foot traffic patterns. We take into account the population visit rate (adjusted for the chain's DMA footprint), the brand’s average visit cycle (number of days between visits), as well as the average visit cycle of the overall vertical (QSR, casual dining, etc.).
How do we calculate store visits?
Conversion rate * displayed impressions
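Continuing the illustrative numbers from the sketch above (the 0.6% conversion rate is an assumed example value):

```python
# Store visits as modeled in the UI: conversion rate * displayed impressions (sketch only).
def store_visits(conversion_rate: float, displayed_impressions: float) -> float:
    return conversion_rate * displayed_impressions

print(store_visits(0.006, 1_380_000))  # -> 8,280 modeled store visits
```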
Impression Index
Indicates how the exposed group is indexing against the general US population.
Ex: female impression index of 130 = exposed group is 30% more likely to be female than gen pop
What does a low CPV show?
The campaign's efficiency at driving store visitation
What is the partner and subplacement in this pixel?
<img src="https://p.placed.com/api/v2/sync/impression?partner=ironsource&version=1.0&plaid=006UU000000V9ILYA0&payload_campaign_identifier=kfcwraps&payload_device_identifier=REPLACE-DEVICE-ID-MACRO&payload_timestamp=REPLACE-TIMESTAMP-MACRO&payload_type=impression" border="0" width="1" height="1"/>
partner: ironsource
subplacement: kfcwraps
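One way to pull those fields out of the pixel URL is a quick sketch using Python's standard library; mapping payload_campaign_identifier to the subplacement follows the answer above:

```python
from urllib.parse import urlparse, parse_qs

pixel_src = (
    "https://p.placed.com/api/v2/sync/impression?partner=ironsource&version=1.0"
    "&plaid=006UU000000V9ILYA0&payload_campaign_identifier=kfcwraps"
    "&payload_device_identifier=REPLACE-DEVICE-ID-MACRO"
    "&payload_timestamp=REPLACE-TIMESTAMP-MACRO&payload_type=impression"
)

params = parse_qs(urlparse(pixel_src).query)
print("partner:", params["partner"][0])                           # ironsource
print("subplacement:", params["payload_campaign_identifier"][0])  # kfcwraps
```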
Why is fractional attribution better than last touch methodology?
Fractional attribution gives partial credit for one visit to multiple exposures across varying channels, partners, and/or tactics. For example, a user is exposed to media from a campaign via mobile, linear, and OOH, then makes a visit. That one visit is divided into 0.33 of a visit credited to each channel. Last-touch attribution would credit only the last exposure, even though multiple media channels led to that visit.
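A toy sketch of that split, assuming equal credit across the exposing channels (real fractional models may weight channels differently; this is illustrative only):

```python
# Split one visit's credit equally across the channels that exposed the user (sketch).
def fractional_credit(channels: list[str]) -> dict[str, float]:
    share = 1.0 / len(channels)
    return {channel: round(share, 2) for channel in channels}

print(fractional_credit(["mobile", "linear", "OOH"]))  # {'mobile': 0.33, 'linear': 0.33, 'OOH': 0.33}
```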
How is CPV (Cost Per Visit) calculated?
Ad spend / total visits
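A worked sketch with assumed example numbers, reusing the modeled visit count from above:

```python
# CPV = ad spend / total visits (sketch).
def cost_per_visit(ad_spend: float, total_visits: float) -> float:
    return ad_spend / total_visits

print(round(cost_per_visit(50_000, 8_280), 2))  # -> $6.04 per visit
```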
Conversion Rate (CVR)
The percentage of impressions that result in a store visit
Why does behavioral lift not populate for every report?
Behavioral lift only populates when the exposed group's conversion rate is higher than the unexposed (lookalike) group's conversion rate. When the lookalike group's predicted organic visitation rate is higher than the exposed group's actual visitation rate, there is no positive lift to report.
Name 2 partners we cannot measure?
Bonus: Why?
YouTube and Meta (Facebook & Instagram)
Both are walled gardens and do not accept third-party (3P) tracking tags
What does it mean when store visits are extrapolated/scaled up?
Attribution estimates store visit counts based on the store visits we observe from our panel; the store visits displayed in the UI are modeled up from the conversion rate and the impressions served
How is reach calculated?
Displayed impressions/frequency
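Because frequency is the average number of impressions served per exposed user, dividing displayed impressions by frequency recovers the count of unique exposed users; a sketch with assumed numbers:

```python
# Reach = displayed impressions / frequency (sketch with illustrative values).
def reach(displayed_impressions: float, frequency: float) -> float:
    return displayed_impressions / frequency

print(reach(1_380_000, 4.0))  # -> 345,000 unique exposed users
```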
Historical Visitation Cohorts (HVC)
Buckets your exposed visiting group into cohorts based on their visit frequency in the year prior to campaign launch: High frequenters = top 25% of users by visit frequency, Medium = middle 50%, Low = bottom 25%, and Non Visitors = users not observed making a visit prior to the campaign
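A rough sketch of that bucketing, assuming we have each exposed visitor's pre-campaign visit count; the percentile cutoffs mirror the definition above, everything else is illustrative:

```python
# Bucket exposed visitors by pre-campaign visit frequency: top 25% = High, middle 50% = Medium,
# bottom 25% = Low; users with zero observed prior visits = Non Visitors. Sketch only.
import statistics

def hvc_bucket(visit_count: int, p25: float, p75: float) -> str:
    if visit_count == 0:
        return "Non Visitor"
    if visit_count >= p75:
        return "High"
    if visit_count >= p25:
        return "Medium"
    return "Low"

prior_visits = [0, 1, 2, 2, 3, 5, 8, 12]            # illustrative pre-campaign visit counts
visitors = [v for v in prior_visits if v > 0]
quartiles = statistics.quantiles(visitors, n=4)     # [Q1, Q2, Q3] of the visiting group
p25, p75 = quartiles[0], quartiles[2]
print([hvc_bucket(v, p25, p75) for v in prior_visits])
```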
Why is it encouraged to compare CVRs vs store visits when evaluating partner performance?
Store visits are based on impressions, and partners' impression volumes vary; CVR is an equalizer because it is the percentage of impressions that result in a store visit, not a raw number
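An illustrative comparison with made-up numbers: the partner with more impressions shows more raw visits, but the smaller partner actually converts better.

```python
# Hypothetical partner comparison: raw visits favor the bigger partner, CVR reveals efficiency.
partners = {
    "Partner A": {"impressions": 2_000_000, "visits": 9_000},
    "Partner B": {"impressions": 500_000, "visits": 3_500},
}
for name, p in partners.items():
    cvr = p["visits"] / p["impressions"]
    print(f"{name}: visits={p['visits']:,}  CVR={cvr:.2%}")
# Partner A: visits=9,000  CVR=0.45%
# Partner B: visits=3,500  CVR=0.70%
```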
Why is it our best practice to set up attribution reports for a maximum of 3 months?
The model will be less accurate if we train it on a large amount of additional unrelated data. There's a higher chance that the model will capture noise in 3+ months' worth of visit data. Adding this noise can affect the synthetic control, which in turn decreases lift accuracy.
What is our attribution methodology?
Your advertisements reach consumers on a billboard, app or TV. → Foursquare matches ad exposures to visit data using our always-on proprietary panel. → We create a synthetic control group to demonstrate how exposed consumers would have behaved had they not seen your ad. → We surface incremental lift and insights in our dashboard by tactic, channel, demo, region and more allowing you to optimize throughout the campaign.
What is the CVR calculation?
Matched visits / matched exposed users (*weighting)
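A minimal sketch of that calculation; the optional weighting is passed through as a single multiplier here purely for illustration, since the card does not specify how weighting is applied:

```python
# CVR = matched visits / matched exposed users, optionally weighted (sketch).
def conversion_rate(matched_visits: int, matched_exposed_users: int, weight: float = 1.0) -> float:
    return (matched_visits / matched_exposed_users) * weight

print(f"{conversion_rate(1_200, 150_000):.2%}")  # -> 0.80%
```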
Behavioral Lift
The exposed group is visiting at a higher rate than an unexposed lookalike group that displays similar demographics and pre-exposure visitation patterns.
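The card does not give an explicit formula, but one common way to express such a lift, stated here as an assumption rather than Foursquare's documented calculation, is the relative difference between the exposed rate and the lookalike control rate:

```python
# Assumed illustration only: lift as the relative difference between exposed and lookalike control rates.
def behavioral_lift(exposed_cvr: float, control_cvr: float) -> float:
    return (exposed_cvr - control_cvr) / control_cvr

print(f"{behavioral_lift(0.0080, 0.0065):.1%}")  # exposed 0.80% vs control 0.65% -> ~23.1% lift
```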
Explain the relationship between frequency and CVR
Frequency and CVR have an inverse relationship: when frequency increases, CVR tends to decrease, and vice versa. For example, when you expose a user too frequently (e.g., 50x), they can only visit so often before maxing out on the visitation potential the media can drive.
Why use a synthetic control based methodology versus 1:1 matching?
More exposure data, reduced imbalance, and panel leverage – a synthetic control keeps the data used to create the metrics malleable and large, while a 1:1 matching methodology reduces the usable data and therefore the accuracy