
How to build a dashboard for your Digital Ad Experiments?

Here is the Product Requirement Document for the Ad Experiments Dashboard.

Scope: To create experiments for ad copies, use Experiment Analytics to conclude a winner, and scale it up to 100% across campaigns.

Product User Flow:

The Ad Experiments Dashboard will have two user interfaces:

  1. Ad Repository
  2. Experiments

Ad Repository

The Ad Repository will contain all ad copies used in campaigns so far, kept in sync with AdWords, and will facilitate the creation of new ad copies that can be used in experiments.

Similar to the current LP page-tag repository, we need separate repositories for each element of an ad copy:

  1. H1
  2. H2
  3. Description
  4. Path 1
  5. Path 2
  6. LP
  7. Ad extensions

Along with the individual repositories, we will also create a ‘Full Ad copy’ repository. While creating an experiment, the user can either assemble a complete copy from the individual elements above or choose a complete ad from the Full Ad copy repository.

All repositories will offer the following filters:

  1. Ad Status
  2. Category
  3. City
  4. Device
  5. Label
  6. Campaign Type/Contains

Ad Status – whether the ad is enabled or disabled in AdWords.

User Activities Scope:

  1. The user will have an option to either
    1. create individual components of ad copies, or
    2. create a complete ad copy in the ‘Full Ad copy’ repository
  2. A content creator can go to any of the above repositories and create the ad. Once created, a label will be assigned to the ad copies; this label identifies the set of ad copies created on that day for a specific category. The label shall be ‘Category_Date’.
  3. The repositories sync with AdWords every 24 hours to
    1. update the status of the ads, and
    2. add ads to the system if they are not present in the repository
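
A minimal sketch of the labeling and sync rules above, assuming a simple in-memory repository keyed by ad id; the ad objects and the date format inside the label are illustrative assumptions, not AdWords API types or spec:

```python
from datetime import date

def make_label(category: str, created_on: date | None = None) -> str:
    """Build the 'Category_Date' label that groups the ad copies created
    on one day for one category. The date format is an assumption."""
    created_on = created_on or date.today()
    return f"{category}_{created_on:%Y-%m-%d}"

def sync_repository(repository: dict, adwords_ads: list) -> None:
    """24-hour sync job: (1) refresh each ad's status, (2) add ads that
    are missing from the repository. `repository` (id -> ad) and the ad
    objects are illustrative stand-ins."""
    for ad in adwords_ads:
        if ad.id in repository:
            repository[ad.id].status = ad.status  # 1. update the status
        else:
            repository[ad.id] = ad                # 2. add missing ads
```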

Create Experiment

  1. Users can choose any of the ‘Cluster Key’, ‘Category Key’, ‘City’, ‘Campaign Contains/Doesn’t Contain’, and ‘Campaign Adgroup Contains/Doesn’t Contain’ filters to create an experiment. These are the primary filters, used to select campaigns from AdWords.
  2. Apply filters → choose the ad type for the experiment → 1. Expanded Text Ads 2. Responsive Text Ads. [The dashboard architecture is organized by ad type for creating experiments, to facilitate integration of other channels with our dashboard in the future.]
  3. If Expanded Text Ads/Responsive Text Ads → add ‘Ad Elements’ for creating the experiment. [Users can build the experiment from 1. a single ad element, 2. multiple ad elements, or 3. the Full Ad copy repository]
    1. In case of ‘Expanded Text Ads’ – the user will have the option to add up to 15 copies for each element. The combination algorithm for 1. a single ad element and 2. multiple ad elements still needs to be worked out (see the cross-product sketch after this list).
    2. In case of ‘Responsive Text Ads’ – the user will have the option to add up to 15 headlines, pinning positions where needed. Up to 4 descriptions and other ad elements can be given as input; if not given, the remaining elements will be taken from the copies that are live in campaigns.
    3. The user can export copies by label from the corresponding ad repository.
    4. The user can edit the ad copies even after exporting; any edits made here will be updated back into the repository.
    5. The user can also add DKI (Dynamic Keyword Insertion) ad copies here.
    6. The user can also add any new ad element here.
    7. All new elements added will be saved and synced with the repository when the user clicks ‘Save Ad Variants’.
    8. Once saved, for Expanded Text Ads the system will output the total number of ad variants created.
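
The combination algorithm is still open; the simplest reading is a full cross-product of the element lists, which also gives the variant count reported after ‘Save Ad Variants’. A minimal Python sketch under that assumption (element names and shapes are illustrative):

```python
from itertools import product
from math import prod

def build_variants(elements: dict[str, list[str]]) -> list[dict[str, str]]:
    """Full cross-product of ad elements, e.g.
    {'H1': [...], 'H2': [...], 'Description': [...], 'Path 1': [...]}."""
    names = list(elements)
    return [dict(zip(names, combo)) for combo in product(*elements.values())]

def total_variants(elements: dict[str, list[str]]) -> int:
    """The count the system reports after 'Save Ad Variants'."""
    return prod(len(copies) for copies in elements.values())

# e.g. 3 H1s x 2 H2s x 2 Descriptions -> 12 variants
```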

Push Experiment

In the step above we created the ad variations we need to test. In this step, we identify where these variations need to be pushed, and push them.

Here the interface will have a checklist view of ‘Campaigns’ → ‘Adgroups’ → ‘Ads’ – a similar expandable view is present in AdWords Editor and MCC reports.

From this view, the user can do two things:

  1. Ad performance – the user can select ads and a date range to see performance over that period. All metrics from ‘Experiment Analytics’ will be shown here. Based on this data, the user can pause any ads from here.
  2. Push into campaigns – the user can select ads on the basis of filters [1. Campaign 2. Device 3. Category 4. Cluster 5. City & all ad elements – with contains/doesn’t contain]. Once the filters are applied, the user can
    1. Click ‘Pause & Push’: the selected ads will be paused and new ads will be built from the experiment ad copy.
    2. Click ‘Duplicate & Push’: the selected ads will be duplicated and new ads will be built from the experiment ad copy.
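
A sketch of the two push modes, assuming a generic ads client; `pause`, `duplicate`, and `create_ad` are hypothetical helper names, not AdWords API calls:

```python
def pause_and_push(selected_ads, experiment_copy, client):
    """'Pause & Push': pause each selected ad, then build a new ad from
    the experiment ad copy in the same ad group."""
    for ad in selected_ads:
        client.pause(ad)                                  # hypothetical call
        client.create_ad(ad.adgroup_id, experiment_copy)  # hypothetical call

def duplicate_and_push(selected_ads, experiment_copy, client):
    """'Duplicate & Push': keep the selected ads live (duplicated), and
    add new ads built from the experiment ad copy alongside them."""
    for ad in selected_ads:
        client.duplicate(ad)                              # hypothetical call
        client.create_ad(ad.adgroup_id, experiment_copy)
```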

After the push to campaigns – based on the total number of ads, users can set threshold limits for the experiment. Threshold limits exist for 1. automated experiment scaling and 2. manual scaling.

  1. Automatic scaling needs threshold limits: 1. How many ads should be tested at one time? 2. For how many sessions should an experiment run? 3. Optimized/Rotate indefinitely.
  2. Manual scaling will always work in Optimized mode.
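
These thresholds could be captured in a small experiment config; all field names and defaults below are illustrative, not part of the spec:

```python
from dataclasses import dataclass

@dataclass
class ScalingThresholds:
    """Threshold limits set after pushing to campaigns (names illustrative)."""
    mode: str = "automatic"              # "automatic" or "manual"
    ads_in_test: int = 4                 # how many ads are tested at one time
    sessions_per_experiment: int = 1000  # sessions before evaluation
    rotation: str = "optimized"          # "optimized" or "rotate_indefinitely";
                                         # manual scaling always runs "optimized"
```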

Experiment Analytics

If ‘Automatic Scaling’ – after reaching the threshold limits, the system will decide which ads need to be paused on the basis of the Blended Metric = (A × CTR + B × CVR) × 100. The A and B values will be set from Settings, and A + B will always equal 1. Lower-performing ads will be paused and new ads will be added for experimentation from the queue.
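
A direct sketch of this pause decision, using the formula above; `live_ads`, `queue`, and the ad objects (with `.ctr` and `.cvr` attributes) are illustrative stand-ins:

```python
def blended_metric(ctr: float, cvr: float, a: float, b: float) -> float:
    """Blended Metric = (A*CTR + B*CVR) * 100; A and B come from Settings
    and must sum to 1."""
    assert abs(a + b - 1.0) < 1e-9, "A + B must equal 1"
    return (a * ctr + b * cvr) * 100

def rotate_ads(live_ads, queue, a, b, keep):
    """Rank live ads by Blended Metric, pause the lower performers,
    and pull replacements from the experiment queue."""
    ranked = sorted(live_ads,
                    key=lambda ad: blended_metric(ad.ctr, ad.cvr, a, b),
                    reverse=True)
    to_pause = ranked[keep:]              # lowest-scoring ads get paused
    replacements = queue[:len(to_pause)]  # next ads in from the queue
    return to_pause, replacements
```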

At any time, the user can open Experiment Analytics and see the following:

Filters – ‘Device’, ‘City’, ‘Time range’

Dimensions

  1. All Experiment A/B Versions
  2. Other A/B versions existing in the campaign

Metrics

  1. Impressions
  2. Clicks
  3. Avg Position
  4. Conversions
  5. Cost with Tax
  6. CTR
  7. CVR
  8. CPR
  9. Category Started
  10. Category Started Rate
  11. Category Started/Conversion Rate
  12. Blended Metric
  • KW_Normalized Analogy – a checkbox for when the user would like to see experiment performance for the same set of keywords; if not selected, data is shown across all keywords. The metrics we see for A/B should also be viewable for the same set of keywords, normalized by Avg Position (called KW_Corrected) across the 2 ad copies, to know the actual impact of the copies, separating out the impact of keyword portfolio changes and bid changes.
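
One plausible reading of KW_Corrected, sketched with pandas: restrict both versions to their common keyword set and normalize CTR by Avg Position before comparing. The exact normalization is not specified in this document, so the `ctr * avg_position` step is an assumption:

```python
import pandas as pd

def kw_corrected(df_a: pd.DataFrame, df_b: pd.DataFrame) -> pd.DataFrame:
    """Compare versions A and B on the same keyword set, with CTR
    normalized by Avg Position ('KW_Corrected').
    Expects columns: keyword, clicks, impressions, avg_position."""
    common = set(df_a["keyword"]) & set(df_b["keyword"])  # same KW set only
    frames = []
    for name, df in (("A", df_a), ("B", df_b)):
        d = df[df["keyword"].isin(common)].copy()
        d["ctr"] = d["clicks"] / d["impressions"]
        # Assumption: scale CTR by position so copies shown lower on the
        # page are not penalized for position rather than copy quality.
        d["ctr_kw_corrected"] = d["ctr"] * d["avg_position"]
        frames.append(d.assign(version=name))
    return pd.concat(frames, ignore_index=True)
```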

Conclude Experiment

If Manual Scaling, from the above Analytics interface the user can

  1. Choose the best-performing experiment & scale to 100% – this will pause the other existing ads and scale the winner to 100%.
  2. Pause/Remove – experiment A/B versions built during Create Experiment – in a specific city/campaign type/device.

Settings

In Settings

  1. Blended Metric – A & B values
  2. Cluster and Category Mapping in SEM from Campaign Name