
Naming Conventions That Scale: From Chaos to Governance

April 1, 2025 | 5 min read | Mihai Hurjui

What Happens Without a Naming Convention

You inherit an enterprise testing program with 300+ activities. You open the Target activity list and see names like “Homepage Test 2,” “John’s test - DO NOT DELETE,” “Q4 campaign v3 FINAL,” and “test.” You need to find all A/B tests that ran on product pages in APAC last quarter. Good luck.

Without a naming convention:

  • Reporting requires manual lookup and tribal knowledge
  • Test history is unsearchable — learnings from previous tests are lost
  • Duplicate tests run because nobody can tell what’s already been tested
  • Dashboards can’t auto-classify activities by product, region, or test type
  • New team members have no way to understand the testing program’s scope

This compounds. At 500+ annual optimization activities across multiple regions and business units, a missing naming convention doesn’t just slow you down — it makes the testing program ungovernable.

Anatomy of a Naming Convention That Works

The activity naming format uses 8 components separated by underscores:

[TYPE]_[REGION]_[PRODUCT]_[PAGE]_[AUDIENCE]_[DATE]_[VERSION]_[HYPOTHESIS]

Each component:

  • TYPE: Activity type code. ABT = A/B Test, XT = Experience Targeting, MVT = Multivariate Test, AP = Automated Personalization, AT = Auto-Target.
  • REGION: Geographic scope. WW = Worldwide, AM = Americas, EM = EMEA, AP = Asia Pacific. Define codes for every market you operate in.
  • PRODUCT: Product or business unit identifier. Use short codes from your internal taxonomy (PROD1, PROD2, etc.). These map to your product hierarchy.
  • PAGE: Page type. PDP = Product Detail Page, CMP = Campaign Landing Page, THK = Thought Leadership/Content, HP = Homepage.
  • AUDIENCE: Target audience segment. ALL = All Visitors, NEW = New Visitors, RET = Returning, TEC = Technical Audience, BUS = Business Decision Makers.
  • DATE: Launch month in YYYYMM format. Not the end date — the date the activity first went live.
  • VERSION: Iteration counter. V1, V2, V3. Increments when you relaunch a test on the same page with a modified hypothesis.
  • HYPOTHESIS: Optional reference to a ticket tracking ID or hypothesis summary. Keeps the activity traceable to its origin.

Example: ABT_WW_PROD1_PDP_ALL_202412_V1 reads as “A/B Test, Worldwide, Product 1, Product Detail Page, All Visitors, December 2024, Version 1.”

Every component is required except HYPOTHESIS. All uppercase. Underscore delimiter only — no spaces, hyphens, or dots. Maximum 100 characters.
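Because the format is strictly delimited, splitting a name back into its components is mechanical. A minimal sketch (the field names in the dictionary are illustrative, not part of any Adobe API):

```python
# Split a convention-compliant activity name into labeled components.
# Field names are illustrative labels for this sketch only.
FIELDS = ["type", "region", "product", "page", "audience", "date", "version"]

def parse_activity_name(name):
    parts = name.split("_")
    if len(parts) < 7:
        raise ValueError(f"Expected at least 7 components, got {len(parts)}")
    parsed = dict(zip(FIELDS, parts[:7]))
    # Anything after VERSION is the optional HYPOTHESIS reference.
    parsed["hypothesis"] = "_".join(parts[7:]) or None
    return parsed

parse_activity_name("ABT_WW_PROD1_PDP_ALL_202412_V1")
# → {'type': 'ABT', 'region': 'WW', 'product': 'PROD1', 'page': 'PDP',
#    'audience': 'ALL', 'date': '202412', 'version': 'V1', 'hypothesis': None}
```

Every downstream use in this post — classification, API filtering, dashboards — is some variation of this split.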

Experience Naming

Activities contain experiences, and those need structure too. The companion format uses 5 components:

[VARIANT]_[ELEMENT]_[CHANGE]_[DEVICE]_[PRIORITY]

  • VARIANT: CTRL for control, VAR1, VAR2, etc. for variations
  • ELEMENT: What’s being changed — HERO, CTA, NAV, FORM, SIDEBAR
  • CHANGE: Type of change — IMG = image swap, CLR = color, CPY = copy, LYT = layout
  • DEVICE: Omit for all devices. MOB = mobile only, DSK = desktop only.
  • PRIORITY: P1 = high priority, P2 = standard

Example: VAR1_HERO_IMG_P1 means “Variant 1, Hero banner, Image change, Priority 1.” Maximum 50 characters.
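Because DEVICE is omitted for all-device experiences, a parser has to handle both the 4-part and 5-part forms. One way to sketch that, using the position of the priority component:

```python
# Parse an experience name; DEVICE is optional, so a name has either
# 4 or 5 components. A sketch, not an Adobe API.
def parse_experience_name(name):
    parts = name.split("_")
    if len(parts) not in (4, 5):
        raise ValueError(f"Expected 4 or 5 components, got {len(parts)}")
    return {
        "variant": parts[0],
        "element": parts[1],
        "change": parts[2],
        "device": parts[3] if len(parts) == 5 else None,  # None = all devices
        "priority": parts[-1],
    }

parse_experience_name("VAR1_HERO_IMG_P1")
# → {'variant': 'VAR1', 'element': 'HERO', 'change': 'IMG',
#    'device': None, 'priority': 'P1'}
```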

Why This Format Enables Automated Reporting

When every activity follows a parseable format, you unlock automated reporting that would be impossible with freeform names.

SAINT classification in Adobe Analytics: Write regex rules that parse activity names into classification dimensions. The TYPE component becomes a “Test Type” dimension, REGION becomes a “Geography” dimension, PRODUCT maps to your product hierarchy. One SAINT upload and every Target activity is automatically classified in your Analytics reports.
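One way to produce that upload is to generate the classification rows directly from parsed names; SAINT upload files are tab-delimited text. A sketch (the column headers are illustrative placeholders, not your actual dimension names):

```python
# Build rows for a SAINT-style classification upload from activity names.
# Column headers are illustrative; substitute your real dimension names.
def build_classification_rows(activity_names):
    rows = [("Key", "Test Type", "Geography", "Product", "Page Type")]
    for name in activity_names:
        type_code, region, product, page = name.split("_")[:4]
        rows.append((name, type_code, region, product, page))
    return rows

# SAINT uploads are tab-delimited:
tsv = "\n".join("\t".join(row) for row in
                build_classification_rows(["ABT_WW_PROD1_PDP_ALL_202412_V1"]))
```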

API compatibility: The Target Admin API can programmatically filter and sort activities by any component. Need a list of all A/B tests running in EMEA? Parse the activity names. No manual tagging required.
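The filtering itself reduces to string operations on the name list. A sketch over hypothetical sample names (in practice the list would come from the Admin API):

```python
# Filter activity names (e.g. pulled from the Target Admin API) by
# component. The sample names below are hypothetical.
activities = [
    "ABT_EM_PROD1_PDP_ALL_202410_V1",
    "XT_EM_PROD2_HP_NEW_202411_V1",
    "ABT_AM_PROD1_CMP_ALL_202412_V2",
]

def matches(name, type_code=None, region=None):
    parts = name.split("_")
    return ((type_code is None or parts[0] == type_code)
            and (region is None or parts[1] == region))

emea_ab_tests = [a for a in activities if matches(a, type_code="ABT", region="EM")]
# → ["ABT_EM_PROD1_PDP_ALL_202410_V1"]
```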

A4T integration: When you use Analytics for Target reporting, the structured activity names flow into Analytics automatically. Your A4T reports inherit the naming structure without additional configuration.

Self-service analytics: Business users can filter by region, product, page type, or test type in Analysis Workspace without asking the analytics team to build custom reports. The naming convention is the data layer.

This enables four categories of dashboards:

  1. Executive KPI dashboard: Program win rate, test velocity, estimated revenue impact
  2. Operational dashboard: Live test monitoring, collision detection between overlapping tests, QA health checks
  3. Product performance dashboard: Testing coverage and results by product line
  4. Regional dashboard: Geographic coverage and win rates by market

Governance Rules

A naming convention without enforcement is a suggestion. Make it enforceable:

  • All components uppercase
  • Underscore delimiter only
  • 100-character maximum for activity names, 50-character maximum for experience names
  • No special characters beyond underscores
  • Required components: TYPE, REGION, PRODUCT, PAGE, AUDIENCE, DATE, VERSION
  • Every new activity is validated before launch

Validation Before Launch

Automate the enforcement. A validation script checks every activity name against the convention and flags violations with specific error messages:

import re

VALID_TYPES = {'ABT', 'XT', 'MVT', 'AP', 'AT'}
VALID_REGIONS = {'WW', 'AM', 'EM', 'AP', 'LA', 'CN', 'JP', 'IN'}
# Structural check: TYPE_REGION_PRODUCT_PAGE_AUDIENCE_DATE_VERSION[_HYPOTHESIS].
# TYPE is matched loosely here so the specific code checks below can fire;
# the HYPOTHESIS tail is restricted to uppercase/digits/underscores per the
# governance rules.
PATTERN = r'^[A-Z]+_[A-Z]{2}_[A-Z0-9]+_[A-Z]+_[A-Z]+_\d{6}_V\d+(_[A-Z0-9_]+)?$'

def validate_activity_name(name):
    """Check a name against the convention; return (is_valid, message)."""
    if len(name) > 100:
        return False, "Name exceeds 100 characters"
    if not re.match(PATTERN, name):
        return False, "Name does not match required format"
    parts = name.split('_')
    # Code checks run after the structural check so errors stay specific.
    if parts[0] not in VALID_TYPES:
        return False, f"Unknown type code: {parts[0]}"
    if parts[1] not in VALID_REGIONS:
        return False, f"Unknown region code: {parts[1]}"
    return True, "Valid"

Run this as a pre-launch checklist step. Invalid names get flagged with actionable error messages — “Unknown region code: EU — did you mean EM?” is more useful than a generic “invalid name” error.

Monthly compliance audits catch naming drift. Pull all active activities via the Target API, run them through validation, and report violations. Target 95% compliance within 60 days of adoption.
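The audit loop itself is a few lines; the names would come from the Target API (the API call is omitted here), and `validate` is any function returning `(is_valid, message)`, such as the validation script above:

```python
# Monthly compliance audit sketch: validate every active activity name
# and report the compliance rate plus the list of violations.
def audit(names, validate):
    violations = [(name, msg) for name in names
                  for ok, msg in [validate(name)] if not ok]
    rate = 1.0 - len(violations) / len(names) if names else 1.0
    return rate, violations
```

Trend the rate month over month; a dip signals naming drift before it becomes a reporting problem.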

Migration Strategy

You can’t rename 300 existing activities overnight. Phase it:

  • Phase 1 (immediate): All new activities follow the convention. Validation is mandatory for every new launch.
  • Phase 2 (months 1-2): Rename active high-value tests. Maintain a lookup table mapping old names to new names for historical continuity.
  • Phase 3 (months 3-6): Archive historical activities with classification tags. Complete the migration.
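The Phase 2 lookup table can be as simple as a two-column mapping that reporting joins against. A sketch (the old names below are hypothetical examples in the spirit of the opening anecdote):

```python
# Old-to-new name mapping for historical continuity.
# Entries here are hypothetical examples.
RENAMES = {
    "Homepage Test 2": "ABT_WW_PROD1_HP_ALL_202409_V1",
    "Q4 campaign v3 FINAL": "ABT_AM_PROD2_CMP_ALL_202410_V1",
}

def canonical_name(name):
    # Resolve any historical name to its convention-compliant form;
    # already-compliant names pass through unchanged.
    return RENAMES.get(name, name)
```

Store the table wherever your reporting can reach it (a spreadsheet, a classification file, a config repo) so historical results stay joinable with new ones.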

What This Costs and What It Saves

The upfront investment is modest: defining the format, building the validation script, training the team. The return compounds over time:

  • 80% reduction in reporting effort. Self-service dashboards replace manual data pulls and spreadsheet wrangling.
  • Under 5 minutes from question to insight. “How are product page tests performing in EMEA this quarter?” becomes a filter operation, not a research project.
  • 100% test traceability. Every activity traces back to a hypothesis through the naming convention. No more orphaned tests with unclear origins.
  • Knowledge preservation. When someone leaves the team, the testing program’s history doesn’t leave with them. Five hundred tests remain searchable and learnable for every future team member.

Naming conventions are the least glamorous part of a testing program and the single highest-leverage governance investment you can make. Get this right early and everything built on top of it — reporting, analysis, optimization — works better by default.

If your naming feeds into CJA cross-channel reporting, the SAINT classification approach described above carries directly into your data views.

Written by Mihai Hurjui

Adobe Experience Platform Consultant
