Personalization | Adobe Target | Architecture

Building a Visitor Scoring Model for Enterprise Personalization

March 15, 2025 | 5 min read | Mihai Hurjui

Why You Need a Scoring Model Before You Personalize

Most enterprise personalization programs start with audience segments — new vs. returning, geographic region, device type — and build experiences around them. This works for basic targeting but breaks down when you need to personalize across content types, funnel stages, and visitor intent simultaneously. You end up with a matrix of overlapping segments and no coherent model for how they interact.

A visitor scoring model solves this by assigning continuous scores across multiple dimensions rather than forcing visitors into discrete buckets. Instead of “returning visitor from the US,” you get “engagement 72, intent 45, fit 85, strong Fire topic affinity.” That level of resolution gives you far more precision for experience decisioning.

The Four Dimensions

Each dimension captures a different aspect of visitor understanding. All use a 0-100 scale.

Topic Affinity (What They Care About)

Topic affinity tracks interest across your content categories — for example, a gaming site might track Fire, Water, Grass, and Electric. Scores accumulate based on page views, time spent on content, and content depth (introductory vs. advanced material).

The critical detail: scores must decay over time. Without recency decay, a visitor who researched Fire six months ago but has since shifted to Water content still shows high Fire affinity. Stale scores produce stale personalization.

Use an exponential decay formula: score * e^(-decay_constant * days_since_last_interaction). Configure decay speed per business need — a decay constant of 0.05 (roughly a 14-day half-life) for fast-moving topics, 0.01 (roughly a 69-day half-life) for stable interests.
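As a sketch, the decay step is a one-liner in ES5 (written as a pure function here for clarity; in a Target profile script the stored score and last-interaction timestamp would come from the visitor profile):

```javascript
// Apply exponential recency decay to a stored topic score (ES5).
// score: last stored value (0-100)
// decayConstant: e.g. 0.05 for fast-moving topics, 0.01 for stable ones
// daysSince: days since the visitor last interacted with the topic
function decayScore(score, decayConstant, daysSince) {
  var decayed = score * Math.exp(-decayConstant * daysSince);
  return Math.round(decayed);
}

// Example: a Fire score of 85, untouched for 30 days at constant 0.05,
// decays to roughly 19.
var fireScore = decayScore(85, 0.05, 30);
```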

The output is a JSON map of topic scores:

{
  "topicAffinity": {
    "Fire": 85,
    "Water": 72,
    "Grass": 58,
    "Electric": 31
  }
}

Use case: topic-based content recommendations, personalized navigation highlighting relevant product areas.

Engagement Score (How Deeply They Engage)

A composite score from four weighted components:

  • Page Depth (25%): Pages viewed per session
  • Content Variety (25%): Breadth of content categories explored
  • Return Frequency (30%): Session count over a rolling window
  • Time on Site (20%): Active engagement duration

Calibration benchmarks: a new visitor with 2 page views scores around 30. An active visitor on their 3rd session with 5 pages scores roughly 55. A power user with deep, repeated engagement hits 85+.
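A minimal sketch of the composite, with each component normalized to 0-100 against a cap before weighting. The caps here (10 pages, 5 categories, 8 sessions, 20 active minutes) are illustrative assumptions, not values from the production model:

```javascript
// Weighted engagement composite (ES5 sketch).
// Each raw component is normalized to 0-100 against an illustrative cap,
// then combined with the component weights from the list above.
function normalize(value, cap) {
  return Math.min(value / cap, 1) * 100;
}

function engagementScore(pagesPerSession, categoriesExplored, sessions, activeMinutes) {
  var score =
    normalize(pagesPerSession, 10) * 0.25 +    // page depth
    normalize(categoriesExplored, 5) * 0.25 +  // content variety
    normalize(sessions, 8) * 0.30 +            // return frequency
    normalize(activeMinutes, 20) * 0.20;       // time on site
  return Math.round(score);
}
```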

Use case: content tier routing (beginner/intermediate/advanced material), feature gating for advanced tools.

Intent Score (How Likely They Are to Convert)

Three signal categories weighted by predictive value:

  • High-Value Actions (40%): Downloads, form starts, demo requests
  • Content Progression (35%): Movement from introductory to advanced content within a topic
  • Conversion Proximity (25%): Visits to pricing, trial, or contact pages

Calibration: no qualifying actions yields a baseline around 15. One asset download pushes to roughly 35. A demo request combined with a pricing page visit reaches 70+.
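One way to sketch this is capped accumulation per signal category, with the caps mirroring the 40/35/25 weights. The per-event point values are illustrative assumptions:

```javascript
// Intent score as capped accumulation per signal category (ES5 sketch).
// Point values per event are illustrative; the caps mirror the
// 40/35/25 weighting of the three signal categories.
function intentScore(highValueActions, progressionSteps, proximityVisits) {
  var actions = Math.min(highValueActions * 35, 40);     // downloads, form starts, demos
  var progression = Math.min(progressionSteps * 12, 35); // intro -> advanced moves
  var proximity = Math.min(proximityVisits * 25, 25);    // pricing/trial/contact views
  return Math.max(actions + progression + proximity, 15); // 15 = baseline
}
```

With these values, a visitor with no qualifying actions sits at the baseline of 15, and a single asset download lands at 35, matching the calibration above.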

Use case: CTA personalization (demo CTA vs. case study CTA vs. educational CTA), lead routing, sales prioritization.

Fit Score (How Well They Match Your Target Audience)

Three alignment factors:

  • Taxonomy Alignment (50%): Engagement concentration in target content categories
  • Audience Match (30%): Persona trait matching from behavioral signals (technical vs. business decision maker patterns)
  • Geo Relevance (20%): Target market fit based on location data

A visitor with strong market fit and consulting-focused engagement scores around 95. An off-market visitor with mixed content focus lands near 45.
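Because the three factors are weighted ratios, the fit calculation reduces to a weighted sum. The 0-1 alignment inputs are assumed to be computed upstream from behavioral and location signals:

```javascript
// Fit score from three weighted alignment factors (ES5 sketch).
// Each factor is a 0-1 alignment ratio; weights follow the 50/30/20 split.
function fitScore(taxonomyAlignment, audienceMatch, geoRelevance) {
  var score =
    taxonomyAlignment * 50 +  // engagement share in target categories
    audienceMatch * 30 +      // persona trait match from behavioral signals
    geoRelevance * 20;        // target market fit from location data
  return Math.round(score);
}
```

For example, fitScore(1.0, 0.9, 0.9) gives 95, in line with the strong-fit visitor described above.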

Use case: audience segmentation, account-level scoring, market prioritization for outbound programs.

Implementation Architecture

The scoring model runs in production as a set of server-side profile scripts in Adobe Target, fed by client-side data elements.

Profile Scripts

Six core scripts execute server-side on each Target request. Each script reads mbox parameters, calculates or updates one or more scores, and stores results in the visitor profile. The execution time budget is under 5ms per script — anything slower degrades page load performance.

One constraint: profile scripts must be ES5 JavaScript. No arrow functions, no const/let, no template literals.

// Simplified engagement score calculation (ES5 — no arrows, no const/let)
// Read the page depth from an mbox parameter and the session count from
// the visitor profile, defaulting to 1 on a first touch.
var pageDepth = parseInt(mbox.param('pageDepth'), 10) || 1;
var sessionCount = parseInt(user.get('sessionCount'), 10) || 1;
// Weight the components, then clamp to the 0-100 scale.
var engagementRaw = (pageDepth * 0.25) + (sessionCount * 0.30 * 10);
var engagement = Math.min(Math.round(engagementRaw), 100);
user.set('engagementScore', engagement);

Data Elements and Tracking Rules

Twelve data elements capture behavioral signals from the page: current page category, content depth indicator, scroll percentage, time on page, referral source, and others. Three event tracking rules fire on key actions — asset downloads, form starts, and form completions.

Data elements feed mbox parameters, which feed profile scripts. The chain looks like this:

Page load -> Data elements capture signals -> mbox parameters sent to Target
-> Profile scripts calculate/update scores -> Scores stored in visitor profile
-> Next request uses updated scores for experience decisioning
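Assuming at.js on the client, the first link of this chain can be sketched as a function that flattens the captured signals into a map of mbox parameters. at.js calls a global targetPageParams() hook when it builds the Target request, so the page would return this map from that hook (the parameter names here are illustrative):

```javascript
// Client-side sketch: collect behavioral signals into a flat map of
// mbox parameters. With at.js, the page would expose this as:
//   window.targetPageParams = function () { return buildMboxParams(signals); };
function buildMboxParams(signals) {
  return {
    pageCategory: signals.category || 'unknown',       // from a data element
    contentDepth: signals.depth || 'introductory',     // intro vs. advanced flag
    scrollPercent: signals.scrollPercent || 0,         // from a scroll-tracking rule
    timeOnPage: signals.timeOnPage || 0                // active seconds on page
  };
}
```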

Each visit updates the scores incrementally. The model gets more accurate with each return visit as the behavioral signal accumulates.

Using Scores for Experience Decisioning

With four continuous scores available in the visitor profile, you can build composable targeting rules:

  1. Engagement-based content tiering: Engagement >= 70 serves advanced content. 40-69 serves intermediate. Below 40 serves beginner material.
  2. Intent-based CTA personalization: Intent >= 60 AND Fit >= 70 triggers a demo CTA. Intent 40-59 shows a case study CTA. Intent below 40 presents educational content.
  3. Smart content gating: High engagement + high intent + high fit = ungated content (no form required). Everyone else sees gated content. This rewards your most engaged prospects instead of penalizing them.
  4. Lifecycle stage detection: Combine engagement, intent, and conversion history to assign visitor/lead/MQL/customer stages automatically, without manual CRM syncing.
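Rule 2 above, sketched as a plain ES5 function over the stored scores. The rules leave open what a high-intent, low-fit visitor should see; this sketch falls back to the case study CTA:

```javascript
// Intent-based CTA selection (ES5 sketch of decisioning rule 2).
// Thresholds are the ones stated in the text.
function chooseCta(intent, fit) {
  if (intent >= 60 && fit >= 70) {
    return 'demo';        // high intent AND strong audience fit
  }
  if (intent >= 40) {
    return 'caseStudy';   // warming up, or high intent without fit
  }
  return 'educational';   // low intent: nurture with educational content
}
```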

Validation and Coverage

How to know the model is working:

  • Performance: Under 5ms script execution per request, under 0.1% error rate
  • Coverage: 93-100% of visitors should have calculated scores after their first page load. Anonymous first-time visitors start with baseline scores; model accuracy increases with return visits.
  • Distribution: Score distributions should approximate a normal curve. If everyone clusters at 0 or 100, the component weights need adjustment.
  • Outcome validation: High-intent-score visitors should actually convert at higher rates. If they don’t, the intent signals are miscalibrated.
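The distribution check can run offline against exported profile scores. A minimal sketch, using an illustrative threshold of 30% edge clustering as the failure signal:

```javascript
// Offline sanity check (ES5 sketch): flag a score distribution that
// clusters at the extremes, which suggests the component weights
// need adjustment.
function distributionLooksHealthy(scores) {
  var extreme = 0;
  for (var i = 0; i < scores.length; i++) {
    if (scores[i] <= 5 || scores[i] >= 95) {
      extreme++;
    }
  }
  // Illustrative threshold: fail if over 30% of visitors sit at the edges.
  return (extreme / scores.length) <= 0.3;
}
```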

A phased rollout works best: topic affinity in week one, engagement in week two, intent in week three, fit in week four. Each dimension can be validated independently before adding the next.

Once you have continuous scores across four dimensions, personalization decisions become composable. You can combine dimensions for increasingly specific targeting without creating exponential segment combinations. The scoring model becomes the shared foundation for every personalization activity rather than a one-off audience definition.

If you’re running the scoring model across a large program, structured naming conventions become essential for tracking which activities use which scoring dimensions.

Written by Mihai Hurjui

Adobe Experience Platform Consultant
