Example outputs

What Fidcern review outputs look like

These are example Fidcern outputs designed to support review before activity is treated as commercially trusted. Each artefact is built to be scannable, reviewable, and defensible in an internal meeting.

Illustrative mock output only. Actual deliverables vary by workflow and scope.

Finding Cards

Each card is one reviewable unit from a Workflow Confidence Review. Cards can be discussed in a review meeting, shared with a stakeholder, or attached to an evidence pack.

Confirmed — clean activity
FC-001 Confirmed

App engagement path for [Sponsor] matchday activation verified as clean across all four dimensions.

✓ All four verification checks passed
  • Device and session coherence consistent with genuine user behaviour
  • Eligibility criteria met: ticket holder, app-registered, first-time claimant
  • Reward issuance aligned with stated campaign rules
  • Engagement strength above the counting threshold for this surface

This path can carry full commercial weight in sponsor reporting.

Include in counted activity. No further review required.
Workflow: Matchday Fan Activation · Scope: [Sponsor] Q1 Campaign · Surface: App engagement · Period: Jan 4 – Feb 15 2026 · Illustrative example
Flagged — exceptions requiring review
FC-002 High Confidence

34% of coupon claims in [Retailer] loyalty workflow show patterns consistent with duplicate participation across linked accounts.

⚠ Did the activity meet the intended rules?
  • Repeated claim timing within 8-minute intervals across 3 linked device indicators
  • Same reward redeemed under different account tokens sharing session attributes
  • Eligibility rule specifies one claim per unique participant per campaign period

34% of claimed value in this workflow may not be defensible in supplier billing.

Exclude from counted activity pending manual review. Flag for rule-exception audit before supplier reporting.
Workflow: Loyalty Reward Claim · Scope: [Retailer] Spring Promotion · Surface: Coupon redemption · Period: Mar 1 – Mar 28 2026 · Illustrative example
FC-003 Confirmed

Stadium scan-to-reward path for [Sponsor] activation verified as clean. Reward issuance consistent with stated eligibility.

✓ Should the reward have been given?
  • QR scan event confirmed at venue with valid timestamp and geofence match
  • Reward issued to single account; no linked-account duplication detected

Reward cost is justified. Path supports sponsor-facing reporting.

Include in counted activity. Suitable for sponsor renewal evidence.
Workflow: Matchday Activation · Scope: [Sponsor] Scan Campaign · Surface: Stadium scan · Period: Feb 8 – Apr 5 2026 · Illustrative example
FC-004 Moderate Confidence

Cluster of 12 loyalty-point accruals share a device fingerprint but use distinct account tokens. Pattern consistent with multi-accounting.

⚠ Was the participant real?
  • 12 accounts sharing 2 device indicators, all created within a 72-hour window
  • Point accrual velocity 6× baseline for this promotion tier
  • No purchase events linked to 9 of 12 accruals

If representative, this cluster inflates reported participation by ~7% for this surface.

Hold from counted activity. Request [Retailer] confirm whether accounts are known test or staff accounts.
Workflow: Loyalty Point Accrual · Scope: [Retailer] Q1 Bonus Campaign · Surface: Loyalty platform · Period: Jan 15 – Feb 28 2026 · Illustrative example
FC-005 High Confidence

Campaign rule specifies "new app registrants only," but 23% of reward recipients had prior app activity predating the campaign by 4+ months.

⚠ Did the activity meet the intended rules?
  • Eligibility mismatch: 187 of 814 recipients show pre-campaign app activity
  • Prior activity includes login, browse, and wishlist events — not just dormant accounts
  • Campaign brief defines "new" as first app registration during campaign window

23% of reported "new registrant" activity may not hold up if the sponsor audits against the stated rule.

Review rule exception before supplier reporting. Consider whether "re-engaged" should be a separate category.
Workflow: New Registrant Reward · Scope: [Sponsor] App Launch Campaign · Surface: App registration · Period: Feb 1 – Mar 15 2026 · Illustrative example
Borderline — review recommended
FC-006 Flagged for Review

Digital interaction path shows valid participation indicators, but session duration and interaction depth fall below the confidence boundary for "meaningful engagement."

? Is the result strong enough to count?
  • Session duration: median 4.2 seconds (workflow baseline: 18 seconds)
  • No scroll or secondary interaction detected in 71% of sessions
  • Participant identity and eligibility otherwise confirmed

If counted as full engagement, this surface inflates the workflow's interaction metric by an estimated 14%.

Include with adjusted confidence weighting. Consider revising the "meaningful engagement" threshold before next campaign.
Workflow: Digital Sponsor Activation · Scope: [Sponsor] Content Campaign · Surface: Digital interaction · Period: Mar 10 – Apr 2 2026 · Illustrative example
FC-007 Moderate Confidence

Reward redemption cluster shows 8 claims from the same network segment within a 12-minute window. Pattern is atypical but not conclusively inorganic.

⚠ Should the reward have been given?
  • 8 redemptions from same /24 network range between 14:03 and 14:15
  • Accounts are distinct with different registration dates and contact tokens
  • Timing cluster is 3.2× above daily baseline for this reward type

Inconclusive. Could be legitimate (e.g. corporate office) or coordinated. Risk is bounded to ~£340 in reward value.

Flag for next review cycle. Monitor whether the same network segment appears in future campaigns.
Workflow: Reward Redemption · Scope: [Retailer] Weekly Promotion · Surface: Coupon redemption · Period: Mar 18 – Mar 24 2026 · Illustrative example
FC-008 High Confidence

41 reward issuances in this workflow are linked to accounts that had already received the same reward in a prior campaign period. Eligibility rules do not permit repeat claims.

⚠ Did the activity meet the intended rules?
  • 41 accounts matched to prior-period reward records via hashed contact token
  • Campaign rule: "one per customer lifetime, first claim only"
  • Prior claims confirmed across 3 separate campaign cycles

Repeat issuance represents approximately £1,230 in rewards that should not have been released under the stated rules.

Exclude from counted activity. Review whether eligibility-check logic is enforcing the lifetime cap correctly.
Workflow: First-Time Buyer Reward · Scope: [Sponsor] Acquisition Campaign · Surface: Reward issuance · Period: Feb 10 – Mar 31 2026 · Illustrative example
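The repeat-claim check described in FC-008 (matching hashed contact tokens against prior-period reward records) can be sketched as below. The field names, sample data, and choice of SHA-256 are illustrative assumptions, not Fidcern's actual schema or implementation.

```python
import hashlib

def token_hash(contact: str) -> str:
    """Hash a contact identifier so records can be matched without storing it."""
    return hashlib.sha256(contact.lower().encode()).hexdigest()

# Hashed tokens from prior campaign periods (hypothetical sample data).
prior_period_tokens = {token_hash(c) for c in ["alice@example.com", "bob@example.com"]}

# Current-period reward issuances to screen against the lifetime cap.
current_issuances = [
    {"issuance_id": "R-101", "contact": "alice@example.com"},  # repeat claim
    {"issuance_id": "R-102", "contact": "carol@example.com"},  # first claim
]

# Flag any issuance whose hashed token already received this reward.
repeats = [i["issuance_id"] for i in current_issuances
           if token_hash(i["contact"]) in prior_period_tokens]
print(repeats)  # → ['R-101']
```

Hashing before matching lets prior-period records be compared without retaining raw contact details, which is one common way a "one per customer lifetime" rule is enforced.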

Touchpoint Contribution View

This view shows how different touchpoints in a sponsor-funded workflow contribute to qualified, verified activity versus total recorded activity. This is a contribution view, not a causal attribution claim.

[Sponsor] Matchday Activation — App + Stadium + Digital

Illustrative — based on representative workflow patterns

68% of recorded activity met all verification criteria

Recorded: 14,280 → Eligible: 11,634 → Verified: 9,710 → Counted: 9,487

Verified = passed all verification criteria. Counted = passed verification and met the commercial-confidence threshold for sponsor reporting. The difference (223 events) reflects activity that was technically verified but held for review due to borderline confidence scoring.
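As a minimal sketch, the funnel above can be read as stage-to-stage retention. Only the four counts come from the mock output; the code itself is illustrative, not Fidcern tooling.

```python
# Illustrative funnel counts from the mock output above.
funnel = [
    ("Recorded", 14_280),
    ("Eligible", 11_634),
    ("Verified", 9_710),
    ("Counted", 9_487),
]

recorded = funnel[0][1]
for (name, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{name:>8}: {count:>6}  "
          f"({count / prev:.1%} of previous stage, "
          f"{count / recorded:.1%} of recorded)")
```

Verified over recorded (9,710 / 14,280) reproduces the 68% headline, and verified minus counted gives the 223 events held for review.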

Contribution to counted activity, by surface:
  • App engagement: 41%
  • Matchday scan: 27%
  • Digital interaction: 13%
  • Reward claim: 12%
  • Loyalty confirmation: 7%

Contribution percentages show what proportion of final counted activity came from each surface. This is a participation-based contribution view, not a causal attribution claim.

  • App engagement contributed 41% of counted activity but had a 27% gap between recorded and verified, a substantial drop-off for the workflow's highest-volume surface.
  • Matchday scan had the highest recorded-to-verified ratio (93%), suggesting high-confidence physical verification at this touchpoint.
  • Digital interaction showed the widest gap (56% drop-off), driven primarily by session-depth and engagement-threshold failures flagged in FC-006.
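Each per-surface figure above is just the ratio of verified to recorded events. A minimal sketch, using hypothetical per-surface counts chosen only to reproduce the stated 27% gap, 93% ratio, and 56% drop-off (none of these counts appear in the mock output):

```python
# Hypothetical per-surface counts, chosen to match the stated ratios.
surfaces = {
    "app_engagement":      {"recorded": 5_000, "verified": 3_650},
    "matchday_scan":       {"recorded": 2_800, "verified": 2_604},
    "digital_interaction": {"recorded": 2_900, "verified": 1_276},
}

for name, s in surfaces.items():
    ratio = s["verified"] / s["recorded"]
    print(f"{name}: verified/recorded = {ratio:.0%}, drop-off = {1 - ratio:.0%}")
```

The drop-off is simply 1 minus the recorded-to-verified ratio, so a 93% ratio and a 7% drop-off describe the same surface.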

How teams use these outputs

  • Review meetings: walk through finding cards with stakeholders
  • Sponsor recaps: include evidence packs in renewal discussions
  • Supplier billing: defend counted activity before invoicing
  • Next campaign: apply control recommendations to improve quality

See what your own workflow evidence looks like.

The next step is not a larger dashboard. It is a bounded review of one workflow. We scope the workflow, review the evidence, and show you what would count.

No commitment required. Start with one workflow. We reply within 24 hours.
