Example outputs
These are example Fidcern outputs designed to support review before activity is treated as commercially trusted. Each artefact is built to be scannable, reviewable, and defensible in an internal meeting.
Illustrative mock output only. Actual deliverables vary by workflow and scope.
Each card is one reviewable unit from a Workflow Confidence Review. Cards can be discussed in a review meeting, shared with a stakeholder, or attached to an evidence pack.
App engagement path for [Sponsor] matchday activation verified as clean across all four dimensions.
✓ All four verification checks passed
This path can carry full commercial weight in sponsor reporting.
34% of coupon claims in [Retailer] loyalty workflow show patterns consistent with duplicate participation across linked accounts.
⚠ Did the activity meet the intended rules?
34% of claimed value in this workflow may not be defensible in supplier billing.
Stadium scan-to-reward path for [Sponsor] activation verified as clean. Reward issuance consistent with stated eligibility.
✓ Should the reward have been given?
Reward cost is justified. Path supports sponsor-facing reporting.
Cluster of 12 loyalty-point accruals share a device fingerprint but use distinct account tokens. Pattern consistent with multi-accounting.
⚠ Was the participant real?
If representative, this cluster inflates reported participation by ~7% for this surface.
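A pattern like the device-fingerprint cluster above can be surfaced with a simple grouping pass: collect the distinct account tokens seen per fingerprint and flag any fingerprint used by suspiciously many accounts. This is a minimal illustrative sketch, not Fidcern's actual detection logic; the field names and the threshold are hypothetical.

```python
from collections import defaultdict

# Hypothetical accrual records; "device_fp" and "account" are illustrative field names.
events = [{"device_fp": "fp-a91", "account": f"acct-{i}"} for i in range(12)]
events += [{"device_fp": "fp-b02", "account": "acct-solo"}]

def multi_account_clusters(events, min_accounts=5):
    """Flag device fingerprints shared by many distinct account tokens."""
    accounts_by_fp = defaultdict(set)
    for e in events:
        accounts_by_fp[e["device_fp"]].add(e["account"])
    return {fp: accts for fp, accts in accounts_by_fp.items()
            if len(accts) >= min_accounts}

flagged = multi_account_clusters(events)
print({fp: len(accts) for fp, accts in flagged.items()})  # {'fp-a91': 12}
```

Grouping by fingerprint first keeps the check linear in the number of events, so it scales to a full campaign period.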
Campaign rule specifies "new app registrants only," but 23% of reward recipients had prior app activity predating the campaign by 4+ months.
⚠ Did the activity meet the intended rules?
23% of reported "new registrant" activity may not hold up if the sponsor audits against the stated rule.
Digital interaction path shows valid participation indicators, but session duration and interaction depth fall below the confidence boundary for "meaningful engagement."
? Is the result strong enough to count?
If counted as full engagement, this surface inflates the workflow's interaction metric by an estimated 14%.
Reward redemption cluster shows 8 claims from the same network segment within a 12-minute window. Pattern is atypical but not conclusively inorganic.
⚠ Should the reward have been given?
Inconclusive. Could be legitimate (e.g. corporate office) or coordinated. Risk is bounded to ~£340 in reward value.
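The kind of burst described above (many claims from one network segment inside a short window) can be found with a sliding window over per-segment timestamps. This sketch is illustrative only; the segment labels, timestamps, and thresholds are hypothetical, not drawn from a real workflow.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical claim records: (network segment, claim timestamp).
claims = [("10.4.2.0/24", datetime(2024, 5, 1, 14, m)) for m in range(8)]
claims += [("10.9.0.0/24", datetime(2024, 5, 1, 9, 0))]

def dense_claim_windows(claims, window=timedelta(minutes=12), min_claims=8):
    """Flag segments with >= min_claims falling inside one sliding window."""
    by_segment = defaultdict(list)
    for segment, ts in claims:
        by_segment[segment].append(ts)
    flagged = {}
    for segment, stamps in by_segment.items():
        stamps.sort()
        left = 0
        for right in range(len(stamps)):
            # Shrink the window until it spans at most `window` of time.
            while stamps[right] - stamps[left] > window:
                left += 1
            count = right - left + 1
            if count >= min_claims:
                flagged[segment] = max(flagged.get(segment, 0), count)
    return flagged

print(dense_claim_windows(claims))  # {'10.4.2.0/24': 8}
```

Note that a flag here is only a prompt for review: as the card says, a dense window can still be legitimate (e.g. one office on shared NAT).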
41 reward issuances in this workflow are linked to accounts that had already received the same reward in a prior campaign period. Eligibility rules do not permit repeat claims.
⚠ Did the activity meet the intended rules?
Repeat issuance represents approximately £1,230 in rewards that should not have been released under the stated rules.
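A repeat-issuance check like the one above reduces to a membership test against the prior period's recipient set. The sketch below is a hedged illustration with made-up account IDs and reward values; it is not Fidcern's implementation.

```python
# Hypothetical data: prior-period recipients and current-period issuances.
prior_period_recipients = {"acct-101", "acct-102", "acct-103"}
current_issuances = [
    ("acct-101", 30.0),  # repeat claim
    ("acct-104", 30.0),  # first-time claim
    ("acct-102", 30.0),  # repeat claim
]

def repeat_issuances(issuances, prior_recipients):
    """Return issuances that went to accounts already rewarded in a prior period."""
    return [(acct, value) for acct, value in issuances
            if acct in prior_recipients]

repeats = repeat_issuances(current_issuances, prior_period_recipients)
print(len(repeats), sum(value for _, value in repeats))  # 2 60.0
```

Summing the values of the flagged issuances is how a bounded exposure figure (like the £1,230 above) can be produced.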
This view shows how different touchpoints in a sponsor-funded workflow contribute to qualified, verified activity versus total recorded activity. This is a contribution view, not a causal attribution claim.
Illustrative — based on representative workflow patterns
Verified = passed all verification criteria. Counted = passed verification and met the commercial-confidence threshold for sponsor reporting. The difference (223 events) reflects activity that was technically verified but held for review due to borderline confidence scoring.
Contribution percentages show what proportion of final counted activity came from each surface.
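The verified/counted gap and the contribution shares are simple arithmetic over per-surface tallies. The surface names and event counts below are invented for illustration (chosen so the verified-minus-counted gap matches the 223-event figure mentioned above); only the method is the point.

```python
# Hypothetical per-surface tallies: events that passed verification vs.
# events that also cleared the commercial-confidence threshold.
surfaces = {
    "app":     {"verified": 1400, "counted": 1300},
    "stadium": {"verified": 600,  "counted": 540},
    "web":     {"verified": 300,  "counted": 237},
}

total_verified = sum(s["verified"] for s in surfaces.values())
total_counted = sum(s["counted"] for s in surfaces.values())
held_for_review = total_verified - total_counted  # verified but not counted

# Contribution = each surface's share of final counted activity.
contribution = {name: round(100 * s["counted"] / total_counted, 1)
                for name, s in surfaces.items()}

print(held_for_review)  # 223
print(contribution)     # {'app': 62.6, 'stadium': 26.0, 'web': 11.4}
```

Because shares are taken over counted activity only, they describe participation in the qualified total, not a causal effect of any surface.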
Walk through finding cards with stakeholders
Include evidence packs in renewal discussions
Defend counted activity before invoicing
Apply control recommendations to improve quality
The next step is not a larger dashboard. It is a bounded review of one workflow. We scope the workflow, review the evidence, and show you what would count.
No commitment required. Start with one workflow. We reply within 24 hours.