For teams that need better reporting, attribution & growth decisions

Fix your analytics, trust your numbers, and make smarter growth decisions

I help brands and growth teams clean up messy reporting, rebuild attribution, and create analytics systems that actually support marketing decisions — not confuse them.

Know what is actually working

Fix attribution, measure incrementality, and stop relying on noisy platform metrics.

Save time with better systems

Turn manual reporting into streamlined dashboards and AI-assisted workflows.

Typical help with: Attribution redesign · MMM / incrementality · CRM & LTV analysis · AI-ready reporting

What clients usually need fixed

Attribution nobody trusts

Different channels claim credit, leadership gets conflicting numbers, and budget moves without confidence.

Manual reporting that wastes hours

Teams export data into spreadsheets every week instead of working from a reliable reporting layer.

Dashboards that show charts, not decisions

Metrics exist, but nobody can clearly see what changed, why it changed, or what to do next.

Selected outcomes

45% – ROAS overstatement found

$34.2K – misallocated budget uncovered

15% – CVR improvement

45s – reporting workflow after automation (down from 4–5 hours)

About me

I turn messy marketing data into decisions teams can trust.

I work at the intersection of measurement, attribution, lifecycle analytics, and AI systems — helping teams understand what is actually driving performance, where money is being wasted, and what to do next.

My approach is business-first: start with the decision, build the right measurement layer, and turn analysis into reporting that leaders can confidently act on.

Introduction video

How I approach analytics, attribution, and growth systems


Focus

Measurement that drives decisions

Approach

Business-first, model-second

Outcome

Clearer reporting, faster action

How I work

From business question to decision system

01 Diagnose the decision

Start with the business decision that needs better evidence, not with a dashboard request.

02 Choose the right method

Use experimentation, attribution, modeling, or automation based on the actual problem structure.

03 Translate insight clearly

Make outputs understandable for stakeholders so insights can influence action, not just analysis.

04 Build for repeatability

Turn one-off wins into reusable systems, workflows, and measurement infrastructure.

01 Capability area

Measurement & Incrementality

Projects focused on understanding what truly caused growth, not just what appeared to correlate with it.

I use testing and modeling to separate reported performance from actual business lift, so budget decisions are based on evidence rather than platform optics.

Case study 01

Geo-holdout incrementality testing

Experimentation

Problem

The business could see media conversions in-platform, but it could not tell whether spend was creating new demand or simply harvesting existing intent.

Approach

Built matched-market holdout testing, defined treatment and control logic, and evaluated incremental lift through a disciplined test design.
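The core of a matched-market holdout readout is a difference-in-differences comparison: the change in the treatment geos minus the change in the control geos. A minimal sketch with hypothetical daily-conversion numbers (the `geo`/`period` labels and all values are illustrative, not project data):

```python
import pandas as pd

# Hypothetical daily conversions for matched treatment/control markets
df = pd.DataFrame({
    "geo":    ["treat"] * 4 + ["control"] * 4,
    "period": ["pre", "pre", "post", "post"] * 2,
    "conversions": [100, 110, 160, 170, 95, 105, 110, 120],
})

means = df.groupby(["geo", "period"])["conversions"].mean()

# Difference-in-differences: change in treatment minus change in control.
# The control change estimates what would have happened without the spend.
lift = (means["treat", "post"] - means["treat", "pre"]) \
     - (means["control", "post"] - means["control", "pre"])
print(lift)  # incremental conversions per day attributable to the spend
```

The raw treatment change here is 60 conversions/day, but netting out the control trend leaves 45 truly incremental — exactly the gap between platform-reported returns and actual lift that the test is designed to expose.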

Key insight

A meaningful share of reported returns would likely have happened without the same level of ad exposure.

Business impact

Leadership gained a clearer baseline for evaluating efficiency and avoided scaling spend on overstated channel performance.

Case study 02

Bayesian media mix modeling

MMM

Problem

The team needed a broader measurement framework that could estimate channel contribution across the whole portfolio, not just last-click conversions.

Approach

Built a Bayesian MMM with response curves, adstock assumptions, and scenario planning to model expected returns from alternative budget allocations.
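Two transforms do most of the work in an MMM of this shape: adstock (spend keeps working after the day it runs) and a saturating response curve (each extra dollar buys less). A sketch of both in plain NumPy — in the Bayesian version, `decay`, `half_sat`, and `shape` would be priors fitted per channel rather than the illustrative constants used here:

```python
import numpy as np

def geometric_adstock(spend, decay=0.6):
    """Carry a decaying share of past spend into later periods."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

def hill_saturation(x, half_sat=100.0, shape=1.0):
    """Diminishing-returns curve; half_sat is the spend at 50% of max effect."""
    return x**shape / (half_sat**shape + x**shape)

spend = np.array([50, 80, 0, 0, 120], dtype=float)
effect = hill_saturation(geometric_adstock(spend))
```

Saturation is what surfaces the "some channels were saturating earlier than expected" finding: once adstocked spend sits past `half_sat`, the curve flattens and incremental budget produces visibly little additional effect.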

Key insight

Some channels were saturating earlier than expected, while others had more headroom than platform metrics suggested.

Business impact

Budget planning shifted from reactive channel-by-channel optimization toward more confident, portfolio-level decision-making.

02 Capability area

Attribution & Media Performance

Projects centered on understanding which channels deserved credit, where reporting was misleading, and how spend should move.

Case study 03

Markov chain attribution redesign

Attribution modeling

Problem

Last-click reporting gave outsized credit to lower-funnel touchpoints and obscured the role of assisting channels.

Approach

Rebuilt the attribution logic using a Markov chain framework to estimate removal effects across the customer journey.
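The removal effect behind this kind of model: treat the journey as an absorbing Markov chain, compute the baseline conversion probability, then zero out one channel and measure how much conversion drops. A minimal sketch — the channel names and transition probabilities are invented for illustration, not real campaign data:

```python
import numpy as np

# Transient states: 0=start, 1=search, 2=social; absorbing: conversion, null.
states = ["start", "search", "social"]
P = np.array([          # transitions between transient states
    [0.0, 0.5, 0.5],
    [0.0, 0.1, 0.3],
    [0.0, 0.2, 0.1],
])
to_conv = np.array([0.0, 0.4, 0.3])  # direct transition to conversion

def conversion_prob(P, to_conv):
    # Absorbing-chain identity: p = (I - P)^-1 @ to_conv, read at "start"
    return np.linalg.solve(np.eye(len(P)) - P, to_conv)[0]

base = conversion_prob(P, to_conv)

# Removal effect: delete a channel and measure the relative conversion drop
for i, name in enumerate(states[1:], start=1):
    P_r, c_r = P.copy(), to_conv.copy()
    P_r[:, i] = 0.0      # nothing flows into the removed channel
    P_r[i, :] = 0.0      # and nothing flows out of it
    c_r[i] = 0.0
    removal = 1 - conversion_prob(P_r, c_r) / base
    print(f"{name}: removal effect {removal:.2f}")
```

Normalizing the removal effects across channels yields fractional credit, which is how a channel that rarely closes in last-click can still show a large effect: removing it breaks many paths that would otherwise have converted downstream.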

Key insight

Several channels that looked weak in last-click were playing a stronger assist role than expected.

Business impact

Spend decisions became more balanced, and the team surfaced budget that had been misallocated under a flawed attribution view.

Case study 04

Media performance reallocation analysis

Budget strategy

Problem

Channel reporting was fragmented across platforms, making it hard to compare true efficiency and identify wasted spend.

Approach

Consolidated performance data, normalized KPIs, and evaluated channel performance through a business-outcome lens rather than platform defaults.

Key insight

The channels reporting the strongest in-platform returns were not always the ones producing the most defensible business value.

Business impact

Budget was redirected toward stronger-performing media levers, improving confidence in performance management and planning.

03 Capability area

Predictive & AI Systems

Projects that move beyond reporting into prediction, operational leverage, and AI-assisted decision workflows.

Case study 05

Churn prediction and retention prioritization

Predictive modeling

Problem

The team knew churn was hurting revenue, but lacked a reliable way to identify the highest-risk users early enough to intervene.

Approach

Built a predictive churn model, ranked risk factors, and translated model output into practical retention triggers and action lists.
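The shape of that pipeline — score every user, rank by risk, hand the top of the list to retention — can be sketched with scikit-learn on synthetic data. The feature names and the planted relationship (missed payments and low activity driving churn) are illustrative stand-ins, not the production model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for behavioral / payment features
X = np.column_stack([
    rng.integers(0, 2, n),   # missed_payment flag
    rng.poisson(5, n),       # sessions in last 30 days
    rng.uniform(0, 1, n),    # share of orders on discount
])
# Toy ground truth: churn driven by missed payments and low activity
logit = 2.5 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0, 1, n)
y = (logit > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

risk = model.predict_proba(X_te)[:, 1]          # per-user churn risk score
auc = roc_auc_score(y_te, risk)                 # ranking quality of the scores
top_risk = np.argsort(risk)[::-1][:50]          # highest-risk users to act on first
```

The ranked list, not the AUC, is the deliverable: retention triggers fire on the top slice of `risk`, which is what makes the effort targeted rather than blanket.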

Key insight

Behavioral and payment-related signals emerged as stronger churn indicators than broad demographic assumptions.

Business impact

Retention efforts became more targeted, helping the team focus resources where intervention was most likely to matter.

Case study 06

AI insights copilot for faster decision workflows

AI workflow system

Problem

Reporting workflows were slow, repetitive, and dependent on manual interpretation before decision-makers could act.

Approach

Built an AI-assisted workflow that combined analytics inputs, structured prompts, and decision summaries to compress reporting cycles.

Key insight

The biggest win was not “AI for AI’s sake,” but removing time spent on repetitive synthesis and packaging.

Business impact

Teams received faster, more usable summaries, reducing turnaround time and making insights easier to operationalize.

Where I’ve worked

Baz Bros

CRM Media Analyst

Jul 2025 – Apr 2026

Los Angeles, CA

→ Rebuilt attribution from last-click to Markov chain across Google, Meta, Amazon — surfaced $34.2K in misallocated budget, reoriented Q2 spend strategy

→ Built Bayesian MMM and geo-holdout frameworks to measure true channel contribution beyond platform-reported numbers

→ Ran A/B tests on creative variants measuring post-click CVR as primary signal — achieved 15% CVR improvement

→ Built AI Marketing Insights Copilot (Claude API, Python, Streamlit) — compressed 4–5 hour workflow to 45 seconds

Recyy

Marketing Analyst

Jul 2024 – Jan 2025

Lantana, FL

→ Diagnosed 80% subscriber drop-off as nurture problem, not targeting — redesigned Klaviyo flow, lifted repeat purchase rate +25%

→ Found $342 vs $38 LTV gap via RFM — shifted budget to Champions lookalikes, achieved 4.8x paid Instagram ROAS

→ Cohort repurchase window analysis drove 30% online sales lift through send-timing realignment

AI Priori

Marketing Consultant

Jun 2023 – Sep 2023

Washington, DC

→ Geo-holdout across 16 matched DMAs revealed 45% ROAS overstatement — Google Display at reported 2.9x had iROAS of 1.2x

→ Bayesian MMM (PyMC) projected 18.4% revenue uplift from reallocation without increasing total budget

→ Churn model (ROC-AUC 0.981) identified missed payments as 5x strongest churn signal — drove 18% churn reduction

Measurement & Attribution

  • Bayesian MMM (PyMC)
  • Geo-holdout incrementality
  • Markov MTA
  • Difference-in-Differences
  • iROAS / causal inference
  • A/B + Bayesian testing
  • Time series (Prophet)

Lifecycle & CRM

  • Klaviyo flow design
  • RFM & LTV segmentation
  • Churn prediction
  • Cohort & funnel analysis
  • HubSpot · Salesforce · Braze
  • Win-back & re-engagement
  • Customer journey mapping

Agentic AI & Engineering

  • Claude API
  • Python (pandas, scikit-learn, PyMC)
  • Streamlit + REST APIs
  • SQL · R
  • GA4 · Adobe Analytics · GTM
  • BigQuery · Snowflake
  • GitHub

Platforms & Visualization

  • Meta Ads · Google Ads · Amazon
  • Tableau · Looker Studio · Power BI
  • Shopify Analytics
  • Klaviyo · HubSpot · Marketo
  • Domo · Salesforce
  • SEMrush

M.Sc. Business Analytics

University of Massachusetts Amherst

3.7 GPA · 2024–2025

M.P.S. Integrated Marketing Communications

Georgetown University

3.9 GPA · 2022–2023

B.B.A. Business Administration

Lancaster University

3.5 GPA · 2016–2019

Let’s talk

Need sharper measurement, better attribution, or an AI-ready analytics system?

I help teams turn messy reporting into decision-ready analytics with clear business impact.