The MQL Prediction Model
- The model connects content investment to a specific number of MQLs per month before you publish a single page
- Three scenarios (conservative, baseline, optimistic) produce a range, not a point estimate
- Every assumption is visible, debatable, and adjustable
- Forecasting changes the organic program from reporting on weather to speaking in capital allocation language
Most organic programs can’t answer the simplest question a CFO will ask: “If we invest in this content, how many leads will it produce?”
They can show traffic projections. They can wave at keyword volumes. But connecting a proposed content investment to a specific number of marketing-qualified leads per month? Most SEO teams go quiet. The conversation stalls, the budget gets cut, and organic stays an experiment instead of becoming infrastructure.
The MQL Prediction Model exists to answer that question before you publish a single page.
It’s not a machine learning black box. It’s not a guarantee of rankings. It’s a structured forecast that chains together search demand, realistic ranking assumptions, and your actual funnel conversion data to produce a time-based lead projection with scenario ranges so you can plan honestly.
Why forecasting changes the conversation
When you walk into a budget meeting and say “organic traffic grew 30% last quarter,” you’re reporting on weather. Interesting, not actionable.
When you walk in and say “if we build the healthcare segment cluster, 9 pages, the model projects 35 to 55 MQLs per month from that segment within 8 months, at a blended cost per lead of €40 compared to your current paid CAC of €280,” you’re speaking in capital allocation language. That’s a business case, not an SEO report.
Forecasting changes the organic program in three ways.
It justifies investment before results arrive. Content takes 4 to 8 months to mature. Without a forecast, you’re asking leadership to fund a channel on faith. With one, you’re asking them to fund a channel with explicit assumptions they can scrutinize and approve.
It forces honest prioritization. When you model MQL output per segment, you quickly see that some segments produce 5x the leads of others at the same production cost. That changes what you build first.
It creates accountability in both directions. If the forecast says 40 MQLs by month 6 and you’re at 15, you have a specific conversation. Did traffic underperform the assumption? Did conversion underperform? Did we publish late? That’s a calibration discussion, not a blame conversation.
The calculation chain
The model works in layers. Each layer takes an input from the layer above and applies a configured assumption. Every assumption is visible, debatable, and adjustable. That’s the point.
Layer 1: Search demand
Start with keyword search volume for each page in your planned content portfolio. Adjust for seasonality where monthly data exists. For future months, scale using category demand trends so the forecast isn’t flat forever.
Layer 2: Click-through rate
Apply a CTR curve based on target ranking position. If you have Google Search Console data, calibrate this to your actual click-through rates. If not, use industry benchmarks as a starting point and refine as data comes in. Pages competing with AI Overviews or rich SERP features get a CTR discount, because less of the click share goes to a traditional organic result.
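As a sketch, the CTR layer can be a simple position-to-rate lookup. The benchmark rates and the 30% AI Overview discount below are illustrative placeholders, not values from the model; calibrate both against your own Search Console data.

```python
# Illustrative CTR benchmarks by organic position. These numbers are
# assumptions -- replace them with rates calibrated from Search Console.
CTR_BENCHMARKS = {1: 0.28, 2: 0.15, 3: 0.11, 4: 0.08, 5: 0.07, 6: 0.05, 7: 0.04}

def expected_ctr(position: int, ai_overview_present: bool = False) -> float:
    """Assumed CTR for a target position, discounted when AI Overviews
    or rich SERP features absorb part of the click share."""
    base = CTR_BENCHMARKS.get(position, 0.02)  # long-tail fallback
    return base * 0.7 if ai_overview_present else base  # 30% discount: an assumption

print(expected_ctr(4))                 # 0.08
print(round(expected_ctr(4, True), 3)) # 0.056
```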
Layer 3: Sessions
Combine adjusted volume and CTR to get forecasted sessions. Apply a secondary keyword multiplier per funnel stage, because a well-built page earns traffic from more than just its primary keyword. Bottom-funnel pages tend to have a tight keyword focus. Top-funnel pages can attract 2 to 3x their primary keyword volume from related queries.
Layer 4: Maturity
New content doesn’t perform on day one. The model applies a maturity curve that ramps sessions over months after publish, for example:
- Month 1: roughly 20% of steady-state traffic
- Month 3: roughly 50%
- Month 6 to 7: full run-rate
Pages being optimized (not newly published) use a faster ramp because the URL already has history.
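In code, the maturity layer is just a ramp table of steady-state fractions. The curve shapes below are assumptions consistent with the rough percentages above, with a faster ramp for optimized pages:

```python
# Fractions of steady-state traffic by month since publish. The exact
# shapes are assumptions -- tune them against your own ramp data.
NEW_PAGE_RAMP = {1: 0.20, 2: 0.35, 3: 0.50, 4: 0.70, 5: 0.85, 6: 1.00}
OPTIMIZED_PAGE_RAMP = {1: 0.50, 2: 0.75, 3: 1.00}  # existing URLs ramp faster

def matured_sessions(steady_state: float, month: int, is_new: bool = True) -> float:
    """Scale steady-state sessions by the maturity curve for a given month."""
    ramp = NEW_PAGE_RAMP if is_new else OPTIMIZED_PAGE_RAMP
    # Beyond the end of the curve, the page is at full run-rate.
    factor = ramp.get(month, 1.0) if month >= 1 else 0.0
    return steady_state * factor

print(round(matured_sessions(38, 1), 1))  # 7.6  -> ~20% of steady state
print(round(matured_sessions(38, 6), 1))  # 38.0 -> full run-rate
```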
Layer 5: Conversion
Apply your MQL session rate per funnel stage. This is the percentage of organic sessions on that content type that become qualified leads. Bottom-funnel content converts at a higher rate than top-funnel. If you don’t know your rates yet, start with conservative estimates and calibrate quarterly against actual lead data.
Layer 6: Scenarios
Run the model at three assumption levels:
- Conservative: lower traffic capture and conversion assumptions
- Baseline: your best current estimates
- Optimistic: higher assumptions for both
This produces a range, not a point estimate. A single number implies false precision that nobody should trust.
The output: a monthly MQL forecast per scenario, rolled up across your entire content portfolio or sliced by segment, funnel stage, or content type.
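Chained together, the layers reduce to a few lines. Everything here is a sketch: the `Page` fields mirror the assumptions described above, and the ±25% scenario factors are placeholders for your own conservative and optimistic settings.

```python
from dataclasses import dataclass

@dataclass
class Page:
    volume: int            # monthly search volume for the primary keyword
    ctr: float             # assumed CTR at the target position
    secondary_mult: float  # secondary keyword multiplier for the funnel stage
    mql_rate: float        # MQL-per-session rate for the funnel stage

def monthly_mqls(page: Page, maturity: float) -> float:
    """Layers 2-5: volume -> sessions -> matured sessions -> MQLs."""
    sessions = page.volume * page.ctr * page.secondary_mult * maturity
    return sessions * page.mql_rate

# Layer 6: scale the baseline up and down. The +/-25% factors are
# illustrative, not a recommendation.
SCENARIOS = {"conservative": 0.75, "baseline": 1.0, "optimistic": 1.25}

def forecast_range(pages: list[Page], maturity: float) -> dict[str, float]:
    base = sum(monthly_mqls(p, maturity) for p in pages)
    return {name: round(base * f, 1) for name, f in SCENARIOS.items()}

pillar = Page(volume=480, ctr=0.08, secondary_mult=1.4, mql_rate=0.045)
print(forecast_range([pillar], maturity=1.0))
# {'conservative': 1.8, 'baseline': 2.4, 'optimistic': 3.0}
```

Rolling up a whole portfolio is just passing more `Page` objects, optionally grouped by segment or funnel stage before summing.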
A simplified example
Say you’re a B2B SaaS company selling procurement software, and you’ve identified the construction industry as a priority ABM segment.
Here’s how the model works for one page in that cluster.
Your pillar page targets “construction procurement software” with 480 monthly searches. You’re forecasting a position 4 ranking within 6 months (realistic for a focused page in a niche with limited competition). At position 4, your calibrated CTR is roughly 8%. That gives you about 38 sessions per month at steady state.
But the page won’t hit steady state immediately. The maturity curve says month 1 delivers 20% of that, month 3 delivers 55%, and month 6 reaches full run-rate. So month 3 looks like roughly 21 sessions, not 38.
Your MOFU MQL session rate is 4.5% (calibrated from existing content performance). A secondary keyword multiplier of 1.4x accounts for related queries. At month 6, that’s approximately 2.4 MQLs per month (38 sessions x 1.4 secondary keywords x 4.5% MQL rate) from this single page.
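The arithmetic for this single page can be checked directly, using the example’s own figures:

```python
volume, ctr = 480, 0.08           # "construction procurement software", position 4
steady_sessions = volume * ctr    # sessions/month at steady state

month_3 = steady_sessions * 0.55  # maturity curve: month 3 at ~55%
print(round(steady_sessions))     # 38
print(round(month_3))             # 21

# Secondary keyword multiplier x MOFU MQL session rate
mqls = steady_sessions * 1.4 * 0.045
print(round(mqls, 1))             # 2.4
```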
That sounds small. But multiply it across the full cluster.
MQL session rates by funnel stage (applied to all pages in that stage):
- BOFU: 7.2%
- MOFU: 4.5%
- TOFU: 1.8%
Most companies don’t have per-page MQL rates, and you don’t need them to start. Instead, segment your organic conversion data by funnel stage and apply those rates across all pages in that stage. As your tracking matures, you can refine rates per page or per content type. Calibrate quarterly as actuals come in.
| Page | Funnel | Volume | Target pos. | Sessions | MQLs/mo |
|---|---|---|---|---|---|
| Construction procurement software | MOFU | 480 | 4 | 53 | 2.4 |
| Construction procurement solutions | BOFU | 320 | 3 | 42 | 3.0 |
| [Product] vs [Competitor] construction | BOFU | 210 | 2 | 38 | 2.7 |
| Construction procurement ROI calculator | BOFU | 170 | 3 | 24 | 1.7 |
| How to streamline construction purchasing | MOFU | 390 | 5 | 38 | 1.7 |
| Construction material cost tracking guide | MOFU | 260 | 5 | 25 | 1.1 |
| Procurement challenges in construction 2026 | TOFU | 720 | 6 | 54 | 1.0 |
| Construction supply chain management trends | TOFU | 580 | 7 | 36 | 0.6 |
| What is e-procurement in construction | TOFU | 440 | 5 | 43 | 0.8 |
| Total (construction segment) | — | 3,570 | — | 353 | 15.0 |
Baseline scenario at steady state (month 6+). Sessions include secondary keyword multiplier. MQL rates are per funnel stage, not per page.
Nine pages across the construction segment, each contributing between 0.6 and 3 MQLs depending on funnel position and volume, can produce 15 to 25 MQLs per month from one segment. At an average deal size of €45K and a 20% MQL-to-close rate, that’s €135K to €225K in expected monthly revenue from nine pages.
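The revenue math follows directly from the example’s own 20% close rate and €45K deal size:

```python
avg_deal, close_rate = 45_000, 0.20

for mqls in (15, 25):                    # scenario range for the segment
    expected_deals = mqls * close_rate   # expected closed deals per month
    revenue = expected_deals * avg_deal
    print(mqls, "MQLs/mo ->", round(revenue))  # 135000 and 225000
```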
Now model three segments. Organic isn’t a cost center. It’s a growth engine with a quantifiable return.
What the model doesn’t do
Being clear about boundaries builds more trust than overselling.
It doesn’t promise rankings. The model says: “if we reach position X, here’s the implied lead volume.” That’s a planning assumption, not a prediction of Google’s behavior.
It doesn’t model individual CRO changes. Adding a trust badge, improving page speed, or rewriting a CTA might improve conversion. But attributing a specific MQL increment to each tactical change creates false precision. The model holds your baseline conversion rates. Tactical improvements show up as actual performance exceeding the forecast.
It doesn’t replace CRM data. The forecast tells you what organic should produce. Your CRM tells you what it did produce. Comparing the two is how you calibrate the model over time and get sharper with each quarter.
It doesn’t run on autopilot. The assumptions need human judgment: which position to target, which conversion rates to use, how aggressively to set scenarios. The model is a thinking tool, not a magic spreadsheet.
The real value: prioritization and course correction
A forecast without action is a spreadsheet. The MQL Prediction Model is useful because of what it makes visible between the numbers.
Once you have a forecast running, you’re comparing it against actuals every month. That comparison is where prioritization happens.
If the healthcare segment is tracking below forecast, you have a specific diagnostic conversation. Is traffic the issue, or conversion?
If traffic is lagging, the work might be:
- Internal linking improvements to pass more authority to the cluster
- Backlink acquisition to strengthen the pillar page
- EEAT signals: author credibility, external mentions, client proof points
- Technical fixes that are blocking indexation or crawl efficiency
If conversion is lagging, the work might be:
- Landing page optimization and stronger CTAs
- Better content-to-intent alignment
- Social proof and trust signals on key pages
The model doesn’t tell you what to fix. It tells you where to look and what matters most right now.
This is the shift from backward-looking SEO (audit what happened, react to it) to forward-looking SEO (set a target, measure against it, prioritize the work that closes the gap).
Content production is one lever. Internal linking is another. Backlinks, technical improvements, EEAT building, conversion rate optimization: they all matter. But they don’t all matter equally at any given moment. The forecast-versus-actuals gap tells you which lever to pull next.
Most organic programs operate in audit mode. Here’s what’s broken, let’s fix it. The MQL Prediction Model operates in planning mode. Here’s where we’re going, here’s where we are, and here’s the highest-impact work to close the distance.
That’s the difference between SEO as a maintenance function and SEO as a growth engine.
From forecast to funding
The most powerful use of the MQL Prediction Model isn’t the number itself. It’s the conversation the number enables.
When you can show a CFO that investing €30K in content production for two segments is projected to produce 40 to 70 MQLs per month within 8 months, with explicit assumptions they can challenge, organic moves from “nice to have” to “funded growth channel”.
And when the first quarter of actual data comes in and you can show forecast vs. actuals, calibrate the assumptions, and present an updated projection, you’ve built something most organic programs never achieve: a credible, evolving business case that compounds trust with every review cycle.
This is how organic earns a seat at the planning table. Not by reporting traffic, but by forecasting revenue.