Problem
When sp_StatUpdate runs against a database that has multi-statement TVFs (MSTVFs) referenced in frequently-executed queries, updating the base table statistics does nothing to improve the CE behavior for those queries. MSTVFs carry a fixed cardinality estimate (1 row under the legacy CE; 100 rows under the new CE in SQL Server 2014+; SQL Server 2017's interleaved execution can substitute actual row counts, but only for some query shapes, so large outputs are still commonly misestimated). The optimizer cannot see inside the MSTVF, so no histogram on the input tables helps.
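A minimal repro makes the mismatch concrete (the table and function names here are hypothetical):

```sql
-- Hypothetical schema: dbo.Orders and an MSTVF wrapping a filter on it.
CREATE FUNCTION dbo.fn_OrdersForCustomer (@CustomerId int)
RETURNS @out TABLE (OrderId int PRIMARY KEY)
AS
BEGIN
    INSERT INTO @out (OrderId)
    SELECT OrderId
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;   -- may actually return 50,000 rows
    RETURN;
END;
GO

-- In the estimated plan, the MSTVF operator shows the fixed guess
-- (100 rows under the 2014+ CE), not the actual 50,000 -- and no
-- UPDATE STATISTICS on dbo.Orders changes that number.
SELECT ol.*
FROM dbo.fn_OrdersForCustomer(42) AS f
JOIN dbo.OrderLines AS ol
    ON ol.OrderId = f.OrderId;
```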
The problem: sp_StatUpdate does not detect this scenario and does not warn that statistics updates on the underlying tables will have zero cardinality benefit for queries that join through MSTVFs. Operators who schedule sp_StatUpdate expecting to fix poor cardinality estimates will be confused when plan quality does not improve after a full-scan update on a table that is primarily accessed through MSTVFs.
More directly actionable: sys.query_store_runtime_stats query data is in scope (Phase 6). MSTVF detection is possible via sys.sql_modules + sys.objects for objects of type TF (table-valued function) with multi-statement bodies. This intersection — "top CPU queries in QS that reference objects with MSTVF inputs" — is calculable but never computed.
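The MSTVF-enumeration half of that intersection is a single catalog query; a sketch (inline TVFs are type 'IF' and do not have the fixed-estimate problem):

```sql
-- Multi-statement TVFs are type 'TF' in sys.objects; joining sys.sql_modules
-- confirms the object is a T-SQL module (not CLR) and exposes the body if needed.
SELECT o.object_id,
       SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name
FROM sys.objects AS o
JOIN sys.sql_modules AS m
    ON m.object_id = o.object_id
WHERE o.type = 'TF';
```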
Evidence
-- sys.query_store_plan_feedback, sys.query_store_runtime_stats are queried in Phase 6
-- But sys.objects for type 'TF' (table function) is never consulted
-- No check: does this frequently-executed query reference a MSTVF that will produce
-- a fixed cardinality regardless of stats freshness?
-- SQL Server 2014 white paper documented behavior:
-- "Multi-statement TVFs carry a fixed cardinality value regardless of actual output"
-- SQL Server 2014's new CE raised the default MSTVF estimate, but it is still fixed (100 rows, not actual)
A full FULLSCAN update on a 500M-row table with a 90-second runtime will not improve cardinality estimates for the join above an MSTVF that returns 50,000 rows but is estimated at 100. The expensive stats run was effectively wasted for those query patterns.
Proposed Fix
In Phase 6 (Query Store enrichment), when @QueryStorePriority = 'Y', add a supplemental check:
-- For each object_id in #stat_candidates, check whether any top-N QS queries
-- referencing that object_id also reference MSTVF inputs
SELECT DISTINCT q.query_id, o.name AS mstvf_name
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON q.query_text_id = qt.query_text_id
JOIN sys.objects AS o
    ON o.type = 'TF'  -- multi-statement TVF
WHERE qt.query_sql_text LIKE '%' + o.name + '%';  -- text match; parsing the plan XML is more precise
Emit an advisory in the pre-execution output: "NOTE: [table] is referenced by queries that also use multi-statement TVFs. Statistics update will not improve cardinality estimates for those query patterns. Consider converting MSTVFs to inline TVFs or adding OPTION(RECOMPILE) to affected queries."
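For reference, the remediation the advisory recommends can be sketched as follows (function and table names hypothetical):

```sql
-- Inline TVF: a single RETURN SELECT. The optimizer expands it into the
-- calling query, so base-table histograms drive the estimate instead of
-- a fixed guess.
CREATE FUNCTION dbo.fn_OrdersForCustomer_Inline (@CustomerId int)
RETURNS TABLE
AS
RETURN
    SELECT OrderId
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
```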
Add has_mstvf_dependency flag to #stats_to_process for reporting.
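A sketch of the flag population and advisory emission, assuming #stats_to_process carries an object_id column alongside the new flag:

```sql
-- Flag stats candidates whose table name co-occurs with an MSTVF name in
-- Query Store query text. Text matching is approximate; plan XML parsing
-- would be stricter but more expensive.
UPDATE s
SET    s.has_mstvf_dependency = 1
FROM   #stats_to_process AS s
WHERE  EXISTS (
    SELECT 1
    FROM sys.query_store_query_text AS qt
    JOIN sys.objects AS tvf
        ON tvf.type = 'TF'
    WHERE qt.query_sql_text LIKE '%' + OBJECT_NAME(s.object_id) + '%'
      AND qt.query_sql_text LIKE '%' + tvf.name + '%'
);

-- Pre-execution advisory
SELECT 'NOTE: ' + OBJECT_NAME(s.object_id)
     + ' is referenced by queries that also use multi-statement TVFs; '
     + 'statistics updates will not improve those estimates.' AS advisory
FROM #stats_to_process AS s
WHERE s.has_mstvf_dependency = 1;
```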
Impact
This is the most common reason statistics maintenance "doesn't help" — the underlying cardinality problem is in the MSTVF, not the base table statistics, and no amount of FULLSCAN updates resolves it. Surfacing this in sp_StatUpdate prevents wasted full-scan cycles and directs the operator to the actual fix. It also correctly sets expectations: a FULLSCAN run that "didn't improve performance" may have done exactly what it should; the blocker is upstream of the statistics object.