
Can AI Really Save Productivity? - Part II

by Paul Allard, MBA, CEO at Persevere Consulting

Published on November 27, 2025


Executive summary


The promise of AI is simple: automate routine work, augment decision‑making, and free humans for higher‑value tasks. But as recent reporting in HBR shows, AI can also generate "work‑slop": noisy, low‑value outputs that create more rework than they eliminate. Complementing that critique, Claude Coulombe's article "GPT‑5 — a damp squib? Non‑event, LLMs reaching plateau" argues that foundation models may be hitting a performance plateau and that incremental model upgrades alone won't solve practical enterprise problems. Together, these views point to the same conclusion: productivity gains depend less on headline model versions and more on data quality, governance, sovereignty controls, and operational integration. This article draws on both sources to explain why weak data practices and unrealistic expectations drive AI‑related rework, and how governance and fully Sovereign AI infrastructure can tip the balance back toward productivity.


Why AI sometimes reduces productivity


AI systems can deliver large gains — but often produce:

  • Hallucinations and incorrect outputs that require human correction.

  • Inconsistent results from identical prompts due to model drift or varying fine‑tuning.

  • Tool sprawl and duplicated efforts as teams assemble ad‑hoc solutions.

  • Compliance and sovereignty headaches when data is used without proper controls.


Coulombe’s analysis emphasizes another root cause: chasing marginal model improvements (e.g., hyped new model releases) without addressing systemic issues leads to disappointment. If models are plateauing in capability, the leverage shifts decisively to data, integration, and governance.


Data governance and sovereignty: the productivity levers


Three governance failures drive work‑slop:


1) Poor provenance and lineage

Without lineage, teams can’t verify what produced an output, whether cleansing occurred, or which biases exist — leading to repeated validation and rework.


2) Inconsistent access and stewardship

Ad‑hoc copying into local notebooks and multiple competing models arise when stewardship roles and metadata standards are absent.


3) Cross‑border and jurisdictional ambiguity

Data sovereignty constraints complicate training and fine‑tuning; ignoring them can block deployments and trigger lengthy remediations.
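To make the first failure concrete, here is a minimal sketch of the kind of lineage record that lets a team trust an output without re‑validating it each time. The class and field names (`LineageRecord`, `consent_verified`, and so on) are hypothetical illustrations, not any specific catalog tool's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    """Minimal provenance metadata attached to a dataset version (illustrative)."""
    dataset: str
    source: str                      # where the raw data came from
    jurisdiction: str                # where the data may legally reside
    consent_verified: bool           # was usage consent actually checked?
    transformations: list = field(default_factory=list)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def add_step(self, step: str) -> None:
        """Append a cleansing/transformation step to the audit trail."""
        self.transformations.append(step)

record = LineageRecord(
    dataset="customer_emails_v3",
    source="crm_export_2025_10",
    jurisdiction="CA",               # e.g. a Canadian residency constraint
    consent_verified=True,
)
record.add_step("deduplicated")
record.add_step("PII redacted")
```

With even this much metadata attached, a reviewer can see at a glance what produced an output, whether cleansing occurred, and whether the data may be used, instead of repeating that validation on every cycle.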


Coulombe's argument about diminishing returns from model upgrades makes these governance failures more consequential: if improved model architecture isn't delivering proportional value, then clean, well‑governed data and robust deployment controls become the primary path to productivity.


Why do you need sovereign AI?


Gen AI is essential to closing the productivity gap, but Canadian organizations cannot rely entirely on foreign proprietary Gen‑AI solutions to process intellectual property, confidential information, or sensitive personal information:

  • Physics beats law: contractual safeguards ≠ technical safeguards.

  • Residency ≠ sovereignty: local hosting does not prevent foreign access.

  • Dependence on foreign technology is a business continuity risk: weakened bargaining power, technological lock‑in, and geopolitical and supply chain exposure.

  • A chain is only as strong as its weakest link: you need a 100% open‑source, auditable Gen‑AI technology stack.


How to prepare for a sustainable AI deployment


So, how do you prepare your organization for a sustainable AI program deployment that actually boosts productivity and avoids the pitfalls of 'work-slop'? Consider these foundational steps:

  • Strategic Blueprint & Risk Assessment: Begin by identifying your organization's unique competitive competencies. For each, map out the associated processes and clearly delineate where AI-driven innovations can tolerate initial mistakes versus those critical operations where absolute reliability is paramount. This informs a phased deployment strategy.

  • Invest in Data Excellence: As emphasized throughout this article, data is the decisive lever. Prioritize investments in assessing and enhancing the readiness of both structured and unstructured data. Establishing robust data lineage, quality, and governance protocols is essential to combat the 'GIGO effect' and prevent AI from generating more rework than value.

  • Nurture Internal Leadership: Identify and empower your internal AI champions. Provide them with the motivation, resources, and training necessary to lead the charge. These individuals will be instrumental in mobilizing subject matter experts, driving AI adoption, and ensuring initial productivity programs are well-prioritized and effectively implemented.

  • Build a Sovereign AI Foundation: Conclude by implementing a private and sovereign AI infrastructure. This is not merely a technical choice but a strategic imperative to protect your intellectual property, mitigate geopolitical and supply chain risks, and ensure the long-term integrity and trust in your AI-driven productivity gains.


A practical four‑lens framework to assess AI’s net productivity impact

  1. People: Defined stewardship roles, training in prompt design, and human‑in‑the‑loop validation.

  2. Process: Validation gates for outputs, incident response for hallucinations/exfiltration, and user feedback loops.

  3. Data Governance: Policies for provenance, access controls, regional enforcement, vendor SLAs, and Sovereign AI infrastructure deployment where appropriate.

  4. Technology: Central catalog with lineage, model registry, CI/CD with drift monitoring, and regionally constrained pipelines.
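The Process lens above centres on validation gates with a human in the loop for high‑risk outputs. A minimal sketch of such a gate follows; the check names and the 0.8 confidence threshold are illustrative assumptions, not a standard:

```python
# Hypothetical validation gate: an AI output is released only after
# automated checks pass and, for high-risk uses, a human reviewer signs off.

def passes_gate(output: dict, high_risk: bool,
                reviewer_approved: bool = False) -> bool:
    checks = [
        output.get("provenance_verified", False),  # lineage is known
        output.get("confidence", 0.0) >= 0.8,      # assumed quality threshold
        not output.get("flagged_pii", True),       # no leaked personal data
    ]
    if not all(checks):
        return False                               # automated checks failed
    # High-risk outputs additionally require human-in-the-loop approval.
    return reviewer_approved if high_risk else True

draft = {"provenance_verified": True, "confidence": 0.92, "flagged_pii": False}
assert passes_gate(draft, high_risk=False)
assert not passes_gate(draft, high_risk=True)      # needs human sign-off
assert passes_gate(draft, high_risk=True, reviewer_approved=True)
```

The design choice worth noting is that the gate fails closed: an output with unknown provenance or unscreened PII is blocked by default rather than waved through.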



Mini case vignettes

Failure: Marketing fine‑tuned a vendor model with a scraped dataset lacking consent metadata; outputs breached consent terms, requiring legal remediation and campaign rollbacks.


Success: A healthcare consortium used a Private Sovereign AI infrastructure to pool de‑identified patient data with strict provenance and use policies; validated inputs shortened model validation cycles and accelerated insights.


KPIs to measure progress

- Reduction in time spent fixing AI outputs.

- % of models with verified provenance.

- Number of sovereignty incidents per quarter.

- Time from model discovery to validated deployment.
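The first two KPIs above are straightforward to compute once the underlying hours and model counts are tracked. A minimal sketch, with hypothetical function names and example figures:

```python
# Hypothetical KPI rollup for an AI productivity dashboard.

def rework_ratio(hours_fixing_ai: float, hours_saved_by_ai: float) -> float:
    """Hours spent correcting AI outputs per hour saved; below 1.0 is a net win."""
    return hours_fixing_ai / hours_saved_by_ai if hours_saved_by_ai else float("inf")

def provenance_coverage(models_with_lineage: int, total_models: int) -> float:
    """Percentage of deployed models whose training data has verified provenance."""
    return 100.0 * models_with_lineage / total_models if total_models else 0.0

assert rework_ratio(20, 80) == 0.25          # AI is paying off
assert provenance_coverage(9, 12) == 75.0    # 3 models still need lineage work
```

Tracking these two numbers quarter over quarter makes the "net productivity" question empirical rather than anecdotal.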


Conclusion — can AI save productivity?


Yes, but not by relying solely on new model releases. The HBR piece highlights the danger of AI‑generated work‑slop, and Dr. Claude Coulombe's analysis signals a capability plateau for foundation models; the decisive levers are therefore data governance, data sovereignty, and operational practices. Treat data governance as a productivity investment. Build your Sovereign AI infrastructure to protect and build your IP. Focus less on chasing model hype and more on cleaning the data plumbing, enforcing provenance, and baking in human oversight; then AI becomes a durable productivity engine rather than a source of rework.





Disclaimer: AI contributed to the creation of this article, but it was guided, reviewed and fact-checked by Persevere Consulting’s human experts. Please note that the content and material provided in this article is for general information purposes only. It is not to be taken or relied upon as legal or management advice and should not be used for professional or commercial purposes. This article is intended to communicate general information about relevant sustainable productivity, sustainable & sovereign AI, and data governance matters as of the indicated date. The content is subject to change based on a constantly evolving environment.


 
 
 
