The CEO of the company building some of the most powerful AI in the world is the one sounding the alarm. Dario Amodei of Anthropic told Axios that AI could eliminate half of all entry-level white-collar jobs within five years. He singled out finance, consulting, and law, warning that AI's "cognitive breadth" means it will not disrupt one industry at a time, leaving workers with fewer options to switch fields.
Jensen Huang of Nvidia disagrees. He has argued that overheated rhetoric discourages young people from pursuing fields the economy still needs, calling such comments “not helpful” and suggesting some executives slip into a “god complex.”
Both men are talking about labour. Neither is talking about the customer.
That is the gap this piece is about.
The financial services industry’s relationship with AI has, until very recently, been shaped by one question: where can we cut costs? It is a legitimate question. The numbers behind it are serious.
McKinsey predicts agentic AI could help banks reduce operational workloads by 50 to 60%. Firms like Bank of America, Citi and Wells Fargo are reporting strong profits while reducing headcount, with AI cited as a contributing factor in areas ranging from back-office compliance work to front-office transaction processing. The pressure to automate is structural, not cyclical, and any leadership team that pretends otherwise is not doing its job.
But efficiency AI and experience AI are not the same investment, and they do not produce the same outcomes. The sector has been running one of them hard. The other is largely unbuilt.
Here is what has been happening on the customer side of the ledger during the same period.
46% of consumers say AI-powered customer service either rarely or never leads to successful outcomes. Consumer trust in AI fell from 62% in 2023 to 59% in 2025. The share of consumers describing AI as “very untrustworthy” more than doubled in that period, from 5% to 12%.
74% of FSI executives say their customers expect personalised interactions, yet only 36% of the customer journey is currently personalised. That gap is most visible in early-stage discovery and research, precisely where future growth is shaped.
Read those two data points together. Customers expect more. They are getting less. And trust is declining as spending increases. That is not a technology failure. It is a strategy failure, a failure to ask the right question before deploying the capability.
The AI your customers encounter is not the AI that reduces your fraud losses or accelerates your compliance reporting. It is the chatbot that cannot answer their pension transfer query. It is the onboarding journey that still feels like it was designed in 2017. It is the experience that deflects rather than resolves, that contains rather than serves.
74% of consumers have stopped doing business with a company after a single frustrating experience. In financial services, where switching costs are real but so is long-term loyalty, attrition does not always show up immediately. It shows up in NPS, in renewal rates, in referral behaviour, the metrics that compound quietly until they become a crisis.
The evidence that this is a known problem, not an unknown one, is everywhere.
The Cambridge Centre for Alternative Finance’s 2026 Global AI in Financial Services Report found that while four in five FSI firms are deploying AI at some level, only 14% currently see AI as transformational to their organisational strategy and competitive advantage, pointing to a significant execution and business integration gap.
KPMG research found that 99% of companies plan to put agents into production, but only 11% have done so, with implementation challenges around data, governance and security cited as the primary blockers.
That 88-point gap between intention and execution is not a shortage of ambition. It is a shortcoming of the operating model. Organisations know what they want to build. They do not have the architecture to build it within the constraints that financial services impose, and those constraints are unlike those of any other sector.
This is the point that most agentic AI vendors, however well-capitalised, will either underestimate or ignore entirely.
The FCA’s financial promotions regime does not bend for product velocity. The Consumer Duty requires demonstrable, evidenced outcomes. Vulnerability flags are a regulatory obligation with real consequences, not a UX consideration. Release cycles in most institutions are gated by compliance forums that no amount of agile methodology can circumvent. Suitability, consent, life-stage segmentation: these are not data preferences; they are regulatory duties.
Leading institutions are not choosing between speed and governance. They are redesigning operating models to ensure governance is embedded from the start.
That redesign does not come from a platform vendor. It comes from people who understand both the technology and the regulatory environment in which it operates. Strategy and consulting is not a wrapper around the technology in FSI. It is the foundation on which every other capability rests.
Sequoia partner Julien Bek framed the broader market opportunity in a way that resonates here: for every dollar spent on software, six are spent on services. [10] In financial services, where the services layer carries compliance complexity that general enterprise AI was not designed for, that ratio is almost certainly higher. The organisations that will win are not the ones with the best model. They are the ones with the best operating model, and those are not the same thing.
Think about the last time a financial services experience genuinely surprised you. Not with a feature. With judgment.
A customer trying to understand their pension options does not want a chatbot. They want confidence that the person or system they are dealing with knows their situation, respects what is at stake, and provides information they can use. That is an experience problem before it is a technology problem, and it requires a fundamentally different set of disciplines working together than most FSI AI programmes are currently running.
Content must be built for explainability, not persuasion. In pensions, investments, and insurance, a customer who does not fully understand what they have been told is not a marketing problem. They are a conduct risk. Serving them well means modular, pre-approved content that can be assembled intelligently around their specific situation rather than broadcast at them. That is not a content management task. It is an editorial and compliance architecture.
The channels where most FSI customers make their most consequential decisions are still assisted ones: advice appointments, broker conversations, and call centres. The digital estate should build trust and frame the context that makes those conversations easier, not replace them prematurely. The highest-value optimisation in most institutions is not acquiring more digital traffic. It is converting the existing intent through better reassurance at the moments that matter.
The data layer underneath all of this is where the FCA lives. Consent is not a checkbox. Suitability is not a tag. Vulnerability is not a segment. These are obligations that must be woven into the data architecture from the start, not retrofitted, which is why so many well-intentioned personalisation programmes stall before they reach the customer. The model is right. The infrastructure underneath it was not built to carry it.
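To make that distinction concrete, here is a minimal, illustrative sketch of the difference between treating consent and vulnerability as flags versus as structured, evidenced records. This is not production code, and every field and class name here is a hypothetical example, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, date
from enum import Enum
from typing import Optional


class ConsentScope(Enum):
    MARKETING = "marketing"
    PERSONALISATION = "personalisation"
    DATA_SHARING = "data_sharing"


@dataclass(frozen=True)
class Consent:
    """Consent as an auditable record, not a checkbox."""
    scope: ConsentScope
    granted_at: datetime
    evidence_ref: str                      # pointer to the captured consent artefact
    expires_at: Optional[datetime] = None  # None = no expiry recorded

    def is_active(self, now: datetime) -> bool:
        return self.expires_at is None or now < self.expires_at


@dataclass
class VulnerabilityFlag:
    """A flag with provenance and a review date, not a segment label."""
    reason: str
    raised_by: str      # e.g. adviser, system rule, customer disclosure
    raised_at: datetime
    review_due: date


@dataclass
class CustomerRecord:
    customer_id: str
    consents: list[Consent] = field(default_factory=list)
    vulnerability_flags: list[VulnerabilityFlag] = field(default_factory=list)

    def may_personalise(self, now: datetime) -> bool:
        # Personalisation is gated on active, evidenced consent,
        # never assumed by default.
        return any(
            c.scope is ConsentScope.PERSONALISATION and c.is_active(now)
            for c in self.consents
        )
```

The point of the sketch is the shape, not the names: when consent carries its evidence reference and vulnerability carries its provenance and review date, the downstream personalisation logic can be gated and audited from day one, which is exactly what retrofitting a boolean column cannot deliver.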
And none of this works without the people closest to customers being equipped to work alongside intelligent systems with confidence. Role-based training for authors, approvers, and advisers. Clear accountability for regulated content. Ongoing education on what AI can and cannot do within a conduct framework. Agentic AI in FSI without that human layer is not an operating model. It is a liability.
Remarkable’s Revolution model for FSI was built specifically for this shape of problem: strategy and consulting, content and experience, channel optimisation, platforms and operations, and CRM and data, running in concert with agentic orchestration at the centre. Not sequentially. Not in isolation. Together, because the customer does not experience your org chart.
Anthropic announced the formation of a new AI-native enterprise services firm alongside Goldman Sachs, Blackstone, and Hellman & Friedman, backed by a consortium including Sequoia Capital, Apollo, and GIC, with Anthropic engineering resources embedded directly within the team.
The stated mission: to help leading businesses deploy AI at the speed and scale their competitive positions require.
Goldman. Blackstone. Sequoia. Anthropic. These organisations do not form consortia speculatively. They move when they believe the market has reached an inflection point. And the market they are moving into is yours.
By the end of 2026, the industry will be re-segmented not by who adopted AI, but by who made it work in practice. Early adoption no longer confers advantage. Execution does. [12]
The experience play is still available. Not for much longer.
Remarkable is offering a free diagnostic call for FSI organisations ready to move from intent to operating model. We will not pitch you a platform. We will ask the right questions, map your current gaps across strategy, content, channel, data, and platforms, and give you an honest view of where to start.
Get in touch today to book your free FSI diagnostic call! https://remarkable.global/contact/