LSI Insights - Future of Work

Career durability: how to test whether expertise will age well with AI

AI is shifting the shelf life of expertise. The issue is not only whether jobs disappear, but whether the value of what people know decays faster than it can be refreshed. Pay, progression, and professional identity have long rested on the assumption that skill requirements move slowly. That assumption is starting to look fragile in many sectors.

16 min read · 22 July 2025
Executive summary
Career durability has become a live question because AI changes work through tasks first, then job design, pay, and credentials. Some expertise becomes more valuable when paired with AI; other expertise becomes cheaper, more standardised, or more tightly monitored. The most useful response is not prediction, but testing: mapping task exposure, identifying where judgement and accountability sit, watching labour market signals, and choosing learning pathways that keep options open amid uncertainty.

Expertise no longer has a predictable shelf life

Many careers were built on a simple bargain: build specialised knowledge, then rent it out for decades. AI challenges that bargain unevenly, in ways that are easy to miss if change is measured only by job titles.

Task-level change can look like stability

A role can look intact while its centre of gravity shifts. In law, early case research and document review can be accelerated by AI-assisted search and summarisation, changing what junior roles are for and how billable hours are justified. In finance, reconciliations, variance explanations, and report drafting can be generated quickly, but the accountability for sign-off does not disappear. In marketing, first drafts of copy and segmentation ideas can be abundant, while brand risk and channel strategy become more consequential.

This is why “job safe” versus “job at risk” is often the wrong framing. The better question is whether the high-value tasks in a role are moving towards harder-to-automate judgement, or being pulled into templates and dashboards where price and autonomy tend to fall.

Durability is also about bargaining power

Even when headcount is stable, AI can change who captures the gains. Productivity increases can raise profits without raising wages, particularly where workers have limited negotiating power or where outputs are easier to measure and compare. Some professions may see more work “unbundled” into smaller contracted tasks on platforms, widening access for some while weakening security for others. The risk is not only redundancy, but a gradual shift into lower-control work with tighter monitoring.

Career durability therefore contains a distributional question: which groups gain access to the new, higher-value tasks, and which groups get pushed into the standardised remainder. Geography matters, too, as regions with fewer high-quality employers can experience a sharper drop in local opportunities when work can be remote, automated, or centrally procured.

AI reshapes work by changing tasks

AI rarely replaces an entire occupation at once. It inserts itself into workflows, reallocates tasks, and alters what counts as competence. That mechanism suggests a different way to assess the future.

Workflows are the unit of disruption

In many organisations, AI arrives first as decision support. A clinician gets a suggested triage note, a call centre agent gets a recommended reply, an engineer gets code completion. Over time, the workflow changes: what used to be drafted from scratch becomes reviewed, corrected, or exception-handled. This can lift throughput and reduce errors, but it also changes learning. If early-career staff mainly review machine output, where does deep pattern recognition come from?

Automation can increase oversight

When tasks are digitised, they become measurable. This can be helpful, for example in safety-critical operations where audit trails matter. It can also fuel algorithmic management, where performance is judged through proxy metrics and constant monitoring. The result may be higher output with lower discretion, and a new form of skills mismatch: a person may have expertise, but the workflow may not allow it to be exercised.

New tasks appear, but not always at scale

Optimistic narratives often point to new roles: AI product managers, model auditors, prompt specialists. Some will grow, particularly where regulation and risk management expand. Others may remain small or consolidate into existing professions. The more reliable shift is that many roles will require a baseline of AI literacy, and a smaller subset will require deep capability in data governance, evaluation, and human-centred design. The open question is how quickly institutions can redesign training and progression so that new entrants still accumulate real expertise rather than only tool usage.

A durability test for any expertise

No single forecast can settle whether expertise will age well. A more useful approach is a stress test that can be repeated as tools and labour markets evolve, with attention to both technical and social factors.

Exposure, complementarity, and control

Career durability can be assessed by asking where AI is likely to sit in the workflow and what it does to control over outcomes. The following questions can be used as a repeatable test; a rough scoring sketch follows the list.

  • Task exposure: How much of the role consists of pattern matching, routine drafting, standard classifications, or searching through known materials? These tasks are often easiest to accelerate. If they dominate, the role may be vulnerable to price pressure even if employment persists.

  • Complementarity: Does AI make the best practitioners more valuable, or does it compress quality differences? In some design and engineering contexts, AI can expand exploration while leaving taste, constraint-setting, and trade-off reasoning as differentiators. In other contexts, outputs become “good enough” and buyers stop paying for excellence.

  • Accountability and liability: Who carries responsibility when AI-assisted decisions go wrong? Where humans remain accountable, durable expertise often shifts towards verification, escalation, and ethical judgement. Where accountability is diffuse, roles can be regraded or outsourced.

  • Data rights and access: Does the role depend on proprietary data, local relationships, or institutional trust? Work grounded in protected datasets, regulated processes, or community legitimacy can be more durable, but it may also become more exclusive.

  • Learning loop quality: Does day-to-day work still generate feedback that builds mastery, or does automation remove the practice that develops judgement? If the learning loop is hollowed out, long-term durability can weaken even if the job remains.

  • Labour market signals: Are postings shifting towards “AI-enabled” versions of the role? Are wages rising for hybrid profiles or falling as tasks standardise? Are credentials inflating, suggesting more competition for fewer high-quality roles?
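
These six questions can be treated as a lightweight scoring sheet and rerun as tools and labour markets change. As a minimal sketch, the Python below turns them into such a rubric; the 0-4 scale, the special check on the learning loop, and the interpretation bands are illustrative assumptions, not a validated model.

    from dataclasses import dataclass

    @dataclass
    class DurabilityScores:
        """Each dimension scored 0 (weak durability signal) to 4 (strong)."""
        task_exposure: int      # 4 = little routine drafting or pattern matching
        complementarity: int    # 4 = AI amplifies the best practitioners
        accountability: int     # 4 = humans clearly accountable for outcomes
        data_rights: int        # 4 = proprietary data, regulation, or local trust
        learning_loop: int      # 4 = daily work still builds judgement
        market_signals: int     # 4 = pay rising for hybrid versions of the role

    def assess(scores: DurabilityScores) -> str:
        total = sum(vars(scores).values())
        # A hollowed-out learning loop can undermine durability even when the
        # headline total looks healthy, so it is checked before the bands.
        if scores.learning_loop <= 1:
            return f"{total}/24 - fragile: rebuild the learning loop first"
        if total >= 18:
            return f"{total}/24 - likely durable; retest as tools and markets shift"
        if total >= 12:
            return f"{total}/24 - mixed; watch postings and pay for this role"
        return f"{total}/24 - exposed; plan a move towards judgement and accountability"

    # Example: a role heavy on routine drafting whose practice is being automated.
    print(assess(DurabilityScores(1, 2, 3, 2, 1, 1)))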

Two counter-arguments worth holding

First, automation is not adoption. Many organisations struggle with data quality, integration, procurement, and trust. Expertise can age well simply because change is slower than headlines suggest. Second, some fields become more important precisely because AI raises stakes, for example information security, safety engineering, procurement assurance, or safeguarding in education and care. The durability question is therefore less about whether AI can do a task, and more about whether institutions choose to reorganise around that capability.

Learning pathways that keep options open

Once durability is framed as a task and workflow problem, learning choices can be made with less guesswork. The aim is not constant retraining for its own sake, but preserving mobility across plausible futures.

Credentials as a moving target

Degrees, apprenticeships, and micro-credentials each carry different risks. In some sectors, employers may raise credential requirements because AI increases applicant volume and makes screening harder, not because the work truly needs more theory. That can lock out capable people without improving productivity. On the other hand, fields with real licensing, safety requirements, or hard-to-fake competence may retain the strongest value from formal qualifications.

Apprenticeships can offer durable advantages when they provide real supervised practice, not only tool exposure. Short courses can be effective when they are linked to demonstrable outputs and portfolio evidence, rather than attendance. The key question is whether the pathway builds transferable judgement, not just familiarity with a current software interface.

Test-fit before committing

In uncertain labour markets, optionality has value. Some practical ways to test-fit a role include short project contracts, job shadowing through professional networks, simulated exercises, or employer-led bootcamps tied to real workflows. In education, simulation-based assessment can reveal whether someone enjoys the actual decisions and trade-offs of a role. At LSI, for example, AI-supported role-play and formative feedback are used to surface how reasoning develops over time rather than treating learning as a single high-stakes exam event.

Pivots that often travel well

Some transitions tend to preserve durability because they move closer to problem definition, stakeholder management, verification, or safety. A content specialist moving towards governance of brand risk and claims substantiation may see more longevity than one moving only into faster content production. A software tester moving towards evaluation of AI behaviour and user harm may gain relevance as regulation matures. These are not universal rules, but illustrations of a broader idea: work that combines domain depth with responsibility for consequences is harder to commoditise.

Institutions shape whether skills endure

Career durability is not only a personal matter. Employer choices, procurement norms, regulation, and education funding determine whether AI becomes a tool for better work or a mechanism for hollowing it out.

Productivity gains can widen or narrow inequality

If AI raises output per worker, it can create room for wage growth, shorter hours, or improved service quality. It can also concentrate gains among firms that own data, models, and distribution, while squeezing suppliers and contractors. Sectors with fragmented employment and weak bargaining structures may see more downward pressure on pay and conditions, even when demand is stable. Public services face a different tension: using AI to stretch budgets while maintaining trust, privacy, and equity.

Worker protections meet algorithmic management

As AI becomes embedded in scheduling, performance scoring, and workflow assignment, questions of transparency and contestability become employment issues. What counts as fair monitoring? How can a worker challenge an automated performance judgement? Which data can be used to train or evaluate systems? These questions sit at the intersection of data protection, employment law, and organisational culture, and they will shape whether careers remain dignified as well as durable.

A final decision test, then an uncomfortable question

A useful closing test is whether a piece of expertise can be described without naming a tool: a durable capability still makes sense when the software changes. If the value proposition collapses without a specific platform, durability may be low. If the value proposition centres on setting goals, interpreting evidence, managing risk, and taking responsibility for outcomes, durability may be higher even as tasks shift.

The uncomfortable question is not whether AI will take jobs, but whether institutions are prepared to redesign entry-level work so that the next cohort still gets to learn, make mistakes safely, and build judgement, or whether an entire layer of career formation becomes an unaffordable luxury.

London School of Innovation

LSI is a UK higher education institution offering master's degrees and executive and professional courses in AI, business, technology, and entrepreneurship.

Our focus is forging AI-native leaders.
