LSI Insights - Future of Work

Preparing for multiple futures with AI: a decision framework for workers, families, and employers

AI is shifting work in ways that feel both gradual and sudden: one tool update at a time, then a role redesign, then a shift in labour market signals. The difficulty is not only technical. It is governance, pay, job quality, and the reliability of education and career pathways when tasks, not job titles, are being rewritten.

16 min read · January 13, 2026

Executive summary

AI is accelerating task automation and decision support, but the outcomes for pay, security, and job quality remain contested and uneven across sectors and places. Planning as if one future is inevitable increases risk for workers, families, employers, and public institutions. A more resilient approach treats jobs as evolving task bundles, uses explicit decision tests for learning and career moves, and builds organisational guardrails around data rights, algorithmic management, and fair progression, while leaving room for multiple plausible scenarios.
Jobs are dissolving into task bundles

AI rarely replaces a job overnight. More often, it absorbs some tasks, amplifies others, and changes how performance is measured. Seeing work as tasks and workflows makes the change legible, and opens up practical options for redesign rather than panic or denial.

Task change arrives before job change

In many organisations, AI first appears as a support layer: drafting, summarising, triaging, forecasting, or quality checking. Customer contact centres have used real-time prompts to reduce handle time and standardise tone, even as escalation work becomes more complex and emotionally demanding. In legal services, document review may compress, while client-facing interpretation and risk framing expand. In software, code assistance can speed routine work, yet raise the premium on problem definition, testing discipline, and security awareness.

The observable pattern is not simply displacement. It is a reallocation of attention. Tasks that are easy to specify and verify become cheaper; tasks that depend on trust, judgement, or contested values become more visible and more expensive. This can be good news for job enrichment, but it can also be a route to work intensification if the saved time is immediately converted into higher throughput targets.

New assumptions fail quietly

Several long-held assumptions become less reliable. Credentials do not automatically map to capability if AI tools narrow the advantage of rote expertise. Experience does not automatically protect a role if decision support changes the value of tacit knowledge. “Digital skills” stops being a stable category because AI tool competence has a short half-life. The question becomes: which tasks in a role are becoming more valuable, and which are losing value because they are being standardised or measured differently?

This task lens also clarifies distributional impacts. Roles with high exposure to routinised information handling may see reduced entry-level opportunities, while senior roles gain leverage through higher output. Regions dominated by back-office operations may feel pressure sooner than places with tight labour markets in care, construction, or logistics, where automation faces physical and regulatory friction.

Four plausible labour market directions

Forecasts tend to collapse uncertainty into a single headline. A more useful stance holds multiple futures open, then asks what choices perform well across them. The point is not prediction. It is preparing without over-committing to one storyline.

A productivity dividend with shared gains

In one plausible direction, AI increases productivity and some of the gains translate into higher wages, shorter hours, or better services. This requires diffusion beyond frontier firms and a bargaining environment where workers and employers can share upside. Public services might use AI to reduce admin burden and improve access, though this depends on procurement competence and data governance rather than model capability alone.

A polarised market with credential inflation

Another direction is widening inequality. AI makes high-skill work more scalable, raises expectations for “baseline” communication and analysis, and pushes more people into credential chasing. Entry routes narrow as firms rely on fewer, AI-supported juniors and outsource tasks through platforms. The risk is a two-speed economy: well-paid roles with autonomy, and tightly monitored roles with shrinking progression.

A regulated reshaping of job quality

A third direction places governance at the centre. Rules on transparency, data rights, and liability shape how algorithmic management is used. Employers might still adopt AI rapidly, but with constraints on surveillance, automated discipline, and opaque scoring. This path can protect dignity and fairness, yet may slow adoption in smaller firms without compliance capacity.

Localised turbulence and sector-specific shocks

A final direction is unevenness. Creative industries, education, HR, and marketing may see fast workflow change, while care and trades change more slowly. Some sectors face rapid consolidation as AI-enabled firms outcompete peers. The same technology can therefore mean opportunity in one place and wage pressure in another. Any decision framework has to notice sector and geography, not just job titles.

A decision framework for households and careers

When uncertainty is high, the temptation is to seek a single safe bet: a degree, a tool, a “future-proof” role. A steadier approach sets decision criteria that reduce downside while keeping options open, and treats learning as an investment portfolio rather than a one-off gamble.

Start with task exposure and bargaining power

Career resilience is often less about mastering a specific tool and more about holding tasks that are hard to commoditise. This can include responsibility for outcomes, relationships, safety, or regulated judgement. It also includes proximity to revenue, scarcity of talent, or collective bargaining coverage. A role can be AI-exposed yet still attractive if it sits close to decision rights and has a clear pathway to higher-value tasks.

Decision test: does the role concentrate on generating outputs that are easy to compare and price, or on outcomes that require trust and accountability?
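
To make the test concrete, the task lens can be roughed out as a simple scoring exercise. Below is a minimal sketch in Python; the task names, time shares, and weightings are invented for illustration, not drawn from any validated framework.

```python
# A rough task-bundle scoring sketch. All names and numbers are
# illustrative assumptions, not a validated model.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    share_of_role: float    # fraction of working time (should sum to ~1.0)
    easy_to_price: float    # 0..1: output is easy to specify, compare, and price
    trust_dependent: float  # 0..1: relies on trust, judgement, or accountability

def commoditisation_exposure(tasks: list[Task]) -> float:
    """Time-weighted exposure: high when the role concentrates on
    outputs that are easy to compare and price, low when it holds
    trust- and accountability-heavy tasks."""
    return sum(t.share_of_role * t.easy_to_price * (1 - t.trust_dependent)
               for t in tasks)

# Hypothetical bundle for a client-facing analyst role.
role = [
    Task("drafting and summarising", 0.4, 0.9, 0.1),
    Task("client risk framing",      0.3, 0.2, 0.9),
    Task("review and sign-off",      0.3, 0.4, 0.8),
]

print(f"Exposure score: {commoditisation_exposure(role):.2f}")  # ~0.35 here
```

A low score does not mean a role is safe, and a high score does not mean it will vanish; the point is to see which tasks anchor bargaining power and which are drifting towards commodity status.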

Choose learning routes that allow reversibility

Pathways differ in cost, signalling value, and optionality. Degrees can still matter where regulation, professional norms, or deep domain expertise is required. Apprenticeships offer paid learning and labour market attachment, which can be protective during downturns. Micro-credentials can be useful for rapid task shifts, but can also become a treadmill if they lack recognition from employers.

Reversibility can be designed in. Short, work-linked courses, probationary projects, and part-time study reduce the risk of committing to a dead end. Some mid-career changes are best approached through “test-fit” work: a secondment, a freelance project, volunteering for a cross-functional initiative, or a paid work trial.

Questions that reduce hype risk:

  • Which tasks in the target role would still exist if drafting and summarising became near-free?

  • Which employers or sectors pay for training rather than expecting self-funded credential accumulation?

  • What evidence exists that graduates of a course move into better roles within twelve months, not only that they complete it?

  • What is the fallback option if the move fails, and how expensive is that failure in time and debt?

AI literacy as a baseline, not a destination

Basic competence with AI tools is becoming similar to spreadsheet competence: necessary, insufficient, and quickly normalised. More durable capability sits in critical thinking, communication under uncertainty, and domain depth that enables good prompts, good checks, and good decisions about when not to use AI. At LSI, for example, private AI tutors and repeatable role-play simulations are treated as practice for judgement and explanation, not as a replacement for them, which is a useful orientation for any learning provider.

Work redesign choices inside organisations

Employers face a choice that is often framed as technology adoption. In practice, it is a choice about job design, incentives, and governance. The same AI system can enable better work or worse work depending on how tasks are allocated, measured, and rewarded.

Two redesign paths with different consequences

One path uses AI to augment capability: offloading admin, improving access to knowledge, and freeing time for higher-value interaction. This can support retention and internal mobility if the freed time is protected rather than immediately monetised as higher throughput. Another path uses AI to intensify output: tighter scripts, continuous monitoring, and performance scoring that narrows discretion. This can reduce cost quickly, but often raises churn, stress-related absence, and reputational risk.

Retail and logistics provide a clear example. AI-driven scheduling can match staffing to demand more accurately, yet can also create unstable hours and reduce predictability for families unless guardrails are set. In professional services, AI can speed drafting, but can also remove junior learning opportunities unless structured apprenticeship tasks are deliberately preserved.

Capability building beats tool rollouts

Training that focuses only on tool tips tends to decay quickly. More robust investment links AI use to workflow changes, data quality practices, and escalation protocols. It also addresses how performance will be assessed when AI assistance becomes normal. If “output per hour” becomes the sole metric, job quality may deteriorate even while productivity rises.

Organisational decision test: when AI improves output, does the system allocate some of the benefit to learning time, service quality, or pay progression, or does it convert all gains into higher targets?
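
The same test can be written down as a simple bookkeeping exercise. The sketch below uses invented numbers to show the question in miniature: of the hours AI saves, what fraction is converted into higher targets and what fraction is reinvested in workers?

```python
# A toy gain-allocation check with invented numbers; the split an
# organisation chooses is a policy decision, not a model output.

def gain_split(hours_saved: float, to_targets: float,
               to_learning: float, to_quality: float) -> dict[str, float]:
    """Return the share of AI-saved hours converted to throughput
    versus reinvested in learning time and service quality."""
    reinvested = to_learning + to_quality
    assert abs(to_targets + reinvested - hours_saved) < 1e-9, "hours must add up"
    return {
        "to higher targets": to_targets / hours_saved,
        "reinvested in workers": reinvested / hours_saved,
    }

# Hypothetical: AI saves 5 hours a week per person; 4 go straight to
# throughput targets, 1 is protected for learning.
print(gain_split(hours_saved=5.0, to_targets=4.0, to_learning=1.0, to_quality=0.0))
# {'to higher targets': 0.8, 'reinvested in workers': 0.2}
```

Writing the split down, even crudely, makes the allocation visible and contestable rather than an accidental by-product of target setting.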

Guardrails for trust, pay, and dignity

Preparing for multiple futures is not only personal and organisational. It is also institutional. Without credible rules and norms, uncertainty is resolved through private power, opaque algorithms, and individualised risk. With guardrails, experimentation can be faster and fairer.

Data rights and algorithmic management

AI at work often relies on data generated by workers: communications, keystrokes, case notes, customer interactions. Clarity is needed on what is collected, what is inferred, and what can trigger discipline. Transparency requirements, audit rights, and routes to challenge decisions can prevent “computer says no” management cultures. Where unions or worker councils exist, involvement in AI procurement and monitoring can reduce conflict and improve implementation quality.

Pay dynamics and the distribution of gains

AI can raise productivity without raising wages if labour markets are slack, if roles are deskilled, or if bargaining power weakens. It can also create new pay premiums for those who can combine domain expertise with AI-enabled execution, leaving others trapped in stagnant roles. Policies on minimum standards, progression pathways, and portable benefits for platform workers influence whether AI becomes a broad prosperity tool or a force multiplier for inequality.

Education policy also matters. If funding and accountability systems reward seat time rather than demonstrated competence, credential inflation becomes more likely. If public institutions support modular learning with strong labour market links, transitions become less risky.

The uncomfortable question

Preparedness is less about guessing which jobs vanish, and more about ensuring that people and organisations can renegotiate tasks, rights, and learning fast enough to keep dignity and opportunity intact.

If AI increases output this year, what explicit mechanism ensures that some of the value flows into better job quality and fair progression, rather than disappearing into tighter metrics and thinner entry routes?

London School of Innovation

LSI is a UK higher education institution, offering master's degrees, executive and professional courses in AI, business, technology, and entrepreneurship.

Our focus is forging AI-native leaders.