Methodology

A detailed look at how this tool assesses workforce automation exposure and skill implications.

Overview

The Workforce Task Intelligence methodology provides a structured approach to understanding how AI capabilities may impact specific job roles. Rather than making broad predictions about job displacement, the tool focuses on task-level analysis to provide actionable insights.

The approach is grounded in the ILO Working Paper 140 (2025) framework for assessing generative AI exposure, combined with comprehensive occupational data from the U.S. Department of Labor's O*NET database.

This approach recognizes that most jobs consist of a portfolio of tasks with varying degrees of automation potential. The same role may have some tasks that are highly automatable while others remain firmly in the human domain.

1. Taxonomy Resolution

What the Tool Does

The tool maps input job titles to standardized occupational classifications using the O*NET (Occupational Information Network) database. A fuzzy search algorithm matches against 57,521 job titles (primary titles plus alternate titles) to find the best match.
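
A minimal sketch of this kind of title matching, assuming a character-trigram similarity scorer and an illustrative record shape (the tool's actual algorithm and schema are not specified here):

```typescript
// Illustrative record shape, not the tool's actual schema.
interface TitleEntry {
  title: string;   // primary or alternate job title
  socCode: string; // O*NET-SOC code, e.g. "15-1252.00"
}

// Jaccard similarity over character trigrams: a common, simple
// fuzzy-matching technique (an assumption; the tool may use another).
function trigrams(s: string): Set<string> {
  const padded = `  ${s.toLowerCase().trim()} `;
  const grams = new Set<string>();
  for (let i = 0; i + 3 <= padded.length; i++) grams.add(padded.slice(i, i + 3));
  return grams;
}

function similarity(a: string, b: string): number {
  const ga = trigrams(a);
  const gb = trigrams(b);
  let shared = 0;
  for (const g of ga) if (gb.has(g)) shared++;
  return shared / (ga.size + gb.size - shared);
}

// Scan all titles and return the best-scoring entry
// (assumes a non-empty title list).
function bestMatch(query: string, entries: TitleEntry[]): TitleEntry {
  let best = entries[0];
  let bestScore = similarity(query, best.title);
  for (const e of entries) {
    const score = similarity(query, e.title);
    if (score > bestScore) {
      best = e;
      bestScore = score;
    }
  }
  return best;
}
```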

Data Sources

  • O*NET Database 30.1 (December 2025) - 1,016 occupations
  • Bureau of Labor Statistics Standard Occupational Classification (SOC)
  • 57,521 searchable job titles (primary + alternates)

Limitations

  • O*NET is US-centric; international job titles may not match well
  • Emerging roles may not have established classifications
  • Organizational variations in job definitions not captured

2. Task Decomposition

What the Tool Does

Each role is broken into its constituent tasks using O*NET task statements. The tool analyzes up to 25 tasks per occupation, which covers the complete task list for 77% of occupations. The database contains 18,796 task statements across all occupations.

Task Classification Framework

Based on ILO Working Paper 140, each task is classified into one of three categories (a sketch of the score-to-category mapping follows this list):

Automate (70-100)

Tasks where AI can perform the core function with minimal human oversight

Augment (30-69)

Tasks where AI enhances human capability but human judgment remains essential

Retain (0-29)

Tasks that remain primarily human due to physical, interpersonal, or judgment requirements
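
A minimal version of that mapping under the moderate scenario, using the thresholds listed above (function and type names are illustrative):

```typescript
type TaskCategory = "Automate" | "Augment" | "Retain";

// Map an automation-potential score (0-100) to a category using the
// moderate-scenario thresholds above: 70+ Automate, 30-69 Augment,
// below 30 Retain.
function classifyTask(score: number): TaskCategory {
  if (score >= 70) return "Automate";
  if (score >= 30) return "Augment";
  return "Retain";
}
```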

Limitations

  • Occupations with more than 25 tasks are partially analyzed
  • Informal tasks and organizational context not fully captured
  • Task interdependencies may affect automation feasibility

3. Six-Dimension Assessment

What the Tool Does

Each task is evaluated using the ILO's six-dimensional assessment framework. Starting from a baseline score of 50, adjustments are made for each dimension to arrive at an automation potential score (0-100); a scoring sketch follows the dimension list below.

Assessment Dimensions

1. Task Structure

Structured, rule-based tasks score higher; unstructured tasks score lower

2. Cognitive vs Physical

Pure information tasks score higher; physical/hands-on tasks score lower

3. Routine vs Novel

Repetitive tasks score higher; unprecedented situations score lower

4. Human Judgment Requirement

Objective criteria score higher; subjective judgment scores lower

5. Interpersonal Intensity

Solo tasks score higher; relationship-dependent tasks score lower

6. Stakes & Accountability

Low-stakes tasks score higher; high-stakes decisions score lower
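
A scoring sketch in that spirit: start at the baseline of 50, apply a signed adjustment per dimension, and clamp to 0-100. The per-dimension weight of 10 is an illustrative placeholder, since neither the paper's exact weights nor the tool's are given here.

```typescript
// Per-task ratings on a -1..+1 scale, where positive values push toward
// automation (structured, cognitive, routine, objective, solo,
// low-stakes) and negative values push toward retention.
interface DimensionRatings {
  taskStructure: number;
  cognitiveVsPhysical: number;
  routineVsNovel: number;
  humanJudgment: number;
  interpersonalIntensity: number;
  stakesAccountability: number;
}

const BASELINE = 50;
const WEIGHT = 10; // illustrative; actual per-dimension weights unpublished

function automationPotential(r: DimensionRatings): number {
  const adjustment = WEIGHT * (
    r.taskStructure +
    r.cognitiveVsPhysical +
    r.routineVsNovel +
    r.humanJudgment +
    r.interpersonalIntensity +
    r.stakesAccountability
  );
  // Clamp to the 0-100 score range used by the tool.
  return Math.min(100, Math.max(0, BASELINE + adjustment));
}
```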

Capability Level Scenarios

Classification thresholds adjust based on the selected AI capability assumption (see the sketch after this list):

Conservative

Higher thresholds for automation (75+), more tasks classified as Retain

Moderate

Standard thresholds (Automate at 70+, Augment at 30+), balanced assessment of current capabilities

Bold

Lower thresholds for automation (65+), assumes rapid capability advancement
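
In code, the scenario choice reduces to swapping threshold pairs. Only the Automate cutoffs (75/70/65) and the moderate Augment cutoff (30) are stated above; the other Augment cutoffs below are assumptions.

```typescript
type TaskCategory = "Automate" | "Augment" | "Retain"; // as in the earlier sketch
type Scenario = "conservative" | "moderate" | "bold";

// Automate cutoffs follow the text; the conservative and bold Augment
// cutoffs are illustrative assumptions.
const THRESHOLDS: Record<Scenario, { automate: number; augment: number }> = {
  conservative: { automate: 75, augment: 35 },
  moderate:     { automate: 70, augment: 30 },
  bold:         { automate: 65, augment: 25 },
};

function classifyUnderScenario(score: number, scenario: Scenario): TaskCategory {
  const t = THRESHOLDS[scenario];
  if (score >= t.automate) return "Automate";
  if (score >= t.augment) return "Augment";
  return "Retain";
}
```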

Limitations

  • AI capabilities are evolving rapidly; assessments reflect the analysis date
  • Breakthrough capabilities may not follow historical trends
  • Industry-specific AI adoption rates vary significantly

4. Exposure Calculation

What the Tool Does

Task-level automation potential is aggregated into an overall exposure score. The distribution shows what percentage of tasks fall into each category (Automate, Augment, Retain), providing a clear picture of how the role may transform.
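
A sketch of the aggregation, assuming the overall exposure score weights Automate tasks fully and Augment tasks at half (the tool's exact formula is not given in this document):

```typescript
type TaskCategory = "Automate" | "Augment" | "Retain"; // as above

interface ExposureSummary {
  automatePct: number;
  augmentPct: number;
  retainPct: number;
  exposureScore: number; // 0-100
}

function summarizeExposure(categories: TaskCategory[]): ExposureSummary {
  const n = categories.length;
  const pct = (c: TaskCategory) =>
    n === 0 ? 0 : (100 * categories.filter(x => x === c).length) / n;
  const automatePct = pct("Automate");
  const augmentPct = pct("Augment");
  const retainPct = pct("Retain");
  // Assumed weighting: Automate counts fully, Augment half, Retain zero.
  const exposureScore = automatePct + 0.5 * augmentPct;
  return { automatePct, augmentPct, retainPct, exposureScore };
}
```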

Exposure Categories

  • 0-29: Low
  • 30-49: Moderate
  • 50-69: High
  • 70-100: Very High

Limitations

  • Does not account for industry-specific adoption barriers
  • Regulatory constraints may significantly delay automation
  • Technical potential differs from organizational readiness

5. Skills Inference

What the Tool Does

Based on the task classifications, the tool infers skill implications across three categories to provide actionable workforce development guidance (a sketch of the mapping follows this list):

Declining Skills

Skills associated with automatable tasks that will decrease in value

Evolving Skills

Skills that need to transform for human-AI collaboration (highest training priority)

Differentiating Skills

Uniquely human skills that become more valuable as AI handles routine work
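
As a rough sketch of the category-to-skill-bucket mapping (the real inference comes from the model's task-level reasoning, not a fixed lookup):

```typescript
type TaskCategory = "Automate" | "Augment" | "Retain"; // as above
type SkillCategory = "Declining" | "Evolving" | "Differentiating";

// Illustrative only: skills tied to automatable tasks decline, skills on
// augmented tasks evolve, and skills on retained tasks differentiate.
function skillCategoryFor(taskCategory: TaskCategory): SkillCategory {
  switch (taskCategory) {
    case "Automate": return "Declining";
    case "Augment":  return "Evolving";        // highest training priority
    case "Retain":   return "Differentiating"; // value rises as AI handles routine work
  }
}
```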

Limitations

  • Skills are inferred from task analysis, not validated against skill databases
  • Individual development paths depend on current competencies
  • Organizational context affects skill prioritization

Data Sources & References

  • O*NET Database 30.1: December 2025 release. 1,016 occupations, 18,796 task statements, 57,521 job titles. onetcenter.org
  • ILO Working Paper 140: "Generative AI and Jobs: A Refined Global Index of Occupational Exposure" (2025). Six-dimensional assessment framework. ilo.org
  • Claude AI (Sonnet): Anthropic's claude-sonnet-4-20250514 model performs task classification and reasoning. anthropic.com
  • Bureau of Labor Statistics: Standard Occupational Classification (SOC) system for occupation taxonomy

Technical Implementation

Analysis Pipeline

  1. Fuzzy search matches job title to O*NET occupation (~100ms)
  2. Retrieve task statements from O*NET database (~100ms)
  3. Send tasks to Claude API with ILO framework prompt (60-90s)
  4. Parse structured JSON response with classifications and reasoning
  5. Calculate exposure statistics and infer skill implications
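
Steps 3 and 4 might look roughly like this call to the Anthropic Messages API; the prompt text, token budget, and parsing shown are placeholders, not the tool's actual values:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// iloFrameworkPrompt and taskStatements stand in for the tool's internal
// prompt and the O*NET tasks retrieved in step 2.
async function classifyTasks(iloFrameworkPrompt: string, taskStatements: string[]) {
  const message = await client.messages.create({
    model: "claude-sonnet-4-20250514", // model named in the references above
    max_tokens: 8192,                  // illustrative budget
    messages: [{
      role: "user",
      content: `${iloFrameworkPrompt}\n\nTasks:\n${taskStatements.join("\n")}`,
    }],
  });
  // Step 4: parse the structured JSON the prompt instructs the model to emit.
  const text = message.content
    .flatMap(block => (block.type === "text" ? [block.text] : []))
    .join("");
  return JSON.parse(text);
}
```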

Streaming Response

Results stream progressively via Server-Sent Events (SSE): O*NET match appears in ~1 second, followed by task list, then full classification results as AI analysis completes.
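
On the client, consuming that stream could look like the following EventSource sketch; the endpoint path and event names are hypothetical:

```typescript
// Hypothetical endpoint and event names; the tool's actual SSE contract
// is not documented here.
const source = new EventSource("/api/analyze?title=data%20analyst");

source.addEventListener("match", (e) => {
  // Arrives in ~1 second: the resolved O*NET occupation.
  console.log("O*NET match:", JSON.parse((e as MessageEvent).data));
});

source.addEventListener("tasks", (e) => {
  console.log("Task list:", JSON.parse((e as MessageEvent).data));
});

source.addEventListener("result", (e) => {
  // Full classification results once the AI analysis completes.
  console.log("Classifications:", JSON.parse((e as MessageEvent).data));
  source.close();
});
```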

Cost & Performance

Each analysis costs approximately $0.05-0.06 in API usage and takes 60-90 seconds to complete. Results are not cached, ensuring fresh analysis each time.

Important Disclaimer

This tool provides AI-generated analysis based on established occupational data and research frameworks. While it uses real O*NET data and ILO methodology, the classifications represent technical automation potential, not predictions of actual job changes.

Real-world workforce impact depends on organizational context, industry adoption rates, regulatory factors, economic considerations, and change management capabilities that vary significantly across employers and regions.