Skills in Practice 2026 Report

What enterprise data actually shows about skills-based transformation.

Cutting through the noise of skills-based transformation with operational evidence from Fortune 500 organizations. Based on verified skills data in practice: not surveys, not intentions, not aspirations.

Report Summary

This analysis identifies a set of consistent patterns in how organizations are operationalizing verified skills data to support real workforce decisions. Its value lies not only in the metrics themselves, but in the nature of the cohort from which they are drawn: a meaningful, representative cross-section of organizations actively using skills data at scale — predominantly Fortune 500 businesses operating in the Northern Hemisphere.

That matters. These are not edge cases, pilot programs, or theoretical frameworks. They are largely drawn from organizations with the scale, structural complexity, regulatory exposure, and workforce coordination challenges that make skills data genuinely difficult to implement well. As a result, the findings should be read less as usage statistics and more as practical signals of how serious organizations are turning verified skills data into operational advantage.

Whether you are early in your skills-based transformation journey or refining an existing program, the patterns and metrics in this report offer a practical point of reference: a way to benchmark your own progress against what leading organizations are actually doing in the field, not just what they aspire to.

Methodology

The analysis was conducted in two stages.

First, a broader cohort of representative organizations was reviewed to establish common patterns across different organization sizes and maturity levels. Second, a more detailed review was undertaken on a subset of larger and more mature organizations. This allowed for closer examination of operating characteristics such as skills architecture density, assessment coverage, supervisor validation, reassessment cadence, and insight-oriented usage behavior.

The result is a dataset that is both broad enough to identify recurring patterns and specific enough to support practical conclusions.

3.9m+

data points (incl. skills assessed)

8 Months

longitudinal study window

44,000

users included in cohort

The organizations included in the analysis span a broad cross-section of industries, including enterprise technology and IT services, engineering and infrastructure consulting, industrial manufacturing and automation, financial services and insurance, healthcare and medical technology, telecommunications, transport certification, and sports data.

This breadth reflects exposure to both digital-first and highly operational sectors: from knowledge-based service organizations through to complex, asset-intensive industries. The most common segment within the cohort is technology-related businesses, particularly software, cloud, and managed IT services, making it the strongest recurring industry theme across the overall mix.

Architecture

Skills Per Role

Mature organizations maintain dense, role-relevant architectures. Average: 89 skills per role. These aren’t lightweight checklists.

“Most orgs get word clouds of 20,000+ uncontextualized skills.”

Architecture depth is not determined by workforce scale alone. In the broader sample, organizations maintained an average of 938 skills and a median of 646, but skills-per-role ratios ranged dramatically, from as low as 0.02 to over 2.14. Note: this ratio is a quotient of taxonomy breadth, calculated as the total number of skills in use divided by the total number of roles in use, rather than the number of skills assigned to each individual role.
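As a simple illustration, the quotient above can be computed directly. The organization sizes and counts below are invented for the example; only the formula itself (total skills in use divided by total roles in use) comes from the report.

```python
def skills_per_role_quotient(total_skills: int, total_roles: int) -> float:
    """Taxonomy breadth: total skills in use divided by total roles in use."""
    if total_roles <= 0:
        raise ValueError("role count must be positive")
    return total_skills / total_roles

# A large organization with a lean, curated library can score far lower
# than a smaller organization with a sprawling one (hypothetical figures):
lean_large = skills_per_role_quotient(total_skills=400, total_roles=2000)
broad_small = skills_per_role_quotient(total_skills=1500, total_roles=700)

print(f"lean large org:  {lean_large:.2f}")   # 0.20
print(f"broad small org: {broad_small:.2f}")  # 2.14
```

The point of the example is that the quotient reflects curation discipline, not headcount: the larger organization lands near the bottom of the observed range while the smaller one lands near the top.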

One of the largest organizations in the sample, with more than 18,000 people, recorded the lowest skills-per-role ratio by a wide margin, while a significantly smaller organization maintained a skills library several times larger. The implication is clear: the maturity of a skills system depends less on organizational size than on the discipline with which the model is scoped, curated, and applied.

What makes this particularly noteworthy is that the larger organization with the smaller library per capita was, by objective appraisal, one of the most mature in the cohort. It had successfully evolved its approach to skills management over more than a decade at enterprise scale, refining usage, design, and experience through sustained operational discipline. A leaner architecture, in this case, was the product of maturity, not a lack of it.

Adoption

Assessment Coverage

What proportion of the targeted workforce has been assessed? Median: 82%. Skills data doesn’t create value when confined to partial rollouts.

“Only 38% even maintain an enterprise skills library”

Median 82%

Mature adoption is characterized by broad population coverage. Across the larger sample, the average proportion of the targeted workforce that had been assessed was 71.8%, with a median of 82%.

This is a significant finding. Skills models do not generate meaningful value when confined to isolated business units or partial rollout states. Their utility increases when coverage becomes broad enough to support comparison, aggregation, and action across wider teams and, ultimately, the organization as a whole. The evidence suggests that serious implementations are distinguished not simply by having a framework in place, but by embedding it across a substantial share of the workforce.

Future

Interest Coverage

Interest consistently outpaces proficiency. Max interest scores run approximately 25% higher than skill scores, surfacing where your workforce wants to grow.

25%
max interest level exceeding skill level
Average interest rating: 0.28 · Average skill rating: 0.25

Across the larger sample, max interest was approximately 25% higher than max skill, and this pattern held consistently across most observed organizations where both measures were available.

This suggests that a mature skills system is not simply describing the workforce as it stands today. It is also surfacing where employees want to grow beyond their current level of proficiency. In practical terms, this points to a latent development gap between present capability and future-directed interest: a signal that can support more targeted learning investment, internal mobility, mentoring, and workforce planning.

Governance

Supervisor Validation

Credibility in skills data is not achieved through scale alone. It requires governance. Supervisor validation rates range from self-report only to near-complete manager oversight.

“Only 28% of orgs make effective decisions to close skill gaps”

Median 47%

Governance is a defining feature of maturity. Supervisor-validated assessments averaged 41.4% in the detailed sample, with some organizations reaching as high as 97%.

This suggests that robust skills systems evolve beyond employee self-reporting alone. Organizations seeking to use skills data in higher-trust contexts, such as internal mobility, succession, workforce planning, capability risk, or strategic deployment, appear to introduce increasing levels of managerial validation and calibration.

In other words, credibility in skills data is not achieved only through scale, but through governance.

Currency

Reassessment Cadence

Skills data is only useful if it’s current. These organizations treat it as a live operating dataset, not a one-time capture.

“Only ~30% report high maturity in any skills practice”

Skills data appears to be operationally useful only when it is maintained. Across the larger sample, the average reassessment interval was 9.5 months and the median was 6 months.

This is an important marker of seriousness. Once skills data becomes stale, its strategic value deteriorates quickly. The recurring review patterns observed here suggest that higher-maturity organizations treat skills data as a live operating dataset rather than a static record. Those extracting real value tend to refresh it on a deliberate cycle, not when prompted, but as part of ongoing operational discipline.

Skill Composition

Hard Skills vs Soft Skills

Every observed library is overwhelmingly technical. The average is 94.4% hard skills. This is not vague capability mapping.

94.4%
hard skills avg

The dominant use case is practical and job-relevant rather than abstract. In the detailed sample, hard skills accounted for an average of 94.4% of tracked skills.

This is one of the most important findings in the analysis. It indicates that, in practice, skills systems are being used primarily to model operational, technical, and role-specific capability requirements. This challenges the common assumption that skills initiatives are mainly centered on broad behavioral or aspirational capability models. Across these organizations, many of them large, sophisticated enterprises, meaningful skills adoption appears to be anchored in the concrete requirements of work, embedded in the flow of work itself.

Behavior

Insight-to-Activity Ratio

Are organizations using skills data to generate insight, or just to populate records? This measures the share of actions that are analytical rather than administrative.

“Skills-based organizations are 63% more likely to achieve results”

[Chart: insight vs. administrative actions by organization, sorted by ratio, with the average ratio of 0.189 marked as a vertical line]

Mature organizations do not simply record skills data; they use it to generate insight. Across both samples, insight-generating actions represented a stable share of measured activity. In the broader sample, the insight-to-activity ratio was 0.196; in the detailed sample, 0.189. In practical terms, that equates to roughly one insight-generating action for every five measured actions.

Given that dashboard usage was excluded and newer AI-driven surfaces were not captured, this should be interpreted as a conservative measure of analytical behavior. The true figure is likely considerably higher. The implication is clear: once skills data reaches sufficient maturity, organizations begin to use it analytically rather than administratively.
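The ratio described above can be sketched as a simple event-log calculation. The event names below are illustrative stand-ins, loosely based on the insight and administrative action types described elsewhere in this report, not an actual telemetry schema.

```python
# Hypothetical event categories; real systems would define these in telemetry.
INSIGHT_EVENTS = {"view_heat_matrix", "people_finder_search", "gap_analysis"}
ADMIN_EVENTS = {"create_skill", "assign_assessment", "update_record"}

def insight_to_activity_ratio(events: list[str]) -> float:
    """Share of measured actions that are insight-generating."""
    measured = [e for e in events if e in INSIGHT_EVENTS | ADMIN_EVENTS]
    if not measured:
        return 0.0
    insight = sum(1 for e in measured if e in INSIGHT_EVENTS)
    return insight / len(measured)

# An invented log with 19 insight actions per 100 measured actions:
log = ["view_heat_matrix"] * 19 + ["create_skill"] * 81
print(insight_to_activity_ratio(log))  # 0.19, close to the reported 0.189
```

Note that, as in the report's methodology, events outside both categories (such as dashboard views) are simply excluded from the denominator, which is why the measure acts as a conservative floor.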

Future

AI Adoption

Embedded AI usage grew 4.4x over eight months with no additional training or prompting. When placed close to real work, AI becomes a driver of sustained engagement, not a novelty.

4.4×
AI usage growth over eight months — organic, with no enablement push

A further longitudinal dataset on AI usage supports the same general conclusion. Following the introduction of agentic AI functionality, usage increased from 62 use cases in November 2024 to 274 in June 2025: approximately 4.4 times growth over eight months. This occurred without additional training, user prompting, or material enhancement to the tool.

The most plausible interpretation is that initial curiosity translated into repeat usage as users recognized practical value. This suggests that embedded AI, when placed close to real work and made easy to use, can become a meaningful driver of ongoing engagement with skills data, and ongoing value creation from it, rather than a short-lived novelty feature.

Final Statements

The evidence suggests that high-functioning skills environments share a common operating pattern. They are built on detailed and role-relevant skill architectures. They achieve broad workforce coverage. They introduce validation and governance over time. They refresh the data on a recurring cycle. They surface not only current capability, but also signals of future development demand. And they use that data to generate insight, not merely to populate records.

The fact that these patterns emerge from a cohort weighted toward large Northern Hemisphere enterprises, many of them Fortune 500 organizations, gives the findings added significance. These organizations tend to have the strongest need for rigorous workforce visibility and the lowest tolerance for systems that do not scale or do not withstand operational scrutiny. Their usage patterns therefore provide a useful proxy for what serious, enterprise-grade skills adoption looks like.

The central implication for other organizations is clear. A skills initiative should not be approached as a lightweight taxonomy exercise or a one-time HR project.

The evidence suggests that it is more effective to treat it as an operating model: specific enough to reflect real work, broad enough to support enterprise visibility, governed enough to be trusted, current enough to remain useful, and active enough to drive insight and action.

Skills in practice FAQ

What is the Skills in Practice Report?

The Skills in Practice Report is published by Skills Base and draws on verified operational data from a representative cross-section of active enterprise organizations. Unlike most research in the skills-based transformation space, which relies on surveys of what executives say they plan to do, this report examines what organizations are actually doing with skills data in practice. The 2026 edition is the first in the series.

Are these pilot programs or test environments?

No. These are live, operational skills systems embedded across real workforces. More than 75% of the organizations in the sample operate enterprise-wide. The average deployment in the detailed sample reaches 3,198 people, with a median of 1,262 and a maximum of nearly 19,000.

What are the key findings?

The report identifies a consistent operating pattern across high-functioning organizations. In summary: organizations maintained an average of 89 skills per role (median 63), achieved a median of 82% workforce assessment coverage, averaged 41.4% supervisor-validated assessments, refreshed skills data on a median cycle of 6 months, tracked skill libraries that were 94.4% hard skills on average, and generated approximately one insight-oriented action for every five measured actions.

How should the report be cited?

When referencing findings, please cite as: Skills Base, Skills in Practice Report 2026: What Enterprise Data Actually Shows, March 2026. For shortened references, “The Skills in Practice Report by Skills Base” is the preferred form. Where appropriate, link back to this report’s landing page.

How is the insight-to-activity ratio measured?

This measures the proportion of usage that is insight-generating (e.g. interactions with heat matrices, People Finder, skills gap views, comparative analyses) versus administrative (e.g. creating skills, assigning assessments, updating records). The average ratio across the detailed sample was 0.189, roughly one in every five actions. Dashboard views and AI-powered surfaces were excluded from the count, so this should be read as a conservative floor.

What counts as a hard skill?

Hard skills refer to operational, technical, and role-specific capabilities: software proficiency, regulatory knowledge, equipment operation, or process expertise. The finding that 94.4% of tracked skills are hard skills indicates that these organizations are using skills systems to model the concrete requirements of work, rather than primarily tracking broad behavioral or aspirational competencies.

About Skills Base research

Skills Base research draws on over a decade of experience implementing skills management and intelligence software across enterprise environments worldwide. Our insights are grounded in live operational data from customer deployments, informed by world-leading consulting in skills-based transformation, and shaped by the practical realities of helping organizations move from skills frameworks to functioning operating models.

The authors

Picture of Nick Stanley

Nick Stanley

CEO & Founder, Skills Base

Picture of Steven George

Steven George

Chief Product Officer & Founder, Skills Base

A Skills Base Whitepaper

The Skills Base Methodology
A Framework for Skills-Based Organizations and Teams