There’s a quiet crisis happening inside many HR functions right now. It doesn’t show up in a single budget line or a dramatic resignation spike. It shows up in the slow bleed of misaligned learning programs, skills taxonomies nobody uses, and workforce capability gaps that never quite get closed despite significant investment in technology that was supposed to fix exactly that.
If your Skills Tech isn’t delivering, you’re not alone. But the problem probably isn’t the technology. It’s how it’s being set up, measured, and connected to the rest of your HR ecosystem.
That’s the conversation Skills Base CEO Nick Stanley and Head of Product Steven George are bringing to their upcoming live webinar, Beyond the Hype: What HR Leaders Get Wrong About Skills Technology, broadcasting live from People Matters TechHR Singapore 2026. No grand slideshow. No stock screenshots. Just real product, real data, and the kind of straight talk that L&D leaders actually need.
WEBINAR | Fri, March 27th 2026 @ 12PM AEDT (GMT+11)
For HR leaders ready to make Skills Tech work.
Real metrics, fresh research, and zero fluff.
The Skills-Based Model Has Stopped Being a Theory
Something shifted in the last twelve months. The skills-based organization, once a framework that lived mostly in conference keynotes and analyst whitepapers, is starting to show up in the real world. Early adopters are beginning to separate from the pack, and the conference circuit reflects it.
TechHR Singapore is kicking things off at the sharp end of the Asia-Pacific market. Learning Technologies London has set the tone for Europe. ATD’s International Conference in Los Angeles, UNLEASH World in Paris, SHRM26 in Orlando, the CIPD Festival of Work, and the HR Technology Conference are all building their agendas around the same core tension: organizations know skills matter, but most still can’t measure them in any meaningful way.
What’s striking is how consistent the frustration is, from Singapore to London to Las Vegas. HR leaders are not short of technology. They’re short of clarity on what to do with it.
That’s precisely where new research becomes critical. The Skills Base team is releasing fresh best-practice research ahead of this webinar. Sign up to attend and receive a free copy of the new skills research report, grounded in real customer behavior, not theory, opinion, or consultant advice. It draws on real-world implementation data to show what taxonomy complexity looks like at scale, which assessment patterns predict genuine capability growth, and where organizations consistently underinvest in the foundations that make everything else work.
Where Most Organizations Actually Are Right Now
The skills tech conversation often assumes a baseline that simply doesn’t exist yet for most HR teams. The reality is a maturity spectrum, and it’s wider than most vendors acknowledge.
At one end are organizations still operating with no structured skills data at all. Capability is managed through intuition, tenure, and job title: informal proxies that tell you almost nothing about what people can actually do or where gaps are forming. One step further along is the spreadsheet cohort, arguably the most common position in the market right now. Skills data exists, but it’s scattered across department-owned Excel files, outdated SharePoint folders, and L&D trackers only one person knows how to interpret. Different teams work from different frameworks and different definitions of what a skill even means. The data can’t be aggregated, compared, or connected to any broader workforce strategy.
Further along still are organizations that have tried to solve this through their HRIS. It seems logical, given that the HRIS is already the system of record for people data. The problem is that HRIS platforms were built to manage employment data: contracts, compensation, org structures, compliance. They were not built to handle the complexity of a verified, living skills data model. Skills in an HRIS are typically flat lists with no proficiency framework, no way to distinguish between self-assessed and manager-verified capability, and no mechanism for keeping data current as roles evolve. What you end up with is a skills field that looks like it’s being used but isn’t producing intelligence anyone can act on.
This is the gap that purpose-built skills technology exists to close. Understanding where your organization sits on this spectrum is the first honest step toward closing it.
Theme 1: Your Skills Taxonomy Is Probably Either Too Thin or Too Deep
Before you can close a skills gap, you need to know what skills you’re actually measuring and whether that list is fit for purpose. In practice, this is where most organizations quietly get it wrong.
When Skills Base performs benchmarks, two patterns keep emerging: taxonomies too sparse to drive meaningful L&D decisions, and taxonomies so complex they collapse under their own weight. A taxonomy with too few skills gives a false sense of coverage. One with too many becomes impossible to assess accurately or maintain over time.
The metrics that matter are taxonomy size overall, average skills per user, and average skills per role. These three numbers tell you whether your skills architecture reflects the organization you actually have, or the one someone imagined when they first built the framework.
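As a rough illustration of how these three benchmarks relate, here is a minimal sketch against a made-up skills inventory. The data shapes and names are hypothetical, not the Skills Base data model:

```python
# Hypothetical skills inventory: illustrative only, not the Skills Base schema.
taxonomy = {"Python", "SQL", "Stakeholder Management", "Data Visualization", "Facilitation"}

# Skills currently held by each person.
user_skills = {
    "alice": {"Python", "SQL"},
    "bob": {"SQL", "Stakeholder Management", "Facilitation"},
}

# Skills expected of each role.
role_skills = {
    "Data Analyst": {"Python", "SQL", "Data Visualization"},
    "L&D Partner": {"Stakeholder Management", "Facilitation"},
}

taxonomy_size = len(taxonomy)
avg_skills_per_user = sum(len(s) for s in user_skills.values()) / len(user_skills)
avg_skills_per_role = sum(len(s) for s in role_skills.values()) / len(role_skills)

print(taxonomy_size, avg_skills_per_user, avg_skills_per_role)  # 5 2.5 2.5
```

Compared against benchmarks for organizations of a similar size, numbers like these can flag whether a taxonomy is too thin to be useful or too deep to maintain.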
In the webinar, Steven will demonstrate how Sam and Lens AI can assess your taxonomy against real-world benchmarks, pulling data from your environment, analyzing it against the new research, and surfacing where your taxonomy is over-engineered, under-built, or misaligned. If your skills data foundation is off, everything built on top of it is off too: learning recommendations, gap analysis, and talent mobility decisions all rely on the quality of what sits underneath.
Theme 2: L&D Effectiveness Has to Be Measured as a Function of Skill Gap
Here’s a question worth sitting with: how do you actually know if your L&D programs are working?
For most organizations, the honest answer is not well enough. Completion rates tell you who showed up. Post-training surveys tell you how people felt. Neither tells you whether skills gaps are actually closing, and that’s the only measure that ties L&D investment to business outcomes. It’s the conversation happening across every major event this year, from Learning Technologies to SHRM, and the new Skills Base research addresses it directly with data from real implementations.
Effective L&D planning requires three things to work together: regular self-assessments, supervisor assessments that provide a second lens on capability, and a consistent cadence so you’re working from current data rather than an eighteen-month-old snapshot. The insights here focus on assessment volume, the ratio of self to supervisor assessments, and how frequently assessments are refreshed: metrics that reveal whether your organization has a genuine culture of skills measurement or just a one-time compliance exercise.
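The three measures above can be sketched from a simple assessment log. This is illustrative only; the field names and the twelve-month staleness threshold are assumptions, not the Skills Base product:

```python
from datetime import date

# Hypothetical assessment log; field names are illustrative assumptions.
assessments = [
    {"user": "alice", "type": "self",       "date": date(2026, 1, 10)},
    {"user": "alice", "type": "supervisor", "date": date(2026, 1, 17)},
    {"user": "bob",   "type": "self",       "date": date(2024, 6, 2)},
    {"user": "bob",   "type": "self",       "date": date(2025, 11, 20)},
]

today = date(2026, 3, 1)

volume = len(assessments)
self_count = sum(1 for a in assessments if a["type"] == "self")
supervisor_count = volume - self_count
ratio = self_count / supervisor_count  # self : supervisor (assumes at least one supervisor assessment)

# Cadence proxy: assessments older than 12 months are treated as stale.
stale = sum(1 for a in assessments if (today - a["date"]).days > 365)

print(volume, ratio, stale)  # 4 3.0 1
```

A high self-to-supervisor ratio or a growing stale count is the kind of signal that separates a genuine measurement culture from a one-off exercise.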
In the live demo, Steven will show how to set skill targets for roles and use Lens AI to visualize capability gaps across an entire team. This is often the moment where L&D planning shifts from reactive to strategic: designing programs based on what the data shows people actually lack, and tracking whether those gaps close over time. Not hours of training delivered. Gaps closed.
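In its simplest form, the gap analysis described above is target proficiency minus assessed proficiency per skill. The sketch below uses a hypothetical 0-5 proficiency scale and invented role targets; it is not the Lens AI implementation:

```python
# Illustrative only: role skill targets and assessed proficiencies on a 0-5 scale.
role_targets = {"Python": 4, "SQL": 4, "Data Visualization": 3}
assessed = {"Python": 2, "SQL": 4}  # skills never assessed default to 0

# Gap per skill: how far current capability falls short of the role target.
gaps = {
    skill: max(target - assessed.get(skill, 0), 0)
    for skill, target in role_targets.items()
}

print(gaps)  # {'Python': 2, 'SQL': 0, 'Data Visualization': 3}
```

Aggregating these gaps across a team is what turns L&D planning from reactive to strategic: programs target the largest measured gaps, and success is the gap shrinking over time.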
Theme 3: AI Isn’t a Feature. It’s the Multiplier.
Every vendor on every conference stage this year is talking about AI. Most mean they’ve added a chatbot to a dashboard that already existed.
What Skills Base is demonstrating is something different: AI as the connective tissue between skills data and the workforce intelligence your team needs to act on. But there’s an important caveat. AI is only as powerful as the data underneath it. An organization still managing skills in spreadsheets, or pulling flat lists from an HRIS never designed for this purpose, won’t get meaningful AI-driven intelligence. The foundation has to be there first. AI amplifies good skills data. It doesn’t rescue bad data or create structure where none exists.
For organizations that have built that foundation, the opportunity is significant. The live demo in this section is deliberately open-ended.
Steven will use Skills Base’s AI capabilities to surface workforce intelligence that previously would have required a data analyst, a complex reporting setup, and weeks of lead time. Whether that’s identifying hidden capability clusters, predicting emerging gaps, or connecting skills data to learning recommendations that actually make sense, the demo will make the case better than any slide deck could.
Why This Webinar Matters Right Now
Looking across the 2026 conference calendar, from TechHR Singapore and Learning Technologies to UNLEASH, ATD, SHRM, and CIPD, every event is circling the same question: what does a skills-based organization actually look like when it’s working?
The debate has moved past whether skills matter. It’s now about execution, and the gap between organizations getting it right and those still fumbling with the foundations is widening fast.
Whether you have no structured skills data, siloed spreadsheets, an HRIS that’s reached its limits, or a purpose-built platform that isn’t delivering yet, this session meets you where you are. Nick and Steven will share preliminary access to the new Skills Base research ahead of the public release, giving attendees the evidence to move forward with confidence.
Live from TechHR Singapore 2026. No theory. No fluff. Just the real thing.
WEBINAR | Fri, March 27th 2026 @ 12PM AEDT (GMT+11)
For HR leaders ready to make Skills Tech work.
Real metrics, fresh research, and zero fluff.