The Learning Ideas Conference

Toward a Learning and Performance Science

Big data and data science

Have you heard the buzzwords around big data? You know, digital transformation, artificial intelligence (AI), machine learning (ML), scalability, low-code/no-code solutions, predictive analytics, shift-left, contextualization, and so many more. An expert recently published a list of 101 terms that data analysts should know.

The aim of data science is to continuously extract gems of wisdom from zettabytes of data to improve business processes and outcomes. It requires intelligent data pipelines and specialized technologies to “…unify, manage, and visualize the flow of structured business data…with the goal of improving the overall efficiency of a business.” As computing devices became relatively inexpensive and powerful around the turn of the millennium, businesses captured and stored volumes of data that overwhelmed their capacity and know-how to make sense of it all. In the following decades data science evolved at a breathtaking rate.

Today data science is a coveted discipline, taught across colleges and universities, and the subject of countless insights from Gartner, Forrester, Nielsen, IDC, and other industry experts. As a result, there has been an explosion of data science products and services, all quoting those experts. The most common of the many buzzwords include “digital transformation,” “democratization,” “low-code/no-code solutions,” “contextualization,” “continuous compliance,” “shift left,” and “edge computing.” Search on “shift compliance left,” for example, and you’ll get hundreds of products riding Gartner’s and Forrester’s coattails. Nothing is more satisfying to a technology vendor than landing in the upper right of Gartner’s Magic Quadrant, or even just getting on their radar.

Digital transformation

Digital transformation is all about extracting wisdom from data through the pipeline: Data consists of small strings of digitized facts. Information is data with context. Knowledge is actionable information. Insight is generality abstracted from knowledge, where patterns of success emerge from individual solutions. Wisdom is insight rolled up into sound operating principles for continuous improvement. Data science products (and services) interact with one or more nodes of the data pipeline to ultimately arrive at wisdom: 

Data -> Information -> Knowledge -> Insight -> Wisdom

Some products are holistic, offering “full stack” solutions that span the entire pipeline. Organizations capture almost everything, and with such tools solid business decisions can be made without a human ever having to inspect the underlying data elements.
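
To make the pipeline concrete, here is a minimal sketch that models the five stages as composable functions. The stage names, sample records, and transformations are illustrative assumptions, not any vendor’s API.

```python
# A minimal sketch of the data -> wisdom pipeline as composable stages.
# The stage functions and sample records are illustrative, not a product API.

from functools import reduce

def to_information(data):
    """Attach context to raw facts (data -> information)."""
    return [{"fact": d, "context": "Q3 help-desk tickets"} for d in data]

def to_knowledge(information):
    """Make information actionable (information -> knowledge)."""
    return [dict(item, action="route to password-reset self-service")
            for item in information if "password" in item["fact"]]

def to_insight(knowledge):
    """Abstract a pattern from individual solutions (knowledge -> insight)."""
    return {"pattern": "password resets dominate ticket volume",
            "evidence_count": len(knowledge)}

def to_wisdom(insight):
    """Roll insight up into an operating principle (insight -> wisdom)."""
    return f"Automate the top recurring issue first: {insight['pattern']}."

pipeline = [to_information, to_knowledge, to_insight, to_wisdom]
raw_data = ["password reset request", "printer jam", "password expired"]
print(reduce(lambda value, stage: stage(value), pipeline, raw_data))
```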

Toward an L&P science

Learning & performance management has surely benefited from advances in data science. But what about L&P content? You know, sentences, paragraphs, media, and all those things we use to develop solutions. Can we borrow from data science to evolve an L&P science that extracts wisdom from content? Could there be an intelligent content pipeline? Perhaps. Some ingredients are available to create an analogous L&P science, but so far there is no agreed upon recipe, and we don’t really know the nature of the cake. Consider, then, a number of data science attributes as they may apply to an L&P science, and where gaps and opportunities might emerge.

Content and knowledge codification. Crisp delineations between content and knowledge are mostly nonexistent, obscured, or misunderstood. The repositories that make up the L&P ecosystem—learning modules, knowledge articles, content nodes, application help, virtual assistants, job aids, checklists—are mostly separate entities in distinct domains, scattered among SharePoint, Teams, Google Docs, and so many other digital filing cabinets.

Content object reuse. Although SCORM and xAPI have fostered great system interoperability for L&P objects, the vision of taxonomy-based object reuse and assembly has not been realized. For example, the relationships between raw content nodes and the published learning objects they ultimately become are lost across development lifecycles.
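
As a thought experiment, here is a minimal sketch of taxonomy-based reuse with provenance preserved, assuming a simple in-memory registry. The field names, taxonomy terms, and content are illustrative, not a standard.

```python
# A minimal sketch of taxonomy-based content object reuse with provenance.
# Field names and taxonomy terms are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ContentObject:
    object_id: str
    body: str
    taxonomy: set[str] = field(default_factory=set)        # e.g., competency tags
    derived_from: list[str] = field(default_factory=list)  # provenance links

registry: dict[str, ContentObject] = {}

def register(obj: ContentObject) -> None:
    registry[obj.object_id] = obj

def find_by_tag(tag: str) -> list[ContentObject]:
    """Locate reusable objects by taxonomy term instead of by file location."""
    return [o for o in registry.values() if tag in o.taxonomy]

def assemble(module_id: str, tag: str) -> ContentObject:
    """Assemble a learning object from tagged nodes, keeping provenance."""
    parts = find_by_tag(tag)
    return ContentObject(
        object_id=module_id,
        body="\n".join(p.body for p in parts),
        taxonomy={tag},
        derived_from=[p.object_id for p in parts],  # the link usually lost downstream
    )

register(ContentObject("node-1", "Reset a password in under a minute.", {"account-security"}))
register(ContentObject("node-2", "Recognize a phishing email.", {"account-security"}))
module = assemble("module-security-101", "account-security")
print(module.derived_from)  # ['node-1', 'node-2']
```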

Content quality management. L&P has no content hygiene comparable to what data quality management provides for data. While some such protocols exist in knowledge management systems—synonyms, stop words, concepts, keywords, proximity, relevancy, and the like—what about the rest of the L&P ecosystem? Publish exactly the same content to SharePoint and Oracle Knowledge and see how quickly you can surface it with the same search string in each system. Not many organizations actually tune their SharePoint or KM systems for search optimization.
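
The sketch below illustrates the kind of hygiene the paragraph mentions, a crude stop-word, synonym, and relevancy pass. The word lists and scoring are illustrative assumptions, not any product’s tuning interface.

```python
# A minimal sketch of search hygiene: stop words, synonym expansion, and a
# crude relevancy score. Word lists and documents are illustrative.

STOP_WORDS = {"the", "a", "an", "to", "of", "and", "in", "how", "your"}
SYNONYMS = {"password": {"passcode", "credentials"}, "reset": {"recover", "restore"}}

def normalize(text: str) -> set[str]:
    """Lowercase, drop stop words, and expand synonyms."""
    terms = {t for t in text.lower().split() if t not in STOP_WORDS}
    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

def relevancy(query: str, document: str) -> float:
    """Fraction of expanded query terms found in the document."""
    q, d = normalize(query), normalize(document)
    return len(q & d) / len(q) if q else 0.0

docs = {
    "kb-17": "How to recover your credentials",
    "kb-42": "Printer maintenance schedule",
}
query = "reset the password"
print(sorted(docs, key=lambda k: relevancy(query, docs[k]), reverse=True))
```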

Semantic encoding. Quality data elements fall mostly into unambiguous categories when normalized. Content, on the other hand, is semantic. The protocols of the Semantic Web are applicable to the entire L&P ecosystem, especially those leading to Ontology and Trust, the top tiers of Berners-Lee’s Semantic Web layer cake. Where does L&P stand with all of this, and with related advances such as machine learning and natural language processing? Can our machines find, read, and reason about a unit of content? We are certainly not there across the L&P ecosystem.
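
To show what “reason about a unit of content” could look like, here is a minimal sketch using subject-predicate-object triples in the spirit of the Semantic Web layers cited above. The vocabulary (teaches, requires, relevantTo) is an illustrative assumption, not a published ontology.

```python
# A minimal sketch of semantic encoding as triples, plus one reasoning rule.
# The predicates and node names are illustrative, not a real ontology.

triples = {
    ("module:security-101", "teaches", "competency:phishing-awareness"),
    ("persona:claims-agent", "requires", "competency:phishing-awareness"),
}

def infer_relevance(kb: set[tuple[str, str, str]]) -> set[tuple[str, str, str]]:
    """Rule: if a module teaches a competency that a persona requires,
    the module is relevant to that persona."""
    inferred = set()
    for module, p1, competency in kb:
        if p1 != "teaches":
            continue
        for persona, p2, needed in kb:
            if p2 == "requires" and needed == competency:
                inferred.add((module, "relevantTo", persona))
    return inferred

print(infer_relevance(triples))
# {('module:security-101', 'relevantTo', 'persona:claims-agent')}
```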

Continuous content monitoring and evaluation. Data science includes interoperable frameworks (e.g., for compliance) that enable continuous monitoring, prediction, and improvement. Gartner’s “shift left” means evaluating data at the earliest possible moment, and continuously, through an intelligent pipeline. Evaluating data late in the game by extracting, analyzing, and reporting in spreadsheets and presentation decks is not only inefficient; the results are often obsolete within moments of publication. Yet L&P processes still extract data—mostly from learner evaluations—and generate point-in-time reports in exactly those spreadsheets and presentation decks. Does anyone really get to Kirkpatrick levels three and four? Data science can now predict customer behavior and buying habits. Where are “shift left” and “predictive analytics” for the L&P ecosystem?
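
One way to picture shifting evaluation left is to run checks when content is authored rather than after publication. The sketch below is only illustrative; the specific checks, thresholds, and metadata fields are assumptions.

```python
# A minimal sketch of "shift left" for content: lint a draft at authoring time.
# The checks, thresholds, and required metadata are illustrative assumptions.

import re

REQUIRED_METADATA = {"owner", "review_date", "competency"}

def lint_content(body: str, metadata: dict) -> list[str]:
    """Return a list of issues found in a draft content unit."""
    issues = []
    missing = REQUIRED_METADATA - metadata.keys()
    if missing:
        issues.append(f"missing metadata: {sorted(missing)}")
    words = body.split()
    if words and sum(len(w) for w in words) / len(words) > 7:
        issues.append("average word length suggests dense prose; consider simplifying")
    if re.search(r"http://", body):
        issues.append("insecure http link found")
    return issues

draft = "Configure the authentication subsystem via http://intranet/legacy-guide."
print(lint_content(draft, {"owner": "it-training"}))
```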

Privacy, compliance, and trust. What is L&P compliance vis-à-vis GDPR and the litany of privacy regulations that have emerged? L&P is often concerned with personalized learning, where the system in which you work observes you, calls out performance gaps, then delivers snippets of just-for-you remediation, in real time. It aims to continuously improve business performance through human performance. But are we able to embrace privacy compliance such that it does not defeat learning and performance individualization? “Anonymization”—a key requirement of privacy regulation—is the antithesis of personalization, no? How do we know when we are throwing the personalization baby out with the privacy wash water? How are compliance protocols applied across the entire L&P ecosystem?
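
One illustration of that tension: pseudonymize the learner’s identity while keeping the competency-gap signal that personalization needs. This is a sketch of the trade-off only, with an invented record format, and it makes no claim of GDPR compliance.

```python
# A minimal sketch of pseudonymization that keeps the personalization signal.
# Illustrative only; not a claim of compliance with GDPR or any regulation.

import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # illustrative; keep real keys in a secrets manager

def pseudonymize(learner_id: str) -> str:
    """Stable keyed pseudonym: the same learner maps to the same token,
    but the token cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, learner_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {
    "learner_id": "jane.doe@example.com",
    "competency_gap": "phishing-awareness",
    "recommended_module": "security-101",
}
safe_record = {**record, "learner_id": pseudonymize(record["learner_id"])}
print(safe_record)
```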

Predictive L&P. Learners and performers are the customers of the L&P ecosystem. Why, then, are there no predictive L&P analytics like those we have, for example, in predictive advertising? Who are your best candidates for filling competency gaps, for maximum business improvement? Too bad there is no L&P analogue to the way data science combines first- and third-party data to target the most promising marketing prospects. With such predictions about L&P customers, businesses would enjoy lower costs and a much higher return on the L&P investment.
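
As a rough illustration of what such prediction could look like, the sketch below scores candidates with a generic classifier (scikit-learn’s LogisticRegression as a stand-in). The features, synthetic history, and model choice are assumptions for illustration, not a validated approach.

```python
# A minimal sketch of predictive L&P: score performers on how likely a
# given intervention is to pay off. Data and features are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: prior assessment score, months in role, escalations last quarter.
X_history = np.array([
    [0.45, 3, 9],
    [0.90, 24, 1],
    [0.55, 6, 7],
    [0.85, 18, 2],
    [0.40, 2, 8],
    [0.80, 30, 3],
])
# 1 = closing this competency gap led to a measurable KPI improvement.
y_history = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_history, y_history)

# Score current performers: who is the best candidate for the intervention?
candidates = {"p-101": [0.50, 4, 6], "p-102": [0.88, 20, 1]}
scores = {pid: model.predict_proba([feats])[0, 1] for pid, feats in candidates.items()}
print(max(scores, key=scores.get), scores)
```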

Contextualization. These days analysis tools contextualize data to expose relationships and provide deeper insights, often with meaningful graphics for visualization. It is a critical step in the journey from data to wisdom. According to Gartner, “By 2023, graph technologies will facilitate rapid contextualization [of data] for decision making in 30% of organizations worldwide.” What about L&P objects? To what extent are they ever contextualized? How is it possible to enable, measure, and improve performance if there are no explicit mappings between performance taxonomies and persona context? Without contextualization, prediction is severely limited or impossible.
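
A small sketch of what an explicit mapping could look like: a graph that connects persona context to tasks, competencies, and content, in the spirit of the graph technologies Gartner describes. The node names, relations, and traversal are illustrative assumptions.

```python
# A minimal sketch of contextualization as an explicit persona -> task ->
# competency -> content graph. Nodes and relations are illustrative.

edges = [
    ("persona:claims-agent", "performs", "task:triage-claim"),
    ("task:triage-claim", "needs", "competency:fraud-indicators"),
    ("competency:fraud-indicators", "taught-by", "module:fraud-101"),
]

graph: dict[str, list[str]] = {}
for source, _relation, target in edges:
    graph.setdefault(source, []).append(target)

def content_in_context(persona: str) -> list[str]:
    """Walk the graph from a persona to the content that supports its tasks."""
    found, frontier = [], [persona]
    while frontier:
        node = frontier.pop()
        if node.startswith("module:"):
            found.append(node)
        frontier.extend(graph.get(node, []))
    return found

print(content_in_context("persona:claims-agent"))  # ['module:fraud-101']
```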

Democratization. Democratization is about putting low-code/no-code tools in the hands of subject matter experts and generalists so they can perform tasks that would otherwise require technology-savvy specialists. L&P has enjoyed decades of democratization in authoring and management tools, but not for the content pipeline. Too many L&P tools are desktop utilities rather than enterprise solutions, so the flow of content from development to publication breaks critical connections along the way. Organizations serious about enterprise L&P content and knowledge management still need deep technical savvy to pull it off. There is little democratization in L&P content pipeline tools, let alone an intelligent content pipeline.

Stay tuned…

Is L&P content science moving in a positive direction? Somewhat, albeit well behind the data science curve. To be sure, L&P content is far more complex than structured data, and it is voluminous in its own right. Perhaps with improved cross-discipline cooperation the mostly disparate practices will converge to produce a cogent L&P science. Imagine stakeholders from the Semantic Web, knowledge management, learning management, compliance, edge and cloud computing, and more convening an L&P science summit to establish standards, technology frameworks, and incentives for cross-discipline cooperation. That would be a great starting point. We’ll see.

I will talk about this topic in much more detail at TLIC in June (“Learning and Performance Analytics for the Digital Transformation of the Knowledge Ecosystem”), including case studies. It’s a glorious time for L&P content science to mature. If only the solution space were as crowded as it is today for data science.