
Given the speed of generative AI development and adoption over the last two years, and despite growing interest in retooling the notion of workforce readiness, we are on the wrong track.
Recently, I attended a presentation by a large corporation on the future of work in America. The central idea was that workers in knowledge-based industries such as higher education shouldn't fear for their jobs, provided they develop AI literacy skills alongside uniquely human capacities such as empathy, aesthetic judgment, and critical thinking. The premise was that generative AI (GenAI) platforms are unlikely to replicate and replace key "soft skills," skills we've been discussing at least since the emphasis on STEM (Science, Technology, Engineering, and Mathematics) education in the late 20th century. In other words, the demands of the AI-driven workforce will require the acquisition of new technology-related skills, but the manner in which we prepare people for the workforce won't change fundamentally.
They are wrong: attempting to carve out a role for human beings in an AI-augmented workforce along largely conventional lines is the surest way to create a workforce ill-prepared for an AI future. Already, generative AI platforms such as Grammarly and Pi AI emulate emotional intelligence, and Grammarly is used to ensure that human communications are consistent, polite, relevant, and on-brand, all of which reflects AI's encroachment on "uniquely human skills." Increasingly, AI technologies (such as OpenAI's GPT-4.5) are behaving in ways one would expect of emotionally intelligent, thoughtful human agents, at least within circumscribed contexts.
In a widely discussed study (and contrary to conventional wisdom), generative AI proved remarkably capable of dislodging people's beliefs in conspiracy theories about the coronavirus and election fraud, which may also suggest that AI can excel in communication where humans cannot. Another study explores the efficacy of GenAI in personalized persuasion, based on its ability to tailor messaging to the psychological profiles of recipients, even with limited prompt information. These and other studies may well suggest that faith in soft skills to ensure an enduring role for humans in an AI-augmented workplace is misguided.
Moreover, if prognostications prove accurate, agentic AI will soon be able to engage customers, students, and human employees in ways that are more empathetic, clear, and tailored to personal needs than other human beings can. In fact, the promise of agentic AI seems predicated on this idea, as increasingly autonomous AI systems will need to make value-based, emotionally intelligent judgments to meet the needs of their human counterparts. The notion that "emotional intelligence, critical thinking, leadership, and complex problem-solving are innately human attributes…" is at best a statement of a present limitation, not the basis for a plan to upskill industry employees or to adjust the notion of "workforce readiness."
Even if the aspirations of agentic AI prove difficult to accomplish in the short term, we have already moved past the idea that AI literacies, i.e., the skills required to use GenAI platforms responsibly and effectively, divide cleanly into "technical skills" and "human skills." Of course, to the extent that human beings are part of a workforce and must collaborate with each other, empathy, emotional intelligence, and critical thinking will always be essential to workforce readiness. But AI literacy requires a tectonic shift in the way we think about what it means to interface with and leverage technologies in industry and education: it requires that education embrace a systems-theoretic approach to teaching that encourages cognitive and emotional flexibility, curiosity, and adaptable problem-solving.
As a field of study, General Systems Theory (GST) emerged as an interdisciplinary framework in the mid-20th century with thinkers such as Norbert Wiener, Niklas Luhmann, and Ludwig von Bertalanffy, all of whom were interested in questions regarding the nature of complex systems, their guiding principles, and their interrelations. The core idea is that systems, whether biological, social, or technological, function through interdependent components, feedback loops, and emergent properties. Most importantly, systems are dynamic, which means that their study requires an understanding of complexity and change, as well as a tolerance for the unexpected, all of which are essential to working with generative AI systems such as Large Language Models (LLMs).
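Two of these terms, "feedback loops" and "dynamic systems," are easier to grasp with a toy model in hand. The Python sketch below is purely illustrative; the function names, parameters, and scenarios are my own invention, not drawn from the GST literature. It contrasts a negative-feedback loop that settles toward a set point with an equally simple dynamic system whose trajectory becomes effectively unpredictable as a single parameter changes.

```python
# Illustrative sketch only: two core systems-theory ideas in miniature.
# All names and parameters here are invented for the example.

def thermostat(temp: float, setpoint: float = 20.0,
               gain: float = 0.5, steps: int = 10) -> list[float]:
    """Negative feedback: each step feeds the output (temperature) back
    into the input, correcting a fraction of the remaining error."""
    history = [round(temp, 2)]
    for _ in range(steps):
        temp += gain * (setpoint - temp)
        history.append(round(temp, 2))
    return history

def logistic_map(x: float = 0.2, r: float = 3.9, steps: int = 10) -> list[float]:
    """A one-line dynamic system: stable for small r, effectively
    unpredictable near r = 4, even though the rule itself never changes."""
    history = [round(x, 3)]
    for _ in range(steps):
        x = r * x * (1 - x)
        history.append(round(x, 3))
    return history

print(thermostat(10.0))  # settles toward the 20.0 setpoint: feedback absorbs disturbance
print(logistic_map())    # the same simplicity of rule, yet the trajectory defies prediction
```

The contrast is the point: simple, fixed rules can yield either stability or surprise, which is why systems thinking prizes tolerance for the unexpected.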
To reorient our understanding of workforce readiness and educational practice toward GST is, among other things, to recognize that the conventional distinction between "human skills" and "AI capacities" is increasingly artificial. Instead, we must envision the relationship between humans and AI as part of a complex system: an interrelationship between biological and artificial intelligence, each of which shapes the other through interaction and engagement. Adroit AI users must accept that as AI systems evolve, so too must the form and nature of engagement with those systems, and rapidly. Tolerance for uncertainty and the capacity to reframe and reshape problems and solutions must coexist with a penchant for clarity, rigor, and responsibility. That is, there must be a dynamic blending of traditional skills and foundational epistemologies with the emerging needs of complex systems.
It is tempting to say, and it often is said, that education and training must now begin at the higher levels of Bloom's Taxonomy, with analysis, evaluation, and creativity rather than memorization and understanding. On this view, the emergence of GenAI allows the more mechanical aspects of our workflow to be automated, thereby freeing us to exercise higher intellectual capacities. Consequently, education should focus on cultivating these "higher" aspects of cognitive functioning.
But this is not, and cannot be, the point, since Bloom's well-worn framework codifies longstanding educational practice, thereby reinforcing traditional ideas about teaching and learning. Differently put, Bloom's Taxonomy is what might be called a legacy epistemology, i.e., a traditional theory of learning that emerged within a particular cultural and historical moment. Because such theories are predicated on the belief that education is the accumulation of largely stable fields of information, they are likely to overlook the dynamic interconnections between AI technologies and their human users. As a result, framing AI literacy and AI engagement in terms of constructs such as Bloom's Taxonomy pins our thinking to increasingly irrelevant ideas and practices. It is not that GenAI allows us to be more creative and analytical in the traditional sense; it is that we must now accustom ourselves to the fact that notions such as creativity and analysis are changing in light of the complex relationships between technology and people.
What does this mean for the future of workforce readiness? Among other things, it places a premium on "systems-level" skills that will be essential for engaging with GenAI and reorienting organizations to capitalize on AI integration. These include cognitive flexibility, emotional stability, curiosity, context-sensitivity, critical evaluation, and adaptive problem-solving, all of which must be understood in the dynamic context of human-AI engagement. People with these skills will be good at identifying implicit assumptions and biases in AI output, understanding how to engage GenAI in ways that capitalize on its strengths, and tailoring the products of human-AI collaboration to the purposes at hand.
At the end of the day, the workforce of the future needs to be intellectually agile, emotionally adaptable, and curious. It is a systems-theoretic approach to training and education that is most likely to get us there.