
AI Literacy Needs Human Judgment

Dr. Shlomo E Argamon

Dean, Touro GST
Associate Provost for AI

2026-03-02


A critical look at the Department of Labor’s AI literacy framework and why true AI literacy must center human judgment, leadership, and accountability.



The US Department of Labor (DOL) has released a new framework for AI literacy, intended to improve education for changing workforce needs. A shared national concept and baseline are a great idea.



But it regrettably misses the mark, even on its own terms.



The DOL wants to develop adaptability, judgment, and long-term workforce resilience. Yet it defines AI literacy as centered on understanding how AI works and how to use it responsibly. That will indeed build awareness. But it will not reliably produce good decision-making.



Knowing that AI can give wrong answers is not the same as being able to exercise good judgment when it does. Knowing about AI hallucinations is not the same as catching them when it matters. Knowing how to prompt is not the same as knowing how to structure work processes around AI, to apportion responsibility appropriately, or to decide when not to use AI.



Most importantly, the DOL framework treats human skills as, at best, adjacent to AI literacy rather than central to it.

However, developing good judgment and leadership skills, exploring context, and establishing accountability are not just add-ons to AI literacy. They are at its core.



The DOL framework seeks to define a baseline for AI literacy. But it fails to grapple with the fundamentally human aspects of the question.



Can a literacy built mainly on conceptual understanding of AI deliver the adaptability and resilience we need?

I think not.

