The Long Tail of Undone Tasks and Unmet Needs
A previous experiment showed many repetitive tasks could be automated. Today’s experiment is simple—mundane even—but it highlights how AI does more than automate repetitive tasks; it enables the completion of previously impractical ones.
These are tasks where the effort to learn them outweighs the benefits and where no third-party solution exists. Often, these tasks remain undone, resulting in an exceptionally long tail of incomplete tasks and unmet needs.
The Pains of Scientific Communication
One domain where these unmet needs are especially evident is scientific communication, where researchers struggle with tools that don’t fully support their work.
Scientists rely on math, figures, tables, code snippets, interactive apps, and citations to communicate ideas effectively. Many communication platforms designed for general audiences fail to meet these needs.
For example, this website uses Hugo as its static site generator. Hugo has great support for math, figures, tables, and notebooks but limited support for citations.
Before AI, you had to endure these limitations or submit a feature request, hoping a contributor would implement the solution. With AI, you may be able to build the solution yourself.
In my case, I wanted citations on this website to support back links (↩), a small but delightful feature that helps readers stay in the flow.
Here is an example; click through, then come back:
Ask not what AI can do, but what AI should do (Lubars and Tan 2019)
After some research, I learned I could implement this by writing a filter for Pandoc, a universal document converter, in the Lua programming language.
I had never written a line of Lua, and learning it from scratch would have taken more effort than it was worth. With Claude 3.5 Sonnet’s help, I wrote and deployed a working filter within minutes.
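To give a flavor of what such a filter looks like, here is a minimal sketch of the approach—not the filter I actually used. It wraps each inline citation in an anchor, then appends a back link (↩) to the matching entry in the bibliography Div that Pandoc’s citeproc emits (id `refs`, entries with ids `ref-<key>`). The `cite-<key>` anchor naming is my own invention for this sketch.

```lua
-- Sketch of a Pandoc Lua filter adding back links (↩) to citations.
-- Relies on Pandoc's default traversal order: Inline filters (Cite)
-- run before Block filters (Div), so anchors exist before we link to them.

local cite_anchors = {}  -- citation key -> anchor id of its first occurrence

-- Wrap each citation's first occurrence in a Span with a unique id,
-- so the bibliography entry has somewhere to link back to.
function Cite(el)
  local key = el.citations[1].id
  if not cite_anchors[key] then
    local anchor = "cite-" .. key
    cite_anchors[key] = anchor
    return pandoc.Span(el, pandoc.Attr(anchor))
  end
end

-- In the bibliography Div (id "refs"), append a ↩ link to each entry
-- whose id matches a citation we anchored above.
function Div(el)
  if el.identifier ~= "refs" then
    return nil
  end
  for _, entry in ipairs(el.content) do
    if entry.t == "Div" and entry.identifier then
      local key = entry.identifier:gsub("^ref%-", "")
      local anchor = cite_anchors[key]
      if anchor then
        -- Append the back link to the entry's last block of inlines.
        local last = entry.content[#entry.content]
        table.insert(last.content, pandoc.Link("↩", "#" .. anchor))
      end
    end
  end
  return el
end
```

Run with `pandoc --citeproc --lua-filter=backlinks.lua`. A real filter would also handle multiple citations of the same key (this sketch links back only to the first occurrence).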
Discussion
This is one example of AI enabling a task I wouldn’t have otherwise attempted—one that, hopefully, delights readers.
But if we do not know how to perform the task, how can we verify that the AI implementation is correct? And how do we decide which tasks to entrust to AI? This remains an active area of research (Myllyaho et al. 2021).
Here is a simple heuristic I have found useful in deciding what tasks to delegate to AI:
| | Easy to verify | Hard to verify |
|---|---|---|
| High familiarity | Delegate to AI | Delegate only to trusted AI, or break the task down to make it easy to verify |
| Low familiarity | Delegate to AI with care | Do not delegate to AI |
If you are familiar with a task and can easily verify its correctness, use AI. For example, let the AI write tedious snippets of code or text, such as the code for generating a chart.
Some tasks may be familiar but difficult to verify. For example, you may know how to extract, clean, and load data, but verifying whether the task was implemented correctly in a complex database isn’t trivial. Here, you must either trust the AI or do it yourself, which is laborious. One option is to break the task into smaller, verifiable chunks.
Conversely, you may not know how to perform a task but can verify whether it was done correctly. I was unfamiliar with writing Lua scripts, but I could easily confirm whether the AI-generated code achieved my goal. However, unintended consequences remain a risk. It’s not enough to verify that the code works—you must ensure it doesn’t do anything undesirable.
Finally, tasks you neither understand nor can verify—such as complex mathematical proofs—should either be avoided or delegated to a human expert who can validate them (or delegate them to an AI they can verify).
Impact on the Knowledge Production Hierarchy
Science operates within a knowledge hierarchy—supervisors, research assistants, reviewers, and so on.
As scientists adopt heuristics like these into their workflows, we can expect a significant reorganization of this hierarchy.
For example, Garicano and Rossi-Hansberg (2015, 16) suggest that reducing the knowledge cost of a task will:
> increase the scope of decision making by lower-level workers, increase the span of control of supervisors, increase the ratio of production workers to problem solvers, and reduce the number of layers of management.
References
Garicano, Luis, and Esteban Rossi-Hansberg. 2015. “Knowledge-Based Hierarchies: Using Organizations to Understand the Economy.” Annual Review of Economics 7 (1): 1–30. https://doi.org/10.1146/annurev-economics-080614-115748.↩
Lubars, Brian, and Chenhao Tan. 2019. “Ask Not What AI Can Do, but What AI Should Do: Towards a Framework of Task Delegability.” arXiv. https://doi.org/10.48550/ARXIV.1902.03245.↩
Myllyaho, Lalli, Mikko Raatikainen, Tomi Männistö, Tommi Mikkonen, and Jukka K. Nurminen. 2021. “Systematic Literature Review of Validation Methods for AI Systems.” Journal of Systems and Software 181 (November): 111050. https://doi.org/10.1016/j.jss.2021.111050.↩