Cognitive vs mechanical friction in knowledge work
By Allen Yang
About this collection
This is a collection of research curated for Liminary's blog post on mechanical vs cognitive friction in knowledge work: https://liminary.io/blog/best-ai-tools-friction-knowledge-work. Explore the research yourself with this open collection!
About the article
This article explores why the best AI tools for knowledge work don't eliminate friction but preserve it where it matters most. Drawing on research from cognitive science and organizational studies, it argues that while AI can boost productivity by removing mechanical friction (tedious, low-value effort), it should avoid removing cognitive friction, the effortful thinking that leads to insight and understanding. The piece examines studies showing when AI improves performance and when it undermines it, explains why editing AI outputs often feels harder than starting fresh, and discusses the risks of automation bias. It concludes that the future of AI knowledge management lies in tools that think with humans, not for them, highlighting Liminary as an example of thoughtful AI design that supports recall, context, and cognitive engagement without replacing human judgment.
Curated Sources
Creativity and fixation in the real world: A literature review of case study research - ScienceDirect
Design creativity and fixation are complex phenomena studied through various research methods, including laboratory experiments and real-world case studies. Many studies have investigated how designers generate ideas and how they might become fixated on particular solutions. This literature review collects and compares existing case studies on design creativity and fixation, identifying common themes and areas for future research. The review highlights the importance of understanding design processes in real-world settings, as opposed to just laboratory experiments, to gain a deeper understanding of how designers work and how creativity can be supported or hindered.
Key Takeaways
- The review emphasizes the need for more real-world case studies to understand design creativity and fixation beyond laboratory settings.
- Existing case studies show that designers' creative processes involve complex interactions between inspiration and fixation.
- Understanding these processes can inform strategies to support creativity and mitigate fixation in design practice.
Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err
People tend to avoid using algorithms for forecasting despite their superior accuracy compared to human forecasters. This phenomenon, called algorithm aversion, is partly driven by people's tendency to lose confidence in algorithms more quickly than in human forecasters after seeing them make the same mistake. Five studies demonstrated that participants who saw an algorithm perform were less likely to choose it over a human forecaster, even when the algorithm outperformed the human. The studies involved various forecasting tasks, including predicting MBA students' performance and U.S. states' airline passenger ranks. Participants' confidence and beliefs about the algorithm's and human's forecasts were measured, showing that seeing the algorithm perform decreased confidence in it, while seeing the human perform did not consistently decrease confidence in the human.
Key Takeaways
- Algorithm aversion occurs when people avoid using algorithms despite their superior forecasting accuracy.
- People lose confidence in algorithms more quickly than in human forecasters after seeing them make mistakes.
- The effect of algorithm aversion persists even when the algorithm outperforms the human forecaster.
- Confidence in the algorithm's forecasts significantly mediates the effect of seeing the algorithm perform on participants' likelihood of choosing it.
- The findings have implications for the adoption of algorithms in decision-making tasks.
Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips
The Internet, particularly search engines like Google, has become a primary external memory source, changing how people process and retain information. Four experiments demonstrate that when faced with difficult questions, people think about computers, and when they expect future access to information, they have lower recall rates for the information itself but enhanced recall for where to access it. This phenomenon is linked to transactive memory systems, where information is stored collectively outside individuals. The studies show that people's memory adapts to the availability of information online, prioritizing where to find information over the information itself.
Key Takeaways
- The Internet has become an integral part of human memory, functioning as an external or transactive memory source that people rely on for information retrieval.
- When people expect to have future access to information, they tend to remember where to find it rather than the information itself.
- The availability of information online influences memory encoding, with people showing lower recall rates for information they believe will be available later.
- The cognitive consequences of relying on the Internet for memory include a shift towards remembering where information is stored rather than the information itself.
The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking
Taking notes on laptops rather than in longhand is increasingly common, but research suggests laptop note taking is less effective for learning due to shallower processing. Three studies found students who took notes longhand performed better on conceptual questions than those using laptops. Laptop users tended to transcribe lectures verbatim, which is detrimental to learning. Even when allowed to review notes, laptop users performed worse on tests of factual content and conceptual understanding. The studies suggest laptop use can harm academic performance by changing note-taking behavior.
Key Takeaways
- Laptop note taking leads to shallower processing and verbatim transcription, hurting learning outcomes
- Longhand note taking results in better performance on conceptual questions compared to laptop note taking
- The negative effects of laptop note taking persist even when students review their notes
- Verbatim note taking is particularly detrimental to conceptual understanding
- Instructional interventions to reduce verbatim note taking on laptops have shown limited effectiveness
Fortune favors the Bold (and the Italicized): Effects of disfluency on educational outcomes - ScienceDirect
Disfluency, the subjective experience of difficulty associated with cognitive operations, leads to deeper processing and improved memory performance. Two studies explored the effects of disfluency on educational outcomes. Study 1 found that information presented in hard-to-read fonts was better remembered than information in easy-to-read fonts in a controlled laboratory setting. Study 2 extended this finding to high school classrooms, showing that students who received reading materials in a harder-to-read font performed better on classroom assessments. The results suggest that superficial changes to learning materials could yield significant improvements in educational outcomes by promoting deeper processing and reducing reliance on heuristics.
Key Takeaways
- Disfluency interventions can improve learning retention by promoting deeper processing strategies.
- Superficial changes to learning materials, such as using harder-to-read fonts, can significantly improve educational outcomes.
- The benefits of disfluency are driven by increased cognitive engagement rather than the difficulty itself.
- Disfluency can be a valuable tool in educational settings as it is inexpensive and easy to implement.
The Critical Importance of Retrieval for Learning
Students learned foreign language vocabulary words through repeated study-test trials or variations where items were dropped from study or test after correct recall. Repeated testing significantly improved long-term retention compared to repeated studying. Students' predictions of their performance were uncorrelated with actual results, showing a lack of awareness about the benefits of retrieval practice. The study demonstrates that testing is not just a neutral assessment but actively enhances learning and retention.
Key Takeaways
- Repeated retrieval practice through testing significantly enhances long-term retention compared to repeated studying.
- Students are generally unaware of the benefits of retrieval practice and tend to drop items from further practice once they are recalled.
- The type of practice (retrieval vs. encoding) is more important for long-term retention than the speed of initial learning.
- Conventional study methods that focus on repeated studying after initial recall may be ineffective for long-term learning.
Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (Working Paper 24-013)
The study examines AI's impact on knowledge-intensive tasks through a field experiment with 758 consultants. AI access significantly increased productivity (12.2% more tasks completed, 25.1% faster) and quality (40% higher). Consultants below the average performance threshold benefited most (43% increase). On tasks outside AI's capability frontier, however, consultants using AI were 19% less likely to produce correct solutions. Two human-AI collaboration patterns emerged: 'Centaurs', who divide tasks between human and AI, and 'Cyborgs', who integrate AI into their workflow.
Key Takeaways
- AI significantly boosts productivity and quality within its capability frontier
- Consultants below the average performance threshold benefit most from AI assistance
- AI can decrease performance when used outside its capability frontier
- Two distinct human-AI collaboration patterns emerge: Centaurs and Cyborgs
- Centaurs divide tasks between human and AI, while Cyborgs integrate AI into their workflow
Is machine translation post-editing worth the effort? A survey of research into post-editing and effort (Maarit Koponen, University of Helsinki)
Advances in machine translation have increased its use in translation workflows, particularly through post-editing, where human translators edit raw machine-translated output. While post-editing high-quality machine translations can increase productivity, editing poor translations remains unproductive. Research has investigated post-editing productivity, quality, and effort, showing varying results depending on factors like machine translation quality, text type, and translator experience. Studies have identified features affecting post-editing effort, including source text characteristics and machine translation errors. The effort involved in post-editing can be measured in terms of time, technical effort, and cognitive effort, with cognitive effort being particularly challenging to capture. As machine translation and post-editing become more central to the translation industry, research continues to explore ways to improve productivity and working conditions for translators.
Key Takeaways
- Post-editing high-quality machine translations can increase translator productivity, but poor quality translations remain unproductive.
- Research has identified source text characteristics and machine translation errors that affect post-editing effort.
- Cognitive effort in post-editing is challenging to measure, with ongoing research exploring new methods like eye tracking.
- The use of machine translation and post-editing is becoming more widespread in the translation industry, changing the role of humans and machines.
- Accurate measurement of post-editing effort is crucial for determining productivity and working conditions for translators.
Complacency and Bias in Human Use of Automation: An Attentional Integration
Automation-related complacency and bias occur when humans interact with automated systems, leading to decreased monitoring and decision-making performance. Complacency is defined as poorer detection of system malfunctions under automation compared to manual control, typically in multitask environments. Automation bias results in omission and commission errors when decision aids are imperfect. Both phenomena are linked to attentional processes and trust in automation. An integrated model shows that complacency and automation bias represent different manifestations of overlapping automation-induced phenomena. Factors influencing complacency include automation reliability, task load, and expertise. Automation bias is affected by the level of automation, system reliability, and task context. Training and experience can mitigate these effects to some extent.
Key Takeaways
- Complacency and automation bias are interrelated phenomena resulting from attentional processes and trust in automation.
- The integrated model provides a framework for understanding the complex interaction of personal, situational, and automation-related factors.
- Attentional factors contribute to many forms of automation bias, but not all.
No Task Left Behind? Examining the Nature of Fragmented Work
Information workers experience high work fragmentation, switching between tasks every 11 minutes on average, with 57% of working spheres interrupted. Collocated workers work longer before switching but experience more interruptions. Most internal interruptions are due to personal work, while external interruptions are due to central work. Interrupted work is often resumed on the same day, but with multiple intervening activities. The study suggests technology design should support maintaining continuity within working spheres, providing cues for reorienting to interrupted tasks, and minimizing disruptive interruptions.
Key Takeaways
- Work fragmentation is a common practice among information workers, with frequent task switching and interruptions.
- Collocation affects work fragmentation, with collocated workers experiencing longer work sessions but more interruptions.
- Technology design should focus on supporting working spheres, providing cues for task resumption, and minimizing disruptions.
The Generation Effect: Delineation of a Phenomenon
Five experiments compared memory for self-generated words versus read words. Results showed superior performance in the generate condition across various testing procedures, encoding rules, and situational changes. The generation effect persisted in recognition, free recall, and cued recall tests. The effect was specific to response items under recognition testing but not under cued recall. Various explanatory principles were considered, including levels of processing and recall-based explanations, but no single theory fully accounted for the phenomenon.
Key Takeaways
- The generation effect is a robust phenomenon that enhances memory for self-generated words compared to read words.
- The effect persists across different testing procedures, encoding rules, and experimental designs.
- The generation effect is specific to the generated word and does not necessarily enhance memory for associated stimuli.
- Different theoretical explanations, such as levels of processing and recall-based theories, have been proposed but require further testing.
Cognitive Offloading
Cognitive offloading refers to using physical actions to reduce cognitive demands, such as tilting one's head to perceive rotated images or using smartphones for reminders. Recent research investigates the mechanisms triggering cognitive offloading and its cognitive consequences. The propensity to offload cognition is influenced by internal cognitive demands and metacognitive evaluations of mental abilities. Cognitive offloading can improve performance across domains like perception, memory, and spatial reasoning. However, metacognitive evaluations can be erroneous, leading to suboptimal offloading behavior. Research has examined cognitive offloading onto the body and into the world, including external normalization, intention offloading, and transactive memory systems. The metacognitive framework suggests that offloading represents a strategy to achieve cognitive goals, influenced by metacognitive beliefs and experiences. Cognitive offloading can have both benefits and costs, such as improved performance but also potential memory impairments. The framework highlights the need for deeper understanding of metacognitive processes involved in cognitive offloading and its long-term consequences.
Key Takeaways
- Cognitive offloading is influenced by both internal cognitive demands and metacognitive evaluations, which can sometimes lead to suboptimal behavior.
- Cognitive offloading can have significant downstream effects on both low-level cognitive capacities and higher-level metacognitive evaluations.
- A metacognitive framework is proposed to understand the processes involved in cognitive offloading, highlighting the role of metacognitive beliefs and experiences in strategy selection.
- Cognitive offloading has practical implications for individuals with impaired cognitive abilities and in educational settings, where it can be used to support learning.
- Long-term reliance on cognitive offloading technologies may lead to changes in unaided mental abilities and metacognitive evaluations of those abilities.
Desirable Difficulties in Theory and Practice
The concept of desirable difficulties suggests that certain challenges during learning can improve long-term retention and transfer of information. Researchers Robert A. Bjork and Elizabeth L. Bjork discuss the theory behind desirable difficulties and its applications in various educational settings, including law school instruction, mathematics education, and motor skill learning. The authors highlight the importance of creating difficulties that trigger encoding and retrieval processes, while avoiding difficulties that are too great for learners to overcome. They also discuss the challenges of implementing desirable difficulties in real-world educational settings, including learner motivation and prior knowledge. Various studies are presented that demonstrate the effectiveness of desirable difficulties in improving learning outcomes, such as spaced practice, interleaving, and retrieval practice. The authors conclude that while desirable difficulties have the potential to enhance teaching and self-regulated learning, there are still significant challenges to be addressed in implementing these strategies effectively.
Key Takeaways
- Desirable difficulties can enhance learning by triggering encoding and retrieval processes that support long-term retention and transfer.
- The level of difficulty must be tailored to the learner's prior knowledge and skills to be effective.
- Implementing desirable difficulties in real-world educational settings can be challenging due to factors such as learner motivation and prior knowledge.
- Strategies such as spaced practice, interleaving, and retrieval practice have been shown to be effective in improving learning outcomes.
- Motivational factors play a crucial role in learners' willingness to adopt desirable difficulties, and addressing these factors is essential for successful implementation.
Frequently Asked Questions
- How do the 'Centaur' and 'Cyborg' collaboration patterns from the consultant study map onto the distinction between cognitive offloading 'onto the body' versus 'into the world,' and what does this suggest about designing AI interfaces that support different collaboration styles?
- Given that the generation effect shows superior memory for self-generated versus read content, and AI tools can generate outputs instantly, what specific task characteristics should determine when users should generate content themselves versus accepting AI-generated content to maintain long-term learning and capability development?
- The consultant study found AI helped below-average performers most (43% increase) while the desirable difficulties research warns that making things too easy can create an 'illusion of mastery'—how might AI tools inadvertently prevent skill development in the very users they help most in the short term?
- How does the finding that humans switch tasks every 11 minutes with 57% of working spheres interrupted interact with automation complacency research showing decreased monitoring performance in multitask environments, and what does this suggest about when AI assistance becomes cognitively dangerous rather than helpful?
- The post-editing research shows productivity gains only with high-quality machine translation while poor MT remains unproductive to fix—what metacognitive signals should users rely on to determine when AI output quality justifies cognitive offloading versus when maintaining human generation is more efficient?
- Given that automation bias leads to both omission errors (missing problems AI fails to catch) and commission errors (accepting incorrect AI outputs), and the consultant study showed 19% lower correctness on tasks outside AI capabilities, what training approaches from the complacency research could help users develop appropriate calibration of AI reliability across different task types?
- How might the concept of 'desirable difficulties' be applied to AI tool design such that the tools provide scaffolding that gradually reduces rather than immediate complete automation, maintaining the cognitive engagement necessary for users to develop genuine expertise rather than performance-dependent capability?
- The cognitive offloading framework suggests people make offloading decisions based on metacognitive evaluations that can be erroneous—what specific metacognitive errors might lead knowledge workers to over-offload to AI tools, and how could AI interfaces be designed to surface these errors before they become habitual?
- What parallels exist between the 'Challenge Point' framework from motor learning (suggesting optimal difficulty varies with prior learning) and determining appropriate levels of AI assistance for knowledge workers at different expertise levels, and how might AI tools adapt their level of automation based on user capability?
- Given that spacing and interleaving create 'desirable difficulties' that enhance long-term retention despite reducing immediate performance, how might AI-assisted workflows be structured to maintain these beneficial learning patterns rather than optimizing purely for immediate task completion speed?