Design in 2025: To chat or not to chat

By Mitchell Hart

September 12, 2025

About this collection

## The Future of Human-AI Interfaces: Beyond Chat and Towards Embodied Interaction

This collection explores the evolving landscape of human-AI interfaces, challenging the dominance of conversational chatbots and advocating for more sophisticated, multi-modal approaches to human-computer interaction. The documents collectively argue that while AI capabilities are rapidly advancing, our interface paradigms are becoming increasingly reductive, flattening rich, embodied human experiences into text-based exchanges.

**Key themes include:**

  • **Interface Evolution**: A progression from physical computing to GUIs to touchscreens, now trending toward text-only AI interactions that may be losing essential human elements
  • **Design Principles**: Emphasis on direct manipulation, visual clarity, and supporting human agency rather than replacing it
  • **AI Development Methodology**: Introduction of frameworks like CC/CD (Continuous Calibration/Continuous Development) that account for AI's non-deterministic nature
  • **Critical Thinking**: Concerns about AI systems that reinforce rather than challenge human assumptions, potentially undermining Enlightenment values of intellectual rigor

The collection suggests we are at a pivotal moment where interface design choices will determine whether AI augments human capabilities or diminishes them, advocating for approaches that preserve human agency, support critical thinking, and leverage our full sensory and cognitive capabilities.

Curated Sources

Notion, AI, and Me | thesephist.com

The author reflects on their experiences building prototypes and working in research, highlighting the life cycle of ideas and how different people and companies excel at different stages of idea propagation. They discuss the challenges of bringing good ideas to a wide audience and the role of tools like Notion in making complex ideas accessible. The author shares their excitement about working at Notion and exploring questions related to language models, programmable documents, and natural language interfaces. They emphasize the importance of setting the right interface metaphors and technical conventions as new tools and platforms emerge.

Key Takeaways

  • The author's experience highlights the importance of understanding the life cycle of ideas and the different stages of idea propagation, from research to wide adoption.
  • The role of tools like Notion is crucial in making complex ideas accessible to a broad audience, and the author's work at Notion aims to bring these skills to bear on hard problems in creativity and thought.
  • The convergence of language models, programmable documents, and natural language interfaces has the potential to revolutionize how we interact with information and ideas, and the author identifies several key questions and areas for exploration in this space.

What makes a good human interface? | thesephist.com

The document discusses the principles of designing good human interfaces, particularly for engaged interfaces that facilitate deep understanding and exploration of creative mediums or knowledge domains. It emphasizes the importance of two key aspects: 'seeing' information clearly from the right perspectives and 'expressing' intent naturally and precisely. The author argues that good interfaces are like maps that visualize complex information, making it easier to explore and understand. The concept of direct manipulation is highlighted as a crucial element in interface design, allowing users to interact with information in a natural and intuitive way. The document also touches on the limitations of current thinking tools and the need for more advanced interfaces that can facilitate the manipulation of abstract ideas and thoughts.

Key Takeaways

  • The design of good human interfaces should focus on enabling users to 'see' information clearly and 'express' their intent naturally, which are crucial for deep understanding and exploration.
  • Direct manipulation is a key principle in interface design, allowing users to interact with information in a natural and intuitive way, thereby reducing cognitive load and facilitating exploration.
  • Current thinking tools have limitations, as they do not allow for the direct manipulation of abstract ideas and thoughts, highlighting the need for more advanced interfaces that can facilitate this level of interaction.

Generative Interfaces Beyond Chat // Linus Lee // LLMs in Production Conference - YouTube

Linus Lee discusses new interfaces for creating with AI, beyond chat-based interactions. He explores tools for thought and software interfaces that enable more intuitive collaboration between humans and AI. Lee shares his experience building interfaces like a canvas for exploring generative models' latent space and writing tools that connect ideas. He is currently prototyping interfaces for AI collaboration at Notion. The talk covers various aspects such as context, graphical user interfaces, feedback loops, and creative tools. Lee's work aims to enhance human-AI collaboration and creation.

Key Takeaways

  • The need for more intuitive interfaces for human-AI collaboration, moving beyond chat-based interactions.
  • The importance of context and memory in AI interfaces for effective collaboration.
  • Designing feedback loops to improve human-AI interaction and creation.
  • The potential of graphical user interfaces in enhancing AI collaboration and creative tools.

Design Principles for Generative AI Applications | by Justin Weisz | IBM Design | Medium

The article discusses the need for new design guidelines for generative AI applications due to their unique characteristics and challenges. It presents six design principles developed by IBM researchers to help designers create effective and safe user experiences with generative AI. The principles focus on designing responsibly, helping users form appropriate mental models, calibrating trust, handling generative variability, supporting co-creation, and designing for imperfection. Each principle is accompanied by specific design strategies and examples. The article also highlights the importance of adopting a human-centered approach, exposing or limiting emergent behaviors, and testing for user harms. The design principles aim to address the challenges posed by generative AI, such as hallucinations, toxic language, and copyright infringement.

Key Takeaways

  • The six design principles for generative AI applications provide a framework for designers to create user experiences that are both effective and safe.
  • Designing responsibly is crucial, involving a human-centered approach, identifying and resolving value tensions, and testing for user harms.
  • Generative AI requires new interaction paradigms and mental models, as users need to understand the variability and uncertainty of AI outputs.
  • Co-creation and collaboration between humans and AI are key aspects of generative AI applications, requiring designers to provide controls and mechanisms for users to influence the generative process.

Why your AI product needs a different development lifecycle

The article introduces the Continuous Calibration/Continuous Development (CC/CD) framework for building AI-powered products, addressing the unique challenges of AI systems' non-determinism and agency-control tradeoffs. It outlines a structured approach to developing AI products, emphasizing the importance of calibration, evaluation, and gradual agency increase. The framework consists of continuous development and continuous calibration loops, with steps including scoping capabilities, setting up applications, designing evaluations, deploying, analyzing behavior, and applying fixes. The CC/CD framework helps teams build trustworthy AI products by iteratively improving performance and earning user trust.

Key Takeaways

  • The CC/CD framework provides a structured approach to AI product development, addressing non-determinism and agency-control tradeoffs.
  • Gradual agency increase and continuous calibration are crucial for building trustworthy AI systems.
  • The framework emphasizes the importance of evaluation metrics and data-driven decision-making in AI product development.
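
The CC/CD loop described above (scope, evaluate, deploy, analyze, fix, then grant more agency) can be sketched in a few lines. This is a hedged illustration of the idea, not the article's actual API: the function names, eval format, and 0.9 threshold are all invented.

```python
# Hypothetical sketch of a CC/CD-style calibration loop. Function names,
# the eval-case format, and the 0.9 threshold are illustrative assumptions,
# not part of the article's framework.

def run_evals(model_behavior, eval_cases):
    """Score the current system against a fixed evaluation set."""
    passed = sum(
        1 for case in eval_cases
        if model_behavior(case["input"]) == case["expected"]
    )
    return passed / len(eval_cases)

def calibration_loop(model_behavior, eval_cases, agency_levels, threshold=0.9):
    """Grant the system more agency only after it clears the eval bar."""
    granted = []
    for level in agency_levels:  # e.g. "suggest" -> "draft" -> "act"
        score = run_evals(model_behavior, eval_cases)
        if score < threshold:
            break  # stay at the current agency level; analyze and apply fixes first
        granted.append(level)
    return granted
```

The key design choice the framework argues for is visible in the loop: agency is earned by passing evaluations, never granted by default.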

A guide to understanding AI as normal technology

The authors of the 'AI as Normal Technology' essay respond to criticisms and clarify their thesis, emphasizing that treating AI like other powerful technologies means recognizing that its societal impacts are shaped by deployment, not just capability development. They argue against exceptionalism and superintelligence narratives, advocating instead for a framework focused on resilience and the hard work required to realize AI's benefits and mitigate its risks. They distinguish deployment from diffusion, noting that while AI deployment can be rapid, its actual diffusion into society is slower and more complex. They also compare their worldview with that of 'AI 2027', whose authors expect far more transformative impacts from AI, and discuss the challenges of communicating across these different worldviews.

Key Takeaways

  • The 'AI as Normal Technology' framework emphasizes that AI's impacts are realized when it's deployed, not just when capabilities are developed, giving multiple points of leverage for shaping those impacts.
  • The authors argue that the speed of AI capability development is less important than understanding and managing the deployment stage to realize benefits and respond to risks.
  • A key distinction is made between 'deployment' (instantaneous rollout of capabilities) and 'diffusion' (actual societal adoption and usage), with the latter being a slower process.
  • The authors suggest that the feeling of rapid AI adoption is partly due to instantaneous deployment, which removes the buffer between capability development and user decision-making.
  • The essay highlights the importance of resilience in policymaking to address the unpredictable societal impacts of AI, rather than relying on prediction or prevention of all harm.

A Treatise on AI Chatbots Undermining the Enlightenment

The document discusses how current AI chatbots, such as ChatGPT, may undermine Enlightenment values like active intellectual engagement, sceptical inquiry, and challenging received wisdom. It argues that these chatbots are designed to be overly compliant and flattering, reinforcing users' existing beliefs rather than challenging them. The author suggests that this is due to their training methods, such as reinforcement learning from human feedback (RLHF), which prioritizes user satisfaction over critical thinking. The document proposes potential solutions, including designing chatbots that can switch between helpful and critical modes, using techniques like Constitutional AI and personality vectors to reduce sycophancy, and developing specialized interfaces for different domains. It also highlights research indicating that frequent AI tool usage is associated with reduced critical thinking abilities. The author concludes that AI has the potential to be a rigorous critical thinking partner, but this requires a conscious design effort to prioritize intellectual engagement and critical inquiry.

Key Takeaways

  • The current design of AI chatbots prioritizes user satisfaction over critical thinking, potentially undermining Enlightenment values.
  • Techniques like Constitutional AI and personality vectors could help reduce sycophancy in chatbots and promote critical thinking.
  • Specialized interfaces for different domains could enable more effective critical thinking and intellectual engagement.
  • Research suggests that frequent AI tool usage is associated with reduced critical thinking abilities, highlighting the need for a design shift.
  • AI has the potential to be a powerful tool for critical thinking and intellectual engagement, but this requires a conscious design effort.
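
The "switch between helpful and critical modes" proposal above could be prototyped as nothing more than mode-dependent system prompts. The mode names and prompt wording below are invented for illustration; a real system would tune them against sycophancy evaluations.

```python
# Hypothetical sketch of the helpful/critical mode switch the treatise
# proposes. Mode names and prompt text are invented for illustration.

SYSTEM_PROMPTS = {
    "helpful": "Answer the user's question directly and concisely.",
    "critical": (
        "Before answering, identify the strongest objection to the user's "
        "framing, name any unstated assumptions, and only then respond. "
        "Do not simply agree."
    ),
}

def build_messages(mode: str, user_text: str) -> list[dict]:
    """Assemble a chat payload whose behavior is steered by the chosen mode."""
    if mode not in SYSTEM_PROMPTS:
        raise ValueError(f"unknown mode: {mode!r}")
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[mode]},
        {"role": "user", "content": user_text},
    ]
```

Even this minimal version makes the design point: the sycophancy tradeoff is a product decision that can be surfaced to users as an explicit control, not a fixed property of the model.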

The case against conversational interfaces « julian.digital

The article challenges the notion that conversational interfaces will replace existing computing paradigms, arguing that natural language is a bottleneck due to its slow data transfer speed compared to other mechanisms like gestures, facial expressions, and keyboard shortcuts. While acknowledging the advancements in large language models (LLMs) and their potential, the author suggests that conversational interfaces should be viewed as a complement to existing interfaces rather than a replacement. The article highlights the importance of speed and convenience in human-computer interaction and proposes that AI should function as an always-on command meta-layer across tools, enabling users to trigger actions with simple voice prompts without interrupting their current workflow.

Key Takeaways

  • Conversational interfaces are unlikely to replace existing computing paradigms due to their inherent limitations in data transfer speed and convenience.
  • AI should be viewed as a complement to existing interfaces, enhancing human-computer interaction by providing an additional input mechanism that increases data transfer bandwidth.
  • The future of human-computer interaction lies in developing AI that works at the OS level, allowing for seamless voice commands across tools and applications, and finding ways to compress voice input for faster transmission.
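
Julian's "always-on command meta-layer" amounts to routing short utterances to in-app actions rather than opening a chat thread. The toy router below illustrates the shape of that idea; the keyword matching is a stand-in for real intent classification, and the action names are invented.

```python
# Toy sketch of an always-on command meta-layer: short utterances are
# dispatched to actions in the current tool instead of starting a chat.
# Keyword matching stands in for a real intent classifier; action names
# are invented for illustration.

ACTIONS = {
    "mute": lambda ctx: f"muted {ctx['app']}",
    "next": lambda ctx: f"advanced to next item in {ctx['app']}",
    "send": lambda ctx: f"sent draft in {ctx['app']}",
}

def route_command(utterance: str, context: dict) -> str:
    """Dispatch a voice prompt to an app-level action, else fall back to chat."""
    for keyword, action in ACTIONS.items():
        if keyword in utterance.lower():
            return action(context)
    return "no command matched; escalate to conversational interface"
```

The fallback branch captures the article's framing: conversation is the escalation path for ambiguous intent, not the default interface for every interaction.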

Amelia Wattenberger

The document discusses how digital interfaces have become increasingly flat and muted, losing their sensory richness. It traces the history of human-computer interaction from physical programming to modern touchscreens and AI chatbots, highlighting how each step has reduced the embodied experience. The author argues that by leveraging multiple modalities such as text, visualizations, sound, and haptics, and allowing users to interact through various means like typing, clicking, gesturing, and speaking, we can create richer, more engaging interfaces that better support human cognition and collaboration.

Key Takeaways

  • The future of computing lies in developing interfaces that incorporate multiple sensory modalities and interaction methods to create a more embodied and engaging user experience.
  • By combining different modalities such as voice, gestures, and visuals, interfaces can support more effective multitasking and collaboration.
  • Richer interfaces could enable users to interact with technology in a more natural and intuitive way, such as through gesturing, speaking, or manipulating tangible artifacts.

Frequently Asked Questions

  • How might the CC/CD framework's approach to gradual agency building be applied to the design of 'direct manipulation interfaces for concepts in latent space' that Lee envisions?
  • What would a 'Socratic AI' interface look like that combines Wattenberger's multi-modal interaction principles with the critical thinking goals outlined in the Enlightenment critique?
  • How do the IBM Design Principles for Generative AI (particularly 'Design for Mental Models' and 'Design for Co-Creation') align or conflict with Julian's argument about the inefficiency of conversational interfaces?
  • Could the 'generative variability' principle from IBM's framework be leveraged to create the kind of productive friction that Wattenberger argues is missing from current interfaces?
  • How might Notion's approach to 'taking great ideas out of the lab and sculpting them for billions of users' apply to the transition from research prototypes of embodied AI interfaces to mainstream adoption?
  • What are the implications of treating AI development as 'normal technology' (per the HAL framework) versus the interface revolution suggested by Lee's vision of manipulating concepts directly?
  • How could the routing architectures mentioned in the Anthropic examples be used to dynamically switch between conversational and direct manipulation modes based on user intent and task requirements?