How to AI by Ruben Hassid

By Sarah Andrabi


About this collection

# Mastering AI Through Practical Setup and Structured Thinking

This collection captures a practitioner's hard-won lessons on **how to actually use AI effectively**, not just talk about it. The core insight: **most AI failures happen before you type a single word**.

## Key Themes on AI Mastery

**Setup beats prompting.** Turn on Extended Thinking for deeper reasoning, activate Search to prevent hallucinations, and use Projects to stop re-introducing yourself. These three settings transform AI from pattern-matcher to reliable tool.

**Show, don't tell.** Stop writing vague prompts. Instead: find a reference example, convert it to markdown, reverse-engineer what makes it work, define your success criteria, then assemble everything into one structured prompt. AI needs to see what you see.

**Chat is the revolution.** Don't treat AI like a search engine. Correct errors immediately, keep messages concise, reference prior outputs precisely, and steer aggressively toward your goal. The conversation *is* the intelligence.

**Tools matter.** Claude Excel for spreadsheets (not Copilot), Granola for meeting notes (no bots), and custom prompts/recipes for repeatable tasks. The right tool eliminates entire categories of friction.

**Bottom line:** Master the setup, structure your thinking, and actually use the chat interface. That's the 1% behavior.

Curated Sources

Stop copy-pasting spreadsheets into ChatGPT.

This guide addresses the long-standing frustration of using AI tools like ChatGPT with spreadsheets, where formulas and structure were lost in translation. Until recently, AI couldn't truly handle Excel files, forcing manual fixes. Now, Claude Excel has emerged as a game-changer, integrated directly into Excel via a Microsoft Marketplace add-in. Unlike Microsoft's own Copilot, which the author dismisses as ineffective, Claude Excel allows users to interact with spreadsheets through natural language queries. Users can explain complex formulas, trace errors, clean messy data (standardizing dates, phone numbers, addresses), analyze trends without writing formulas, extract data from PDFs, and build financial models by simply asking Claude. The tool highlights changes for transparency and works alongside existing Excel features, though it doesn't support macros, Power Query, or external databases. Anthropic cautions against using it for audit-critical calculations without verification. The author demonstrates installation steps and shortcuts (Control+Option+C on Mac), positions Claude as the future of Excel automation, and urges readers to adopt it before it becomes ubiquitous. Microsoft’s late response with Copilot is criticized as inferior, with the author betting on Claude’s superiority.

Key Takeaways

  • Claude Excel solves the core limitation of previous AI tools by maintaining Excel's structural integrity and formula context during analysis.
  • It enables natural language tasks like data cleaning, PDF extraction, and financial modeling without manual formula writing.
  • Microsoft's Copilot is portrayed as significantly inferior, making Claude the current leader in AI-driven spreadsheet workflows.
  • Users must still verify outputs and avoid relying on it for highly sensitive or audit-critical calculations without human review.
  • Adopting Claude Excel positions users in the top 1% of AI users, with potential to reach top 0.1% through community engagement and iterative practice.
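In Claude Excel itself, the cleanup above happens through natural-language requests. For readers curious what that normalization involves under the hood, here is a minimal Python sketch of the same kind of cleanup. The target formats (ISO 8601 dates, digits-only US-style phone numbers) are illustrative assumptions on my part, not Claude Excel's actual output conventions.

```python
import re
from datetime import datetime

def standardize_date(raw: str) -> str:
    """Try several common date formats and emit ISO 8601 (YYYY-MM-DD)."""
    for fmt in ("%m/%d/%Y", "%d-%m-%Y", "%B %d, %Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return raw  # leave unparseable values untouched for human review

def standardize_phone(raw: str) -> str:
    """Strip punctuation and keep the last 10 digits (US-style assumption)."""
    digits = re.sub(r"\D", "", raw)
    return digits[-10:] if len(digits) >= 10 else digits

print(standardize_date("March 5, 2024"))    # → 2024-03-05
print(standardize_phone("(415) 555-0123"))  # → 4155550123
```

Note how unparseable values fall through unchanged rather than being guessed at, which mirrors the verification caveat above: ambiguous data should reach a human, not be silently "fixed".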

Your prompt sucks. - by Ruben Hassid - How to AI

This guide debunks the myth that better prompts alone solve AI limitations. It emphasizes that effective AI use requires preparation before typing: gathering concrete references (converted to markdown), reverse-engineering them into actionable blueprints, and defining clear success criteria through a structured brief. The author argues that most prompting failures stem from vagueness and lack of context, not the AI itself. A three-step framework is presented: (1) prepare a reference file and blueprint, (2) define a success brief with output type, recipient reaction, avoidance criteria, and success metrics, and (3) structure prompts by combining these elements. The guide stresses the importance of iterative chat interactions—correcting errors immediately, keeping messages concise, referencing prior outputs, using directive language, and resetting conversations when needed. Unlike one-shot prompts, this method leverages the 'chat' capability of LLMs to steer outputs through continuous dialogue, backed by research on instruction-following and chain-of-thought techniques. The article concludes with a cheat sheet for subscribers, positioning structured prompting as a foundational skill for serious AI work.

Key Takeaways

  • Preparation beats clever phrasing: Concrete references and reverse-engineered blueprints are more impactful than refined wording alone.
  • Success briefs transform vague goals into measurable outcomes by specifying audience reaction, avoidance criteria, and success metrics.
  • Iterative chat interaction—not one-off prompts—is where AI truly excels, allowing real-time correction and convergence toward desired results.
  • Strict message discipline (concise corrections, precise references, directive language) prevents context bloat and improves output quality.
  • The 'chat' in ChatGPT enables smarter outputs through conversation, making error correction and guidance far more effective than static inputs.
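The assembly step of the three-part framework (reference file, blueprint, success brief) is described above in prose; as a rough illustration, it amounts to structured string composition. The section labels and brief fields below are my own paraphrases of the summary, not the author's exact template.

```python
def build_prompt(reference_md: str, blueprint: list[str], brief: dict) -> str:
    """Combine a markdown reference, a reverse-engineered blueprint,
    and a success brief into one structured prompt."""
    blueprint_lines = "\n".join(f"- {rule}" for rule in blueprint)
    return (
        "## Reference (what good looks like)\n"
        f"{reference_md}\n\n"
        "## Blueprint (why the reference works)\n"
        f"{blueprint_lines}\n\n"
        "## Success brief\n"
        f"Output type: {brief['output_type']}\n"
        f"Reader reaction: {brief['reaction']}\n"
        f"Avoid: {brief['avoid']}\n"
        f"Success metric: {brief['metric']}\n"
    )

prompt = build_prompt(
    reference_md="# Example post\nShort, punchy opener...",
    blueprint=["hook in the first line", "one idea per paragraph"],
    brief={
        "output_type": "LinkedIn post",
        "reaction": "saves it for later",
        "avoid": "generic advice, hashtags",
        "metric": "under 150 words",
    },
)
```

The point of the sketch is that the prompt is assembled from prepared artifacts, not written from scratch: only the brief changes between tasks, while the reference and blueprint are reused.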

You forgot 70% of yesterday's meeting. - by Ruben Hassid

Most people forget 70% of meeting details within 24 hours due to the Ebbinghaus effect, often reconstructing events from fragments and assumptions. In teams, this leads to shared false memories where entire groups misremember decisions or directives that never occurred. Traditional solutions like manual note-taking split attention and produce unusable records, while AI bots alter meeting dynamics and generate generic summaries. Granola offers a local, privacy-focused alternative that captures audio without joining calls as a bot. It processes transcripts locally, keeping data user-controlled and enabling custom "Recipes" for targeted outputs. Key features include a TLDR prompt that distills meetings into two bullet points (e.g., "John owns pricing by Friday"), an action-items prompt that lists assignments with deadlines, and a LinkedIn post generator that turns pain points into shareable content briefs. The workflow involves running these prompts post-meeting via slash commands, producing actionable outputs in under two minutes per meeting. This approach preserves meeting authenticity while ensuring critical decisions and tasks are retained, eliminating ambiguity and follow-up confusion.

Key Takeaways

  • The Ebbinghaus effect explains why meeting details fade rapidly, often leading teams to collectively invent decisions that never happened.
  • Granola's local processing avoids bot interference and privacy concerns by keeping audio data user-controlled and off-cloud.
  • Custom prompts like ultra-concise TLDRs and deadline-specific action items deliver practical, scannable outputs that users actually reference.
  • The tool transforms meeting transcripts into ready-to-use content assets like LinkedIn briefs without manual rewriting.
  • A two-minute post-meeting workflow using slash-command prompts replaces chaotic note-taking and vague recap emails.

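The "Recipes" described above are reusable prompts applied to a fresh transcript via slash commands. As a hedged sketch of the pattern (the prompt texts below are paraphrased from this summary; Granola's real recipes are defined inside the app, not in code), a recipe library is just named templates plus one assembly step:

```python
# Hypothetical recipe library modeled on the summary above;
# the actual slash-command recipes live inside Granola itself.
RECIPES = {
    "tldr": "Distill this meeting into exactly two bullet points, "
            "each naming an owner and a deadline.",
    "action_items": "List every assignment as 'Name: task (deadline)'.",
    "linkedin": "Turn the sharpest pain point discussed into a short "
                "content brief for a LinkedIn post.",
}

def run_recipe(name: str, transcript: str) -> str:
    """Assemble the full prompt a recipe would send to the model."""
    return f"{RECIPES[name]}\n\n---\nTranscript:\n{transcript}"

prompt = run_recipe("tldr", "John: I'll own pricing by Friday...")
```

Because the recipe is written once and the transcript is the only variable, the per-meeting cost collapses to the two minutes the summary describes.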

how to better use AI (before prompting): - by Ruben Hassid

This Substack post by Ruben Hassid explains why most AI interactions fail and provides a three-step framework to dramatically improve results. The core problem is that users treat AI like a generic tool without setting up proper conditions for success. The author compares using AI without preparation to giving a brilliant consultant zero context and demanding real-time answers - the fault lies not with the AI but with the user's approach. The solution involves three critical components before typing any prompt: 1) Using Projects to create dedicated workspaces that remember your identity, style, goals, and uploaded reference files; 2) Enabling Extended Thinking mode to allow the AI to reason deeply rather than rushing to surface-level answers; and 3) Activating Search to ground responses in current data and eliminate hallucinations. The post details specific implementation steps for ChatGPT, Claude, and Grok, emphasizing that projects prevent repetitive self-introductions, custom instructions define tone and objectives, and uploaded files should contain only high-quality information. Extended Thinking enables complex reasoning by allowing the AI to consider uncertainties and build arguments, while Search forces accountability to real-world data. The author synthesizes these insights into a 'Too Long Didn't Read' summary: always enable Extended Thinking, activate Search when accuracy matters, and use Projects for recurring tasks. Mastering these techniques allegedly places users in the top 1% of AI users, with further optimization possible through community learning, focused experimentation, and deep mastery of a single AI platform.

Key Takeaways

  • AI performance depends on pre-prompt setup rather than prompt quality alone - users must create proper conditions through Projects, Thinking mode, and Search
  • Projects solve the 're-introduction problem' by maintaining permanent context, custom instructions, and reference materials across all conversations
  • Extended Thinking transforms AI from pattern-matching to genuine problem-solving by allowing uncertainty, backtracking, and argument-building
  • Search is essential not just for current data but as an anti-hallucination mechanism that grounds responses in verifiable reality
  • The combination of these three features creates compounding improvements in output quality for complex or repetitive tasks
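Mechanically, the "permanent context" a Project provides behaves like stored instructions and reference files prepended to every new conversation. The sketch below is a generic chat-message illustration of that idea, not any vendor's actual Projects implementation; the role/content message shape is the common chat-API convention, and the instruction and file contents are invented examples.

```python
# Hypothetical project store: custom instructions plus uploaded references.
PROJECT = {
    "instructions": "You are my writing assistant. Tone: direct, concise. "
                    "Goal: grow my newsletter.",
    "files": ["style_guide.md: short sentences, no jargon"],
}

def start_conversation(project: dict, first_message: str) -> list[dict]:
    """Every new chat begins with the project's stored context,
    so the user never has to re-introduce themselves."""
    context = (
        project["instructions"]
        + "\n\nReference files:\n"
        + "\n".join(project["files"])
    )
    return [
        {"role": "system", "content": context},
        {"role": "user", "content": first_message},
    ]

messages = start_conversation(PROJECT, "Draft this week's issue.")
```

This is the "re-introduction problem" in miniature: without the stored context, every conversation would start from the user message alone.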

Frequently Asked Questions

  • How does Hassid's 'Projects for persistent context' concept map to Liminary's activation problem—could making users invest in setup (custom instructions, uploaded files, defined workflows) actually increase Month 1 retention by creating switching costs?
  • What's the relationship between Hassid's 'invisible AI' preference (Granola with no bot) and your 'Human Curation Renaissance' positioning—is there a product philosophy where AI augments without announcing itself?
  • Hassid advocates tool-specific mastery (Claude for writing, Grok for search, Granola for meetings) rather than one AI for everything—does this validate a 'best-of-breed integrations' strategy for Liminary, or suggest you should own the full workflow?
  • The 'reference-blueprint-brief' method forces users to articulate what success looks like before prompting—could this framework become Liminary's onboarding flow, where defining research goals IS the activation moment?
  • How does Hassid's emphasis on 'Extended Thinking' and 'Search' modes (setup before prompting) challenge or support your hypothesis that PLG should work for the first 10,000 customers—are these power features or table stakes?
  • Hassid's 'conversation beats one-shots' thesis requires users to iterate through dialogue—does this align with or contradict the 'concise, actionable' preference you've shown in your own AI interactions?
  • What's the tension between Hassid's 'master one AI completely' advice and the reality that different LLMs excel at different tasks—how should Liminary handle model selection without overwhelming users?
  • The Granola custom prompts (TLDR, action items, LinkedIn posts) are task-specific recipes—could Liminary's workflows become similar 'recipes' that users customize once and reuse, creating compounding value?
  • Hassid reframes hallucination as 'pattern completion without real data'—does this suggest Liminary's core value is ensuring AI always has curated sources to cite, making the product an 'anti-hallucination layer'?
  • How does the 'pre-prompt infrastructure' concept (Projects, files, instructions) relate to your research on Systems of Context and enterprise AI adoption—is persistent context the actual product category?