<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Computing Life - Computing</title><link>https://yage.ai/</link><description/><atom:link href="https://yage.ai/feeds/computing.rss.xml" rel="self"/><lastBuildDate>Mon, 30 Mar 2026 22:00:00 -0700</lastBuildDate><item><title>One Line of Code: Why the Web Still Can't Do It After 30 Years</title><link>https://yage.ai/web-layout-tradeoff.html</link><description>&lt;p&gt;Querying layout results takes one line of code on iOS; on the web it requires triggering a full-page reflow. This isn't because browser engineers are incompetent: CSS made a deliberately declarative architectural choice in 1994. That choice has a higher ceiling, but the cost is that intermediate state cannot be queried. Facebook paid hundreds of millions of dollars in 2012 for not understanding this trade-off. SwiftUI and Jetpack Compose prove that declarative and observable can coexist; the key is layering. The lesson applies to all system design: good abstractions let you choose which layer to work at; bad abstractions glue all layers together and leave you no choice.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Mon, 30 Mar 2026 22:00:00 -0700</pubDate><guid>tag:yage.ai,2026-03-30:/web-layout-tradeoff.html</guid><category>Computing</category><category>Chinese</category><category>Frontend</category><category>System Design</category></item><item><title>One Line of Code on Every Other Platform. Why Can't the Web Do It After 30 Years?</title><link>https://yage.ai/web-layout-tradeoff-en.html</link><description>&lt;p&gt;Querying layout results takes one line of code on iOS, Android, Qt, and Flutter. On the web, it requires triggering a full-page reflow. This isn't because browser engineers are incompetent. CSS made a deliberate architectural choice in 1994 toward declarative layout, which has a higher ceiling but hides intermediate state. Facebook paid hundreds of millions of dollars in 2012 for not understanding this trade-off. SwiftUI and Jetpack Compose prove that declarative and observable can coexist through proper layering.
The lesson applies to all system design: good abstractions let you choose which layer to work at; bad abstractions glue all layers together and leave you no choice.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Mon, 30 Mar 2026 21:00:00 -0700</pubDate><guid>tag:yage.ai,2026-03-30:/web-layout-tradeoff-en.html</guid><category>Computing</category><category>English</category><category>Frontend</category><category>System Design</category></item><item><title>Why AI Only Gives You Correct Nonsense, and How to Push It Out of Its Comfort Zone</title><link>https://yage.ai/context-infrastructure.html</link><description>&lt;p&gt;An LLM's default output is consensus: correct but mediocre. Deep Research is really Wide Research. We found a systematic way to forcibly pull LLMs out of consensus using personal cognitive context. One year of experimentation, with controlled evidence.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Sun, 15 Mar 2026 22:00:00 -0700</pubDate><guid>tag:yage.ai,2026-03-15:/context-infrastructure.html</guid><category>Computing</category><category>Chinese</category><category>Agentic AI</category><category>Methodology</category></item><item><title>Why AI Only Gives You Correct Nonsense, and How to Push It Out of Its Comfort Zone</title><link>https://yage.ai/context-infrastructure-en.html</link><description>&lt;p&gt;An LLM's default output is consensus: correct but mediocre. Deep Research is really Wide Research. We found a systematic way to pull LLMs out of consensus using personal cognitive context.
One year of experimentation, with controlled evidence.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Sun, 15 Mar 2026 21:00:00 -0700</pubDate><guid>tag:yage.ai,2026-03-15:/context-infrastructure-en.html</guid><category>Computing</category><category>English</category><category>Agentic AI</category><category>Methodology</category></item><item><title>Step One to Using AI Well: Stop Chatting with AI</title><link>https://yage.ai/stop-using-chatgpt.html</link><description>&lt;p&gt;The gap between using AI and using AI well is 10x. The root of that gap lies in how you work, not which model you use. Through a complete workflow example and a Three Tiers framework, this post explains why you should switch from ChatGPT to agentic tools like Cursor.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Tue, 03 Mar 2026 12:00:00 -0800</pubDate><guid>tag:yage.ai,2026-03-03:/stop-using-chatgpt.html</guid><category>Computing</category><category>Chinese</category><category>Agentic AI</category><category>Methodology</category></item><item><title>Step One to Using AI Well: Stop Chatting with AI</title><link>https://yage.ai/stop-using-chatgpt-en.html</link><description>&lt;p&gt;The gap between using AI and using AI well is 10x. That gap comes from how you work, not which model you use.
This post walks through a complete workflow example and a Three Tiers framework to explain why you should switch from ChatGPT to agentic tools like Cursor.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Tue, 03 Mar 2026 12:00:00 -0800</pubDate><guid>tag:yage.ai,2026-03-03:/stop-using-chatgpt-en.html</guid><category>Computing</category><category>English</category><category>Agentic AI</category><category>Methodology</category></item><item><title>Key Decisions for Putting AI to Work: A Simple Task as a Case Study</title><link>https://yage.ai/ai-key-decisions.html</link><description>&lt;p&gt;A hands-on case study of directing AI for two minutes to add SEO summaries to 300 articles, breaking down five key decisions: choosing the right execution environment, building tests before doing the work, letting the agent handle corner cases itself, divide and conquer, and outcome-oriented prompt writing.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Fri, 20 Feb 2026 18:00:00 -0800</pubDate><guid>tag:yage.ai,2026-02-20:/ai-key-decisions.html</guid><category>Computing</category><category>Chinese</category><category>Agentic AI</category></item><item><title>Key Decisions for Agentic Workflows: A Simple Case Study</title><link>https://yage.ai/ai-key-decisions-en.html</link><description>&lt;p&gt;A real-world case study of directing AI to add SEO summaries to 300 articles in two minutes, breaking down five key decisions: choosing the right execution environment, building tests before work, letting agents handle corner cases, divide and conquer, and outcome-oriented prompt writing.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Fri, 20 Feb 2026 18:00:00 -0800</pubDate><guid>tag:yage.ai,2026-02-20:/ai-key-decisions-en.html</guid><category>Computing</category><category>English</category><category>Agentic AI</category></item><item><title>What Is OpenClaw: The Principles, Value, and Limits of AI Agent Chat Tools</title><link>https://yage.ai/openclaw.html</link><description>&lt;p&gt;OpenClaw went viral for exactly the same reason DeepSeek did last year: not a technical breakthrough, but bringing a niche experience to the masses. This post doesn't teach setup; instead, from a product-design perspective, it dissects its memory system, its Skills mechanism, and the fundamental limits of chat interfaces, helping you decide whether to follow the trend and how to apply its core ideas to your own workflow.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Sat, 14 Feb 2026 23:00:00 -0800</pubDate><guid>tag:yage.ai,2026-02-14:/openclaw.html</guid><category>Computing</category><category>Chinese</category><category>Agentic AI</category><category>Review</category></item><item><title>OpenClaw Deep Dive: Why It Went Viral and What It Means for You</title><link>https://yage.ai/openclaw-en.html</link><description>&lt;p&gt;OpenClaw went viral for the same reason DeepSeek did — not a technical breakthrough, but bringing a niche power-user experience to the masses. This post skips setup tutorials and instead dissects its memory system, Skills mechanism, and the fundamental ceiling of chat-based AI interfaces, helping you decide whether to adopt it and how to extract its core ideas into your own workflow.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">grapeot</dc:creator><pubDate>Sat, 14 Feb 2026 22:00:00 -0800</pubDate><guid>tag:yage.ai,2026-02-14:/openclaw-en.html</guid><category>Computing</category><category>English</category><category>Agentic AI</category><category>Review</category></item></channel></rss>