2025-04-02-top
- 精選方式: TOP
- 時間範圍: DAY
討論重點
以下是30篇文章的條列式重點總結,並附上對應的錨點連結與逐條細節說明:
1. This is the first time in almost a year that Claude is not the best model
- 性能比較:Gemini 2.5在上下文處理和可靠性上超越Claude。
- 態度轉變:作者從厭惡Google到認可Gemini 2.5的技術進步。
- 動態競爭:AI領域快速變化,未來Claude可能反超。
2. I accidentally built a brain fog tracker with Claude
- 問題背景:長期腦霧困擾,傳統方法無效。
- 解決方案:與Claude協作開發追蹤工具,分析數據發現補充劑與腦霧改善的關聯。
- AI協作差異:Claude主動提問並優化架構,不同於ChatGPT的模板化回應。
3. Do you think using LLMs is a skill?
- 核心爭議:提示詞工程是否算專業技能?
- 作者觀點:更接近「工具熟練度」而非需刻意培養的技能。
4. I went through $50 tonight trying to get AI/Claude to fix errors
- 挫折來源:Replit平臺功能不足,腳本錯誤無法解決。
- AI依賴性:Claude大幅提升工作效率,但開發過程成本高昂。
5. Message Limit Reached very often now in paid version
- 問題描述:付費用戶仍頻繁觸發訊息限制,影響使用體驗。
- 版本影響:Claude 2.7 Sonnet在非思考模式下限制更嚴格。
6. What the hell happened?
- 投訴建議:需提供詳細情境(如免費/付費版標籤)以釐清問題。
- 反饋機制:按「踩」幫助官方監測不滿意的輸出。
7. [Claude Pro limits](#7-claude-pro-limi)
- 限制加劇:近期訊息上限驟降(如5條即觸發限制)。
- 時間範圍:問題集中於過去兩天,可能為臨時調整。
8. Can somebody tell what MCPs capable of like telling a toddler?
- 核心需求:釐清MCP的實際應用場景,而非技術細節。
- 術語模糊:需更多背景定義MCP的具體領域。
9. v0.7.3 Update: Dive, An Open Source MCP Agent Desktop
- 功能亮點:支援多模型切換(如GPT-4、Claude)、介面操作優化。
- 定位:高效開發工具,強調LLM整合與即時調用。
10. Claude's context has been nerfed?
- 質疑點:Sonnet可能未完整使用200k token上下文,改以摘要緩存節省成本。
- 證據:模型突然遺忘40k token內的代碼片段。
11. [Are rate limits significantly lower now?](#11-are-rate-limi-significantly-lower-now)
- 異常現象:對話限制從40-50條降至20-25條,未獲官方解釋。
12. Optimizing Claude's MCP Filesystem Server
- 問題根源:默認配置觸發非必要遞歸搜索,導致性能卡頓。
- 解決方案:手動修剪工具(如移除 search_files),保留核心文件操作功能。
13. [Hitting limits without getting responses](#13-hitting-limi-without-getting-responses)
文章核心重點
以下是每篇文章的一句話摘要(條列式輸出):
- Gemini 2.5當前性能超越Claude,作者坦承需基於實際體驗改變長期偏好。
- 作者與Claude合作開發腦霧追蹤工具,透過數據分析改善自身認知狀態。
- 作者質疑「使用LLM」是否應被視為值得列在履歷上的專業技能。
- 作者花費50美元嘗試用Claude修復Replit腳本錯誤,反映工具依賴與開發困境。
- Claude付費版訊息限制異常頻繁,用戶抱怨體驗嚴重下降。
- 文章提供優化Claude問題回報流程的具體建議。
- 用戶發現Claude Pro近期使用限制突然收緊,導致付費版難以正常使用。
- 作者困惑於MCP的實際應用場景,認為現有討論過度聚焦技術而忽略用途。
- Dive工具更新支援多模型切換與使用者體驗優化,定位為高效LLM開發工具。
- 作者懷疑Claude未真正提供完整200k token上下文,可能透過摘要壓縮節省成本。
- 用戶發現Claude對話訊息限制從40-50條驟降至20-25條,質疑政策變更。
- 修改MCP文件系統配置可解決Claude非必要遞歸搜索導致的性能問題。
- 付費用戶遭遇Claude無回應錯誤仍被扣減額度,批評服務品質不符預期。
- 用戶詢問Anthropic是否變相將付費服務降級為僅限官網使用的「螢幕保護程式」。
- 文章探討電腦使用專案的經驗分享,聚焦挑戰反思與未來改進方向。
- Claude澄清其代碼協作中的限制,強調無跨對話記憶能力與用戶主導角色。
- 作者發布Agentic模式教學影片,以Anthropic研究為基礎並尋求觀眾反饋。
- 提出12項GEO策略提升網站在LLM生成回答時的引用機率,如清單式內容優化。
- 用戶強烈批評Claude 3.7效能衰退、限制不透明與技術問題,決定停用服務。
- 用戶發現Claude Pro版重新實施嚴格訊息限制,推測Anthropic政策暗中調整。
- 解釋MCP作為工具交互協議的定位,強調其標準化價值而非自主代理框架。
- 主張超越提示工程,以戰略同理心理解AI行為,仿效科幻角色蘇珊·卡爾文。
- 用戶不滿Claude突然無法透過UUID摘要歷史對話,批評功能撤回缺乏透明度。
- 用戶尋求與Claude智能相當但無對話限制的替代AI,尤其創意寫作輔助工具。
- 測試關閉網路搜尋功能對Claude表現的影響,驗證是否會提升回答品質。
- 影片介紹Figma與MCP整合的AI功能,可能展示設計工具智能化應用。
- 用戶讚賞MCP與RStudio整合實現自動化數據分析與代碼生成的高效率協作。
- 用戶在VS Code的WSL環境遭遇Claude API連線錯誤與異常扣費問題。
- Claude 3.7相較3.5版在數學、編程與推理能力顯著提升,減少無謂拒絕。
- 探討用Claude解析DaVinci Resolve的CSV元數據以輔助紀錄片剪輯的工作流程優化。
目錄
- 1. This is the first time in almost a year that Claude is not the best model
- 2. I accidentally built a brain fog tracker with Claude and it actually helped me feel smarter
- 3. Do you think using LLMs is a skill?
- 4. I went through $50 tonight trying to get AI/Claude to fix errors.
- 5. Message Limit Reached very often now in paid version, hardly usable
- 6. What the hell happened?
- [7. Claude Pro limits](#7-claude-pro-limi)
- 8. Can somebody tell what MCPs capable of like telling a toddler?
- 9. v0.7.3 Update: Dive, An Open Source MCP Agent Desktop
- 10. Claude's context has been nerfed?
- [11. Are rate limits significantly lower now?](#11-are-rate-limi-significantly-lower-now)
- 12. Optimizing Claude's MCP Filesystem Server: How to Fix the Recursive Search Problem
- [13. Hitting limits without getting responses](#13-hitting-limi-without-getting-responses)
- 14. Please be candid; did I just pay $220 for a year of this screensaver, but only at Anthropic's website?
- 15. What computer use project did you build?
- 16. Bad vibes?
- 17. Dropped a new tutorial for Agentic pattern + AI SDK
- 18. 12 step checklist on how to rank on LLMs
- [19. Drastically lowered limits??? 30,000 / 18 questions (20-40 char length) tokens IN TOTAL for whole working day????](#19-drastically-lowered-limi-30-000-18-questions)
- 20. Message limit is back enforced even for PRO
- 21. MCP, excited for the future
- 22. how to be a generative ai robopsychologist like Dr. Susan Calvin (hint: it is not with superficial prompting techniques )
- [23. Claude Suddenly Can't Summarize Previous Chats by UUID?](#23-claude-suddenly-cant-summarize-previous-cha-)
- 24. Similar but better options?
- 25. I think leaving search ON makes it way dumber.
- 26. Figma MCP : AI for Figma
- 27. i love that i can use a MCP with RStudio, load my data in, and ask claude to run an analysis for me while I do something else - then come back 3 minutes later with 500-1k lines of code written and plots made for me to review. MCP is a godsend
- 28. Claude Code API Error: Connection error.
- 29. Claude 3.7 vs 3.5 Sonnet Compared: What's new?
- 30. Using Claude to Help with Documentary Film Editing & CSV Metadata
1. This is the first time in almost a year that Claude is not the best model
這篇文章的核心討論主題是:作者對Gemini 2.5的強烈肯定,認為其當前性能超越Claude及其他競爭模型,並坦承自己雖長期偏好Claude,但基於實際體驗不得不承認Gemini 2.5的優勢。
具體要點包括:
- 性能比較:Gemini 2.5在處理上下文、可靠性等方面表現優異,甚至超越作者過去推崇的Claude。
- 態度轉變:作者原本厭惡Google及早期Gemini模型,但現因技術進步改變立場,自嘲「投向黑暗面」。
- 動態競爭:承認AI領域變化快速,未來Claude可能反超,但現階段Gemini 2.5明顯領先。
- 實用導向:強調實際使用需求(如多樣化用例)是評估模型的關鍵,而非品牌忠誠度。
整體聚焦於AI模型的實測體驗與競爭力消長,並反映用戶在技術迭代中的理性選擇。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jozpc0/this_is_the_first_time_in_almost_a_year_that/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jozpc0/this_is_the_first_time_in_almost_a_year_that/
- 發布時間: 2025-04-02 00:19:59
內容
Gemini 2.5 is simply better. I hate Google, I hate previous Geminis, and they have cried wolf so many times. I have been posting exclusively on the Claude subreddit because I've found all other models to be so much worse. However I have many use cases, and there aren't any that Claude is currently better than Gemini 2.5 for. Even in Gemini Advance (the weaker version of the model versus AIStudio) it's incredibly powerful at handling context and incredibly reliable. I feel like I'm going to the dark side but it simply has to be said. This field changes super fast and I'm sure Claude will be back on top at some point, but this is the first time where I just think that is so clearly not the case.
討論
評論 1:
you forgot that the best thing about is that it is free. i have been saying this for a long time, most AI startups will be eaten by big tech for lunch because big tech can race to the bottom but anthropic cant just provide their flagship models for free
評論 2:
The creative writing is extremely good, by far the best one I tried for that purpose.
評論 3:
I wish these kinds of posts explained their use case. I can't tell you how many times I've had to read how ChatGPT is better than Claude....only to learn they are writing stories...which I don't do.
I have been working on an economic dashboard right now and this post couldn't be further from the truth.
I say this to say that it would be more useful to get specifics about what you're doing with another AI that makes it better. That's going to provide more value than a post that tries to generalize that one is better than another.
評論 4:
Google invented Transformer and Bert and it's researchers pioneered many great technologies. Strange that you are surprised they took a lead (might not be for long).
評論 5:
Same for me. This is the first time I also don't shit on google's product (actually second, I liked Ultra)
2. I accidentally built a brain fog tracker with Claude and it actually helped me feel smarter
這篇文章的核心討論主題是:作者透過與AI助手Claude的合作,開發了一個「腦霧追蹤儀表板」,並透過數據分析發現改善腦霧的關鍵模式,最終有效提升自身認知狀態的經驗分享。
主要重點包含:
- 問題背景:長期受腦霧(認知模糊、注意力渙散)困擾,傳統方法(睡眠、飲食調整等)效果有限。
- 解決方案:
- 與Claude協作開發一個簡易的追蹤工具(Node.js + MongoDB),記錄睡眠、飲食、補充劑、主觀腦霧評分等數據。
- AI的關鍵角色:不僅協助編寫前後端代碼,更主動分析數據、提出洞察(例如發現特定補充劑與腦霧改善的關聯性)。
- AI協作差異:相較於其他AI工具(如ChatGPT),Claude更像「好奇的實驗夥伴」,能主動提問、優化程式架構,而非僅生成模板代碼。
- 實際效果:持續追蹤後,作者認知狀態明顯改善,並認為「記錄+AI分析」的模式對類似問題可能有普適價值。
- 後續行動:考慮開源該工具,建議受腦霧困擾者嘗試類似方法(即使簡單記錄症狀並交由AI分析也可能有幫助)。
本質上,文章強調AI協作如何將主觀健康問題轉化為可量化的數據洞察,並提供一種個人化健康管理的實踐範例。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1joq4t3/i_accidentally_built_a_brain_fog_tracker_with/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1joq4t3/i_accidentally_built_a_brain_fog_tracker_with/
- 發布時間: 2025-04-01 15:48:56
內容
I've had brain fog for a couple of years now. The kind where you open a tab, forget why, stare at it for a minute, then open 4 more tabs and forget all of them too. Some days I felt like my brain was running on 1997 dial-up.
I tried all the usual stuff: cutting caffeine, sleep hygiene, meditation, supplements, drinking more water than a cactus, but nothing really stuck. Everything helped a little, but nothing moved the needle.
Until I got bored and said to Claude:
>
Totally expecting a dumb response. Instead, Claude replied with something like:
>
Wait... what?
So yeah, I built a brain fog dashboard.
With Claude's help in Cursor, I ended up throwing together a Node + MongoDB app to track:
- Sleep (I just typed it in manually, but Claude helped me add Apple Health support later)
- Supplements
- Meals
- Self-rated brain fog score (1-10)
- Notes for the day ("Felt spaced out after lunch", "Weirdly focused at 9pm???")
It also shows some simple graphs: fog over time, sleep correlation, stuff like that.
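The post doesn't include any code, so purely as an illustration of what a daily-log model for such a Node + MongoDB tracker might look like, here is a minimal Mongoose sketch; all field and model names are assumptions, not the author's actual schema.

```typescript
// Hypothetical daily-log model for a tracker like the one described.
// Field names are assumptions; the original post does not include code.
import mongoose, { Schema, model } from "mongoose";

interface DailyLog {
  date: Date;
  sleepHours: number;    // entered manually, or synced from Apple Health
  supplements: string[]; // e.g. ["magnesium", "omega-3"]
  meals: string[];
  fogScore: number;      // self-rated, 1-10
  notes?: string;
}

const DailyLogSchema = new Schema<DailyLog>({
  date: { type: Date, required: true, unique: true },
  sleepHours: { type: Number, required: true },
  supplements: { type: [String], default: [] },
  meals: { type: [String], default: [] },
  fogScore: { type: Number, min: 1, max: 10, required: true },
  notes: String,
});

export const DailyLogModel = model<DailyLog>("DailyLog", DailyLogSchema);

// Usage: connect and record one day's entry.
async function logToday() {
  await mongoose.connect("mongodb://localhost:27017/brainfog");
  await DailyLogModel.create({
    date: new Date(),
    sleepHours: 7.5,
    supplements: ["magnesium"],
    meals: ["oatmeal", "salad"],
    fogScore: 4,
    notes: "Weirdly focused at 9pm",
  });
}
```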
Here's the kicker: Claude didn't just write the backend and frontend (it did), it also helped me analyse the data.
After about 10 days of logging, it said:
>
Which is wild, because I didn't notice that pattern at all. And it checks out.
Why this felt different
I've used ChatGPT before. It's fine. But Claude felt more like a curious lab partner. It would ask me questions like:
- Do you want to break that into two separate features?
- Should I refactor this to make it more modular?
It wasn't just spitting out boilerplate. It collaborated.
Real talk though
- I don't think this app is genius or anything. It's scrappy.
- It's 90% Claude code, 10% me debugging and renaming files because I broke something.
- I wasn't trying to go viral or build a startup. I just wanted to feel like I had a brain again.
But somehow, tracking + AI + some consistency actually made a difference.
I feel sharper lately. More on it. And I can look at the dashboard and see why.
Thinking of open-sourcing it
If a few people are interested, I'll clean up the repo and post it. It's not pretty, but it works.
Also, if you're struggling with weird mental fatigue and feel like a functional goldfish, logging + AI might be worth a shot.
Even just journaling symptoms and feeding it to Claude has been surprisingly helpful.
TLDR:
I was bored, asked Claude to help me build a brain fog tracker. It actually worked. It helped me find patterns in sleep/supplements that made me feel clearer. I might open source it if people want.
討論
評論 1:
After about 10 days of logging, it said:
>
Which is wild, because I didn't notice that pattern at all. And it checks out.
lol. it's brain fog all the way down...
評論 2:
Haha post is so obviously written by Claude. Not having a go at you, it's just interesting how many obvious footprints LLMs have when they write in a certain way.
評論 3:
Is this an April fools joke? I felt dumb reading your post because you clearly forgot to finish writing or pasting text lol. I get brain fog sometimes and was interested to hear what patterns Claude found.
評論 4:
You forgot to include what the conclusion from analyzing your data was. The part where it got interesting. Nice bait
評論 5:
You may want to get checked for sleep apnea. It can cause brain fog and it's very common.
3. Do you think using LLMs is a skill?
這篇文章的核心討論主題是:「是否應將『使用AI工具(如提示詞工程)』視為一項值得培養或列在履歷上的技能」。
作者提出以下關鍵觀點:
- 質疑其作為「技能」的定位:雖然長期使用AI工具(如優化提示詞)會累積經驗,但作者認為這難以被歸類為需要刻意培養的「技能」,也不值得寫入履歷。
- 實用價值與經驗累積:承認使用AI工具時存在常見錯誤,而經驗能幫助避免這些陷阱,但這更像「工具熟練度」而非專業技能。
- 引發討論:詢問他人對「AI工具使用」是否屬於技能的見解,反映其對這一新興領域的定位困惑。
整體而言,文章探討的是科技工具普及後,社會對「技能」定義的邊界如何重新界定的問題。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jomtci/do_you_think_using_llms_is_a_skill/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jomtci/do_you_think_using_llms_is_a_skill/
- 發布時間: 2025-04-01 12:03:15
內容
I have been using them since they became commercially available, but it's hard for me to think of these as a real skill to develop. I would never even think of putting them/prompt engineer as a skill on a resume/cv. However, I do see many people fall victim to certain pitfalls that are remedied with experience.
How do you all view these? Like anything you gain experience with use, but I am hard-pressed to categorize usage as a tool with a skill level.
討論
評論 1:
Is using AI a skill? Sure. Like being able to quickly and efficiently navigate and use the internet is a skill. Back in the late 90s or early 2000s the majority of people could not surf the information superhighway yet, and those that could did very well for themselves. Eventually everyone catches up tho but it takes a generation. The kids growing up with LLMs and other generative ai may be able to accomplish things we could only dream of since we are used to doing things slow and manual.
評論 2:
using AI well is a skill like knowing how to google well was before google died. tech skills are ubiquitous among your fellow class but they are not as obvious to others.
評論 3:
Oh its a skill lol. Prompting can go deep. People over at r/DataAnnotation could attest to that
評論 4:
Critical thinking is a skill.
評論 5:
Yes, for now it's definitely a skill.
We're far from "Make me an X" and the AI magically gets what you need with a few questions.
You need to write prompts in certain ways so it can deliver, how to break the infinite hallucinations loops, etc.
4. I went through $50 tonight trying to get AI/Claude to fix errors.
這段討論的核心主題是:
- 對Replit平臺功能的挫折感:作者花費50美元嘗試在Replit上建立一個唯讀數據庫檢視工具,但未能解決腳本錯誤,且平臺未能提供有效協助。
- 對AI工具(如Claude)的依賴與信任:作者提到透過Roo code的連結購買Claude的服務,並強調AI工具(如Claude)大幅提升其工作效率,節省大量時間。
- 個人背景與動機:作者是聯邦政府員工(自稱「不是DOGE那種」),致力於利用自動化工具提升生產力,並希望進一步整合AI以優化工作流程。
- 資源有限的挑戰:強調自身經濟條件有限,但仍願意投資工具開發,反映對技術解決方案的迫切需求與現實限制之間的矛盾。
整體而言,討論圍繞著「技術工具的實用性、AI的生產力價值,以及個人開發過程中的困難與取捨」。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jok9ag/i_went_through_50_tonight_trying_to_get_aiclaude/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jok9ag/i_went_through_50_tonight_trying_to_get_aiclaude/
- 發布時間: 2025-04-01 09:46:47
內容
$50 to build a read only database viewer in my app so I can see if the script replit is legit. Well, not not legit, Replit couldn't figure out errors I was encountering in my app. $50 later, still not working. But using Roo code I had plenty of opportunity to use their links to buy Claude credits.
I'm not rich. I work in the fed government. Not the DOGE kind. I'm trying to build tools to help me automate and build shtuff. AI has saved me HOURS and I'm looking to use it more to increase productivity and efficiency more.
(附圖: https://preview.redd.it/2r2ro4wwo4se1.png?width=622&format=png&auto=webp&s=cbaeb3e2fe79ec6f3f48b40f0fe52e27e7e1f95d)
討論
評論 1:
When making a complaint, please
- make sure you have chosen the correct flair for the Claude environment that you are using: i.e Web interface (FREE), Web interface (PAID), or Claude API. This information helps others understand your particular situation.
- try to include as much information as possible (e.g. prompt and output) so that people can understand the source of your complaint.
- be aware that even with the same environment and inputs, others might have very different outcomes due to Anthropic's testing regime.
- be sure to thumbs down unsatisfactory Claude output on Claude.ai. Anthropic representatives tell us they monitor this data regularly.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
評論 2:
I only use Claude API these days in Plan Mode (Cline) and then use Gemini 2.5 when I switch to Act Mode.
I ask Claude for detailed implementation ideas in Plan Mode and then switch. It creates code snippets which Gemini uses to build/correct code.
I don't let the context go beyond 150K, I use a changelog.md and implementation-plan.md (updated by Gemini/claude when I need) to track and plan changes so knowledge carries over to new task when I need.
Edit: Also do refactor the code. It helps a lot. Smaller files are easier to edit using AI.. it won't spend multiple times rewriting broken edits.
評論 3:
Use Gemini 2.5.
Get Gemini pro, get their API.
I mean if you really want to spend to use Claude go ahead but google is free and pretty damn good. Plus context window is massive
評論 4:
Use Claude Desktop with filesystem MCP server. $20/month instead of $40+ per day. But all the AIs suck at debugging. Have it explain clearly what it thinks it's doing, then look through the code yourself as best you can. If it's the same error over and over and it thinks it "fixed it this time" it probably didn't. My example ... it had "/api" in the {apiUrl} and was also adding it with the /api/endpoint/whatever causing a 404. Hours it went in circles, until I poked around at the js in devtools and found it. As clever as they are, Gemini, Claude, OpenAI all fall for their own stupid errors, until you can point out the real problem. Just sayin'.
評論 5:
why are you not using gemini 2.5 pro? it's FREE with 1 million token context. a.i studio from google
5. Message Limit Reached very often now in paid version, hardly usable
這篇文章的核心討論主題是:用戶對於Claude AI(特別是2.7 Sonnet版本)近期出現的「Message Limit Reached」提示頻率異常增加的抱怨與困惑。具體要點包括:
- 使用限制問題:用戶發現訊息發送限制(Message Limit)被觸發的頻率顯著提高,即使付費訂閱仍感受到類似免費版的使用限制。
- 版本與模式:問題發生在Claude 2.7 Sonnet版本,且處於非思考模式(non-thinking mode)的桌面應用程式上。
- 體驗惡化:用戶直言當前版本幾乎難以正常使用("hardly usable now"),暗示功能受限已影響基本體驗。
- 變更質疑:用戶質疑近期是否有未公開的系統調整導致限制收緊,尋求問題原因。
延伸討論可能涉及付費用戶權益、AI服務的資源分配策略,或版本更新後的隱藏改動等議題。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jotz2i/message_limit_reached_very_often_now_in_paid/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jotz2i/message_limit_reached_very_often_now_in_paid/
- 發布時間: 2025-04-01 20:08:30
內容
From yesterday or so I am getting Message Limit Reached quite often, definitely more often than before.
My paid plan usage seems to be similar to free version usage limits.
What happened? Claude seems hardly usable now.
Claude 2.7 Sonnet, non thinking mode, desktop app
討論
評論 1:
Vibe coders building garbage todo apps in cursor are using all the GPUs sorry
評論 2:
Agreed. They drastically lowered the conversation limits. I ran out very very quickly yesterday. It's annoying how the limits change daily based on availability.
評論 3:
Complain with your wallet, I did. Be sure to leave the reason why you decided to leave.
評論 4:
apart of that the code quality seems to be much worse. Something happened on their side.
評論 5:
I'm constantly getting locked out for 3-4 hours at a time, and it happens once every blue moon that I get any kind of warning about how much I've used.. it's a bit much..
6. What the hell happened?
這篇文章的核心討論主題是如何有效地提交關於Claude AI的投訴或問題回報,並提供了具體的建議步驟。重點包括:
- 選擇正確的環境標籤(如免費版網頁、付費版網頁或API),以便他人理解問題情境。
- 提供詳細資訊(如輸入提示詞和輸出結果),幫助釐清問題來源。
- 認知相同條件下可能產生不同結果,因Anthropic的測試機制可能導致差異。
- 對不滿意的輸出按「踩」(thumbs down),官方會定期監測此反饋數據。
文末附帶用戶對服務中斷的抱怨(如狀態頁未更新、功能異常),但這些屬於具體案例而非核心主題。整體而言,內容聚焦於「優化問題回報流程」的指引。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jpa0nb/what_the_hell_happened/
- 外部連結: https://i.redd.it/cx8kwupb4bse1.jpeg
- 發布時間: 2025-04-02 07:21:08
內容
(原文為圖片貼文,無文字內容;相關回應見下方「討論」。)
討論
評論 1:
When making a complaint, please
- make sure you have chosen the correct flair for the Claude environment that you are using: i.e Web interface (FREE), Web interface (PAID), or Claude API. This information helps others understand your particular situation.
- try to include as much information as possible (e.g. prompt and output) so that people can understand the source of your complaint.
- be aware that even with the same environment and inputs, others might have very different outcomes due to Anthropic's testing regime.
- be sure to thumbs down unsatisfactory Claude output on Claude.ai. Anthropic representatives tell us they monitor this data regularly.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
評論 2:
Yeah same problem.
評論 3:
same here. No update from their status page too
評論 4:
Yup same for me.
評論 5:
Yeah not working here either.
7. Claude Pro limits \{#7-claude-pro-limi}
這段討論的核心主題是:用戶對近期(過去兩天內)Claude Pro使用限制突然收緊的不滿與困惑。
具體要點包括:
- 使用限制加劇:用戶發現訊息上限大幅降低(如僅能發送5條訊息即觸發限制),導致體驗變差。
- 時間範圍:問題集中出現在最近兩天,暗示可能是臨時調整或系統異常。
- 實用性受影響:用戶抱怨服務「幾乎無法使用」,凸顯限制對日常操作的負面影響。
潛在討論方向:是否為官方政策調整、技術問題,或個別帳戶的異常狀況。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jozlqb/claude_pro_limits/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jozlqb/claude_pro_limits/
- 發布時間: 2025-04-02 00:15:57
內容
Anyone else getting limited much quicker all of a sudden the last 2 days?
It's almost become unusable I get maybe 5 messages before I hit the limit.
討論
評論 1:
I've noticed that it seems tighter, but not 5 messages tight. Still, compared to a week ago, I have definitely noticed a change.
評論 2:
Since this topic is back on the radar, you have several options:
- Glama (Chat + API)
- OpenRouter (API)
Glama (I am the founder) is the closest substitute that will give you Claude-like experience. However, you can use OpenRouter with apps like Typindmind.
評論 3:
Same feeling here, Pro user for several months.
評論 4:
Yes!!! But the time to continue the conversation is more short also after limit
評論 5:
yes 5 files not large few messages lasted 1 hr n out
8. Can somebody tell what MCPs capable of like telling a toddler?
这篇文章的核心討論主題是:
「MCP(可能指某種技術或概念)的實際應用場景與獨特創新點」
作者困惑的關鍵點在於:
- 現有討論過度聚焦於技術實現,而缺乏對「實際用途」(in what way people use it)的具體說明。
- 希望釐清 MCP 的獨特價值或創新理念(unique ideas),而非僅停留在操作層面。
補充說明:
- 若「MCP」是特定術語(如 Minecraft Protocol、Microsoft Certified Professional 等),需更多背景資訊以精確分析,但當前討論普遍指向「技術應用與理念」的落差問題。
- 建議進一步釐清 MCP 的具體領域(如程式開發、商業模式等),以獲得更針對性的總結。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1joq6ir/can_somebody_tell_what_mcps_capable_of_like/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1joq6ir/can_somebody_tell_what_mcps_capable_of_like/
- 發布時間: 2025-04-01 15:52:36
內容
I have been seeing the term of MCP everywhere, and I watched a few videos about it, but everyone is so focused on implementation, so I could not figure out in what way people use it?
What are the unique ideas over it?
討論
評論 1:
Here's an example.
Without MCP:
You are chatting with a LLM and sort out a problem you've been working on and now you want to document it. Let's say you document all your work in Obsidian, so normally you would copy the info from the LLM chat, paste it in Obsidian and then format it. (You could also ask the LLM to format it before copy/pasting)
With MCP: You are chatting with a LLM and sort out a problem you've been working on and now you want to document it. You ask the LLM to document everything you've worked on in Obsidian and format it in markdown with proper headers, etc. Then you watch the magic happen in real-time while the LLM is creating your docs. Then you could point it at other folders you want cleaned up and it go to town. You just prompt the LLM on what you want it to do.
It's sort of like giving a LLM a set of instructions on how to use a specific app/service/etc. You could also look at it like Neo from the Matrix when he's learning Kung-Fu and all that, each MCP server is a new skill.
評論 2:
Claude is Mike Tyson, and MCP is Don King. You might want to do business with or hire Mike, but you have a better chance interacting with Don King to make it happen.
評論 3:
You're not a toddler but let's do it for fun anyway.
It's been possible to make cookies for a while now: you just make cookie dough and then smoosh them into the sheet. The cookies potentially come out all different ways.
Instead, you could use a cookie cutter to make fun shapes. That makes the cookies much more consistent.
-end toddler-
In this case, the cookies are the tool calls and the cookie cutter is MCP. MCP is forming the glue code and consistency around how you interact with these functions. It doesn't even itself provide any of this functionality, which means you can do pretty much anything with it that the tools could do.
Since a tool is a basic unit that can call local or remote things, you can do anything that would require local or remote calls: either manipulating the file system, a library, or calling your favourite API.
The functions themselves have to be described in a way that makes it easy for the LLM to approach.
Enabling the LLM to interact with the world in different ways has potential to enable workflows that you couldn't have easily done before; the value is not in the wiring up to one service but in the wiring up to multiple services.
Tools like Zapier have done similar things, but they require an external provider and they're quite convoluted to set up, whereas now we're depending on the LLM to interpret and glue those things together.
It isn't an exact one-to-one here, because Zapier has the ability to, say, run things on a trigger, and the MCPs are in response to what you're doing currently.
This, along with the fact that a lot of the hard work with LLMs is building context, building information that it can work with. Making it easier for it to get that information is gonna give better outputs.
評論 4:
Hi there, friend! Elmo is so happy to see you today!
Elmo wants to tell you about something called Claude MCP!
Claude MCP is like Elmo's super smart thinking friend! Claude MCP helps grown-ups figure out really big problems that make their brains go "Hmmmm!"
When you have a super duper hard puzzle, Claude MCP helps break it into tiny little pieces! Just like when Elmo and friends work together to solve problems on Sesame Street!
Claude MCP is like having the bestest helper for thinking about numbers, patterns, and big questions! It's like if Mr. Noodle had AMAZING puzzle-solving powers!
Elmo thinks Claude MCP is ticklish for your brain! It makes hard things easier, just like how Elmo's friends make Elmo's day brighter!
When you build with blocks, sometimes you have to think carefully about where each block goes, right? Claude MCP helps grown-ups think about complicated problems the same way - step by step, being extra careful with each piece!
It's like if Cookie Monster didn't just gobble up all the cookies at once, but instead thought very carefully about the yummiest way to enjoy each cookie!
That's Claude MCP in Elmo's world! Elmo loves making new friends! Do you want to count some cookies with Elmo now?
評論 5:
MCP = C3PO
9. v0.7.3 Update: Dive, An Open Source MCP Agent Desktop
核心討論主題總結:
- 產品優先級與客戶需求:文章開頭強調 Anthropic(或相關產品團隊)應優先專注於「客戶已付費的核心功能」,並暗示當前開發重點應放在提升現有產品的實用性與穩定性。
- Dive 工具的核心功能與更新亮點:
  - 多模型支援與切換:支援多種主流 LLM(如 GPT-4、Claude、Gemini 等),並允許用戶靈活切換不同配置或 API 金鑰。
  - 使用者體驗優化:包括可編輯訊息、重新生成回應、自動更新、介面操作改進(如折疊區塊、快捷鍵調整)等。
  - 技術效能提升:背景運行、開機自啟、API 金鑰錯誤提示改進,以及 MCP Server 的預設範例優化。
- 工具定位與價值主張:Dive 被定位為一款高效、易用的開發工具,主打「即時工具調用嵌入」與「系統整合」,幫助開發者更靈活地操作多種 LLM 服務,並透過持續更新強化功能性。
- 行動呼籲:文末引導讀者試用最新版本,提供 GitHub 發布頁面連結,強調產品的實際應用價值。
關鍵字:多模型支援、使用者體驗、開發工具、LLM 整合、效能優化。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1joojxz/v073_update_dive_an_open_source_mcp_agent_desktop/
- 外部連結: https://v.redd.it/2fwf6y2ex5se1
- 發布時間: 2025-04-01 13:54:27
內容
Dive is a desktop application for Windows and Linux that supports all LLMs capable of making tool calls. It is currently the easiest way to install MCP Server. Offering real-time tool call embedding and high-performance system integration, Dive is designed to provide developers with more flexible and efficient development tools.
0.6.0 → 0.7.3 Update Summary
- Multi-Model Support & Switching
  - Supported models: OpenAI GPT-4, ChatGPT API, Azure OpenAI, Claude, AI21, Gemini, HuggingChat, Mistral AI, deepseek, AWS, and other LLM services. Custom models are also supported.
  - Multi-model Switching: Switch between multiple MCP Servers. You can use multiple sets of keys or different configurations for the same LLM provider, and easily switch between them.
- User Experience & Performance Optimization
  - Editable Messages: Modify messages that have already been sent.
  - Regenerate Responses: Supports regenerating AI responses.
  - Auto Updates: Now supports automatic updates to the latest version.
  - Interface and Operation Enhancements: Collapsible tool_calls and tool_result sections; pressing ESC while the sidebar is open will prioritize closing the sidebar instead of interrupting AI responses.
  - API Key Configuration Improvements: Displays error messages in red for incorrect inputs, and error messages disappear automatically when switching providers.
  - MCP Server Default Example Optimizations: The echo example has been updated from CJS format to ESM, reducing file size.
  - Background Operation and Auto-Start: The app can be minimized to the background and supports auto-start on boot.
Try it out!
https://github.com/OpenAgentPlatform/Dive/releases
討論
評論 1:
Anthropic should first focus on products their customers are already paying for. It should be the priority...
10. Claude's context has been nerfed?
The core discussion topic of the article revolves around the author's suspicion that Anthropic's AI model (Sonnet) may not actually provide the full advertised 200k token context window in practice. Instead, the author hypothesizes that the system might be using a form of "prompt caching" where prior messages are summarized and stored, effectively reducing the actual context length while still claiming the full capacity.
Key points of the discussion include:
- Unexpected Context Loss: The author observes that Sonnet suddenly "forgot" a chunk of code from just two messages prior, despite being well below the 200k token limit (~40k tokens). This suggests the model's context retention doesn't behave as expected.
- Possible Explanation - Summarization-Based Caching: The author speculates that Anthropic might be summarizing messages and storing those summaries instead of the full content, thereby artificially inflating the effective context window while saving computational costs.
- Economic Motivation: The proposed method would allow Anthropic to reduce costs significantly by processing shorter, summarized versions of conversations while still advertising a large context window.
- Alternative Explanations Ruled Out: The author dismisses other possibilities (e.g., model quantization or general degradation) because the issue manifests as selective forgetting rather than a gradual decline in response quality.
The article ultimately questions whether Anthropic is misleading users about the true capabilities of their model's context window by employing optimization techniques that compromise the integrity of long conversations.
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp4u8u/claudes_context_has_been_nerfed/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp4u8u/claudes_context_has_been_nerfed/
- 發布時間: 2025-04-02 03:44:51
內容
Like every day, I was doing some coding with 3.7 and things were going swellingly, and then suddenly a chunk of code, that was roughly 50 LOCs long two messages prior, was (almost) completely gone from Sonnet's context. The message from Sonnet was there, it referred to the code like before, but despite ~10 edits to prompt it in different ways, it just couldn't re-produce the code nor make correct adjustments to it. Aside from knowing the name and parameters, it seemed to be clueless.
The conversation is well below the 200k token limit, at around 40k tokens, which makes me wonder how it's even possible? If the model got quantized to shit, it wouldn't just completely lose context. It would give worse responses, but 2 messages back would be yielding better recollection than the initial message (which is not the case). Or alternatively the quality of responses would degrade to a repeating mess, but the "quality" felt exactly the same as prior to it. It just "forgot" the details.
So I'm wondering if what is happening is that they're using a sort of alternative prompt caching method (at least) for the chat client where the prior messages are collapsed into high quality summaries of previous assistant and user messages? Meaning, they're basically selling 200k context, but in reality it's 15k of summaries and you hit your limit @ 20k which in Anthropic math would be 200k* (*simulated tokens which are definitely worth exactly as much as real ones)?
Obviously this is just a tummy feel, but the above thing did happen and the only way for it to happen (as far as I can imagine) is either due to novel model degradation or the above very believable scam.
I reckon it would work something like this,
- Message comes in
- Generate a very high quality summary from the message (I'm sure they would've tested this to death)
- Store the summary with the hash of the message
- Generate completion to the original message
- Generate summary from completion
- Store the summary of the completion with the hash of the message
- New completion request arrives with the full message array
- Check the hashes of the messages and replace them with the summarized versions
- Return a more shitty completion.
- Save a trillion dollaroos on long conversations since every completion on long conversations is in reality now 90% shorter while selling a product that's been nerfed to death.
I doubt it would start from the first message, but there is some point where it becomes more economical to run 1 extra summarization on every message to get to perform completions on shorter conversations.
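To make the speculation concrete, here is a toy sketch of the hash-keyed summary substitution the poster is describing. This is purely an illustration of the hypothesis, not how Anthropic's system is known to work; the `summarize` step is a stand-in for a model call.

```typescript
// Toy illustration of the summary-cache idea the poster is speculating about.
// NOT Anthropic's implementation; "summarize" stands in for a model call.
import { createHash } from "crypto";

type Message = { role: "user" | "assistant"; content: string };

const summaryCache = new Map<string, string>();

const hashOf = (m: Message) =>
  createHash("sha256").update(m.role + "\n" + m.content).digest("hex");

// Placeholder for "generate a very high quality summary from the message".
async function summarize(m: Message): Promise<string> {
  return m.content.slice(0, 200); // a real system would call a model here
}

async function remember(m: Message) {
  summaryCache.set(hashOf(m), await summarize(m));
}

// Before generating a completion, swap full messages for cached summaries,
// so the effective prompt is much shorter than the advertised context.
function compressHistory(history: Message[]): Message[] {
  return history.map((m) => {
    const summary = summaryCache.get(hashOf(m));
    return summary ? { ...m, content: summary } : m;
  });
}
```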
討論
評論 1:
Yeah... Claude's been doing this to me; Deepseek V3 will accept the prompts and extra bloat context I shove sometimes; but nothing is beating Gemini's 1 Million Context window rn...
評論 2:
This happens to me daily, most often with MCP tool use, at some point where there is too much content it just breaks and the whole message disappears. I shared this as a bug via their feedback page but nothing has changed since.
評論 3:
I'm experiencing similar context limitations, but in a different way! Before Claude 3.7 Sonnet was released, I (as a free user) could easily ask for title recommendations for my 2000-2500 word English articles. Claude would read through the entire text and provide thoughtful title suggestions based on the full content.
But now? It seems completely nerfed. I can only get Claude to properly handle articles up to about 1500 words max. Anything longer and the system won't even let me send the message at all. I get an error saying: "Your message will exceed the length limit for this chat. Try attaching fewer or smaller files or starting a new conversation."
評論 4:
Yes -- there are 'holes' in the context window. I've seen this with Claude and with some GPT-4 variants as well. I've also observed it while playing around with smaller local models on my own GPU, where the max context is smaller and it kicks in much sooner for some models. Sometimes by posting another message or two you can shift the 'hole' somewhere else, sometimes. Usually you'll need to start over with a new session
I do think this is less 'something they did on purpose' and more 'attention heads have weird blind-spots with certain content and nobody can quite untangle what and why'
It is very frustrating when it happens
11. Are rate limits significantly lower now? \{#11-are-rate-limi-significantly-lower-now}
The core discussion topic of the statement appears to be the limitation or inconsistency in the number of messages allowed in a conversation. Specifically:
- Unexpected Limitation: The user notes that the conversation ended ("ran out") after only 20-25 messages, which is significantly fewer than their usual experience of 40-50 messages.
- Comparison to Normal Behavior: There's an implied expectation or past precedent of higher message limits (40-50), making the current restriction seem unusual or problematic.
- Potential Issue: The statement could be highlighting a technical glitch, policy change, or account-specific restriction affecting conversation length.
In summary, the focus is on the reduced message limit in a conversation compared to the user's typical experience, suggesting confusion or frustration with the inconsistency.
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp0ibc/are_rate_limits_significantly_lower_now/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp0ibc/are_rate_limits_significantly_lower_now/
- 發布時間: 2025-04-02 00:52:31
內容
Ran out twice in like 20-25 messages. Usually can do 40-50
討論
評論 1:
Yes - noticed this also. Spending more time on Gemini.
評論 2:
Yes. The past day or so it's been locking people down far faster than usual. Lots of posts about it in the past day.
12. Optimizing Claude's MCP Filesystem Server: How to Fix the Recursive Search Problem
這篇文章的核心討論主題是:如何通過修改Claude Desktop的MCP文件系統功能配置來優化性能,特別是解決因默認設置導致的非必要遞歸文件搜索問題。具體要點如下:
- 問題描述
  - 默認MCP配置會觸發Claude自動調用 search_files 和 directory_tree 等工具,導致遞歸掃描整個目錄結構,引發性能問題:長時間卡頓(如掃描數千個文件)、高磁盤使用率、介面無響應或超時。
- 解決方案:工具修剪(Tool Pruning)
  - 手動修改文件系統服務器代碼,僅保留必要工具(如 read_file、write_file 等),移除會觸發遞歸搜索的功能。
  - 調整配置指向本地修改後的腳本,避免默認行為。
- 效果改善
  - 顯著提升響應速度、消除非預期的性能波動,並保留核心文件操作功能。
- 延伸討論
  - 作者提及Claude可能無故調用工具的困擾,並呼籲分享其他優化技巧。
關鍵詞:MCP文件系統優化、遞歸搜索問題、工具修剪、Claude性能調校。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1joor4p/optimizing_claudes_mcp_filesystem_server_how_to/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1joor4p/optimizing_claudes_mcp_filesystem_server_how_to/
- 發布時間: 2025-04-01 14:07:45
內容
Hey everyone,
After playing around with Claude Desktop's MCP filesystem functionality, I wanted to share an important optimization that can make a huge difference in performance.
The Problem
The default MCP setup in the guide looks like this:
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "C:\\Users\\username\\Desktop",
        "C:\\Users\\username\\Downloads"
      ]
    }
  }
}
This works fine in theory, but in practice, there's a major issue: Claude will often call tools like search_files and directory_tree unprompted, which can trigger recursive searches through your entire directory structure. This can lead to:
- Claude getting stuck for minutes while it searches through thousands of files
- High disk usage as it scans your entire Desktop or Downloads folder
- Timeouts or unresponsiveness in the Claude interface
I once made the mistake of adding my entire Documents folder, and Claude spent 5 minutes scanning through years of files before I could even get a response. Not ideal!
The Solution: Tool Pruning
After digging into the source code, I found you can easily customize which tools Claude has access to. Here's what I did:
- First, make sure you have the package installed locally (this is important because npx will download a fresh copy each time, but we need a local copy to modify):
npm install @modelcontextprotocol/server-filesystem
- Located the filesystem server code at: node_modules/@modelcontextprotocol/server-filesystem/dist/index.js
- Modified it to remove the problematic tools, keeping only these essentials: read_file, read_multiple_files, write_file, edit_file
- Then updated my configuration to point directly to the modified file:
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": [
        "C:\\path\\to\\node_modules\\@modelcontextprotocol\\server-filesystem\\dist\\index.js",
        "C:\\Users\\username\\Github"
      ]
    }
  }
}
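The post doesn't show the actual edit made to dist/index.js. As a rough, hypothetical sketch of the allow-list idea (not the real internals of @modelcontextprotocol/server-filesystem), the pruning amounts to filtering whatever tool list the server advertises down to the essentials:

```typescript
// Hypothetical sketch of "tool pruning": only advertise an allow-list of tools.
// The real structure of @modelcontextprotocol/server-filesystem differs;
// this just illustrates the filtering idea described above.
type ToolDefinition = { name: string; description: string };

const ALLOWED_TOOLS = new Set([
  "read_file",
  "read_multiple_files",
  "write_file",
  "edit_file",
]);

function pruneTools(allTools: ToolDefinition[]): ToolDefinition[] {
  // Drops search_files, directory_tree, etc., so Claude can no longer
  // trigger recursive scans of the whole directory.
  return allTools.filter((tool) => ALLOWED_TOOLS.has(tool.name));
}
```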
The Results
The difference is night and day:
- Claude responds immediately instead of getting caught in recursive searches
- No more unpredictable performance issues
- You still get all the essential file operations (reading and writing)
This approach is purely about improving performance and user experience. By limiting the available tools to just what you need, Claude's interactions become much more predictable and responsive.
Has anyone else run into this issue? What other MCP optimizations have you found useful?
(ps: Claude wrote this lolz, you can also optimize further to tell claude to Exclude however claude can be a big dumb dumb and not follow directions, 95% of my frustrations is random tool calls for no reason)
討論
評論 1:
I didn't have this issue but it might happen as I gave it my documents folder as well, so this is really helpful! thank you for your work!
評論 2:
I just wrote my own file system mcp for Mac from the start.
13. Hitting limits without getting responses \{#13-hitting-limi-without-getting-responses}
根據提供的文章片段,核心討論主題可總結為:
用戶付費使用Claude AI時遭遇的技術問題與服務品質抱怨,具體包含以下重點:
- 頻繁錯誤問題:使用者反覆遇到系統無回應的錯誤(截圖中顯示錯誤訊息),需多次嘗試才能獲得輸出
- 資源消耗疑慮:錯誤導致「無效請求」仍被計入使用限制(usage limit),造成配額快速耗盡
- 付費價值質疑:使用者批評付費後體驗反而比免費版更差("paid for degraded performance")
- 尋求解決方案:詢問其他用戶是否遇到相同問題及處理建議
(註:由於原始連結無效且內容不完整,分析僅基於提供的文字片段)
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp0pb4/hitting_limits_without_getting_responses/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp0pb4/hitting_limits_without_getting_responses/
- 發布時間: 2025-04-02 01:00:25
內容
This has happened quite a few times, and it is causing me to hit usage limits with barely any use. Claude will simply not respond and requires a few tries before it outputs any response.
Not sure if anyone else is encountering this issue or how to handle this but it really does feel like I just paid for degraded performance.
討論
評論 1:
When making a complaint, please
- make sure you have chosen the correct flair for the Claude environment that you are using: i.e Web interface (FREE), Web interface (PAID), or Claude API. This information helps others understand your particular situation.
- try to include as much information as possible (e.g. prompt and output) so that people can understand the source of your complaint.
- be aware that even with the same environment and inputs, others might have very different outcomes due to Anthropic's testing regime.
- be sure to thumbs down unsatisfactory Claude output on Claude.ai. Anthropic representatives tell us they monitor this data regularly.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
評論 2:
I second this, my entire day was wasted with these errors as well.
評論 3:
thing is broken today
評論 4:
I second this
14. Please be candid; did I just pay $220 for a year of this screensaver, but only at Anthropic's website?
這篇文章的核心討論主題是:用戶支付220美元購買年度方案後,Anthropic官網服務頻繁中斷,質疑自己是否只是買到一個「只能在Anthropic網站上看的螢幕保護程式」。
- 原文僅為一張截圖,沒有文字內容。
- 留言者附和,抱怨近期服務停機的頻率「越來越誇張」。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jpaadi/please_be_candid_did_i_just_pay_220_for_a_year_of/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jpaadi/please_be_candid_did_i_just_pay_220_for_a_year_of/
- 發布時間: 2025-04-02 07:33:10
內容
(原文為圖片貼文,無文字內容。)
討論
評論 1:
Starting to get kind of ridiculous how often it's been down lately
15. What computer use project did you build?
這段文字的核心討論主題是關於個人對電腦使用(computer use)的探索或研究經驗的分享與反思,具體包含以下幾個重點方向:
- 個人專案經驗:詢問對方曾進行過哪些與電腦使用相關的專案(例如軟體開發、工具應用、研究實驗等)。
- 挑戰與困難:探討在執行專案過程中遇到的技術或非技術性難題,以及如何克服。
- 現狀的不足與改進空間:反思當前電腦使用相關領域(或自身專案)尚未解決的問題,或未來可延伸的方向。
整體而言,這段文字旨在激發關於「電腦應用實踐」的經驗交流,並聚焦於個人學習、問題解決與未來的可能性。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp14bs/what_computer_use_project_did_you_build/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp14bs/what_computer_use_project_did_you_build/
- 發布時間: 2025-04-02 01:16:54
內容
Doing a bit of exploration / research on computer use
Curious what computer use project did you build? Was it hard, what was hard? What's missing?
討論
無討論內容
16. Bad vibes?
这篇文章的核心討論主題是:澄清AI助手(Claude)在代碼協作中的角色與限制,特別是:
- 記憶與連續性的限制
  - Claude解釋它無法記憶或存取跨對話的歷史代碼(無持久性記憶),每次回應僅基於當下提供的上下文。
- 「備份代碼」的使用邏輯
  - 當用戶提供標記為「備份」的文件時,Claude會優先參考這些現有實現方案,而非重新創作,以避免不一致或重複勞動。
- 語言表述的潛在誤導
  - 澄清像「我們來實現」等協作性措辭實際是「建議」而非「自主行動」,並承諾未來會更透明化其輔助性質。
- 協作關係的再確認
  - 最終聚焦於是否在明確角色認知後繼續當前任務(updateAsset實現問題),強調用戶主導與AI的從屬輔助定位。
本質:透過釐清技術邊界與溝通方式,優化人-AI協作效率,同時管理用戶對AI能力的預期。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jow502/bad_vibes/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jow502/bad_vibes/
- 發布時間: 2025-04-01 21:53:49
內容
From earlier today as I was getting more and more frustrated by rework. Using Augment Code...
Me: I really have to ask why you need to keep going back to the backups to get working code. You do understand that you wrote that code earlier? If so, why did you later delete it?
>I am Claude, an AI assistant, and I need to clarify something important: I don't actually write or delete any code - I provide suggestions based on the context I'm given in each conversation. I don't have persistent memory between conversations or access to code I "previously wrote."
>When I look at files marked as "backups" in the provided context, I use them as reference for working implementations. This is why I sometimes refer back to them rather than suggesting entirely new solutions.
>I apologize if my responses gave the impression that I had authored and then deleted code. I aim to be more transparent about this in the future. When I say things like "'s implement this" or "we need to add", I'm really suggesting changes based on the context provided, not referring to code I personally wrote or modified.
>Would you like me to continue helping with the current issue regarding the updateAsset implementation, but with this clearer understanding of my role?
討論
評論 1:
Just another day of Vibe Coding.
17. Dropped a new tutorial for Agentic pattern + AI SDK
這篇文章的核心討論主題是:
- 介紹影片內容:作者發布了一支關於「Agentic patterns」(能動性模式)的新影片,並以Anthropic的研究論文為基礎,探討常見的應用模式。
- 尋求反饋:由於這是作者的第三支影片,他希望觀眾能提供意見,包括對影片的看法以及改進建議,以幫助他未來製作更好的內容。
- 互動邀請:附上影片連結,鼓勵觀眾觀看並分享想法。
總結來說,重點在於推廣新影片、解釋內容主題(Agentic patterns),並請求觀眾反饋以提升影片品質。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp6sai/dropped_a_new_tutorial_for_agentic_pattern_ai_sdk/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp6sai/dropped_a_new_tutorial_for_agentic_pattern_ai_sdk/
- 發布時間: 2025-04-02 05:02:10
內容
Hey guys, I just dropped a new video covering Agentic patterns. I'll be covering all the Agentic patterns that are commonly used using the Anthropic paper.
Would really love your thoughts on what you think about this video and whether I can improve. This is only my third video, and I will get better but could really use some feedback.
討論
無討論內容
18. 12 step checklist on how to rank on LLMs
這篇文章的核心討論主題是「生成式引擎優化(GEO, Generative Engine Optimization)」——如何透過特定內容策略提高網站在大型語言模型(如Claude、ChatGPT等)生成回答時的引用機率。主要重點如下:
- GEO的核心目標:讓企業或內容創作者的網站更易被LLM引用,類似傳統SEO但針對AI生成內容的曝光優化。
- 關鍵策略:
  - 高優先級技巧(基於研究論文):如權威性提升、內容簡潔性、事實驗證標記等(附圖表清單)。
  - 補充策略:
    • 製作清單式內容(Listicles,如「最佳X軟體」)
    • 結構化標題(H2/H3)與FAQ格式
    • 對話式語氣與直接事實陳述
- 實用方法:
  - 逆向分析LLM的資料來源
  - 競爭對手的清單內容反向工程
  - 透過工具追蹤引用鏈接
- 研究基礎:引用論文《Generative Engine Optimization》(arXiv 2023)探討AI如何選擇資訊來源,並提供數據支持的優化框架。
作者最終呼籲討論GEO的實驗經驗,反映此議題在新興AI行銷領域的熱門性與實戰價值。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jouwnn/12_step_checklist_on_how_to_rank_on_llms/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jouwnn/12_step_checklist_on_how_to_rank_on_llms/
- 發布時間: 2025-04-01 20:56:11
內容
I was quite excited when they announced that Claude can now search the web! Claude is definitely my favourite LLM out there... And since I own a business and want to be featured in the results, I did some research around how to get cited / mentioned by LLMs. All the findings below are from this research paper on GEO (Generative Engine Optimization) I found a month ago + a bunch of articles.
I put all the points in the table below. Hopefully it helps.
Research paper: https://arxiv.org/pdf/2311.09735
The GEO Checklist (in order of impact):
https://preview.redd.it/70laoa3q78se1.png?width=1494&format=png&auto=webp&s=cbca09b1e7e8536c078d53fc065e0824813c27c9
Additional ones which I think also have impact:
Listicle - This one is HUGE. Write them as well as invest in being listed. Example: "Best software for X" OR "[Competitor] alternatives" posts get cited constantly by LLMs!
Clear structure with headings - AI models love organization! Use H2 and H3 headings that directly answer questions. FAQ-style content is money here (include FAQ json-ld schema as well)
Conversational Tone - Makes sense when you think about it - AI learns from forums, Reddit, and Q&A sites. Write like you're having a conversation, not delivering a lecture
Direct, factual content - Include main point in the first sentence, then expand. Example: "Yes, dark chocolate is beneficial for heart health. Studies show it contains flavonoids that reduce inflammation and improve blood flow."
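For the FAQ schema mentioned above, one common form is schema.org's FAQPage JSON-LD. A minimal sketch follows, expressed as a TypeScript object that would be serialized into a `<script type="application/ld+json">` tag; the question and answer are placeholders reusing the dark-chocolate example from the post.

```typescript
// Minimal schema.org FAQPage structured data (placeholder Q&A),
// embedded in the page as <script type="application/ld+json">.
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "Is dark chocolate good for heart health?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Yes. Studies show it contains flavonoids that reduce inflammation and improve blood flow.",
      },
    },
  ],
};

const jsonLd =
  `<script type="application/ld+json">${JSON.stringify(faqSchema)}</script>`;
```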
Here's how I quickly find GEO opportunities:
- Reverse-engineer LLM sources (ask AI tools where they're getting info from)
- Reverse-engineer sources across the web (backlink analysis)
- Analyze competitor listicle placements
Has anyone else been experimenting with GEO? would love to discuss them
Cheers,
Tilen
討論
評論 1:
If someone want to dig deeper, I included examples and some case studies in the blog below:
https://www.babylovegrowth.ai/blog/generative-search-engine-optimization-geo
評論 2:
Super helpful for anyone trying to get cited by LLMs. Have you noticed if certain types of expert quotes (e.g., academic vs. industry professionals) carry more weight?
19. Drastically lowered limits??? 30,000 / 18 questions (20-40 char length) tokens IN TOTAL for whole working day???? \{#19-drastically-lowered-limi-30-000-18-questions}
这篇文章的核心討論主題是:
用戶對付費AI服務(Claude 3.7)的強烈不滿與批評,具體聚焦於以下問題:
- 效能與品質缺陷
  - AI理解能力下降(無法回答簡單問題、答非所問)。
  - 生成內容脫離需求(僅20%相關,80%為無關冗長回應)。
  - 程式碼生成品質低劣(違反SOLID/KISS原則,不如ChatGPT)。
- 使用限制不透明
  - 實際可用token數與標示不符(30,000 tokens上限,但重複內容導致有效用量更低)。
  - 缺乏明確的計費與用量說明(如單次提問字數限制、詳細用量統計)。
- 技術問題與穩定性
  - 連線中斷、介面凍結(桌面/手機端皆發生)。
  - 服務可靠性不足,影響工作流程(尤其對依賴AI生成JS/C#程式碼的用戶)。
- 用戶訴求
  - 要求透明化服務條款(明確標示付費內容、硬性限制)。
  - 改進AI回應精準度(減少冗言、提升相關性)。
  - 修復技術問題以保障使用體驗。
總結:作者因長期累積的不滿(效能衰退、成本效益失衡)決定停用服務,並呼籲平台正視問題,提供符合付費價值的AI體驗。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1joteyd/drastically_lowered_limits_30000_18_questions/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1joteyd/drastically_lowered_limits_30000_18_questions/
- 發布時間: 2025-04-01 19:37:49
內容
So recently Claude 3.7 was horrible for me (cannot understand simple questions, rumbling about some stuff that is not connected to the prompt but in regards to the project info (only 20% used))...
Another slap to the face is fact that after just 30,000 tokens (input/output calculated by copying WHOLE interface, so with all repeating info probably more like 28,000) and 18 questions (20-40 char EACH PROMPT !!!!!!!!!!!!!)
Most of that limit is Claude rumbling OFF TOPIC as info that I want is just less than 10%!!!
Is it like that for everyone else paying for it in UK???
This time definitely forever I am leaving as was using it mainly for JS/content (I am c# not JS, so it was doing a lot JS for me, but recently it is super bad quality code that is breaking SOLID/KISS and is horrible to work with, even ChatGPT currently is better).
Probably we should get proper information as with any other service about:
- what we are paying for
- what are EXACT limits
- what is my EXACT usage
- limit rumbling of AI that exhausts limits
- constant issues with connection/disconnection/freezing (both desktop/iphone)
討論
評論 1:
When making a complaint, please
- make sure you have chosen the correct flair for the Claude environment that you are using: i.e Web interface (FREE), Web interface (PAID), or Claude API. This information helps others understand your particular situation.
- try to include as much information as possible (e.g. prompt and output) so that people can understand the source of your complaint.
- be aware that even with the same environment and inputs, others might have very different outcomes due to Anthropic's testing regime.
- be sure to thumbs down unsatisfactory Claude output on Claude.ai. Anthropic representatives tell us they monitor this data regularly.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
評論 2:
this is paid plan? I'd simply cancel if I see that in my usage.
not from the UK
評論 3:
We need a hefty plan, GPT Pro style, for about 99 dollars a month, with access to more processing power and an even smarter 3.7 model... I'd pay it without any problems
評論 4:
Yes. Limits have clearly been lowered by a lot.
評論 5:
I'm using the desktop web app and this morning I hit the limit for the first time after using the Pro version for a week. I sent a lot of prompts and got long responses (failed attempt to program a very simple Android app, turns out you still need to know how to code if you don't want to bang your head against the wall lmao).
I never hit the limit when using ClaudeAI for creative writing, even with multiple prompts of 300 words each.
20. Message limit is back enforced even for PRO
根據提供的文章內容和用戶討論,核心討論主題可總結為以下幾點:
- Anthropic(可能是Claude模型開發公司)重新實施使用限制:用戶提到近期突然遭遇使用限制(如額度耗盡),儘管使用模式與過去相同,且先前在Sonnet 3.5模型上能維持更長時間的存取。這引發對平台政策變動的猜測。
- 對比模型版本的性能與限制差異:用戶比較Sonnet 3.5與其他版本(如Opus)的額度消耗速度,暗示新版本可能效率較低或限制更嚴格,導致體驗下降。
- 用戶社群的不滿與反饋:文中提到「抱怨回歸」和「帝國反擊」(The empire strikes back!),反映部分用戶對政策調整的負面情緒,並尋求其他同類案例以確認是否為普遍現象。
- 不確定性與猜測:由於缺乏官方說明,用戶僅能透過經驗推測限制加強的原因,顯示平台與用戶間的溝通透明度問題。
總結:討論聚焦於Anthropic可能暗中收緊使用政策,導致用戶體驗變化,並引發社群對模型版本差異及公司決策的質疑。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jonn2a/message_limit_is_back_enforced_even_for_pro/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jonn2a/message_limit_is_back_enforced_even_for_pro/
- 發布時間: 2025-04-01 12:54:12
內容
It's been weeks since I got one, but this time it came really early.
I'm sure my usage was not bigger than in recent days, and I usually last a lot longer on Sonnet 3.5. But I think Anthropic is back to enforcing it.
Not sure if anyone has the same case here.
Seems we will get back all the complaints over the limits. The empire strikes back!
https://preview.redd.it/lqhzxxk8m5se1.png?width=951&format=png&auto=webp&s=f75a00ddab16a5defed6085229c51910a9f62614
討論
評論 1:
When making a complaint, please
- make sure you have chosen the correct flair for the Claude environment that you are using: i.e Web interface (FREE), Web interface (PAID), or Claude API. This information helps others understand your particular situation.
- try to include as much information as possible (e.g. prompt and output) so that people can understand the source of your complaint.
- be aware that even with the same environment and inputs, others might have very different outcomes due to Anthropic's testing regime.
- be sure to thumbs down unsatisfactory Claude output on Claude.ai. Anthropic representatives tell us they monitor this data regularly.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
評論 2:
It was nice while it lasted. I think I had gotten used to the message limits, and that forced me to take longer breaks. I was straight up grinding these last few weeks to get some labor-of-love projects done.
評論 3:
I sent 22 msgs over the course of 3 hours and got hit with it. Annoyed. 2 hours until it's back for me. (Pro also, using concise 3.7.) Pretty disappointing. (Over the span of 2 chats; I start a new chat whenever it scolds me about a 'long chat'.)
評論 4:
It was way more generous before the recent downtime they had I think.
評論 5:
I've always had this. Don't know why you're saying they didn't enforce it.
21. MCP, excited for the future
這篇文章的核心討論主題是 Model Context Protocol (MCP),重點包括:
- MCP 的本質與功能
  - MCP 並非「自主代理框架」,而是一種標準化工具交互的協議,用於統一描述工具(如 API、外部系統)的輸入、輸出與行為,並通過共享格式實現模型與工具的溝通。
  - 它簡化了工具層的管理,使多工具協作更靈活,無需為每個互動硬編碼。
- 釐清常見誤解
  - MCP 不具備決策、規劃或推理能力(仍需開發者處理協調與上下文),但能為代理(agent)系統提供基礎設施支持,使工具整合更高效。
- 實際應用價值
  - 適用於建構複雜的代理系統或工具鏈,尤其適合需要跨工具操作的場景。
  - 強調其「非魔法」的務實定位:提升效率,而非解決自主性等核心 AI 難題。
- 生態發展與資源
  - 提及 MCP 的易用性(如與 Cursor 整合)及現有實踐案例(如 LangChain 適配庫),並分享相關連結供讀者探索。
總結:文章核心在於闡述 MCP 作為「工具交互的標準化協議」的定位、優勢與限制,並探討其在 AI 開發中的實際應用潛力。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp14wv/mcp_excited_for_the_future/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp14wv/mcp_excited_for_the_future/
- 發布時間: 2025-04-02 01:17:32
內容
I've been messing with MCP lately. The Model Context Protocol. If you've been anywhere near the AI/dev world recently (YouTube exploded with MCP videos lately), you've probably seen it hyped as the future of how models will use tools and work with external systems. Sometimes it gets bundled in with agent stuff, like it's the thing that's going to make LLMs autonomous. That's not quite right, but it's also not totally wrong.
MCP isn't an agent framework. It doesn't make decisions or plan or think ahead. What it does do is give you a consistent way to describe tools and let other systems, like orchestrators or agents, interact with those tools through a shared format. You define what your tool does, what inputs it expects, and what comes back. Then you run an MCP server that exposes that in a standard way.
So no, models don't use MCP out of the box. You still have to do the orchestration, the prompting, the context handling. But once you have that, MCP helps clean up the mess. It makes your tool layer way easier to manage and makes it more realistic for agents to work across tools without you hardcoding every interaction.
It's not magic. It's infrastructure. But it's solid infrastructure. If you're building serious agentic systems or toolchains, MCP is probably worth your time. Just don't expect it to fix the hard parts like reasoning or autonomy, that's still on you.
Anyways, if you haven't tried it out yet, go ahead and do so. It's pretty easy to set up with Cursor and is a cool way to augment some of the current LLM capabilities. I have seen some pretty cool MCP servers out there so far and am excited to see where the tech goes as more companies adopt it.
Check some out here (I don't own any of these) (also be careful running and importing random code lol):
https://mcp.so/
https://glama.ai/
https://www.pulsemcp.com/servers
https://mcpservers.org/
https://github.com/langchain-ai/langchain-mcp-adapters
^ This library provides a lightweight wrapper that makes Model Context Protocol (MCP) tools compatible with LangChain and LangGraph.
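為了讓貼文中「定義工具的輸入與輸出、再由 MCP 伺服器以標準格式公開」的描述更具體,以下附上一個最小示意,假設使用官方 MCP Python SDK 的 FastMCP 介面;工具名稱與邏輯皆為虛構示範,非原文內容:

```python
# 最小 MCP 伺服器示意(假設已安裝官方 MCP Python SDK:pip install mcp)。
# 模型不會直接執行這段程式;支援 MCP 的客戶端(如 Claude Desktop、Cursor)
# 連上伺服器後,會把工具描述提供給模型,協調與提示仍由開發者負責。
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")  # 伺服器名稱為示範用途

@mcp.tool()
def word_count(text: str) -> int:
    """計算一段文字中以空白分隔的字數(虛構示範工具)。"""
    return len(text.split())

if __name__ == "__main__":
    # 預設以 stdio 傳輸執行,讓 MCP 客戶端可以註冊並呼叫上面的工具
    mcp.run()
```

客戶端連上後即可列出並呼叫 word_count 之類的工具;如原文所述,協調、提示與上下文管理仍由開發者自行處理。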
討論
評論 1:
Founder of Glama. How can I improve how we present information about MCPs today?
22. how to be a generative ai robopsychologist like Dr. Susan Calvin (hint: it is not with superficial prompting techniques )
这篇文章的核心讨论主题是如何像艾萨克·阿西莫夫(Isaac Asimov)笔下的机器人心理学家苏珊·卡尔文(Dr. Susan Calvin)一样,运用战略同理心(strategic empathy)和深度洞察力来理解和应对生成式人工智能(Generative AI)的行为,而非仅仅依赖提示工程(prompt engineering)。
具体要点包括:
- 苏珊·卡尔文的角色启发:她通过科学分析和同理心调解机器人与人类的冲突,这种能力在理解AI的局限性和内在逻辑时同样重要。
- 批判现有方法的不足:作者认为提示工程过于表面,而真正的“AI素养”(AI Literacy)需要更深入的技能,如战略同理心、主动辨别力(active discernment)和系统性分析。
- 实践AI素养的练习:作者尝试通过生成式AI工具(如Gemini和Claude)设计练习,以培养综合性的AI理解能力,同时指出当前工具的局限性(如Claude处理某些问题的失败案例)。
- 对AI教育的调侃:结尾调侃“学会编程再使用AI”的简单化建议,暗示真正的AI应用需要更复杂的人文与科学结合的能力。
整体而言,文章主张以苏珊·卡尔文为典范,推动超越技术表层、结合心理学与伦理学的生成式AI交互方式。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jot81i/how_to_be_a_generative_ai_robopsychologist_like/
- 外部連結: https://www.reddit.com/gallery/1jot81i
- 發布時間: 2025-04-01 19:26:28
內容
I made this, so all complaints and insults should be directed at me.
More context around the artifacts:
Dr. Susan Calvin is the eminent robopsychologist in the Isaac Asimov Robot book series.
She frequently pops up in stories whenever a robot behaves abnormally with respect to the Three Laws of Robotics, and takes the active role of mediator (or soothsayer) between the robot and the impacted humans to resolve the situation. Her investigations and discoveries use Sherlock Holmes-ian detective work mixed with empirical scientific analysis, which always makes for a compelling story.
Why do I want to be like Susan Calvin when it comes to Generative AI use? I am inspired by her skillful use of strategic empathy in order to better understand robot behavior via perspective taking and sometimes with empathic concern. This helps her (and the reader) to better understand the inner workings, and more importantly, the limitations of the robots in the stories through active discernment.
The whole rise of prompt engineering is cute, but it does not provide a blueprint on how to become more like Dr. Calvin, who is known for much deeper insights. True AI Literacy skills as shown below are much more of what is needed.
I asked Gemini to provide some exercises to deliberately practice all of these AI Literacy skills at once. Here are the exercises visualized via Claude.
Note: the overlaid text in the first exercise is a failure mode that proved intractable to fix with Claude via revisions. I included it to highlight a failure mode that seems trivial but is, at least for now, very difficult for Claude.
https://dmantena.substack.com/p/how-to-be-a-generative-ai-robopsychologist
討論
評論 1:
LOL, just learn to code then use AI, easiest thing in the world.
23. Claude Suddenly Can't Summarize Previous Chats by UUID?
這篇文章的核心討論主題是:用戶對Claude突然無法存取過往對話記錄(導致關鍵功能失效)的不滿與困擾,並延伸探討以下重點:
- 功能無預警變更的挫折:用戶原本依賴Claude的「對話UUID引用+自動摘要」功能來管理專案,此工作流程突然失效(Claude聲稱無法存取歷史對話),卻未獲官方公告或解釋,導致其專案管理效率嚴重下降。
- 技術服務透明度的質疑:用戶批評Anthropic(Claude開發公司)隨意撤回功能且缺乏溝通,認為企業不應默認用戶必須被動適應變更,並點出「科技公司gaslighting(煤氣燈效應)用戶」的現象——即系統出錯時AI先要求用戶自行摘要,後又推諉無法存取資料,造成用戶自我懷疑。
- 替代方案的實務困境:功能失效後,用戶被迫改用「離散任務分拆多對話」的權宜之計,但這使決策追蹤與文件統整變得破碎,專案管理時間成本倍增,凸顯AI工具對複雜工作流程的關鍵影響。
- 付費服務的期望落差:作為月費訂閱者,用戶對功能穩定性的合理期待落空,反映部分AI產品仍在「服務條款模糊」與「用戶依賴風險」間的矛盾。文末雖自嘲抱怨("Cry me a river"),仍強調工具不可靠性迫使人工補位,阻礙專案推進。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp5xck/claude_suddenly_cant_summarize_previous_chats_by/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp5xck/claude_suddenly_cant_summarize_previous_chats_by/
- 發布時間: 2025-04-02 04:27:37
內容
When I started working on my project, I asked Claude how best to use Claude to stay organized, and it helped me develop a project documentation plan. Core to this effort was Claude summarizing our chats in a specific way. When I asked Claude how to maintain that effort if I were to run into the chat length limit before prompting a given chat for a summary, it told me to reference the UUID (the number after chat/ in the URL) and the next assistant would be able to summarize the chat using that reference and my chat summary prompt. This has been working great for the last 5 weeks.
However, starting this weekend, when I ask Claude to do this, it asks me to first summarize what the chat was about. Me: Huh? That's your job.
When I gave it a five-word summary (i.e. fixing ongoing form element components), it spat out a random summary that had nothing to do with what any of my chats had been about. Me: WTF is this?
So I told Claude that I had no idea what it was referring to, and asked why it generated randomness. Claude apologized, then told me it's not able to access any of my past chats, so it couldn't help me with this task. Me: Since when?!?
Claude: I can understand you're frustrated and disappointed... Me:
I'm a month-to-month subscriber, and for the most part I appreciate how helpful Claude has been for a variety of tasks, but I'm just flummoxed as to how Anthropic continues to claw back features at random and without announcement and expects users to just roll with it. Lacking this feature, I'm now constantly starting new chats to do very discrete tasks so I can avoid running into the context length/chat limit, but it's making it more difficult to maintain a cogent summary of decisions and changes made in a work session, at least doubling the time I'm spending on project management.
Cry me a river, I know. I'm just bummed because I thought I'd figured out the perfect AI-driven project management workflow, and now I've got to fill the void by stepping up my own project management skills if I ever want to get my project completed. Plus, I hate when tech tries to gaslight me.
討論
評論 1:
When making a complaint, please
- make sure you have chosen the correct flair for the Claude environment that you are using: i.e Web interface (FREE), Web interface (PAID), or Claude API. This information helps others understand your particular situation.
- try to include as much information as possible (e.g. prompt and output) so that people can understand the source of your complaint.
- be aware that even with the same environment and inputs, others might have very different outcomes due to Anthropic's testing regime.
- be sure to thumbs down unsatisfactory Claude output on Claude.ai. Anthropic representatives tell us they monitor this data regularly.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
評論 2:
Could you ask Claude to do a summary of your post? Soooooo many words
24. Similar but better options?
以下是对您提供的文本的核心讨论主题的总结:
- 寻找与Claude智能水平相当的AI:用户询问是否存在其他AI系统在智能程度上能与Claude匹敌,特别是在创意写作的审阅和修订方面。
- 对现有AI限制的不满:用户表达了对当前AI(可能指Claude或其他类似系统)在对话长度和消息数量上的限制日益感到不满,希望找到不受此类限制的替代方案。
- 专注于创意写作辅助功能:讨论特别关注AI在创意写作领域的应用,强调对文本修订、改进和反馈能力的需求。
总结来说,核心主题是:用户寻求一款在创意写作辅助方面与Claude智能水平相当,同时没有严格对话限制的替代AI工具。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp3khj/similar_but_better_options/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp3khj/similar_but_better_options/
- 發布時間: 2025-04-02 02:54:04
內容
Is there another AI that is as intelligent as Claude, especially when it comes to helping review/revise creative writing, without the increasingly annoying limits on conversation lengths and messages?
討論
評論 1:
Gemini pro 2.5. You can try it one month free as well.
25. I think leaving search ON makes it way dumber.
這篇文章的核心討論主題是:測試關閉網路搜尋功能對Claude AI表現的影響。
作者提到,他聽說開啟網路搜尋可能會降低Claude的表現,因此決定嘗試關閉該功能,並觀察結果,後續將更新測試情況。這反映了對Claude AI在不同設定下(尤其是網路搜尋開啟與否)的效能差異的探討。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp2288/i_think_leaving_search_on_makes_it_way_dumber/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp2288/i_think_leaving_search_on_makes_it_way_dumber/
- 發布時間: 2025-04-02 01:54:49
內容
I'm going to try turning off the web search. I heard someone say it makes Claude perform way worse when you leave it on. Will update.
討論
評論 1:
When making a complaint, please
- make sure you have chosen the correct flair for the Claude environment that you are using: i.e Web interface (FREE), Web interface (PAID), or Claude API. This information helps others understand your particular situation.
- try to include as much information as possible (e.g. prompt and output) so that people can understand the source of your complaint.
- be aware that even with the same environment and inputs, others might have very different outcomes due to Anthropic's testing regime.
- be sure to thumbs down unsatisfactory Claude output on Claude.ai. Anthropic representatives tell us they monitor this data regularly.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
評論 2:
The massive wave of complaints when they introduced Web Search suggests you may be right.
I mean people literally always complain about how Claude/ChatGPT/anything "seems dumber lately", and usually it means absolutely nothing. But when you suddenly see a 10x spike, something probably did happen.
There's actually a decently founded technical basis for it. For most models, performance drops significantly at higher token counts, and it doesn't take a whole lot. The Search feature adds 8K+ tokens. EIGHT THOUSAND. Artifacts, another 8K. With everything on, you're looking at a 20K+ token baseline in every request before you've said a single word, and most of it will have nothing to do with what you're about to ask.
評論 3:
Well? What's your findings?
26. Figma MCP : AI for Figma
由於無法直接取得 YouTube 影片內容,此篇無法提供詳細摘要;從標題「Figma MCP : AI for Figma」推測,影片主題為 Figma 與 MCP 整合的 AI 功能展示。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1joneca/figma_mcp_ai_for_figma/
- 外部連結: https://youtu.be/3nYDUqlA13s?si=YA3PyCl75aMTHmb2
- 發布時間: 2025-04-01 12:38:37
內容
連結: https://youtu.be/3nYDUqlA13s?si=YA3PyCl75aMTHmb2
討論
無討論內容
27. i love that i can use a MCP with RStudio, load my data in, and ask claude to run an analysis for me while I do something else - then come back 3 minutes later with 500-1k lines of code written and plots made for me to review. MCP is a godsend
這篇文章的核心討論主題圍繞以下三個問題:
- Claude與RStudio的整合應用:探討如何在RStudio中使用Claude(可能是AI工具或程式庫),並尋求相關經驗分享。
- MCP的定義與解釋:詢問「MCP」的具體含義,可能涉及某種技術、方法論或工具縮寫。
- MCP的用途或重要性:進一步釐清MCP的功能或應用場景,顯示對該術語的深入理解需求。
整體而言,文章聚焦於技術工具(Claude、RStudio)的實際操作與特定術語(MCP)的釐清,屬於技術應用與概念解釋的討論。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jokmct/i_love_that_i_can_use_a_mcp_with_rstudio_load_my/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jokmct/i_love_that_i_can_use_a_mcp_with_rstudio_load_my/
- 發布時間: 2025-04-01 10:04:39
內容
討論
評論 1:
How do you use Claude with RStudio? Mind to share it?
評論 2:
What's MCP?
評論 3:
Whats the MCP?
28. Claude Code API Error: Connection error.
這段討論的核心主題是:
使用者在VS Code的WSL環境中遇到Claude API的異常問題,具體包括:
- 間歇性功能故障:API時好時壞,需頻繁手動輸入「continue」才能繼續。
- 費用爭議:即使出現API錯誤(如請求失敗),系統仍扣減Anthropic的信用額度(10次「continue」請求被收費$0.11)。
- 尋求解決方案:使用者希望釐清問題原因並找到修復方法,可能涉及WSL配置、API連線穩定性或計費機制異常。
(附圖連結可能顯示錯誤畫面或扣費記錄,但未提供具體內容。)
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp6b24/claude_code_api_error_connection_error/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp6b24/claude_code_api_error_connection_error/
- 發布時間: 2025-04-02 04:42:57
內容
Is anybody else having this issue? It just started happening 2 days ago. I am using WSL in VS Code. How can I resolve this? It will sometimes work, but sometimes not. I have to constantly ask it to continue, and even though I am getting an API Error it is deducting from my Anthropic credit balance. 10 of these "please continue" requests cost me $0.11.
https://preview.redd.it/h16qscraease1.png?width=334&format=png&auto=webp&s=02c351e3f5ff15c0dfec8734b4f4bf4d8abea8d0
討論
無討論內容
29. Claude 3.7 vs 3.5 Sonnet Compared: What's new?
這篇文章的核心討論主題是 Claude 3.7 Sonnet 相較於前一代 3.5 Sonnet 的顯著性能提升,具體聚焦於以下面向:
- 能力突破
  - 數學能力:處理競賽級數學題(如 AIME)的準確率從 16% 躍升至 61%。
  - 編程能力:成功率從 49% 提升至 62.3%。
  - 高階推理:研究生級別問題的準確率從 65% 進步到 78.2%。
- 用戶體驗改進
  - 減少無謂拒絕合理請求的情況(降低 45%),同時維持安全邊界。
  - 新增「思考過程」可視化功能,讓用戶觀察模型逐步解題的邏輯。
- 整體評價:作者對 Claude 3.7 的升級表示驚豔,尤其強調其在專業領域的實用性提升與互動流暢度改善,並提供詳細分析連結供讀者進一步參考。
(註:原文連結因格式問題無法正常顯示,但推測應為同一篇 Medium 文章。)
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp0sfx/claude_37_vs_35_sonnet_compared_whats_new/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp0sfx/claude_37_vs_35_sonnet_compared_whats_new/
- 發布時間: 2025-04-02 01:03:39
內容
Just finished my detailed comparison of Claude 3.7 vs 3.5 Sonnet and I have to say... I'm genuinely impressed.
The biggest surprise? Math skills. This thing can now handle competition-level problems that the previous version completely failed at. We're talking a jump from 16% to 61% accuracy on AIME problems (if you remember those brutal math competitions from high school).
Coding success increased from 49% to 62.3% and Graduate-level reasoning jumped from 65% to 78.2% accuracy.
What you'll probably notice day-to-day though is it's much less frustrating to use. It's 45% less likely to unnecessarily refuse reasonable requests while still maintaining good safety boundaries.
My favorite new feature has to be seeing its "thinking" process - it's fascinating to watch how it works through problems step by step.
Check out this full breakdown.
討論
評論 1:
I wonder if this sub agrees with your assessment. Real-life scenarios seem to show a different reality; at least as perceived here, a lot of people consider 3.5 more human-like and prefer it for that.
評論 2:
I always liked Claude's answers more than other LLMs, but I canceled my subscription due to usage limitations. It's been about 6 months and I haven't tested 3.7. However, I see people here on Reddit saying that it has gotten worse in some aspects and that the limit problems are still constant. Maybe I'll sign up for 1 month to test it.
評論 3:
You posted this in 37 subs! Bruh
評論 4:
Is this a bot account?
30. Using Claude to Help with Documentary Film Editing & CSV Metadata
這篇文章的核心討論主題是:
如何有效地利用AI工具(如Claude)處理從DaVinci Resolve導出的CSV格式元數據,以輔助紀錄片的文本剪輯(text-based editing)和故事結構建構。
具體要點包括:
- 挑戰:作者在使用Claude解析DaVinci Resolve的CSV元數據時遇到不穩定的問題(如反覆讀取失敗)。
- 建議的工作流程:
- 分批次導出CSV(如按訪談對象或素材類型分類)。
- 提供具體的指令(如分析主題、建議故事結構、找出遺漏的關聯)。
- 採用迭代式互動(先總結數據,再深入分析)。
- 記錄AI生成的洞察以便後續剪輯參考。
- 替代方案:討論是否應改用ChatGPT(因其對CSV的處理可能更穩定)。
- 求助社群經驗:徵求其他用戶的實用技巧或CSV格式化建議,以優化AI協作效率。
總結:問題聚焦於「AI工具與剪輯軟體元數據的整合應用」,並尋求提升工作流程效率的具體方法。
- Reddit 連結: https://reddit.com/r/ClaudeAI/comments/1jp0eva/using_claude_to_help_with_documentary_film/
- 外部連結: https://www.reddit.com/r/ClaudeAI/comments/1jp0eva/using_claude_to_help_with_documentary_film/
- 發布時間: 2025-04-02 00:48:36
內容
Hi everyone! I'm a documentary filmmaker currently working on a project in DaVinci Resolve Studio. I've been meticulously adding metadata to about 10 hours of footage (interviews and b-roll), creating subclips of my interviews with detailed descriptions, shot types, and keywords.
Now I want to do a text-based edit to build my story structure before jumping into the actual editing program. I'm hoping to use Claude to assist with this process since I have all this metadata in CSV format.
My question: Has anyone successfully used Claude with CSV files from DaVinci Resolve? I've had mixed results - sometimes it works, sometimes it doesn't. Claude seems to run through several iterations trying to read the data.
Claude's response when I asked for advice:
Claude suggested the following workflow:
- Export targeted CSVs rather than all metadata at once (separate interviews from b-roll, maybe separate by interview subject)
- Be specific with requests (finding thematic connections, suggesting story structures, identifying gaps)
- Use an iterative approach - first ask Claude to summarize the data, then request specific analyses
- Document insights separately to reference during editing
Claude also suggested these example prompts after uploading a CSV:
- "Analyze this CSV and identify main themes across interviews"
- "Based on these clip descriptions, what story structure might work best?"
- "Help identify connections between interview segments I might have missed"
- "Which segments would work well for the documentary opening?"
Has anyone here developed an effective workflow using Claude with CSV metadata from editing software? Any tips for formatting the CSV exports to work better with Claude? Or should I just switch to ChatGPT, which seems to handle CSVs more consistently?
Any advice appreciated!
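針對上文「分批匯出、縮小 CSV 範圍」的作法,以下是一個以 pandas 預先切分中繼資料 CSV 的最小示意;檔名與欄位名稱(如 Interview Subject、Clip Name、Description、Keywords)皆為假設,需依實際的 DaVinci Resolve 匯出欄位調整:

```python
# 以 pandas 將完整的中繼資料 CSV 依訪談對象切成較小、較聚焦的檔案,
# 並只保留與故事結構相關的欄位,再分批提供給 Claude 分析。
# 注意:檔名與欄位名稱皆為假設,請依實際匯出結果調整。
import pandas as pd

SOURCE_CSV = "resolve_metadata_export.csv"
KEEP_COLUMNS = ["Clip Name", "Description", "Keywords"]

df = pd.read_csv(SOURCE_CSV)

for subject, group in df.groupby("Interview Subject"):
    out_path = f"metadata_{str(subject).replace(' ', '_').lower()}.csv"
    group[KEEP_COLUMNS].to_csv(out_path, index=False)
    print(f"{out_path}: {len(group)} 筆片段")
```

切分後的小檔案較不容易超出上下文長度,也方便針對單一訪談對象提出更聚焦的分析問題。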
討論
評論 1:
Hey, cool use case! Using LLMs for text-based editing based on metadata is a smart approach. The inconsistency you're seeing with direct CSV uploads isn't uncommon, as LLMs can struggle with large/complex tables.
While preprocessing the CSV helps (as Claude suggested - smaller exports, specific columns), you might find a more direct workflow using an AI assistant that integrates with DaVinci Resolve via the Model Context Protocol (MCP).
There's actually a community-built DaVinci Resolve MCP server available: https://github.com/samuelgursky/davinci-resolve-mcp
If you use an assistant that supports MCP (like Cline), you could add this server as a tool and then use AI to work with DaVinci Resolve directly. Then, instead of exporting/uploading CSVs, you could directly ask things like:
- "List all clips in the 'Interview A' bin with their descriptions and keywords."
- "Find markers related to 'childhood memory' on the main timeline."
- "Based on the descriptions of clips in timeline 'Rough Cut 1', suggest thematic connections."
This avoids the CSV parsing issues entirely and lets the AI query Resolve directly for the metadata it needs to help structure your story. Might be worth exploring if you continue hitting roadblocks with CSVs! Good luck with the edit!