2025-04-01-rising
- 精選方式: RISING
討論重點
以下是根據提供的25篇文章整理的重點摘要,以條列方式呈現核心討論主題及逐條細節,並附上對應的文章錨點連結:
1. Roo Code 3.11.0 Release Notes
核心主題:Roo Code 的功能更新與效能改進
- 效能優化
- 新增「快速編輯」功能,僅修改必要代碼行而非重寫整個檔案。
- API 資源管理
- 支援 OpenRouter 等 API 金鑰餘額查詢。
- 專案層級配置
- 透過 .roo/mcp.json 自訂 MCP 伺服器設定,覆蓋全域配置。
- 模型強化
- 改進 Gemini API 錯誤處理,新增 Gemini 2.5 Pro 支援。
- 設定管理工具
- 新增設定檔匯入/匯出功能。
2. 自動化解決編程測驗的工具
核心主題:反抗技術招聘中的限時編程測驗
- 批評測驗制度
- 短時間高難度題目、禁止查資料等要求不合理。
- 開發自動化工具
- 使用 OpenAI API 解析題目並生成答案,規避監控。
- 開源與社群號召
- 計畫開源工具,反映對招聘文化的不滿。
3. Google 的問題討論
核心主題:圖片內容未提供,推測可能涉及:
- 迷因或社會議題調侃。
- 科技或服務批評(如搜尋引擎品質下降)。
4. AI 工具的局限性與風險
核心主題:AI 在技術問題解決中的潛在誤導
- 缺乏真實理解
- 以 Claude 3.7 為例,生成矛盾建議(如錯誤的 window.location 用法)。
- 過度依賴風險
- 使用者可能被「自信但錯誤」的回應誤導。
- 與人類協作對比
- AI 無法主動釐清問題脈絡。
5. 大型程式碼庫分析工具需求
核心主題:徵求能分析大型程式碼庫、建立知識圖譜並回答問題的 AI 工具;貼文以 FoundationDB 日誌提交的原子性疑問作為範例提問
- 多線程風險
- 變量 location 雙重遞增導致數據不一致。
- 提交仲裁機制
- 依賴 tLogWriteAntiQuorum 確保跨節點一致性。
6. Swebench 排行榜新冠軍
核心主題:AI 自動修復真實 GitHub 問題的評估基準
- 測試框架
- 衡量 LLMs 在 Django 等專案中的修復能力。
- 開源維護
- 透過 AI 輔助處理 Issue 和 PR。
(因篇幅限制,以下為簡要條列,完整細節請參考原文錨點連結。)
7-25. 其他文章重點摘要
- Gemini 2.5 速率限制困惑:用戶遭遇突然的 API 限制,質疑配額透明度。
- Gemini 2.5 開發工具選擇:討論 Roo、Cursor 等 IDE 的相容性問題。
- AI 編程實戰手冊:分享 Copilot 整合與除錯技巧。
- AI 應用的安全指南:未提供具體內容。
- 開源工具 Dyad 的應用:幫助數據科學家遵循工程標準。
- Trae Coder 的穩定性問題:React 專案頻繁崩潰。
- Aider 0.80.0 更新:OpenRouter 整合與模型自動選擇。
- 非開發者的後端解決方案:推薦 Supabase 等低代碼工具。
- AI 的矛盾影響:對比「隨意編碼」帶來的技術債風險與 AI 理想中精準適配需求的潛力。
文章核心重點
以下是各篇文章的一句話摘要(條列式輸出):
- Roo Code 3.11.0 Release Notes:Roo Code 3.11.0 推出專案層級 MCP 設定、快速編輯功能,並強化 Gemini 模型支援與 API 管理工具。
- 自動解題工具抗議招聘測驗:作者批評技術招聘的編程測驗不切實際,計畫開發開源工具利用 AI 自動解題以反抗此制度。
- Google 問題討論(圖片內容未知):因內容為圖片無法分析,可能涉及 Google 相關迷因、社會議題或科技新聞。
- AI 工具的技術風險警示:實例指出 Claude 等 AI 在解決複雜問題時可能提供矛盾建議,強調需結合領域知識驗證輸出。
- 大型程式碼庫分析工具需求:詢問是否有 AI 工具能分析龐大程式庫並建立知識圖譜以回答技術問題。
- SWE-Bench 評比榜首工具探討:討論最新登上 SWE-Bench 榜首的 AI 工具,評估其自動修復真實 GitHub 議題的能力。
- Gemini 2.5 速率限制問題:用戶反映 Gemini 2.5 Pro 無預警觸發速率限制,對配額機制透明度提出質疑。
- Gemini 開發工具選擇:使用者比較 Roo、Cursor 等 IDE 插件對 Gemini 長上下文的支援效能與成本效益。
- AI 編程實戰手冊:分享透過 AI 工具加速開發的戰術,包含代碼生成、除錯與避免生產環境事故的經驗。
- AI 編程導師測試計畫:提供免費 LLM 額度以測試 AI 編程導師工具 Dyad,協助非開發者遵循工程最佳實踐。
- Trae Coder 工具使用困境:初學者反映 Trae Coder 在 React 專案中頻繁出錯,尋求框架支援與除錯的解決方案。
- Aider 0.80.0 新增 OpenRouter 整合:新版 Aider 簡化 OpenRouter 登入流程,優化模型選擇與終端互動功能。
- 非開發者的後端 AI 解決方案:探討如何讓前端開發者透過 AI 與低代碼工具(如 Supabase)快速實現後端邏輯。
- AI 編程的雙面性爭論:對比 AI 導致「隨意編碼」的技術債風險與其理想中精準適配需求的潛力。
- 大型程式庫上下文管理工具:介紹 Prompt Tower 工具如何動態篩選相關代碼上下文,提升 Gemini 等模型的處理效率。
- AI 代碼生成穩定性質疑:分析 AI 工具因上下文記憶缺陷導致迭代修改失效的問題,呼籲正視技術限制。
- 自動化瀏覽器工具評比:探討 browser-use 專案是否為當前最強大的瀏覽器自動化解決方案。
- 繞過 Gemini 速率限制技巧:(內容不足)可能討論如何規避 Gemini 2.5 Pro 的 API 呼叫限制。
- ChatGPT 爬蟲的潛在危害:批評大量 AI 生成爬蟲消耗網站資源,建議建立公共資料庫替代重複爬取。
- AI 社群媒體工具爭議:警告低品質 AI 生成內容(slopposting)正快速破壞網路生態。
- MCP 創建工具開發分享:開發者公開自製工具用於自動生成 MCP(Model Context Protocol)伺服器。
- AI 編程的學習曲線詢問:(僅提問)徵求使用 Claude 等工具開發完整應用的經驗與時程。
- 本地 LLM 的上下文控制策略:討論手動篩選代碼上下文如何提升本地模型在開發工作流的準確性。
- Claude 輔助 Spring Boot 遷移:實例說明如何利用 Claude Code 自動化 Spring Boot 2.x 至 3.x 的遷移任務。
目錄
- 1. Roo Code 3.11.0 Release Notes - Project Level MCP Config, Fast Edits and MOREEEEEEE.....
- 2. I'm writing a free program that will silently solve a coding assessment challenge for a job application
- 3. What's wrong with Google?
- 4. These tools will lead you right off a cliff, because you will lead yourself off a cliff.
- 5. is there any AI tool that can analyze big code base and build knowledge graph and answer questions
- 6. new #1 on SWE-Bench Leaderboard. Anyone tried them?
- 7. Gemini 2.5 Rate Limiting
- 8. Y'all who are raving about Gemini 2.5 Pro - which IDE / plugin are you using? Aider, Cline, Roo, Cursor, etc
- 9. My AI coding playbook: Tactics I've learned after taking down production sites
- 10. 10 Min video on how to secure you AI vibe-coded app
- 11. Free LLM credits for beta testing AI coding mentor
- 12. How to effectively use AI coders? (Common Mistakes) (Trae)
- 13. Aider v0.80.0 is out with easy OpenRouter on-boarding
- 14. Best way for non-developers to code the backend with AI for a frontend I built on V0?
- 15. Polio, Bloatware and Vibe Coding
- 16. Tool for managing large codebase context
- 17. My theory about why AI both sucks and is great for code generation
- 18. What is the latest and greatest for autonomous computer use?
- 19. What is the trick for getting past the Gemini 2.5 pro rate limits right now?
- 20. Is everyone building web scrapers with ChatGPT coding and what's the potential harm?
- 21. I created an AI-powered social media tool
- 22. I created a tool to create MCPs
- 23. People who can actually code, how long did it take you to build a fully functional, secure app with Claude or other AI tools?
- 24. Context control for local LLMs: How do you handle coding workflows?
- 25. Migrating a Spring Boot 2.x project using Claude Code - Claude Code: a new approach for AI-assisted coding
1. Roo Code 3.11.0 Release Notes - Project Level MCP Config, Fast Edits and MOREEEEEEE.....
這篇文章的核心討論主題是 Roo Code 的最新功能更新與改進,具體涵蓋以下重點:
- 效能優化:「快速編輯」(Fast Edits)功能提升修改程式碼的效率,僅變更必要行數而非重寫整個檔案。
- API 資源管理:新增 API 金鑰餘額查詢功能(如 OpenRouter、Requesty),方便監控使用量。
- 專案設定彈性化:支援透過 .roo/mcp.json 檔案設定專案層級的 MCP 伺服器配置,覆蓋全域設定。
- 模型支援強化:改進 Gemini API 的錯誤重試邏輯、字元跳脫處理,並新增 Gemini 2.5 Pro 模型支援。
- 設定管理工具:新增設定檔匯入/匯出功能,便於備份或共享配置。
- 使用者體驗改進:API 設定檔釘選與排序、可直接編輯建議答案、修復多項工具互動問題等。
總結:文章主要介紹 Roo Code 在開發效率、API 整合、設定管理及使用者操作上的多項更新,目標是提升開發者的工作流順暢度與工具彈性。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1joi2n1/roo_code_3110_release_notes_project_level_mcp/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1joi2n1/roo_code_3110_release_notes_project_level_mcp/
- 發布時間: 2025-04-01 07:58:04
內容
For comprehensive details and previous release notes, visit the Roo Code Docs Update Notes.
⚡ Fast Edits
- Applying edits, especially multiple changes, is now significantly faster by modifying only necessary lines instead of rewriting the whole file. This speeds up iterative development and helps prevent issues on large files. Learn more: Fast Edits Documentation
💰 API Key Balances
- Conveniently check your current credit balance for OpenRouter and Requesty directly within the Roo Code API provider settings to monitor usage without leaving the editor.
📁 Project-Level MCP Config
- Configure MCP servers specifically for your project using a .roo/mcp.json file, overriding global settings. Manage this file directly from the MCP settings view. (thanks aheizi!) Learn more: Editing MCP Settings Files
🧠 Improved Gemini Support
- Smarter Retry Logic: Intelligently handles transient Gemini API issues (like rate limits) with precise retry timing and exponential backoff.
- Improved Character Escaping: Resolved issues with character escaping for more accurate code generation, especially with special characters and complex JSON.
- Gemini 2.5 Pro Support: Added support for the Gemini 2.5 Pro model via GCP Vertex AI provider configuration. (thanks nbihan-mediware!)
💾 Import/Export Settings
- Export your Roo Code settings (API Profiles, Global Settings) to a roo-code-settings.json file for backup or sharing, and import settings from such a file to merge configurations. Find options in the main Roo Code settings view. Learn more: Import/Export/Reset Settings
📌 Pin and Sort API Profiles
- Pin your favorite API profiles to the top and sort the list for quicker access in the settings dropdown. (thanks jwcraig!) Learn more: Pinning and Sorting Profiles
✏️ Editable Suggested Answers
- Suggested answers from Roo's follow-up questions (ask_followup_question tool) can now be edited directly in the chat before accepting. (thanks samhvw8!) Learn more: Interacting with Suggestions
🔧 General Improvements and Bug Fixes
- Numerous other enhancements and fixes have been implemented, including improvements to partial file reads, tool-calling logic, the "Add to Context" action, browser tool interactions, and more. See the full list here: General Improvements and Bug Fixes (Thanks KJ7LNW, diarmidmackenzie, bramburn, samhvw8, gtaylor, afshawnlotfi, snoyiatk, and others!)
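下面以一小段 Python 示意「專案層級設定覆蓋全域設定」的合併概念(純屬概念示意,並非 Roo Code 的實際實作;檔名沿用文中的 .roo/mcp.json,全域設定路徑為假設):

```python
import json
from pathlib import Path

def load_mcp_config(project_root: str) -> dict:
    """概念示意:讀取全域與專案層級的 mcp.json,後者覆蓋前者(非 Roo Code 實際實作)。"""
    global_path = Path.home() / ".roo" / "mcp.json"           # 假設的全域設定位置
    project_path = Path(project_root) / ".roo" / "mcp.json"   # 文中提到的專案層級設定

    merged: dict = {}
    for path in (global_path, project_path):   # 後載入的專案層級設定覆蓋全域設定
        if path.exists():
            merged.update(json.loads(path.read_text(encoding="utf-8")))
    return merged

if __name__ == "__main__":
    print(load_mcp_config("."))
```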
討論
評論 1:
RooCode is the BEST!
評論 2:
Can’t believe I didn’t ditch cursor before.
評論 3:
This is the way
評論 4:
Someone w a good setup share their settings json
2. I'm writing a free program that will silently solve a coding assessment challenge for a job application
這篇文章的核心討論主題是:對技術招聘中限時編程測驗(如HackerRank、LeetCode)的不滿與反抗。
作者具體表達了以下觀點:
- 批評現行測驗制度:認為短時間內完成高難度題目、禁止查閱資料、監控螢幕/鏡頭等要求不合理,且測驗結果可能未被認真審查。
- 個人困境:因失業被迫接受這些測驗,但認為它們無法真實反映求職者的能力。
- 技術性反抗:計劃開發一個工具,利用OpenAI API自動解析題目並生成答案,幫助用戶「作弊」以應付測驗,同時規避監控。
- 號召社群參與:公開徵求意見,並預告將開源工具,暗示對招聘文化的不滿是普遍現象。
本質上,文章是對科技業招聘形式主義的控訴,並提出一種激進的解決方案。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jonjwg/im_writing_a_free_program_that_will_silently/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jonjwg/im_writing_a_free_program_that_will_silently/
- 發布時間: 2025-04-01 12:48:28
內容
Why? Because fuck any job that bases an entire candiates skill level on a 60 minute assessment you have zero chance of completing.
Ok, so some context.
Im unemployed and looking for a job. I got laid off in January and finding work has been tough. I keep getting these hackerrank and leetcode assessments from companies that you have to complete before they even consider you. Problem is, these are timed and nearly impossible to complete in the given timeframe. If you have had to do job hunting you are probably familiar with them. They suck. You cant use any documentation or help to complete them and alot of them record your screen and webcam too.
So, since they want to be controlling when in reality they dont even look at the assessments other than the score, I figure "Well shit, lets make them atleast easy".
So the basics of the program is this. The program will run in the background and not open any windows on the task bar. The user will supply their openAI api key and what language they will be doing the assessment in in a .env file, which will be read in during the booting of the program. Then, after the code question is on screen, the page will be screenshot and sent to chatgpt with a prompt to solve it. That result will be displayed to the user in a window only visible to them and not anyone watching their screen (still working on this part). Then all the user has to do is type the output into the assessment (no copy paste because thats suspicious).
So thats my plan. Ill be releasing the github for it once its done. If anyone has ideas they want to see added or comments, post them below and ill respond when I wake up.
Fuck coding Assessmnents.
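依照上文描述的流程(讀取 .env 中的 API 金鑰與語言、截圖題目、送交模型解題後再顯示結果),以下是一段假設性的 Python 骨架,僅示意整體串接方式;模型名稱、提示詞、環境變數名稱與套件選擇(pyautogui、python-dotenv、openai SDK)皆為假設,並非原作者的實作:

```python
import base64
import io
import os

import pyautogui                    # 截圖(假設選用)
from dotenv import load_dotenv      # 讀取 .env(假設選用 python-dotenv)
from openai import OpenAI

load_dotenv()                                          # 讀入 OPENAI_API_KEY 等變數
client = OpenAI()                                      # 預設從環境變數取得 OPENAI_API_KEY
language = os.getenv("ASSESSMENT_LANGUAGE", "Python")  # 假設性的變數名稱

def solve_from_screenshot() -> str:
    # 1. 截取目前螢幕上的題目
    image = pyautogui.screenshot()
    buf = io.BytesIO()
    image.save(buf, format="PNG")
    b64 = base64.b64encode(buf.getvalue()).decode()

    # 2. 以帶圖片的訊息請模型解題(模型名稱與提示詞為假設)
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": f"Solve this coding problem in {language}. Return code only."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    # 3. 回傳答案,交由呼叫端顯示
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(solve_from_screenshot())
```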
討論
評論 1:
A guy just made some big waves releasing this.
Amazon was pretty pissed that he posted the whole interview online.
評論 2:
Dude just make ChatGPT talk to you since nobody is hearing you.
評論 3:
Ignore the haters. Go crazy with AI on this bullshit.
評論 4:
I hope I do not end up working with someone who used your tool to get hired and then cannot do the work.
評論 5:
You deserve to be unemployed babe
3. What's wrong with Google?
原貼文僅為一張圖片(i.redd.it/45cl058mx5se1.png),沒有文字內文可供分析。依標題「What's wrong with Google?」推測,內容可能是對 Google(例如搜尋品質或相關服務問題)的調侃、迷因或批評,但實際主題無法從現有資訊確認。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jook14/whats_wrong_with_google/
- 外部連結: https://i.redd.it/45cl058mx5se1.png
- 發布時間: 2025-04-01 13:54:38
內容
連結: https://i.redd.it/45cl058mx5se1.png
討論
無討論內容
4. These tools will lead you right off a cliff, because you will lead yourself off a cliff.
這篇文章的核心討論主題是:
「AI工具(如Claude)在技術問題解決上的局限性與潛在風險」,尤其聚焦於以下幾點:
- AI缺乏真正的理解能力:作者以自身使用Claude 3.7解決NextJS與Firebase身份驗證問題的失敗經驗為例,指出AI即使提供看似專業的建議,本質上仍是「無意識的推測」,可能導致循環論證或矛盾的回應(如先質疑cookie設定錯誤,後又撤回建議並回到原點)。
- 過度依賴AI的危險性:當使用者對領域知識不熟悉時,AI「自信但錯誤」的建議可能誤導決策(例如在伺服器端代碼中荒謬地建議使用 window.location)。作者比喻這種互動如同「與橡皮鴨對話」,實際是使用者在自我引導。
- 與人類協作的對比:人類專家會主動釐清問題脈絡、提出批判性思考,而AI僅能基於輸入參數生成表面回應,缺乏深度推理或經驗判斷。作者甚至諷刺:若初級工程師表現如此,應被解僱。
- 技術負債的隱憂:預測未來幾年因濫用AI工具可能導致大量低品質軟體,強調需明確界定AI的適用場景(如輔助生成代碼),但必須搭配領域知識才能安全使用。
- 根本建議:呼籲開發者保持警惕,尤其在不熟悉的領域中,應質疑AI的所有回應,並回歸官方文件或傳統除錯方法。
隱含批判:當前AI工具被過度神化為「智能助手」,但本質仍是高級模式匹配器,使用者需清醒認知其局限性。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jokdbm/these_tools_will_lead_you_right_off_a_cliff/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jokdbm/these_tools_will_lead_you_right_off_a_cliff/
- 發布時間: 2025-04-01 09:52:33
內容
Just another little story about the curious nature of these algorithms and the inherent dangers it means to interact with, and even trust, something "intelligent" that also lacks actual understanding.
I've been working on getting NextJS, Server-Side Auth and Firebase to play well together (retrofitting an existing auth workflow) and ran into an issue with redirects and various auth states across the app that different components were consuming. I admit that while I'm pretty familiar with the Firebase SDK and already had this configured for client-side auth, I am still wrapping my head around server-side (and server component composition patterns).
To assist in troubleshooting, I loaded up all pertinent context to Claude 3.7 Thinking Max, and asked:
It goes on to refactor my endpoint, with the presumption that the session cookie isn't properly set. This seems unlikely, but I went with it, because I'm still learning this type of authentication flow.
Long story short: it didn't work, at all. When it still didn't work, it begins to patch it's existing suggestions, some of which are fairly nonsensical (e.g. placing a window.location redirect in a server-side function). It also backtracks about the session cookie, but now says its basically a race condition:
When I ask what reasoning it had to suggest the my session cookies were not set up correctly, it literally brings me back to square one with my original code:
The lesson here: these tools are always, 100% of the time and without fail, being led by you. If you're coming to them for "guidance", you might as well talk to a rubber duck, because it has the same amount of sentience and understanding! You're guiding it, it will in-turn guide you back within the parameters you provided, and it will likely become entirely circular. They hold no opinions, vindications, experience, or understanding. I was working in a domain that I am not fully comfortable in, and my questions were leading the tool to provide answers that were further leading me astray. Thankfully, I've been debugging code for over a decade, so I have a pretty good sense of when something about the code seems "off".
As I use these tools more, I start to realize that they really cannot be trusted because they are no more "aware" of their responses as a calculator would be when you return a number. Had I been working with a human to debug with me, they would have done any number of things, including asked for more context, sought to understand the problem more, or just worked through the problem critically for some time before making suggestions.
Ironically, if this was a junior dev that was so confidently providing similar suggestions (only to completely undo their suggestions), I'd probably look to replace them, because this type of debugging is rather reckless.
The next few years are going to be a shitshow for tech debt and we're likely to see a wave of really terrible software while we learn to relegate these tools to their proper usages. They're absolutely the best things I've ever used when it comes to being task runners and code generators, but that still requires a tremendous amount of understanding of the field and technology to leverage safely and efficiently.
Anyway, be careful out there. Question every single response you get from these tools, most especially if you're not fully comfortable with the subject matter.
Edit - Oh, and I still haven't fixed the redirect issue (not a single suggestion it provided worked thus far), so the journey continues. Time to go back to the docs, where I probably should have started! 🙄
討論
評論 1:
Well, maybe you should talk to it like a rubber ducky.
https://en.wikipedia.org/wiki/Rubber_duck_debugging
評論 2:
lol, crickets. unsolvable errors kill the VIBE man!
評論 3:
That's a lot of words.
評論 4:
I was four tries in before it refactoring to my original solution the other day. Honestly, I think it may be taking my own code from its context history.
I basically gave it an unsolvable problem to begin with, as it turns out. I refused to give it proper rights, and demanded it set rights. Solution was a limited bash script instead, good ol -x. But because I framed it as "this .py isn't working" it never got there.
評論 5:
Holy shit, it is as if I'm looking at my own code. I am literally fighting this whole day with auth routes being set and redirections to a dashboard if the authentication fails because redirects do not work or cookies are set too early. It is bizarre. Very, very funny.
5. is there any AI tool that can analyze big code base and build knowledge graph and answer questions
核心討論主題總結:
- 貼文的實際提問:徵求能分析大型程式碼庫(文中舉例 ZooKeeper、FoundationDB 這類專案)、建立知識圖譜並回答技術問題的 AI 工具,並以下方 FoundationDB LogServer 提交代碼的疑問作為範例問題。
- 分散式系統日誌服務的可靠性設計:範例問題聚焦於日誌服務(LogServer)的提交(commit)機制,關注數據提交過程的原子性與一致性。
- 代碼中的潛在風險:指出變量 location 在迭代過程中的潛在危險——location 在 if-else 分支與循環末尾均會被遞增,可能導致跳過某些伺服器;若不同位置的 logServers 因 location 偏移而收到不同消息,可能破壞跨節點的數據一致性。
- 分散式協調的關鍵細節:涉及「提交仲裁」(quorum)的實現(如 tLogWriteAntiQuorum 參數),以及透過版本控制(prevVersion、versionSet)與異步確認(Future)確保數據持久化的可靠性。
- 工程實踐的嚴謹性:底層系統編程中,變量作用域、循環邊界條件與併發控制的細微疏忽可能導致嚴重錯誤,需透過靜態檢查或形式化驗證(如 FoundationDB 的模擬測試)來規避。
關鍵問題補充:
代碼中 location 的危險性具體體現為:當 ENABLE_VERSION_VECTOR_TLOG_UNICAST 啟用且 tpcvMap 不包含當前 location 時,location++ 會跳過一個伺服器,導致後續的訊息取用位置錯位,可能使部分日誌伺服器接收錯誤數據,破壞仲裁寫入的正確性。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jomfhe/is_there_any_ai_tool_that_can_analyze_big_code/
- 發布時間: 2025-04-01 11:41:13
內容
The projects in my mind is something like zookeeper, foundationdb,
An example question I would ask about foundationdb LogServer implementation:
code:
for (size_t loc = 0; loc < it->logServers.size(); loc++) {
    Standalone<StringRef> msg = data.getMessages(location);
    data.recordEmptyMessage(location, msg);
    if (SERVER_KNOBS->ENABLE_VERSION_VECTOR_TLOG_UNICAST) {
        if (tpcvMap.get().contains(location)) {
            prevVersion = tpcvMap.get()[location];
        } else {
            location++;
            continue;
        }
    }
    const auto& interface = it->logServers[loc]->get().interf();
    const auto request = TLogCommitRequest(spanContext, msg.arena(), prevVersion, versionSet.version,
                                           versionSet.knownCommittedVersion, versionSet.minKnownCommittedVersion,
                                           seqPrevVersion, msg, tLogCount[logGroupLocal],
                                           tLogLocIds[logGroupLocal], debugID);
    auto tLogReply = recordPushMetrics(it->connectionResetTrackers[loc], it->tlogPushDistTrackers[loc],
                                       interface.address(),
                                       interface.commit.getReply(request, TaskPriority::ProxyTLogCommitReply));
    allReplies.emplace_back(interface.id(), tLogReply);
    Future<Void> commitSuccess = success(tLogReply);
    addActor.get().send(commitSuccess);
    tLogCommitResults.push_back(commitSuccess);
    location++;
}
quorumResults.push_back(quorum(tLogCommitResults, tLogCommitResults.size() - it->tLogWriteAntiQuorum));
logGroupLocal++;
question: this is code of push function for a set of LogServers to save data from CommitProxy. Why variable location seems dangerous without being carefully taken care of? It looks like each server will get different message if the message from each location is different.
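為了更直觀呈現疑慮所在,下面用一段簡化的 Python 範例(假設性示意,並非 FoundationDB 原始碼)重現「迴圈以 loc 走訪本組伺服器、訊息卻以跨組全域游標 location 取用,且 location 在跳過分支與迴圈末尾都會遞增」的模式:

```python
# 簡化示意:messages 依全域位置排列;log_groups 模擬多個 log set;skip 模擬 tpcvMap 不含某個 location 的情況
messages = [f"msg-{i}" for i in range(6)]
log_groups = [["A0", "A1", "A2"], ["B0", "B1", "B2"]]
skip = {1}

location = 0                                     # 全域游標,跨所有 log set 累加
for group in log_groups:
    for loc in range(len(group)):                # loc 只在本組內計數
        msg = messages[location]
        if location in skip:                     # 對應 else { location++; continue; }
            location += 1
            continue
        print(f"server {group[loc]} <- {msg} (location={location})")
        location += 1                            # 對應迴圈末尾的 location++
```

執行後可看到被跳過的伺服器不會收到對應訊息,而後續伺服器取得的訊息完全取決於全域游標;若訊息的準備方式與這種取用模式不一致,就會出現留言中所說「伺服器收到錯誤訊息」的風險。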
討論
評論 1:
I just cloned that repository and asked Aider with Claude your question The response is below. I have no idea if it makes sense :) ... Hope this helps you make a decision.
The comment is asking about a potential issue with the location variable in the push function. Let me analyze this code:
The concern is valid. The variable location is used as an index to retrieve messages from data using data.getMessages(location), but there's a mismatch between the loop variable loc (which iterates through the current log set's servers) and the location variable (which is a global counter across all log sets).
Here's why this is potentially dangerous:
1 location is initialized at the beginning of the function and incremented for each log server across all log sets.
2 Inside the nested loop, we're using loc to iterate through the current log set's servers, but using location to retrieve messages.
3 This means each log server gets a message corresponding to its global position across all log sets, not its position within its own log set.
If the messages in data are intended to be specific to each log server's global position, this is correct. However, if messages should be distributed based on the server's position within its log set,this would be incorrect.
The code appears to be designed this way intentionally - each log server gets a unique message based on its global position. This is reinforced by the fact that location is incremented at the end of the inner loop (location++).
This design requires careful coordination between how messages are stored in data and how they're retrieved here. If the message preparation logic doesn't match this retrieval pattern, it could lead to servers receiving incorrect messages.
6. new #1 on SWE-Bench Leaderboard. Anyone tried them?
根據 Swebench 網站(swebench.com)的內容,其核心討論主題聚焦於 「軟體工程評估基準」(Software Engineering Benchmark),具體涉及以下方向:
- 自動化軟體修復:評估人工智慧(如大型語言模型,LLMs)在自動修復真實 GitHub 問題(如錯誤修正、功能改進)上的能力;提供標準化測試框架,衡量模型在實際程式碼庫(如 Django、scikit-learn)中的表現。
- 開源軟體維護:分析如何透過 AI 輔助解決開源專案中的問題(如 Issue 處理、Pull Request 生成),降低開發者負擔。
- 基準測試與數據集:推出公開數據集(如 SWE-bench),包含真實世界的軟體問題及其對應修復方案,用於訓練和測試 AI 工具。
- 技術挑戰與限制:討論當前 AI 在處理複雜程式碼庫時的局限性(如上下文理解、跨檔案修改),並提出未來改進方向。
總結:Swebench 的核心目標是 推動 AI 在軟體工程領域的實際應用,透過量化評估和開源協作,促進自動化程式維護技術的發展。
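下面以一段概念性的 Python 片段示意 SWE-bench 式評測的基本流程(假設性示意,並非官方評測程式;repo_dir、測試清單等參數皆為示意):

```python
import subprocess

def evaluate_patch(repo_dir: str, model_patch: str, fail_to_pass_tests: list[str]) -> bool:
    """把模型生成的修補套用到真實專案,再執行原本失敗的測試,驗證是否真的修復。"""
    subprocess.run(["git", "apply"], input=model_patch.encode(),
                   cwd=repo_dir, check=True)                    # 套用模型產生的 patch(由 stdin 讀入)
    result = subprocess.run(["python", "-m", "pytest", *fail_to_pass_tests],
                            cwd=repo_dir)                       # 執行指定的測試
    return result.returncode == 0                               # 原本失敗的測試全數通過即視為修復成功
```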
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1joo24i/new_1_on_swebench_leaderboard_anyone_tried_them/
- 外部連結: https://www.swebench.com/
- 發布時間: 2025-04-01 13:21:07
內容
討論
無討論內容
7. Gemini 2.5 Rate Limiting
這段討論的核心主題是:
用戶對Gemini 2.5 Pro Experimental的體驗與使用限制的困惑,具體包括以下幾點:
- 對Gemini效能的高度評價:用戶起初讚賞該模型能協助生成完整應用程式代碼,即使自己非專業開發者也能順利使用。
- 突然遭遇使用限制:原本可自由使用,但某天被「速率限制」(rate limiting)阻擋,無法繼續發送訊息。
- 對配額(quota)系統的困惑:
- 用戶認為自己的用量(如單一對話達30萬tokens)理應更早觸發限制,但模型卻持續運作,直到突然被切斷。
- 不確定配額計算方式或重置時間,尋求其他用戶的類似經驗與解答。
簡言之,討論聚焦於AI工具的實際體驗與官方限制政策之間的落差,以及用戶對配額機制透明度的疑問。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jom1s0/gemini_25_rate_limiting/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jom1s0/gemini_25_rate_limiting/
- 發布時間: 2025-04-01 11:19:49
內容
I know this isn't a ChatGPT question, but I'm new to the scene and don't know where else to ask.
I've been using Gemini 2.5 Pro Experimental for the past few days and it is amazing. Or was, until it completely shut me out today. It built one complete app and most of a second one. This after noon I got a rate limiting message and I can't send it any more messages.
I read the quotas and I'm confused. I feel like I should have been cut off long ago, but this thing gave me tons of working code. I'm not a coder, I just told it what to do and it just keep going. I had one chat up to 300k tokens.
Has anyone had this experience, and will my rate reset?
討論
評論 1:
Same here. I'd like to know as well
8. Y'all who are raving about Gemini 2.5 Pro - which IDE / plugin are you using? Aider, Cline, Roo, Cursor, etc
這段討論的核心主題是:用戶在使用Roo與Gemini進行程式開發時遇到的技術問題與替代工具評估。具體可分為以下幾點:
-
Gemini的技術缺陷
用戶指出Gemini在整合開發中出現明顯錯誤(如將程式碼誤置於註解區、檔案內容遺失、頻繁觸發429錯誤),推測可能與自身配置或工具適配性(如Roo預設優化於Claude)有關。 -
工具相容性探討
用戶質疑當前開發環境(如Roo)是否適合Gemini的長上下文需求,並考慮轉換其他IDE/插件(如Cursor或Aider),尤其關注社群案例(YouTuber使用Cursor)與成本效益(Cursor訂閱費$20)。 -
替代方案比較
提及Aider作為「少量請求導向」工具的潛在優勢,反映用戶在「頻繁小改動(Claude模式)」與「長上下文處理(Gemini)」之間尋求平衡的決策過程。
總結:討論聚焦於Gemini在實際開發中的技術瓶頸,以及如何透過工具選擇或配置優化提升協作效率,同時權衡成本與現有工作流程的適配性。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnw1z8/yall_who_are_raving_about_gemini_25_pro_which_ide/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jnw1z8/yall_who_are_raving_about_gemini_25_pro_which_ide/
- 發布時間: 2025-03-31 13:27:27
內容
I'm trying Roo with Gemini, but it makes a lot of errors. Egregious errors like writing import statements inside a function's comment block; then just deleting the rest of the file, then getting stuck in 429. I've tried quite a few times and haven't gotten a session I didn't roll back entirely. So I've gotta think it's a configuration issue on my end. Or maybe Roo needs special configuration for Gemini, because it's inclined towards many and smaller changes via Claude (which I have great success with).
So I'm thinking, maybe one or other IDE / plugin is more conducive for Gemini's long-context usage, at this time? I figure they'll all get it ironed out, but I'd love to start feeling the magic now. I've seen some of the YouTubers using it via Cursor; so that's where I'm leaning, but figured I'd ask before re-subscribing $20. Also been seeing some chatter around Aider, which is typically more few-request style.
討論
評論 1:
I have tried with aider and cursor. It works great with both
評論 2:
Using it with Cline. Highly recommend getting a Gemini API key -- the rate limiting seems far less severe (you get 5 requests per minute so in the off chance you're doing more than that you just wait for a little bit).
All in all -- it's blowing me away. This feels like the first model since 3.5 Sonnet was released that is a real step change. In performance, agentic qualities, and context window it's insanely impressive. This is IMO the best model to use in Cline.
評論 3:
Cline with my own API key. Fast, smart, and almost bulletproof.
評論 4:
roo code is the most advanced one
評論 5:
I tried it on Aider and Roo. My final choice was Roo because I like having an interface. It does cost tokens, but after getting tier 1 through adding billing, that is no longer an issue. Also, I am enjoying Boomerang Tasks on Roo, and things are going great right now.
9. My AI coding playbook: Tactics I've learned after taking down production sites
該文章的核心討論主題是AI 輔助的程式開發工作流程,具體涵蓋以下重點:
-
AI 工具整合
作者分享如何將 AI 工具(如 GitHub Copilot、ChatGPT)融入日常編程流程,提升開發效率,例如自動生成代碼片段、除錯或優化現有代碼。 -
實際應用案例
通過具體示例(如網頁開發、腳本編寫)展示 AI 如何協助完成重複性任務、快速原型設計,甚至解決複雜問題。 -
工作流程優化
強調 AI 如何改變傳統開發模式,包括:- 減少手動編碼時間
- 加速學習新技術
- 輔助代碼審查與測試
-
潛在挑戰與建議
討論 AI 工具的局限性(如準確性問題),並提供實用建議(如人工覆核、提示詞技巧)以最大化工具效益。
總結:文章聚焦於「AI 如何成為開發者的協作夥伴」,從技術實踐到經驗反思,呈現一種高效且互補的現代編程工作流。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jo8a47/my_ai_coding_playbook_tactics_ive_learned_after/
- 外部連結: https://www.asad.pw/my-ai-coding-workflow/
- 發布時間: 2025-04-01 01:07:18
內容
連結: https://www.asad.pw/my-ai-coding-workflow/
討論
無討論內容
10. 10 Min video on how to secure you AI vibe-coded app
原貼文內文僅有一句「Thanks」與一段外部影片連結,沒有可供分析的文字內容。依標題推測,影片主題是如何為 AI「vibe-coded」應用做安全加固(約 10 分鐘),具體內容需觀看影片確認。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1joiugz/10_min_video_on_how_to_secure_you_ai_vibecoded_app/
- 外部連結: https://youtu.be/4Pnjq94Zr6w?si=HhN2uK8xq17UPc5R
- 發布時間: 2025-04-01 08:35:34
內容
Thanks
討論
評論 1:
Thanks
11. Free LLM credits for beta testing AI coding mentor
這段文章的核心討論主題是:
一位數據科學家在開發網頁應用時面臨的軟體工程實踐挑戰,以及開源工具(Dyad)如何幫助他遵循正確的開發標準與最佳實踐,避免技術債累積。
具體要點包括:
- 跨領域挑戰:作者(數據科學家)意外涉足網頁開發,並借助大型語言模型(LLMs)完成任務,但缺乏對軟體工程標準的自信。
- 對技術債的擔憂:擔心自己寫出的程式碼僅短期有效,但長期可能因未遵循最佳實踐而產生技術債。
- 尋求解決方案:認為開源工具 Dyad 能引導他走向正確的開發方向,暗示其可能提供結構化指導或最佳實踐範例。
此外,文章也隱含對「非專業開發者如何在高複雜度任務中維持程式碼品質」的探討,並強調開源協作與工具的重要性。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jo88c9/free_llm_credits_for_beta_testing_ai_coding_mentor/
- 外部連結: https://v.redd.it/r0pyp6hk02se1
- 發布時間: 2025-04-01 01:05:16
內容
I somehow forgot to mention that this is open-source :) Here's the GitHub repo: https://github.com/dyad-sh/dyad As a data scientist, I unexpectedly found myself taking on web app development tasks. With the help of LLMs, I’ve been able to deliver results that even software engineers sometimes struggle with. However, I feel frustrated because I’m unsure whether I’m following the correct standards and best practices of software engineering. I don’t want to be the kind of person who writes code that merely works for now but accumulates technical debt in the long run.
I think using Dyad would actually help me towards the right direction.
討論
評論 1:
I somehow forgot to mention that this is open-source :) Here's the GitHub repo: https://github.com/dyad-sh/dyad
評論 2:
As a data scientist, I unexpectedly found myself taking on web app development tasks. With the help of LLMs, I’ve been able to deliver results that even software engineers sometimes struggle with. However, I feel frustrated because I’m unsure whether I’m following the correct standards and best practices of software engineering. I don’t want to be the kind of person who writes code that merely works for now but accumulates technical debt in the long run.
I think using Dyad would actually help me towards the right direction.
12. How to effectively use AI coders? (Common Mistakes) (Trae)
核心討論主題總結:
-
Trae Coder工具的穩定性與功能問題
- 工具處於早期階段,嘗試創建應用時頻繁出現錯誤,尤其在React框架支援上問題嚴重(如隨機安裝不相容的套件)。
- Vue專案能運行,但React專案無法正常運作,顯示框架支援不完善。
- 資料庫連接問題(如Xampp的SQL連接失敗,僅MongoDB本地連接成功),對未來上線環境的可靠性存疑。
-
開發過程中的技術障礙
- 更新功能時導致既有程式碼損壞,甚至影響其他功能或使整個應用崩潰,反映工具或開發流程存在嚴重缺陷。
- 缺乏穩定的版本控制或依賴管理機制,可能導致套件衝突或覆寫問題。
-
尋求解決方案與學習資源
- 詢問是否有官方指南或常見錯誤避免建議,尤其針對初學者。
- 希望學習框架使用技巧、程式碼安全性(避免漏洞)與除錯方法,以提升開發穩定性。
隱含議題:
-
工具成熟度不足:需等待官方修復或選擇更穩定的替代方案。
-
開發者需加強基礎知識:如依賴管理、框架核心概念、錯誤排查等,以應對工具限制。
-
發布時間: 2025-04-01 06:48:14
內容
I am testing out Trae Coder. It's new, and when I try to create an app, it gives a lot of errors (I mean a lot!).
It literally cannot use the framework React and installs node packages that aren't compatible with the project (everything is picked randomly).
Using Vue projects works, but not with React.
There is also trouble connecting with the database, especially with SQL using Xampp; the MongoDB connection works fine locally. (Don't know if the app ever gets production-ready, it will be able to use the server)
Now, when I update some feature in the app, it breaks the previous code, and other features are overwritten, causing the previous features to not work. Worse, even new features stop functioning—sometimes, the whole app stops working!
Are there any guides or something that can help with it? Or are there some beginner mistakes I should avoid? Is there anything I can learn about working with a framework, making sure code doesn't have exploits, and there are no errors at the end?
討論
評論 1:
Every issue you listed stems from you presumably not knowing how to code.
Are there any guides or something that can help with it?
There are literally thousands of react courses. I suggest going through one and making a CRUD app without an LLM writing the code.
評論 2:
from my experience that's the state they are all in, they can speed up development by a lot and are super useful, but they don't work as good as a lot of people try to make them look
評論 3:
I created a video on Intro to AI coding that I think might be helpful for you - it covers some of the beginner mistakes and limitations of LLMs.
13. Aider v0.80.0 is out with easy OpenRouter on-boarding
這篇文章的核心討論主題是 aider 工具的更新與改進,主要涵蓋以下幾個方面:
-
OpenRouter 的 OAuth 整合與模型選擇
- 當用戶未提供模型和 API 金鑰時,aider 會通過 OAuth 協助連接 OpenRouter,並根據用戶的免費或付費帳戶自動選擇最佳模型。
-
模型優先級與配置改進
- 根據環境變量(如
GEMINI_API_KEY或VERTEXAI_PROJECT)自動選擇特定模型(如 Gemini 2.5 Pro)。 - 新增支援模型(如
openrouter/deepseek-chat-v3-0324:free)。
- 根據環境變量(如
-
用戶體驗與功能優化
- 驗證並提示無效的顏色配置。
- 警告
--stream和--cache-prompts同時使用可能導致成本估算不準確。 - 改進終端輸出的 Markdown 標題對齊方式。
- 新增快捷鍵(
Ctrl-X Ctrl-E)以在外部編輯器中修改輸入內容。
-
錯誤修復與穩定性提升
- 修復路徑含特殊字符時的 linting 錯誤。
- 處理 git 未安裝或工作目錄被刪除等異常情況。
- 修正
/run指令的自動測試問題及其他程式錯誤(如UnboundLocalError)。
-
擴展支援與協作改進
- 新增 Scala 語言的 repomap 支援。
- 提升 repomap 對聊天中提及的識別符號的關聯性排名。
總結:文章重點在於 aider 的新功能、用戶體驗優化及錯誤修復,並強調工具透過 AI(如 Gemini 2.5 Pro)輔助完成大部分開發工作。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jns43q/aider_v0800_is_out_with_easy_openrouter_onboarding/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jns43q/aider_v0800_is_out_with_easy_openrouter_onboarding/
- 發布時間: 2025-03-31 09:34:26
內容
If you run aider without providing a model and API key, aider will help you connect to OpenRouter using OAuth. Aider will automatically choose the best model for you, based on whether you have a free or paid OpenRouter account.
Plus many QOL improvements and bugfixes...
-
Prioritize
gemini/gemini-2.5-pro-exp-03-25ifGEMINI_API_KEYis set, andvertex_ai/gemini-2.5-pro-exp-03-25ifVERTEXAI_PROJECTis set, when no model is specified. -
Validate user-configured color settings on startup and warn/disable invalid ones.
-
Warn at startup if
--streamand--cache-promptsare used together, as cost estimates may be inaccurate. -
Boost repomap ranking for files whose path components match identifiers mentioned in the chat.
-
Change web scraping timeout from an error to a warning, allowing scraping to continue with potentially incomplete content.
-
Left-align markdown headings in the terminal output, by Peter Schilling.
-
Update edit format to the new model's default when switching models with
/model, if the user was using the old model's default format. -
Add the
openrouter/deepseek-chat-v3-0324:freemodel. -
Add
Ctrl-X Ctrl-Ekeybinding to edit the current input buffer in an external editor, by Matteo Landi. -
Fix linting errors for filepaths containing shell metacharacters, by Mir Adnan ALI.
-
Add repomap support for the Scala language, by Vasil Markoukin.
-
Fixed bug in
/runthat was preventing auto-testing. -
Fix bug preventing
UnboundLocalErrorduring git tree traversal. -
Handle
GitCommandNotFounderror if git is not installed or not in PATH. -
Handle
FileNotFoundErrorif the current working directory is deleted while aider is running. -
Fix completion menu current item color styling, by Andrey Ivanov.
Aider wrote 87% of the code in this release, mostly using Gemini 2.5 Pro.
Full change log:
https://aider.chat/HISTORY.html
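下面以一段假設性的 Python 片段示意上文「未指定模型時依環境變數決定預設模型」的優先順序邏輯(概念示意,並非 aider 原始碼):

```python
import os

def pick_default_model() -> str:
    # 對應發行說明:設定 GEMINI_API_KEY 時優先 gemini/...,設定 VERTEXAI_PROJECT 時優先 vertex_ai/...
    if os.getenv("GEMINI_API_KEY"):
        return "gemini/gemini-2.5-pro-exp-03-25"
    if os.getenv("VERTEXAI_PROJECT"):
        return "vertex_ai/gemini-2.5-pro-exp-03-25"
    return "openrouter/auto-selected"   # 假設性後備:實際上 aider 會透過 OpenRouter OAuth 依帳戶類型挑選模型

print(pick_default_model())
```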
討論
評論 1:
This version produces an error when using openrouter/deepseek/deepseek-chat-v3-0324:free
> litellm.APIConnectionError: APIConnectionError: OpenrouterException - 'choices'
> The OpenRouter API provider is down or overloaded.
It works fine with OpenRouter with other models. Also, yesterday this worked fine in version 0.79.2 for me.
評論 2:
Loving the constant releases and upgrades but lately I’ve found them less compelling and “groundbreaking “ than in the past.
Oddly, Paul the maintainer is quite silent about key features like MCP… I’d love a clearer roadmap.
評論 3:
How does Aider compare to Claude Code?
14. Best way for non-developers to code the backend with AI for a frontend I built on V0?
該文章的核心討論主題是:為非開發者尋找最簡單且最佳的方式來實現後端開發(例如 Supabase 集成、API 集成等),特別是在使用 v0 構建 Web 應用程式的情境下。
重點包括:
- 目標受眾:非開發者(無深厚程式背景的人)。
- 技術需求:後端功能的簡化解決方案,如數據庫(Supabase)和第三方 API 的集成。
- 工具背景:基於 v0(推測為低代碼/無代碼平台)開發的 Web 應用程式。
延伸討論可能涉及:
- 低代碼/無代碼後端工具(如 Firebase、Airtable、Zapier)的比較。
- 如何降低技術門檻,例如透過 GUI 介面或預設模板實現後端邏輯。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jod1ko/best_way_for_nondevelopers_to_code_the_backend/
- 發布時間: 2025-04-01 04:19:51
內容
I built a web app on v0 and I’m curious what is the best and simple way for non-developers to code backend (Supabase integration, APIs integrations, etc)
討論
評論 1:
Make Cloudflare workers and deploy them with wrangler
評論 2:
The best way is to learn how REST APIs work and to read the documentation. It's really not that complicated and its better than stumbling around blind.
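呼應上方「先弄懂 REST API」的建議,下面是一段用 Python requests 呼叫 REST 端點的最小示意(端點、資料表與金鑰皆為假設,實際格式請以所用服務——例如 Supabase——的官方文件為準):

```python
import requests

API_URL = "https://example.supabase.co/rest/v1/todos"   # 假設性端點與資料表
headers = {
    "apikey": "YOUR_ANON_KEY",                # 假設性金鑰
    "Authorization": "Bearer YOUR_ANON_KEY",
}

# 讀取資料(GET)
resp = requests.get(API_URL, headers=headers, params={"select": "*"})
resp.raise_for_status()
print(resp.json())

# 新增資料(POST)
resp = requests.post(API_URL, headers=headers, json={"title": "learn REST"})
resp.raise_for_status()
```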
15. Polio, Bloatware and Vibe Coding
這兩段文字的核心討論主題是「AI在軟體開發中的矛盾影響」,並凸顯了以下對立觀點:
-
AI導致的盲目依賴與混亂
第一段批評開發者過度依賴AI工具(如自動生成程式碼建議),未經審查即接受其輸出,導致軟體堆疊出無意義的複雜性("vibe coding")。這種被動接受的行為使程式碼變得難以理解且低效,與「刻意設計的膨脹」不同,屬於無目的的技術債。 -
AI的理想化潛力
第二段卻以「客製化鞋子」為比喻,強調AI能精準分析用戶需求並動態提供完美解決方案,暗示AI若能正確應用,可實現高度個人化且不犧牲通用性的軟體設計。
矛盾點在於:
前段揭露AI實際使用中的負面效應(開發者懶於思考,加劇混亂),後段卻描繪AI的理想願景(精準適配需求)。這種對比凸顯了「AI工具當前實踐與理論潛力之間的巨大落差」,並暗示問題不在AI本身,而在人類如何(濫)用它。
你的困惑(是否矛盾)正是文章刻意呈現的張力——作者可能藉此批判「盲目樂觀的AI敘事」,或呼籲更審慎的使用態度。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnooi7/polio_bloatware_and_vibe_coding/
- 外部連結: https://bozhao.substack.com/p/polio-bloatware-and-vibe-coding
- 發布時間: 2025-03-31 06:38:55
內容
This article made no sense.
Developers accept AI suggestions without checking, skip reviewing changes, paste error messages without thinking, and work around bugs instead of properly fixing them.
This creates software complexity nobody fully understands. It's complexity without a clear purpose, building layers of confusion and inefficiency. Unlike Spolsky's intentional bloat, vibe coding results in messy, confusing, and ineffective software.
And then....
... A clobber can analyze exactly how I walk, figure out precisely what my feet need, and deliver a perfectly customized shoe. Similarly, in software, AI can understand a user's context and deliver exactly the right features at exactly the right time without sacrificing capabilities that matter to others.
Am I retarded, or are these two paragraphs contradicting?
討論
評論 1:
This article made no sense.
Developers accept AI suggestions without checking, skip reviewing changes, paste error messages without thinking, and work around bugs instead of properly fixing them.
This creates software complexity nobody fully understands. It's complexity without a clear purpose, building layers of confusion and inefficiency. Unlike Spolsky's intentional bloat, vibe coding results in messy, confusing, and ineffective software.
And then....
... A clobber can analyze exactly how I walk, figure out precisely what my feet need, and deliver a perfectly customized shoe. Similarly, in software, AI can understand a user's context and deliver exactly the right features at exactly the right time without sacrificing capabilities that matter to others.
Am I retarded, or are these two paragraphs contradicting?
16. Tool for managing large codebase context
這篇文章的核心討論主題是作者介紹其個人工作流程工具「Prompt Tower」,並說明其功能與優勢。重點包括:
-
工作流程整合:
- 作者目前的工作流程為「Prompt Tower → Gemini 2.5 → Cursor Agent」,其中Gemini 2.5是核心,能處理複雜的多步驟修改,但需要更有效率的上下文生成工具。
-
Prompt Tower的開發動機:
- 為了解決大型程式碼庫中「無關token過多」的問題,作者改進了自己開發的擴展工具(Prompt Tower),提供比現有工具(如gitingest)更高的靈活性和環境整合性。
-
Prompt Tower的主要功能:
- 動態上下文選擇:從文件樹中靈活選取相關內容。
- 目錄結構注入:可選擇包含完整目錄、僅目錄名稱或特定部分。
- 忽略規則:支援.gitignore、自訂忽略文件及工作區設定。
- 自訂模板:允許進階用戶設計提示詞與上下文模板(預設為XML格式)。
-
效能與測試:
- 目前測試上限為500萬tokens,但尚未在更大規模的程式碼庫中驗證。
-
未來發展:
- 作者提到工具有多種發展方向,暗示後續可能新增便利功能與文檔。
總結:文章聚焦於「Prompt Tower」如何優化開發者工作流程,尤其強調其動態篩選上下文的能力,以提升AI工具(如Gemini)的執行效率,並探討未來的潛在改進空間。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnxvb3/tool_for_managing_large_codebase_context/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jnxvb3/tool_for_managing_large_codebase_context/
- 發布時間: 2025-03-31 15:42:36
內容
Right now my favorite personal workflow is:
Prompt Tower -> Gemini 2.5 -> instructions for Cursor Agent.
Gemini is the star of the show, often enabling cursor to follow 10-16 step changes successfully, but I needed a quicker way to create relevant context for Gemini on top of a large codebase.
Tools like gitingest are great but I needed much more flexibility (less irrelevant tokens) and integration in my environment. So I updated an extension I created a year ago.
Give it a try:
https://github.com/backnotprop/prompt-tower
-
dynamic context selection from file tree
-
directory structure injection (everything, directories only, or selections only)
-
robust ignore features (.gitignore, custom ignore file per project, and workspace settings)
-
custom templates (prompts, context), you’ll need to be an advanced user for this until I provide some convenience features as well as docs. For now XML style is the default.
It seems to do fine up to 5M tokens, but I haven’t tested on any large codebases. (Edit: have not tested for anything *larger than 5M)
There is a lot of directions I can take prompt tower.
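下面以一段概念性的 Python 片段示意「從選定檔案組出 XML 風格上下文並粗估 token 數」的做法(假設性示意,並非 Prompt Tower 的實際實作;忽略規則與副檔名清單皆為簡化假設):

```python
from pathlib import Path

IGNORED = {".git", "node_modules", "__pycache__"}   # 簡化版忽略規則(實際工具支援 .gitignore、自訂忽略檔等)

def build_context(root: str, extensions=(".py", ".ts")) -> str:
    parts = []
    for path in Path(root).rglob("*"):
        if any(part in IGNORED for part in path.parts):
            continue
        if path.suffix in extensions and path.is_file():
            body = path.read_text(encoding="utf-8", errors="ignore")
            parts.append(f'<file path="{path}">\n{body}\n</file>')   # XML 風格包裝,對應工具的預設模板
    context = "\n".join(parts)
    print(f"approx tokens: {len(context) // 4}")                     # 粗略估計:約 4 個字元 ≈ 1 個 token
    return context
```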
討論
評論 1:
Title: "Tool for managing large codebase"
In text: "…but I haven’t tested on any large codebases."
評論 2:
I don't understand, why do you have to put the code into the prompt? Can't you just reference the code with @ symbols?
評論 3:
https://repoprompt.com/ is a lot like this. Designed for easy optimized prompt building. Good work!
評論 4:
[removed]
評論 5:
Very cool. Looks similar to my tool 16x Prompt.
17. My theory about why AI both sucks and is great for code generation
這篇文章的核心討論主題是:當前AI程式碼生成工具在實際應用中的可靠性與上下文記憶限制問題,並質疑技術宣傳與現實能力之間的落差。具體要點如下:
-
技術宣傳與現實的差距
作者指出,AI程式碼生成工具雖能產出功能性成果(如GUI或網站),但表現極不穩定,時好時壞,暗示業界可能過度美化技術的成熟度。 -
上下文記憶的關鍵缺陷
- AI在迭代修改程式碼時,常破壞先前工作或陷入無效循環,推測主因是無法有效維持上下文記憶(即使工具聲稱有「記憶」功能)。
- 隨著對話次數增加,模型的回應品質急劇下降,甚至重複失敗方案,反映上下文窗口可能隨時間縮小或失效。
-
成本與技術限制的潛在關聯
質疑部分工具為降低成本,可能犧牲上下文處理能力(如壓縮傳輸的token量),導致無法長期追蹤複雜程式庫的邏輯。 -
實用工作流程的調整建議
- 對小型專案(如個人網站),從頭生成比修正現有程式更可靠。
- 將任務拆解為獨立請求,避免依賴AI的長期上下文記憶。
-
開放提問
呼籲業內人士解釋技術背後的工程取捨(如快取機制如何影響上下文保留),以釐清當前限制是否為暫時性或根本性問題。
總結:作者認為現階段AI程式生成工具的核心瓶頸在於**「持續性上下文應用」能力不足**,並提醒開發者需審慎評估技術宣傳與實際效能的落差。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnrikq/my_theory_about_why_ai_both_sucks_and_is_great/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jnrikq/my_theory_about_why_ai_both_sucks_and_is_great/
- 發布時間: 2025-03-31 09:02:06
內容
I spent a large chunk of time and money last month doing a lot of work with AI code generators
However, the more I use these tools, the more I'm becoming convinced that there's a huge amount of ... misrepresentation going on. Not outright lying, per se. But willful denial of the actual state of technology versus where people might like it to be.
The big challenge with using AI for code generation doesn't seem to be that it can't do it. I'm sure we've all seen examples in which it "one-shotted "functional GUIs or entire websites. The problem seems to be that it can't do it reliably well. This becomes very confusing. One day, these work amazingly well, and the next, they're almost useless. Fluctuations in demand aside, I felt like there was something else going on.
Here's my working theory.
The most common frustration I've experienced with AI code gen is getting into a project believing that you can start iterating upon a good basis, then watching in horror as AI destroys all of its previous work, or goes around in circles fixing five things only to ruin another.
Another common observation: After about five turns, the utility of the responses begins to go dramatically down until they sometimes eventually reach a point of absurdity where the model begins going in circles, repetitively trying failed solutions (while draining your bank account!)
This, to me, suggests a common culprit: the inability of the agents to reliably and usefully use context. It's like the context window is closing as it works (perhaps it is!).
Without the memory add-on some of these tools are adding, the agents seem to quickly forget what it is they're even working on. I wonder whether this is why they tend to so commonly seem to fixate on irrelevant or overcomplicated "solutions": The project doesn't really begin with the code base.
Another good question, I suggest, is whether this might have something to do with the engineering of these tools for cost reasons.
When you look at the usage charges for Sonnet 3.7 and the amount of tokens that are required to provide entire codebases, even as expensive as they are, some of the prices that some IDEs are charging actually don't appear to make sense.
An unanswered claim often seems to be how certain providers manage to work around this limitation. Even factoring in for some caching, there's an awful lot of information that needs to be exchanged back and forth. What kind of caching can be done to hold that in context and - I think the more useful question - how does that effect context retention?
So in summary: my theory (based on speculation, potentially entirely wrong) is that the ability of many agentic code generation tools to actually sustain context usefully (for tools that send a code-base non-selectively to the model) is really not quite there yet. Is it possible that we're being oversold on a vision of technology that doesn't really exist yet?
Acting on this assumption, I've adjusted my workflows. It seems to me that you've got a far better chance of creating something by starting from scratch than trying to get the tools to edit anything that's broken. This can actually work out well for simpler projects like (say) portfolio websites, but isn't really a viable solution for larger codebases. The other one is treating every little request as its own task, even when it's only a subset of one.
I'd be interested to know if anyone with greater understanding of the engineering behind these tools has any thoughts about this. Sorry for the very long post! Not an easy theory to get across in a few words.
討論
評論 1:
There's traditional coding and then there is 'vibe coding'. I feel the best way to utilize AI is to be in the middle of this (we need a phrase for that).
Having AI rewrite and change a bunch of things off a single prompt, or attempting to one shot things is a fools game.
Having a full understanding of your code base, and implementing/changing specific features is the key.
I see people crying about cursor being/becoming worse, while they feed 10k of context to it, and expecting it to just flawlessly change large chunks of code with no issues is hilarious to me.
All though I stopped using cursor and just made my own cursor like app (now I'm only paying for API usage, no slow requests, but usually just use free deepseek v3.1 so I pay nothing most of the time), I find it funny. The key is to understand your code base and do small incremental steps when adding/removing features.
This is where having a lot of dev experience comes into play, and it shows a lot of people are becoming more lazy and or don't have enough experience, by the amount of complaints we are seeing.
評論 2:
Yes the blocker right now is context size. Even the models with very large context seem to dilute the models focus in order to work at the larger scale.
We’re seeing huge leaps forward - if we carry on at this pace then the future will really be something to behold - especially if you have the ability to command the robots.
評論 3:
Most of the issues in this post demonstrate a lack of understanding of how to effectively use an LLM.
I suppose with the vibe hype, agents will need to accomodate non-coders that also don't understand prompt engineering and users that don't understand LLM limitations.
The most common frustration I've experienced with AI code gen is getting into a project believing that you can start iterating upon a good basis, then watching in horror as AI destroys all of its previous work, or goes around in circles fixing five things only to ruin another.
This is because conversations have gone on too long. It also tells me people don't understand they should be using git and doing very small amounts of work. This is something most programmers know.
Another common observation: After about five turns, the utility of the responses begins to go dramatically down until they sometimes eventually reach a point of absurdity where the model begins going in circles, repetitively trying failed solutions (while draining your bank account!)
Again, conversations are too long.
This, to me, suggests a common culprit: the inability of the agents to reliably and usefully use context. It's like the context window is closing as it works (perhaps it is!).
I guess agents will need to start saving naive users from themselves, by doing things like guiding the user to the right thing. "Do you want to break this into separate tasks?" "The conversation is getting long. Do you want to start a new one or do you want me to remember a summary?"
Without the memory add-on some of these tools are adding, the agents seem to quickly forget what it is they're even working on. I wonder whether this is why they tend to so commonly seem to fixate on irrelevant or overcomplicated "solutions": The project doesn't really begin with the code base.
Ah, confirmation of what I've been saying. Memory is basically saving people from themselves, but it's not perfect. It's duct tape. Better to keep chats short in the first place.
When you look at the usage charges for Sonnet 3.7 and the amount of tokens that are required to provide entire codebases, even as expensive as they are, some of the prices that some IDEs are charging actually don't appear to make sense.
No no no. Don't provide an entire codebase. Only provide what's need for the current task. This is the kind of misuse that I'm talking about.
評論 4:
Google Gemini 2.5 in AI studio has been really useful for me, particularly because you can delete the history one piece at a time. So when I confidentally reassure the machine that a variable is set correctly - that actually wasn't - and it goes down a deep rabbit hole trying to find the real problem, I can easily pull it back out.
評論 5:
maybe this video will be useful to you: https://www.youtube.com/watch?v=7xTGNNLPyMI
18. What is the latest and greatest for autonomous computer use?
該 GitHub 專案(browser-use/browser-use)的核心討論主題圍繞於 「評估該工具是否為當前最強大的瀏覽器使用輔助工具」,具體可能包含以下方向:
-
功能定位:
專案是否提供獨特的瀏覽器操作功能(例如自動化、數據收集、跨平台支援等),並與其他同類工具(如 Puppeteer、Selenium)比較。 -
技術能力:
討論其實現方式(如依賴的底層技術、API 整合)、效能表現,以及是否支援現代瀏覽器特性(如無頭模式、擴充套件控制)。 -
適用場景:
是否解決特定需求(例如網頁爬蟲、測試、用戶行為模擬),或因其輕量化、易用性而脫穎而出。 -
社群反饋與維護狀態:
透過 GitHub 的 Issues 或 Star 數等指標,評估工具的可靠性與活躍度,判斷其是否為「當前最佳選擇」。
若需更準確的結論,建議直接查閱專案文件或社群討論,確認其具體功能與比較基準。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnv62s/what_is_the_latest_and_greatest_for_autonomous/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jnv62s/what_is_the_latest_and_greatest_for_autonomous/
- 發布時間: 2025-03-31 12:29:26
內容
I know of this 'browser-use' github project. Is this the most capable tool right now?
https://github.com/browser-use/browser-use
討論
評論 1:
OpenAI’s CUA is the best right now. Claude computer use is close imo. Browser-use is great and depending on what models you use can be 20x cheaper but it hallucinates a lot more and struggles at filling out forms or longer running tasks.
Claude computer use is currently my personal favorite. I think it’s the best combination of cost/speed/accuracy rn.
評論 2:
You can also check out Skyvern https://github.com/Skyvern-AI/Skyvern -- it's better suited for RPA-style tasks but you might find it interesting!
disclaimer: am the founder and you can ask me questions
評論 3:
Fascinating area, I'm on Linux so some of these are distro specific ish but you might also have fun checking them out.
- Browser use and its web UI project (also on Github). This is immensely cool, as I'm guessing you've discovered!
- Open Interpreter - Can't quite understand why development of this project seems to have stalled because it's very impressive. If you're looking for something to take over a CLI under supervision, then this is worth checking out.
- The browser automation and RPA stuff, as the poster below mentioned, is also an intriguing area.
The tool that I would absolutely love to have is something like a voice assistant that would allow me to dictate commands for operations I run all the time. Stuff like "find all the images on my desktop and move them to this images folder", "put all the images on my desktop starting with Leonardo and put them into my AI images folder."
As I've learned with these code generation things, virtually *everything* can be automated/scripted with Python and Bash. But eventually you get sick of writing automation scripts and I think that a true human language interface would be the next level. Waiting for it to come out eagerly!
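作為對照,下面是一段「傳統腳本式」瀏覽器自動化的最小示意(使用 Playwright 同步 API,僅為假設性範例);browser-use 這類工具的差異在於改由 LLM 決定每一步操作,而不是像這樣把步驟寫死:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")            # 示意網址
    print(page.title())                          # 讀取頁面標題
    # page.fill("input[name=q]", "hello")       # 填表、點擊等操作需依實際頁面結構調整
    # page.click("text=More information")
    browser.close()
```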
19. What is the trick for getting past the Gemini 2.5 pro rate limits right now?
原貼文內文為空(僅有一個句點),無法從正文歸納重點。依標題,作者想知道目前有什麼方法可以繞過 Gemini 2.5 Pro 的速率限制;留言區的建議包括改用多個 Google 帳號、在 AI Studio 透過 tier 1 取得更高的 RPM,以及設定計費方案(據稱不會產生費用)。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnw4nd/what_is_the_trick_for_getting_past_the_gemini_25/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jnw4nd/what_is_the_trick_for_getting_past_the_gemini_25/
- 發布時間: 2025-03-31 13:32:34
內容
.
討論
評論 1:
Try using multiple Google accounts.
評論 2:
Using ai studio with tier 1 you can get more rpm
評論 3:
I never hit limit so far, what's up? Maybe set up billing plan (there will be no charge)
評論 4:
Use it less.
評論 5:
Me too
20. Is everyone building web scrapers with ChatGPT coding and what's the potential harm?
這篇文章的核心討論主題圍繞以下幾點:
-
網頁爬蟲(web scrapers)對網站資源的負面影響:
作者指出,儘管不反對爬蟲技術,但大量爬蟲行為導致網站資源(如頻寬、伺服器負載)的消耗問題日益嚴重。 -
對爬蟲使用現狀的疑問:
作者詢問讀者是否在ChatGPT編程時頻繁使用爬蟲技術,並探討這種趨勢是否加劇了問題。 -
對抗爬蟲的技術(如Cloudflare Labyrinth):
討論Cloudflare等公司開發的反爬蟲機制(例如Labyrinth)的效用,以及其對爬蟲與網站營運者的影響。 -
替代方案的提議:
作者建議建立「公共資料庫」的構想,讓網站主動發布可爬取的數據,並由大型雲端供應商資助,以減少重複爬取造成的資源浪費,同時承認此構想可能過於理想化。
總結:
文章聚焦於爬蟲技術的資源消耗問題、現行反制措施(如Cloudflare)的優缺點,以及可能的協作式解決方案(公共資料共享),反映對網路資源分配與技術倫理的思考。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jngg64/is_everyone_building_web_scrapers_with_chatgpt/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jngg64/is_everyone_building_web_scrapers_with_chatgpt/
- 發布時間: 2025-03-31 00:37:39
內容
I run professional websites and the plague of web scrapers is growing exponentially. I'm not anti-web scrapers but I feel like the resource demands they're putting on websites is getting to be a real problem. How many of you are coding a web scraper into your ChatGPT coding sessions? And what does everyone think about the Cloudflare Labyrinth they're employing to trap scrapers?
Maybe a better solution would be for sites to publish their scrapable data into a common repository that everyone can share and have the big cloud providers fund it as a public resource. (I can dream right?)
討論
評論 1:
Anybody out there need data from websites that's been scraped check out https://commoncrawl.org/
I'm not affiliated, it's free scraped website data for any site you can think of, it takes the pressure off the site. You can even integrate via s3 and athena if you like, or use their api.
評論 2:
I agree this is a problem. I have steady traffic and a quad-core VM ran just fine until lately I get hit by thousands of bots at a time so I am moving to serverless.
I made a quite obvious "API" route to expose our site data in JSON so hopefully the crawlers/bots will find that as it is a very lightweight hit to KV storage.
評論 3:
All I can say is that if you're the one being scraped, don't try to block it. Just start returning bad/fake data if you detect the behavior. That way they can't easily play the cat and mouse game with you.
評論 4:
> Maybe a better solution would be for sites to publish their scrapable data.
They do that already! It's robots.txt. The problem is that a lot of scrapers do not care.
評論 5:
Its gonna get worse. The web will be inundated by agentic Ai that will tirelessly be looking for any and all website vulnerabilities from every website out there. From large to the smallest ma and pa websites that no hacker ever would waste their time on. And the reason is because a real human being has a specific threshold of work which he/she will never go below, because there's simply no value for a hacker to waste time with nothing substantial. But agents don't have that issue, and thus the web will crawl to a stop.
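呼應上方關於 robots.txt 的評論:爬蟲端在抓取前其實可以用 Python 標準庫先檢查 robots.txt,以下為最小示意(網址與 User-Agent 名稱皆為假設):

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")   # 示意網址
rp.read()

url = "https://example.com/some/page"
if rp.can_fetch("MyScraperBot/1.0", url):      # User-Agent 名稱為假設
    print("allowed to fetch:", url)
else:
    print("disallowed by robots.txt:", url)
```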
21. I created an AI-powered social media tool
這篇文章的核心討論主題是對網路內容質量下降的擔憂,特別是「低品質隨意發文」(low quality slopposting)的泛濫現象。作者認為這種趨勢正在迅速惡化,並預測在一年內可能對網路生態造成嚴重破壞(如「摧毀」web)。文中具體提到:
-
問題本質:
「slopposting」指粗製濫造、缺乏實質內容或價值的網路發文(如點擊誘餌、AI生成垃圾、無意義討論),作者觀察到其已蔓延至多數常訪問的網站。 -
危機感:
用「destroyed」強調問題的急迫性,暗示當前演算法、平台管理或用戶行為已無法遏制低質內容的擴散,可能導致網路可信度或實用性崩潰。 -
個人觀察佐證:
作者以自身體驗(「already I'm seeing...」)強化論點,反映這不僅是抽象趨勢,而是具體可感知的日常現象。
總結:文章核心在批判網路內容的「質變危機」,呼籲關注資訊垃圾化對網路生態的潛在毀滅性影響。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jo5fbr/i_created_an_aipowered_social_media_tool/
- 外部連結: https://i.redd.it/am3ke1qfj1se1.png
- 發布時間: 2025-03-31 23:08:09
內容
The web is going to be destroyed by low quality slopposting within a year.. already I'm seeing slop across most sites i visit.
討論
評論 1:
The web is going to be destroyed by low quality slopposting within a year.. already I'm seeing slop across most sites i visit.
22. I created a tool to create MCPs
該文章的核心討論主題是一位用戶開發了用於自動生成 MCP(Model Context Protocol)伺服器的工具,並在 Reddit 的 /r/mcp 子版塊分享此工具的功能、使用方式及相關成果。討論可能圍繞工具的技術細節、應用場景,或開放給社群試用與反饋的互動。
(註:結合 /r/mcp 版塊與本摘要其他文章(如 Roo Code 的 MCP 伺服器設定)的脈絡,此處的 MCP 指 Model Context Protocol,而非 Minecraft 相關工具。)
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnv0oh/i_created_a_tool_to_create_mcps/
- 外部連結: /r/mcp/comments/1jnv05a/i_created_a_tool_to_create_mcps/
- 發布時間: 2025-03-31 12:19:44
內容
/r/mcp/comments/1jnv05a/i_created_a_tool_to_create_mcps/
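延伸補充:若以官方 MCP Python SDK 撰寫一個最小的 MCP 伺服器,大致形如下方片段(假設性示意,以 SDK 文件的 FastMCP 快速上手寫法為基礎,實際介面請以官方文件為準;伺服器與工具名稱皆為虛構):

```python
# 假設已安裝官方 MCP Python SDK(pip install mcp);FastMCP 介面以官方文件為準
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")          # 伺服器名稱為虛構示意

@mcp.tool()
def add(a: int, b: int) -> int:
    """把兩個整數相加(示意用工具)。"""
    return a + b

if __name__ == "__main__":
    mcp.run()                         # 預設以 stdio 傳輸執行
```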
討論
無討論內容
23. People who can actually code, how long did it take you to build a fully functional, secure app with Claude or other AI tools?
原貼文內文僅有「Just curious.」一句,主題完全由標題帶出:詢問真正會寫程式的人,使用 Claude 等 AI 工具打造一個功能完整且安全的應用需要多久。留言區的經驗從「整體約 2~6 倍的提速」、「利用業餘時間約一到兩個月(約 130 小時)」到「約 25 小時完成第一個網頁應用」不等,並普遍認為 AI 加速的關鍵在於「想清楚要做什麼」而不只是打字速度。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jn8qrw/people_who_can_actually_code_how_long_did_it_take/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jn8qrw/people_who_can_actually_code_how_long_did_it_take/
- 發布時間: 2025-03-30 17:23:28
內容
Just curious.
討論
評論 1:
I'd probably say it's a 2-5x speedup, depending on what I'm doing and how much I can rely on the AI to not need much guidance.
Realistically, anything boilerplate like handling auth is pretty much a one-and-done that no longer requires me to go read docs because it's been too long.
Anything that's business-logic centric will still see a speedup, but it'll be more iterative and require more guidance.
評論 2:
I’ve got a full time job so this was all on the side… Took me a couple months total. Burned a week of vacation on it and most Saturdays were just gone. Probably put in around 130 hours maybe more. It was exhausting. Had some of those brutal 12 hour days too just trying to push something over the line.
Depends a lot on what you’re building though obviously. Like one whole day went into just setting up SSO and hooking up analytics. Another full day just wrestling with page load times and trying to shave off seconds. Even with AI helping a ton it still eats up time.
The real benefit with AI isn’t just that it speeds stuff up, though it definitely does. It’s that it unlocks things you might not have been able to pull off solo or makes the code cleaner and more maintainable than it would’ve been otherwise.
It makes you faster but not necessarily at typing code. That was never really the slow part. The bottleneck is always figuring out what the hell you even need to do, and how to do it without painting yourself into a corner. But AI speeds up that thinking part too. Maybe not perfectly but enough to matter. I’d say overall it cuts total project time down to a third. Which for side projects is the difference between finishing it and it just sitting there half done forever. That’s probably the biggest win. It helps you finish.
評論 3:
I can code and have been coding professionally for a while, but almost all of my experience has been server- or systems-level code. In particular, I've never built a web app of any kind in a professional capacity *or* as a side project.
That was until this last week. Last week I built this in about 25-ish hours of work spread out over around 10 days. A substantial part of that time was spent dealing with the minutiae of hosting the domain, setting up TLS, etc., which was also helped by chatting with AI (Gemini in this case). The point is, it's not just about writing some code and running it on your laptop. To actually build something functional and productize it, you need to learn a lot more. And yes, the LLMs will help you in that aspect too, but you have to take instructions and then manually carry them out. Oh I know, "agents". But I wouldn't trust agents to do this sort of stuff without messing everything up. Yet.
評論 4:
We have found AI peaking at around a 6x improvement in our timing. A project that used to take six months can be done in one.
6 weeks = 1 week
6 days = 1 day
6 hours = 1 hour
It's mind-blowing, really, how much power AI provides experienced devs. A true force multiplier.
To put this in perspective, too: last year when we started trying to measure this, it was less than half that (around the GPT-3.5 end-of-life timeframe).
評論 5:
I built oldworldrankings.com - at least the core part of it - in about a week of night hacking. It probably would have taken me months otherwise, as I'm not overly fond of UI work, and I think it came out great.
24. Context control for local LLMs: How do you handle coding workflows?
這篇文章的核心討論主題圍繞以下幾點:
- IDE整合與上下文選擇的挑戰:作者指出當前IDE(如Cursor)與LLM整合時,常面臨上下文選擇不精準的問題(例如包含無關文件或遺漏關鍵代碼),影響LLM回答的準確性。
- 手動控制上下文的解決方案:作者分享個人採用的方法——透過VS Code擴充功能手動選取相關文件,並將其轉為Markdown格式的提示詞,以確保提供LLM的上下文精確且完整。此方法被認為能有效改善結果。
- 開放討論的問題:
  - 手動與自動化上下文選擇的平衡:探討如何權衡效率與精確度。
  - 手動控制對本地模型的影響:詢問手動篩選是否尤其能提升本地LLM的表現。
  - 工具需求缺口:探討現有本地LLM編碼工作流程中,尚缺哪些實用工具(例如更智慧的上下文過濾或整合功能)。
總結:文章聚焦於「如何優化IDE與LLM協作時的上下文選擇機制」,並強調手動控制可能帶來的優勢,同時尋求更高效的解決方案或工具建議。
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnkhjw/context_control_for_local_llms_how_do_you_handle/
- 外部連結: https://www.reddit.com/r/ChatGPTCoding/comments/1jnkhjw/context_control_for_local_llms_how_do_you_handle/
- 發布時間: 2025-03-31 03:32:57
內容
I’ve struggled with IDE integrations (e.g., Cursor) and how they select context for the LLMs they are connected to. I have found that IDE integrations (at least currently) often include irrelevant files or miss critical code that provides essential context for the question at hand.
What I currently do, which seems to work well for me, is use a VS Code extension that concatenates all the files I have selected, bundling them into markdown-formatted prompts. I manually select the context, and it then produces a markdown-formatted text block I can paste as my context for use in the LLM.
Questions for you:
- How do you balance manual vs. automated context selection?
- Have you found manual control improves results with local models?
- What tools do you wish existed for local LLM coding workflows?
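補充示意:以下是一個極簡草稿,以獨立的 Node/TypeScript 腳本重現貼文所描述的流程:手動挑選檔案後串接成 Markdown 格式的提示詞。原作者的 VS Code 擴充功能並未公開程式碼,此處的腳本名稱與副檔名對應僅為示意假設。

```typescript
// 極簡示意(假設性腳本,非原作者的 VS Code 擴充功能):
// 將命令列指定的檔案逐一讀入,包成帶有檔名標題與語言標籤的 Markdown 區塊,
// 輸出後即可直接貼進 LLM 當作上下文。
import { readFileSync } from "node:fs";
import { extname, relative } from "node:path";

const FENCE = "`".repeat(3); // Markdown 程式碼圍欄

// 副檔名對應到圍欄的語言標籤(僅列常見幾種,屬示意假設)。
const LANG: Record<string, string> = {
  ".ts": "typescript",
  ".py": "python",
  ".java": "java",
  ".md": "markdown",
};

function bundleContext(files: string[], root: string = process.cwd()): string {
  return files
    .map((file) => {
      const body = readFileSync(file, "utf8");
      const lang = LANG[extname(file)] ?? "";
      // 每個檔案前加上相對路徑標題,讓模型知道內容來自哪個檔案。
      return `### ${relative(root, file)}\n\n${FENCE}${lang}\n${body}\n${FENCE}`;
    })
    .join("\n\n");
}

// 用法(示意):npx tsx bundle-context.ts src/app.ts src/db.ts > context.md
console.log(bundleContext(process.argv.slice(2)));
```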
討論
評論 1:
I use Aider with Gemini 2.5 Pro. This combination does a good job.
The project I'm working on is almost 100,000 lines. Aider by default sends a small map of your project (most-used classes, functions, etc) to the LLM. 2.5 Pro has a huge context window (1M tokens), which has allowed me to increase the size of the map to a really large number. I can ask questions about the code and what files are needed to make a change. Then I lower the map size when it's time to do actual code generation.
I don't know how to do the same thing with Cursor, or what the equivalent feature is called in Cursor.
評論 2:
Try Claude Code. Between the way it uses grep and the way it grabs context you tell it to (ie make this work like the following page …) it’s pretty sweet.
評論 3:
Automated context gathering often pulls in noise or misses the point entirely. I've had better results when I handpick what to feed the model, especially with local LLMs that don't handle ambiguity as well. Your markdown bundling workflow sounds solid; manual curation really does improve quality. I've been using this approach too, and recently started relying on a new tool I found called stackstudio.io to generate focused tech overviews from my codebase. It is basically a desktop app that integrates with the code and gives the option to generate clean context without overloading the prompt.
評論 4:
What is aider-rag? Your own script? My latest innovation is to add this to my 'conventions' .md file:
If prompted to "open Django app ABC", then ask to import ABC/urls.py, ABC/views.py, ABC/models.py
Those being the 'core' files for a Django 'app', which is a sub-project in Django. I couldn't see how to "script" that, but the prompt works, to my surprise.
評論 5:
So far I use automatic and tell it where to look if needed.
25. Migrating a Spring Boot 2.x project using Claude Code - Claude Code: a new approach for AI-assisted coding
該文章的核心討論主題是關於如何將一個使用 Spring Boot 2.x 的專案遷移到 Spring Boot 3.x,並特別強調利用 Claude AI(如 Claude Code) 來輔助遷移過程中的程式碼調整與問題解決。
重點包括:
- Spring Boot 2.x 到 3.x 的遷移挑戰:
  - 框架變更(如 Jakarta EE 9+ 取代 Java EE)、相依性調整、API 差異等。
- Claude AI 的應用:
  - 如何透過 Claude Code 分析程式碼、自動建議遷移修改,並處理相容性問題(例如 javax 到 jakarta 的套件轉換)。
- 具體遷移步驟:
  - 依賴管理升級、測試調整,以及如何結合 AI 工具加速遷移流程。
- 實例與最佳實踐:
  - 作者分享實際遷移經驗,並提供技術建議(如逐步驗證、測試覆蓋率的重要性)。
總結:文章結合技術遷移指南與 AI 工具的創新應用,旨在幫助開發者更高效地完成 Spring Boot 版本升級。
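補充示意:下面以一個極簡的 Node/TypeScript 腳本示意「javax 到 jakarta 套件轉換」這類純機械性的改名步驟。僅為說明用途:原文是以 Claude Code 輔助完成遷移,且實際升級還涉及相依版本、API 差異與測試,絕非單純全域取代即可;腳本中的套件清單與目錄路徑皆為假設。

```typescript
// 極簡示意:遞迴掃描 .java 檔,把已搬入 Jakarta EE 9+ 的 javax.* 套件前綴改為 jakarta.*。
// (假設性腳本,僅供說明;java.sql 等未搬遷的 javax.* 不應被改動,實際遷移請逐步編譯與測試驗證。)
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join, extname } from "node:path";

// 只列出幾個常見的已搬遷套件前綴(示意用,非完整清單)。
const MOVED_PREFIXES = [
  "javax.persistence",
  "javax.servlet",
  "javax.validation",
  "javax.annotation",
];

function migrateFile(path: string): void {
  let source = readFileSync(path, "utf8");
  for (const prefix of MOVED_PREFIXES) {
    // 將該前綴的所有出現處改為 jakarta.* 版本。
    source = source.split(prefix).join(prefix.replace("javax.", "jakarta."));
  }
  writeFileSync(path, source);
}

function walk(dir: string): void {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) walk(path);
    else if (extname(path) === ".java") migrateFile(path);
  }
}

walk(process.argv[2] ?? "src/main/java"); // 用法(示意):npx tsx migrate.ts <來源目錄>
```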
- Reddit 連結: https://reddit.com/r/ChatGPTCoding/comments/1jnvid3/migrating_a_spring_boot_2x_project_using_claude/
- 外部連結: https://itnext.io/migrating-a-spring-boot-2-x-project-using-claude-code-4a8dbe13125c?sk=90a9cac2020bde243090b8a76c4b2771
- 發布時間: 2025-03-31 12:51:40
內容
討論
無討論內容