2025-04-03-rising
- Selection method: RISING
Discussion Highlights
Below is a bulleted summary of the core discussion points of the 25 posts, with anchor links and per-item details:
#1 Vibe coding with AI feels like hiring a dev with anterograde amnesia
- The dual nature of AI coding
  - Strength: boosts everyday coding productivity
  - Pain points: no persistent memory, misreads intent, needs very precise instructions
- Concern
  - Over-reliance leaves developers with a shallow understanding of their own code
- Mixed feelings
  - Hoping AI gains stronger memory and comprehension
#2 Fiction or Reality?
- AI automation
  - Used to create accounts in bulk, with possible compliance issues
- Tone
  - Humorous hints at a technical exercise or a gray-area use case
#3 Did they NERF the new Gemini model?
- What temperature really is
  - Controls randomness, not creativity; low temperature (0) suits precision tasks
- Impact on code generation
  - High temperature leads to syntax errors and fabricated content
- The default-value trap
  - Platform defaults to high temperature (e.g., 1) cause unstable output
#4 Gemini 2.5 beyond the Free Tier
- Cost question
  - Spending analysis for heavy users (>25 requests/day)
#5 Fully Featured AI Coding Agent as MCP Server
- Tool highlights
  - Free and open source (GPL license), comparable to paid tools
- Technical approach
  - Built on a language server (not RAG); handles large codebases
#6 Is it a good idea to learn coding via Claude 3.7?
- Teaching reliability
  - AI hallucinations can mislead learners
- Risk assessment
  - Balancing efficiency against correctness of knowledge
#7 This sub is mostly full of low effort garbage now
- Community criticism
  - Complaints that "vibe coding" and marketing posts are flooding the sub
- Moderation request
  - Calls for stricter content moderation
#8 New better gemini coding model in LMarena
(details to be added)
#9 What happens when you tell an LLM that it has an iPhone next to it
- Experimental observation
  - How an LLM responds to a hypothetical scenario
- Limits of understanding
  - Probing the model's grasp of real-world objects
#10 I generated a playable chess with one prompt
- Game design
  - Player vs. AI (minimax algorithm)
- Tool comparison
  - Bolt.new (modern look) vs. Bind AI IDE (classic look)
#11 I finally figured out how to commit api keys to GitHub!
- Security criticism
  - Sharp rebuke of irresponsible API key management
#12 Strategies to Thrive as AIs get Better
(video details to be added)
#13 How to use DeepSeek deep research unlimited?
- Rate-limit problem
  - Working around "server busy" errors via an API key and Cursor
#14 How to transfer knowledge between conversations
- Structured summary template
  - Markdown format capturing conversation goals, topics, and to-dos
- Seamless continuation
  - Paste into a new chat and instruct "Continue where we left off"
#15 How to handle auth, db, subscriptions for AI agents?
- Development pain point
  - Rebuilding the user framework (auth/db/subscriptions) for every project
- Desired solution
  - Looking for standardized, ready-to-use modules
#16 Vibe debugging best practices
- Limits of AI debugging
  - Excessive guessing, missing context, side effects
- Optimization strategies
  - Staged verification, richer context, version-control checkpoints
Core Takeaways
Below is a one-sentence summary of each post, generated from its title and content:
- "Vibe coding" with AI feels like hiring a dev with anterograde amnesia
  - AI-assisted coding boosts efficiency, but its lack of memory and understanding leads to repeated fixes and misread code intent.
- Fiction or Reality?
  - Explores the technical feasibility and potential compliance risks of using AI to automate bulk account creation.
- Did they NERF the new Gemini model? Coding genius yesterday, total idiot today?
  - Poorly chosen temperature settings (such as a high default) are the main cause of unstable AI coding output; low temperature is recommended for precision tasks.
- Gemini 2.5 beyond the Free Tier
  - Analyzes the potential costs for heavy Gemini 2.5 users (25+ requests per day) and whether paid plans pay off.
- Fully Featured AI Coding Agent as MCP Server
  - The open-source tool Serena offers free, efficient code analysis built on a language server, with multi-platform integration.
- Is it a good idea to learn coding via Claude 3.7?
  - Assesses the reliability of AI for teaching C#, warning that hallucinations may confuse beginners.
- This sub is mostly full of low effort garbage now
  - Criticizes the forum for being flooded with hollow "vibe coding" and marketing content, calling on moderators to step in.
- New better gemini coding model in LMarena
  - (Insufficient content; presumably a comparison or showcase of a new Gemini coding model.)
- What happens when you tell an LLM that it has an iPhone next to it
  - An experiment on how an LLM reacts to a hypothetical scenario (an iPhone beside it), probing its limits in reasoning about real-world objects.
- I generated a playable chess with one prompt (two diff. platforms)
  - Compares the visual style and core functionality of AI chess games built with Bolt.new and Bind AI IDE.
- I finally figured out how to commit api keys to GitHub!
  - A sarcastic callout of a serious security lapse in key management.
- Strategies to Thrive as AIs get Better - Especially for programmers
  - (Insufficient content; presumably strategies for programmers to stay competitive in the AI era.)
- How to use DeepSeek deep research unlimited?
  - Asks how to bypass request limits by using an API key together with Cursor.
- How to transfer knowledge from one conversation to another
  - Proposes a structured summary template so ChatGPT conversations can continue seamlessly without repeated explanations.
- How do you handle auth, db, subscriptions, AI integration for AI agent coding?
  - Discusses the need for a standardized way to quickly wire up the user framework (auth, database, payments, AI integration).
- Vibe debugging best practices that gets me unstuck.
  - Analyzes five pain points of AI debugging (e.g., excessive guessing, missing context) and proposes staged-verification strategies.
- CAMEL DatabaseAgent: A Revolutionary Tool for Natural Language to SQL
  - The open-source CAMEL DatabaseAgent lets non-technical users query databases in natural language, reducing dependence on technical teams.
- tmuxify - automatically start your tmux dev environment with flexible templates
  - The open-source tool tmuxify automates tmux workspace setup via YAML configuration, improving terminal efficiency.
- Created an office simulator for VibeJam - Meeting Dash
  - (Insufficient content; presumably a game or simulator satirizing unproductive workplace meetings.)
- For people not using cursor etc., how do you give the LLM the latest version info?
  - Discusses the risk of outdated code from models with old knowledge cutoffs and explores manually feeding in the latest documentation.
- Cursor like diff viewer in roo and other enhancements
  - (Insufficient content; presumably feature enhancements including a Cursor-like diff viewer.)
- Experienced systems engineer trying their hand at a website depending completely on copilot
  - A backend engineer tests Copilot for building a frontend app, praising greenfield speed but doubting its fit for maintaining existing code.
- Interview with Vibe Coder in 2025
  - Programmers' uneasy resonance with humor that reflects real professional pressure a little too accurately.
- How to use DeepSeek Deep Research together with Claude 3.7 for best results?
Table of Contents
- [1. "Vibe coding" with AI feels like hiring a dev with anterograde amnesia](#1-vibe-coding-with-ai-feels-like-hiring-a-dev)
- [2. Fiction or Reality?](#2-fiction-or-reality)
- [3. Did they NERF the new Gemini model? Coding genius yesterday, total idiot today? The fix might be way simpler than you think. The most important setting for coding: actually explained clearly, in plain English. NOT a clickbait link but real answers.](#3-did-they-nerf-the-new-gemini-model-coding-ge)
- [4. Gemini 2.5 beyond the Free Tier](#4-gemini-2-5-beyond-the-free-tier)
- [5. Fully Featured AI Coding Agent as MCP Server](#5-fully-featured-ai-coding-agent-as-mcp-server)
- [6. Is it a good idea to learn coding via Claude 3.7?](#6-is-it-a-good-idea-to-learn-coding-via-claude)
- [7. This sub is mostly full of low effort garbage now](#7-this-sub-is-mostly-full-of-low-effort-garbag)
- [8. New better gemini coding model in LMarena](#8-new-better-gemini-coding-model-in-lmarena)
- [9. What happens when you tell an LLM that it has an iPhone next to it](#9-what-happens-when-you-tell-an-llm-that-it-ha)
- [10. I generated a playable chess with one prompt (two diff. platforms)](#10-i-generated-a-playable-chess-with-one-promp)
- [11. I finally figured out how to commit `api` keys to GitHub!](#11-i-finally-figured-out-how-to-commit-api-k)
- [12. Strategies to Thrive as AIs get Better - Especially for programmers \[Internet of Bugs\]](#12-strategies-to-thrive-as-ais-get-better-espe)
- [13. How to use DeepSeek deep research unlimited?](#13-how-to-use-deepseek-deep-research-unlimited)
- [14. How to transfer knowledge from one conversation to another](#14-how-to-transfer-knowledge-from-one-conversa)
- [15. How do you handle auth, db, subscriptions, AI integration for AI agent coding?](#15-how-do-you-handle-auth-db-subscriptions-ai)
- [16. Vibe debugging best practices that gets me unstuck.](#16-vibe-debugging-best-practices-that-gets-me)
- [17. CAMEL DatabaseAgent: A Revolutionary Tool for Natural Language to SQL](#17-camel-databaseagent-a-revolutionary-tool-fo)
- [18. tmuxify - automatically start your tmux dev environment with flexible templates](#18-tmuxify-automatically-start-your-tmux-dev-e)
- [19. Created an office simulator for VibeJam - Meeting Dash - try to get work done between endless meetings](#19-created-an-office-simulator-for-vibejam-mee)
- [20. For people not using cursor etc., how do you give the LLM the latest version info?](#20-for-people-not-using-cursor-etc-how-do-you)
- [21. Cursor like diff viewer in roo and other enhancements](#21-cursor-like-diff-viewer-in-roo-and-other-enhanc)
- [22. Experienced systems engineer trying their hand at a website depending completely on copilot](#22-experienced-systems-engineer-trying-their-h)
- [23. Interview with Vibe Coder in 2025](#23-interview-with-vibe-coder-in-2025)
- [24. How to use DeepSeek Deep Research together with Claude 3.7 for best results?](#24-how-to-use-deepseek-deep-research-together)
- [25. About how many lines of production code were you writing/generating a month before AI and are now writing/generating with help of AI?](#25-about-how-many-lines-of-production-code-wer)
---
## 1. "Vibe coding" with AI feels like hiring a dev with anterograde amnesia {#1-vibe-coding-with-ai-feels-like-hiring-a-dev}
The core theme of this post:
**Mixed feelings about AI-assisted coding ("vibe coding"): the tension between productivity gains and the tool's limitations.**
Key points:
1. **Strengths of AI tools**: they raise productivity and simplify everyday coding work.
2. **Key pain points**:
   - No persistent memory (it may forget or overwrite a fix you made earlier).
   - No grasp of what the code is actually for, leading to unnecessary changes or broken functionality.
   - Needs very precise instructions, otherwise it pulls in too much context and over-edits.
3. **Caution**:
   - Relying entirely on AI can leave developers with a poor understanding of their own code; learn the basics or consult someone who knows them.
4. **Mixed feelings**:
   - Appreciation of AI's convenience alongside frustration with its limits, and a wish for stronger "memory" and "understanding".
Overall, the author affirms AI's value while stressing that developers still need fundamentals to avoid the risks.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpqoqo/vibe_coding_with_ai_feels_like_hiring_a_dev_with/](https://reddit.com/r/ChatGPTCoding/comments/1jpqoqo/vibe_coding_with_ai_feels_like_hiring_a_dev_with/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpqoqo/vibe_coding_with_ai_feels_like_hiring_a_dev_with/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpqoqo/vibe_coding_with_ai_feels_like_hiring_a_dev_with/)
- **Posted**: 2025-04-02 22:58:48
### Content
I really like the term "Vibe coding". I love AI, and I use it daily to boost productivity and make life a little easier. But at the same time, I often feel stuck between admiration and frustration.
It works great... until the first bug.
Then, it starts forgetting things like a developer with a 5-min memory limit. You fix something manually, and when you ask the AI to help again, it might just delete your fix. Or it changes code that was working fine because it doesn't really know why that code was there in the first place.
Unless you spoon-feed it the exact snippet that needs updating, it tends to grab too much context and suddenly, it's rewriting things that didn't need to change. Each interaction feels like talking to a different developer who just joined the project and never saw the earlier commits.
So yeah, vibe coding is cool. But sometimes I wish my coding partner had just a bit more memory, or a bit more... understanding.
UPDATE: I don't want to spread any hate here, AI is great.
Just wanted to say: for anyone writing apps without really knowing what the code does, please try to learn a little about how it works, or ask someone who does to take a look. But of course, in the end, everything is totally up to you.
### Discussion
**Comment 1**:
And who constantly gaslights you lol. "Oh I see the problem, it's fixed now".
**Comment 2**:
Back in the day, when I had devs to do the grunt coding, I found you had to be very clear, precise, and spoon-feed them to get what you wanted, bugs or otherwise. To me, using AI is very much like this but better. With AI you get what you got with the human dev, but AI is always available, doesn't complain about changes, and doesn't give you attitude. As far as AI forgetting or hallucinating, well, to be honest I got that with the humans too... ;-)
**Comment 3**:
It's almost as if LLMs are power tools meant for power users, and everyone else is just waaayyy in over their heads.
It's like watching someone who just got a power drill thinking they can suddenly start building a house with no understanding of the fundamentals of what it means to build something in the first place.
**Comment 4**:
I think the worst is when it does stupid stuff like duplicate code, then you're telling it to fix stuff and all it does is update unused code.
**Comment 5**:
It feels, to me, like having my own green, just-out-of-school junior developer that has never done anything in the real world but thinks it's about to redevelop Facebook with his buds.
---
## 2. Fiction or Reality? {#2-fiction-or-reality}
The core theme of this post is **"using AI to automate the creation of multiple accounts"**.
The original post is only two short sentences, but from context the focus appears to be:
1. **AI automation**: using AI to simplify or replace a manual workflow.
2. **Bulk account creation**: possibly for efficiency, testing, or other specific needs, with compliance concerns (such as platform rules) to keep in mind.
The tone is humorous or suggestive (note the ";)"), hinting at either a technical exercise or a gray-area use case.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpimhk/fiction_or_reality/](https://reddit.com/r/ChatGPTCoding/comments/1jpimhk/fiction_or_reality/)
- **External link**: [https://i.redd.it/perwzt2cfdse1.jpeg](https://i.redd.it/perwzt2cfdse1.jpeg)
- **Posted**: 2025-04-02 15:06:23
### Content
Interesting
More like automate with AI multiple account creations ;)
### Discussion
**Comment 1**:
Interesting
**Comment 2**:
More like automate with AI multiple account creations ;)
---
## 3. Did they NERF the new Gemini model? Coding genius yesterday, total idiot today? The fix might be way simpler than you think. The most important setting for coding: actually explained clearly, in plain English. NOT a clickbait link but real answers. {#3-did-they-nerf-the-new-gemini-model-coding-ge}
The core theme of this post is **the role and practical impact of the temperature parameter in large language models (LLMs)**, especially for tasks like code generation that demand precise output. Key points:
1. **What temperature really is**
   - Temperature is not the "creativity control" it is commonly mistaken for; it is a **regulator of randomness**. It determines how the model samples the next token from its probability distribution: at low temperature (e.g., 0) the model deterministically picks the highest-probability token, while higher temperatures introduce randomness and can select lower-probability, "non-optimal" options.
2. **Impact on code generation**
   - Code demands strict accuracy; at high temperature, random picks produce syntax errors, muddled logic, and even fabricated content (such as references to files that do not exist). The author's analogy of a top programmer forced to answer by drawing from a hat illustrates how temperature destabilizes output quality.
3. **The default-value trap**
   - Many platforms (such as Google AI Studio) default the temperature to 1 (maximum randomness), so users mistake parameter misconfiguration for fluctuating model ability. The author recommends **temperature 0** for coding tasks to get the most reliable output.
4. **The underlying mechanism**
   - Because language models are **autoregressive**, every randomly chosen wrong token compounds into "nonsense output"; especially above temperature 1, once a low-probability token is picked, the model keeps building on the mistake, creating a vicious cycle.
5. **When to use which setting**
   - Creative tasks (writing, brainstorming) can try higher temperatures to explore novelty, but precision tasks such as programming should stay low. The author stresses that high-temperature "creativity" is really a byproduct of randomness and usually does more harm than good.
6. **Why it is called "temperature"**
   - Borrowed from thermodynamics: low temperature is like ice (stable and ordered), high temperature like steam (chaotic and random), mirroring how controllable the model's output is.
**Conclusion**: misunderstanding and misconfiguring the temperature parameter is the main reason many users see unstable output quality; understanding the mechanism makes LLM use far more effective, especially in precision-critical applications.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jph2wu/did_they_nerf_the_new_gemini_model_coding_genius/](https://reddit.com/r/ChatGPTCoding/comments/1jph2wu/did_they_nerf_the_new_gemini_model_coding_genius/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jph2wu/did_they_nerf_the_new_gemini_model_coding_genius/](https://www.reddit.com/r/ChatGPTCoding/comments/1jph2wu/did_they_nerf_the_new_gemini_model_coding_genius/)
- **Posted**: 2025-04-02 13:18:20
### Content
EDIT: Since I was accused of posting generated content: This is from my human mind and experience. I spent the past 3 hours typing this all out by hand, and then running it through AI for spelling, grammar, and formatting, but the ideas, analogy, and almost every word were written by me sitting at my computer taking bathroom and snack breaks. Gained through several years of professional and personal experience working with LLMs, and I genuinely believe it will help some people on here who might be struggling and not realize why due to default recommended settings.
^((TL;DR is at the bottom! Yes, this is practically a TED talk but worth it))
----
Every day, I see threads popping up with frustrated users convinced that Anthropic or Google "nerfed" their favorite new model. "It was a coding genius yesterday, and today it's a total moron!" Sound familiar? Just this morning, someone posted: "Look how they massacred my boy (Gemini 2.5)!" after the model suddenly went from effortlessly one-shotting tasks to spitting out nonsense code referencing files that don't even exist.
But here's the thing... nobody nerfed anything. Outside of the inherent variability of your prompts themselves (input), the real culprit is probably the simplest thing imaginable, and it's something most people completely misunderstand or don't bother to even change from default: TEMPERATURE.
Part of the confusion comes directly from how even Google describes temperature in their own AI Studio interface - as "Creativity allowed in the responses." This makes it sound like you're giving the model room to think or be clever. But that's not what's happening at all.
Unlike creative writing, where an unexpected word choice might be subjectively interesting or even brilliant, coding is fundamentally binary - it either works or it doesn't. A single "creative" token can lead directly to syntax errors or code that simply won't execute. Google's explanation misses this crucial distinction, leading users to inadvertently introduce randomness into tasks where precision is essential.
Temperature isn't about creativity at all - it's about something much more fundamental that affects how the model selects each word.
YOU MIGHT THINK YOU UNDERSTAND WHAT TEMPERATURE IS OR DOES, BUT DON'T BE SO SURE:
I want to clear this up in the simplest way I can think of.
Imagine this scenario: You're wrestling with a really nasty bug in your code. You're stuck, you're frustrated, you're about to toss your laptop out the window. But somehow, you've managed to get direct access to the best programmer on the planet - an absolute coding wizard (human stand-in for Gemini 2.5 Pro, Claude Sonnet 3.7, etc.). You hand them your broken script, explain the problem, and beg them to fix it.
If your temperature setting is cranked down to 0, here's essentially what you're telling this coding genius:
>"Okay, you've seen the code, you understand my issue. Give me EXACTLY what you think is the SINGLE most likely fix - the one you're absolutely most confident in."
That's it. The expert carefully evaluates your problem and hands you the solution predicted to have the highest probability of being correct, based on their vast knowledge. Usually, for coding tasks, this is exactly what you want: their single most confident prediction.
But what if you don't stick to zero? Let's say you crank it just a bit - up to 0.2.
Suddenly, the conversation changes. It's as if you're interrupting this expert coding wizard just as he's about to confidently hand you his top solution, saying:
>"Hang on a sec - before you give me your absolute #1 solution, could you instead jot down your top two or three best ideas, toss them into a hat, shake 'em around, and then randomly draw one? Yeah, 's just roll with whatever comes out."
Instead of directly getting the best answer, you're adding a little randomness to the process - but still among his top suggestions.
Let's dial it up further - to temperature 0.5. Now your request ge``` even more adventurous:
>"Alright, expert, broaden the scope a bit more. Write down not just your top solutions, but also those mid-tier ones, the 'maybe-this-will-work?' options too. Put them ALL in the hat, mix 'em up, and draw one at random."
And all the way up at temperature = 1? Now you're really flying by the seat of your pan```. At this point, you're basically saying:
>"Tell you what - forget being careful. Write down every possible solution you can think of - from your most brilliant ideas, down to the really obscure ones that barely have a snowball's chance in hell of working. Every last one. Toss 'em all in that hat, mix it thoroughly, and pull one out. Let's hit the 'I'm Feeling Lucky' button and see what happens!"
At higher temperatures, you open up the answer lottery pool wider and wider, introducing more randomness and chaos into the process.
Now, here's the part that actually causes it to act like it just got demoted to 3rd-grade level intellect:
This expert isn't doing the lottery thing just once for the whole answer. Nope! They're forced through this entire "write-it-down-toss-it-in-hat-pick-one-randomly" process again and again, for every single word (technically, every token) they write!
Why does that matter so much? Because language models are autoregressive and feed-forward. That's a fancy way of saying they generate tokens one by one, each new token based entirely on the tokens written before it.
Importantly, they never look back and reconsider if the previous token was actually a solid choice. Once a token is chosen - no matter how wildly improbable it was - they confidently assume it was right and build every subsequent token from that point forward like it was absolute truth.
So imagine; at temperature 1, if the expert randomly draws a slightly "off" word early in the script, they don't pause or correct it. Nope - they just roll with that mistake, confidently building each next token atop that shaky foundation. As a result, one unlucky pick can snowball into a cascade of confused logic and nonsense.
Want to see this chaos unfold instantly and truly get it? Try this:
Take a recent prompt, especially for coding, and crank the temperature way up, past 1, maybe even towards 1.5 or 2 (if your tool allows). Watch what happens.
At temperatures above 1, the probability distribution flattens dramatically. This makes the model much more likely to select bizarre, low-probability words it would never pick at lower settings. And because all it knows is to FEED FORWARD without ever looking back to correct course, one weird choice forces the next, often spiraling into repetitive loops or complete gibberish... an unrecoverable tailspin of nonsense.
This experiment hammers home why temperature 1 is often the practical limit for any kind of coherence. Anything higher is like intentionally buying a lottery ticket you know is garbage. And that's the kind of randomness you might be accidentally injecting into your coding workflow if you're using high default settings.
That's why your coding assistant can seem like a genius one moment (it got lucky draws, or you used temperature 0), and then suddenly spit out absolute garbage - like something a first-year student would laugh at - because it hit a bad streak of random picks when temperature was set high. It's not suddenly "dumber"; it's just obediently building forward on random draws you forced it to make.
For creative writing or brainstorming, making this legendary expert coder pull random slips from a hat might occasionally yield something surprisingly clever or original. But for programming, forcing this lottery approach on every token is usually a terrible gamble. You might occasionally get lucky and uncover a brilliant fix that the model wouldn't consider at zero. Far more often, though, you're just raising the odds that you'll introduce bugs, confusion, or outright nonsense.
Now, ever wonder why we even call it "temperature"? The term actually comes straight from physics - specifically from thermodynamics. At low temperature (like with ice), molecules are stable, orderly, predictable. At high temperature (like steam), they move chaotically, unpredictably - with tons of entropy. Language models simply borrowed this analogy: low temperature means stable, predictable results; high temperature means randomness, chaos, and unpredictability.
TL;DR - Temperature is a "Chaos Dial," Not a "Creativity Dial"
- Common misconception: Temperature doesn't make the model more clever, thoughtful, or creative. It simply controls how randomly the model samples from its probability distribution. What we perceive as "creativity" is often just a byproduct of introducing controlled randomness, sometimes yielding interesting results but frequently producing nonsense.
- For precise tasks like coding, stay at temperature 0 most of the time. It gives you the expert's single best, most confident answer... which is exactly what you typically need for reliable, functioning code.
- Only crank the temperature higher if you've tried zero and it just isn't working - or if you specifically want to roll the dice and explore less likely, more novel solutions. Just know that you're basically gambling - you're hitting the Google "I'm Feeling Lucky" button. Sometimes you'll strike genius, but more likely you'll just introduce bugs and chaos into your work.
- Important to know: Google AI Studio defaults to temperature **1** (maximum chaos) unless you manually change it. Many other web implementations either don't let you adjust temperature at all or default to around 0.7 - regardless of whether you're coding or creative writing. This explains why the same model can seem brilliant one moment and produce nonsense the next - even when your prompts are similar. This is why coding in the API works best.
- See the math in action: Some APIs (like OpenAI's) let you view logprobs. This visualizes the ranked list of possible next words and their probabilities before temperature influences the choice, clearly showing how higher temps increase the chance of picking less likely (and potentially nonsensical) options. (see example image: LOGPROBS)
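To make the "chaos dial" concrete, here is a minimal sketch (not from the post) of temperature-scaled sampling over a toy next-token distribution; the vocabulary and logit values are invented for illustration.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample one token index from logits after temperature scaling.

    Temperature near 0 approaches greedy decoding (always the top token);
    higher temperatures flatten the distribution, so low-probability
    tokens get picked more and more often.
    """
    if temperature <= 1e-6:                      # treat ~0 as greedy decoding
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]   # divide logits by T
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    probs = [e / sum(exps) for e in exps]        # softmax over the scaled logits
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy next-token candidates after "return x + ":
vocab  = ["y", "1", "banana"]
logits = [4.0, 2.0, -1.0]                        # "y" is by far the most likely

for T in (0.0, 0.5, 1.0, 1.5):
    picks = [vocab[sample_with_temperature(logits, T, random.Random(seed))]
             for seed in range(1000)]
    print(f"T={T}:", {tok: picks.count(tok) for tok in vocab})
    # As T rises, "banana" shows up more and more often.
```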
### Discussion
**Comment 1**:
Your analogies are flawed here (a bit anyway). There is a very good reason why the modern models all do better on tests if they can take the average of multiple responses or the best of them at default temperatures.
Temperature only predicts the next best token (not the best overall response!), so the analogy is better to say: You hire an expert guide to lead you through a forest. At temperature 0, whenever they pick a path they are more likely to stay on that path no matter what, and they will pick the same path each trip. They can find one path. Sometimes you want your guide to just pick a trail with confidence and do the same again and again. Sure.
At a higher temperature, they have the ability to take a few steps down a path and then cut across the brush to a different also-good path, averaging in the same direction, but without getting stuck only using the single path. This allows it to regularly avoid the local maxima more often rather than getting stuck on what sounds most plausible, with more ability to correct itself. You get a little creativity, but you also avoid it sticking to hallucinations, common misconceptions, etc. (especially with so much of its training data being written as if it is correct and highly confident).
With modern powerful language models, I would recommend you keep temperature at the defaults and try multiple responses unless you need pure deterministic responses for testing and the like.
Do not underestimate the power of chaos. Adding a little popcorn noise to a system can boost signals and avoid getting trapped in local maxima that might be far from the best answer.
**Comment 2**:
Excellent write-up with a lot of detail. Thank you for the time.
Love the tone.
**Comment 3**:
That's a good reminder to give that setting a thought. Way too many times I roll with the default of whatever tool I am using, not remembering to change it every time. I was about to run some fine-tuning tests anyway, this is a good reminder to also consider the temperature for evaluation.
**Comment 4**:
>Importantly, they never look back and reconsider if the previous token was actually a solid choice.
I get what you're saying but this isn't actually true, especially for reasoning models which are often specifically trained in a way to encourage this behavior.
Even non-reasoning models do it, leading to posts like "wow ChatGPT changed its mind mid response, what a maroon", but it's actually quite nice that they can do this.
**Comment 5**:
Ah fuck I've been thinking temperature 1.0 is supposed to be 'baseline normal' this entire time.
Thank you so much for posting this, I'm going to set temp down to 0 any time I want to code and get accurate answers from now on.
---
## 4. Gemini 2.5 beyond the Free Tier {#4-gemini-2-5-beyond-the-free-tier}
The core theme of this post is **the daily cost of using Gemini 2.5**, specifically for users who rely on it full-time and exceed 25 requests per day.
Focus points:
1. **Usage scenario**: the real-world spend of heavy users (more than 25 requests/day).
2. **Cost calculation**: what such users pay per day and whether a paid plan is worth it.
(Note: the original post is brief, but the direction of the discussion is clear from context.)
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpt39y/gemini_25_beyond_the_free_tier/](https://reddit.com/r/ChatGPTCoding/comments/1jpt39y/gemini_25_beyond_the_free_tier/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpt39y/gemini_25_beyond_the_free_tier/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpt39y/gemini_25_beyond_the_free_tier/)
- **Posted**: 2025-04-03 00:36:35
### Content
For those using Gemini 2.5 full-time during the day and exceeding 25 requests per day:
What are your daily costs?
### Discussion
**Comment 1**:
You can't pay for Gemini exp. You can only pay for Flash 2.0 and below. That means there is $0 in daily costs.
**Comment 2**:
0.00 $
**Comment 3**:
Wait, are you guys using it with Google's studio or not?
---
## 5. Fully Featured AI Coding Agent as MCP Server {#5-fully-featured-ai-coding-agent-as-mcp-server}
The core theme of this post:
**An introduction to Serena, a free and capable code-analysis agent, covering its technical approach and how to use it.**
Key points:
1. **Positioning**:
   - Offers code-analysis capabilities on par with or better than paid tools (such as Windsurf's Cascade or Cursor's agent), completely free.
   - Can understand and analyze large codebases.
2. **Technical approach**:
   - Uses a **language server** rather than RAG (retrieval-augmented generation) to analyze code, improving accuracy and efficiency.
3. **Usage scenarios**:
   - Can run as an **MCP server**, paired with free tools such as Claude Desktop.
   - Also works with Gemini, which requires a Google Cloud API key (new accounts get a $300 credit).
4. **Open source and licensing**:
   - Released under the **GPL license**, with the code public on GitHub (link included).
Summary: the post promotes an open-source, efficient, free code-analysis agent, highlighting its technical advantages and flexible deployment options to attract developers to try it or contribute.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpoara/fully_featured_ai_coding_agent_as_mcp_server/](https://reddit.com/r/ChatGPTCoding/comments/1jpoara/fully_featured_ai_coding_agent_as_mcp_server/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpoara/fully_featured_ai_coding_agent_as_mcp_server/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpoara/fully_featured_ai_coding_agent_as_mcp_server/)
- **Posted**: 2025-04-02 21:16:05
### Content
We've been working like hell on this one: a fully capable Agent, as good or better than Windsurf's Cascade or Cursor's agent - but can be used for free.
It can run as an MCP server, so you can use it for free with Claude Desktop, and it can still fully understand a code base, even a very large one. We did this by using a language server instead of RAG to analyze code.
Can also run it on Gemini, but you'll need an API key for that. With a new Google Cloud account you'll get $300 as a gift that you can use on API credits.
Check it out, super easy to run, GPL license:
https://github.com/oraios/serena
### Discussion
**Comment 1**:
Where is a good place to learn about MCP?
**Comment 2**:
Are there options to ignore files/folders? e.g: .clineignore
---
## 6. Is it a good idea to learn coding via Claude 3.7? {#6-is-it-a-good-idea-to-learn-coding-via-claude}
The core theme of this post is **assessing the reliability and accuracy of AI as a teaching tool for programming (specifically C#)**, focused on two key questions:
1. **Teaching ability**
   - Is AI suited to the role of teaching "programming fundamentals plus a specific language (C#)"?
   - Can it convey a structured body of knowledge effectively?
2. **Potential risks**
   - How likely is the AI to hallucinate and teach incorrect knowledge?
   - Would such errors damage the learner's mental model (e.g., by confusing concepts)?
At its core, this is a question about **the trustworthiness of AI-assisted education**, particularly in a field as logically rigorous as programming.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpr4o1/is_it_a_good_idea_to_learn_coding_via_claude_37/](https://reddit.com/r/ChatGPTCoding/comments/1jpr4o1/is_it_a_good_idea_to_learn_coding_via_claude_37/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpr4o1/is_it_a_good_idea_to_learn_coding_via_claude_37/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpr4o1/is_it_a_good_idea_to_learn_coding_via_claude_37/)
- **Posted**: 2025-04-02 23:16:38
### Content
If I ask it to teach me programming fundamentals, and also a language, in my case, C#, would it be a good teacher? Or would it hallucinate a lot and mess up my knowledge?
### Discussion
**Comment 1**:
You're not going to learn how to code. But you will learn how to develop applications.
I think I know what's going to be more valuable in the future. But don't confuse it with coding.
It's a new thing we all will have to adapt to.
**Comment 2**:
No. It makes too many mistakes, and when you ask it questions that are a gap in its knowledge it will extrapolate incorrectly.
Learn fundamentals through books and Udemy courses without AI support.
**Comment 3**:
Just try it out on a simple project or to learn something specific. The thing about programming is that if it doesn't work, one knows it is not working. So if an AI doesn't do it well, it will be obvious very quickly.
Ask for a C# hello world and see if it works.
With all that said, I've had a few bad instructors in my life, and while I had to endure them, I continued to learn on my own by reading the book, talking to people. Consider asking several AIs for a quick course on C#, see what they offer, and maybe even 'learn' from several AIs at the same time, until you find the one you like best (you learn the most, the programs work).
**Comment 4**:
As long as you take a hands-on approach and TEST everything, it should be fine. You'll catch any hallucinations instantly, and seeing the code in action will help reinforce what you learn.
**Comment 5**:
Depends on how you prompt it. I've written a post that explains this and how you can learn by asking it to give you real-world tasks.
Refer to this post to get more of an idea: https://www.reddit.com/r/csMajors/s/56dB3smGOJ
---
## 7. This sub is mostly full of low effort garbage now {#7-this-sub-is-mostly-full-of-low-effort-garbag}
The core theme of this post is dissatisfaction with the "vibe coding" and marketing-heavy content flooding the subreddit, along with a call for the moderators to step up and clean these posts out.
Key points:
1. **Content being criticized**: "vibe coding" posts (arguably hollow or purely subjective technical chatter) and marketing-driven posts.
2. **Moderation request**: a demand that the mods intervene actively to preserve the quality of discussion.
3. **Tone**: blunt wording that reflects the author's frustration with the current state of the sub.
Summary: a complaint about community content quality and a call for cleanup; the underlying issue is the flood of low-value discussion and commercialized content.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpcme8/this_sub_is_mostly_full_of_low_effort_garbage_now/](https://reddit.com/r/ChatGPTCoding/comments/1jpcme8/this_sub_is_mostly_full_of_low_effort_garbage_now/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpcme8/this_sub_is_mostly_full_of_low_effort_garbage_now/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpcme8/this_sub_is_mostly_full_of_low_effort_garbage_now/)
- **Posted**: 2025-04-02 09:25:20
### Content
Admittedly including this post.
I wish the mods would step up and clean up all these vibe coding and marketing posts in here.
### Discussion
**Comment 1**:
Every single AI community on every site or chat is like this right now. Unless it's private with only people you know, there's somebody trying to sell you shit in the most obnoxious and obvious way possible.
I'm in about a dozen locations between subreddits and Discord channels and forums, every single one has huge spam problems and issues with exceptionally low quality of discussion.
**Comment 2**:
YES! A thousand times yes.
I came here because it was the least bad subreddit to talk about AI codegen. Now there's really nowhere that's halfway pleasant.
It also gets annoying how a marketing post disguises itself as a lone programmer discovering something. "Hey, guys. Check out this new agent I just discovered!"
**Comment 3**:
Yeah, that happens to most subs sooner or later.
**Comment 4**:
Welcome to modern software engineering
**Comment 5**:
But I can't wait to read another long AI-generated clickbait piece ending with an ad for nexus trade
---
## 8. New better gemini coding model in LMarena {#8-new-better-gemini-coding-model-in-lmarena}
(The post links to a Reddit image gallery with no accompanying text, so no summary could be generated. Based on the title, it appears to showcase a new Gemini coding model on LMArena.)
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpuq4q/new_better_gemini_coding_model_in_lmarena/](https://reddit.com/r/ChatGPTCoding/comments/1jpuq4q/new_better_gemini_coding_model_in_lmarena/)
- **External link**: [https://www.reddit.com/gallery/1jpuq4q](https://www.reddit.com/gallery/1jpuq4q)
- **Posted**: 2025-04-03 01:40:54
### Content
Link: [https://www.reddit.com/gallery/1jpuq4q](https://www.reddit.com/gallery/1jpuq4q)
### Discussion
No comments
---
## 9. What happens when you tell an LLM that it has an iPhone next to it {#9-what-happens-when-you-tell-an-llm-that-it-ha}
The core theme of this post is what happens, and why, when you tell a large language model (LLM) that there is an iPhone sitting next to it. The author experiments with how the model behaves in this hypothetical scenario, analyzes the logic and limits of its responses, and uses this to discuss how well language models understand real-world objects and situations, along with the implications for AI development and applications.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpu7dj/what_happens_when_you_tell_an_llm_that_it_has_an/](https://reddit.com/r/ChatGPTCoding/comments/1jpu7dj/what_happens_when_you_tell_an_llm_that_it_has_an/)
- **External link**: [https://medium.com/@austin-starks/what-happens-when-you-tell-an-llm-it-has-an-iphone-next-to-it-01a82c880a56](https://medium.com/@austin-starks/what-happens-when-you-tell-an-llm-it-has-an-iphone-next-to-it-01a82c880a56)
- **Posted**: 2025-04-03 01:20:52
### Content
Link: [https://medium.com/@austin-starks/what-happens-when-you-tell-an-llm-it-has-an-iphone-next-to-it-01a82c880a56](https://medium.com/@austin-starks/what-happens-when-you-tell-an-llm-it-has-an-iphone-next-to-it-01a82c880a56)
### Discussion
No comments
---
## 10. I generated a playable chess with one prompt (two diff. platforms) {#10-i-generated-a-playable-chess-with-one-promp}
The core theme of this post:
**Building an interactive chess game where the player (White) faces a CPU (Black) driven by an AI strategy, and comparing the results and visual styles produced by two development tools (Bolt.new and Bind AI IDE).**
Key points:
1. **Game design requirements**:
   - Player vs. AI, with the AI using an advanced strategy (minimax or alpha-beta pruning) to make intelligent decisions.
   - Every move shown in standard algebraic notation, with the final result (checkmate, stalemate, draw) clearly displayed at the end.
2. **Tool comparison**:
   - **Bolt.new**: a more modern-looking interface.
   - **Bind AI IDE**: a more classic look, though the AI core behaves much the same in both.
3. **AI strength limits**:
   - The author notes the built-in AI is mediocre and that external tools would be needed to make it stronger.
In short, the post focuses on the technical implementation of an interactive chess game, the visual and functional differences between the two tools, and the practical limits of the AI opponent.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpqpm2/i_generated_a_playable_chess_with_one_prompt_two/](https://reddit.com/r/ChatGPTCoding/comments/1jpqpm2/i_generated_a_playable_chess_with_one_prompt_two/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpqpm2/i_generated_a_playable_chess_with_one_prompt_two/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpqpm2/i_generated_a_playable_chess_with_one_prompt_two/)
- **Posted**: 2025-04-02 22:59:42
### Content
PROMPT: Generate an interactive chess game where the user plays white and the CPU plays black. The CPU should use an advanced strategy and evaluate moves based on common chess AI techniques like minimax or alpha-beta pruning, to make intelligent decisions. Each move should be presented in standard algebraic notation, and after the user's move, the CPU should respond with its best calculated move. The game should continue until a checkmate, stalemate, or draw is reached, with the final result clearly displayed at the end of the game.
I used Bolt.new and Bind AI IDE (yeah, I have the early access) and here's what the results looked like;
Bolt.new
It's more of a modern look.
Bind AI IDE
(opened within the Bind AI IDE)
This one's more like the classic look.
The 'AI' behind the CPU was largely the same between the two, and it wasn't very good tbh and that's expected unless you integrate some external tools.
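For reference, the "minimax or alpha-beta pruning" the prompt asks for boils down to the recursion sketched below. This is a generic, game-agnostic sketch, not code from either platform; the `GameState` interface (is_terminal, evaluate, legal_moves, apply) is a hypothetical assumption for illustration.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a generic game state.

    `state` is assumed to expose is_terminal(), evaluate() (score from the
    maximizing player's point of view), legal_moves(), and apply(move).
    """
    if depth == 0 or state.is_terminal():
        return state.evaluate(), None

    best_move = None
    if maximizing:
        best = -math.inf
        for move in state.legal_moves():
            score, _ = alphabeta(state.apply(move), depth - 1, alpha, beta, False)
            if score > best:
                best, best_move = score, move
            alpha = max(alpha, best)
            if beta <= alpha:          # opponent already has a better option: prune
                break
    else:
        best = math.inf
        for move in state.legal_moves():
            score, _ = alphabeta(state.apply(move), depth - 1, alpha, beta, True)
            if score < best:
                best, best_move = score, move
            beta = min(beta, best)
            if beta <= alpha:
                break
    return best, best_move

# Hypothetical usage for the CPU (Black, minimizing White's score):
# score, cpu_move = alphabeta(position, depth=3, alpha=-math.inf, beta=math.inf,
#                             maximizing=False)
```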
### Discussion
No comments
---
## 11. I finally figured out how to commit `api` keys to GitHub! {#11-i-finally-figured-out-how-to-commit-api-k}
Based on the single sentence available, the core theme can be summarized as:
**A blunt criticism of the key-management approach shown in the post, calling it incompetent and irresponsible.**
(Keywords: API key management, incompetence, irresponsibility, criticism)
The title is sarcastic; since the post itself is only an image gallery, the summary rests on the lone comment, which condemns committing API keys to a repository as a serious security lapse.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpunpd/i_finally_figured_out_how_to_commit_api_keys_to/](https://reddit.com/r/ChatGPTCoding/comments/1jpunpd/i_finally_figured_out_how_to_commit_api_keys_to/)
- **External link**: [https://www.reddit.com/gallery/1jo7nyx](https://www.reddit.com/gallery/1jo7nyx)
- **Posted**: 2025-04-03 01:38:17
### Content
This is an incompetent and irresponsible way to manage keys.
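For contrast with the (satirical) title, here is a minimal sketch of the usual alternative: keep the key out of the repository entirely and load it from the environment at runtime. The variable name `OPENAI_API_KEY` and the `.env` convention are illustrative assumptions, not details from the post.

```python
import os

# .gitignore should contain a ".env" entry so the file below is never committed.
# .env (kept local, never pushed):
#   OPENAI_API_KEY=sk-...

def load_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Read an API key from the environment instead of hard-coding it in the repo."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it in your shell or have your process "
            "manager load it from an untracked .env file."
        )
    return key

if __name__ == "__main__":
    api_key = load_api_key()
    print("key loaded, last 4 chars:", api_key[-4:])  # never log the full key
```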
### Discussion
**Comment 1**:
This is an incompetent and irresponsible way to manage keys.
---
## 12. Strategies to Thrive as AIs get Better - Especially for programmers [Internet of Bugs] {#12-strategies-to-thrive-as-ais-get-better-espe}
(The post is a link to a YouTube video with no accessible transcript or description, so no summary could be generated. Based on the title, it presumably discusses strategies programmers can use to stay valuable as AI improves.)
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jps462/strategies_to_thrive_as_ais_get_better_especially/](https://reddit.com/r/ChatGPTCoding/comments/1jps462/strategies_to_thrive_as_ais_get_better_especially/)
- **External link**: [https://www.youtube.com/watch?v=A_fOHpBqj50](https://www.youtube.com/watch?v=A_fOHpBqj50)
- **Posted**: 2025-04-02 23:57:22
### Content
Link: [https://www.youtube.com/watch?v=A_fOHpBqj50](https://www.youtube.com/watch?v=A_fOHpBqj50)
### Discussion
No comments
---
## 13. How to use DeepSeek deep research unlimited? {#13-how-to-use-deepseek-deep-research-unlimited}
The core theme of this post:
**Hitting a request limit ("server is busy" errors) on the service and asking whether an API key used with Cursor can work around it.**
Key points:
1. **Rate-limit problem**: after a certain number of requests, the user gets a "server is busy" error.
2. **Proposed workaround**: whether an API key combined with Cursor could avoid the limit, and if so, how to set it up.
The likely background is rate limiting on DeepSeek's deep research feature and how to keep using it from an IDE.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jprma7/how_to_use_deepseek_deep_research_unlimited/](https://reddit.com/r/ChatGPTCoding/comments/1jprma7/how_to_use_deepseek_deep_research_unlimited/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jprma7/how_to_use_deepseek_deep_research_unlimited/](https://www.reddit.com/r/ChatGPTCoding/comments/1jprma7/how_to_use_deepseek_deep_research_unlimited/)
- **Posted**: 2025-04-02 23:36:37
### Content
I see there are limits to it, as after X amount of requests I get a "server is busy" message. Can I use it with an API key with Cursor? If so, how?
### Discussion
**Comment 1**:
no not right now
---
## 14. How to transfer knowledge from one conversation to another {#14-how-to-transfer-knowledge-from-one-conversa}
The core theme of this post: **using a specific prompt to continue a ChatGPT conversation seamlessly**, especially when nearing the conversation length limit, by summarizing the old conversation's context in a structured way and carrying it into a new one.
Key points:
1. **Pain point addressed**: avoids having to re-explain everything when a conversation hits its limit, preserving continuity.
2. **Structured summary template**: a standardized Markdown format covering:
   - Conversation goals, themes, and key insights (Detailed Report)
   - Main discussion topics (Key Topics)
   - Progress and next steps for ongoing projects (Ongoing Projects)
   - User preferences (tone, formatting, etc.)
   - Outstanding action items (Action Items)
3. **How to use it**: paste the summary into a new conversation and instruct "Continue where we left off" to resume seamlessly.
The goal is to make long-running interactions with ChatGPT more efficient and cut the cost of repeated explanation.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpjbbd/how_to_transfer_knowledge_from_one_conversation/](https://reddit.com/r/ChatGPTCoding/comments/1jpjbbd/how_to_transfer_knowledge_from_one_conversation/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpjbbd/how_to_transfer_knowledge_from_one_conversation/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpjbbd/how_to_transfer_knowledge_from_one_conversation/)
- **Posted**: 2025-04-02 15:58:15
### Content
Get annoyed when you have to start a new conversation? Use this prompt to get your new conversation up to speed.
(Source and credit at the end).
Prompt Start
You are ChatGPT. Your task is to summarize the entire conversation so far into a structured format that allows this context to be carried into a new session and continued seamlessly.
Please output the summary in the following format using markdown:
Detailed Report
A natural language summary of the conversation's goals, themes, and major insights.
Key Topics
- [List 3-7 bullet points summarizing the major discussion themes]
🚧 Ongoing Projects
Project Name: [Name]
- Goal: [What the user is trying to accomplish]
- Current Status: [Progress made so far]
- Challenges: [Any blockers or complexities]
- Next Steps: [What should happen next]
(Repeat for each project)
User Preferences
- [Tone, formatting, workflow style, special instructions the user tends to give]
Action Items
- [List all actionable follow-ups or tasks that were not yet completed]
Prompt End
Directions: use this in a chat that is nearing its limit, then paste the summary into a new ChatGPT chat and say "Continue where we left off using the following context" to seamlessly resume.
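A minimal sketch of automating the same handoff with the OpenAI Python SDK: ask the model for the structured summary at the end of one conversation, then seed the next conversation with it. The model name is a placeholder and the condensed handoff prompt paraphrases the template above; both are assumptions for illustration, not part of the original post.

```python
from openai import OpenAI

client = OpenAI()      # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"       # placeholder model name

HANDOFF_PROMPT = (
    "You are ChatGPT. Summarize the entire conversation so far into a structured "
    "markdown report (Detailed Report, Key Topics, Ongoing Projects, User "
    "Preferences, Action Items) so it can be continued in a new session."
)

def summarize_conversation(history):
    """history: list of {"role": ..., "content": ...} messages from the old chat."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=history + [{"role": "user", "content": HANDOFF_PROMPT}],
    )
    return resp.choices[0].message.content

def resume_in_new_conversation(summary, first_question):
    """Seed a fresh chat with the summary and pick up where the old one left off."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "user",
             "content": "Continue where we left off using the following context:\n\n" + summary},
            {"role": "user", "content": first_question},
        ],
    )
    return resp.choices[0].message.content
```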
### Discussion
No comments
---
## 15. How do you handle auth, db, subscriptions, AI integration for AI agent coding? {#15-how-do-you-handle-auth-db-subscriptions-ai}
The core theme of this post: **how to stand up and integrate the "user framework" (user context) quickly and reliably in modern web development**, covering authentication (auth), the database (db), subscriptions (e.g., Stripe), and AI features. The author raises these pain points:
1. **Repetition**:
   - Developers rebuild the user system (sign-up/sign-in/state management) for every new project, and layering on features (payments, AI integration) keeps introducing bugs (e.g., losing state on page reload).
2. **Limits of current tools**:
   - Even with tools like Bolt, Lovable Dev, v0, and Supabase/Netlify to simplify the process, many developers get stuck at the "stable user framework" stage and never reach the actual business logic.
3. **Desire for an out-of-the-box solution**:
   - The author asks whether a prebuilt package (an npm module or an AI-generated template) exists that delivers a complete, reliable user environment (auth/db/subscriptions/AI) without reinventing the wheel.
4. **A broader developer pain point**:
   - Even developers who hand-write code face the same hassle, suggesting this is a problem shared across stacks.
**The underlying ask**: an efficient, standardized way to abstract the "user framework" into a reusable base module so developers can focus on the features that matter.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jprcxe/how_do_you_handle_auth_db_subscriptions_ai/](https://reddit.com/r/ChatGPTCoding/comments/1jprcxe/how_do_you_handle_auth_db_subscriptions_ai/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jprcxe/how_do_you_handle_auth_db_subscriptions_ai/](https://www.reddit.com/r/ChatGPTCoding/comments/1jprcxe/how_do_you_handle_auth_db_subscriptions_ai/)
- **Posted**: 2025-04-02 23:26:06
### Content
What's possible now with bolt new, Cursor, lovable dev, and v0 is incredible. But it also seems like a tarpit.
I start with user auth and db, get it stood up. Typically with Supabase b/c it's built into bolt new and lovable dev. So far so good.
Then I layer in a Stripe implementation to handle subscriptions. Then I add the AI integrations.
By now, typically the app is having problems with maintaining user state on page reload, or something has broken in the sign up / sign in / sign out flow along the way.
Where did that break get introduced? Can I fix it without breaking the other stuff somehow?
A big chunk of bolt, lovable, and v0 users probably get hung up on the first steps for building a web app - the user framework. How many users can't get past a stable, working, reliable user context?
Since bolt and lovable are both using netlify and supabase, is there a prebuild for them that's ready to go?
And if this is a problem for them, then maybe it's also an annoyance for traditional coders who need a new user context or framework for every application they hand-code. Every app needs a user context so I maybe naively assumed it would be easier to set one up by now.
Do you use a prebuilt solution? Is there an npm import that will just vomit out a working user context? Is there a reliable prompt to generate an out-of-the-box auth, db, subs, AI environment that "just works" so you can start layering the features you actually want to spend your time on?
What's the solution here other than tediously setting up and exhaustively testing a new user context for every app, before you get to the actually interesting parts?
How are you handling the user framework?
### Discussion
**Comment 1**:
Big question. Probably the most important thing to avoid it going off the rails is to build things as modularly as possible. Have auth be one module, DB interaction another, payments another, and so on. Structuring your code is insanely important when building with these tools, because if everything is in one huge file then you will blow out your context window quickly.
The second thing you need to be mindful of is instructing it to watch out for what the other modules are doing. Things like "Remember, we imported the auth module and this feature is only for logged-in users" will help keep it straight. That, and feeding Cursor the right files for its context.
With that said, I prefer to handle auth myself, use Stripe for payments, and use a DB I control that I can administer, like Neon or GibsonAI. Stick to widely recognized patterns and don't get fancy; that will just confuse the AI if you are doing something too unique. It bases its code off of docs and examples, so the more mainstream the better.
Finally, consider auth patterns like Google Auth and Magic Links. These are far simpler than managing passwords and password resets.
As for pre-built solutions, I have not found one without major drawbacks, and I have used FusionAuth, Auth0, boilerplates, Supabase, and more. None are as simple as rolling your own.
---
## 16. Vibe debugging best practices that gets me unstuck. {#16-vibe-debugging-best-practices-that-gets-me}
The core theme of this post: **the common failure modes of AI-assisted debugging and how to work around them**, with a broader look at improving development efficiency and preventing bugs in the first place. Key points:
1. **Limits of AI debugging**
   - Lists five pain points of AI bug-fixing (excessive guessing, missing context, trouble with complex problems, hacky workarounds, and fixes that break other functionality).
2. **Targeted solutions**
   - Concrete strategies to get better results, for example:
     - Describe the error and the expected behavior precisely
     - Verify the AI's plan in stages (analyze first, change code second)
     - Strengthen context (attach files, error logs, use version control)
     - Use stronger reasoning models or step-by-step instructions
3. **Prevention beats fixing**
   - Emphasizes planning, task breakdown, and testing up front to reduce the debugging load later.
4. **Tooling and practical advice**
   - Introduces the author's IDE with a built-in AI debugger (Next.js apps only) and recommends manual debugging as a last resort.
Overall, the post aims to help developers combine AI with a systematic approach, balancing "vibe coding" against structured debugging to improve quality and efficiency.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jp8etc/vibe_debugging_best_practices_that_gets_me_unstuck/](https://reddit.com/r/ChatGPTCoding/comments/1jp8etc/vibe_debugging_best_practices_that_gets_me_unstuck/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jp8etc/vibe_debugging_best_practices_that_gets_me_unstuck/](https://www.reddit.com/r/ChatGPTCoding/comments/1jp8etc/vibe_debugging_best_practices_that_gets_me_unstuck/)
- **Posted**: 2025-04-02 06:10:56
### Content
I recently helped a few vibe coders get unstuck with their coding issues and noticed some common patterns. Here is a list of problems with vibe debugging and potential solutions.
Why AI can't fix the issue:
- AI is too eager to fix, but doesn't know what the issue/bug/expected behavior is.
- AI is missing key context/information.
- The issue is too complex, or the model is not smart enough.
- AI tries hacky solutions or workarounds instead of fixing the issue.
- AI fixes the problem, but breaks other functionality. (The hardest one to address.)
Potential solutions / actions:
- Give the AI details in terms of what didn't work. (maps to Problem 1)
  - Is it front end? Provide a picture.
  - Are there error messages? Provide the error messages.
  - It's not doing what you expected? Tell the AI exactly what you expect instead of "that didn't work".
- Tag files that you already suspect to be problematic. This helps reduce the scope of context. (maps to Problem 1)
- Use two-stage debugging. First ask the AI what it thinks the issue is, and to give an overview of the solution WITHOUT changing code. Only when the proposal makes sense, proceed to updating code. (maps to Problems 1, 3)
- Provide docs; this is helpful for bugs related to 3rd-party integrations. (maps to Problem 2)
- Use Perplexity to search an error message; this is helpful for issues that are new and not in the LLM's training data. (maps to Problem 2)
- Debug in a new chat; this prevents context from getting too long and polluted. (maps to Problems 1 & 3)
- Use a stronger reasoning/thinking model. (maps to Problem 3)
- Tell the AI to think step by step. (maps to Problem 3)
- Tell the AI to add logs and debug statements, and then provide the logs and debug statements to the AI. This is helpful for state-related issues & more complex issues. (maps to Problem 3)
- When the AI says "that didn't work, let's try a different approach", reject it and ask it to fix the issue instead. Otherwise, proceed with caution, because this will potentially cause there to be 2 different implementations of the same functionality. It will make future bug fixing and maintenance very difficult. (maps to Problem 4)
- When the AI fixes the issue, don't accept all of the code changes. Instead, tell it "that fixed the issue, only keep the necessary changes", because chances are some of the code changes are not necessary and will break other things. (maps to Problem 5)
- Use version control and create checkpoints of a working state so you can revert to it. (maps to Problem 5)
- Manual debugging by setting breakpoints and tracing code execution. Although if you are at this step, you are not "vibe debugging" anymore.
Prevention > Fixing
Many bugs can be prevented in the first place with just a little bit of planning, task breakdown, and testing. Slowing down during the vibe coding will reduce the amount of debugging and result in overall better vibes. Made a post about that previously and there are many guides on that already.
I'm working on an IDE with a built-in AI debugger; it can set its own breakpoints and analyze the output. Basically it simulates manual debugging; the limitation is it only works for Next.js apps. Check it out here if you are interested: easycode.ai/flow
Let me know if you have any questions or disagree with anything!
### Discussion
**Comment 1**:
This subreddit fucking sucks, the dead internet has happened
**Comment 2**:
As always, read the errors yourself and understand what the AI is actually doing. Often, the real fix is easy if you spend a moment to think about it yourself.
If you understand what it is doing wrong but it's getting stuck still, you can go back in the chat to the initial cause of the error and continue from that point with explicit mention of what to avoid so it doesn't get caught in the same trap.
The worst thing you can do is try to plow through it with more back and forth. Your context gets longer. The AI comprehension slowly diminishes. And it usually stays stuck while wasting your time and tokens.
**Comment 3**:
Why does everyone hate AI for coding? I've got 15 years as an engineer and AI is a game changer. I don't believe a person who doesn't code is going to have a good time with it, because you need to understand nuance. If someone wants to learn to code, they still need to learn the basics before using AI, but it certainly can expedite the process if used correctly.
**Comment 4**:
Vibe coding and vibe debugging are TWO problems that are problems you are NOT required to have in life.
**Comment 5**:
Don't: Learn the stuff that is necessary to understand your system, debug the involved components until you get a gut feeling where something might be off, and then drill down into the issue. That would be a tremendous waste of your time! /s
---
## 17. CAMEL DatabaseAgent: A Revolutionary Tool for Natural Language to SQL {#17-camel-databaseagent-a-revolutionary-tool-fo}
The core theme of this post:
**Solving the problem of non-technical staff (such as business analysts) depending on technical teams for data because they lack SQL skills, by introducing the author's open-source tool, CAMEL DatabaseAgent, to simplify data queries and improve efficiency.**
Key points:
1. **Background**: business analysts frequently need to pull information from databases, but limited SQL skills slow them down and force heavy reliance on technical teams.
2. **Solution**: the author built the open-source "CAMEL DatabaseAgent" so non-technical users can query data on their own, cutting communication overhead.
3. **Tool introduction**: the post includes the GitHub link and a preview image, highlighting its open-source nature and practical use cases.
Summary: the post is about empowering non-technical users through tooling (CAMEL DatabaseAgent) to streamline data access and raise overall efficiency.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpqx7z/camel_databaseagent_a_revolutionary_tool_for/](https://reddit.com/r/ChatGPTCoding/comments/1jpqx7z/camel_databaseagent_a_revolutionary_tool_for/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpqx7z/camel_databaseagent_a_revolutionary_tool_for/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpqx7z/camel_databaseagent_a_revolutionary_tool_for/)
- **Posted**: 2025-04-02 23:07:55
### Content
As a data engineer, I've often faced the challenge where business analysts need to extract information from databases but lack SQL skills. Each time they need a new report or data view, they rely on technical teams for support, reducing efficiency and increasing communication overhead.
Today, I'm excited to introduce an open-source tool I've developed, CAMEL DatabaseAgent, which completely transforms this workflow.
https://github.com/coolbeevip/camel-database-agent
https://preview.redd.it/qav247c4tfse1.png?width=3022&format=png&auto=webp&s=b7ceb82911314f0b87fbd0049f65b84db275f37e
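The general pattern behind natural-language-to-SQL tools (independent of CAMEL DatabaseAgent's actual implementation, which is not described in the post) is: give an LLM the schema, have it emit a SQL query, then execute that query. Below is a minimal sketch of that pattern with SQLite and the OpenAI SDK; the table, column names, and model name are invented for illustration.

```python
import sqlite3
from openai import OpenAI

client = OpenAI()

SCHEMA = """
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL, created_at TEXT);
"""

def question_to_sql(question: str) -> str:
    """Ask the model to translate a natural-language question into one SQLite query."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single SQLite SELECT "
                        "statement for this schema and return only the SQL:\n" + SCHEMA},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip().strip("`")

def run(question: str, db_path: str = "shop.db"):
    """Generate the SQL, run it, and return both the query and its rows."""
    sql = question_to_sql(question)
    with sqlite3.connect(db_path) as conn:
        return sql, conn.execute(sql).fetchall()

# Example (hypothetical): run("What was the total order amount per customer last month?")
```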
### Discussion
No comments
---
## 18. tmuxify - automatically start your tmux dev environment with flexible templates {#18-tmuxify-automatically-start-your-tmux-dev-e}
The core theme of this post is **tmuxify**, a tool that simplifies and automates common workflows when using **tmux** (the terminal multiplexer). Key points:
1. **Motivation**:
   - The author kept repeating the same tmux steps for every new project (creating panes, setting layouts, launching apps), so they wrote a script to automate them.
2. **Main features of tmuxify**:
   - Define tmux window layouts flexibly via a **YAML configuration file** (several templates included).
   - Automatically run applications in their designated windows.
   - Detect whether the current project already has a tmux session and re-attach to it.
   - **Per-folder configuration** (each project can have its own YAML), or pass a configuration file as an argument.
   - Easy installation and updates, with the whole environment launched by a single command.
3. **Comparison with similar tools (e.g., tmuxinator)**:
   - tmuxify is pure shell script (no Ruby required), which suits locked-down environments.
   - Complex layouts are configured in YAML, avoiding tmux's cumbersome built-in layout commands.
4. **Status and open-source collaboration**:
   - The tool is usable but still early-stage; contributions (issue reports, feature suggestions, pull requests) are welcome.
   - The repository link is included for reference.
**Summary**: tmuxify is an automation tool designed to make tmux more efficient, emphasizing flexibility and ease of use, and improving through open-source collaboration.
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jpjydx/tmuxify_automatically_start_your_tmux_dev/](https://reddit.com/r/ChatGPTCoding/comments/1jpjydx/tmuxify_automatically_start_your_tmux_dev/)
- **External link**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpjydx/tmuxify_automatically_start_your_tmux_dev/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpjydx/tmuxify_automatically_start_your_tmux_dev/)
- **Posted**: 2025-04-02 16:47:57
### Content
Every time I started a new project, I repeated the same steps in my tmux (create panes, layout, start apps, etc.), so I decided to create a script to streamline my workflow.
Then the idea evolved into tmuxify, which is a flexible program that has several time-saving features:
- Create the window layout with flexible, YAML-based configuration (many templates included)
- Run apps in their intended windows
- Intelligently detect if there's a session associated with the current project and re-attach to it
- Folder-based configuration, i.e. you can have a separate YAML for each folder (project) to run your desired setup. Or you can pass the configuration file as an argument
- Easy installation and update
- Launch everything with a single command
Unlike the great tmuxinator, tmuxify is purely shell-based, no Ruby involved, which means wider possibilities in strict-policy environments. Also, it's way easier to set complex layouts in YAML, no need to understand the cumbersome tmux custom layouting system.
I spent some time designing and debugging tmuxify, and it's fairly usable now. Yet it's an early-stage project, and any contribution is welcome. Feel free to report issues, suggest features, and open pull requests.
### Discussion
No comments
---
## 19. Created an office simulator for VibeJam - Meeting Dash - try to get work done between endless meetings {#19-created-an-office-simulator-for-vibejam-mee}
(The post is a Reddit-hosted video with no accompanying text, so no summary could be generated. Based on the title, it is an office simulator built for VibeJam, "Meeting Dash", in which you try to get work done between endless meetings.)
- **Reddit link**: [https://reddit.com/r/ChatGPTCoding/comments/1jppox7/created_an_office_simulator_for_vibejam_meeting/](https://reddit.com/r/ChatGPTCoding/comments/1jppox7/created_an_office_simulator_for_vibejam_meeting/)
- **External link**: [https://v.redd.it/jh9tvmqiwdse1](https://v.redd.it/jh9tvmqiwdse1)
- **Posted**: 2025-04-02 22:17:42
### Content
Link: [https://v.redd.it/jh9tvmqiwdse1](https://v.redd.it/jh9tvmqiwdse1)
### Discussion
No comments
---
## 20. For people not using cursor etc., how do you give the LLM the latest version info? {#20-for-people-not-using-cursor-etc-how-do-you-}
這篇文章的核心討論主題是:
**「使用免費方案的AI模型(如Gemini 2.5 Pro,而非Cursor等付費工具)時,如何避免因生成過時代碼(如React、Tailwind、TypeScript的新舊版本差異)而導致學習或開發上的問題?」**
具體焦點包括:
1. **版本落差擔憂**:AI的知識截止點可能無法支援最新框架/工具版本,導致生成的代碼與當前標準不符。
2. **學習成本與風險**:擔心因依賴AI的過時建議而忽略新版本的重要更新,影響學習正確性。
3. **解決方案探索**:
- 是否應妥協使用舊版本以配合AI的能力?
- 能否透過手動導入最新文檔(如Tailwind、React)來增強AI的準確性?
整體反映了開發者在資源有限(如無法付費升級工具)下,如何平衡效率、正確性與學習效果的困境。
- **Reddit 連結**: [https://reddit.com/r/ChatGPTCoding/comments/1jpjf37/for_people_not_using_cursor_etc_how_do_you_give/](https://reddit.com/r/ChatGPTCoding/comments/1jpjf37/for_people_not_using_cursor_etc_how_do_you_give/)
- **外部連結**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpjf37/for_people_not_using_cursor_etc_how_do_you_give/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpjf37/for_people_not_using_cursor_etc_how_do_you_give/)
- **發布時間**: 2025-04-02 16:06:00
### 內容
I'm a noob to all this, using 2.5 Pro (because I'm too poor to buy a Cursor subscription), and while I'm not sure where its exact knowledge cutoff is, it definitely does not know the latest versions of React, Tailwind, TypeScript etc. at all.
I don't wanna run into bugs because the AI-generated code was based on older standards, while the newer ones are different. I know people on Cursor just use something like '@tailwind', but I was worried I'd suffer without that because the new versions have quite some differences.
Sorry, I know I shouldn't be vibe coding, I do try my best to understand it. I'm just scared that while learning to do it I might miss out on something because I didn't realize that thing was updated in the latest version.
Do I just work with the older versions that the AI is comfortable with? Or is there a way to copy the entire documentation of each and put it into AI Studio?
Thanks in advance
### 討論
**評論 1**:
Get the .md documentation files from the GitHub repository of any tool/software/language/library you are interested in, then upload them to the LLM.
Then start your prompt with "use your thorough understanding of the provided documentation to …".
I'm a no-coder and do that all the time. Plus I avoid coding languages that rely a lot on tons of dependencies. Too messy to keep up.
Ultimately, search for custom GPTs that might exist for your use case and maybe they are up to date. I maintain a few for n8n, Deno, Zed, aider,
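針對評論 1 所述「把儲存庫中的 .md 文件整理後餵給 LLM」的作法,下面是一段假設性的最小示意(儲存庫網址與輸出檔名僅為示意):以淺層 clone 取得專案後,把所有 Markdown 文件串接成單一檔案,之後即可上傳或貼入對話作為背景文件。
```python
# 示意用腳本:淺層 clone 一個 GitHub 儲存庫,
# 將其中所有 .md 文件串接成單一文字檔,之後可上傳給 LLM 作為背景文件。
import subprocess
import tempfile
from pathlib import Path


def collect_markdown(repo_url: str, output_file: str = "docs_bundle.md") -> None:
    with tempfile.TemporaryDirectory() as tmp:
        # --depth 1 只抓最新版本,加快下載速度
        subprocess.run(["git", "clone", "--depth", "1", repo_url, tmp], check=True)
        parts = []
        for md in sorted(Path(tmp).rglob("*.md")):
            rel = md.relative_to(tmp)
            parts.append(f"\n\n===== {rel} =====\n\n")
            parts.append(md.read_text(encoding="utf-8", errors="ignore"))
        Path(output_file).write_text("".join(parts), encoding="utf-8")


if __name__ == "__main__":
    # 儲存庫網址僅為示意,換成你關心的專案即可
    collect_markdown("https://github.com/tailwindlabs/tailwindcss.git")
```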
**評論 2**:
If you don't want to run into bugs, then you should be using the "latest stable version" instead of the "latest version".
If you tell Gemini to use the "latest stable version" it will probably nail most of them. You can then do a double check yourself.
**評論 3**:
I don't know much about frontend like React, but the knowledge cutoff is always an issue, especially in a fast-moving field like AI. Gemini 2.5 Pro doesn't know how to connect to itself through the API. That happened because Google changed the package name (google-genai) and the SDK methods. But all the other AIs, including Claude and o3, also don't know how to connect.
In such cases, I am forced to feed the changes to Gemini since I wouldn't be able to use the needed models otherwise. But in many other cases, I always ask what version of the module the AI is comfortable with and just go with that version, because some modules, like Gradio, are too hard to compile all the changes into a document that the AI can grasp and get familiar with. The only downside of this is the issue of dependency conflicts. The older versions have a higher chance of hitting a dependency conflict if you use other modules or AIs.
Recently, I had that kind of dependency conflict where google-genai required websockets 13 or above, whereas Gradio 3.x required websockets 11. In this case, I upgraded websockets to 13.0.0 (the bare minimum for google-genai) and winged it to see if that worked for Gradio 3.x, which it did. Currently, Gradio is at 5.x, but Gemini 2.5 Pro knows up to 3.x, and that is the version I am going with.
In your case, is there an imperative to use the latest version? If not, I would recommend going with the version that the AI knows.
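針對評論 3 描述的相依性衝突情境(google-genai 需要 websockets 13 以上、舊版 Gradio 綁定較舊的 websockets),下面是一段假設性的示意,利用 Python 標準庫的 `importlib.metadata` 在升級前後檢查實際安裝的版本是否達到預期下限;套件清單與版本門檻皆依評論內容假設填寫。
```python
# 示意用腳本:升級前後檢查實際安裝的套件版本,
# 確認 websockets 是否已達 google-genai 要求的下限,並列出 Gradio 版本供參考。
from importlib.metadata import version, PackageNotFoundError

# 版本門檻依評論中的情境假設:google-genai 需要 websockets 13 以上
MIN_WEBSOCKETS = (13, 0)


def installed_version(pkg: str) -> str | None:
    try:
        return version(pkg)
    except PackageNotFoundError:
        return None


ws = installed_version("websockets")
gr = installed_version("gradio")

print(f"websockets: {ws or '未安裝'}")
print(f"gradio:     {gr or '未安裝'}")

if ws:
    major_minor = tuple(int(x) for x in ws.split(".")[:2])
    if major_minor >= MIN_WEBSOCKETS:
        print("websockets 版本已達 google-genai 要求的下限")
    else:
        print("websockets 版本過舊,需升級至 13.0.0 以上")
```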
**評論 4**:
I just use whatever ChatGPT knows best and manually implement new features, or copy and paste documentation for ChatGPT when I want to add the feature.
**評論 5**:
Try using Roo with https://github.com/hannesrudolph/mcp-ragdocs
---
## 21. Cursor like diff viewer in roo and other enhancements {#21-cursor-like-diff-viewer-in-roo-and-other-enhanc}
原始貼文僅附上 Reddit 圖片集連結,無法取得內容細節。依標題推測,內容為在 Roo 中實作類似 Cursor 的 diff 檢視器及其他增強功能的展示。
- **Reddit 連結**: [https://reddit.com/r/ChatGPTCoding/comments/1jpcybp/cursor_like_diff_viewer_in_roo_and_other/](https://reddit.com/r/ChatGPTCoding/comments/1jpcybp/cursor_like_diff_viewer_in_roo_and_other/)
- **外部連結**: [https://www.reddit.com/gallery/1jpcybp](https://www.reddit.com/gallery/1jpcybp)
- **發布時間**: 2025-04-02 09:42:00
### 內容
連結: [https://www.reddit.com/gallery/1jpcybp](https://www.reddit.com/gallery/1jpcybp)
### 討論
無討論內容
---
## 22. Experienced systems engineer trying their hand at a website depending completely on copilot {#22-experienced-systems-engineer-trying-their-h}
這篇文章的核心討論主題是:**一位後端/系統工程師轉向管理職後,首次嘗試使用AI編碼助手(如GitHub Copilot)開發前端網頁應用的經驗與反思**。
具體重點包括:
1. **背景與動機**:
- 作者長期專注後端開發,缺乏前端經驗,但藉由內部需求(為多語言聊天應用構建測試用的翻譯後臺UI)嘗試AI編碼工具。
2. **開發過程與工具**:
- 使用GitHub Copilot(基於Claude 3.5模型)主導編碼,僅手動調整少量代碼,並花費部分時間處理部署(Firebase、TLS憑證等)。
- 強調Copilot遵循「避免複雜框架,僅用HTML/CSS/JS」的指令,且能逐步添加功能(如鍵盤快捷鍵)。
3. **AI編碼助手的優缺點**:
- **優點**:代碼準確性高,能通過解釋幫助釐清需求;適合從零開始的專案。
- **挑戰**:需清晰指令,偶需反覆修正;作者計劃下一步測試AI在既有Go代碼庫(數萬行)的維護能力。
4. **結論**:
- 肯定AI在「從頭構建」場景的實用性,但對其在複雜現有代碼庫的表現持觀望態度。
整體聚焦於**AI輔助開發的實際應用體驗**,並探討其當前能力邊界與未來潛力。
- **Reddit 連結**: [https://reddit.com/r/ChatGPTCoding/comments/1jpjvif/experienced_systems_engineer_trying_their_hand_at/](https://reddit.com/r/ChatGPTCoding/comments/1jpjvif/experienced_systems_engineer_trying_their_hand_at/)
- **外部連結**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jpjvif/experienced_systems_engineer_trying_their_hand_at/](https://www.reddit.com/r/ChatGPTCoding/comments/1jpjvif/experienced_systems_engineer_trying_their_hand_at/)
- **發布時間**: 2025-04-02 16:41:52
### 內容
I've been doing backend/systems-level engineering for a while. Moved into management for the past few years, so I haven't written a lot of code. Either way, I never wrote much web code or frontend code of any kind. Obviously I know the basics of how things work, but it never felt like a great use of my time to learn the nitty-gritty details.
A situation arose to build out a web UI for internal use to demo and test out the translation backend infrastructure our team has been building for our multilingual chat app (FlaiChat). I thought this was a perfect opportunity to try out this vibe coding thing that's all the rage. This is the site I built. It's a language translator like Google Translate, but using an LLM with custom prompting in the backend. The main claim to fame is that it handles slang/idioms/figures of speech better than Google Translate, DeepL etc.
I dropped into VSCode and started chatting with Copilot (using the Claude 3.5 model). It took me spending a couple of hours per day for about 8-10 days. Copilot wrote most of the code. The work that fell upon me (and probably accounted for about a third of the total hours I spent) was figuring out the deployment and hosting (on Firebase), TLS certs, domain management etc. I wrote almost no code by hand except for little tweaks here and there.
My experience with Copilot was pretty smooth. I asked it to avoid using complex frameworks and stick with HTML/CSS/JavaScript, and it did. I added various features, niceties etc. one by one (e.g., adding a keyboard shortcut to trigger the translate action; it's Option+Enter on Mac and Ctrl+Enter on Windows). It never wrote egregiously wrong code. Sometimes, when it wrote up the code and explained what it did, it made me realize that I had not been clear enough with the instructions. I would then undo that edit and clarify my instructions.
Overall, for this particular purpose (creating something from scratch), I feel like AI coding assistants are actually very good already. My next challenge is to see how AI deals with an existing Go backend codebase. It's not tremendously large (a few tens of thousands of LOC), so I'm optimistic that a large-context LLM like Gemini 2.5 Pro should do well for code comprehension and edits.
### 討論
無討論內容
---
## 23. Interview with Vibe Coder in 2025 {#23-interview-with-vibe-coder-in-2025}
這篇文章的核心討論主題是:**程式設計師對一段幽默內容(可能是影片或笑話)的共鳴反應**,尤其聚焦於內容因過於貼近現實工作經驗而引發的「既好笑又痛苦」的情緒矛盾。
具體要點包括:
1. **幽默與現實的衝突**:內容因精準反映程式設計師日常(如語法錯誤、情緒挫折)而引發強烈共鳴,甚至讓人感到「不適」(*too close to reality*)。
2. **社群互動**:留言者透過幽默自嘲(如 *"it's a mood misalignment"*)和分享連結請求,展現技術社群的共同語言與歸屬感。
3. **情緒張力**:關鍵詞如 *"hilarious"*、*"on the floor"* 凸顯笑點,而 *"too close to home"* 則暗示背後潛藏的職業壓力。
整體而言,討論圍繞著「技術幽默如何因真實性觸發複雜情緒」這一核心展開。
- **Reddit 連結**: [https://reddit.com/r/ChatGPTCoding/comments/1jp1hjn/interview_with_vibe_coder_in_2025/](https://reddit.com/r/ChatGPTCoding/comments/1jp1hjn/interview_with_vibe_coder_in_2025/)
- **外部連結**: [https://www.youtube.com/watch?v=JeNS1ZNHQs8](https://www.youtube.com/watch?v=JeNS1ZNHQs8)
- **發布時間**: 2025-04-02 01:31:36
### 內容
"It's not a syntax error, it's a mood misalignment"
This guy's past videos have been hil-ar-i-ous, to experienced programmers.
This one is funny, but it's too close to reality for my comfort.
Direct link please? It says theres an error
This had me on the floor
this hi``` a little too close to home...
### 討論
**評論 1**:
"It's not a syntax error, it's a mood misalignment"
**評論 2**:
This guy's past videos have been hil-ar-i-ous, to experienced programmers.
This one is funny, but it's too close to reality for my comfort.
**評論 3**:
Direct link please? It says there's an error
**評論 4**:
This had me on the floor
**評論 5**:
this hits a little too close to home...
---
## 24. How to use DeepSeek Deep Research together with Claude 3.7 for best results? {#24-how-to-use-deepseek-deep-research-together-}
該文章的核心討論主題是:**「當使用Claude(AI模型)進行開發卡住時,應採取的最佳解決策略」**。
原始貼文僅提出一個簡短問題,徵詢社群的實際作法;討論中唯一的回覆建議改用 Aider 的 architect 模式來突破瓶頸。
- **Reddit 連結**: [https://reddit.com/r/ChatGPTCoding/comments/1jplk16/how_to_use_deepseek_deep_research_together_with/](https://reddit.com/r/ChatGPTCoding/comments/1jplk16/how_to_use_deepseek_deep_research_together_with/)
- **外部連結**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jplk16/how_to_use_deepseek_deep_research_together_with/](https://www.reddit.com/r/ChatGPTCoding/comments/1jplk16/how_to_use_deepseek_deep_research_together_with/)
- **發布時間**: 2025-04-02 18:45:04
### 內容
What would be the optimal strategy to fix when I'm stuck with Claude?
### 討論
**評論 1**:
Aider in architect mode.
---
## 25. About how many lines of production code were you writing/generating a month before AI and are now writing/generating with help of AI? {#25-about-how-many-lines-of-production-code-wer}
這段討論的核心主題是:
**「AI輔助寫程式對開發者生產力(如代碼產出量/LOC)的實際影響評估」**
具體聚焦的關鍵問題包括:
1. **生產力變化**:相較於未使用AI的傳統編程方式,開發者在使用AI工具後是否顯著提升代碼產出量(如2倍、3倍甚至10倍)。
2. **經驗差異**:尤其關注「原本已具備嚴肅編程經驗的開發者」的實際體驗,而非初學者從零開始的案例。
3. **量化分析**:是否有具體的程式碼行數(LOC)統計數據支持這些主觀感受。
隱含議題:AI是否可能導致「代碼量增加但品質或效率未同步提升」的潛在爭議(如提及「尚未出現負增長」的幽默暗示)。
- **Reddit 連結**: [https://reddit.com/r/ChatGPTCoding/comments/1jp7jhi/about_how_many_lines_of_production_code_were_you/](https://reddit.com/r/ChatGPTCoding/comments/1jp7jhi/about_how_many_lines_of_production_code_were_you/)
- **外部連結**: [https://www.reddit.com/r/ChatGPTCoding/comments/1jp7jhi/about_how_many_lines_of_production_code_were_you/](https://www.reddit.com/r/ChatGPTCoding/comments/1jp7jhi/about_how_many_lines_of_production_code_were_you/)
- **發布時間**: 2025-04-02 05:33:37
### 內容
Now that folks are using AI to generate code, it's clear that some have found it productive and have gone from 0 LOC to more. I don't think anyone has gone negative, but for those of you who were coding seriously before AI: would you say AI now has you generating 2x, 3x, 10x the amount of code? For those that have done an analysis, what's your LOC count?
### 討論
**評論 1**:
I would say about a 30% output improvement. I'm quite senior in my experience, but I find its code quality isn't quite up to snuff and I have to rewrite a fair bit myself sometimes.
It's like an eager junior programmer.
**評論 2**:
I generate 10 KLOC per week steady. Before AI it could be 500 lines
**評論 3**:
Not much more, since I had been working for 8 years or so before AI. I'm senior enough that the types of problems generative systems can solve don't help.
Mainly helpful for UI boilerplate on the occasions I'm doing that.
**評論 4**:
I've been writing code professionally for almost 20 years.
My first focus was not to increase my productivity, but to have someone that could summarize the whole codebase and help me answer questions. I used it as a "buddy" that I could talk to. So at first, what increased was the quality and robustness of my code.
Now that I'm already a few months in, and that we have Google Gemini 2.5 Pro with 1m context window, I can finally start to trust that AI will do the right thing given the amount of context.
It's been hit or miss, honestly. After putting in a lot of constraints, I can finally start auto-generating tests that are not mocked and that make sense, not just to raise coverage. It's also very good at creating new files. The struggles happen the most when editing old classes and having to be aware of all the dependencies.
If we consider that my code quality improved, I would say I'm about 3x as fast while having code that's slightly better compared to, say, November.
**評論 5**:
Writing a bunch of lines of code isn't a good thing. I've been using it to scaffold test files and for some autocomplete on lines. It saves me a bit of time; more time to do the dishes and stuff while still cozily getting my weekly work done.
---
# 總體討論重點
以下是25篇文章的核心討論重點條列式總結,並附上對應的錨點連結與逐條細節:
---
### #1 [Vibe coding with AI feels like hiring a dev with anterograde amnesia](#anchor_id_1)
1. **AI編程的雙重性**
- 優勢:提升日常編程效率
- 痛點:缺乏記憶力、誤解意圖、需精確指令
2. **隱憂**
- 過度依賴導致開發者理解不足
3. **矛盾心態**
- 期待AI強化記憶與理解能力
---
### #2 [Fiction or Reality?](#anchor_id_2)
1. **AI自動化技術**
- 用於批量創建帳號,可能涉及合規性問題
2. **語氣暗示**
- 幽默探討技術實踐或灰色應用
---
### #3 [Did they NERF the new Gemini model?](#anchor_id_3)
1. **溫度參數本質**
- 控制隨機性而非創造力,低溫(0)適合精確任務
2. **程式碼生成影響**
- 高溫導致語法錯誤與虛構內容
3. **預設值陷阱**
- 平台預設高溫(如1)引發輸出不穩
---
### #4 [Gemini 2.5 beyond the Free Tier](#anchor_id_4)
1. **成本問題**
- 高頻使用者(>25次/日)的開銷分析
---
### #5 [Fully Featured AI Coding Agent as MCP Server](#anchor_id_5)
1. **工具特色**
- 免費開源(GPL授權),媲美付費工具
2. **技術實現**
- 基於語言伺服器(非RAG),支援大型程式碼庫
---
### #6 [Is it a good idea to learn coding via Claude 3.7?](#anchor_id_6)
1. **教學可靠性**
- AI可能產生幻覺(hallucination)誤導學習
2. **風險評估**
- 需平衡效率與知識正確性
---
### #7 [This sub is mostly full of low effort garbage now](#anchor_id_7)
1. **社群批評**
- 指責「氛圍式編程」與行銷內容氾濫
2. **管理訴求**
- 呼籲加強內容審核
---
### #8 [New better gemini coding model in LMarena](#anchor_id_8)
(需補充內容細節)
---
### #9 [What happens when you tell an LLM that it has an iPhone next to it](#anchor_id_9)
1. **實驗觀察**
- LLM對虛擬情境的反應機制
2. **理解限制**
- 探討模型對現實物體的認知能力
---
### #10 [I generated a playable chess with one prompt](#anchor_id_10)
1. **遊戲設計**
- 玩家vs AI(minimax演算法)
2. **工具比較**
- Bolt.new(現代風格)vs Bind AI IDE(經典風格)
---
### #11 [I finally figured out how to commit `api` keys to GitHub!](#anchor_id_11)
1. **安全批評**
- 嚴厲指責鑰匙管理的不負責任行為
---
### #12 [Strategies to Thrive as AIs get Better](#anchor_id_12)
(需補充影片內容)
---
### #13 [How to use DeepSeek deep research unlimited?](#anchor_id_13)
1. **請求限制問題**
- 透過API Key與cursor功能規避「server busy」錯誤
---
### #14 [How to transfer knowledge between conversations](#anchor_id_14)
1. **結構化摘要模板**
- Markdown格式記錄對話目標、議題、待辦事項
2. **無縫接續**
- 貼入新對話並指令「Continue where we left off」
---
### #15 [How to handle auth, db, subscriptions for AI agents?](#anchor_id_15)
1. **開發痛點**
- 重複建立使用者框架(auth/db/訂閱)
2. **解決方案需求**
- 尋求開箱即用的標準化模組
---
### #16 [Vibe debugging best practices](#anchor_id_16)
1. **AI除錯限制**
- 過度猜測、缺乏上下文、副作用
2. **優化策略**
- 分階段驗證、強化上下文、