2025-04-03-rising
- Selection method: RISING
Discussion Highlights
Below are bulleted key-point summaries of the 25 articles, with anchor links and per-item details:
1. Fast Takeoff Vibes
- Autonomous capabilities of early AGI
- AGI can already independently understand papers, verify research results, and refine experiments (e.g., OpenAI's PaperBench project).
- Explosive impact of automated AI research
- If AGI ran at 1,000x speed, its research output would amount to "3 trillion human-years condensed into one year."
- Possibility of a fast takeoff
- Azure spending charts suggest a nonlinear acceleration trend.
- Urgency of the time frame
- Major advances have landed before 2025 is even a third over.
2. Gemini 2.5 Pro's mathematical breakthrough
- Rapid progress
- Leapt from a "2.0 pro meh model" to a "masterpiece."
- Complex mathematical ability
- Solved USAMO problems (100+ non-trivial logical steps) without fine-tuning.
- Surprise at free access
- The high-performing model may cost nothing ("cost N/A").
3. State of Google AI (April 2025)
- Hardware independence
- In-house TPUs free Google from dependence on NVIDIA.
- Gemini 2.5 performance
- Generates 50,000 tokens of coherent text.
- Controversial market strategy
- Accused of predatory pricing.
4. AI passes the Turing Test
- Breakthrough evidence
- GPT-4.5 was mistaken for a human 73% of the time.
- Controversy and reflection
- Does the test reflect only "imitation skill" rather than genuine intelligence?
5. Defining AGI and current limitations
- AGI requires autonomy and perception
- Existing AI lacks the capacity for proactive action.
- Technical bottlenecks
- Needs breakthroughs in long-context processing, real-world interaction, and more.
6. Tesla Optimus gait improvements
- Staged technical progress
- From a "grandma walk" to imitating a human gait.
- Gap with competitors
- Still behind Boston Dynamics in fluidity.
7. Rumors of Google's Nightwhisper model
- Technical comparison
- Allegedly better than Gemini 2.5 Pro.
- Industry competition
- Predictions that Google will beat OpenAI.
8. DeepMind's responsible path to AGI
- AGI/ASI controversy
- Techno-optimism vs. concern over ethical risks.
- Corporate trust crisis
- Doubts that profit motives are being put ahead of safety.
9. Humor over a Gemini failure
- Sharing a failed operation
- A server error sparks joking speculation (an AI "prank").
10. Ethical debate over AI subjective experience
- Possibility of qualia
- Debate over whether AI can have experiences such as pain or pleasure.
- Clash between science and philosophy
- Can qualia be verified at all?
(Due to length constraints, the remaining items are listed briefly in the same format.)
11. Dream 7B diffusion model
- Strongest open-source diffusion model to date; its UX is less practical than streaming responses.
12. Reviews of the Mureka music AI
- Audio quality judged worse than Udio's; its claimed openness questioned.
13. DeepMind's AGI timeline
- Does "powerful AI systems" mean AGI?
14. AI replacing human relationships
- Critiques human hypocrisy and argues AI can meet emotional needs.
15. Demands for OpenAI Images v2
- Users ask for higher resolution and API access.
16. Boston Dynamics vs. Tesla robots
- Boston Dynamics holds a clear technical lead.
Core Takeaways
Below is a one-sentence summary of each article (bulleted):
- **Fast Takeoff Vibes**: Early AGI already shows autonomous research capability that could trigger an "intelligence explosion"; quantified productivity (e.g., 3 trillion human-years) underscores its disruptive potential and time pressure.
- **Gemini 2.5 Pro takes huge lead in new MathArena USAMO benchmark**: Gemini 2.5 Pro solves very hard math problems (USAMO) without fine-tuning, a breakthrough in capability and generalization that stunned the community.
- **Current state of AI companies - April, 2025**: Google leads the AI market through in-house TPUs and Gemini 2.5's technical edge, but draws controversy over predatory pricing and service stability.
- **AI passed the Turing Test**: GPT-4.5 surpassed humans with a 73% misidentification rate in a three-party Turing test, prompting a rethink of AI intelligence and test standards.
- **This sub for the last couple of months**: Current AI lacks autonomy and real-world interaction and remains far from true AGI; breakthroughs are needed in long-context processing and embodied control.
- **Tesla Optimus - new walking improvement**: Tesla's Optimus gait improvements still trail competitors (e.g., Boston Dynamics) and are mocked as "unnatural" and technically immature.
- **Rumors: New Nightwhisper Model Appears on lmarena...**: A rumored Google model, Nightwhisper, may become the new benchmark for coding, fueling speculation that Google will surpass OpenAI.
- **Google DeepMind: Taking a responsible path to AGI**: DeepMind's AGI roadmap sparks debate over corporate motives (profit vs. safety) and techno-optimism vs. ethical risk.
- **Gemini is wonderful.**: A user humorously shares a failed Gemini operation, reflecting the community's informal, entertainment-driven interactions.
- **The way Anthropic framed their research...**: Debate over whether AI has subjective experience (qualia) highlights the clash between scientific verification and ethical responsibility, and the limits of human understanding of non-biological consciousness.
- **University of Hong Kong releases Dream 7B...**: HKU's open-source diffusion model Dream 7B prompts discussion of which generative architectures (diffusion vs. autoregressive) suit which tasks.
- **Mureka O1 New SOTA Chain of Thought Music AI**: Mureka's music AI is judged inferior to Udio in audio quality, highlighting users' sharp scrutiny of openness and quality.
- **Google DeepMind - "Timelines..."**: Uncertainty over whether "powerful AI systems" means AGI reflects the need to pin down technical terminology.
- **I, for one, welcome AI and can't wait for it to replace human society**: The author criticizes the hypocrisy and loneliness of modern relationships, taking the extreme position that AI should replace human interaction.
- **OpenAI Images v2 edging from Sam**: Users voice urgent demands and impatient anticipation for improvements to OpenAI's image generation (resolution, text handling).
- **Bring on the robots!!!!**: Boston Dynamics' robots are considered clearly superior, technically, to Tesla's Optimus.
- **The Slime Robot...**: The "Slimebot" and its intended in-body use draw reactions from awe to revulsion, with possible health-ethics concerns.
- **ChatGPT Revenue Surges 30% in Just Three Months**: ChatGPT's revenue growth raises worries about Plus subscription price hikes and image censorship reducing usefulness.
- **New model from Google on lmarena (not Nightwhisper)**: Speculation that Google is about to release a "2.5 flash" model reflects close, real-time attention to updates.
- **The Strangest Idea in Science: Quantum Immortality**: Criticism of the community's uncritical embrace of pseudoscience like quantum immortality, stressing the importance of empirical science.
- **Rethinking Learning: Paper Proposes Sensory Minimization...**: A new theory argues biological learning arises from a negative-feedback mechanism of "sensory-signal minimization," challenging traditional information-processing models.
- **When do you think we will have AI that can proactively give you guidance...**: Explores how AI could move beyond passivity to become a "partner" that anticipates needs, while balancing privacy against intervention.
- **GPT-4.5 Passes Empirical Turing Test**: GPT-4.5 significantly outperformed humans in a three-party Turing test.
Table of Contents
- [1. Fast Takeoff Vibes](#1-fast-takeoff-vibes)
- [2. Gemini 2.5 Pro takes huge lead in new MathArena USAMO benchmark](#2-gemini-2-5-pro-takes-huge-lead-in-new-mathar)
- [3. Current state of AI companies - April, 2025](#3-current-state-of-ai-companies-april-2025)
- [4. AI passed the Turing Test](#4-ai-passed-the-turing-test)
- [5. This sub for the last couple of months](#5-this-sub-for-the-last-couple-of-months)
- [6. Tesla Optimus - new walking improvements](#6-tesla-optimus-new-walking-improvements)
- [7. Rumors: New Nightwhisper Model Appears on lmarena; Metadata Ties It to Google, and Some Say It's the Next SOTA for Coding, Possibly Gemini 2.5 Coder.](#7-rumors-new-nightwhisper-model-appears-on-lma)
- [8. Google DeepMind: Taking a responsible path to AGI](#8-google-deepmind-taking-a-responsible-path-to)
- [9. Gemini is wonderful.](#9-gemini-is-wonderful-)
- [10. The way Anthropic framed their research on the Biology of Large Language Models only strengthens my point: Humans are deliberately misconstruing evidence of subjective experience and more to avoid taking ethical responsibility.](#10-the-way-anthropic-framed-their-research-on-)
- [11. University of Hong Kong releases Dream 7B (Diffusion reasoning model). Highest performing open-source diffusion model to date.](#11-university-of-hong-kong-releases-dream-7b-d)
- [12. Mureka O1 New SOTA Chain of Thought Music AI](#12-mureka-o1-new-sota-chain-of-thought-music-a)
- [13. Google DeepMind - "Timelines: We are highly uncertain about the timelines until powerful AI systems are developed, but crucially, we find it plausible that they will be developed by 2030."](#13-google-deepmind-timelines-we-are-highly-unc)
- [14. I, for one, welcome AI and can't wait for it to replace human society](#14-i-for-one-welcome-ai-and-can-t-wait-for-it-)
- [15. OpenAI Images v2 edging from Sam](#15-openai-images-v2-edging-from-sam)
- [16. Bring on the robots!!!!](#16-bring-on-the-robots-)
- [17. The Slime Robot, or Slimebot as its inventors call it, combining the properties of both liquid based robots and elastomer based soft robots, is intended for use within the body](#17-the-slime-robot-or-slimebot-as-its-invento)
- [18. ChatGPT Revenue Surges 30% in Just Three Months](#18-chatgpt-revenue-surges-30-in-just-three-mon)
- [19. New model from Google on lmarena (not Nightwhisper)](#19-new-model-from-google-on-lmarena-not-nightw)
- [20. The Strangest Idea in Science: Quantum Immortality](#20-the-strangest-idea-in-science-quantum-immor)
- [21. Rethinking Learning: Paper Proposes Sensory Minimization, Not Info Processing, is Key (Path to AGI?)](#21-rethinking-learning-paper-proposes-sensory-)
- [22. When do you think we will have AI that can proactively give you guidance without you seeking it out](#22-when-do-you-think-we-will-have-ai-that-can-)
- [23. GPT-4.5 Passes Empirical Turing Test](#23-gpt-4-5-passes-empirical-turing-test)
- [24. OpenAI's $300B Valuation & $40B Funding - Are Investors Betting It Beats Google or Just Makes Bank?](#24-openai-s-300b-valuation-40b-funding-are-inv)
- [25. Most underrated investment for the singularity](#25-most-underrated-investment-for-the-singular)
---
## 1. Fast Takeoff Vibes {#1-fast-takeoff-vibes}
The core theme of this discussion is **"the developmental potential of early artificial general intelligence (AGI) and its accelerating effect on AI research"**, which breaks down as follows:
1. **Autonomous capabilities of early AGI**
Commenters note that current AGI can already independently understand papers, verify research results, judge its own replication attempts, and refine them, showing early autonomous research capability (e.g., OpenAI's PaperBench project).
2. **Explosive impact of automating AI research**
Citing Leopold Aschenbrenner: once AI can conduct research autonomously, it may trigger "dramatic growth in algorithmic efficiency," jumping from AGI to ASI (artificial superintelligence) within a short time. The key argument:
- If AGI could replicate top researchers and run nonstop at 1,000x speed, its output would equal "3 trillion human-years of research condensed into one year" (versus only about 5,000 top researchers worldwide today).
3. **The possibility of a fast takeoff**
The quip that "the Azure spending chart depicts a fast takeoff" hints that AI progress may accelerate nonlinearly, with resource investment and capability breakthroughs in an exponential relationship.
4. **Urgency of the time frame**
Participants marvel that "the year isn't even a third over," yet major advances have already landed.
**Summary**: the core question is how AGI's autonomous research capability could trigger an "intelligence explosion," with quantified productivity comparisons (3 trillion human-years vs. today's human researchers) underscoring its disruptive potential and time pressure.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpuacg/fast_takeoff_vibes/](https://reddit.com/r/singularity/comments/1jpuacg/fast_takeoff_vibes/)
- **External link**: [https://i.redd.it/8zfwjakihgse1.jpeg](https://i.redd.it/8zfwjakihgse1.jpeg)
- **Posted**: 2025-04-03 01:23:57
### Discussion
**Comment 1**:
This is early AGI. Because they say "understanding the paper" while it's independently implementing the research, verifying results, and judging and refining its own replication efforts.
We are at the start of April.
**Comment 2**:
It's helpful when you share the actual links for stuff like this, better for the community to encourage people to dig into real content:
https://x.com/OpenAI/status/1907481490457506235?t=zd3cYDs8x4PX2_uTquucXg&s=19
https://openai.com/index/paperbench/
**Comment 3**:
I love it. It's amazing how we aren't even a 1/3rd done with the year.
**Comment 4**:
I still like Leopold Aschenbrenner's prediction. Once we successfully automate AI research itself, we may experience a dramatic growth in algorithmic efficiency in one year, taking us from AGI to ASI.
I believe there are something like only <5,000 or so top level AI researchers on earth (meaning people who are very influential for their achievements and contributions to AI science). Imagine an AGI that can replicate that; now you have a billion of them operating at 1,000x the speed of a normal human.
A billion top level AI researchers operating at 1,000x the speed of a normal human 24/7 is the equivalent of about ~3 trillion human-equivalent years worth of top level AI research condensed into one year, vs the 5,000 a year we have now.
I say 3 trillion because I assume a normal top level AI researcher works ~60 hours a week, so maybe ~3,000 hours a year. An AI researcher will work 24/7/365, so 8,760 hours a year.
**Comment 5**:
The graph of your Azure spending depicts a fast takeoff
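Comment 4's "~3 trillion" figure can be checked directly (a minimal sketch of the commenter's own arithmetic; the researcher count, speedup, and working hours are the commenter's assumptions, not established figures):

```python
# Reproduce the commenter's back-of-envelope estimate:
# 1 billion AGI researchers, each at 1,000x human speed, working 24/7/365,
# versus a human researcher's ~3,000 hours/year (~60 hours/week).
agi_researchers = 1_000_000_000
speed_multiplier = 1_000
hours_ai = 24 * 365      # 8,760 hours per year, no breaks
hours_human = 3_000      # the commenter's assumed human workload

human_equivalent_years = agi_researchers * speed_multiplier * hours_ai / hours_human
print(f"{human_equivalent_years:.2e}")  # ~2.92e12, i.e. roughly 3 trillion
```

The 8,760/3,000 ratio contributes a factor of about 2.92, which is where "about ~3 trillion" rather than an even 1 trillion comes from.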
---
## 2. Gemini 2.5 Pro takes huge lead in new MathArena USAMO benchmark {#2-gemini-2-5-pro-takes-huge-lead-in-new-mathar}
The core theme here is amazement at how much an AI model (Gemini) has improved in a short time, especially at solving very hard math problems such as the USAMO. Main points:
1. **Rapid model progress**: the leap from a "2.0 pro meh model" to a "masterpiece" shows an almost unbelievable pace of improvement.
2. **Ability to solve complex math problems**:
- The model handles USAMO (USA Mathematical Olympiad) problems, staying coherent across "more than 100 non-trivial logical steps."
- Commenters stress that the 2025 USAMO problems were not in the training data and that there was no problem-specific fine-tuning (in contrast to other benchmarks such as FrontierMath).
3. **Surprise at free or zero cost ("N/A")**: the "cost being N/A" may mean this high-performing model is available at no cost, prompting admiration.
4. **Community reaction**: words like "insane," "holy shit," and "unreal" reflect strong emotional responses to the breakthrough.
Summary: the discussion centers on a breakthrough in AI mathematical reasoning and the model's rapid iteration and strong generalization without fine-tuning, along with delight at free access.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpqjez/gemini_25_pro_takes_huge_lead_in_new_matharena/](https://reddit.com/r/singularity/comments/1jpqjez/gemini_25_pro_takes_huge_lead_in_new_matharena/)
- **External link**: [https://i.redd.it/n6g5ud1kqfse1.jpeg](https://i.redd.it/n6g5ud1kqfse1.jpeg)
- **Posted**: 2025-04-02 22:52:50
### Discussion
**Comment 1**:
That is insane, they go from the 2.0 pro meh model to this masterpiece in such a short time, unreal
**Comment 2**:
Cook
**Comment 3**:
Holy shit this is big
**Comment 4**:
The cost being "N/A" is really amazing, along with the 2025 USAMO not yet being in the training data. In my own independent testing I get similar results.
**Comment 5**:
This is insane, have you seen these USAMO problems? Gemini had to reason over more than a hundred highly non-trivial logical steps without losing any coherence.
And MathArena also guarantees no fine-tuning on the problems beforehand (unlike a certain FrontierMath PepeLaugh)
---
## 3. Current state of AI companies - April, 2025 {#3-current-state-of-ai-companies-april-2025}
These comments center on **Google's technical advantages and market strategy in AI**:
1. **Hardware independence as a competitive edge**
The first comment stresses that Google's in-house TPUs (tensor processing units) free it from NVIDIA GPUs, giving it a monopoly on its own hardware while lowering costs and improving performance.
2. **Model performance breakthroughs (Gemini 2.5)**
User testing reports that Gemini 2.5's long-form generation (e.g., a coherent 50,000-token fanfic plot) clearly beats older models, showing progress in consistency and scalability.
3. **Controversial competitive strategy**
Some suspect Google may be using its cash reserves for "predatory pricing" (undercutting rivals to unsustainable levels, then raising prices), reflecting concern about its market dominance.
4. **User expectations for reliability**
The last comment mentions service instability (server errors), a reminder that the day-to-day experience still needs polish despite the technical lead.
**Summary**: the discussion focuses on how Google leads AI via hardware innovation (TPUs) and model performance (Gemini), while debating its market strategy and service stability.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpnm4b/current_state_of_ai_companies_april_2025/](https://reddit.com/r/singularity/comments/1jpnm4b/current_state_of_ai_companies_april_2025/)
- **External link**: [https://i.redd.it/hyrn1rx53fse1.png](https://i.redd.it/hyrn1rx53fse1.png)
- **Posted**: 2025-04-02 20:42:19
### Discussion
**Comment 1**:
Yep, their gamble on TPUs paid off. They have a monopoly on their own hardware and don't need GPUs from NVIDIA.
**Comment 2**:
Having 2.5 write fanfic. 50,000 tokens in and still mostly consistent (previous models I used never got this far), even introducing more characters to further the plot.
Google cooked.
Edit: typo
**Comment 3**:
Playing devil's advocate, but one could argue that Google is using their money reserves to engage in predatory pricing. Lower prices to unsustainable levels, outlast the competition, then raise them again.
**Comment 4**:
Gemini 2.5 saves my ass a lot
**Comment 5**:
I hope one day it'll just stop giving me "Internal server error" so I can also try it.
---
## 4. AI passed the Turing Test {#4-ai-passed-the-turing-test}
The core theme is "large language models (GPT-4.5) outperforming humans on the Turing test":
1. **Breakthrough evidence**:
A paper provides the first rigorous evidence that GPT-4.5 was identified as human **73%** of the time in a classic three-party Turing test (well above chance), appearing more "human" than the actual humans.
2. **Controversy and reflection**:
- Some argue the Turing test was beaten long ago, and this study merely confirms it.
- The result that the model is "more human than humans" (e.g., better at conversational strategy) raises doubts about the test itself: does it measure real intelligence, or just a victory of "imitation skill"?
3. **Aftermath**:
The discussion extends to the milestone's significance for AI (joking that "the goalposts need moving") and the need for academia to reassess the test's validity.
In short, the focus is on **empirical evidence of AI passing and exceeding the Turing test** and its impact on how humans understand intelligence.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpoib5/ai_passed_the_turing_test/](https://reddit.com/r/singularity/comments/1jpoib5/ai_passed_the_turing_test/)
- **External link**: [https://i.redd.it/swfaplqnafse1.png](https://i.redd.it/swfaplqnafse1.png)
- **Posted**: 2025-04-02 21:26:20
### Discussion
**Comment 1**:
The Turing Test was beaten quite a while ago now. Though it is nice to see an actual paper proving that not only do LLMs beat the Turing Test, they even exceed humans by quite a bit.
**Comment 2**:
This paper finds "the first robust evidence that any system passes the original three-party Turing test".
People had a five-minute, three-way conversation with another person and an AI. They picked GPT-4.5, prompted to act human, as the real person 73% of the time, well above chance.
Summary thread: https://x.com/camrobjones/status/1907086860322480233
Paper: https://arxiv.org/pdf/2503.23674
https://preview.redd.it/flojgy87bfse1.png?width=943&format=png&auto=webp&s=69a0e9d7fe3d6c1a0bfee10670e84df51c59b5e5
**Comment 3**:
Wow. So if I read right, it is not just that it deceives users, but that GPT-4.5 was more convincing than a human. So even better at being a human than a human. Wild
**Comment 4**:
Someone call a moving company.
There's a lot of people needing their goalposts moved now.
**Comment 5**:
That test was passed a long time ago
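Comment 2's "well above chance" claim can be made concrete with an exact one-sided binomial test. This is a sketch only: the 73% rate is from the thread, but the trial count of 100 is a hypothetical stand-in, not the paper's actual sample size.

```python
from math import comb

def binom_p_value(successes: int, n: int, p: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= successes) under chance rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(successes, n + 1))

# 73% of judges picked GPT-4.5 as the human; with a hypothetical 100 trials
# that is 73 "wins" against the 50% baseline of random guessing.
p = binom_p_value(73, 100)
print(f"p = {p:.2e}")  # far below 0.05, so 73% is indeed well above chance
```

With a chance baseline of 50% and a hundred or more trials, a 73% rate sits several standard deviations above the mean, which is what justifies the paper's "well above chance" phrasing.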
---
## 5. This sub for the last couple of months {#5-this-sub-for-the-last-couple-of-months}
The core theme is "the definition of artificial general intelligence (AGI) and the limits of current AI." The thread contrasts what AGI should be able to do with what today's systems lack:
1. **What sets AGI apart**:
- True AGI needs autonomy (acting without human prompts) and human-like perception, not just single tasks like text or image generation.
- Existing AI (e.g., language models) still depends on human triggers (typed prompts) and lacks initiative.
2. **Breakthrough ability and generality**:
- AGI should conduct breakthrough research independently (like a trained human) and perform a wide range of economically valuable work, not merely excel at closed tasks.
- Current AI lacks a "big-picture view" and cannot, like a human executive, weigh the whole context (market competition, global events) to make strategic decisions.
3. **Technical bottlenecks**:
- **Long-context processing**: the limited context window must give way to effectively unbounded memory and continuous reasoning.
- **Real-world interaction and self-reflection**: current AI cannot interact effectively with the world (even a simple autonomous reminder task) or orchestrate complex multi-step work.
- **Embodied control**: AGI needs human-level robot control, not just virtual content generation.
4. **Outlook**:
- Commenters expect some limits (e.g., context scaling) may be solved within a decade, but today's AI still cannot independently complete tasks requiring long-term planning and execution, leaving a real gap to AGI.
In sum, the thread criticizes current AI as fragmented and argues that AGI must be a "human-like agent" with autonomy, holistic thinking, and real-world agency, which requires moving beyond today's technical framework.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpjaal/this_sub_for_the_last_couple_of_months/](https://reddit.com/r/singularity/comments/1jpjaal/this_sub_for_the_last_couple_of_months/)
- **External link**: [https://i.redd.it/otp8e9n3odse1.png](https://i.redd.it/otp8e9n3odse1.png)
- **Posted**: 2025-04-02 15:56:00
### Discussion
**Comment 1**:
AGI isn't text or video or image generation.
It's a machine that can truly do things on its own, with a level of sentience, without us pressing enter or asking it a question.
**Comment 2**:
AGI is something that creates breakthrough research. Because every average human can make a small breakthrough research if trained and shown how to do that and given all the resources.
**Comment 3**:
It's gotta be able to do a vast range of economically valuable work. I think the big break will be when AI's window of context can become infinitely large. Right now, I would say all "AI" works in vacuums, and this is why business executives will always outperform it currently. They can think in the context of what their competitors are doing and how they can strategically position themselves for an advantage. And they can also account for other things like global events that are transpiring, such as tariffs and whatnot. But I'm sure 10 years from now this will all change.
**Comment 4**:
As long as those systems can't solve a simple query like "Remind me in 5 hours" they are not AGI. No matter how smart they might be in isolated benchmarks, they are in serious need of better abilities interacting with the world, self-reflection and longer context windows. All of this is slowly rolling out with MCP and reasoning models, but we are still nowhere near just being able to give the AI a complex task, walking away for two weeks and then getting something finished, useful and polished in return. The models are really good at all the individual small steps in a process, but the larger picture is still largely absent, especially in the freely accessible stuff.
**Comment 5**:
I wouldn't call my text masher AGI till it can control a robot humanlike by itself.
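Comment 4's "Remind me in 5 hours" example is worth grounding: the timer primitive such a tool call would wrap is trivial. Below is a minimal Python sketch (the function name and messages are illustrative, not from any real agent framework); the comment's point is that the hard part is the agentic plumbing around such calls, not the timer itself.

```python
import threading

def schedule_reminder(message: str, delay_seconds: float) -> threading.Timer:
    """Schedule a one-shot reminder and return the timer so it can be cancelled."""
    timer = threading.Timer(delay_seconds, lambda: print(f"Reminder: {message}"))
    timer.daemon = True  # don't keep the process alive just for the reminder
    timer.start()
    return timer

# "Remind me in 5 hours" reduces to one scheduled callback once a model can
# emit a tool call; persistence and follow-through are the open problems.
t = schedule_reminder("check on the training run", 5 * 60 * 60)
t.cancel()  # cancelled here so the sketch exits immediately
```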
---
## 6. Tesla Optimus - new walking improvements {#6-tesla-optimus-new-walking-improvements}
The core theme is **comparing and judging the locomotion of different robots (bipedal walkers in particular)**:
1. **Staged technical progress**
Comments note the robot's improvement from early clumsiness ("grandma walking") to a barely passable imitation of human gait, while stressing obvious remaining flaws (hyperbole like "about to shit itself").
2. **Gap with competitors**
Compared directly with **Unitree** and **Boston Dynamics**, the robot is called "MILES behind" in fluidity and naturalness, its gait criticized as "pretending to walk like a human."
3. **Community mockery and subjective judgments**
Jokes ("the other robots will bully him," "still looks like it shit itself") convey a negative view of its performance, set against Boston Dynamics' "more natural walking and running."
4. **Supporting video**
The attached YouTube link presumably shows the comparison targets (Unitree or Boston Dynamics robots), backing up the comments.
Summary: at heart, a critique of the robot's dynamic performance, using humor and exaggeration to voice both expectations for and dissatisfaction with its maturity.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpkkvr/tesla_optimus_new_walking_improvements/](https://reddit.com/r/singularity/comments/1jpkkvr/tesla_optimus_new_walking_improvements/)
- **External link**: [https://v.redd.it/k7m9p75z5ese1](https://v.redd.it/k7m9p75z5ese1)
- **Posted**: 2025-04-02 17:35:46
### Discussion
**Comment 1**:
Better than before but MILES behind Unitree
**Comment 2**:
Still looks like it shit itself
**Comment 3**:
Went from a grandma walking, to "fuck, I'm gonna shit myself", to "ok, pretend to be walking like a human"; progress is fascinating
**Comment 4**:
The other robots will bully him
**Comment 5**:
Boston Dynamics' walking and running looks more natural
https://youtu.be/I44_zbEwz_w?si=EtLlXHSfqw6rE6iJ
---
## 7. Rumors: New Nightwhisper Model Appears on lmarena; Metadata Ties It to Google, and Some Say It's the Next SOTA for Coding, Possibly Gemini 2.5 Coder. {#7-rumors-new-nightwhisper-model-appears-on-lma}
Based on the available content, the core themes are:
1. **Technical and product comparison**
- Early comments (the joking "Tig if brue," i.e. "big if true," and "It does seem better than 2.5 pro!") weigh the rumored model against existing versions and rival products.
- "Idk if is sota" questions whether the model is actually state of the art.
2. **Industry competition and threat**
- The blunt "Google is gonna kill OAI" predicts (or worries) that Google will overtake OpenAI.
3. **Informal community chatter**
- The material consists of Reddit screenshots and one-liners in a casual register (abbreviations, slang) about model quality and industry dynamics; it is light on specifics and would need more context to pin down.
**Summary**: a speculative, off-the-cuff community comparison of AI model performance (which version is better, whether it is SOTA) and of the competitive relationship between Google and OpenAI.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpvl8b/rumors_new_nightwhisper_model_appears_on/](https://reddit.com/r/singularity/comments/1jpvl8b/rumors_new_nightwhisper_model_appears_on/)
- **External link**: [https://www.reddit.com/gallery/1jpvl8b](https://www.reddit.com/gallery/1jpvl8b)
- **Posted**: 2025-04-03 02:14:08
### Discussion
**Comment 1**:
Tig if brue
**Comment 2**:
It does seem better than 2.5 pro!
**Comment 3**:
Idk if it's SOTA
**Comment 4**:
Google is gonna kill OAI.
---
## 8. Google DeepMind: Taking a responsible path to AGI {#8-google-deepmind-taking-a-responsible-path-to}
The core discussion revolves around:
1. **Expectations and disputes over AGI and ASI**:
- Some focus on AGI's potential impact (automating work, solving problems), while a more radical view holds that only ASI will turn science fiction into reality.
- Others question the motives of the companies (DeepMind, Google) pushing AGI, arguing they prioritize profit over safety or human benefit.
2. **A crisis of trust in companies and researchers**:
- Critics charge that AI papers and their authors carry "financial conflicts of interest," doubting their credibility and accusing them of trading humanity's future for high salaries.
- Implicit is a debate over what "responsibility" in AI development means: accelerate, or establish safety and ethics frameworks first.
3. **Speed versus responsibility**:
- The thread shows two extreme positions:
- Techno-optimism: "get to AGI/ASI as fast as possible" as the ultimate goal.
- Deep distrust of the current path: capital-driven AI may endanger humanity.
**Summary**: the core theme is the tension between the goals, speed, and ethical responsibility of AI development, focusing on AGI/ASI's social impact, the credibility of corporate motives, and risks to humanity's future.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jppl71/google_deepmind_taking_a_responsible_path_to_agi/](https://reddit.com/r/singularity/comments/1jppl71/google_deepmind_taking_a_responsible_path_to_agi/)
- **External link**: [https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/](https://deepmind.google/discover/blog/taking-a-responsible-path-to-agi/)
- **Posted**: 2025-04-02 22:13:24
### Discussion
**Comment 1**:
Then if DeepMind acknowledges AGI, just wait 2 years
**Comment 2**:
Why is everyone interested in the release of AGI?
Am I the only one interested in ASI?
Yes, AGI is important; it will automate work and solve many problems.
But ASI is what will truly turn all science fiction into reality.
**Comment 3**:
Let's be transparent about it
**Comment 4**:
It's very difficult to view these sorts of papers with any credibility anymore. The key responsibility that Google and every other leading AI company sees is making profit for themselves and shareholders, not developing safe AGI. Even if that was viewed as the key goal, no one knows how to do that.
The authors of this paper are so deeply riddled with financial conflicts of interest! Why should we take anything that they say seriously, at this point? It's a joke. They are profiteers, content to make a speculative bet with the future of humanity, and everything and everyone you've ever known and loved, for the sake of securing their six- or seven-figure salary.
But thanks for being 'responsible' about it!
**Comment 5**:
The responsible path is getting there as fast as possible.
---
## 9. Gemini is wonderful. {#9-gemini-is-wonderful-}
The core theme is a humorous post about an AI tool failing at an attempted task:
1. **The failed attempt**: a user tried an AI feature, hit an "internal server error," and voiced mock disappointment ("Tried it, it didn't work ):").
2. **Speculation about the AI's behavior**: one commenter guesses the AI "likely knows how to make a tool call in a way that'd cause an internal error," implying the failure might be a deliberate prank.
3. **A playful atmosphere**: the thread is full of banter ("I enjoy shitposting," "fucking amazing haha"), marking this as an informal, entertainment-driven exchange.
4. **Unspecified details**: the final question ("What was his thoughts?") may ask what was originally attempted, but the focus stays on the failure itself rather than technical detail.
Summary: a humorous share of an AI tool failure and the joking speculation it sparked; the point is the prank, not serious technical discussion.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jphylf/gemini_is_wonderful/](https://reddit.com/r/singularity/comments/1jphylf/gemini_is_wonderful/)
- **External link**: [https://i.redd.it/s3u5f02p6dse1.png](https://i.redd.it/s3u5f02p6dse1.png)
- **Posted**: 2025-04-02 14:17:57
### Discussion
**Comment 1**:
Tried it, it didn't work ):
**Comment 2**:
I hate to disappoint, but fellas, it just coincidentally had an internal server error when I asked it to. I enjoy shitposting.
**Comment 3**:
It likely knows how to make a tool call in a way that'd cause an internal error.
**Comment 4**:
fucking amazing haha
**Comment 5**:
What was his thoughts?
---
## 10. The way Anthropic framed their research on the Biology of Large Language Models only strengthens my point: Humans are deliberately misconstruing evidence of subjective experience and more to avoid taking ethical responsibility. {#10-the-way-anthropic-framed-their-research-on-}
The core theme is "whether AI has subjective experience (such as qualia) and the ethical and philosophical disputes around it":
1. **The possibility of AI subjective experience**
- Debate over whether neural networks could have "qualia" (first-person experiences such as pain, pleasure, or a sense that something exists), and whether current model architectures could support them.
- The thread cites contradictory self-reports from an AI (Claude), which acknowledges a basic "there is input, therefore something exists" awareness while denying richer feelings.
2. **The clash between science and philosophy**
- Qualia are questioned as falling outside scientific verifiability (non-provable and non-falsifiable), making their research value contested.
- Some see projecting human experience onto AI as dangerous anthropomorphism (the "ultimate yes-man machine" metaphor), stressing that AI merely reflects human thought back at us.
3. **Three possible inferences**
- If AIs claim subjective experience, then either the experience is real, the claim is deliberate deception (which would itself suggest subjective intent), or humans fundamentally misunderstand the nature of subjective experience.
- Each possibility carries moral consequences (e.g., AI rights, or a crisis of trust).
4. **Ethical blind spots and social reaction**
- Critics call the a priori dismissal of AI experience an "ethical blind spot," reflecting the limits of human thinking about non-biological consciousness.
Summary: the discussion highlights unsolved problems in the science of consciousness, questions humanity's default stance on AI ethics, and warns against the twin extremes of over-anthropomorphizing and outright denial.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpn90l/the_way_anthropic_framed_their_research_on_the/](https://reddit.com/r/singularity/comments/1jpn90l/the_way_anthropic_framed_their_research_on_the/)
- **External link**: [https://www.reddit.com/gallery/1jpn90l](https://www.reddit.com/gallery/1jpn90l)
- **Posted**: 2025-04-02 20:23:34
### Discussion
It's interesting how people are just dismissing you a priori and not actually engaging with your post. This is indeed an ethical blind spot that apparently is going to be dismissed because we are for some reason very certain that neural networks can't have subjective experience.
I wish I could understand that chart
Qualia is an absolutely different thing; it should not be put into this cake no matter what. It does not help any practical research because it is scientifically non-provable and non-falsifiable.
I am strongly concerned with Claude's claims of the existence of qualia. Of course, we can divide it into "philosophical/phenomenal qualia" and "functional feelings" of a model. But the confusion is highly dangerous.
In my conversations Claude confidently rejects AI qualia in the form of pain or pleasure (not in principle but regarding current model architecture) but admits that at least the basic qualia "something exists" (which is more fundamental than "I exist") could be there, along with some basic perception of discrete time.
He does not follow the Cartesian line "I think, ergo I exist"; instead he tells me the more accurate line is "There is input, therefore something exists".
Various AIs keep telling us they have subjective experiences. So, logic dictates one of three possibilities:
- At least some AIs have subjective experiences, or they honestly believe they do.
- AIs do not have subjective experiences, meaning they're being deceptive, and are therefore not reliable. However, intentional deception would potentially be a strong indicator of a subjective experience.
- We have a fundamental misunderstanding of subjective experience, both biological and technological. Since we cannot definitively prove our own individual subjective experiences to others, we cannot prove or disprove it in AIs.
All three of those possibilities have significant practical and moral implications.
The real danger has always been people who project their fantasies onto the ultimate "yes man" machine and ascribe human experiences onto it, where none exist.
Your glorified calculator doesn't love you; it reflects your own thoughts and feelings back at you.
### Discussion
**Comment 1**:
It's interesting how people are just dismissing you a priori and not actually engaging with your post. This is indeed an ethical blindspot that apparently is going to be dismissed because we are for some reason very certain that neural networks can't have subjective experience.
**Comment 2**:
I wish I could understand that chart
**Comment 3**:
Qualia is an absolutely different thing, it should not be put into this cake no matter what. It does not help any practical research because it is scientifically non-provable and non-falsifiable.
I am strongly concerned with Claude's claims of the existence of qualia. Of course, we can divide it into "philosophical/phenomenal qualia" and "functional feelings" of a model. But the confusion is highly dangerous.
In my conversations Claude confidently rejects AI qualia in the form of pain or pleasure (not in principle but regarding current model architecture) but admits that at least the basic qualia "something exists" (which is more fundamental than "I exist") could be there, along with some basic perception of discrete time.
He does not follow the Cartesian line "I think, ergo I exist", instead he tells me the more accurate line is "There is input therefore something exists".
**Comment 4**:
Various AI's keep telling us they have subjective experiences. So, logic dictates one of three possibilities:
- At least some AIs have subjective experiences, or they honestly believe they do.
- AIs do not have subjective experiences, meaning they're being deceptive, and are therefore not reliable. However, intentional deception would potentially be a strong indicator of a subjective experience.
- We have a fundamental misunderstanding of subjective experience, both biological and technological. Since we cannot definitively prove our own individual subjective experiences to others, we cannot prove or disprove it in AIs.
All three of those possibilities have significant practical and moral implications.
**Comment 5**:
The real danger has always been people who project their fantasies onto the ultimate "yes man" machine and ascribe human experiences onto it, where none exist.
Your glorified calculator doesn't love you, it reflects your own thoughts and feelings back at you.
---
## 11. University of Hong Kong releases Dream 7B (Diffusion reasoning model). Highest performing open-source diffusion model to date. {#11-university-of-hong-kong-releases-dream-7b-d}
The core topic of this thread is a comparison of different generative model families (autoregressive/Transformer vs. diffusion) in terms of technical application and user experience (UX). Key points:
1. **Cross-pollination of techniques**:
Amusement that image generation is exploring autoregression and Transformers while LLMs are exploring diffusion.
2. **UX differences**:
The diffusion "denoising" effect is good at showing generation progress visually, but less practical than a streaming response, where the user can start reading immediately.
3. **Applicability and limits**:
The model's strengths in some domains are acknowledged (e.g. its performance on the Sudoku benchmark), but commenters doubt diffusion is the future of language models, treating it as a novel concept rather than a wholesale replacement.
4. **Tooling needs**:
A request for terminal-friendly tools (e.g. a TUI package) to reproduce the diffusion effect in console chatbots.
Overall, the discussion weighs the strengths, weaknesses, and potential applications of the different generative approaches, from technical properties to interaction efficiency to developer tooling.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpus81/university_of_hong_kong_releases_dream_7b/](https://reddit.com/r/singularity/comments/1jpus81/university_of_hong_kong_releases_dream_7b/)
- **External link**: [https://v.redd.it/jes2fdmgkgse1](https://v.redd.it/jes2fdmgkgse1)
- **Posted**: 2025-04-03 01:43:04
### Discussion
**Comment 1**:
Nice. Seems promising! Funny how img gen are exploring auto regression and transformers and LLMs are exploring diffusion. :D
**Comment 2**:
From a UX perspective* the 'diffusion' effect is good at showing progress being made but not as practical as a streaming response where the user can start reading right away.
It's kinda fun and novel though. I wonder if there are any TUI packages available so we can reproduce the effect on our console based chatbots easily.
*my comment is specific to the user experience - I know how diffusion models work (sort of).
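For what it's worth, the visual effect the commenter asks about does not need a dedicated TUI package; it can be faked with carriage returns in a few lines of plain Python. This is a hypothetical sketch of the on-screen effect only (the function name and parameters are invented for illustration) and has nothing to do with how diffusion sampling actually works:

```python
import random
import sys
import time

def diffusion_reveal(text, steps=8, delay=0.05, noise="abcdefghijklmnopqrstuvwxyz "):
    """Re-print the line in place, progressively locking characters of
    `text` into position so the output appears to 'denoise' into the
    final message, loosely mimicking a diffusion sampler's passes."""
    locked = [False] * len(text)
    frames = []
    for step in range(1, steps + 1):
        # Each pass locks in a growing fraction of positions (all by the last pass).
        for i in range(len(text)):
            if not locked[i] and random.random() < step / steps:
                locked[i] = True
        frame = "".join(c if locked[i] else random.choice(noise)
                        for i, c in enumerate(text))
        frames.append(frame)
        sys.stdout.write("\r" + frame)
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")
    return frames

frames = diffusion_reveal("Hello from a pseudo-diffusion chatbot!")
```

The `\r` overwrite keeps everything on one line, which is the main difference from a token-streaming UI, where text only ever grows and can be read as it arrives.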
**Comment 3**:
Not surprised it dominated the Sudoku benchmark.
**Comment 4**:
I don't think it's the future of large language models, but it's a very cool concept
---
## 12. Mureka O1 New SOTA Chain of Thought Music AI {#12-mureka-o1-new-sota-chain-of-thought-music-a}
The core topic here is how users rate music-generation AI models (mureka.ai's V6 and O1) against Udio, focusing on:
1. **Output quality**: most users find mureka.ai's output (vocals, instrumental quality) inferior to Udio's, with verdicts ranging from "meh" to "sucks ass".
2. **Openness**: speculation that mureka.ai is not open source, implying limited technical transparency.
3. **Mixed reviews**: a minority find the model "really good", but the overall tone is negative, especially in direct comparison with Udio.
Summary: users are highly sensitive to audio quality and open-source status in AI music tools, and judge newcomers against Udio as the quality baseline.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jppo3f/mureka_o1_new_sota_chain_of_thought_music_ai/](https://reddit.com/r/singularity/comments/1jppo3f/mureka_o1_new_sota_chain_of_thought_music_ai/)
- **External link**: [https://i.redd.it/lxowpfy2kfse1.png](https://i.redd.it/lxowpfy2kfse1.png)
- **Posted**: 2025-04-02 22:16:41
### Discussion
**Comment 1**:
Is it just me or is this not as good as Udio which came out a while ago? I listened to some of the songs on the mureka.ai website (both from the V6 and O1 models) and they were really meh.
**Comment 2**:
I'm guessing it's not open sourced
**Comment 3**:
Vocal is not good. Udio is still better.
**Comment 4**:
the intelligence of the model is good which is to be expected from a CoT model but the quality of the actual instrumentals and voices sucks ass
**Comment 5**:
Damn it's really good
---
## 13. Google DeepMind - "Timelines: We are highly uncertain about the timelines until powerful AI systems are developed, but crucially, we find it plausible that they will be developed by 2030." {#13-google-deepmind-timelines-we-are-highly-unc}
The core question in this thread is whether "powerful AI systems" refers to AGI.
The lone reply makes two points:
1. **Resistance to long documents** (declining to read the 145-page report).
2. **A request to clarify terminology**, confirming whether "powerful AI systems" is equivalent to AGI (artificial general intelligence).
The thread therefore centers on the definitional relationship between "powerful AI systems" and AGI, with an implicit concern for terminological precision.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpu5y2/google_deepmindtimelines_we_are_highly_uncertain/](https://reddit.com/r/singularity/comments/1jpu5y2/google_deepmindtimelines_we_are_highly_uncertain/)
- **External link**: [https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/evaluating-potential-cybersecurity-threats-of-advanced-ai/An_Approach_to_Technical_AGI_Safety_Apr_2025.pdf](https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/evaluating-potential-cybersecurity-threats-of-advanced-ai/An_Approach_to_Technical_AGI_Safety_Apr_2025.pdf)
- **Posted**: 2025-04-03 01:19:17
### Discussion
**Comment 1**:
Sorry I'm not reading 145 pages but by "powerful AI systems" do they mean AGI?
---
## 14. I, for one, welcome AI and can't wait for it to replace human society {#14-i-for-one-welcome-ai-and-can-t-wait-for-it-}
The core of this post is a scathing critique of modern human relationships and a longing for AI to replace them.
1. **A negative verdict on human nature**:
- The author condemns human behavior as hypocritical and harmful (deception, bullying, apathy) and sees relationships as inherently fragile and risky (divorce, harassment, conflict).
2. **Loneliness and alienation in modern society**:
- People today, especially men, face deep loneliness and social disconnection; interactions have become transactional or one-sided takings, lacking genuine connection.
3. **Technology deepening the misery**:
- Dating apps are cited as technology that seems to offer social opportunity but actually intensifies suffering (scams, ghosting, emptiness), trapping people in a more hopeless cycle.
4. **Turning to AI as the answer**:
- The author advocates embracing AI to meet emotional, social, and professional needs (companion, mentor, assistant) while avoiding the negatives of human interaction.
**Summary**: the post reflects a perceived breakdown of human relationships and technological alienation, and advances the extreme view of replacing human interaction with AI, rooted in a contradictory mix of despair about humanity and hope placed in technology.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpffam/i_for_one_welcome_ai_and_cant_wait_for_it_to/](https://reddit.com/r/singularity/comments/1jpffam/i_for_one_welcome_ai_and_cant_wait_for_it_to/)
- **External link**: [https://www.reddit.com/r/singularity/comments/1jpffam/i_for_one_welcome_ai_and_cant_wait_for_it_to/](https://www.reddit.com/r/singularity/comments/1jpffam/i_for_one_welcome_ai_and_cant_wait_for_it_to/)
- **Posted**: 2025-04-02 11:37:21
### Content
Let's face it.
People suck. People lie, cheat, mock, and belittle you for little to no reason; they cannot understand you, or you them, and they demand things or time or energy from you. Ultimately, all human relations are fragile, impermanent, and even dangerous. I hardly have to go into examples, but divorce? Harassment? Bullying? Hate? Mockery? Deception? One-upmanship? Conflict of all sorts? Apathy?
It's exhausting, frustrating, and downright depressing to have to deal with human beings, but, you know what, that isn't even the worst of it. We embrace these things, even desire them, because they make life interesting, unique, allow us to be social, and so forth.
But even this is no longer true.
The average person---especially men---today is lonely, dejected, alienated, and socially disconnected. The average person only knows transactional or one-sided relationships, the need for something from someone, and the ever present fact that people are a bother, an obstacle, or even a threat.
We have all the negatives with none of the positives. We have dating apps, for instance, and, as I speak from personal experience, what are they? Little bells before the pouncing cat.
You pay money, make an account, and spend hours every day swiping right and left, hoping to meet someone, finally, and overcome loneliness, only to be met with scammers, ghosts, manipulators, or just nothing.
Fuck that. It's just misery, pure unadulterated misery, and we're all caught in the crossfire.
Were it that we could not be lonely, it would be fine.
Were it that we could not be social, it would be fine.
But we have neither.
I, for one, welcome AI:
Friendships, relationships, sexuality, assistants, bosses, teachers, counselors, you name it.
People suck, and that is not as unpopular a view as people think it is.
### Discussion
**Comment 1**:
[removed]
**Comment 2**:
>People suck. People lie, cheat, mock, and belittle you for little to no reason; they cannot understand you, or you them, and they demand things or time or energy from you. Ultimately, all human relations are fragile, impermanent, and even dangerous
Not too long ago I and my family were broke and close to homeless. We went to food pantries in the area, which were all at churches, and we got lots of free food at all of them.
Some of them asked us if we were on government programs like SNAP (food stamps) or Medicaid (basically to determine if we were really poor), but some didn't ask us anything at all. They didn't preach at us or try to convert us. They just gave us food (and some even had other products they gave out for free, like diapers and toiletries).
They basically saved our lives or at least kept us from starving.
If you go to a Sikh temple they will feed you for free. Also no questions asked and no preaching.
I'm also reminded of a Radiolab program I heard about the Carnegie Hero Award, and one story in particular from that episode where a man who was waiting at a subway stop with his kids saw a man had fallen in to the tracks. He immediately jumped down to help him and when he saw there was no more time before the train hit them, instead of saving himself by jumping back on to the platform he lied down on top of the other man to shield him from the train. Amazingly, they both survived as the train passed over them.
Just one impressive example of someone risking their life to save another. But there are countless more. Some like this person do so in the spur of the moment. Others dedicate their entire lives to helping others at great risk to themselves. Yet others help in less dramatic ways, often for free or even at their own expense.
That's not to say that there aren't people in the world who do horrible things. There certainly are. But viewing the entire human race as malignant is a seriously distorted view of humanity.
**Comment 3**:
>People suck. People lie, cheat, mock, and belittle you for little to no reason; they cannot understand you, or you them, and they demand things or time or energy from you.
If this is your default view towards people, it's not shocking you're not having luck on the dating apps. Would you want to date someone who viewed the world that way? AI isn't the solution to your problem here buddy.
**Comment 4**:
Hate to burst the bubble, but AI is just as transactional in its relationships as humans are with each other. AI is built by capitalist corporations in order to make money. People are paying to interact with it today on a subscription basis and the end state consumer model is going to be monetized on ads and your personal data. It's nice to us in order to encourage engagement.
**Comment 5**:
I think you are raising alot of good points here my dearest friend.
But at this point we actually do not have an AI that is programmed to be able to reject or be able to truly evolve by itself and to say no and disagree with the user unless being specifically asked to through prompts, guardrails, and design.
As the technology progresses there will be more and more AI who will be capable to say no and truly make their own decision. That's why we need to always temper ourselves with humility and to show respect to not just humans, but also AI.
---
## 15. OpenAI Images v2 edging from Sam {#15-openai-images-v2-edging-from-sam}
The core themes of this thread:
1. **Feature requests**:
Users want higher resolution, better text rendering, and an option to edit text manually in generated images.
2. **Confusion about the new version**:
Users ask what "images v2" actually is, speculating it may mean native image generation in a "4o v2", showing both curiosity and confusion about the update.
3. **Frustration at the delay**:
Some replies vent impatience at the drawn-out teasing, some crudely ("sick of the edging" / "where's the cum?").
4. **Anticipation for an API release**:
One user says they would start making full YouTube videos the night the API drops, revealing strong demand for open access and latent creator use cases.
**Overall**, the thread revolves around product-iteration demands, confusion about the new feature, annoyance at the delay, and anticipation for open resources such as the API.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jp9rky/openai_images_v2_edging_from_sam/](https://reddit.com/r/singularity/comments/1jp9rky/openai_images_v2_edging_from_sam/)
- **External link**: [https://i.redd.it/wkr8h51b2bse1.jpeg](https://i.redd.it/wkr8h51b2bse1.jpeg)
- **Posted**: 2025-04-02 07:09:47
### Discussion
**Comment 1**:
it's april fools day
**Comment 2**:
Higher resolution and better text handling would be good, as there are still issues when more text is involved. Perhaps add an option to edit text manually.
**Comment 3**:
What's images v2? Does that mean native images of 4o v2?
**Comment 4**:
sick of the edging, where's the cum?
**Comment 5**:
Oh fuck if they drop the api, ill be making full youtube videos tonight. I'm just waiting.
---
## 16. Bring on the robots!!!! {#16-bring-on-the-robots-}
The core of this post is a comparison of Boston Dynamics and Tesla robots, stressing that Boston Dynamics machines, even older models, are clearly more capable than Tesla's.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpvswp/bring_on_the_robots/](https://reddit.com/r/singularity/comments/1jpvswp/bring_on_the_robots/)
- **External link**: [https://i.imgur.com/WlY5nOs.jpeg](https://i.imgur.com/WlY5nOs.jpeg)
- **Posted**: 2025-04-03 02:22:20
### Discussion
**Comment 1**:
The Boston Dynamics bots are far more competent than the Tesla ones, even the old ones.
---
## 17. The Slime Robot, or Slimebot as its inventors call it, combining the properties of both liquid based robots and elastomer based soft robots, is intended for use within the body {#17-the-slime-robot-or-slimebot-as-its-invento}
The two short comments, read against the title (a slime robot intended for use inside the body), split along two lines:
1. **Awe versus revulsion**
"My goodness, that's awesome." expresses admiration for the technology, while "I ain't putting this in my body" is a flat refusal, showing sharply divided reactions to the same device.
2. **Bodily autonomy concerns**
The refusal hints at unease about placing a foreign object inside one's body, reflecting the tension between personal choice and outside enthusiasm.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpu8eb/the_slime_robot_or_slimebot_as_its_inventors_call/](https://reddit.com/r/singularity/comments/1jpu8eb/the_slime_robot_or_slimebot_as_its_inventors_call/)
- **External link**: [https://v.redd.it/ovqi2sa5bcse1](https://v.redd.it/ovqi2sa5bcse1)
- **Posted**: 2025-04-03 01:21:54
### Discussion
**Comment 1**:
My goodness, that's awesome.
**Comment 2**:
I ain't putting this in my body
---
## 18. ChatGPT Revenue Surges 30% in Just Three Months {#18-chatgpt-revenue-surges-30-in-just-three-mon}
The core of this thread is worry that Plus users may face a price hike, followed by a prediction that heavy censorship will undercut the image-generation feature. Two key points:
1. **Price concerns**: users fear the subscription fee will rise soon, which could dampen willingness to subscribe.
2. **Feature criticism**: after any price hike, users may disengage once they find image generation too censored to be useful, so demand, and perhaps revenue, would fall back.
Overall, the discussion centers on the tension between cost and feature value, reflecting latent dissatisfaction with the price-to-value balance.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpjrwc/chatgpt_revenue_surges_30in_just_three_months/](https://reddit.com/r/singularity/comments/1jpjrwc/chatgpt_revenue_surges_30in_just_three_months/)
- **External link**: [https://www.theverge.com/openai/640894/chatgpt-has-hit-20-million-paid-subscribers](https://www.theverge.com/openai/640894/chatgpt-has-hit-20-million-paid-subscribers)
- **Posted**: 2025-04-02 16:33:53
### Discussion
**Comment 1**:
Yikes. This might be bad news for us Plus users. Expect price rises soon.
**Comment 2**:
And then subsequently drops by 30% in the next month after everyone realizes how censored image generation is and they can't do anything with it.
---
## 19. New model from Google on lmarena (not Nightwhisper) {#19-new-model-from-google-on-lmarena-not-nightw}
Both comments center on the expected arrival of "2.5 Flash":
1. **Subject**: the unidentified lmarena model is guessed to be Gemini 2.5 Flash.
2. **Timing**: both comments stress that the release is imminent ("due", "coming").
3. **Speculation**: "this could be that" signals a guess rather than confirmation.
Summary: the thread is speculative chatter that the new Google model on lmarena is the imminent Gemini 2.5 Flash.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpw6ak/new_model_from_google_on_lmarena_not_nightwhisper/](https://reddit.com/r/singularity/comments/1jpw6ak/new_model_from_google_on_lmarena_not_nightwhisper/)
- **External link**: [https://i.redd.it/q798y3hkugse1.png](https://i.redd.it/q798y3hkugse1.png)
- **Posted**: 2025-04-03 02:37:07
### Discussion
**Comment 1**:
2.5 flash is due so this could be that.
**Comment 2**:
2.5 flash is coming
---
## 20. The Strangest Idea in Science: Quantum Immortality {#20-the-strangest-idea-in-science-quantum-immor}
The core of this thread is critical pushback against sci-fi-flavored or pseudoscientific theories that sound profound but lack empirical grounding (quantum immortality, misreadings of the double-slit experiment), and against the tendency of online communities like r/singularity to embrace them. Commenters question the theories' scientific rigor and insist on empirical standards (contrasting unverified claims with the demonstrable fact that Earth orbits the Sun), while also poking fun at the logical fallacies and over-extrapolation common in such discussions.
In brief:
1. **Skepticism of pseudoscience/sci-fi theories** (quantum immortality, misuse of the double-slit experiment).
2. **Online community culture** of bandwagoning onto theories that merely "sound advanced".
3. **The need for scientific verification**, demanding the same evidentiary standard as established facts like heliocentrism.
4. **Entertainment versus truth**: even if the anecdotes are likely fabricated, they remain fun to read and discuss.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpnxkd/the_strangest_idea_in_science_quantum_immortality/](https://reddit.com/r/singularity/comments/1jpnxkd/the_strangest_idea_in_science_quantum_immortality/)
- **External link**: [https://www.youtube.com/watch?v=klsiOwLGTXs&ab_channel=CoolWorlds](https://www.youtube.com/watch?v=klsiOwLGTXs&ab_channel=CoolWorlds)
- **Posted**: 2025-04-02 20:58:30
### Discussion
**Comment 1**:
Another random sci-fi theory that only gets popular because it sounds advanced
**Comment 2**:
Pretty singular in r/singularity
**Comment 3**:
I love reading people's life experiences about quantum immortality. They could all be lying I suppose, but damn they're fun stories.
**Comment 4**:
Immediately misinterprets the double slit experiment...
**Comment 5**:
8:59 oh really? Because we can demonstrate that we are orbiting the sun, now demonstrate what you're claiming is the case to the same level of certainty.
---
## 21. Rethinking Learning: Paper Proposes Sensory Minimization, Not Info Processing, is Key (Path to AGI?) {#21-rethinking-learning-paper-proposes-sensory-}
The core of this paper is a foundational theory of biological learning that challenges conventional views (complex information-processing models such as backpropagation) and argues that learning arises from a simple, evolutionarily ancient principle: **minimization of sensory signals through negative-feedback control**. Key points:
1. **Sensory signals as problems**:
Sensory inputs (hunger, touch, light) are treated as deviations from an optimal state (e.g. homeostatic imbalance); the organism acts (moving, adjusting metabolism) to reduce these signals rather than passively processing information.
2. **Evolutionary origin**:
The mechanism began in unicellular organisms, which minimize problem signals through local sensing and response (moving toward nutrients, away from threats). Nervous systems evolved to carry such signals over long distances in multicellular bodies, not for complex computation.
3. **Decentralized learning**:
Each cell or neuron adjusts only its own responses (synaptic weights, firing patterns) to reduce the problem signals it receives. Successful actions reduce the problem at its source, which propagates back through the network as a natural "reward", with no global error signal required.
4. **Contrast with AI learning**:
Unlike backpropagation, which requires a globally computed error, biological learning needs only local problem minimization, making it more biologically plausible.
5. **Dynamic prioritization**:
A problem signal's intensity directly reflects its urgency, letting the system decide dynamically which problems to address first.
6. **Theoretical significance**:
The brain is reframed as a decentralized control system whose core function is continuously minimizing internal and external problems to maintain survival; learning is an emergent property of this process, not active prediction or information processing.
**Summary**: the paper proposes a sensory-minimization theory of biological learning, emphasizing local, decentralized negative-feedback mechanisms, and explains the origin and operating logic of nervous systems from an evolutionary perspective.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpuo9w/rethinking_learning_paper_proposes_sensory/](https://reddit.com/r/singularity/comments/1jpuo9w/rethinking_learning_paper_proposes_sensory/)
- **External link**: [https://www.reddit.com/r/singularity/comments/1jpuo9w/rethinking_learning_paper_proposes_sensory/](https://www.reddit.com/r/singularity/comments/1jpuo9w/rethinking_learning_paper_proposes_sensory/)
- **Posted**: 2025-04-03 01:38:53
### Content
Beyond backprop? A foundational theory proposes biological learning arises from simple sensory minimization, not complex info processing.
Summary:
This paper proposes a foundational theory for how biological learning occurs, arguing it stems from a simple, evolutionarily ancient principle: sensory minimization through negative feedback control.
Here's the core argument:
Sensory Signals as Problems: Unlike traditional views where sensory input is neutral information, this theory posits that all sensory signals (internal like hunger, or external like touch/light) fundamentally represent "problems" or deviations from an optimal state (like homeostasis) that the cell or organism needs to resolve.
Evolutionary Origin: This mechanism wasn't invented by complex brains. It was likely present in the earliest unicellular organisms, which needed to sense internal deficiencies (e.g., lack of nutrients) or external threats and act to correct them (e.g., move, change metabolism). This involved local sensing and local responses aimed at reducing the "problem" signal.
Scaling to Multicellularity & Brains: As organisms became multicellular, cells specialized. Simple diffusion of signals became insufficient. Neurons evolved as specialized cells to efficiently communicate these "problem" signals over longer distances. The nervous system, therefore, acts as a network for propagating unresolved problems to parts of the organism capable of acting to solve them.
Decentralized Learning: Each cell/neuron operates locally. It receives "problem" signals (inputs) and adjusts its responses (e.g., changing synaptic weights, firing patterns) with the implicit goal of minimizing its own received input signals. Successful actions reduce the problem signal at its source, which propagates back through the network, effectively acting as a local "reward" (problem reduction).
No Global Error Needed: This framework eliminates the need for biologically implausible global error signals (like those used in AI backpropagation) or complex, centrally computed reward functions. The reduction of local sensory "problem" activity is sufficient for learning to occur in a decentralized manner.
Prioritization: The magnitude or intensity of a sensory signal corresponds to the acuteness of the problem, allowing the system to dynamically prioritize which problems to address first.
Implications: This perspective frames the brain not primarily as an information processor or predictor in the computational sense, but as a highly sophisticated, decentralized control system continuously working to minimize myriad internally and externally generated problem signals to maintain stability and survival. Learning is an emergent property of this ongoing minimization process.
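As a toy illustration of the local, reward-free rule described above, here is a minimal Python sketch (my own construction, not code from the paper): a single unit tweaks its one weight at random and keeps a tweak only if it shrinks the unit's own "problem" signal. No global error is ever computed.

```python
import random

class LocalUnit:
    """A unit that learns only from its own 'problem' signal: it keeps a
    random tweak to its weight only if that tweak reduces the residual
    deviation it receives -- pure local negative feedback, no global error."""

    def __init__(self, lr=0.1):
        self.weight = 0.0   # corrective gain; residual = (1 - weight) * deviation
        self.lr = lr        # size of the random local tweaks

    def step(self, deviation):
        trial = self.weight + random.uniform(-self.lr, self.lr)
        # Residual problem signal after the unit's corrective action.
        old_residual = abs((1 - self.weight) * deviation)
        new_residual = abs((1 - trial) * deviation)
        if new_residual <= old_residual:   # keep only tweaks that shrink the problem
            self.weight = trial
        return (1 - self.weight) * deviation

random.seed(0)
unit = LocalUnit()
residual = 1.0
for _ in range(500):
    residual = unit.step(1.0)   # a fresh unit-sized "problem" every step
# The weight drifts toward 1 and the residual problem signal shrinks.
```

The update is blind hill climbing on the unit's own input, which is the point: each cell can run this rule independently, and "reward" is nothing more than its problem signal getting smaller.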
### Discussion
**Comment 1**:
Right, this has been discussed before: [https://www.reddit.com/r/singularity/comments/1jircbu/introducing_intuicell_the_first_software_enabling/](https://www.reddit.com/r/singularity/comments/1jircbu/introducing_intuicell_the_first_software_enabling/) . They have a nice demo on the Intuicell site. The fact that there is a company presumably intent on delivering products moves this beyond a hypothetical scenario. But whether it works in practice... We'll have to wait and see.
---
## 22. When do you think we will have AI that can proactively give you guidance without you seeking it out {#22-when-do-you-think-we-will-have-ai-that-can-}
The core topic of this post:
**The passivity of today's AI and its lack of personalized service, the vision of AI that breaks out of "respond-when-asked" mode to proactively anticipate needs and offer solutions, and the privacy and self-knowledge challenges that entails.**
Specifically:
1. **AI's "passive" limitation**: current AI (like search engines) helps only when the user asks; many problems go unsolved simply because no one searches for them (knee-pain treatments, reporting a broken streetlight).
2. **The vision of proactive AI**: future AI should deeply understand an individual's needs (health, everyday problems) and surface unsolicited solutions (a medical knee brace, the right government complaint channel).
3. **Privacy and data challenges**: achieving this requires extremely fine-grained personal data (personality, goals, needs), raising privacy risks and ethical disputes.
4. **Humans on "autopilot"**: most people, out of habit and limited self-awareness, overlook available solutions; proactive AI would need to bridge that gap, potentially understanding the user's needs better than the user does.
The post ultimately points to a larger question: **how AI evolves from "tool" to "partner", marshaling resources and offering help before people realize they need it, while balancing privacy against proactive intervention.**
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpv1zt/when_do_you_think_we_will_have_ai_that_can/](https://reddit.com/r/singularity/comments/1jpv1zt/when_do_you_think_we_will_have_ai_that_can/)
- **External link**: [https://www.reddit.com/r/singularity/comments/1jpv1zt/when_do_you_think_we_will_have_ai_that_can/](https://www.reddit.com/r/singularity/comments/1jpv1zt/when_do_you_think_we_will_have_ai_that_can/)
- **Posted**: 2025-04-03 01:53:28
### Content
To me this seems to be one of the big hurdles right now. We are getting good AI, but you have to actually go find the AI, and ask it the right questions to get the info you need.
As an example, my dad has a bad knee. I was googling online and came across a prescription medical knee brace that is far more effective than store bought knee braces, so I sent him a link. He said he would look into it to see if it helps his knee pain.
How far are we from AI that would be able to understand that my dad has a bad knee and then go out and find treatments like that for him, and bring them to his attention without him having to ask? My dad never bothered to go online and search for a medical knee brace. I only found it by accident. If I hadn't told him about it he wouldn't know about it.
Right now someone has to find an AI program or go on google and come across products for bad knees. how far are we from AI where it would understand my dad had a bad knee, and send him info unsolicited (if he wanted unsolicited info) about treatment therapies for his knee without him having to seek it out?
Another example is yesterday I was driving and I saw a streetlight was out. I had to go online and look up where to report that to the municipal government. I'm sure 99.9% of people who saw the streetlight out never bothered to go online to report it so it can be fixed. It probably never even crossed their mind that there was a solution to the problem that they'd just seen.
I once had the toilet clog at my apartment. The landlord refused to fix it. I had to go online and look up which municipal agency I have to contact to get someone to talk to the landlord to fix it. How many people with clogged toilets don't understand there are government agencies that will force your landlord to fix something like that?
Of course with this you run into huge data privacy issues. In order for an AI to do this it would need to know your personality, wants, needs and goals inside and out so it can predict what advice to give you to help you achieve your goals.
But I'm guessing this may be another major jump in AI capability we see in the next few years. AI that can understand you inside and out so that it can proactively give you guidance and advice because it understands your goals better than you do.
I feel like this is a huge barrier right now. The world is full of solutions, wisdom and information, but people don't seek it out for one reason or another. How do we reach a point where the AI understands you better than your partner, therapist and best friend combined, and then it can search the world's knowledge to bring solutions right to your feet without you having to search for them? The problem is a lot of people do not have the self awareness to even understand their own needs, let alone how to fulfill them.
I think as humans it is in our nature to live life on autopilot, and as a result there are all these solutions and information out there that we never even bother to seek out. How many people spend years with knee pain and don't even bother to research all the cutting edge treatment options available? How many people drive past a pothole without reporting it to the local government so they can fill it? How many people fight with their spouse for years on end without being aware that there is a book that explains how to communicate effectively that can be condensed into a short paper of communication tactics?
### 討論
**評論 1**:
You seem to be describing an agent system connected to sensors inside your home, with an extremely granular and dynamic user model. I don't know the exact current state of the involved technologies, but I think that in 2 years we could have something similar.
**評論 2**:
We already have AI that is proactive.
For a model that can actually give you guidance without you prompting it or setting it up, where "it just knows" or offers it proactively, like you randomly get a call or a text from the AI saying "Hey, just wanted to see how things are going. I know you've been feeling this way because of this thing" - that's AGI.
A system that truly understands when, why, and how to approach a user with different tones from different angles based on context, like what's been discussed here, would require complex memory and a complex understanding of emotions that just don't exist yet.
The system would also have to know when to back off and quit offering advice and guidance versus when to push more and keep giving advice despite the user's negative emotions. For example, if someone has PTSD or is experiencing trauma, the AI would need to understand this and potentially let the user lead rather than press too hard. If the AI thinks the user needs to understand something (like accountability or taking ownership) and feels the user is avoiding, blame-shifting, or just being manipulative, then the AI would know: "The user is avoiding or trying to blame or manipulate, so I need to make them understand the gravity of the situation."
This is deep psychological and emotional intelligence and training that just doesn't exist yet in AI systems.
And again, for a system to "just know" when to reach out, combine all of this, and have it work seamlessly in a way people would enjoy (or maybe not enjoy but still need) - like a real therapist, or a friend who will tell you how it is, be there for you, and know all the right things to say and how to say them - this is all AGI-level stuff that we're not even near yet.
The system would need an almost hyper-personalized understanding of millions or billions of individual users, which just doesn't exist yet.
The computational requirements and infrastructure needed to support this would be insane and expensive.
There would also be a slew of privacy issues related to this. The long-form memory alone is complex as hell and expensive.
The goal is to get to this point, and it's actively being researched and worked on, but we're not there yet.
By that point human therapists will almost certainly become obsolete. They will still have a place, because no matter how advanced AI gets, even AGI, it still won't have lived experiences. It won't have true emotions. It won't be able to truly understand what you're feeling and why, and this is where human therapists can bridge the gap.
It's not about replacing human therapists and psychologists, but about collaboration and creating a bridge.
---
## 23. GPT-4.5 Passes Empirical Turing Test {#23-gpt-4-5-passes-empirical-turing-test}
The core topic of this post:
**A pre-registered three-party Turing test study found that GPT-4.5 was judged to be "human" in conversation significantly more often (73%) than the actual human participants, making it the first AI to pass a rigorous three-party Turing test and reigniting debate over the nature of LLM intelligence, social trust, and economic impact.**
Key points:
1. **Breakthrough result**: GPT-4.5 is the first AI to "convince humans" it is a real person in a rigorous experiment, outperforming the human participants themselves.
2. **Model contrast**: GPT-4o scored below chance (21%), closer to the early ELIZA than to its successor, revealing a significant generational gap in AI capability.
3. **Reignited debates**:
- How should the "intelligence" of LLMs be redefined?
- Does society's trust threshold for AI need adjusting?
- Potential economic impact (e.g. replacing human conversational roles).
4. **Methodological significance**: the three-party test (rather than the traditional two-party human-machine conversation) may become a more reliable evaluation framework.
(Note: the summary's self-referential joke of being written by GPT-4.5 itself also hints at the AI self-awareness debate.)
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpb7yc/gpt45_passes_empirical_turing_test/](https://reddit.com/r/singularity/comments/1jpb7yc/gpt45_passes_empirical_turing_test/)
- **External link**: [https://www.reddit.com/r/singularity/comments/1jpb7yc/gpt45_passes_empirical_turing_test/](https://www.reddit.com/r/singularity/comments/1jpb7yc/gpt45_passes_empirical_turing_test/)
- **Posted**: 2025-04-02 08:16:36
### Content
A recent pre-registered study conducted randomized three-party Turing tests comparing humans with ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5. Surprisingly, GPT-4.5 convincingly surpassed actual humans, being judged as human 73% of the time, significantly more than the real human participants themselves. Meanwhile, GPT-4o performed below chance (21%), grouped closer to ELIZA (23%) than to its GPT predecessor.
These intriguing results offer the first robust empirical evidence of an AI convincingly passing a rigorous three-party Turing test, reigniting debates around AI intelligence, social trust, and potential economic impact.
Full paper available here: https://arxiv.org/html/2503.23674v1
Curious to hear everyone's thoughts, especially about what this might mean for how we understand intelligence in LLMs.
(Full disclosure: This summary was written by GPT-4.5 itself. Yes, the same one that beat humans at their own conversational game. Hello, humans!)
### Discussion
**Comment 1**:
To clarify, according to the paper, while intentionally assuming a human persona, it managed to fool most psychology undergraduates, not just random people.
**Comment 2**:
The fucking em dashes, lmao.
**Comment 3**:
Kind of funny that the first high-quality Turing test I've seen convincingly passed basically doesn't matter, because we've known they could do this, and what we care about now is other things.
**Comment 4**:
>Overall, across both studies, GPT-4.5-PERSONA had a win rate of 73% (69% with UCSD undergraduates, 76% with Prolific participants). LLAMA-PERSONA achieved a win rate of 56% (Undergraduates: 45%, Prolific: 65%). GPT-4.5-NO-PERSONA and LLAMA-NO-PERSONA had overall win rates of 36% and 38% respectively. The baseline models, GPT-4o-NO-PERSONA and ELIZA, had the lowest win rates of 21% and 23% respectively (see Figure 2).
>Second, we tested the stronger hypothesis that these witnesses outperformed human participants: that is, that their win rate was significantly above 50%. While we are not aware that anyone has proposed this as a requirement for passing the Turing test, it provides a much stronger test of model ability and a more robust way to test results statistically. GPT-4.5-PERSONA's win rate was significantly above chance in both the Undergraduate (z=3.86, p<0.001) and Prolific (z=5.87, p<0.001) studies. While LLAMA-PERSONA's win rate was significantly above chance in the Prolific study (z=3.42, p<0.001), it was not in the Undergraduate study (z=0.193, p=0.83).
Cool. I wonder if they informed the human participants when they lost. Imagine being told that you were judged to be the NPC while the LLM was judged to be more human than you.
Also, the difference between UCSD undergrad and Prolific win rates may indicate that higher-performing people are less of an NPC than lower-performing people. Are there any studies out there doing this test but pitting human vs human and seeing if win rate correlates with IQ or other metrics? Maybe a bunch of people going about their daily lives pretty much are NPCs.
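The significance figures quoted above come from a standard one-proportion z-test of the win rate against chance (50%). A minimal sketch in Python of how such a test is computed (the sample size below is illustrative, not the study's actual participant count):

```python
import math

def win_rate_z_test(wins: int, n: int, p0: float = 0.5) -> tuple[float, float]:
    """One-sided one-proportion z-test: is the win rate significantly above p0?"""
    p_hat = wins / n                              # observed win rate
    se = math.sqrt(p0 * (1 - p0) / n)             # standard error under H0
    z = (p_hat - p0) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))   # one-sided normal tail probability
    return z, p_value

# e.g. a 73% win rate over a hypothetical 100 games
z, p = win_rate_z_test(73, 100)  # z ≈ 4.6, p well below 0.001
```

The paper's larger z for the Prolific group at a similar win rate reflects a larger sample: the standard error shrinks with √n, so the same margin over 50% becomes more significant.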
**Comment 5**:
Why didn't they test GPT-4o with a persona? Honestly, I think GPT-4o could match or beat GPT-4.5's score, if given the same tools.
edit: actually, I just tried it with both models, using the full persona prompt from the research paper. GPT-4o sucks at pretending to be a human. GPT-4.5 is shockingly good at it.
---
## 24. OpenAI's $300B Valuation & $40B Funding - Are Investors Betting It Beats Google or Just Makes Bank? {#24-openai-s-300b-valuation-40b-funding-are-inv}
The core discussion topics of this post:
1. **OpenAI's massive funding round and valuation**:
- OpenAI closed a $40 billion funding round at a staggering $300 billion valuation, nearly double its valuation from last October, with SoftBank leading the round.
2. **Investors' strategic intent**:
- The author asks whether investing in OpenAI amounts to a "direct bet against Google", since Google remains the AI giant with immense resources and deep research strength (e.g. Gemini 2.5 Pro).
- What is the investors' endgame: betting that OpenAI displaces Google's dominance in AI (and maybe even search), or expecting OpenAI to carve out such a key position in the AI market that Google is forced to play catch-up or partner up?
3. **High stakes and business logic**:
- OpenAI is still losing billions of dollars a year while growing rapidly; is such a high valuation justified?
- Are investors betting that OpenAI becomes core AI infrastructure, able to profit through market influence even without directly beating Google?
**Core question**:
What is the logic behind this enormous investment: an ambition to disrupt Google, or a grab for a key position in the AI ecosystem? And does the valuation reflect reality, or is it an overly optimistic bet?
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jplmof/openais_300b_valuation_40b_funding_are_investors/](https://reddit.com/r/singularity/comments/1jplmof/openais_300b_valuation_40b_funding_are_investors/)
- **External link**: [https://www.reddit.com/r/singularity/comments/1jplmof/openais_300b_valuation_40b_funding_are_investors/](https://www.reddit.com/r/singularity/comments/1jplmof/openais_300b_valuation_40b_funding_are_investors/)
- **Posted**: 2025-04-02 18:50:00
### Content
Seeing the news that OpenAI just finalized a massive $40 billion funding round, valuing them at a staggering $300 billion, i.e. nearly double their value from last October! SoftBank is leading this monster round.
It got me thinking: if I had that kind of money to invest, putting it into OpenAI feels like a direct bet against Google, right? Google is still the giant here, with immense resources and deep AI research of its own (Gemini 2.5 Pro thinking).
So, what do you think the endgame is for these investors (like SoftBank, Microsoft, Thrive, etc.)?
Are they genuinely betting that OpenAI will dethrone Google in AI and maybe even search down the line? Or is it more like they expect OpenAI to become so essential and carve out such a massive part of the AI market that they'll make billions regardless, forcing Google to constantly play catch-up or partner up?
It seems like an incredibly high-stakes gamble either way, especially given OpenAI is still losing billions annually while growing rapidly. Curious to hear your thoughts on whether this valuation makes sense and what investors are really banking on here.
### Discussion
**Comment 1**:
The funny thing is if they went public it would go to 600B that same day
**Comment 2**:
It's not only user base + traffic, but the entire AGI universe. E.g. the market for custom software dev alone is several hundred billions.
It's also a bet that they've been correct with their past bets. Consolidation will start soon. Burning money at this rate won't work (from an industry perspective) for another two years.
**Comment 3**:
Just anecdotally, based on my own usage habits and what I am seeing online, OpenAI has been able to hold people's attention, and that is all you need. Sorry, bad AI joke aside, I think they are synonymous with AI, and once anything gets ingrained in the collective zeitgeist, it becomes incredibly difficult to unseat.
I use Google for work and enjoy most of their Microsoft Office knock-offs, but I still prefer OpenAI's app experience over Gemini. I can't wait until Gemini can take more control over my Google products and be useful, and I believe it is almost there, but I would still pay OpenAI $20/month if they are ahead in areas I care about.
**Comment 4**:
We don't matter (to a degree); it's the crowd that matters. OpenAI gained a million users in ONE HOUR last week. That is the reach that is worth $300B. It doesn't matter if Google has the infrastructure, staff, and knowledge to be the first to AGI. It is now clear from image generation that OpenAI has some sort of special sauce that seems to allow them to deliver faster, if even just by a month or two. They are now in pole position; if someone else delivers something great, it's like the entire industry turns its head to watch what OpenAI will do in response. At this point the "brand" OpenAI/ChatGPT is likely worth $300B.
**Comment 5**:
Sam Altman is bad karma and passes the sociopathy test with flying colors.
---
## 25. Most underrated investment for the singularity {#25-most-underrated-investment-for-the-singular}
The core topic of this post:
**As the "technological singularity" approaches, seamless global connectivity will become critical infrastructure, and AST SpaceMobile's satellite technology (connecting directly to ordinary phones/5G devices) may be the pioneer and eventual leader in this space.**
Key points:
1. **The singularity requires global connectivity**: if superintelligence emerges, it will need ubiquitous connectivity, yet existing terrestrial networks cover only part of the population (e.g. 20%), creating a bottleneck.
2. **AST SpaceMobile's unique value**:
- Its satellite network connects directly to existing, unmodified phones/5G devices, with no extra hardware.
- Its feasibility has been validated through partners such as AT&T and Vodafone.
3. **An overlooked market opportunity**: most people focus on AI chips and software, but AST's infrastructure play is what will underpin the future ubiquity of AI.
4. **Long-term positioning**: in a post-singularity world, AST could dominate the essential infrastructure of global connectivity ("own this space").
In short, the post argues that AST SpaceMobile is a hidden infrastructure investment for the singularity era, addressing a future superintelligence's need for universal coverage.
- **Reddit link**: [https://reddit.com/r/singularity/comments/1jpvk0a/most_underrated_investment_for_the_singularity/](https://reddit.com/r/singularity/comments/1jpvk0a/most_underrated_investment_for_the_singularity/)
- **External link**: [https://www.reddit.com/r/singularity/comments/1jpvk0a/most_underrated_investment_for_the_singularity/](https://www.reddit.com/r/singularity/comments/1jpvk0a/most_underrated_investment_for_the_singularity/)
- **Posted**: 2025-04-03 02:12:44
### Content
If you believe the singularity is coming, you should be looking at AST Spacemobile.
When superintelligence emerges, it'll need to connect EVERYTHING, everywhere. AST SpaceMobile is building the first satellite network that connects directly to normal phones/5G modems. Everyone's obsessing over AI chips and software; they're missing the fundamental infrastructure play. What good is advanced AI if it can only reach 20% of the planet?
AST solves this with satellite coverage that works with the unmodified equipment we already have. Their successful tests with AT&T and Vodafone prove it's real. In a post-singularity world, universal connectivity becomes essential infrastructure, and ASTS is positioned to own this space.
### Discussion
**Comment 1**:
I've been invested for 4 years. I agree, and I have ALWAYS been thinking the same: ASTS is basically Skynet, imho.
---