Commit fbc9772: "254 show notes" (1 parent: d052173)

2 files changed: +670, −2 lines

src/episodes/253-the-adversarial-agents-are-arguing-again.md

Lines changed: 1 addition & 2 deletions
@@ -4,8 +4,7 @@ description: "What if the best way to get good work out of AI is to stop being n
 date: 2026-03-26
 ---
 
-<script async defer onload="redcircleIframe();" src="https://api.podcache.net/embedded-player/sh/30227421-bc27-45c2-bfb4-861def7dd4cc/ep/63c2cf5e-57a3-426a-9a7b-22c1a7cae9f7"></script>
-<div class="redcirclePlayer-63c2cf5e-57a3-426a-9a7b-22c1a7cae9f7"></div>
+<iframe allow="autoplay *; encrypted-media *; fullscreen *; clipboard-write" frameborder="0" height="175" style="width:100%;max-width:900px;overflow:hidden;border-radius:10px;" sandbox="allow-forms allow-popups allow-same-origin allow-scripts allow-storage-access-by-user-activation allow-top-navigation-by-user-activation" src="https://embed.podcasts.apple.com/us/podcast/253-the-adversarial-agents-are-arguing-again/id1544142288?i=1000757485922"></iframe>
 
 What if the best way to get good work out of AI is to stop being nice to it? Adam and Tim have both landed on the same uncomfortable discovery: when you pit AI agents against each other, with fake points, opposing incentives, and competing models, the output gets dramatically better than anything a single polite prompt can produce.
