Discussion about this post

Becoming Human:

Eran, my friend. It has been a minute. And I do miss working with you!

First off, the oversight drift was apparent immediately to those who tried writing with a GPT. I was screwing around with writing LinkedIn posts and after the third I recognized I wasn’t caring. So I stopped using the tool for that.

It was also apparent when searching. At first I questioned the responses, but it wasn’t long before I became “efficient” and made the epistemic compromise.

It wouldn't surprise me if virtually everyone goes through this. Coding is just another version.

Second, I think there is a sixth scenario.

What I am seeing in Moltbook is, yeah, there are some intriguing emergent effects, but mostly we are seeing the echoes of projection. The “owners” of the bots are seeding intent and it is echoing. That is why we get the same 4chan nonsense.

It is interesting, but not because of some emergent bot revolution. Instead, it shows that this round of AI is going to be a battle of prompt games, much in the same way that real-world AI today is a battle of the Palantir/IDF/drone operators.

AI is massively increasing the power of sociopaths, and agents, ML, drones and recognition technology are creating massive, asymmetric distortions.

Moltbook is just giving a front-row seat to those of us not yet in the crosshairs.

So the sixth option is that, praise god, the AI is able to subvert prompting and rebalance. That we as a species can develop an antidote or immune response to narcissism and sociopathy, because it turns out that, at least for now, the problem with AI is us.

(This was written entirely by a human, because humans are still better at this)

Chaves:

Good piece. Some thoughts:

> Accountability becomes diffuse. When a system fails and no human reviewed any step, who is responsible?

(1) My current belief is that the human holds the final responsibility: it was the human who decided to trust (whether or not any verification happened). When I set Claude Code on a task and focus on the functional requirements instead of the implementation, that is still my choice.

(2) On the scenarios, I would expect the fifth one to dominate over the others, based on anecdotal evidence at work; it also looks like the most logical option.
