Discussion about this post

C. J. W. Armstrong

Very enjoyable read! I've maintained that LLMs are not conscious in the way that we are, but I admit, the notion that my 'systems' report sensations while the 'me' makes sense of them all by turning them into language (thereby rendering the universe as a cohesive whole that I am able to perceive) gave me pause. I've got some pondering to do… which is just what I love to do, so thank you!

Eyal

What evidence do we have for that?

I've been heavily involved in language learning, and next-token prediction has zero use in language learning. If we don't acquire language based on some next-token loss function, then why assume that we generate language using this mechanism?

How do we know that the small amount of evidence that humans perform next-token prediction isn't just an artifact of bad listening? In other words, what makes next-token prediction a fundamental mechanism of, and a necessity for, communication?
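For context, the "next-token loss function" mentioned above is, in standard LLM training, a cross-entropy loss on the following token. Here is a minimal sketch in PyTorch; the function name, shapes, and toy values are illustrative only and don't come from the post:

```python
# Minimal sketch of next-token prediction loss (illustrative, not from the post).
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the model's prediction at position t
    and the actual token at position t+1.

    logits: (seq_len, vocab_size) -- model outputs, one row per position
    tokens: (seq_len,)            -- token ids of the observed sequence
    """
    # Predictions at positions 0..n-2 are scored against tokens 1..n-1.
    return F.cross_entropy(logits[:-1], tokens[1:])

# Toy usage: random "model outputs" over a 10-token vocabulary.
vocab, seq_len = 10, 6
logits = torch.randn(seq_len, vocab)
tokens = torch.randint(0, vocab, (seq_len,))
print(next_token_loss(logits, tokens))  # scalar loss the model is trained to minimize
```

Minimizing this loss trains the model to assign high probability to whatever token actually comes next, which is the mechanism the comment is questioning as a model of human language use.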

