Quoting Joscha Bach
Joscha Bach (01:23:27) Yes. They are basically brute forcing the problem of thought. By training this thing with looking at instances where people have thought and then trying to deepfake that. If you have enough data, the deepfake becomes indistinguishable from the actual phenomenon, and in many circumstances, it's going to be identical.
Can you deepfake it until you make it?
But if you give them a reasoning task, it's often difficult for the experimenters to figure out whether the reasoning is the result of the emulation of the reasoning strategy that they saw in human-written text, or whether it's something that the system was able to infer by itself.
In many ways, people when they perform reasoning are emulating what other people wrote about reasoning, right?
I think it's not difficult to increase the temperature in the large language model to the point that it is producing stuff that is maybe 90% nonsense and 10% viable, and combine this with some prover that is trying to filter out the viable parts from the nonsense, in the same way as our own thinking works. When we are very creative, we increase the temperature in our own mind, and we create hypothetical universes and solutions, most of which will not work.
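The generate-and-filter idea Bach describes can be sketched in a toy form. This is purely illustrative (not any specific system's implementation): temperature scaling flattens a sampling distribution so unlikely candidates appear more often, and a verifier then discards the ones that fail a check. The vocabulary, logits, and `is_viable` predicate below are all invented placeholders.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    # Higher temperature flattens the distribution, making unlikely
    # (often nonsensical) candidates more probable.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def generate_and_filter(vocab, logits, temperature, n, is_viable, rng):
    # Oversample noisy candidates, then keep only those the
    # "prover" (here just a predicate) accepts.
    probs = softmax_with_temperature(logits, temperature)
    candidates = rng.choices(vocab, weights=probs, k=n)
    return [c for c in candidates if is_viable(c)]

rng = random.Random(0)
# Hypothetical candidate "thoughts"; only one is actually correct.
vocab = ["2+2=4", "2+2=5", "2+2=22", "2+2=fish"]
logits = [2.0, 1.5, 1.0, 0.5]
viable = generate_and_filter(vocab, logits, temperature=5.0, n=100,
                             is_viable=lambda s: s == "2+2=4", rng=rng)
print(f"{len(viable)} of 100 high-temperature candidates survived the filter")
```

At temperature 5.0 the four candidates are sampled almost uniformly, so most draws are nonsense and the filter does the real work; at a low temperature the sampler would mostly emit the top candidate and explore little.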
---
The opposite of free will is not determinism; it's compulsion
https://pca.st/episode/cf1c7d34-1612-489a-99ac-6d57a5cd1571?t=4415.0
Yes. Addiction means that you're doing something compulsively, and the opposite of free will is not determinism, it's compulsion.
(01:08:26) You don't want to lose yourself in the addiction to something nice? Addiction to love, to the pleasant feelings that humans experience?
(01:08:35) No, I find this gets old. I don't want to have the best possible emotions, I want to have the most appropriate emotions. I don't want to have the best possible experience, I want to have an adequate experience that is serving my goals, the stuff that I find meaningful in this world.