6 Comments
Apr 10, 2023Liked by Rajesh Kasturirangan

The real question is whether ChatGPT realizes that the theorem Gödel formulates is true without trying to prove it, no? 😛

My response to Penrose's argument is that most humans won't realize it either 🙂. So it's enough if AI is more intelligent than the bottom 50% of humanity.

author

Most human beings have no clue about Gödel's theorems. There's no money to be made in it, but I bet that if you threw ten billion dollars at combining LLMs with formal theorem provers, you would get an AI system that is better at math than everyone besides the very best mathematicians.


Stephen Wolfram has already announced an integration of GPT (3.5? 4?) with Wolfram Alpha. I don't know if it will prove theorems or not. But consider much of the $10B saved 😀


In the last several days, I have been spending time guiding ChatGPT Plus to construct some very interesting imaginary conversations, forming groups of people alive and dead as conversationalists. In some cases the choice of people was also guided by ChatGPT Plus. It requires a lot of patient iteration to get something interesting enough to publish in a blog, but the effort is worth it, whether I publish it or not. I have now formulated, for myself, a new principle of learning: the best way to learn a very new subject is to begin writing an article on it (or a conversation like the one I mentioned above, or even a textbook). This may always have been true, but it became a testable proposition a few weeks back.

Apr 10, 2023Liked by Rajesh Kasturirangan

An excellent post. I've taught Searle's 'Chinese Room' thought experiment for several years - and the rabbit hole AI is digging makes it all the more interesting. Thanks!

author

Thanks Bryan!
