6 Comments
Apr 10, 2023Liked by Rajesh Kasturirangan

The real question is whether ChatGPT realizes that the theorem Gödel formulates is true without trying to prove it, no? 😛

My response to Penrose's argument is that most humans won't realize it either 🙂. So it's enough if AI is more intelligent than the bottom 50% of humanity.

author

Most human beings have no clue about Gödel's theorems. There's no money to be made in it, but I bet that if you threw ten billion dollars at combining LLMs with formal theorem provers, you would get an AI system that is better at math than everyone besides the very best mathematicians.

Apr 16, 2023Liked by Rajesh Kasturirangan

Stephen Wolfram has already announced an integration of GPT (3.5? 4?) with Wolfram Alpha. I don't know if it will prove theorems or not. But consider much of the $10B saved 😀

Apr 15, 2023Liked by Rajesh Kasturirangan

Over the last several days, I have been spending time guiding ChatGPT Plus to construct some very interesting imaginary conversations, forming groups of people, alive and dead, as conversationalists. In some cases the choice of people was also guided by ChatGPT Plus. It requires a lot of patient iteration to get something interesting enough to publish in a blog, but the effort is worth it whether I publish it or not. I have now formulated, for myself, a new principle of learning: the best way to learn a very new subject is to begin writing an article on it. (Or a conversation like the one I mentioned above. Or even a textbook.) This may always have been true, but it became a testable proposition a few weeks back.

Apr 10, 2023Liked by Rajesh Kasturirangan

An excellent post. I've taught Searle's 'Chinese Room' thought experiment for several years, and the rabbit hole AI is digging makes it all the more interesting. Thanks!

author

Thanks Bryan!
