Gödel Machines, deep learning and the limits of AI

by Robin Harris on Wednesday, 13 January, 2016

If, like me, you’re interested in AI and deep learning, you’ll like this. Neural networks are all the rage in AI, but there is a newer technology – Gödel machines – that is not yet a standard part of the AI toolset.

What is a Gödel machine? From the abstract of the paper A Family of Gödel Machine Implementations by Bas R. Steunebrink and Jürgen Schmidhuber of the IDSIA & University of Lugano:

The Gödel Machine is a universal problem solver encoded as a completely self-referential program capable of rewriting any part of itself, provided it can prove that the rewrite is useful according to some utility function, encoded within itself.

Gödel machines run on von Neumann architectures – they are not a new computer architecture. They consist of two main parts:

  • Solver. The solver interacts with some environment and determines utility using a reward function embedded in the machine.
  • Searcher. The searcher seeks to improve the entire Gödel machine in a provably – subject to Gödel’s limits of provability, of course – optimal way.
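The solver/searcher split can be sketched in a few lines of Python. This is a toy illustration only: the environment, the utility function, and the direct evaluation standing in for a formal proof are all my own stand-ins, not anything from Steunebrink and Schmidhuber's implementations.

```python
import random

class GoedelMachine:
    """Toy sketch of the two-part Goedel Machine structure."""

    def __init__(self, policy):
        self.policy = policy          # the machine's current (rewritable) program, here one number

    def solver_step(self, state):
        """Solver: interact with the environment and collect a reward."""
        action = self.policy * state
        return -abs(1.0 - action)     # toy reward: act so that action is close to 1

    def expected_utility(self, policy):
        """Utility function encoded within the machine itself."""
        return sum(-abs(1.0 - policy * s) for s in (0.5, 1.0, 2.0))

    def searcher_step(self):
        """Searcher: propose a self-rewrite and apply it only when it can be
        shown (here: directly evaluated, standing in for a proof) to
        increase the machine's own utility."""
        candidate = self.policy + random.uniform(-0.1, 0.1)
        if self.expected_utility(candidate) > self.expected_utility(self.policy):
            self.policy = candidate   # the "provably useful" rewrite is applied

m = GoedelMachine(policy=0.0)
before = m.expected_utility(m.policy)
for _ in range(1000):
    m.searcher_step()
print(m.expected_utility(m.policy) >= before)  # prints True: utility never decreases
```

Because a rewrite is accepted only when it demonstrably improves utility, the machine's self-evaluated utility is monotone non-decreasing – the key property the real searcher guarantees via proof rather than direct evaluation.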

The Gödel Machine was invented by Jürgen Schmidhuber in 2003.

The limits of AI
As Gödel proved, any formal system that includes arithmetic either allows for unprovable but true statements or is flawed. The implication of the former is that, since at least one useful improvement may be true but unprovable by the searcher, the AI may remain less than optimal.
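Stated a little more formally – this is the standard first incompleteness theorem (in Rosser's strengthened form), which is what the argument above leans on:

```latex
% For any consistent, effectively axiomatized theory T extending
% basic arithmetic, there is a sentence G_T that T can neither
% prove nor refute, even though G_T is true in the standard model.
\[
  T \text{ consistent},\; T \supseteq \mathsf{PA}
  \;\Longrightarrow\;
  \exists\, G_T :\; T \nvdash G_T
  \;\text{ and }\;
  T \nvdash \lnot G_T .
\]
```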

For an interesting overview of the Gödel Machine, deep learning, and how a program can rewrite itself while running, Schmidhuber’s lecture is an excellent introduction.

The StorageMojo take
This is StorageMojo, not AIMojo. I’m curious about the system architecture running the Gödel Machine and the I/O workload the machine generates. IBM’s Watson runs on a massively parallel system, and I assume a Gödel Machine can too.

I’m also pleased – probably unjustifiably – by the notion that however smart AIs become, they won’t be perfect. Fallibility will always be part of an AI’s nature.

Courteous comments welcome, of course. Please feel free to expand on this topic in a comment.

Comments

armando January 14, 2016 at 4:33 am

The limits of AI are set not by logic or self-consistency but by organization: how will each AI agent cooperate and engage with the others, and with us? That’s the big, computationally indecipherable question.

Paul Houle January 15, 2016 at 6:37 am

Similar techniques have been around for a while under the name “Genetic Programming” which is a step more complex than “Genetic Algorithms”.

Practically, though, the most advanced compilers work this way. For instance, if you want to do math on a computer that has a CPU, SIMD instructions, and a GPU, the speed you get will vary vastly with implementation details – for instance, how the code interacts with the memory hierarchy – and the ultimate strategy is to try a lot of different things and pick the best.
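The try-everything-and-pick-the-best strategy described above (often called autotuning) can be sketched as follows; the two candidate functions are illustrative stand-ins for the CPU/SIMD/GPU variants a real autotuner would compare:

```python
import timeit

def dot_loop(a, b):
    """Dot product via a plain Python loop."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_sum(a, b):
    """Dot product via the built-in sum over a generator."""
    return sum(x * y for x, y in zip(a, b))

def autotune(candidates, a, b, repeats=20):
    """Time each candidate on representative data and keep the fastest."""
    timings = {f.__name__: timeit.timeit(lambda f=f: f(a, b), number=repeats)
               for f in candidates}
    best = min(timings, key=timings.get)
    return best, timings

a = [float(i) for i in range(1000)]
b = [float(i) for i in range(1000)]
best, timings = autotune([dot_loop, dot_sum], a, b)
print("fastest:", best)
```

Real autotuners (in compilers and libraries alike) work the same way at a larger scale: generate many semantically equivalent implementations, benchmark them on the target hardware, and commit to the winner.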

Michael Zeldich January 15, 2016 at 7:27 am

Any kind of programmed system will work within the limits of a set of algorithms.
The only imaginable way to move beyond these boundaries is to design an artificial subjective system, and I have a concept for such a development.

Call by Skype, please. ID is Subjective1

Blissex January 15, 2016 at 3:06 pm

«As Gödel proved, any formal system that includes arithmetic, either allows for unprovable but true statements, or is flawed»

This “urban legend” is one of my pet peeves. Godel’s two notable theorems prove nothing of the sort, but a rather different and more technical result: that both the consistency and the completeness of a theory cannot be *proven* within the theory itself when it is reused as its own proof theory.

The traditional system that includes arithmetic was in fact proven to be both complete and consistent after Godel’s two theorems were published, by Godel’s contemporary Gentzen. Gentzen also delivered two theorems that are far more important for mathematics than Godel’s widely misunderstood technicalities: he proved that to *prove* that a theory is both consistent and complete (if it is in fact so), one needs a proof theory with a higher “ordinal number”. Godel’s two theorems are just special cases of that.


Bruce Daley January 26, 2016 at 5:00 pm

This is an exciting time in AI, and I am in the process of writing my next book after “Where Data is Wealth”, tentatively titled “Where Knowledge is Power”, about it. I have interviewed Jürgen as part of my research, and he is a smart guy, usually ahead of his time. I have no doubt you are right and the Gödel machine will advance the field.

