David Hilbert dreamed of AI, too!
Now that's a hat!


David Hilbert was one of the foremost mathematicians of the late 19th and early 20th centuries, and his influence continues to this day. A strong thread runs forward in time from his work 120 years ago to “Artificial Intelligence”---so, let’s follow it!


Despite the appearance of a growing number of logical paradoxes in his era, Hilbert still dreamed that mathematics was not only fully self-consistent but that we could eventually automate or mechanize it with a single universal algorithm.


Hilbert challenged his colleagues, saying that the next step toward creating this powerful set of tools required us to be able to answer “Yes” to these three questions:


1) Is mathematics complete? That is, can we prove or disprove every possible conjecture in mathematics?


2) Is mathematics consistent? That is, can we prove that the same set of axioms can never produce contradictory outcomes?


3) Is mathematics decidable? That is, can we prove that it is always possible to create a step-by-step procedure (today we would call it an algorithm) that will give us a “Yes” or “No” answer to any mathematical question?


Hilbert fervently believed, as did many other mathematicians and philosophers of his time, that the answer to all three of these questions was “Yes”, and that this positive answer would then guide us to the ability to mechanize the entire mathematical landscape. This belief was partly a cultural artifact. In that era, many thought that the single source of the unknown was merely our temporary human ignorance. This belief was pre-Quantum Mechanics...


Sometimes, as in Hilbert’s case, we see that leadership does not necessarily require being absolutely correct. In that moment, leadership instead requires the ability to state a position so clearly, for the first time, that those who follow can more easily see how to either support that position or clearly defeat it.


Two giants who followed Hilbert, and who independently decided that they could and should disassemble Hilbert’s dream, were Kurt Gödel and Alan Turing. They both started with the advantage of Hilbert’s strong and clearly stated ideas. They rigorously proved that the answers to his three questions could never be “Yes” and that, therefore, we could never succeed in mechanizing mathematics. Gödel proved that positive answers to questions 1 and 2 could never be verified. Turing defeated the quest for a “Yes” to question 3 with his famous “Halting Problem”. Part of its fame comes from the fact that Turing conceived it before the first modern computer was even built.
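
For readers who like to see the idea in code, here is a minimal Python sketch of the self-referential trap at the heart of the Halting Problem. It is my own illustration, not Turing’s notation, and it assumes, for the sake of contradiction, a hypothetical halts() oracle that no one can actually write:

# Assume, for contradiction, that a perfect oracle exists:
#   halts(program, data) -> True if program(data) eventually stops, else False.

def halts(program, data):
    # Hypothetical universal decider -- Turing proved it cannot be implemented.
    raise NotImplementedError("No such algorithm can exist.")

def paradox(program):
    # Ask the oracle about the program applied to its own source...
    if halts(program, program):
        while True:        # ...and do the opposite: loop forever if it would halt,
            pass
    return "done"          # ...or halt immediately if it would loop forever.

# Feeding paradox to itself exposes the contradiction:
# if halts(paradox, paradox) were True, paradox(paradox) would loop forever;
# if it were False, paradox(paradox) would halt. Either way the oracle is wrong,
# so no algorithm can decide halting for every possible program and input.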


In today’s terms, this mechanization is the essence of what we hope, or claim, “AI” is going to do for us. The landscape in which some make this claim is confusing, because today we can see so many algorithms that are successful in our daily existence. (Far more of them are invisible to most of us.) However, there is a big difference between any human-written algorithm and “AI”: because humans write algorithms, humans can also take the code apart to understand why they work or don’t work.


In contrast with any such algorithm, “AI” is a black box that we can only examine from an external point of view. If “AI” already works, as it sometimes does, that’s great and exciting, but the internal processes that lead to those outputs have so far remained completely invisible to us.


I suspect that some supporters of “AI” are so enthusiastic that they do not see these facts as limitations on the potential of “AI”. Perhaps they feel that way because AI is new and exciting (and it is), and, after all, Hilbert has been dead for quite a while. Perhaps they tell themselves that “AI” is “post-Science”; that is, that it has become smart enough to move forward on its own, unconcerned with whether it remains consistent, or with what we have learned so slowly about mathematics and statistics. (Some advocates of “Big Data” have made this exact claim.) We now have applications that might, in fact, tell us explicitly that this is true, if we ask, using text or a simulated human “voice”. Hearing that voice doesn’t necessarily make it so.

Here is my own view: I come from three decades of measuring and recording calibration and process data in various pharmaceutical settings. I rapidly became aware that the data I produced every day was subject to multiple error sources. I learned that I could eliminate some of these sources and control others, but that I would never be able to eliminate all of them. At the same time, as luck would have it, the international measurement community arrived at the same conclusion. They went much further than I did and published the GUM (the Guide to the Expression of Uncertainty in Measurement, an ISO document) in 1993.

The GUM starts by explicitly accepting that there are no perfect measurements and never can be. It then describes an algorithmic method for estimating the statistical limits that we can be sure will surround any attempt to measure anything. A very large group of very smart people say that this is as good as measurement assurance is ever going to get; that there will always be an accompanying uncertainty associated with every measurement result. I will continue to go along with this method, and the theory behind it, until I hear a convincing alternative.
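
To make that concrete, here is a small Python sketch of the kind of calculation the GUM formalizes: combining independent standard uncertainties in quadrature and reporting an expanded uncertainty with a coverage factor (conventionally k = 2 for roughly 95% coverage). The function names and the numbers are my own illustration, not text from the GUM:

import math

def combined_standard_uncertainty(uncertainties):
    # Root-sum-of-squares combination of independent standard uncertainties.
    return math.sqrt(sum(u ** 2 for u in uncertainties))

def expanded_uncertainty(uncertainties, k=2.0):
    # Expanded uncertainty U = k * u_c; k = 2 gives roughly 95% coverage.
    return k * combined_standard_uncertainty(uncertainties)

# Illustrative example: a balance reading with three independent error sources:
# repeatability (Type A), the calibration certificate (Type B), and resolution (Type B).
u_sources = [0.12, 0.08, 0.03]   # standard uncertainties, in mg
u_c = combined_standard_uncertainty(u_sources)
U = expanded_uncertainty(u_sources)
print(f"result = 250.00 mg +/- {U:.2f} mg (k = 2), u_c = {u_c:.2f} mg")

The report still states a measured value, but it always carries that plus-or-minus interval with it; the interval is the honest admission that the “true value” is never pinned down exactly.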

I don't know for sure what Hilbert would have done were he alive today. I can suppose that he would have rebelled against this compromise we make with measurement imperfection, and perhaps tried to blaze a path to an algorithmic method for mechanizing measurement, which would require, by the way, that we eliminate measurement uncertainty. I DO know that even today, some metrologists (measurement scientists) may still refer to a "True Measurement Value" even while simultaneously conceding that this value is both unknown and unknowable.


David Hilbert lives!

Zoe Brooks

Consultant, educator & speaker in Laboratory Risk Management: Transform statistical quality control to clinical/meaningful risk management. Implement staff competency programs. Quantify reductions in risk and cost.


My experience with AI is that you cannot trust it with statistics at all. It creates formulas and arguments that are just plain wrong. I do believe we can guide it to correct conclusions, and that's where the power will come. And I thought the original example of artificial intelligence was a blonde dyeing her hair brunette ;)

