Is there an AI uncertainty principle?

Despite my scepticism towards AI, I use it frequently. Even for this post I used Google’s Gemini.

While AI’s capabilities are undeniably astounding, I believe its real-world utility is often overestimated.

Nearly twenty years ago, I hypothesized that a machine designed to handle uncertainty in its input would inevitably produce uncertain output, unlike deterministic algorithms that provide definitive results for definitive inputs. Despite this, the AI industry is driven by a competitive race to develop a flawless, general AI. I doubt that this is reasonably achievable. After all, Natural Intelligence (NI) is already available for the price of some water, oats and walnuts – and it also brings purpose into our lives.

Does the nature of computation, and hence of AI systems, impose inherent limitations on precision and predictability, analogous to the quantum mechanical concept of uncertainty? As a reminder, the uncertainty principle says: the more accurately the momentum of a particle is measured, the less accurately its position can be known. The hypothesis proposed here is analogous:
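For reference, the formal statement of Heisenberg’s relation is

Δx · Δp ≥ ħ/2,

where Δx and Δp are the standard deviations of position and momentum, and ħ is the reduced Planck constant.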

The main hypothesis

The more tolerant a machine is, the more errors are to be expected.
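A minimal sketch of the idea (my own illustration, not taken from any particular system): a matcher that tolerates more deviation in its input necessarily lets more wrong interpretations through.

```python
# Illustration: a command interpreter whose input tolerance is adjustable.
# The more deviation it tolerates, the more wrong guesses it will make.
from difflib import SequenceMatcher

COMMANDS = ["start", "stop", "status"]

def interpret(user_input, tolerance):
    """Return the closest command if its similarity is at least 1 - tolerance."""
    best = max(COMMANDS, key=lambda c: SequenceMatcher(None, user_input, c).ratio())
    score = SequenceMatcher(None, user_input, best).ratio()
    return best if score >= 1.0 - tolerance else None

# Intolerant machine: definitive input, definitive output (or a refusal).
print(interpret("stat", tolerance=0.0))  # None, since there is no exact match
# Tolerant machine: it guesses "start", although "status" was probably meant.
print(interpret("stat", tolerance=0.2))  # "start"
```

The same relaxation that makes the machine usable with sloppy input is exactly what lets the wrong command slip through.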

Domain

The hypothesis is valid for complex systems like life itself, with all its ambiguities, and especially for fractal systems.

This hypothesis does not apply to systems that can be easily solved using analytical methods.

Example

Neural network-based AI, inspired by the human brain, can handle ambiguity effectively, much like humans, who routinely interact with others who are also prone to error.

Hypothesis extension 1

Larger and more sophisticated AI models will not fundamentally alter the main hypothesis.

Reasoning

Even a perfectly trained neural network will sometimes produce slight inaccuracies when presented with input that deviates only slightly from its training data. When faced with entirely unknown input-output pairs, its predictions become highly uncertain and prone to significant errors.
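A toy experiment along these lines (my own sketch, not a rigorous benchmark): fit a tiny network to sin(x) on [0, π] and compare the error inside and outside the training interval.

```python
# Toy experiment: a small neural network fitted to sin(x) on [0, pi]
# interpolates well but its error grows quickly outside the training data.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, np.pi, 200).reshape(-1, 1)
y_train = np.sin(x_train)

# One hidden tanh layer, trained with plain full-batch gradient descent.
W1, b1 = rng.normal(0, 1, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(x_train @ W1 + b1)                 # forward pass
    y_hat = h @ W2 + b2
    grad_y = 2 * (y_hat - y_train) / len(x_train)  # d(MSE)/d(y_hat)
    grad_h = grad_y @ W2.T * (1 - h**2)            # backprop through tanh
    W2 -= lr * h.T @ grad_y; b2 -= lr * grad_y.sum(0)
    W1 -= lr * x_train.T @ grad_h; b1 -= lr * grad_h.sum(0)

def predict(x):
    return np.tanh(np.asarray(x).reshape(-1, 1) @ W1 + b1) @ W2 + b2

# Typically: small error for x in [0, pi], rapidly growing error beyond.
for x in [1.0, 2.0, 4.0, 6.0]:
    print(f"x={x}: error={abs(predict([x])[0, 0] - np.sin(x)):.3f}")
```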

Example

The image at the top of this post was generated by AI.

My first prompt was:

AI uncertainty principle

This was one of the results:

Do you see uncertainty in the picture? I can’t.

So, I had to conceptualize what I was looking for.

After about two hours of refining prompts and regenerating images, I finally got the image above.

The prompt:

Fuzzy, vague, bend, colourful and twisted house in fog on the left hand side blends and merges smoothly into an ordered structure of a nice, old style and white walled villa at clear bright sky on the right hand side.

One of the generated pictures:

In short: the prompt was not translated according to my expectation, because the training data that matches my expectation is not associated with “AI uncertainty principle” but with the long, explicit description quoted above.

Even though I tried hard, it still isn’t a perfect match. But I admit it’s a decent approximation that would have been time-consuming to draw by hand.

Hypothesis extension 2

Analytical algorithms that control the boundaries of a machine’s output will not change the main hypothesis.

Reasoning

Just as the learned input-output pairs define the limits of the system’s understanding, the “boundary checker” is bound by these same constraints: it can only enforce rules that someone anticipated.
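A sketch of what I mean (hypothetical names and thresholds, my own illustration): a deterministic boundary checker rejects outputs that violate anticipated rules, but a wrong-yet-plausible output passes straight through.

```python
# Illustration: a deterministic "boundary checker" wrapped around a
# learned model enforces the anticipated constraints and nothing more.

def boundary_check(temperature_celsius):
    """Reject physically impossible predictions (the anticipated rule)."""
    return -273.15 <= temperature_celsius <= 1000.0

def model_predict(sensor_input):
    # Stand-in for a learned model that mispredicts on unusual input.
    return 25.0  # plausible-looking, but wrong for this input

prediction = model_predict("unusual input far from the training data")
if boundary_check(prediction):
    # A wrong-but-plausible value (say 25.0 when the truth is 80.0)
    # sails through: the checker shares the system's blind spots.
    print("accepted:", prediction)
```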

Hypothesis extension 3

Even if machine A is corrected by machine B, the fundamental limitations of both machines will remain.

Reasoning

The validation process itself would require training, and thus, it would also be subject to the same limitations. This means no fundamental gain in accuracy or understanding.
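A toy illustration (my own sketch, using polynomial fits as stand-ins for trained models): two machines trained on the same distribution agree with each other even where both are wrong, so the cross-check yields false confidence.

```python
# Toy illustration: machine B is supposed to validate machine A, but both
# learned from the same distribution, so they share the same blind spots.
import numpy as np

def train_model(seed):
    """Fit a degree-5 polynomial to noisy samples of sin(x) on [0, pi]."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0, np.pi, 500)
    y = np.sin(x) + rng.normal(0, 0.001, x.size)
    return np.polynomial.Polynomial.fit(x, y, deg=5)

model_a, model_b = train_model(1), train_model(2)

def validated_predict(x):
    """Machine B accepts machine A's answer when the two agree closely."""
    a, b = model_a(x), model_b(x)
    return a if abs(a - b) < 0.1 else None  # None: rejected by the validator

# In distribution: both correct, answer accepted.
print(validated_predict(1.5), "truth:", np.sin(1.5))
# Out of distribution: both wrong in the same way, so the cross-check
# still accepts the answer; validation adds no fundamental accuracy.
print(validated_predict(5.0), "truth:", np.sin(5.0))
```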

AI playground hypothesis

I envision a future – one that may well go wrong – where extremely large neural networks, endowed with curiosity and the ability to form hypotheses, revolutionize system design by working collaboratively with humans. Similar to how children learn through play and experimentation, these neural networks could learn and evolve by exploring and testing various ideas.

But what for? This will waste lots of energy, while the human brain is already available with approximately 90 billion neurons and 100 trillion connections, fuelled by some water, some oats and walnuts. An artificial brain like that would draw approximately 6 kW, using a future technology that mostly doesn’t exist yet. According to AMD, the power consumption of machine learning (ML) will reach the world’s power production by approximately 2035 (see also https://semiengineering.com/ai-power-consumption-exploding/)!
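For scale (a back-of-envelope comparison, assuming the commonly cited ~20 W consumption of a biological brain): 6 kW / 20 W = 300, i.e. such an artificial brain would draw roughly 300 times the power of the original.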

If it comes anyway, it would increasingly turn software developers and system engineers into mere requirement providers who no longer even know how to develop software.

The trigger for this post

I came across the following video of physicist Sabine Hossenfelder:

I Didn’t Believe that AI is the Future of Coding. I Was Right.

She says: “The idea that you can convert something as vague and imprecise as human language into code just didn’t make sense to me.” She also refers to a paper claiming a productivity increase of approximately 26%, and points out that the paper ignores that this is actually just an increase in pull requests! Another study she addresses found a 41% increase in bugs.

Are there more papers about this?

Ironically, as this post was almost finished – and as I googled “AI uncertainty principle“:

Oh no, the “AI uncertainty principle” is already discussed here: On The Uncertainty Principle of Neural Networks

I have only briefly skimmed it. Like me, the authors play with the analogy of Heisenberg’s indeterminacy principle applied to AI uncertainties. See also Table 1 in the above-mentioned paper.

Wishful thinking for the sake of sw4sd?

I’m often asked if I’m concerned that AI might render conventional code generator approaches obsolete.

Absolutely not. I even put AI on the roadmap of the main product genc³. As long as the AI generates transformation files that adhere to a human-validated meta-DSL, the compiler-compiler will always produce valid output.
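A sketch of the gating idea (the toy grammar and names here are hypothetical; genc³’s actual meta-DSL looks different): the AI may propose transformation files, but only those that pass the deterministic grammar check ever reach the compiler-compiler.

```python
# Sketch: AI-proposed transformation files are accepted only if they
# conform to a human-validated meta-DSL. Hypothetical toy grammar:
# one "map <source> -> <target>" rule per line.
import re

RULE = re.compile(r"^map\s+\w+\s*->\s*\w+$")

def validate_transformation(text):
    """Deterministically check every line against the meta-DSL grammar."""
    return all(RULE.match(line) for line in text.strip().splitlines())

ai_proposal = "map customer -> CustomerRecord\nmap order -> OrderRecord"
if validate_transformation(ai_proposal):
    print("accepted: hand over to the compiler-compiler")
else:
    print("rejected: re-prompt the AI; it never touches the build")
```

The uncertain machine is free to propose; the deterministic toolchain decides.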

My AI experience?

Theory

In the late 90s I started teaching myself neural networks and fuzzy logic. I was very keen to understand how the brain actually works and what cognition actually means. Recently I taught myself how to apply genetic algorithms.

Implementations

I developed a fuzzy logic library to define fuzzy sets and applied it to balance an inverted pendulum.
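To give a flavour of the approach (a minimal sketch with made-up set boundaries, not the original library): triangular membership functions and three rules, defuzzified by a weighted average.

```python
# Minimal fuzzy-logic sketch (not the original library): triangular
# fuzzy sets over the pendulum angle and three rules, defuzzified by
# a weighted average over singleton output forces.

def triangular(x, left, peak, right):
    """Degree of membership of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x < peak else (right - x) / (right - peak)

def control(angle):
    """IF angle negative THEN push left; IF zero THEN do nothing;
    IF positive THEN push right."""
    rules = [  # (membership degree, crisp output force)
        (triangular(angle, -1.0, -0.5, 0.0), -1.0),
        (triangular(angle, -0.5,  0.0, 0.5),  0.0),
        (triangular(angle,  0.0,  0.5, 1.0),  1.0),
    ]
    total = sum(m for m, _ in rules)
    return sum(m * f for m, f in rules) / total if total else 0.0

print(control(0.3))   # tilting right: push right (about 0.6)
print(control(-0.1))  # slight left tilt: gentle push left (about -0.2)
```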

I implemented a genetic algorithm to evolve a simple two-dimensional structure as a phenotype determined by a genotype.
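In the same spirit, a minimal genetic-algorithm sketch (the fitness target is an arbitrary toy, not my original structure evolver): a bit-string genotype decodes into a 2D grid phenotype and evolves through selection, crossover and mutation.

```python
# Minimal genetic algorithm sketch (not the original implementation):
# a 16-bit genotype decodes to a 4x4 binary grid (the 2D phenotype);
# the toy fitness rewards filling the main diagonal.
import random

random.seed(0)
TARGET = {(i, i) for i in range(4)}  # toy goal: cells on the diagonal

def fitness(genotype):
    phenotype = {(i // 4, i % 4) for i, g in enumerate(genotype) if g}
    return len(phenotype & TARGET) - 0.1 * len(phenotype - TARGET)

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 16)         # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(16)] ^= 1      # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), best)  # a genotype whose phenotype matches the diagonal
```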

Using AI for software development

Haskell isn’t a widely used language. Haskell is not very common. Lots of the generated code does not even compile and/or does work at all, supposedly because of the statements in Hypothesis extension 1. Trained transformers can barely predict the unknown.

Professional involvement

I have been involved in an AI study on improving requirement texts using natural language processing (NLP). I contributed as an experienced requirements engineer and as a provider of requirement examples.

One of the papers where I was mentioned as a co-author: AI-based Formalization of Textual Requirements

Conclusion

  • A general AI that makes no errors, has no bias and is objective is neither within reach nor likely to ever exist – and pursuing it is as unreasonable as using the largest excavator ever built to eat a meal.
  • AI-generated code might slightly improve the performance of software development in well-known areas – for inexperienced software developers. See also the papers in Sabine Hossenfelder’s video. Ironically, they will likely remain inexperienced if the business continues like this.
  • Current AI technology is not useful in areas that are not well known.
  • To enhance transparency and reliability, AI assistants should always provide a measure of confidence in their responses, reflecting their proximity to training data.

Please, leave me a comment if you see something incorrect or even can prove me wrong. Thank you.
