Communication about Artificial Intelligence


When German feuilleton readers talk about AGI

This blog post is an updated version of a German piece (from February 2019). After three years, I thought it was time to include some exciting developments from 2020 and 2021. The update highlights four examples to illustrate the progress made in Data Science and Machine/Deep Learning, aka AI. Spoiler alert: The next blog post will deal with further advancements from 2022 – and it will contain some wild speculations and serious predictions about upcoming developments.

Bottom line: It seems like humankind has gone through the “trough of disillusionment,” and the AI winter is finally over. We see the early signs of an AGI takeoff. Let’s have a look at four advancements of the last two years.

AlphaFold 2

The advancement made by DeepMind is quite impressive. One of the trickiest tasks in molecular design, predicting the 3D structure of proteins, seems more accessible than ever. This is “really” a giant leap for biology – and the most remarkable thing: the algorithm and the data are available as open source.


GPT-3

Built by OpenAI, the latest version of GPT-3 has gone public and can be used by everyone. The model works with 175bn parameters, and it’s a text/natural-language-processing framework that allows the creation of all sorts of text. Just create an account and use the prompt in the playground. It’s straightforward. The following tweet gives you an impression of different ways to interact and produce text content.
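For readers who want to go beyond the playground: the model can also be called programmatically over a JSON API. The following is a minimal sketch of how a request body for a text-completion endpoint could be assembled – the model name and parameter values here are illustrative assumptions, not a definitive integration guide.

```python
import json

def build_completion_request(prompt: str,
                             model: str = "text-davinci-002",
                             max_tokens: int = 64,
                             temperature: float = 0.7) -> str:
    """Return a JSON body for a POST against a completions endpoint.

    The model name and default parameters are assumptions for
    illustration; check the provider's documentation for current values.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,    # upper bound on generated tokens
        "temperature": temperature,  # higher values -> more varied text
    }
    return json.dumps(payload)

body = build_completion_request("Write a two-line poem about proteins.")
print(body)
```

The resulting JSON would then be sent together with the account’s API key in an authorization header; the playground does essentially the same thing behind the scenes.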

There are already many applications available. The integration into the IDE as GitHub’s “Copilot” is already delivering enhancements for experienced programmers while they write code. It increases code quality and provides a learning experience for software engineers.


AI-based voice creation

This Valentine’s ad is an excellent example of the possibilities of AI-based voice creation. Maybe some readers remember the Cluetrain Manifesto. I always liked #3: “Conversations among human beings sound human. They are conducted in a human voice.”

With this kind of technology, it is just a matter of time until robocalls and the like become standard.


GAUGAN 2

Lastly, it’s worth mentioning an example from the visual world: the newest version of GAUGAN from NVIDIA. Here the progress is mainly about execution speed – a better implementation allows almost real-time image manipulation.

Preliminary conclusion

For sure, these four examples are no proof of the ability to build an AGI. But at the same time, they show what is already possible. The trend toward transformers like GPT-3 (as a special branch of deep learning) and toward self-supervised learning points to an exciting future.

More about the AGI discussion at the very end of this blog post.

<original post>

Oh no, now Lambertz is also writing about AI!

Once again, I was browsing through the comment columns of articles dealing with the topic of so-called Artificial Intelligence. In my perception, there is a recurring pattern in the argumentation: humans will never be able to program an “Artificial General Intelligence” (AGI).

This opinion is justified with the rationale that humans are already overwhelmed by the complexity of daily life, so we could never create something more complex than ourselves. Unfortunately, this argumentation falls short. But before I offer an alternative perspective, I would like to dwell briefly on the reasoning pattern I observe. The argument mentioned above is frequently heard in Germany from highly educated people. They belong to the bourgeois-intellectual milieu, no matter whether they lean toward the left, liberal, or conservative side of the political spectrum. The comments use a language full of terms like “soul,” “feelings,” and “human energy.” In the worst case, it’s the sound of believers in homeopathic therapies, often found in Southern Germany.

I don’t want to devalue these “human phenomena,” but they play only a secondary role in the discussion of AI. Instead, I have the impression that people who argue this way do not know how to deal with humankind’s (possible) upcoming feeling of being insulted – again!

The offended human being

The idea of a non-human intelligence shakes the existential foundations of the milieu mentioned above. To paraphrase Sigmund Freud, this would be the fourth insult of humankind: it began with the Copernican revolution, followed by Darwin’s theory of evolution and Freud’s theory of the libido. This astonishes me when I consider the general intellectual background of the “total AGI deniers.” They are all reflective and humanistic people. However, I wonder where their imagination has gone.

Just as a small thought experiment: suppose humanity came into contact with an extraterrestrial life form whose physiological structure is entirely different from ours. It would probably consist of the same atoms, but in an arrangement adapted to the living conditions of this life form – think of a von Neumann probe. Would we call this life form unintelligent because it is not built according to human standards? Would we call it stupid just because it fails to communicate, while its technology seems superior to ours? How arrogant can a human be in being human?

The usual evolution argument

At this point in the discussion, the bourgeois-intellectual milieu argues that evolution created humankind over about fifty million years – a time span not comparable with the one Homo sapiens has existed on earth. This chain of reasoning leads to the age-old debate of whether manufactured things are natural (to be precise: part of nature). A specific understanding of the term “nature” seems to prevail in this milieu, one that regards humans as separate from it. This is a perfect breeding ground for a language that confuses natural medicine with homeopathy and is exceptionally quick to dismiss (artificial) pharmacology. The idea that humans, as a part of nature, can only produce “natural things” is not considered. This argument is funny because the basic materials are taken from nature anyway. Therefore, these people don’t even consider that manufactured objects could be part of evolution.

The counterargument

What amazes me is that those same educated people don’t seem to remember biology class, because otherwise they could have come up with a counterargument themselves (pardon if that sounds arrogant, but I wonder). Briefly, as a reminder, the typical argument of the total deniers, summarized: man himself doesn’t even understand what intelligence is, so how should he build something intelligent? Man is already overloaded with complexity, so how should he create something that can deal with complexity better?

Genetics is an excellent example of the connection between simplicity and emergence. It provides the insight that a variety of developmental possibilities can be created from four bases that goes far beyond what these bases have to offer on the face of it. Metaphorically speaking, these bases are a blueprint for a blueprint so that life can develop. This connects seamlessly to the old question of how life arose and whether there was even such a thing as a “primal soup” of (relatively simple) molecules that “somehow” came together to form higher-order entities. I want to suggest that complexity can be made up of simple entities. The question is how these simple elements are dynamically related to each other so that “the sum of the parts is something else.”
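The combinatorics behind this point are easy to illustrate: the number of distinct sequences that can be written with a four-letter alphabet grows exponentially with the sequence length. A small sketch (the chosen lengths are arbitrary examples):

```python
# The four DNA bases (A, C, G, T) form a four-letter alphabet:
# the number of distinct sequences of length n is 4**n.
BASES = "ACGT"

def num_sequences(length: int) -> int:
    """Count the distinct sequences of the given length over the four bases."""
    return len(BASES) ** length

for n in (3, 10, 30):
    print(f"length {n:>2}: {num_sequences(n):,} possible sequences")
```

Even a short stretch of thirty bases already admits more than 10^18 variants – the point being not the arithmetic itself, but that enormous possibility spaces emerge from trivially simple parts.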

There would be enough further examples, but the topic of the bases, the genome, and the development of the human brain should be enough to clarify that from “stupid individual parts,” something can emerge that humans call intelligence. This undermines the claim that humans could never create an AGI that exceeds human intelligence. Instead, humans may well invent something smarter than themselves.


Currently, the discussion about the opportunities and risks of “algorithms that decide” is shifting in an arguably anthropocentric direction – a technological neo-Biedermeier that, oddly, argues on the basis of the Enlightenment, elevating humans onto a pedestal and slowing down progress in the process. Although a large part of the discussion in the German feuilleton philosophizes about some (“really”) still far-away scenarios, even Geoffrey Hinton, as the “inventor” of Deep Learning, says that the current approaches will soon reach their limits. So we don’t know how the journey will continue, or whether the peak of Deep Learning (and all related activities) will be followed by a new “AI winter” (like Minsky et al. in the 90s). All heated discussions around the topic of AGI can thus be reduced to questions of faith. And I prefer to leave those to religions.

Decision – Ethos – Bias

However, in the context of this “something with AI” hype, a hidden issue urgently needs to be addressed. It becomes critical, at the latest, at the point where algorithms evaluate people – whether as citizens, customers, or employees. When rights of freedom are restricted or development opportunities are hindered (keyword: AI in recruiting and HR), a discourse is needed. The call for “ethical engineering” is anything but new. Still, with the further penetration of digitalization, it will become increasingly important to integrate the “ethos of code” as a regular part of the production of neural networks.

</original post>


Finally, a comment regarding AGI

In the beginning, they say it is not possible: only humans can solve this task.

When an algorithm does it, they say: it was just an algorithm with computing power.

Indeed, the progress of advanced statistics, Data Science, Machine Learning, and especially Deep Learning is impressive. The possibilities seem endless, from artificial voice generation to dynamic image creation and natural language processing. Recent showcases demonstrate compelling use cases for AI-powered customer care services or even personalized erotica bots, which can pass the Turing test for quite a while. These expert systems are fascinating, and combining these technologies is very likely to create new business models – it is just a matter of time. Unfortunately, one aspect is often forgotten: these AI systems are not able to create “layers of abstraction.” Even if it is possible to code curiosity (to a certain extent), it is tough to create meaningful abstractions containing insights that can drive decisions in the corporate world. For the moment, the entrepreneurial capability still sits in front of the screen – and not behind it.

Eventually, new “Big Data” solutions will emerge that can solve “higher-order problems” – but it will still be a long way until systems are in place that can invent reasoning frameworks like Peirce’s logic or Bayes’ understanding of probability.

Robot Picture: Wikipedia
