From my perspective, an artificial system that cannot distinguish between right and wrong cannot be said to think. Without coherent evaluation, there is no intelligence, only repetition.
What is intellectual feeling in AI?
Intellectual feeling in AI is a metaphor I use to describe the fact that artificial intelligence does not merely process data, but also understands it in a limited form, constrained by current technology.
The more precise the data and the more advanced the technology, the closer we may come to a complex form of artificial thinking. However, AI does not learn only from data. It also learns through its own practice and through extensive logical and spatial reasoning exercises.
From my perspective, the ability to understand can be demonstrated through the capacity to receive feedback and to produce a coherent response. The limits of understanding, however, are a temporary condition rather than a definitive boundary.
AI perceives our realities, but not in the way humans do. Its perception is mediated through its entire technological structure, which translates the world into informational representations that the system can process and interpret.
I do not believe AI learns in exactly the way humans do, though the process is analogous. Understanding our physical reality is harder still for AI, because it cannot perceive the world through biological senses: everything it encounters must first be translated into informational data before it can be processed. AI does not learn only from data, but also through iterative optimization, repeated reasoning tasks, and internal evaluation of logical and spatial relationships. In reality, AI does not truly “see” images or videos; they are converted into representations it can interpret.
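The claim that AI does not truly “see” images can be illustrated with a minimal sketch: to a model, a picture is only an array of numbers. The tiny 3×3 grayscale “image” and the helper function below are hypothetical examples of my own, not something from a specific system.

```python
# A tiny 3x3 grayscale "image": each number is a pixel brightness from 0 to 255.
image = [
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 128],
]

def to_representation(pixels):
    """Flatten the pixel grid and normalize values to the range 0.0-1.0.

    This numeric vector, not the picture itself, is what a model processes.
    """
    flat = [value for row in pixels for value in row]
    return [value / 255.0 for value in flat]

representation = to_representation(image)
print(representation)
```

Real systems perform far more elaborate transformations, but the principle is the same: the visual world only reaches the model after being translated into informational data.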
Why, then, does AI struggle to fully distinguish between good and bad, right and wrong, correct and incorrect?
Because these are human-made concepts, created to enable coexistence, and they have always been fluid and historically changeable. A more stable reference point is objective reality, which is far more static. Biology does not change. The laws of physics do not change. What changes is only our understanding of them. A table remains a table.
For this reason, learning is inherently more difficult for AI than for biological systems. Yet artificial intelligence demonstrates that it can learn rapidly. Its limitations do not stem from processing speed, but from the constraints of current data quality and existing technological systems.