I tried to find out how the distance of the supernova ASASSN-15lh, detected on 14 June 2015, was measured. I found dozens of copies of the same content across the web, all giving the same information. Obviously the authors had copied the content from each other or from a science magazine.
This example shows that we have to fight the proliferation of the web: cut its extent, not its content.
More than two decades ago I started to investigate the ability of the human brain to give one fact as many different faces as are needed to represent it. I called this methodology “abstraction”. It is quite different
- from Aristotle’s concept of abstraction,
- from mind mapping and knowledge representation,
- and much more than semantic modelling,
which I tried to concentrate on during my work at the University of Cologne.
Finally, after more than 20 years, I found the grounds to produce formal abstractions of informal content. The algorithm I designed proved to be very useful for semantic queries, so I tried to apply it to the huge, overcrowded news pages of the web. The results were somewhat disappointing, because the content turned out to be almost completely redundant: it could be condensed down to very little.
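The original algorithm is not described here, but the effect of condensing duplicated news content can be sketched with a standard near-duplicate technique: comparing word shingles by Jaccard similarity and keeping only items that are not close copies of something already kept. The shingle size and threshold below are illustrative assumptions, not parameters of the actual system.

```python
def shingles(text, k=3):
    """Return the set of k-word shingles of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def condense(items, threshold=0.6):
    """Keep only items that are not near-duplicates of an item already kept."""
    kept = []
    for text in items:
        s = shingles(text)
        if all(jaccard(s, shingles(other)) < threshold for other in kept):
            kept.append(text)
    return kept
```

Run over a crawl of news pages, such a filter collapses the dozens of rewordings of one report into a single representative item, which is exactly why the condensed result can look so thin.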
The upcoming new search engine technology was designed to stop frustrating searches on the internet. Our new search and fit™ technology enables us to fit all newly found content into already known content, on the fly.
This technology gives users a new and active role as researchers:
- Users will learn the content.
- Users will stop searching for the same items again and again.
- Users will no longer be served the same content twice, as they are today in many different layouts, media and forms.
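How new content is "fitted" into what a user already knows is not specified above; one minimal reading is that only the sentences a user has not seen before are surfaced, while known material is suppressed. The sentence-level matching below is purely an illustrative assumption.

```python
def fit(new_text, known_sentences):
    """Surface only sentences not already known; update the known set.

    known_sentences holds normalized sentences the user has already seen.
    Returns (novel_sentences, updated_known_sentences).
    """
    novel = []
    for sentence in new_text.split(". "):
        clean = sentence.strip().rstrip(".")
        key = clean.lower()
        if key and key not in known_sentences:
            novel.append(clean)
            known_sentences.add(key)
    return novel, known_sentences
```

On the second encounter with a piece of content, only the genuinely new sentences come back, so the user never reads the same thing twice.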
Offering a search technology that imitates the yellow pages in order to maximize the time users spend in front of its pages is a disaster.
Searching for the same things again and again is not appropriate for intelligent human brains that are willing to learn.