Someone needs to help Ben Thompson / Stratechery figure out the difference between Retard Media and Rational Media

… and maybe that “someone” could / should be me. 😀

I saw this just now:

After the Internet, though, the total amount of information is so much greater that even if the total amount of misinformation remains just as low relatively speaking, the absolute amount will be correspondingly greater:

https://stratechery.com/2020/zero-trust-information

What bothers me most is the image used to present this idea. Ben evidently does not understand much about statistics; otherwise, the statement might not be half as bad.

Regardless: this is actually a very simple matter. I do not pay attention to “retard media” (note that remediary.com, the website where I first published that article, is currently experiencing an inordinately high amount of traffic, so I have linked to an archive.org copy of the article instead).

Ben appears to be saying that the amount of misinformation online is small relative to the total amount of information. I do not share his naive optimism.

Brand names (such as “stratechery” or “nytimes”) are, in my opinion, inherently worthy of distrust. The only information that is potentially trustworthy is information that is about something. Brand names are not about anything: they are simply meaningless strings used to identify particular products and/or services. When there are many producers of the same product or service, brand names distinguish particular producers or service providers, but they are not themselves indicators of any level of quality. They are contrived constructs to help consumers who wish to exercise loyalty to a particular producer or service provider.

Rational media, on the other hand, is potentially about something: namely, whatever it says it is about. This is relatively simple and straightforward, yet one thing that has made it a little tricky was ICANN’s rollout of proprietary top-level domains a few years ago. For example: novice users (who lack the required level of “digital” literacy) might think that a domain in the “app” TLD is about an “app” (or “apps” in general). First of all, however, “app” may not be very well defined. Second, as people with more advanced literacy skills will probably be aware, the “app” TLD was auctioned off to Google (or Alphabet, or whatever that corporation is now known as in the United States of America; I, for one, do not consider that company trustworthy in many regards, let alone with respect to literacy). Therefore “app” is not actually about “apps”. It is about Google, because Google now owns it.

I doubt ICANN’s decision to auction off many TLDs will be reversed in the near future — and it probably doesn’t matter, either. There are already many generic TLDs in “wide distribution” (much as the English language is also in wide distribution across the globe). Over time, as more and more people become more and more literate, they will become more and more aware of the very large number of proprietary fiefdoms, versus the relatively small number of generic TLDs.

The widely distributed generic TLDs are like dictionaries. Each word in such dictionaries functions as its own specialized search engine, whether for “shopping”, “hotels”, “cars”, or whatever. Market forces will ultimately result in trustworthy information appearing in rational media.
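The distinction between generic and proprietary TLDs can be sketched in code. The following is a minimal illustration, not an authoritative classifier: the two sets below are hypothetical examples I have chosen for the sketch (the real registry data lives with IANA/ICANN), though “app” and “dev” are in fact operated by Google.

```python
# Sketch: classify a domain's top-level domain as "generic" or
# "proprietary". The sets below are illustrative examples only,
# NOT a complete or authoritative list of TLD delegations.

GENERIC_TLDS = {"com", "org", "net", "info", "shopping", "hotels", "cars"}
PROPRIETARY_TLDS = {"app": "Google", "dev": "Google", "amazon": "Amazon"}

def classify_tld(domain: str) -> str:
    """Return a rough label for the domain's top-level domain."""
    # Take the label after the last dot, ignoring any trailing root dot.
    tld = domain.rstrip(".").rsplit(".", 1)[-1].lower()
    if tld in PROPRIETARY_TLDS:
        return f"proprietary (operated by {PROPRIETARY_TLDS[tld]})"
    if tld in GENERIC_TLDS:
        return "generic"
    return "unknown"

print(classify_tld("example.app"))  # proprietary (operated by Google)
print(classify_tld("example.com"))  # generic
```

The point of the sketch is simply that the mapping from TLD to operator is a lookup any literate user (or piece of software) could perform; whether a given TLD is a “dictionary word” or a corporate fiefdom is a matter of record.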

The Library of Congress Web Archives

For our initial foray into the web archive analysis, we examined the metadata about the web objects as opposed to the web objects themselves. Our determination was that this approach would afford us a high-level view of the archive and a solid foundation from which to build out future analyses.

https://blogs.loc.gov/thesignal/2019/01/the-library-of-congress-web-archives-dipping-a-toe-in-a-lake-of-data