Scientists have to become better communicators

Keywords: TED, talks, climate change, history, science, religion, astronomy, philosophy

they have to explain to us not just what they know, but how they know it [ca. 18:55]

What Naomi seems to overlook in her concluding remarks (perhaps because she’s so thrilled by the phenomenal scientific successes she covers in her TED talk) is that scientists don’t actually know anything — instead, they simply make more or less educated guesses.

Normally (haha, there’s a sort of pun in there which IS in fact intended) scientists are highly educated in quantitative statistical methods — basically, most of this relies on something that is often referred to as the “Law of Large Numbers”. These days, there is such a fanatical focus on these quantitative methods that science is increasingly succumbing to the fallacy of not actually paying (enough) attention to what it is counting up… and increasingly such quasi-scientists are counting up mesmerizing mashups of bullshit and similarly amorphous phenomena. Several decades ago, before the tsunami of big data overwhelmed most of the so-called scientific community, this junk was called “GIGO” (“garbage in, garbage out”).
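As an aside: the “Law of Large Numbers” mentioned above can be illustrated with a minimal coin-flip simulation (this is my own sketch, not anything from the original talk — the key idea is just that the sample proportion converges toward the expected value as the number of trials grows):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def proportion_of_heads(flips: int) -> float:
    """Flip a fair coin `flips` times and return the observed proportion of heads."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    return heads / flips

# As the number of flips grows, the observed proportion drifts toward 0.5 —
# but note the law says nothing about WHAT you are counting, only how the
# counts behave.
for n in (10, 1_000, 100_000):
    print(n, proportion_of_heads(n))
```

Note that the law only guarantees convergence of the counts — which is precisely the point above: it cannot tell you whether the things being counted are meaningfully comparable in the first place.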

My main point is that qualitative analysis must be the foundation for all quantitative analyses. Unfortunately, modern science does not have a long tradition or history of actually dealing with qualitative analysis — in other words: on this account, we have hardly even reached square one. We can easily measure and count up how often a stone, when released, falls directly to the ground. We can easily explain the reasons why leaves behave differently than stones. But we have not yet paid enough attention to the many and vast differences that exist between what we refer to as a “leaf” vs. what we refer to as a “stone”. In the case of leaves vs. stones the differences are indeed so vast (and numerous) that they appear obvious to the naked eye. But in cases such as “global warming” (the anthropogenic kind) or the “coronavirus” (the COVID-19 kind) — or the causes of “death”, “extinction”, and many other similar phenomena (e.g. what makes something a species, or a race, or whatever) — we are dumbfounded by the multitudes of mutually exclusive interpretations, or perhaps traumatized by various degrees of cognitive dissonance from our attempts to reconcile numerous different theories about “true” reality.

Obviously (to me, at least) we need to pay more attention to the qualities of things — in particular, what makes one thing a different thing than another thing (and also, thereby, what makes the similar things similar enough to be as countable as “one banana, two banana, three banana, four…”).

Once we have successfully achieved that, we can return to heralding the wonders of quantitative analysis (and/or beginning to analyze what is so wonderful about a Gaussian distribution, and/or whether the laws of physics actually do need to be changed inside of “black holes”, and so on). One result we might then trumpet is to revise Naomi’s conclusions to read something more like:

Scientists have to explain to us not just what they guess is probably* right, but how they guess it’s probably* right [* in most cases, usually]

moi (with acknowledgements for yet again calling my attention to related issues: Joe Rogan w/ Barbara Freese, Jeff Skoll & Diane Weyermann and Pierre Omidyar)

Someone needs to help Ben Thompson / Stratechery figure out the difference between Retard Media and Rational Media

… and maybe that “someone” could / should be me. 😀

I saw this just now:

After the Internet, though, the total amount of information is so much greater that even if the total amount of misinformation remains just as low relatively speaking, the absolute amount will be correspondingly greater:

What bothers me most is the image used to present this idea. Obviously, Ben does not understand much about statistics. Otherwise, the statement might not be half as bad.

Regardless: this is actually a very simple matter. I do not pay attention to “retard media” (note that the website where I first published that article is currently experiencing an inordinately high amount of traffic, so I have linked to an “archived” copy of the article instead).

Ben appears to be saying that the amount of misinformation online is small (compared to the total amount of information) — I do not share his naive optimism.
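For what it’s worth, Ben’s underlying point is simple arithmetic: hold the *rate* of misinformation constant, and the *absolute* amount grows in lockstep with the total pool of information. A quick illustration (the rate and the item counts here are hypothetical numbers of my own, purely for demonstration):

```python
# Hypothetical: suppose the misinformation RATE stays fixed at 1%.
MISINFO_RATE = 0.01

def absolute_misinformation(total_items: int) -> int:
    """Absolute count of misinformation items at a fixed relative rate."""
    return int(total_items * MISINFO_RATE)

# A vastly larger total pool (post-Internet) yields a proportionally
# larger absolute amount of misinformation, even at the same rate.
print(absolute_misinformation(10_000))      # smaller, pre-Internet-scale pool
print(absolute_misinformation(10_000_000))  # much larger, Internet-scale pool
```

The disagreement, then, is not about the arithmetic — it is about whether the rate really has stayed “just as low relatively speaking”.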

Brand names (such as “stratechery” or “nytimes”) are (IMHO) inherently distrustworthy — in other words: they are worthy of distrust. The only information that is potentially trustworthy is information that is about something. Brand names are not about anything — they are simply meaningless strings used to identify particular products and/or services. If there are many producers of the same product or service, brand names are used to identify particular producers or service providers — but they are not themselves indicators of any level of quality. They are simply contrived constructs that help consumers who want to exercise loyalty to a particular producer or service provider.

Rational media (on the other hand) is potentially about something — namely whatever it says it is about. This is relatively simple and straightforward — yet one thing that has made it a little tricky was ICANN’s rollout of proprietary top-level domains a few years ago. For example: novice users (who lack the required high level of “digital” literacy) might think that a domain in the “app” TLD is about an “app” (or “apps” in general). Yet first of all: “app” may actually not be very well-defined. But second of all — as people with more advanced literacy skills will probably be aware — the “app” TLD was auctioned off to Google (or Alphabet — whatever that corporation is now known as in the United States of America [I, for one, do not think that company is trustworthy in many regards, let alone with respect to literacy]). Therefore: “app” is not actually about “apps”. It is about Google (because Google now owns it).

I doubt ICANN’s decision to auction off many TLDs will be reversed in the near future — and it probably doesn’t matter, either. There are already many generic TLDs in “wide distribution” (much as the English language is also in wide distribution across the globe). Over time, as more and more people become more and more literate, they will become more and more aware of the very large number of proprietary fiefdoms, versus the relatively small number of generic TLDs.

The widely distributed generic TLDs are like dictionaries. Each word in such dictionaries functions as its own specialized search engine — whether for “shopping”, or “hotels”, or “cars”, or whatever. Market forces will ultimately result in trustworthy information appearing in rational media.

The Library of Congress Web Archives

for our initial foray into the web archive analysis, we examined the metadata about the web objects as opposed to the web objects themselves. Our determination was that this approach would afford us a high-level view of the archive and a solid foundation from which to build out future analyses
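A metadata-first pass like the one described in that quote might look something like the following sketch — to be clear, the record fields and values here are my own assumptions for illustration, not the Library of Congress’s actual schema:

```python
from collections import Counter

# Instead of fetching and parsing the archived web objects themselves,
# tally high-level facts from their metadata records. Hypothetical records:
records = [
    {"mime": "text/html", "status": 200, "year": 2001},
    {"mime": "image/gif", "status": 200, "year": 2001},
    {"mime": "text/html", "status": 404, "year": 2002},
]

def summarize(records):
    """Aggregate counts by content type and year, plus the share of 200 responses."""
    return {
        "by_mime": Counter(r["mime"] for r in records),
        "by_year": Counter(r["year"] for r in records),
        "ok_ratio": sum(r["status"] == 200 for r in records) / len(records),
    }

summary = summarize(records)
print(summary["by_mime"].most_common(1))
```

This is what makes the metadata-only approach attractive as a “high-level view”: aggregation like this scales to millions of records without ever touching the archived objects themselves.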