The Voynich Ninja

Full Version: Publishing negative research
I'm proud to announce that the last two posts on my blog have, more or less, been failures. They prove no hypothesis nor advance our understanding. They even go against some of the things I believed or proposed, or at best leave those questions no nearer to an answer.

But publishing them is still important. Showing what we've done helps others by laying out our lines of reasoning, sharing our statistics and observations, and demonstrating that they don't really fit our hypotheses.

I encourage everybody to post their failures and be open when lines of enquiry run dead (or totally unproven).
Yeah! I couldn't agree more. I do find it frustrating, though, when a question remains unresolved, especially after a lot of work.

One example where I experienced this was when I was trying to connect similar lobster images from various traditions. To quote from my conclusion: "Unfortunately my questions remain mostly unanswered."

I would like to add something even more important, though: formulating a research question. The research of mine that has been most useful and/or most appreciated usually has one thing in common: I started from a concrete question.

The Voynich theorist tends to start from an answer and then look for support for it. But starting from a question is a first step towards protecting oneself from the tentacles of confirmation bias.

By no means will I claim that I always start from a research question; the VM makes this difficult. But when I do, the results tend to be clearer, even if they are negative.
(01-08-2021, 09:09 PM)Emma May Smith Wrote: I'm proud to announce that the last two posts on my blog have, more or less, been failures. They prove no hypothesis nor advance our understanding. They even go against some of the things I believed or proposed, or at best leave those questions no nearer to an answer.

But publishing them is still important. Showing what we've done helps others by laying out our lines of reasoning, sharing our statistics and observations, and demonstrating that they don't really fit our hypotheses.

I encourage everybody to post their failures and be open when lines of enquiry run dead (or totally unproven).

Emma May, you're the bomb! Thank you for admitting your failures (which, let's face it, is 99.9999% of Voynich research, or it would be solved by now) and for encouraging others to do so too. I have yet to publish a full thread or article on my ideas about the Voynich, because I keep tripping up on details where I have to recant or re-order. Recently, I suggested the bottom-left symbol on the rosettes page might be a clock after all, only to be told that clocks did not have second hands in that time period. I wish we had pages where we could look up what has already been disproved, the "failures".

Anyway, I am glad you're not discouraged, because I read your posts with a great deal of interest, and best of all, you write with such clarity that I can often understand your statistics! This post makes me feel braver.
Unfortunately, popular media often portray negative results as failures.  If one searches Loch Ness thoroughly for a monster, and finds neither a monster nor an ecological niche for it to occupy, the result is wrongly reported as "inconclusive." (This sort of commitment to preformed conclusions may also lie behind a familiar incantation by the logically challenged:  "you can't prove a negative.")

On the value of well-formed questions in computational research, it would be difficult to improve on D'Imperio's comment:

"One of the most demanding aspects of scientific work is the framing of useful questions, and the design of experiments that will produce useful answers.  We need to apply this scientific approach to our study of the manuscript, and especially in our use of computers.  In hand studies the limitations of patience and time on the part of the investigator effectively preclude many of the more wasteful activities, or at least prevent their assuming wasteful proportions, but the computer permits us to transcend these limitations and, alas, to carry out wasteful activities on a grand scale."  (ouch)

That said, grandiose Big Data genomicists (whose subject can resemble ours) claim to practice "hypothesis-free" or "discovery" science.  Are these just vogue descriptions of conventional and/or wasteful methods, or something new?  Are they immune to negative outcomes?
I absolutely should make my research questions clearer. In my mind, anybody who reads my posts has read all the previous ones and understands where I'm coming from, even when I'm picking up strands of thought from four years ago. Which is, of course, nonsense.

(One thing I have learnt is that I should weight the figures used in line distributions. I'm well aware that the figures "sag" away from the first and last positions simply because some lines aren't that long, but anybody reading the research casually might be tripped up by that.)
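To make the "sagging" point concrete, here is a minimal sketch in Python. The token lists are invented purely for illustration (they are not real transcription data): a raw count of a word at each line position drops off towards the right just because fewer lines reach those positions, while dividing by the number of lines that actually reach each position removes that artefact.

```python
def raw_counts(lines, token):
    """Count occurrences of `token` at each line position (unweighted)."""
    max_len = max(len(line) for line in lines)
    counts = [0] * max_len
    for line in lines:
        for i, word in enumerate(line):
            if word == token:
                counts[i] += 1
    return counts

def weighted_freq(lines, token):
    """Divide each position's count by the number of lines long enough
    to reach that position, so short lines don't make later positions sag."""
    max_len = max(len(line) for line in lines)
    counts = [0] * max_len
    reach = [0] * max_len  # how many lines have a word at position i
    for line in lines:
        for i, word in enumerate(line):
            reach[i] += 1
            if word == token:
                counts[i] += 1
    return [c / r for c, r in zip(counts, reach)]

# Made-up example lines of differing lengths.
lines = [
    ["daiin", "ol", "chedy", "qokedy"],
    ["daiin", "shedy"],
    ["ol", "daiin", "chedy"],
]
print(raw_counts(lines, "daiin"))    # [2, 1, 0, 0]
print(weighted_freq(lines, "daiin")) # roughly [0.67, 0.33, 0.0, 0.0]
```

The raw counts make "daiin" look like it avoids late positions, but only one of the three lines even has a fourth word; the weighted figures show the corrected picture.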
Being happy to be wrong is one of the most powerful tools to make one smarter. It's a strategy that works across a wide range of cognitive levels.

Unfortunately, and counterintuitively, people who are innately more clever tend to have trouble picking this up because their cleverness works against them to hide the fact that they (like everyone else) are wrong all the time.
The right kind of wrong: making a basically sensible guess, pursuing evidence to test it, finding out it's wrong, learning from the experience, documenting what you found.

The wrong kind of wrong: everything else.
(02-08-2021, 07:54 PM)nickpelling Wrote: The wrong kind of wrong: everything else.

Well, for one, this rule could itself be wrong in the sense that someone could take the "right kind of wrong" as a selfish strategy and not document what they found. Not that you should do that, though, because your documentation is highly valued. :D
I wouldn't say that it hasn't helped.
It is a hypothesis. Sometimes you have to look at something by exclusion.

According to Spock:
If you eliminate everything that is impossible, then whatever remains must be the truth.
It may only be a line from a film, but it still contains truth.