Vectoring Words (Word Embeddings) - Computerphile - Printable Version
+- The Voynich Ninja (https://www.voynich.ninja)
+-- Forum: Voynich Research (https://www.voynich.ninja/forum-27.html)
+--- Forum: Analysis of the text (https://www.voynich.ninja/forum-41.html)
+--- Thread: Vectoring Words (Word Embeddings) - Computerphile (/thread-2987.html)
Vectoring Words (Word Embeddings) - Computerphile - radapox - 24-10-2019

Just stumbled upon this Computerphile clip. I don't exactly follow the process, but it's a method of mapping word similarities (in the "semantic" sense) based purely on their occurrence in comparable contexts within a given corpus, not on any understanding of their actual "meaning". The results are surprisingly accurate. At the risk of sounding like a total noob here: could an approach like this be at all useful for the VM text? If so, it's probably already been done, in which case I'd love to learn more about it.

RE: Vectoring Words (Word Embeddings) - Computerphile - MarcoP - 24-10-2019

A while ago I played around with one or two implementations of similar methods, with no success. Even if Voynichese should one day prove to be meaningful, there are good reasons why these methods do not work:
RE: Vectoring Words (Word Embeddings) - Computerphile - radapox - 25-10-2019

Thanks for your reply, MarcoP! Ah, yes, those are definitely complicating factors, to the point of rendering this approach useless for the VM. Pity.