Henry Chang
1 min read · Jun 15, 2019


Hi, thank you for the comment. The concatenation of contextual vectors gathers more word samples, giving us a distribution (of contextual word vectors) in a high-dimensional space. Ideally, more word samples, which means more sentences, need to be gathered in order to represent the true distribution of contextual word vectors.

In the case of the word “plant”, we have 47 contextual word vectors after concatenation, each with a dimension of 1024. After PCA, each of the 47 contextual word vectors has a size of 2. Then we can visualize our contextual word vectors of interest (“plant”, “plants”, “planted”).
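A minimal sketch of that reduction step, using random placeholder vectors in place of the actual contextual embeddings (in the article they would come from a contextual model; the names `X` and `points_2d` here are illustrative):

```python
import numpy as np

# Hypothetical stand-in for the 47 contextual word vectors of "plant",
# each 1024-dimensional.
rng = np.random.default_rng(0)
X = rng.standard_normal((47, 1024))

# PCA via SVD: center the data, then project onto the
# top 2 principal directions.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
points_2d = X_centered @ Vt[:2].T

print(points_2d.shape)  # (47, 2)
```

Each row of `points_2d` is one occurrence of the word, now a 2-D point ready for a scatter plot.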

Hope that helps to clarify.
