<p><em>Yuchen Pei's blog feed, https://ypei.me/blog-feed.xml</em></p>
<h1 id="discriminant-analysis">Discriminant analysis</h1>
<p><em>posts/2019-01-03-discriminant-analysis.html, 2019-01-03</em></p>
<p>In this post I talk about the theory and implementation of linear and quadratic discriminant analysis, classical methods in statistical learning.</p>
<p><strong>Acknowledgement</strong>. Various sources were of great help to my understanding of the subject, including Chapter 4 of <a href="https://web.stanford.edu/~hastie/ElemStatLearn/">The Elements of Statistical Learning</a>, <a href="http://cs229.stanford.edu/notes/cs229-notes2.pdf">Stanford CS229 Lecture notes</a>, and <a href="https://github.com/scikit-learn/scikit-learn/blob/7389dba/sklearn/discriminant_analysis.py">the scikit-learn code</a>. Research was done while working at KTH mathematics department.</p>
<p><em>If you are reading on a mobile device, you may need to "request desktop site" for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.</em></p>
<h2 id="theory">Theory</h2>
<p>Quadratic discriminant analysis (QDA) is a classical classification algorithm. It assumes that the data is generated by Gaussian distributions, where each class has its own mean and covariance.</p>
<p><span class="math display">\[(x | y = i) \sim N(\mu_i, \Sigma_i).\]</span></p>
<p>It also assumes a categorical class prior:</p>
<p><span class="math display">\[\mathbb P(y = i) = \pi_i\]</span></p>
<p>The log of the posterior probability is thus, by Bayes' theorem,</p>
<p><span class="math display">\[\begin{aligned}
\log \mathbb P(y = i | x) &= \log \mathbb P(x | y = i) + \log \mathbb P(y = i) + C\\
&= - {1 \over 2} \log \det \Sigma_i - {1 \over 2} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i) + \log \pi_i + C', \qquad (0)
\end{aligned}\]</span></p>
<p>where <span class="math inline">\(C\)</span> and <span class="math inline">\(C'\)</span> are constants.</p>
<p>Prediction is thus done by taking the argmax of the above formula over <span class="math inline">\(i\)</span>.</p>
<p>In training, let <span class="math inline">\(X\)</span>, <span class="math inline">\(y\)</span> be the input data, where <span class="math inline">\(X\)</span> is of shape <span class="math inline">\(m \times n\)</span>, and <span class="math inline">\(y\)</span> of shape <span class="math inline">\(m\)</span>. We adopt the convention that each row of <span class="math inline">\(X\)</span> is a sample <span class="math inline">\(x^{(i)T}\)</span>. So there are <span class="math inline">\(m\)</span> samples and <span class="math inline">\(n\)</span> features. Let <span class="math inline">\(m_i = \#\{j: y_j = i\}\)</span> be the number of samples in class <span class="math inline">\(i\)</span>, and let <span class="math inline">\(n_c\)</span> be the number of classes.</p>
<p>We estimate <span class="math inline">\(\mu_i\)</span> by the sample means, and <span class="math inline">\(\pi_i\)</span> by the frequencies:</p>
<p><span class="math display">\[\begin{aligned}
\mu_i &:= {1 \over m_i} \sum_{j: y_j = i} x^{(j)}, \\
\pi_i &:= \mathbb P(y = i) = {m_i \over m}.
\end{aligned}\]</span></p>
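<p>As a concrete illustration, here is a minimal numpy sketch of these two estimates. The helper name and variable names are mine, not from any particular library:</p>
<pre><code>import numpy as np

def estimate_means_and_priors(X, y):
    """Estimate the class means mu_i and the priors pi_i.

    X has shape (m, n), one sample per row; y has shape (m,)."""
    classes = np.unique(y)
    mus = np.stack([X[y == i].mean(axis=0) for i in classes])  # shape (n_c, n)
    pis = np.array([(y == i).mean() for i in classes])         # shape (n_c,)
    return classes, mus, pis</code></pre>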
<p>Linear discriminant analysis (LDA) is a specialisation of QDA: it assumes all classes share the same covariance, i.e. <span class="math inline">\(\Sigma_i = \Sigma\)</span> for all <span class="math inline">\(i\)</span>.</p>
<p>Gaussian Naive Bayes is a different specialisation of QDA: it assumes that all <span class="math inline">\(\Sigma_i\)</span> are diagonal, since the features are assumed to be independent given the class.</p>
<h3 id="qda">QDA</h3>
<p>We look at QDA.</p>
<p>We estimate <span class="math inline">\(\Sigma_i\)</span> by the sample covariance:</p>
<p><span class="math display">\[\begin{aligned}
\Sigma_i &= {1 \over m_i - 1} \sum_{j: y_j = i} \hat x^{(j)} \hat x^{(j)T}.
\end{aligned}\]</span></p>
<p>where <span class="math inline">\(\hat x^{(j)} = x^{(j)} - \mu_{y_j}\)</span> are the centred <span class="math inline">\(x^{(j)}\)</span>. Plugging this into (0) we are done.</p>
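<p>The following is a hedged numpy sketch of QDA along these lines, reusing the <code>estimate_means_and_priors</code> helper above and favouring clarity over performance:</p>
<pre><code>def qda_fit(X, y):
    classes, mus, pis = estimate_means_and_priors(X, y)
    sigmas = []
    for i, mu in zip(classes, mus):
        Xc = X[y == i] - mu                     # centred samples of class i
        m_i = Xc.shape[0]
        if m_i == 1:
            raise ValueError("need at least two samples per class")
        sigmas.append(Xc.T @ Xc / (m_i - 1))    # sample covariance of class i
    return classes, mus, pis, sigmas

def qda_predict_one(x, classes, mus, pis, sigmas):
    # score each class by (0), dropping the constant C'
    scores = [- .5 * np.linalg.slogdet(S)[1]
              - .5 * (x - mu) @ np.linalg.solve(S, x - mu)  # raises if S is singular
              + np.log(pi)
              for mu, pi, S in zip(mus, pis, sigmas)]
    return classes[np.argmax(scores)]</code></pre>
<p>Note that <code>np.linalg.solve</code> raises an exception on a singular <span class="math inline">\(\Sigma_i\)</span>, in line with the failure modes discussed next.</p>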
<p>There are two problems that can break the algorithm. First, if one of the <span class="math inline">\(m_i\)</span> is <span class="math inline">\(1\)</span>, then <span class="math inline">\(\Sigma_i\)</span> is ill-defined. Second, one of <span class="math inline">\(\Sigma_i\)</span>'s might be singular.</p>
<p>In either case, there is no way around it, and the implementation should throw an exception.</p>
<p>This won't be a problem for LDA, though, unless there is only one sample per class.</p>
<h3 id="vanilla-lda">Vanilla LDA</h3>
<p>Now let us look at LDA.</p>
<p>Since all classes share the same covariance, we estimate <span class="math inline">\(\Sigma\)</span> using the pooled sample covariance</p>
<p><span class="math display">\[\begin{aligned}
\Sigma &= {1 \over m - n_c} \sum_j \hat x^{(j)} \hat x^{(j)T},
\end{aligned}\]</span></p>
<p>where <span class="math inline">\(\hat x^{(j)} = x^{(j)} - \mu_{y_j}\)</span> and <span class="math inline">\({1 \over m - n_c}\)</span> comes from <a href="https://en.wikipedia.org/wiki/Bessel%27s_correction">Bessel's Correction</a>.</p>
<p>Let us write down the decision function (0). We can remove the first term on the right hand side, since all <span class="math inline">\(\Sigma_i\)</span> are the same, and we only care about argmax of that equation. Thus it becomes</p>
<p><span class="math display">\[- {1 \over 2} (x - \mu_i)^T \Sigma^{-1} (x - \mu_i) + \log\pi_i. \qquad (1)\]</span></p>
<p>Notice that we have avoided the problem of computing <span class="math inline">\(\log \det \Sigma\)</span> when <span class="math inline">\(\Sigma\)</span> is singular.</p>
<p>But how about <span class="math inline">\(\Sigma^{-1}\)</span>?</p>
<p>We sidestep this problem by using the pseudoinverse of <span class="math inline">\(\Sigma\)</span> instead. This can be seen as applying a linear transformation to <span class="math inline">\(X\)</span> to turn its covariance matrix into the identity. And thus the model becomes a sort of a nearest neighbour classifier.</p>
<h3 id="nearest-neighbour-classifier">Nearest neighbour classifier</h3>
<p>More specifically, we want to transform the first term of (1) to a norm to get a classifier based on nearest neighbour modulo <span class="math inline">\(\log \pi_i\)</span>:</p>
<p><span class="math display">\[- {1 \over 2} \|A(x - \mu_i)\|^2 + \log\pi_i\]</span></p>
<p>To compute <span class="math inline">\(A\)</span>, we denote</p>
<p><span class="math display">\[X_c = X - M,\]</span></p>
<p>where the <span class="math inline">\(i\)</span>th row of <span class="math inline">\(M\)</span> is <span class="math inline">\(\mu_{y_i}^T\)</span>, the mean of the class <span class="math inline">\(x^{(i)}\)</span> belongs to, so that <span class="math inline">\(\Sigma = {1 \over m - n_c} X_c^T X_c\)</span>.</p>
<p>Let</p>
<p><span class="math display">\[{1 \over \sqrt{m - n_c}} X_c = U_x \Sigma_x V_x^T\]</span></p>
<p>be the SVD of <span class="math inline">\({1 \over \sqrt{m - n_c}}X_c\)</span>. Let <span class="math inline">\(D_x = \text{diag} (s_1, ..., s_r)\)</span> be the diagonal matrix with all the nonzero singular values, and rewrite <span class="math inline">\(V_x\)</span> as an <span class="math inline">\(n \times r\)</span> matrix consisting of the first <span class="math inline">\(r\)</span> columns of <span class="math inline">\(V_x\)</span>.</p>
<p>Then with an abuse of notation, the pseudoinverse of <span class="math inline">\(\Sigma\)</span> is</p>
<p><span class="math display">\[\Sigma^{-1} = V_x D_x^{-2} V_x^T.\]</span></p>
<p>So we just need to make <span class="math inline">\(A = D_x^{-1} V_x^T\)</span>. When it comes to prediction, just transform <span class="math inline">\(x\)</span> with <span class="math inline">\(A\)</span>, and find the nearest centroid <span class="math inline">\(A \mu_i\)</span> (again, modulo <span class="math inline">\(\log \pi_i\)</span>) and label the input with <span class="math inline">\(i\)</span>.</p>
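<p>Here is a minimal numpy sketch of computing <span class="math inline">\(A\)</span> via the SVD; it assumes integer labels <span class="math inline">\(0, ..., n_c - 1\)</span> so that <code>mus[y]</code> indexes each sample's class mean:</p>
<pre><code>def lda_whitening(X, y, mus, n_c):
    """Compute A = D_x^{-1} V_x^T from the SVD of X_c / sqrt(m - n_c)."""
    m = X.shape[0]
    Xc = (X - mus[y]) / np.sqrt(m - n_c)    # rows centred by their class means
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    r = np.count_nonzero(s > 1e-10 * s[0])  # numerical rank
    return Vt[:r] / s[:r, None]             # A = D_x^{-1} V_x^T, shape (r, n)</code></pre>
<p>Keeping only the nonzero singular values is exactly the pseudoinverse trick above.</p>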
<h3 id="dimensionality-reduction">Dimensionality reduction</h3>
<p>We can further simplify the prediction by dimensionality reduction. Assume <span class="math inline">\(n_c \le n\)</span>. Then the centroids span an affine space of dimension <span class="math inline">\(p\)</span> which is at most <span class="math inline">\(n_c - 1\)</span>. So what we can do is to project both the transformed sample <span class="math inline">\(Ax\)</span> and centroids <span class="math inline">\(A\mu_i\)</span> to the linear subspace parallel to the affine space, and do the nearest neighbour classification there.</p>
<p>So we can perform SVD on the matrix <span class="math inline">\((M - \bar x) V_x D_x^{-1}\)</span> where <span class="math inline">\(\bar x\)</span>, a row vector, is the sample mean of all data i.e. average of rows of <span class="math inline">\(X\)</span>:</p>
<p><span class="math display">\[(M - \bar x) V_x D_x^{-1} = U_m \Sigma_m V_m^T.\]</span></p>
<p>Again, we let <span class="math inline">\(V_m\)</span> be the <span class="math inline">\(r \times p\)</span> matrix obtained by keeping the first <span class="math inline">\(p\)</span> columns of <span class="math inline">\(V_m\)</span>.</p>
<p>The projection operator is thus <span class="math inline">\(V_m\)</span>. And so the final transformation is <span class="math inline">\(V_m^T D_x^{-1} V_x^T\)</span>.</p>
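<p>Continuing the sketch, the projection is one more SVD on top of the whitening transform <code>A</code>; note that <span class="math inline">\((M - \bar x) V_x D_x^{-1} = (M - \bar x) A^T\)</span>:</p>
<pre><code>def lda_transformation(X, y, A, mus, p):
    """Compute B = V_m^T D_x^{-1} V_x^T; row j of M is mu_{y_j}^T."""
    M = mus[y]                              # shape (m, n)
    xbar = X.mean(axis=0)                   # sample mean of all data
    _, _, Vmt = np.linalg.svd((M - xbar) @ A.T, full_matrices=False)
    return Vmt[:p] @ A                      # shape (p, n)</code></pre>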
<p>There is no reason to stop here, and we can set <span class="math inline">\(p\)</span> even smaller, which will result in a lossy compression / regularisation equivalent to doing <a href="https://en.wikipedia.org/wiki/Principal_component_analysis">principal component analysis</a> on <span class="math inline">\((M - \bar x) V_x D_x^{-1}\)</span>.</p>
<p>Note that as of 2019-01-04, in the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/discriminant_analysis.py">scikit-learn implementation of LDA</a>, the prediction is done without any lossy compression, even if the parameter <code>n_components</code> is set to be smaller than dimension of the affine space spanned by the centroids. In other words, the prediction does not change regardless of <code>n_components</code>.</p>
<h3 id="fisher-discriminant-analysis">Fisher discriminant analysis</h3>
<p>The Fisher discriminant analysis involves finding an <span class="math inline">\(n\)</span>-dimensional vector <span class="math inline">\(a\)</span> that maximises between-class covariance with respect to within-class covariance:</p>
<p><span class="math display">\[{a^T M_c^T M_c a \over a^T X_c^T X_c a},\]</span></p>
<p>where <span class="math inline">\(M_c = M - \bar x\)</span> is the centred sample mean matrix.</p>
<p>As it turns out, this is (almost) equivalent to the derivation above, modulo a constant. In particular, <span class="math inline">\(a = c V_x D_x^{-1} V_m\)</span> where <span class="math inline">\(p = 1\)</span>, for an arbitrary constant <span class="math inline">\(c\)</span>.</p>
<p>To see this, we can first multiply the denominator by a constant <span class="math inline">\({1 \over m - n_c}\)</span> so that the matrix in the denominator becomes the covariance estimate <span class="math inline">\(\Sigma\)</span>.</p>
<p>We decompose <span class="math inline">\(a\)</span>: <span class="math inline">\(a = V_x D_x^{-1} b + \tilde V_x \tilde b\)</span>, where <span class="math inline">\(\tilde V_x\)</span> consists of column vectors orthogonal to the column space of <span class="math inline">\(V_x\)</span>.</p>
<p>We ignore the second term in the decomposition. In other words, we only consider <span class="math inline">\(a\)</span> in the column space of <span class="math inline">\(V_x\)</span>.</p>
<p>Then the problem is to find an <span class="math inline">\(r\)</span>-dimensional vector <span class="math inline">\(b\)</span> to maximise</p>
<p><span class="math display">\[{b^T (M_c V_x D_x^{-1})^T (M_c V_x D_x^{-1}) b \over b^T b}.\]</span></p>
<p>This is the problem of principal component analysis, and so <span class="math inline">\(b\)</span> is the first column of <span class="math inline">\(V_m\)</span>.</p>
<p>Therefore, the solution to Fisher discriminant analysis is <span class="math inline">\(a = c V_x D_x^{-1} V_m\)</span> with <span class="math inline">\(p = 1\)</span>.</p>
<h3 id="linear-model">Linear model</h3>
<p>The model is called linear discriminant analysis because it is a linear model. To see this, let <span class="math inline">\(B = V_m^T D_x^{-1} V_x^T\)</span> be the matrix of transformation. Now we are comparing</p>
<p><span class="math display">\[- {1 \over 2} \| B x - B \mu_k\|^2 + \log \pi_k\]</span></p>
<p>across all <span class="math inline">\(k\)</span>s. Expanding the norm and removing the common term <span class="math inline">\(\|B x\|^2\)</span>, we see a linear form:</p>
<p><span class="math display">\[\mu_k^T B^T B x - {1 \over 2} \|B \mu_k\|^2 + \log\pi_k\]</span></p>
<p>So, collecting the centroids into the matrix <span class="math inline">\(K\)</span> whose <span class="math inline">\(k\)</span>th row is <span class="math inline">\(\mu_k^T\)</span>, the prediction for <span class="math inline">\(X_{\text{new}}\)</span> is</p>
<p><span class="math display">\[\text{argmax}_{\text{axis}=0} \left(K B^T B X_{\text{new}}^T - {1 \over 2} \|K B^T\|_{\text{axis}=1}^2 + \log \pi\right)\]</span></p>
<p>thus the decision boundaries are linear.</p>
<p>This is how scikit-learn implements LDA, by inheriting from <code>LinearClassifierMixin</code> and redirecting the classification there.</p>
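<p>A sketch of this linear form, with <span class="math inline">\(K\)</span> the matrix of centroids as above; taking the argmax of the returned scores along axis 1 gives the predicted class indices:</p>
<pre><code>def lda_decision_function(X_new, B, mus, pis):
    K = mus                                       # row k is mu_k^T
    coef = K @ B.T @ B                            # shape (n_c, n)
    intercept = - .5 * np.sum((K @ B.T) ** 2, axis=1) + np.log(pis)
    return X_new @ coef.T + intercept             # shape (m_new, n_c)</code></pre>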
<h2 id="implementation">Implementation</h2>
<p>This is where things get interesting. How do I validate my understanding of the theory? By implementing and testing the algorithm.</p>
<p>I try to implement it as close as possible to the natural language / mathematical descriptions of the model, which means clarity over performance.</p>
<p>How about testing? Numerical experiments are harder to test than combinatorial / discrete algorithms in general because the output is less verifiable by hand. My shortcut solution to this problem is to test against output from the scikit-learn package.</p>
<p>It turned out to be harder than expected, as I had to dig into the code of scikit-learn when the outputs did not match. Their code is quite well-written though.</p>
<p>The result is <a href="https://github.com/ycpei/machine-learning/tree/master/discriminant-analysis">here</a>.</p>
<h3 id="fun-facts-about-lda">Fun facts about LDA</h3>
<p>One property that can be used to test the LDA implementation is the fact that the scatter matrix <span class="math inline">\(B(X - \bar x)^T (X - \bar x) B^T\)</span> of the transformed centred sample is diagonal.</p>
<p>This can be derived by using another fun fact that the sum of the in-class scatter matrix and the between-class scatter matrix is the sample scatter matrix:</p>
<p><span class="math display">\[X_c^T X_c + M_c^T M_c = (X - \bar x)^T (X - \bar x) = (X_c + M_c)^T (X_c + M_c).\]</span></p>
<p>The verification is not very hard and is left as an exercise.</p>
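<p>For completeness, here is how the first fun fact can be turned into a test, as a sketch assuming the helpers above:</p>
<pre><code>def test_transformed_scatter_is_diagonal(X, y, mus, n_c, p):
    A = lda_whitening(X, y, mus, n_c)
    B = lda_transformation(X, y, A, mus, p)
    Xt = (X - X.mean(axis=0)) @ B.T   # transformed centred sample
    S = Xt.T @ Xt                     # scatter matrix B (X - xbar)^T (X - xbar) B^T
    assert np.allclose(S, np.diag(np.diag(S)))</code></pre>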
<h1 id="shapley-lime-and-shap">Shapley, LIME and SHAP</h1>
<p><em>posts/2018-12-02-lime-shapley.html, 2018-12-02</em></p>
<p>In this post I explain LIME (Ribeiro et. al. 2016), the Shapley values (Shapley, 1953) and the SHAP values (Strumbelj-Kononenko, 2014; Lundberg-Lee, 2017).</p>
<p><strong>Acknowledgement</strong>. Thanks to Josef Lindman Hörnlund for bringing the LIME and SHAP papers to my attention. The research was done while working at KTH mathematics department.</p>
<p><em>If you are reading on a mobile device, you may need to "request desktop site" for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.</em></p>
<h2 id="shapley-values">Shapley values</h2>
<p>A coalitional game <span class="math inline">\((v, N)\)</span> of <span class="math inline">\(n\)</span> players involves</p>
<ul>
<li>The set <span class="math inline">\(N = \{1, 2, ..., n\}\)</span> that represents the players.</li>
<li>A function <span class="math inline">\(v: 2^N \to \mathbb R\)</span>, where <span class="math inline">\(v(S)\)</span> is the worth of coalition <span class="math inline">\(S \subset N\)</span>.</li>
</ul>
<p>The Shapley values <span class="math inline">\(\phi_i(v)\)</span> of such a game specify a fair way to distribute the total worth <span class="math inline">\(v(N)\)</span> to the players. It is defined as (in the following, for a set <span class="math inline">\(S \subset N\)</span> we use the convention <span class="math inline">\(s = |S|\)</span> to be the number of elements of set <span class="math inline">\(S\)</span> and the shorthand <span class="math inline">\(S - i := S \setminus \{i\}\)</span> and <span class="math inline">\(S + i := S \cup \{i\}\)</span>)</p>
<p><span class="math display">\[\phi_i(v) = \sum_{S: i \in S} {(n - s)! (s - 1)! \over n!} (v(S) - v(S - i)).\]</span></p>
<p>It is not hard to see that <span class="math inline">\(\phi_i(v)\)</span> can be viewed as an expectation:</p>
<p><span class="math display">\[\phi_i(v) = \mathbb E_{S \sim \nu_i} (v(S) - v(S - i))\]</span></p>
<p>where <span class="math inline">\(\nu_i(S) = n^{-1} {n - 1 \choose s - 1}^{-1} 1_{i \in S}\)</span>, that is, first pick the size <span class="math inline">\(s\)</span> uniformly from <span class="math inline">\(\{1, 2, ..., n\}\)</span>, then pick <span class="math inline">\(S\)</span> uniformly from the subsets of <span class="math inline">\(N\)</span> that have size <span class="math inline">\(s\)</span> and contain <span class="math inline">\(i\)</span>.</p>
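<p>As a sanity check, here is a direct (exponential-time) Python implementation of the definition, fine for toy games; representing <span class="math inline">\(v\)</span> as a function on frozensets is my choice:</p>
<pre><code>from itertools import combinations
from math import factorial

def shapley_values(v, n):
    players = range(1, n + 1)
    return {i: sum(factorial(n - s) * factorial(s - 1) / factorial(n)
                   * (v(frozenset(S)) - v(frozenset(S) - {i}))
                   for s in range(1, n + 1)
                   for S in combinations(players, s) if i in S)
            for i in players}

# a toy 2-player game
v = lambda S: {frozenset(): 0., frozenset({1}): 1.,
               frozenset({2}): 2., frozenset({1, 2}): 4.}[S]
print(shapley_values(v, 2))  # {1: 1.5, 2: 2.5}, summing to v(N) - v(empty) = 4</code></pre>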
<p>The Shapley values satisfy some nice properties which are readily verified, including:</p>
<ul>
<li><strong>Efficiency</strong>. <span class="math inline">\(\sum_i \phi_i(v) = v(N) - v(\emptyset)\)</span>.</li>
<li><strong>Symmetry</strong>. If for some <span class="math inline">\(i, j \in N\)</span>, for all <span class="math inline">\(S \subset N\)</span>, we have <span class="math inline">\(v(S + i) = v(S + j)\)</span>, then <span class="math inline">\(\phi_i(v) = \phi_j(v)\)</span>.</li>
<li><strong>Null player</strong>. If for some <span class="math inline">\(i \in N\)</span>, for all <span class="math inline">\(S \subset N\)</span>, we have <span class="math inline">\(v(S + i) = v(S)\)</span>, then <span class="math inline">\(\phi_i(v) = 0\)</span>.</li>
<li><strong>Linearity</strong>. <span class="math inline">\(\phi_i\)</span> is linear in games. That is <span class="math inline">\(\phi_i(v) + \phi_i(w) = \phi_i(v + w)\)</span>, where <span class="math inline">\(v + w\)</span> is defined by <span class="math inline">\((v + w)(S) := v(S) + w(S)\)</span>.</li>
</ul>
<p>In the literature, an added assumption <span class="math inline">\(v(\emptyset) = 0\)</span> is often given, in which case the Efficiency property is defined as <span class="math inline">\(\sum_i \phi_i(v) = v(N)\)</span>. Here I discard this assumption to avoid minor inconsistencies across different sources. For example, in the LIME paper, the local model is defined without an intercept, even though the underlying <span class="math inline">\(v(\emptyset)\)</span> may not be <span class="math inline">\(0\)</span>. In the SHAP paper, an intercept <span class="math inline">\(\phi_0 = v(\emptyset)\)</span> is added which fixes this problem when making connections to the Shapley values.</p>
<p>Conversely, according to Strumbelj-Kononenko (2010), it was shown in Shapley's original paper (Shapley, 1953) that these four properties together with <span class="math inline">\(v(\emptyset) = 0\)</span> define the Shapley values.</p>
<h2 id="lime">LIME</h2>
<p>LIME (Ribeiro et. al. 2016) is a model that offers a way to explain feature contributions of supervised learning models locally.</p>
<p>Let <span class="math inline">\(f: X_1 \times X_2 \times ... \times X_n \to \mathbb R\)</span> be a function. We can think of <span class="math inline">\(f\)</span> as a model, where <span class="math inline">\(X_j\)</span> is the space of <span class="math inline">\(j\)</span>th feature. For example, in a language model, <span class="math inline">\(X_j\)</span> may correspond to the count of the <span class="math inline">\(j\)</span>th word in the vocabulary, i.e. the bag-of-words model.</p>
<p>The output may be something like housing price, or log-probability of something.</p>
<p>LIME tries to assign a value to each feature <em>locally</em>. By locally, we mean that given a specific sample <span class="math inline">\(x \in X := \prod_{i = 1}^n X_i\)</span>, we want to fit a model around it.</p>
<p>More specifically, let <span class="math inline">\(h_x: 2^N \to X\)</span> be a function defined by</p>
<p><span class="math display">\[(h_x(S))_i =
\begin{cases}
x_i, & \text{if }i \in S; \\
0, & \text{otherwise.}
\end{cases}\]</span></p>
<p>That is, <span class="math inline">\(h_x(S)\)</span> masks the features that are not in <span class="math inline">\(S\)</span>, or in other words, we are perturbing the sample <span class="math inline">\(x\)</span>. Specifically, <span class="math inline">\(h_x(N) = x\)</span>. Alternatively, the <span class="math inline">\(0\)</span> in the "otherwise" case can be replaced by some kind of default value (see the section titled SHAP in this post).</p>
<p>For a set <span class="math inline">\(S \subset N\)</span>, let us denote by <span class="math inline">\(1_S \in \{0, 1\}^n\)</span> the <span class="math inline">\(n\)</span>-bit vector whose <span class="math inline">\(k\)</span>th bit is <span class="math inline">\(1\)</span> if and only if <span class="math inline">\(k \in S\)</span>.</p>
<p>Basically, LIME samples <span class="math inline">\(S_1, S_2, ..., S_m \subset N\)</span> to obtain a set of perturbed samples <span class="math inline">\(x_i = h_x(S_i)\)</span> in the <span class="math inline">\(X\)</span> space, and then fits a linear model <span class="math inline">\(g\)</span> using <span class="math inline">\(1_{S_i}\)</span> as the input samples and <span class="math inline">\(f(h_x(S_i))\)</span> as the output samples:</p>
<p><strong>Problem</strong>(LIME). Find <span class="math inline">\(w = (w_1, w_2, ..., w_n)\)</span> that minimises</p>
<p><span class="math display">\[\sum_i (w \cdot 1_{S_i} - f(h_x(S_i)))^2 \pi_x(h_x(S_i))\]</span></p>
<p>where <span class="math inline">\(\pi_x(x')\)</span> is a function that penalises <span class="math inline">\(x'\)</span>s that are far away from <span class="math inline">\(x\)</span>. In the LIME paper the Gaussian kernel was used:</p>
<p><span class="math display">\[\pi_x(x') = \exp\left({- \|x - x'\|^2 \over \sigma^2}\right).\]</span></p>
<p>Then <span class="math inline">\(w_i\)</span> represents the importance of the <span class="math inline">\(i\)</span>th feature.</p>
<p>The LIME model has a more general framework, but the specific model considered in the paper is the one described above, with a Lasso for feature selection.</p>
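<p>Below is a minimal numpy sketch of this specific model, using plain weighted least squares instead of the Lasso to keep it short; the sampling scheme (uniform random subsets) and all names are my assumptions:</p>
<pre><code>import numpy as np

def lime_weights(f, x, n_samples=1000, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    Z = rng.integers(0, 2, size=(n_samples, n))  # rows are the 1_{S_i}
    Xp = Z * x                                   # h_x(S_i): zero out masked features
    fx = np.array([f(xp) for xp in Xp])
    pi = np.exp(- np.sum((Xp - x) ** 2, axis=1) / sigma ** 2)
    # weighted least squares via rescaling both sides by sqrt(pi)
    sw = np.sqrt(pi)
    w, *_ = np.linalg.lstsq(Z * sw[:, None], fx * sw, rcond=None)
    return w                                     # w_i: importance of feature i</code></pre>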
<p><strong>Remark</strong>. One difference between our account here and the one in the LIME paper is: the dimension of the data space may differ from <span class="math inline">\(n\)</span> (see Section 3.1 of that paper). But in the case of text data, they do use bag-of-words (our <span class="math inline">\(X\)</span>) for an "intermediate" representation. So my understanding is, in their context, there is an "original" data space (let's call it <span class="math inline">\(X'\)</span>). And there is a one-one correspondence between <span class="math inline">\(X'\)</span> and <span class="math inline">\(X\)</span> (let's call it <span class="math inline">\(r: X' \to X\)</span>), so that given a sample <span class="math inline">\(x' \in X'\)</span>, we can compute the output of <span class="math inline">\(S\)</span> in the local model with <span class="math inline">\(f(r^{-1}(h_{r(x')}(S)))\)</span>. For example, when <span class="math inline">\(X\)</span> is the bag of words, <span class="math inline">\(X'\)</span> may be the embedding vector space, so that <span class="math inline">\(r(x') = A^{-1} x'\)</span>, where <span class="math inline">\(A\)</span> is the word embedding matrix. Therefore, without loss of generality, we assume the input space to be <span class="math inline">\(X\)</span> which is of dimension <span class="math inline">\(n\)</span>.</p>
<h2 id="shapley-values-and-lime">Shapley values and LIME</h2>
<p>The connection between the Shapley values and LIME is noted in Lundberg-Lee (2017), but the underlying connection goes back to 1988 (Charnes et. al.).</p>
<p>To see the connection, we need to modify LIME a bit.</p>
<p>First, we need to make LIME less efficient by considering <em>all</em> the <span class="math inline">\(2^n\)</span> subsets instead of the <span class="math inline">\(m\)</span> samples <span class="math inline">\(S_1, S_2, ..., S_{m}\)</span>.</p>
<p>Then we need to relax the definition of <span class="math inline">\(\pi_x\)</span>. It no longer needs to penalise samples that are far away from <span class="math inline">\(x\)</span>. In fact, we will see later that the choice of <span class="math inline">\(\pi_x(x')\)</span> that yields the Shapley values is high when <span class="math inline">\(x'\)</span> is very close or very far away from <span class="math inline">\(x\)</span>, and low otherwise. We further add the restriction that <span class="math inline">\(\pi_x(h_x(S))\)</span> only depends on the size of <span class="math inline">\(S\)</span>, thus we rewrite it as <span class="math inline">\(q(s)\)</span> instead.</p>
<p>We also denote <span class="math inline">\(v(S) := f(h_x(S))\)</span> and <span class="math inline">\(w(S) = \sum_{i \in S} w_i\)</span>.</p>
<p>Finally, we add the Efficiency property as a constraint: <span class="math inline">\(\sum_{i = 1}^n w_i = f(x) - f(h_x(\emptyset)) = v(N) - v(\emptyset)\)</span>.</p>
<p>Then the problem becomes a weighted linear regression:</p>
<p><strong>Problem</strong>. Minimise <span class="math inline">\(\sum_{S \subset N} (w(S) - v(S))^2 q(s)\)</span> over <span class="math inline">\(w\)</span>, subject to <span class="math inline">\(w(N) = v(N) - v(\emptyset)\)</span>.</p>
<p><strong>Claim</strong> (Charnes et. al. 1988). The solution to this problem is</p>
<p><span class="math display">\[w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s)\right)^{-1} \sum_{S \subset N: i \in S} \left({n - s \over n} q(s) v(S) - {s - 1 \over n} q(s - 1) v(S - i)\right). \qquad (-1)\]</span></p>
<p>Specifically, if we choose</p>
<p><span class="math display">\[q(s) = c {n - 2 \choose s - 1}^{-1}\]</span></p>
<p>for any constant <span class="math inline">\(c\)</span>, then <span class="math inline">\(w_i = \phi_i(v)\)</span> are the Shapley values.</p>
<p><strong>Remark</strong>. Don't worry about this specific choice of <span class="math inline">\(q(s)\)</span> when <span class="math inline">\(s = 0\)</span> or <span class="math inline">\(n\)</span>, because <span class="math inline">\(q(0)\)</span> and <span class="math inline">\(q(n)\)</span> do not appear on the right hand side of (-1). Therefore they can be defined to be of any value. A common convention of the binomial coefficients is to set <span class="math inline">\({\ell \choose k} = 0\)</span> if <span class="math inline">\(k < 0\)</span> or <span class="math inline">\(k > \ell\)</span>, in which case <span class="math inline">\(q(0) = q(n) = \infty\)</span>.</p>
<p>In Lundberg-Lee (2017), <span class="math inline">\(c\)</span> is chosen to be <span class="math inline">\(1 / n\)</span>, see Theorem 2 there.</p>
<p>In Charnes et. al. 1988, the <span class="math inline">\(w_i\)</span>s defined in (-1) are called the generalised Shapley values.</p>
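<p>The claim invites a numerical check: solve the constrained weighted regression directly and compare with the Shapley values computed from the definition. A sketch, reusing the toy game from the Shapley section; the KKT formulation is mine:</p>
<pre><code>import numpy as np
from itertools import combinations
from math import comb

def constrained_wls(v, n, c=1.0):
    players = list(range(1, n + 1))
    Z, t, q = [], [], []
    for s in range(1, n):                   # q(0) and q(n) never enter
        for S in combinations(players, s):
            Z.append([float(i in S) for i in players])
            t.append(v(frozenset(S)))
            q.append(c / comb(n - 2, s - 1))
    Z, t, q = np.array(Z), np.array(t), np.array(q)
    # KKT system: stationarity in w, plus the constraint w(N) = v(N) - v(empty)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = 2 * Z.T @ (q[:, None] * Z)
    A[:n, n] = A[n, :n] = 1.0
    b = np.zeros(n + 1)
    b[:n] = 2 * Z.T @ (q * t)
    b[n] = v(frozenset(players)) - v(frozenset())
    return np.linalg.solve(A, b)[:n]

print(constrained_wls(v, 2))  # [1.5 2.5], matching the Shapley values</code></pre>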
<p><strong>Proof</strong>. The Lagrangian is</p>
<p><span class="math display">\[L(w, \lambda) = \sum_{S \subset N} (v(S) - w(S))^2 q(s) - \lambda(w(N) - v(N) + v(\emptyset)).\]</span></p>
<p>and by making <span class="math inline">\(\partial_{w_i} L(w, \lambda) = 0\)</span> we have</p>
<p><span class="math display">\[{1 \over 2} \lambda = \sum_{S \subset N: i \in S} (w(S) - v(S)) q(s). \qquad (0)\]</span></p>
<p>Summing (0) over <span class="math inline">\(i\)</span> and dividing by <span class="math inline">\(n\)</span>, we have</p>
<p><span class="math display">\[{1 \over 2} \lambda = {1 \over n} \sum_i \sum_{S: i \in S} (w(S) q(s) - v(S) q(s)). \qquad (1)\]</span></p>
<p>We examine each of the two terms on the right hand side.</p>
<p>Counting the terms involving <span class="math inline">\(w_i\)</span> and <span class="math inline">\(w_j\)</span> for <span class="math inline">\(j \neq i\)</span>, and using <span class="math inline">\(w(N) = v(N) - v(\emptyset)\)</span> we have</p>
<p><span class="math display">\[\begin{aligned}
&\sum_{S \subset N: i \in S} w(S) q(s) \\
&= \sum_{s = 1}^n {n - 1 \choose s - 1} q(s) w_i + \sum_{j \neq i}\sum_{s = 2}^n {n - 2 \choose s - 2} q(s) w_j \\
&= q(1) w_i + \sum_{s = 2}^n q(s) \left({n - 1 \choose s - 1} w_i + \sum_{j \neq i} {n - 2 \choose s - 2} w_j\right) \\
&= q(1) w_i + \sum_{s = 2}^n \left({n - 2 \choose s - 1} w_i + {n - 2 \choose s - 2} (v(N) - v(\emptyset))\right) q(s) \\
&= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) w_i + \sum_{s = 2}^n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset)). \qquad (2)
\end{aligned}\]</span></p>
<p>Summing (2) over <span class="math inline">\(i\)</span>, we obtain</p>
<p><span class="math display">\[\begin{aligned}
&\sum_i \sum_{S: i \in S} w(S) q(s)\\
&= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) (v(N) - v(\emptyset)) + \sum_{s = 2}^n n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset))\\
&= \sum_{s = 1}^n s{n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset)). \qquad (3)
\end{aligned}\]</span></p>
<p>For the second term in (1), we have</p>
<p><span class="math display">\[\sum_i \sum_{S: i \in S} v(S) q(s) = \sum_{S \subset N} s v(S) q(s). \qquad (4)\]</span></p>
<p>Plugging (3)(4) in (1), we have</p>
<p><span class="math display">\[{1 \over 2} \lambda = {1 \over n} \left(\sum_{S \subset N} s q(s) v(S) - \sum_{s = 1}^n s {n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset))\right). \qquad (5)\]</span></p>
<p>Plugging (5)(2) in (0) and solving for <span class="math inline">\(w_i\)</span>, we have</p>
<p><span class="math display">\[w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) \right)^{-1} \left( \sum_{S: i \in S} q(s) v(S) - {1 \over n} \sum_{S \subset N} s q(s) v(S) \right). \qquad (6)\]</span></p>
<p>By splitting all subsets of <span class="math inline">\(N\)</span> into ones that contain <span class="math inline">\(i\)</span> and ones that do not and pair them up, we have</p>
<p><span class="math display">\[\sum_{S \subset N} s q(s) v(S) = \sum_{S: i \in S} (s q(s) v(S) + (s - 1) q(s - 1) v(S - i)).\]</span></p>
<p>Plugging this back into (6) we get the desired result. <span class="math inline">\(\square\)</span></p>
<h2 id="shap">SHAP</h2>
<p>The paper that coined the term "SHAP values" (Lundberg-Lee 2017) is not clear in its definition of the "SHAP values" and its relation to LIME, so the following is my interpretation of their interpretation model, which coincides with a model studied in Strumbelj-Kononenko 2014.</p>
<p>Recall that we want to calculate feature contributions to a model <span class="math inline">\(f\)</span> at a sample <span class="math inline">\(x\)</span>.</p>
<p>Let <span class="math inline">\(\mu\)</span> be a probability density function over the input space <span class="math inline">\(X = X_1 \times ... \times X_n\)</span>. A natural choice would be the density that generates the data, or one that approximates such density (e.g. empirical distribution).</p>
<p>The feature contribution (SHAP value) is thus defined as the Shapley value <span class="math inline">\(\phi_i(v)\)</span>, where</p>
<p><span class="math display">\[v(S) = \mathbb E_{z \sim \mu} (f(z) | z_S = x_S). \qquad (7)\]</span></p>
<p>So it is a conditional expectation where <span class="math inline">\(z_i\)</span> is clamped for <span class="math inline">\(i \in S\)</span>. In fact, the definition of feature contributions in this form predates Lundberg-Lee 2017. For example, it can be found in Strumbelj-Kononenko 2014.</p>
<p>One simplification is to assume the <span class="math inline">\(n\)</span> features are independent, thus <span class="math inline">\(\mu = \mu_1 \times \mu_2 \times ... \times \mu_n\)</span>. In this case, (7) becomes</p>
<p><span class="math display">\[v(S) = \mathbb E_{z_{N \setminus S} \sim \mu_{N \setminus S}} f(x_S, z_{N \setminus S}) \qquad (8)\]</span></p>
<p>For example, Strumbelj-Kononenko (2010) considers this scenario where <span class="math inline">\(\mu\)</span> is the uniform distribution over <span class="math inline">\(X\)</span>, see Definition 4 there.</p>
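<p>Under the independence assumption, (8) can be estimated by Monte Carlo: clamp the features in <span class="math inline">\(S\)</span> and sample the rest from, say, the empirical marginals of a background dataset. A sketch; the helper and its names are mine:</p>
<pre><code>import numpy as np

def v_independent(f, x, S, X_bg, n_samples=100, seed=0):
    """Estimate v(S) as in (8); sampling each column independently
    assumes mu = mu_1 x mu_2 x ... x mu_n."""
    rng = np.random.default_rng(seed)
    z = np.tile(np.asarray(x, dtype=float), (n_samples, 1))
    for j in range(len(x)):
        if j not in S:                      # resample out-of-coalition features
            z[:, j] = rng.choice(X_bg[:, j], size=n_samples)
    return np.mean([f(zi) for zi in z])</code></pre>
<p>Plugging this <span class="math inline">\(v\)</span> into the Shapley formula then gives the (independent-features) SHAP values.</p>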
<p>A further simplification is model linearity, which means <span class="math inline">\(f\)</span> is linear. In this case, (8) becomes</p>
<p><span class="math display">\[v(S) = f(x_S, \mathbb E_{\mu_{N \setminus S}} z_{N \setminus S}). \qquad (9)\]</span></p>
<p>It is worth noting that to make the modified LIME model considered in the previous section fall under the linear SHAP framework (9), we need to make two further specialisations, the first is rather cosmetic: we need to change the definition of <span class="math inline">\(h_x(S)\)</span> to</p>
<p><span class="math display">\[(h_x(S))_i =
\begin{cases}
x_i, & \text{if }i \in S; \\
\mathbb E_{\mu_i} z_i, & \text{otherwise.}
\end{cases}\]</span></p>
<p>But we also need to boldly assume the original <span class="math inline">\(f\)</span> to be linear, which, in my view, defeats the purpose of interpretability, because linear models are interpretable by themselves.</p>
<p>One may argue that perhaps we do not need linearity to define <span class="math inline">\(v(S)\)</span> as in (9). If we do so, however, then (9) loses mathematical meaning. A bigger question is: how effective is SHAP? An even bigger question: in general, how do we evaluate models of interpretation?</p>
<h2 id="evaluating-shap">Evaluating SHAP</h2>
<p>The quest of the SHAP paper can be decoupled into two independent components: showing the niceties of Shapley values and choosing the coalitional game <span class="math inline">\(v\)</span>.</p>
<p>The SHAP paper argues that Shapley values <span class="math inline">\(\phi_i(v)\)</span> are a good measurement because they are the only values satisfying some nice properties including the Efficiency property mentioned at the beginning of the post, invariance under permutation and monotonicity, see the paragraph below Theorem 1 there, which refers to Theorem 2 of Young (1985).</p>
<p>Indeed, both efficiency (the "additive feature attribution methods" in the paper) and monotonicity are meaningful when considering <span class="math inline">\(\phi_i(v)\)</span> as the feature contribution of the <span class="math inline">\(i\)</span>th feature.</p>
<p>The question is thus reduced to the second component: what constitutes a nice choice of <span class="math inline">\(v\)</span>?</p>
<p>The SHAP paper answers this question with 3 options with increasing simplification: (7)(8)(9) in the previous section of this post (corresponding to (9)(11)(12) in the paper). They are intuitive, but it will be interesting to see more concrete (or even mathematical) justifications of such choices.</p>
<h2 id="references">References</h2>
<ul>
<li>Charnes, A., B. Golany, M. Keane, and J. Rousseau. "Extremal Principle Solutions of Games in Characteristic Function Form: Core, Chebychev and Shapley Value Generalizations." In Econometrics of Planning and Efficiency, edited by Jati K. Sengupta and Gopal K. Kadekodi, 123--33. Dordrecht: Springer Netherlands, 1988. <a href="https://doi.org/10.1007/978-94-009-3677-5_7" class="uri">https://doi.org/10.1007/978-94-009-3677-5_7</a>.</li>
<li>Lundberg, Scott, and Su-In Lee. "A Unified Approach to Interpreting Model Predictions." ArXiv:1705.07874 [Cs, Stat], May 22, 2017. <a href="http://arxiv.org/abs/1705.07874" class="uri">http://arxiv.org/abs/1705.07874</a>.</li>
<li>Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "'Why Should I Trust You?': Explaining the Predictions of Any Classifier." ArXiv:1602.04938 [Cs, Stat], February 16, 2016. <a href="http://arxiv.org/abs/1602.04938" class="uri">http://arxiv.org/abs/1602.04938</a>.</li>
<li>Shapley, L. S. "17. A Value for n-Person Games." In Contributions to the Theory of Games (AM-28), Volume II, Vol. 2. Princeton: Princeton University Press, 1953. <a href="https://doi.org/10.1515/9781400881970-018" class="uri">https://doi.org/10.1515/9781400881970-018</a>.</li>
<li>Strumbelj, Erik, and Igor Kononenko. "An Efficient Explanation of Individual Classifications Using Game Theory." J. Mach. Learn. Res. 11 (March 2010): 1--18.</li>
<li>Strumbelj, Erik, and Igor Kononenko. “Explaining Prediction Models and Individual Predictions with Feature Contributions.” Knowledge and Information Systems 41, no. 3 (December 2014): 647–65. <a href="https://doi.org/10.1007/s10115-013-0679-x" class="uri">https://doi.org/10.1007/s10115-013-0679-x</a>.</li>
<li>Young, H. P. “Monotonic Solutions of Cooperative Games.” International Journal of Game Theory 14, no. 2 (June 1, 1985): 65–72. <a href="https://doi.org/10.1007/BF01769885" class="uri">https://doi.org/10.1007/BF01769885</a>.</li>
</ul>
<h1 id="automatic-differentiation">Automatic differentiation</h1>
<p><em>posts/2018-06-03-automatic_differentiation.html, 2018-06-03</em></p>
<p>This post serves as a note and explainer of autodiff. It is licensed under <a href="https://www.gnu.org/licenses/fdl.html">GNU FDL</a>.</p>
<p>For my learning I benefited a lot from <a href="http://www.cs.toronto.edu/%7Ergrosse/courses/csc321_2018/slides/lec10.pdf">Toronto CSC321 slides</a> and the <a href="https://github.com/mattjj/autodidact/">autodidact</a> project which is a pedagogical implementation of <a href="https://github.com/hips/autograd">Autograd</a>. That said, any mistakes in this note are mine (especially since some of the knowledge is obtained from interpreting slides!), and if you do spot any I would be grateful if you can let me know.</p>
<p>Automatic differentiation (AD) is a way to compute derivatives. It does so by traversing through a computational graph using the chain rule.</p>
<p>There are two kinds: forward mode AD and reverse mode AD. They are in a sense symmetric to each other, and understanding one of them makes it easy to understand the other.</p>
<p>In the language of neural networks, one can say that the forward mode AD is used when one wants to compute the derivatives of functions at all layers with respect to input layer weights, whereas the reverse mode AD is used to compute the derivatives of output functions with respect to weights at all layers. Therefore reverse mode AD (rmAD) is the one to use for gradient descent, which is the one we focus in this post.</p>
<p>Basically rmAD requires the computation to be sufficiently decomposed, so that in the computational graph, each node as a function of its parent nodes is an elementary function that the AD engine has knowledge about.</p>
<p>For example, the Sigmoid activation <span class="math inline">\(a' = \sigma(w a + b)\)</span> is quite simple, but it should be decomposed to simpler computations:</p>
<ul>
<li><span class="math inline">\(a' = 1 / t_1\)</span></li>
<li><span class="math inline">\(t_1 = 1 + t_2\)</span></li>
<li><span class="math inline">\(t_2 = \exp(t_3)\)</span></li>
<li><span class="math inline">\(t_3 = - t_4\)</span></li>
<li><span class="math inline">\(t_4 = t_5 + b\)</span></li>
<li><span class="math inline">\(t_5 = w a\)</span></li>
</ul>
<p>Thus the function <span class="math inline">\(a'(a)\)</span> is decomposed to elementary operations like addition, subtraction, multiplication, reciprocation, exponentiation, logarithm etc. And the rmAD engine stores the Jacobian of these elementary operations.</p>
<p>Since in neural networks we want to find derivatives of a single loss function <span class="math inline">\(L(x; \theta)\)</span>, we can omit <span class="math inline">\(L\)</span> when writing derivatives and denote, say <span class="math inline">\(\bar \theta_k := \partial_{\theta_k} L\)</span>.</p>
<p>In implementations of rmAD, one can represent the Jacobian as a transformation <span class="math inline">\(j: (x \to y) \to (y, \bar y, x) \to \bar x\)</span>. <span class="math inline">\(j\)</span> is called the <em>Vector Jacobian Product</em> (VJP). For example, <span class="math inline">\(j(\exp)(y, \bar y, x) = y \bar y\)</span> since given <span class="math inline">\(y = \exp(x)\)</span>,</p>
<p><span class="math inline">\(\partial_x L = \partial_x y \cdot \partial_y L = \partial_x \exp(x) \cdot \partial_y L = y \bar y\)</span></p>
<p>as another example, <span class="math inline">\(j(+)(y, \bar y, x_1, x_2) = (\bar y, \bar y)\)</span> since given <span class="math inline">\(y = x_1 + x_2\)</span>, <span class="math inline">\(\bar{x_1} = \bar{x_2} = \bar y\)</span>.</p>
<p>Similarly,</p>
<ol style="list-style-type: decimal">
<li><span class="math inline">\(j(/)(y, \bar y, x_1, x_2) = (\bar y / x_2, - \bar y x_1 / x_2^2)\)</span></li>
<li><span class="math inline">\(j(\log)(y, \bar y, x) = \bar y / x\)</span></li>
<li><span class="math inline">\(j((A, \beta) \mapsto A \beta)(y, \bar y, A, \beta) = (\bar y \otimes \beta, A^T \bar y)\)</span>.</li>
<li>etc...</li>
</ol>
<p>In the third one, the function is a matrix <span class="math inline">\(A\)</span> multiplied on the right by a column vector <span class="math inline">\(\beta\)</span>, and <span class="math inline">\(\bar y \otimes \beta\)</span> is the tensor product which is a fancy way of writing <span class="math inline">\(\bar y \beta^T\)</span>. See <a href="https://github.com/mattjj/autodidact/blob/master/autograd/numpy/numpy_vjps.py">numpy_vjps.py</a> for the implementation in autodidact.</p>
<p>So, given a node say <span class="math inline">\(y = y(x_1, x_2, ..., x_n)\)</span>, and given the value of <span class="math inline">\(y\)</span>, <span class="math inline">\(x_{1 : n}\)</span> and <span class="math inline">\(\bar y\)</span>, rmAD computes the values of <span class="math inline">\(\bar x_{1 : n}\)</span> by using the Jacobians.</p>
<p>This is the gist of rmAD. It stores the values of each node in a forward pass, and computes the derivatives of each node exactly once in a backward pass.</p>
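<p>To make this concrete, here is a minimal scalar rmAD sketch in the spirit of autodidact: the forward pass records at each node its parents together with VJP closures, and the backward pass accumulates the bars. All names are mine, and the traversal assumes the graph is a tree (a general DAG needs a topological sort):</p>
<pre><code>import math

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # pairs (parent, vjp), with vjp: ybar -> xbar
        self.bar = 0.0          # accumulated derivative of L w.r.t. this node

    def __add__(self, other):
        return Var(self.value + other.value,
                   [(self, lambda ybar: ybar), (other, lambda ybar: ybar)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, lambda ybar: ybar * other.value),
                    (other, lambda ybar: ybar * self.value)])

def exp(x):
    y = math.exp(x.value)
    return Var(y, [(x, lambda ybar: y * ybar)])  # j(exp)(y, ybar, x) = y ybar

def backward(loss):
    loss.bar = 1.0
    stack = [loss]
    while stack:
        node = stack.pop()
        for parent, vjp in node.parents:
            parent.bar += vjp(node.bar)
            stack.append(parent)

# check on exp(w * a): dL/dw should equal a * exp(w * a)
w, a = Var(0.5), Var(2.0)
backward(exp(w * a))
print(w.bar, 2.0 * math.exp(1.0))  # both equal 2e</code></pre>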
<p>It is a nice exercise to derive the backpropagation in the fully connected feedforward neural networks (e.g. <a href="http://neuralnetworksanddeeplearning.com/chap2.html#the_four_fundamental_equations_behind_backpropagation">the one for MNIST in Neural Networks and Deep Learning</a>) using rmAD.</p>
<p>AD is an approach lying between the extremes of numerical approximation (e.g. finite difference) and symbolic evaluation. It uses exact formulas (VJP) at each elementary operation like symbolic evaluation, while evaluates each VJP numerically rather than lumping all the VJPs into an unwieldy symbolic formula.</p>
<p>Things to look further into: the higher-order functional currying form <span class="math inline">\(j: (x \to y) \to (y, \bar y, x) \to \bar x\)</span> begs for a functional programming implementation.</p>
<h1 id="updates-on-open-research">Updates on open research</h1>
<p><em>posts/2018-04-10-update-open-research.html, 2018-04-29</em></p>
<p>It has been 9 months since I last wrote about open (maths) research. Since then two things happened which prompted me to write an update.</p>
<p>As always I discuss open research only in mathematics, not because I think it should not be applied to other disciplines, but simply because I have neither experience nor sufficient interest in non-mathematical subjects.</p>
<p>First, I read about Richard Stallman, the founder of the free software movement, in <a href="http://shop.oreilly.com/product/9780596002879.do">his biography by Sam Williams</a> and his own collection of essays <a href="https://shop.fsf.org/books-docs/free-software-free-society-selected-essays-richard-m-stallman-3rd-edition"><em>Free software, free society</em></a>, from which I learned a bit more about the context and philosophy of free software and its relation to that of open source software. For anyone interested in open research, I highly recommend having a look at these two books. I am also reading Levy's <a href="http://www.stevenlevy.com/index.php/books/hackers">Hackers</a>, which documented the development of the hacker culture predating Stallman. I can see the connection of ideas from the hacker ethic to the free software philosophy and to the open source philosophy. My guess is that the software world is fortunate to have pioneers who advocated for various kinds of freedom and openness from the beginning, whereas for academia, which has a much longer history, credit protection has always been a bigger concern.</p>
<p>Also a month ago I attended a workshop called <a href="https://www.perimeterinstitute.ca/conferences/open-research-rethinking-scientific-collaboration">Open research: rethinking scientific collaboration</a>. That was the first time I met a group of people (mostly physicists) who also want open research to happen, and we had some stimulating discussions. Many thanks to the organisers at Perimeter Institute for organising the event, and special thanks to <a href="https://www.perimeterinstitute.ca/people/matteo-smerlak">Matteo Smerlak</a> and <a href="https://www.perimeterinstitute.ca/people/ashley-milsted">Ashley Milsted</a> for invitation and hosting.</p>
<p>From both of these I feel like I should write an updated post on open research.</p>
<h3 id="freedom-and-community">Freedom and community</h3>
<p>Ideals matter. Stallman's struggles stemmed from the frustration of being denied source code (a frustration I shared in academia, except that source code is replaced by maths knowledge), and revolved around two things that underlie the free software movement: freedom and community. That is, the freedom to use, modify and share a work, and by sharing, to help the community.</p>
<p>Likewise, as for open research, apart from the utilitarian view that open research is more efficient / harder for credit theft, we should not ignore the ethical aspect that open research is right and fair. In particular, I think freedom and community can also serve as principles in open research. One way to make this argument more concrete is to describe what I feel are the central problems: NDAs (non-disclosure agreements) and reproducibility.</p>
<p><strong>NDAs</strong>. It is assumed that when establishing a research collaboration, or just having a discussion, all those involved own the joint work in progress, and no one has the freedom to disclose any information e.g. intermediate results without getting permission from all collaborators. In effect this amounts to signing an NDA. NDAs are harmful because they restrict people's freedom from sharing information that can benefit their own or others' research. Considering that in contrast to the private sector, the primary goal of academia is knowledge but not profit, NDAs in research are unacceptable.</p>
<p><strong>Reproducibility</strong>. Research papers written down are not necessarily reproducible, even though they appear on peer-reviewed journals. This is because the peer-review process is opaque and the proofs in the papers may not be clear to everyone. To make things worse, there are no open channels to discuss results in these papers and one may have to rely on interacting with the small circle of the informed. One example is folk theorems. Another is trade secrets required to decipher published works.</p>
<p>I should clarify that freedom works both ways. One should have the freedom to disclose maths knowledge, but they should also be free to withhold any information that does not hamper the reproducibility of published works (e.g. results in ongoing research yet to be published), even though it may not be nice to do so when such information can help others with their research.</p>
<p>Similar to the solution offered by the free software movement, we need a community that promotes and respects free flow of maths knowledge, in the spirit of the <a href="https://www.gnu.org/philosophy/">four essential freedoms</a>, a community that rejects NDAs and upholds reproducibility.</p>
<p>Here are some ideas on how to tackle these two problems and build the community:</p>
<ol style="list-style-type: decimal">
<li>Free licensing. It solves NDA problem - free licenses permit redistribution and modification of works, so if you adopt them in your joint work, then you have the freedom to modify and distribute the work; it also helps with reproducibility - if a paper is not clear, anyone can write their own version and publish it. Bonus points with the use of copyleft licenses like <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Share-Alike</a> or the <a href="https://www.gnu.org/licenses/fdl.html">GNU Free Documentation License</a>.</li>
<li>A forum for discussions of mathematics. It helps solve the reproducibility problem - public interaction may help quickly clarify problems. By the way, Math Overflow is not a forum.</li>
<li>An infrastructure of mathematical knowledge. Like the GNU system, a mathematics encyclopedia under a copyleft license maintained in the Github-style rather than Wikipedia-style by a "Free Mathematics Foundation", and drawing contributions from the public (inside or outside of the academia). To begin with, crowd-source (again, Github-style) the proofs of say 1000 foundational theorems covered in the curriculum of a bachelor's degree. Perhaps start with taking contributions from people with some credentials (e.g. having a bachelor degree in maths) and then expand the contribution permission to the public, or taking advantage of existing corpus under free license like Wikipedia.</li>
<li>Citing with care: if a work is considered authoritative but you couldn't reproduce the results, whereas another paper which tries to explain or discuss similar results makes the first paper understandable to you, give both papers due attribution (something like: see [1], but I couldn't reproduce the proof in [1], and the proofs in [2] helped clarify it). No one should be offended if you say you cannot reproduce something - there may be causes on both sides, whereas citing [2] is fairer and helps readers with a similar background.</li>
</ol>
<h3 id="tools-for-open-research">Tools for open research</h3>
<p>The open research workshop revolved around how to lead academia towards a more open culture. There were discussions on open research tools, improving credit attributions, the peer-review process and the path to adoption.</p>
<p>During the workshop many efforts for open research were mentioned, and afterwards I was also made aware by more of them, like the following:</p>
<ul>
<li><a href="https://osf.io">OSF</a>, an online research platform. It has a clean and simple interface with commenting, wiki, citation generation, DOI generation, tags, license generation etc. Like Github it supports private and public repositories (but defaults to private), version control, with the ability to fork or bookmark a project.</li>
<li><a href="https://scipost.org/">SciPost</a>, physics journals whose peer review reports and responses are public (peer-witnessed refereeing), and allows comments (post-publication evaluation). Like arXiv, it requires some academic credential (PhD or above) to register.</li>
<li><a href="https://knowen.org/">Knowen</a>, a platform to organise knowledge in directed acyclic graphs. Could be useful for building the infrastructure of mathematical knowledge.</li>
<li><a href="https://fermatslibrary.com/">Fermat's Library</a>, the journal club website that crowd-annotates one notable paper per week released a Chrome extension <a href="https://fermatslibrary.com/librarian">Librarian</a> that overlays a commenting interface on arXiv. As an example Ian Goodfellow did an <a href="https://fermatslibrary.com/arxiv_comments?url=https://arxiv.org/pdf/1406.2661.pdf">AMA (ask me anything) on his GAN paper</a>.</li>
<li><a href="https://polymathprojects.org/">The Polymath project</a>, the famous massive collaborative mathematical project. Not exactly new, the Polymath project is the only open maths research project that has gained some traction and recognition. However, it does not have many active projects (<a href="http://michaelnielsen.org/polymath1/index.php?title=Main_Page">currently only one active project</a>).</li>
<li><a href="https://stacks.math.columbia.edu/">The Stacks Project</a>. I was made aware of this project by <a href="https://people.kth.se/~yitingl/">Yiting</a>. Its data is hosted on github and accepts contributions via pull requests and is licensed under the GNU Free Documentation License, ticking many boxes of the free and open source model.</li>
</ul>
<h3 id="an-anecdote-from-the-workshop">An anecdote from the workshop</h3>
<p>In a conversation during the workshop, one of the participants called open science "normal science", because reproducibility, open access, collaborations, and fair attributions are all what science is supposed to be, and practices like treating the readers as buyers rather than users should be called "bad science", rather than "closed science".</p>
<p>To which an organiser replied: maybe we should rename the workshop "Not-bad science".</p>
<h1 id="the-mathematical-bazaar">The Mathematical Bazaar</h1>
<p><em>posts/2017-08-07-mathematical_bazaar.html, 2017-08-07</em></p>
<p>In this essay I describe some problems in academia of mathematics and propose an open source model, which I call open research in mathematics.</p>
<p>This essay is a work in progress - comments and criticisms are welcome! <a href="#fn1" class="footnoteRef" id="fnref1"><sup>1</sup></a></p>
<p>Before I start I should point out that</p>
<ol style="list-style-type: decimal">
<li>Open research is <em>not</em> open access. In fact the latter is a prerequisite to the former.</li>
<li>I am not proposing to replace the current academic model with the open model - I know academia works well for many people and I am happy for them, but I think an open research community is long overdue since the wide adoption of the World Wide Web more than two decades ago. In fact, I fail to see why an open model can not run in tandem with the academia, just like open source and closed source software development coexist today.</li>
</ol>
<h2 id="problems-of-academia">problems of academia</h2>
<p>Open source projects are characterised by publicly available source codes as well as open invitations for public collaborations, whereas closed source projects do not make source codes accessible to the public. How about mathematical academia then, is it open source or closed source? The answer is neither.</p>
<p>Compared to some other scientific disciplines, mathematics does not require expensive equipment or resources to replicate results; compared to programming in the conventional software industry, mathematical findings are not meant to be commercial, as credits and reputation rather than money are the direct incentives (even though the former are commonly used to trade for the latter). It is also a custom and common belief that mathematical derivations and theorems shouldn't be patented. Because of this, mathematical research is an open source activity in the sense that proofs to new results are all available in papers, and thanks to open access e.g. the arXiv preprint repository most of the new mathematical knowledge is accessible for free.</p>
<p>Then why, you may ask, do I claim that maths research is not open sourced? Well, this is because 1. mathematical arguments are not easily replicable and 2. mathematical research projects are mostly not open for public participation.</p>
<p>Compared to computer programs, mathematical arguments are not written in an unambiguous language, and they are terse and not written in maximum verbosity (this is especially true in research papers as journals encourage limiting the length of submissions), so the understanding of a proof depends on whether the reader is equipped with the right background knowledge, and the completeness of a proof is highly subjective. More generally speaking, computer programs are mostly portable because all machines with the correct configurations can understand and execute a piece of program, whereas humans are subject to their environment, upbringings, resources etc. to have a brain ready to comprehend a proof that interests them. (These barriers are softer than the expensive equipment and resources in other scientific fields mentioned before, because it is all about having access to the right information.)</p>
<p>On the other hand, as far as the pursuit of reputation and prestige (which can be used to trade for the scarce resource of research positions and grant money) goes, there is often little practical motivation for career mathematicians to explain their results to the public carefully. And so the weird reality of the mathematical academia is that it is not an uncommon practice to keep trade secrets in order to protect one's territory and maintain a monopoly. This is doable because as long as a paper passes the opaque and sometimes political peer review process and is accepted by a journal, it is considered work done, accepted by the whole academic community and adds to the reputation of the author(s). Just like in the software industry, trade secrets and monopoly hinder the development of research as a whole, as well as demoralise outsiders who are interested in participating in related research.</p>
<p>Apart from trade secrets and territoriality, another reason for the absence of an open research community is an elitist tradition in mathematical academia, which goes as follows:</p>
<ul>
<li>Whoever is not good at mathematics or does not possess a degree in maths is not eligible to do research, or else they run a high risk of being labelled a crackpot.</li>
<li>Mistakes made by established mathematicians are more tolerable than those made by less established ones.</li>
<li>Good mathematical writing should be deep, and expositions of non-original results are viewed as inferior work that does not add to (and in some cases may even damage) one's reputation.</li>
</ul>
<p>All these customs potentially discourage public participation in mathematical research, and I do not see them going away easily unless an open source community gains momentum.</p>
<p>To solve the above problems, I propose an open source model of mathematical research, which has high levels of openness and transparency and also has some added benefits listed in the last section of this essay. This model tries to achieve two major goals:</p>
<ul>
<li>Open and public discussion of, and collaboration on, mathematical research projects online</li>
<li>Open review to validate results, where author names, reviewer names, comments and responses are all publicly available online.</li>
</ul>
<p>To this end, a Github model is fitting. Let me first describe how open source collaboration works on Github.</p>
<h2 id="open-source-collaborations-on-github">open source collaborations on Github</h2>
<p>On <a href="https://github.com">Github</a>, every project is publicly available in a repository (we do not consider private repos). The owner can update the project by "committing" changes, each of which includes a message describing what has been changed, the author of the changes and a timestamp. Each project has an issue tracker, which is basically a discussion forum about the project, where anyone can open an issue (start a discussion), and the owner of the project as well as the original poster of the issue can close it once it is resolved, e.g. a bug is fixed, a feature is added, or it is out of the scope of the project. Closing the issue is like ending the discussion, except that the thread remains open to more posts from anyone interested. People can react to each issue post, e.g. upvote, downvote, celebration, and importantly, all the reactions are public too, so you can find out who upvoted or downvoted your post.</p>
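<p>(As an aside, everything above is scriptable: Github exposes repositories and issue trackers through a REST API. Below is a minimal Python sketch of opening an issue, assuming the <code>requests</code> library; the owner, repository, token and issue text are all placeholders.)</p>
<pre><code># A minimal sketch: opening an issue via Github's REST API.
# OWNER, REPO and TOKEN are placeholders, not real values.
import requests

OWNER, REPO = "someone", "some-project"
TOKEN = "a-personal-access-token"

resp = requests.post(
    "https://api.github.com/repos/{}/{}/issues".format(OWNER, REPO),
    headers={"Authorization": "token " + TOKEN},
    json={"title": "Possible gap in the proof of Lemma 3",
          "body": "The inequality at the end of Section 2 seems to "
                  "require x > 0; am I missing something?"},
)
resp.raise_for_status()
print("Opened issue number", resp.json()["number"])
</code></pre>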
<p>When one is interested in contributing code to a project, they fork it, i.e. make a copy of the project, and make the changes they like in the fork. Once they are happy with the changes, they submit a pull request to the original project. The owner of the original project may accept or reject the request, and they can comment on the code in the pull request, asking for clarification, pointing out problematic parts of the code etc., and the author of the pull request can respond to the comments. Anyone, not just the owner, can participate in this review process, turning it into a public discussion. In fact, a pull request is a special kind of issue thread. Once the owner is happy with the pull request, they accept it and the changes are merged into the original project. The author of the changes then shows up in the commit history of the original project, so they get the credit.</p>
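<p>(The fork and pull request cycle can be driven through the same REST API; a hedged Python sketch with placeholder owner, repository, branch and token names, assuming the changes have already been pushed to a branch of the fork:)</p>
<pre><code># A sketch of the fork / pull request cycle via Github's REST API.
# All names below are placeholders.
import requests

API = "https://api.github.com"
HEADERS = {"Authorization": "token a-personal-access-token"}

# Fork the original repository into your own account.
requests.post(API + "/repos/someone/some-project/forks", headers=HEADERS)

# After committing and pushing to the branch "fix-lemma-3" of the fork,
# open a pull request against the original project.
resp = requests.post(
    API + "/repos/someone/some-project/pulls",
    headers=HEADERS,
    json={"title": "Fix the constant in Lemma 3",
          "head": "yourname:fix-lemma-3",  # branch in the fork
          "base": "master",                # branch in the original
          "body": "See the discussion in issue #1."},
)
print(resp.json()["html_url"])
</code></pre>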
<p>As an alternative to forking, if one is interested in a project but has a different vision, or the maintainer has stopped working on it, they can clone it and make their own version. This is a more independent kind of fork, because there is no longer an intention to contribute back to the original project.</p>
<p>Moreover, on Github there is no way to send private messages, which forces people to interact publicly. If, say, you want someone to see and reply to your comment in an issue post or pull request, you simply mention them by <code>@someone</code>.</p>
<h2 id="open-research-in-mathematics">open research in mathematics</h2>
<p>All this points to a promising direction for open research. A maths project may have a wiki / collection of notes, the paper being written, computer programs implementing the results etc. The issue tracker can serve as a discussion forum about the project as well as a platform for open review (bugs are analogous to mistakes, enhancements are possible ways of improving the main results etc.), and anyone can make their own version of the project and (optionally) contribute back by making pull requests, which will also be openly reviewed. One may want to add an extra "review this project" functionality, so that people can comment on the original project as they do in a pull request. This may or may not be necessary, as anyone can make comments or point out mistakes in the issue tracker.</p>
<p>One may doubt this model over concerns about credit, since work in progress is available to anyone. Well, since all contributions are trackable in the project commit history and in the public discussions in issues and pull request reviews, there is in fact <em>less</em> room for cheating than in the current academic model, where scooping can happen without any witnesses. What we need is a platform with a good amount of trust, like arXiv, so that the open research community honours (and cannot ignore) the commit history, and the chance of mis-attribution is reduced to a minimum.</p>
<p>Compared to the academic model, open research also has the following advantages:</p>
<ul>
<li>Anyone in the world with Internet access will have a chance to participate in research, whether or not they are affiliated with a university, have the financial means to attend conferences, or are colleagues of one of the handful of experts in a specific field.</li>
<li>The problem of replicating / understanding maths results will be solved, as people help each other out. This will also remove the burden of answering queries about one's research. For example, say one has a project "Understanding the fancy results in [paper name]": they write up some initial notes but get stuck understanding certain arguments. In this case they can simply post the questions on the issue tracker, and anyone who knows the answer, or just has a speculation, can participate in the discussion. In the end the problem may be resolved without bothering the authors of the paper, who may be too busy to answer.</li>
<li>Similarly, the burden of peer review can also be shifted from a few appointed reviewers to the crowd.</li>
</ul>
<h2 id="related-readings">related readings</h2>
<ul>
<li><a href="http://www.catb.org/esr/writings/cathedral-bazaar/">The Cathedral and the Bazaar by Eric Raymond</a></li>
<li><a href="http://michaelnielsen.org/blog/doing-science-online/">Doing science online by Michael Nielsen</a></li>
<li><a href="https://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/">Is massively collaborative mathematics possible? by Timothy Gowers</a></li>
</ul>
<div class="footnotes">
<hr />
<ol>
<li id="fn1"><p>Please send your comments to my email address - I am still looking for ways to add a comment functionality to this website.<a href="#fnref1">↩</a></p></li>
</ol>
</div>
Open mathematical research and launching toywikiposts/2017-04-25-open_research_toywiki.html2017-04-25T00:00:00ZYuchen Pei<p>As an experimental project, I am launching toywiki.</p>
<p>It hosts a collection of my research notes.</p>
<p>It takes some ideas from the open source culture and applies them to mathematical research:</p>
<ol style="list-style-type: decimal">
<li>It uses a very permissive license (CC-BY-SA). For example, anyone can fork the project and make their own version if they have a different vision and want to build upon the project.</li>
<li>All edits will be done with maximum transparency, and discussions of any of the notes should also be as public as possible (e.g. Github issues).</li>
<li>Anyone can suggest changes by opening issues and submitting pull requests.</li>
</ol>
<p>Here are the links: <a href="http://toywiki.xyz">toywiki</a> and <a href="https://github.com/ycpei/toywiki">github repo</a>.</p>
<p>Feedback is welcome by email.</p>
A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymerposts/2016-10-13-q-robinson-schensted-knuth-polymer.html2016-10-13T00:00:00ZYuchen Pei<p>(Latest update: 2017-01-12) In <a href="http://arxiv.org/abs/1504.00666">Matveev-Petrov 2016</a> a \(q\)-deformed Robinson-Schensted-Knuth algorithm (\(q\)RSK) was introduced. In this article we give reformulations of this algorithm in terms of the Noumi-Yamada description, growth diagrams and local moves. We show that the algorithm is symmetric, namely that the output tableau pair is swapped in distribution when the input matrix is transposed. We also formulate a \(q\)-polymer model based on the \(q\)RSK and prove the corresponding Burke property, which we use to show a strong law of large numbers for the partition function given stationary boundary conditions and \(q\)-geometric weights. We use the \(q\)-local moves to define a generalisation of the \(q\)RSK taking a Young-diagram-shaped array as input. We write down the joint distribution of partition functions in the space-like direction of the \(q\)-polymer in a \(q\)-geometric environment, formulate a \(q\)-version of the multilayer polynuclear growth model (\(q\)PNG) and write down the joint distribution of the \(q\)-polymer partition functions at a fixed time.</p>
<p>This article is available at <a href="https://arxiv.org/abs/1610.03692">arXiv</a>. It seems to me that one difference between arXiv and Github is that on arXiv each preprint has only a few versions, whereas on Github many projects have a "dev" branch hosting continuous updates, with the master branch being where the stable releases live.</p>
<p><a href="%7B%7B%20site.url%20%7D%7D/assets/resources/qrsklatest.pdf">Here</a> is a "dev" version of the article, which I shall push to arXiv when it stabilises. Below is the changelog.</p>
<ul>
<li>2017-01-12: Typos and grammar, arXiv v2.</li>
<li>2016-12-20: Added remarks on the geometric \(q\)-pushTASEP. Added remarks on the converse of the Burke property. Added natural language description of the \(q\)RSK. Fixed typos.</li>
<li>2016-11-13: Fixed some typos in the proof of Theorem 3.</li>
<li>2016-11-07: Fixed some typos. The \(q\)-Burke property is now stated in a more symmetric way, so is the law of large numbers Theorem 2.</li>
<li>2016-10-20: Fixed a few typos. Updated some references. Added a reference: <a href="http://web.mit.edu/~shopkins/docs/rsk.pdf">a set of notes titled "RSK via local transformations"</a>. It was written by <a href="http://web.mit.edu/~shopkins/">Sam Hopkins</a> in 2014 as an expository article based on MIT combinatorics preseminar presentations of Alex Postnikov. It contains an idea (applying local moves to a general Young-diagram-shaped array in an order that matches any growth sequence of the underlying Young diagram) which I thought I was the first to write down.</li>
</ul>
AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieuposts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html2015-07-15T00:00:00ZYuchen Pei<p>A Macdonald superpolynomial (introduced in [O. Blondeau-Fournier et al., Lett. Math. Phys. <span class="bf">101</span> (2012), no. 1, 27–47; <a href="http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&s1=2935476&loc=fromrevtext">MR2935476</a>; J. Comb. <span class="bf">3</span> (2012), no. 3, 495–561; <a href="http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&s1=3029444&loc=fromrevtext">MR3029444</a>]) in \(N\) Grassmannian variables indexed by a superpartition \(\Lambda\) is said to be stable if \({m (m + 1) \over 2} \ge |\Lambda|\) and \(N \ge |\Lambda| - {m (m - 3) \over 2}\), where \(m\) is the fermionic degree. A stable Macdonald superpolynomial (corresponding to a bisymmetric polynomial) is also called a double Macdonald polynomial (dMp). The main result of this paper is the factorisation of a dMp into plethysms of two classical Macdonald polynomials (Theorem 5). Based on this result, this paper</p>
<ol style="list-style-type: decimal">
<li><p>shows that the dMp has a unique decomposition into bisymmetric monomials;</p></li>
<li><p>calculates the norm of the dMp;</p></li>
<li><p>calculates the kernel of the Cauchy-Littlewood-type identity of the dMp;</p></li>
<li><p>shows the specialisation of the aforementioned factorisation to the Jack, Hall-Littlewood and Schur cases. One of the three Schur specialisations, denoted as \(s_{\lambda, \mu}\), also appears in (7) and (9) below;</p></li>
<li><p>defines the \(\omega\)-automorphism in this setting, which was used to prove an identity involving products of four Littlewood-Richardson coefficients;</p></li>
<li><p>shows an explicit evaluation of the dMp motivated by the most general evaluation of the usual Macdonald polynomials;</p></li>
<li><p>relates dMps with the representation theory of the hyperoctahedral group \(B_n\) via the double Kostka coefficients (which are defined as the entries of the transition matrix from the bisymmetric Schur functions \(s_{\lambda, \mu}\) to the modified dMps);</p></li>
<li><p>shows that the double Kostka coefficients have the positivity and the symmetry property, and can be written as sums of products of the usual Kostka coefficients;</p></li>
<li><p>defines an operator \(\nabla^B\) as an analogue of the nabla operator \(\nabla\) introduced in [F. Bergeron and A. M. Garsia, in <em>Algebraic methods and \(q\)-special functions</em> (Montréal, QC, 1996), 1–52, CRM Proc. Lecture Notes, 22, Amer. Math. Soc., Providence, RI, 1999; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&pg1=MR&s1=1726826&loc=fromrevtext">MR1726826</a>]. The action of \(\nabla^B\) on the bisymmetric Schur function \(s_{\lambda, \mu}\) yields the dimension formula \((h + 1)^r\) for the corresponding representation of \(B_n\), where \(h\) and \(r\) are the Coxeter number and the rank of \(B_n\), in the same way that the action of \(\nabla\) on the \(n\)th elementary symmetric function leads to the same formula for the group of type \(A_n\).</p></li>
</ol>
<p>Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3306078, its copyright owned by the AMS.</p>
On a causal quantum double product integral related to Lévy stochastic area.posts/2015-07-01-causal-quantum-product-levy-area.html2015-07-01T00:00:00ZYuchen Pei<p>In <a href="https://arxiv.org/abs/1506.04294">this paper</a> with <a href="http://homepages.lboro.ac.uk/~marh3/">Robin</a> we study the family of causal double product integrals \[ \prod_{a < x < y < b}\left(1 + i{\lambda \over 2}(dP_x dQ_y - dQ_x dP_y) + i {\mu \over 2}(dP_x dP_y + dQ_x dQ_y)\right) \]</p>
<p>where <span class="math inline">\(P\)</span> and <span class="math inline">\(Q\)</span> are the mutually noncommuting momentum and position Brownian motions of quantum stochastic calculus. The evaluation is motivated heuristically by approximating the continuous double product by a discrete product in which infinitesimals are replaced by finite increments. The latter is in turn approximated by the second quantisation of a discrete double product of rotation-like operators in different planes due to a result in <a href="http://www.actaphys.uj.edu.pl/findarticle?series=Reg&vol=46&page=1851">(Hudson-Pei2015)</a>. The main problem solved in this paper is the explicit evaluation of the continuum limit <span class="math inline">\(W\)</span> of the latter, and showing that <span class="math inline">\(W\)</span> is a unitary operator. The kernel of <span class="math inline">\(W\)</span> is written in terms of Bessel functions, and the evaluation is achieved by working on a lattice path model and enumerating linear extensions of related partial orderings, where the enumeration turns out to be heavily related to Dyck paths and generalisations of Catalan numbers.</p>
AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemoreposts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html2015-05-30T00:00:00ZYuchen Pei<p>This paper is about the existence of pattern-avoiding infinite binary words, where the patterns are squares, cubes and \(3^+\)-powers. There are mainly two kinds of results: positive (existence of an infinite binary word avoiding a certain pattern) and negative (non-existence of such a word). Each positive result is proved by the construction of a word with finitely many squares and cubes, which are listed explicitly. First a synchronising (also known as comma-free) uniform morphism \(g \colon \Sigma_3^* \to \Sigma_2^*\) is constructed. Then an argument is given to show that the length of squares in the code \(g(w)\) for a squarefree \(w\) is bounded, hence all the squares can be obtained by examining \(g(s)\) for all \(s\) of bounded length. The argument resembles that of the proof of, e.g., Theorem 1, Lemma 2, Theorem 3 and Lemma 4 in [N. Rampersad, J. O. Shallit and M. Wang, Theoret. Comput. Sci. <strong>339</strong> (2005), no. 1, 19–34; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&pg1=MR&s1=2142071&loc=fromrevtext">MR2142071</a>]. The negative results are proved by traversing all possible finite words satisfying the conditions.</p>
<p>Let \(L(n_2, n_3, S)\) be the maximum length of a word with \(n_2\) distinct squares, \(n_3\) distinct cubes, and such that the periods of the squares can take values only in \(S\), where \(n_2, n_3 \in \Bbb N \cup \{\infty, \omega\}\) and \(S \subset \Bbb N_+\). Here \(n_k = 0\) corresponds to \(k\)-free, \(n_k = \infty\) means no restriction on the number of distinct \(k\)-powers, and \(n_k = \omega\) means \(k^+\)-free.</p>
<p>Below is the summary of the positive and negative results:</p>
<ol style="list-style-type: decimal">
<li><p>(Negative) \(L(\infty, \omega, 2 \Bbb N) < \infty\): \(\nexists\) an infinite \(3^+\)-free binary word avoiding all squares of odd periods (Proposition 1).</p></li>
<li><p>(Negative) \(L(\infty, 0, 2 \Bbb N + 1) \le 23\): \(\nexists\) an infinite 3-free binary word avoiding squares of even periods; the longest such word has length \(\le 23\) (Proposition 2).</p></li>
<li><p>(Positive) \(L(\infty, \omega, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even periods (Theorem 1).</p></li>
<li><p>(Positive) \(L(\infty, \omega, \{1, 3\}) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word containing only squares of period 1 or 3 (Theorem 2).</p></li>
<li><p>(Negative) \(L(6, 1, 2 \Bbb N + 1) = 57\): \(\nexists\) an infinite binary word avoiding squares of even period containing \(< 7\) squares and \(< 2\) cubes; the longest word containing 6 squares and 1 cube has length 57 (Proposition 6).</p></li>
<li><p>(Positive) \(L(7, 1, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even period with 1 cube and 7 squares (Theorem 3).</p></li>
<li><p>(Positive) \(L(4, 2, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even period and containing 2 cubes and 4 squares (Theorem 4).</p></li>
</ol>
<p>Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3313467, its copyright owned by the AMS.</p>
jstposts/2015-04-02-juggling-skill-tree.html2015-04-02T00:00:00ZYuchen Pei<p>jst = juggling skill tree</p>
<p>If you have ever played a computer role playing game, you may have noticed the protagonist sometimes has a skill "tree" (most of the time it is actually a directed acyclic graph), where certain skills lead to others. For example, <a href="http://hydra-media.cursecdn.com/diablo.gamepedia.com/3/37/Sorceress_Skill_Trees_%28Diablo_II%29.png?version=b74b3d4097ef7ad4e26ebee0dcf33d01">here</a> is the skill tree of the sorceress in <a href="https://en.wikipedia.org/wiki/Diablo_II">Diablo II</a>.</p>
<p>Now suppose our hero embarks on a quest to learn all the possible juggling patterns. Everyone would agree she should start with the cascade, the simplest nontrivial 3-ball pattern, but what comes afterwards? A few other accessible patterns for beginners are juggler's tennis, two in one and even reverse cascade, but what to learn after that? The encyclopaedic <a href="http://libraryofjuggling.com/">Library of Juggling</a> serves as a good guide, as it records more than 160 patterns, some of which are very aesthetically appealing. On this website almost all the patterns have a "prerequisite" section, indicating what one should learn beforehand. I have therefore written a script using <a href="http://python.org">Python</a>, <a href="http://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a> and <a href="http://pygraphviz.github.io/">pygraphviz</a> to generate a jst (graded by difficulties, which is the leftmost column) from the Library of Juggling; a sketch of the idea follows the image (click the image for the full size):</p>
<p><a href="../assets/resources/juggling.png"><img src="../assets/resources/juggling.png" alt="The juggling skill tree" style="width:38em" /></a></p>
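<p>For the curious, below is a minimal sketch of what the script does. The URL and the HTML selectors are assumptions for illustration rather than the exact ones used, so treat the snippet as a starting point:</p>
<pre><code># Sketch: scrape each pattern page for its "Prerequisites" links and
# build a directed graph (prerequisite to pattern) with pygraphviz.
# The URL and selectors below are illustrative placeholders.
import requests
import pygraphviz as pgv
from bs4 import BeautifulSoup

def prerequisites(url):
    """Return names of the patterns listed as prerequisites on a page."""
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    section = soup.find("div", class_="prerequisites")  # assumed class name
    return [a.text for a in section.find_all("a")] if section else []

patterns = {  # pattern name to page url; in reality scraped from an index
    "Cascade": "http://libraryofjuggling.com/Tricks/3balltricks/Cascade.html",
}

graph = pgv.AGraph(directed=True)
for name, url in patterns.items():
    for prereq in prerequisites(url):
        graph.add_edge(prereq, name)

graph.layout(prog="dot")   # requires graphviz to be installed
graph.draw("juggling.png")
</code></pre>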
Unitary causal quantum stochastic double products as universal interactions Iposts/2015-04-01-unitary-double-products.html2015-04-01T00:00:00ZYuchen Pei<p>In <a href="http://www.actaphys.uj.edu.pl/findarticle?series=Reg&vol=46&page=1851">this paper</a> with <a href="http://homepages.lboro.ac.uk/~marh3/">Robin</a> we show the explicit formulae for a family of unitary triangular and rectangular double product integrals which can be described as second quantisations.</p>
AMS review of 'A weighted interpretation for the super Catalan numbers' by Allen and Gheorghiciucposts/2015-01-20-weighted-interpretation-super-catalan-numbers.html2015-01-20T00:00:00ZYuchen Pei<p>The super Catalan numbers are defined as $$ T(m,n) = {(2 m)! \, (2 n)! \over 2 \, m! \, n! \, (m + n)!}. $$</p>
<p>This paper has two main results. First a combinatorial interpretation of the super Catalan numbers is given: $$ T(m,n) = P(m,n) - N(m,n) $$ where \(P(m,n)\) enumerates the number of 2-Motzkin paths whose \(m\)-th step begins at an even level (called \(m\)-positive paths) and \(N(m,n)\) those whose \(m\)-th step begins at an odd level (\(m\)-negative paths). The proof uses a recursive argument on the number of \(m\)-positive and -negative paths, based on a recursion of the super Catalan numbers appearing in [I. M. Gessel, J. Symbolic Comput. <strong>14</strong> (1992), no. 2-3, 179–194; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&pg1=MR&s1=1187230&loc=fromrevtext">MR1187230</a>]: $$ 4T(m,n) = T(m+1, n) + T(m, n+1). $$ This result gives an expression for the super Catalan numbers in terms of numbers counting the so-called ballot paths. The latter are sometimes also referred to as the generalised Catalan numbers forming the entries of the Catalan triangle.</p>
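<p>(A quick sanity check of the definition against this recursion, my own computation rather than the paper's: $$ T(1,1) = {2! \, 2! \over 2 \cdot 1! \, 1! \, 2!} = 1, \qquad T(2,1) = T(1,2) = {4! \, 2! \over 2 \cdot 2! \, 1! \, 3!} = 2, $$ so indeed \(4 \, T(1,1) = T(2,1) + T(1,2)\).)</p>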
<p>Based on the first result, the second result is a combinatorial interpretation of the super Catalan numbers \(T(2,n)\) in terms of counting certain Dyck paths. This is equivalent to a theorem in [I. M. Gessel and G. Xin, J. Integer Seq. <strong>8</strong> (2005), no. 2, Article 05.2.3, 13 pp.; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&pg1=MR&s1=2134162&loc=fromrevtext">MR2134162</a>] which represents \(T(2,n)\) as counting certain pairs of Dyck paths, and the equivalence is explained at the end of the paper by a bijection between the Dyck paths and the pairs of Dyck paths. The proof of the theorem itself is also done by constructing two bijections between Dyck paths satisfying certain conditions. All three bijections are formulated by locating, removing and adding steps.</p>
<p>Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3275875, its copyright owned by the AMS.</p>
Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithmsposts/2014-04-01-q-robinson-schensted-symmetry-paper.html2014-04-01T00:00:00ZYuchen Pei<p>In <a href="http://link.springer.com/article/10.1007/s10801-014-0505-x">this paper</a> a symmetry property analogous to the well known symmetry property of the normal Robinson-Schensted algorithm has been shown for the \(q\)-weighted Robinson-Schensted algorithm. The proof uses a generalisation of the growth diagram approach introduced by Fomin. This approach, which uses "growth graphs", can also be applied to a wider class of insertion algorithms which have a branching structure.</p>
<div class="figure">
<img src="../assets/resources/1423graph.jpg" alt="Growth graph of q-RS for 1423" />
<p class="caption">Growth graph of q-RS for 1423</p>
</div>
<p>Above is the growth graph of the \(q\)-weighted Robinson-Schensted algorithm for the permutation \({1 2 3 4\choose1 4 2 3}\).</p>
A \(q\)-weighted Robinson-Schensted algorithmposts/2013-06-01-q-robinson-schensted-paper.html2013-06-01T00:00:00ZYuchen Pei<p>In <a href="https://projecteuclid.org/euclid.ejp/1465064320">this paper</a> with <a href="http://www.bristol.ac.uk/maths/people/neil-m-oconnell/">Neil</a> we construct a \(q\)-version of the Robinson-Schensted algorithm with column insertion. Like the <a href="http://en.wikipedia.org/wiki/Robinson–Schensted_correspondence">usual RS correspondence</a> with column insertion, this algorithm can take words as input. Unlike the usual RS algorithm, the output is a set of weighted pairs of semistandard and standard Young tableaux \((P,Q)\) with the same shape. The weights are rational functions of an indeterminate \(q\).</p>
<p>If \(q\in[0,1]\), the algorithm can be considered as a randomised RS algorithm, with 0 and 1 being two interesting cases. When \(q\to0\), it reduces to the usual RS algorithm with column insertion mentioned above; when \(q\to1\), with proper scaling it should scale to the directed random polymer model in <a href="http://arxiv.org/abs/0910.0069">(O'Connell 2012)</a>. When the input word \(w\) is a random walk:</p>
<p>\begin{align*}\mathbb P(w=v)=\prod_{i=1}^na_{v_i},\qquad\sum_ja_j=1\end{align*}</p>
<p>the shape of the output evolves as a Markov chain whose kernel is related to the \(q\)-Whittaker functions, which are, up to a factor, the Macdonald functions with \(t=0\).</p>