
Section: [[en:ordination]]

===== PCoA & NMDS (distance-based unconstrained ordination) =====


[[{|width: 7em; background-color: white; color: navy}pcoa_nmds_exercise|Exercise {{::lock-icon.png?nolink|}}]]

==== Principal Coordinates Analysis (PCoA) ====

This method is also known as MDS (Metric Multidimensional Scaling). While PCA preserves Euclidean distances among samples and CA preserves chi-square distances, PCoA provides a Euclidean representation of a set of objects whose relationships are measured by any dissimilarity index. Like PCA and CA, PCoA returns a set of orthogonal axes whose importance is measured by eigenvalues. Consequently, PCoA calculated on Euclidean distances among samples yields the same result as PCA calculated on the covariance matrix of the same dataset (if scaling 1 is used), and PCoA on chi-square distances yields results similar to CA (not identical, because CA applies weights in the calculation). If a non-metric (non-Euclidean) dissimilarity index is used, PCoA may produce axes with negative eigenvalues, which cannot be plotted. The solution is either to convert the non-metric dissimilarity index into a metric one (e.g. Bray-Curtis dissimilarity is non-metric, but becomes metric after square-root transformation) or to apply a specific correction (Lingoes or Cailliez). Since the PCoA algorithm is based on the matrix of dissimilarities between samples, species scores are not calculated; however, species can be projected onto the ordination diagram by weighted averaging or correlation, similarly to supplementary environmental variables.
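The PCoA algorithm itself is short: square the dissimilarities, double-centre the matrix, and eigen-decompose it. The sketch below is a minimal Python illustration of this idea (the original materials work in R with ''vegan''/''cmdscale''; the toy community table and the ''pcoa'' helper here are made up for demonstration, not taken from the page):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pcoa(dissim, tol=1e-10):
    """Classical PCoA: eigen-decomposition of the double-centred matrix."""
    d = np.asarray(dissim, dtype=float)
    n = d.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (d ** 2) @ J                # Gower double-centring
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1]           # largest eigenvalue first
    eigval, eigvec = eigval[order], eigvec[:, order]
    keep = eigval > tol                        # only positive (plottable) axes
    scores = eigvec[:, keep] * np.sqrt(eigval[keep])
    return scores, eigval

# toy community table: 4 sites x 3 species (illustrative data only)
X = np.array([[5., 0., 2.],
              [4., 1., 0.],
              [0., 6., 3.],
              [1., 5., 4.]])

# Bray-Curtis is non-metric and may yield negative eigenvalues;
# square-rooting the dissimilarities avoids this problem.
bc = squareform(pdist(X, metric='braycurtis'))
scores_bc, eig_bc = pcoa(np.sqrt(bc))
```

Running ''pcoa'' on plain Euclidean distances reproduces those distances exactly in the ordination space, which is the PCA equivalence mentioned above.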

==== Non-metric Multidimensional Scaling (NMDS) ====

Non-metric Multidimensional Scaling is a non-metric alternative to PCoA. It can use any dissimilarity measure among samples, and its main aim is to place samples in a low-dimensional ordination space (two or three axes) so that the Euclidean distances between samples correspond to the dissimilarities given by the original dissimilarity index. The method is non-metric because it does not use the raw dissimilarity values, but converts them into ranks and uses these ranks in the calculation. The algorithm is iterative: it starts from an initial configuration of samples in the ordination space and searches for the optimal final configuration by iteratively reshuffling the samples. Due to the iterative nature of the algorithm, each run may result in a different solution.


- Construct an initial configuration of all samples in //m// dimensions as a starting point of the iterative process. The result of the whole iteration procedure may depend on this step, so it is somewhat crucial: the initial configuration can be generated randomly, but a better option is to help it a bit, e.g. by using the results of a PCoA ordination on the same dissimilarity matrix as the starting positions.

- An iterative procedure reshuffles the objects in the given number of dimensions so that the real (Euclidean) distances among samples in the ordination space best reflect their compositional dissimilarity as measured by the chosen dissimilarity index. The fit between these two quantities is expressed by the so-called //stress value//: the lower the stress, the better.

- The algorithm stops when a new iteration can no longer lower the stress value - the solution has been reached.

- After the algorithm finishes, the final solution is rotated using PCA to ease its interpretation (that is why the final ordination diagram has ordination axes, even though the original algorithm does not produce any).
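The steps above can be sketched in a few lines of Python using scikit-learn's non-metric MDS (a sketch only - in R the usual tool is ''vegan::metaMDS'', which bundles these steps together; the toy data here are invented for illustration):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from sklearn.decomposition import PCA

# toy community table: 5 sites x 3 species (illustrative data only)
X = np.array([[5., 0., 2.],
              [4., 1., 0.],
              [0., 6., 3.],
              [1., 5., 4.],
              [2., 2., 2.]])
D = squareform(pdist(X, metric='braycurtis'))  # any dissimilarity index works

# iterative non-metric search in 2 dimensions; n_init random restarts
# guard against getting stuck in a local optimum
nmds = MDS(n_components=2, metric=False, dissimilarity='precomputed',
           n_init=10, random_state=0)
scores = nmds.fit_transform(D)

# rotate the final solution with PCA so axis 1 carries most variation
scores = PCA(n_components=2).fit_transform(scores)

print(nmds.stress_)  # the lower the stress, the better the fit
```

Because the search is iterative, a different ''random_state'' (or no fixed seed at all) may lead to a different final configuration.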


In terms of the algorithm, NMDS and PCoA have close to nothing in common. NMDS is an iterative method which may return a different solution upon re-analysis of the same data, while PCoA has a unique analytical solution. The number of ordination axes (dimensions) in NMDS can be fixed by the user, while in PCoA the number of axes is given by the properties of the dataset (number of samples). If the initial configuration of samples in the NMDS algorithm is produced by PCoA on the same matrix, the iterative NMDS algorithm may be seen as a way to further optimize the sample configuration so that more variation in species composition is represented by fewer ordination axes.
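This link between the two methods can be illustrated directly: the (unique) PCoA solution can be passed to NMDS as its starting configuration, so only the iterative refinement differs between runs. A minimal sketch, assuming scikit-learn; the ''pcoa_scores'' helper and the toy data are hypothetical, not from the original page:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

def pcoa_scores(D, k=2):
    """Classical PCoA scores on the first k axes (deterministic)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return v[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

# toy community table: 5 sites x 3 species (illustrative data only)
X = np.array([[5., 0., 2.],
              [4., 1., 0.],
              [0., 6., 3.],
              [1., 5., 4.],
              [2., 2., 2.]])
D = squareform(pdist(X, metric='braycurtis'))

init = pcoa_scores(D)                 # unique analytical starting layout
nmds = MDS(n_components=2, metric=False, dissimilarity='precomputed',
           n_init=1, random_state=0)
scores = nmds.fit_transform(D, init=init)  # NMDS refines the PCoA layout
```

With a fixed starting configuration the NMDS run is reproducible, which sidesteps the run-to-run variability of random starts.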

<imgcaption pcoa_nmds|Ordination diagrams of PCoA (left) and NMDS (right) calculated on the Bray-Curtis dissimilarity index (square-rooted to make it metric) using data from the Vltava river valley dataset. The classification of samples into one of the four vegetation groups (GROUP 1-4) is displayed by different colour and symbol of the individual site scores. Species are added to the ordination diagrams as weighted averages of their abundances in the sites; only species occurring in at least 20 sites are displayed.>{{:obrazky:pcoa_nmds.png?direct|}}</imgcaption>

en/pcoa_nmds.txt · Last modified: 2019/02/25 20:56 by David Zelený