This method takes a symmetric "dissimilarity matrix" as input
MDS
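A minimal base R sketch (the dataset choice here is purely illustrative):
  d <- dist(scale(USArrests))    # symmetric dissimilarity matrix
  fit <- cmdscale(d, k = 2)      # classical (metric) MDS in 2 dimensions
  plot(fit, type = "n"); text(fit, labels = rownames(USArrests))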
TRUE or FALSE: An MDS solution is most informative when variables are roughly equally spaced from one another
FALSE
This type of dimension reduction best preserves the pairwise relationships (distances) between variables
MDS
This MDS transformation typically gives the lowest stress-1 value
Ordinal transformation (ordinal MDS)
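A hedged sketch assuming the smacof package (mds() and its $stress slot come from that package; the data are illustrative):
  library(smacof)
  d <- dist(scale(USArrests))
  fit_int <- mds(d, type = "interval")
  fit_ord <- mds(d, type = "ordinal")
  c(interval = fit_int$stress, ordinal = fit_ord$stress)   # ordinal is typically lowest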
Name 1 of the 2 base R functions for performing PCA
prcomp() or princomp()
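Both live in the stats package; a quick sketch on built-in data:
  p1 <- prcomp(USArrests, scale. = TRUE)    # SVD-based
  p2 <- princomp(USArrests, cor = TRUE)     # eigendecomposition-based
  summary(p1)   # proportion of variance explained per component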
It's the type of data needed to perform correspondence analysis
Frequency table
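A small sketch using MASS::corresp() on a built-in frequency table:
  library(MASS)
  corresp(caith, nf = 2)   # caith: hair colour x eye colour counts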
Name a visualization technique that accomplishes roughly the same purpose as MDS
Correlation heatmap, Scatterplot matrix, Network, Biplot
If you think that unique item variances amount to "error", you would want to use this type of dimension reduction
Factor analysis
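A minimal base R sketch with factanal() on built-in data (variable selection is illustrative); the uniquenesses are the unique variances factor analysis sets aside as "error":
  fa <- factanal(mtcars[, c("mpg", "disp", "hp", "wt", "qsec")], factors = 2)
  fa$loadings
  fa$uniquenesses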
In lieu of using a sum score, you can calculate a "scale score" using this method (Not IRT)
Principal components analysis (PCA)
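A one-line sketch (items_df is an illustrative name for your item data; the sign of the scores may need flipping):
  pc1 <- prcomp(items_df, scale. = TRUE)$x[, 1]   # first-component scores as a scale score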
With this function, you learned to simulate data using a covariance matrix (named "Sigma") and a vector of means
mvrnorm()
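From the MASS package; a sketch with an illustrative 2-variable Sigma:
  library(MASS)
  Sigma <- matrix(c(1, 0.5, 0.5, 1), nrow = 2)
  sim <- mvrnorm(n = 200, mu = c(0, 0), Sigma = Sigma)
  round(cov(sim), 2)   # should be close to Sigma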
Name the two most common formats for storing a network
Adjacency matrix
Edge list
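A sketch converting between the two formats with igraph:
  library(igraph)
  A <- matrix(c(0, 1, 1,
                1, 0, 0,
                1, 0, 0), nrow = 3)   # adjacency matrix
  g <- graph_from_adjacency_matrix(A, mode = "undirected")
  as_edgelist(g)                      # the same network as an edge list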
This type of plot helps you visualize the fit of an MDS transformation (e.g., interval, ordinal, spline)
Shepard plot
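Assuming a smacof fit (e.g., fit_ord from the sketch above), its plot method supports this directly:
  plot(fit_ord, plot.type = "Shepard")   # observed dissimilarities vs. fitted distances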
Just how well can a lower-dimensional space account for your higher-dimensional data?
With PCA you can tell using ___, with MDS you can tell using ___.
% variance explained; stress (stress-1)
This method takes an asymmetric dissimilarity matrix as input
Unfolding
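A hedged sketch assuming smacof's unfolding(), which takes a rectangular person x object dissimilarity matrix (the breakfast dataset name is assumed from that package's examples):
  library(smacof)
  data(breakfast)
  fit_unf <- unfolding(breakfast)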
This R network package isn't the best for visualization, but it has community-detection functions and is widely used for social network analysis.
igraph
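A sketch of community detection with igraph:
  library(igraph)
  g <- sample_gnp(30, 0.1)        # random graph, just for illustration
  comm <- cluster_walktrap(g)     # one of igraph's community-detection functions
  membership(comm)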
This data format has exactly one row per participant (and each participant appears in exactly one row)
Wide format
This geometric shape is helpful when visualizing the "confidence interval" of points in an MDS solution
Ellipse
This transformation, common in factor analysis, is often considered taboo in principal components analysis
Rotation
This network estimation method is ideal if you want to know unique variable-variable relationships, but you believe that the underlying 'true' model should be extremely dense
Partial correlation network (unregularized) / Concentration network
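A hedged sketch via bootnet's estimateNetwork() with its unregularized partial-correlation option (df is an illustrative name for your data frame):
  library(bootnet)
  net <- estimateNetwork(df, default = "pcor")
  plot(net)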
This magic argument allows you to specify which type of network you want in the bootnet() function
default
e.g., bootnet(...., default = "EBICglasso")
You could fit a "PARAFAC" (parallel factor analysis) model on this type of data structure
Array / tensor
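A hedged sketch assuming the multiway package (parafac() and its nfac argument come from that package; the array dimensions are illustrative):
  library(multiway)
  X <- array(rnorm(10 * 5 * 4), dim = c(10, 5, 4))   # persons x variables x occasions
  fit_pf <- parafac(X, nfac = 2)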
What two key pieces of information do the vectors tell you in a PCA biplot?
1. Variable loadings (the vector's x, y coordinates)
2. Approximate correlations between variables (the angle between vectors)
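Base R can draw this directly:
  pca <- prcomp(USArrests, scale. = TRUE)
  biplot(pca)   # arrows are variable loadings; small angles between arrows suggest high correlation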
Eigenvalue decomposition, used in PCA, decomposes the covariance matrix into two "parts". What is the interpretation of each part?
A "scale" part (eigenvalues) and a "direction" part (eigenvector)
This method reduces relational and individual high-dimensional space, but treats at least one dimension (mode) differently from the rest.
Three-way PCA
Given: rogersPCA <- prcomp(Rogers, scale. = TRUE)
What R code will give the variable loadings on (only) the first principal component?
rogersPCA$rotation[,1]