Full RSA Proposal: "Multi-Dimensional Signed Representational Voxel Encoding (MS-ReVE) with Flow Mapping"
Abstract: This proposal outlines a comprehensive Representational Similarity Analysis (RSA) framework, "Multi-Dimensional Signed Representational Voxel Encoding (MS-ReVE)," designed to compute interpretable, signed voxel-wise contributions to multi-class neural representations. MS-ReVE extends standard RSA by: 1) defining model-relevant contrasts; 2) using multiple-regression RSA on cross-validated, noise-normalized condition means to determine the strength of these contrasts within local searchlights; 3) projecting condition means onto contrast axes to derive signed voxel contributions; and 4) combining these elements to produce robust voxel-weight maps. The framework incorporates advanced features including voxel reliability weighting, voxel-specific RDM reconstruction scores, mapping of interaction effects, and rigorous basis robustness checks. Crucially, MS-ReVE culminates in "Representational Flow Mapping" (RFM), a novel technique to visualize and quantify how these multi-contrast representations transform across the cortical surface (with volumetric extensions possible), revealing large-scale organizational principles and information processing dynamics. A complementary "Representational Cross-Talk" analysis further probes the spatial co-localization of different representational dimensions.
1. Introduction & Rationale:
Representational Similarity Analysis (RSA) has proven invaluable for linking brain activity to computational models and psychological theories. However, traditional RSA often yields searchlight-level summary statistics (e.g., RDM correlations) or classifier-based voxel weights that can be hard to interpret directly in terms of underlying neural coding, especially for designs with more than two conditions. There is a pressing need for methods that:
- Provide signed voxel weights indicating how individual voxels contribute to specific, theoretically meaningful representational dimensions (contrasts).
- Scale robustly beyond two experimental conditions.
- Are firmly anchored in RSA theory (distance-based, second-moment statistics) rather than relying solely on classification.
- Offer insights into the spatial organization and transformation of these representations across the brain.
MS-ReVE addresses these needs by integrating regression-RSA with voxel-level projections, enhanced with reliability measures, interaction analyses, and a novel flow-mapping visualization approach.
2. Research Questions & Hypotheses (Example-driven):
This framework can address a wide range of questions, such as:
- Which specific representational dimensions (e.g., animacy, object size, task rule) are encoded in a given brain region?
- How do individual voxels contribute (positively or negatively) to the encoding of these distinct dimensions?
- Are there voxels that reliably contribute to specific dimensions, and how does this reliability vary spatially?
- How well does a voxel's multi-contrast contribution profile explain the local empirical representational geometry?
- Are there conjunctive codes, where voxels respond to interactions between representational dimensions?
- How sensitive are these voxel-level interpretations to the precise definition of theoretical contrasts?
- What is the large-scale spatial organization of these multi-dimensional representations across the cortex (e.g., gradients, boundaries)?
- How do these representations transform (e.g., rotate, scale, differentiate, integrate) along cortical pathways?
3. Methodology & Implementation Plan:
A. Preprocessing & Experimental Design:
- Data Acquisition: Standard fMRI data acquisition.
- Preprocessing: Standard fMRI preprocessing (motion correction, slice-time correction, spatial normalization/surface projection, temporal filtering). Crucially, include noise normalization/whitening of voxel time-series, typically using the residuals from a first-level GLM, to ensure that pattern analyses are not dominated by voxels with high noise variance.
- Experimental Design: Assumes a K-condition experimental design where K ≥ 2 (multi-way case emphasized), though the framework naturally includes the binary case.
B. Contrast Matrix Definition (C):
- A Priori Contrasts: Define a `K x Q` contrast matrix `C`, where each of the `Q` columns `c_q` represents a theoretically meaningful comparison or feature dimension (e.g., faces - houses, tools - animals, abstract - concrete).
  - Contrasts should be centered (the elements of each `c_q` sum to zero).
  - Ideally, contrasts should be made orthonormal (`CᵀC = I`) so that the `β` weights are independent.
- Data-Driven Contrasts (Alternative/Complementary):
  - Construct a model RDM based on theoretical predictions.
  - Apply classical Multidimensional Scaling (MDS) to the model RDM.
  - Use the first `Q` MDS embedding dimensions as columns of `C`.
  - This provides a data-driven way to define orthogonal axes capturing the model's primary representational structure.
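The centering and orthonormalization requirements above can be sketched in a few lines. This is an illustrative numpy version (the function name is ours, not part of any package); it centers each a-priori contrast and uses QR to orthonormalize, fixing the arbitrary QR sign flips by aligning each column with its raw counterpart:

```python
import numpy as np

def build_contrast_matrix(raw_contrasts):
    """Center each contrast column and orthonormalize via QR.

    raw_contrasts: K x Q array of a-priori contrast weights.
    Returns a K x Q matrix C with centered, orthonormal columns (C.T @ C = I).
    """
    C = np.asarray(raw_contrasts, dtype=float)
    C = C - C.mean(axis=0, keepdims=True)      # center: each column sums to zero
    Q_mat, _ = np.linalg.qr(C)                 # orthonormalize columns
    # QR may flip column signs arbitrarily; realign with the raw contrasts
    signs = np.sign(np.sum(Q_mat * C, axis=0))
    signs[signs == 0] = 1.0
    return Q_mat * signs

# Example: K = 4 conditions, two orthogonal contrasts
raw = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
C = build_contrast_matrix(raw)
assert np.allclose(C.T @ C, np.eye(2))
assert np.allclose(C.sum(axis=0), 0.0)
```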
C. Searchlight RSA Core Engine (Iterate across searchlights):
- Cross-Validated Condition Means (`Û`):
  - Within each searchlight (sphere of voxels `V`):
    - Estimate condition-specific activation patterns (`μ_k`) for each of the `K` conditions using a cross-validation scheme (e.g., leave-one-run-out, split-half). Employ methods like crossnobis to obtain unbiased estimates of pattern distinctness.
    - This results in a `K x V` matrix `Û` of cross-validated, noise-normalized condition means.
- Empirical Second-Moment Matrix (`Ĝ`):
  - Calculate the unbiased empirical second-moment matrix: `Ĝ = ÛÛᵀ`. This `K x K` matrix fully determines all Mahalanobis distances between conditions within the searchlight.
- Multiple-Regression RSA:
  - Vectorize the lower (or upper) triangle of `Ĝ` to form the dependent variable `y`.
  - For each contrast `c_q` in `C`, form a predictor RDM `R_q = c_q c_qᵀ`. Vectorize the lower triangle of each `R_q` to form the columns of the design matrix `X`.
  - Fit the multiple linear regression: `y = Xβ + ε`.
  - The resulting `β_q` coefficients indicate how strongly the geometry implied by contrast `c_q` is present in the local empirical geometry `Ĝ`.
  - Regularization (Optional): If `Q` is large or contrasts are correlated, use ridge regression (`β_ridge = (XᵀX + λI)⁻¹Xᵀy`) or elastic-net. Hyperparameters (`λ`, and `α` for elastic-net) should be chosen via nested cross-validation within the searchlight training data to avoid bias.
- Signed Voxel Contributions (`Δ_q`):
  - For each contrast `c_q`, project the cross-validated condition means `Û` onto it: `Δ_q = Ûᵀc_q`. This yields a `V`-dimensional vector where each element `Δ_{q,v}` represents voxel `v`'s signed contribution to contrast `q`.
  - Store both:
    - Raw `Δ_q`: preserves magnitude, reflecting the effect size of the contrast in voxel activity.
    - Normalized `~Δ_q = Δ_q / ||Δ_q||`: captures direction only.
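The searchlight core (steps C.2-C.4) reduces to a few linear-algebra operations. A minimal numpy sketch of the OLS case, assuming `U_hat` (the `K x V` cross-validated means) and a centered contrast matrix `C` are already in hand; the function name is illustrative:

```python
import numpy as np

def msreve_searchlight_core(U_hat, C):
    """Regression RSA plus signed voxel projections for one searchlight.

    U_hat : K x V cross-validated, noise-normalized condition means.
    C     : K x Q centered contrast matrix.
    Returns (beta, Delta): Q-vector of RSA weights and V x Q signed contributions.
    """
    K, V = U_hat.shape
    G = U_hat @ U_hat.T                           # K x K second-moment matrix
    tril = np.tril_indices(K, k=-1)               # lower triangle, off-diagonal
    y = G[tril]                                   # dependent variable
    # Predictor RDMs: one column per contrast, R_q = c_q c_q^T
    X = np.stack([np.outer(c, c)[tril] for c in C.T], axis=1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit of y = X beta
    Delta = U_hat.T @ C                           # V x Q signed voxel contributions
    return beta, Delta
```

Ridge regularization, as discussed above, would simply replace the `lstsq` call with the closed-form ridge solution.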
D. Voxel-Level Refinements & Metrics (within each searchlight):
- Voxel Reliability Weighting (`ρ_{q,v}`):
  - For each `Δ_{q,v}` estimate, compute its stability across cross-validation folds.
  - Define `ρ_{q,v} = 1 - Var_folds(Δ_{q,v}^{(fold)}) / (Var_folds(Δ_{q,v}^{(fold)}) + σ²_{noise,q,v})`, where `σ²_{noise,q,v}` (the expected variance of `Δ_{q,v}` under the null) is estimated from within-condition residual variances and contrast weights: `σ²_{noise,q,v} = (1/S) Σ_s Σ_k (w_{qk}² / n_k^{(s)}) σ_{k,v}²^{(s)}`.
  - Alternatively, use `ρ_{q,v} = 1 / (1 + SE(Δ_{q,v})²)` or a similar measure based on Walther et al. (2016).
- Voxel-Specific RDM Reconstruction Score (`r_v`):
  - For each voxel `v`, construct a predicted RDM based only on its signed contrast profile: `Ĝ^(v) = C diag(β_1Δ_{1,v}, ..., β_QΔ_{Q,v}) Cᵀ` (using raw `Δ`).
  - Calculate `r_v = corr(vec_lower(Ĝ^(v)), vec_lower(Ĝ_empirical))` as a measure of how crucial voxel `v`'s multi-contrast profile is for reconstructing the local empirical RDM.
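The reconstruction score `r_v` follows directly from the definitions above. An illustrative numpy sketch (names are ours), taking `beta` and `Delta` as produced by the searchlight regression:

```python
import numpy as np

def voxel_reconstruction_scores(G_emp, C, beta, Delta):
    """Per-voxel RDM reconstruction score r_v.

    G_emp : K x K empirical second-moment matrix.
    C     : K x Q contrast matrix.
    beta  : Q-vector of regression-RSA weights.
    Delta : V x Q signed voxel contributions.
    Returns a V-vector of correlations r_v.
    """
    K = G_emp.shape[0]
    tril = np.tril_indices(K, k=-1)
    g_emp = G_emp[tril]
    r = np.empty(Delta.shape[0])
    for v in range(Delta.shape[0]):
        # Voxel-specific predicted geometry: C diag(beta_q * Delta_{q,v}) C^T
        G_v = C @ np.diag(beta * Delta[v]) @ C.T
        r[v] = np.corrcoef(G_v[tril], g_emp)[0, 1]
    return r
```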
E. Generating Voxel-Weight Maps:
- Contrast-Specific Maps (set of Q maps):
  - `M_{q,v} = β_q * Δ_{q,v}` (magnitude-preserved signed contribution).
  - `~M_{q,v} = β_q * ~Δ_{q,v}` (direction-only, scaled by RSA fit).
  - `M_{q,v}_reliab = ρ_{q,v} * β_q * Δ_{q,v}` (reliability-weighted).
- Single Composite Map (optional): `w_v = Σ_q (β_q * ~Δ_{q,v})` (or use reliability-weighted terms). Represents the net pull of voxel `v` in the overall representational space. Note: if `C` is not orthogonal, the interpretation of `w_v` is less straightforward, as contributions are summed across potentially non-independent axes.
F. Extending to Interaction Effects:
- Define Interaction Contrasts: Create new columns in `C` by taking element-wise products of main-effect contrast columns (`c_{pq} = c_p ⊙ c_q`).
- Orthogonalize Expanded Matrix: Orthogonalize the expanded contrast matrix `C_exp = [C_main | C_interaction]` (e.g., using QR decomposition or Gram-Schmidt) to ensure the interaction `β`s are interpretable as unique contributions. Note: orthogonalization procedures may arbitrarily flip the sign of contrast vectors; after orthogonalizing, consider aligning the sign of each derived column (e.g., the interaction terms) with its primary parent component, or with its correlation with the raw `Δ` projection, for consistent interpretation.
- Re-run C.3 and C.4: Fit multiple-regression RSA with `C_exp` to obtain `β_{pq}`, and compute the interaction voxel contributions `Δ_{pq,v} = Ûᵀc̃_{pq}` (where `c̃_{pq}` is the orthogonalized interaction contrast).
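The expansion-plus-orthogonalization step can be sketched as follows (illustrative numpy; the sign realignment implements the note above about arbitrary QR sign flips):

```python
import numpy as np

def expand_with_interactions(C_main, pairs):
    """Append element-wise interaction contrasts and re-orthogonalize.

    C_main : K x Q matrix of (centered) main-effect contrasts.
    pairs  : list of (p, q) column-index pairs for interactions.
    Returns a K x (Q + len(pairs)) matrix with orthonormal columns, each
    sign-aligned with its raw (pre-orthogonalization) counterpart.
    """
    inter = np.stack([C_main[:, p] * C_main[:, q] for p, q in pairs], axis=1)
    inter = inter - inter.mean(axis=0, keepdims=True)   # keep contrasts centered
    C_exp_raw = np.hstack([C_main, inter])
    Q_mat, _ = np.linalg.qr(C_exp_raw)                  # Gram-Schmidt equivalent
    signs = np.sign(np.sum(Q_mat * C_exp_raw, axis=0))  # undo arbitrary sign flips
    signs[signs == 0] = 1.0
    return Q_mat * signs
```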
G. Aggregation & Statistical Inference:
- Searchlight Aggregation: Average the voxel weights (`M_{q,v}`, `w_v`, `r_v`, etc.) each voxel receives from all searchlights containing it. An RSA-weighted average (weighting by `β_q` or searchlight R²) can also be used.
- Permutation Testing: For voxel-wise significance testing, shuffle condition labels consistently across cross-validation folds and recompute the entire pipeline (from `Û` onwards) many times to build null distributions for `β_q`, `M_{q,v}`, `w_v`, `r_v`, etc. Apply appropriate cluster-correction methods. Note on memory: storing all intermediate `Δ` values across permutations can be memory-intensive. Consider strategies such as streaming permutations (recomputing `Δ` on the fly within the permutation loop) or writing intermediate searchlight results to disk if RAM is limited.
H. Representational Flow Mapping (RFM):
- Surface Projection: Project the voxel-wise multi-contrast loading vectors `m_v = [β_1Δ_{1,v}, ..., β_QΔ_{Q,v}]` (or reliability-weighted versions) to the nearest vertices on a cortical surface model (e.g., the subject's midthickness surface). This creates `Q` scalar maps `f_q(i)` on the surface.
- Tangential Gradient Estimation: For each surface map `f_q(i)`, compute its 2D tangential gradient `∇_T f_q(i)` at each vertex `i`, typically using filters approximating Gaussian derivatives to ensure robustness to high-frequency noise.
- Local PCA for Principal Flow:
  - In a moving geodesic window on the surface:
    - Stack all `Q` gradient vectors `∇_T f_q(i)` from all vertices within the window into a large matrix (effectively `(|WindowVertices| * Q)` rows x `2` columns).
    - Perform PCA on this matrix's `2 x 2` covariance to find the principal flow direction `e₁` (a 2D unit vector) and its associated eigenvalue `λ₁`.
- Streamline Visualization:
  - Draw streamlines (e.g., using Line Integral Convolution) along `e₁`.
  - Color each streamline by the contrast `q` whose gradient `∇_T f_q(i)` aligns best with `e₁` (i.e., `argmax_q |<∇_T f_q(i), e₁>|`), with sign indicating increase/decrease.
  - Modulate streamline opacity/thickness by `λ₁` (flow strength) and/or the underlying `ρ_{q,v}`.
- Analysis of Transformation Along Flow Lines:
  - Sample `m(s)` (the Q-dimensional vector of `βΔ` values) along streamlines.
  - Calculate directional derivatives (`dm/ds`) to assess the rate and dimensionality of change (via SVD).
  - Compare `m(s)` and `m(s+Δs)` to quantify representational rotation (angle change) and scaling (norm change).
  - Visualize these transformations (e.g., map rotation rate to streamline hue).
  - Use permutation testing to assess the significance of flow properties.
- Volumetric Extension (Optional): While surface-based RFM is often preferred for visualizing cortical organization, the core logic extends to 3D volume space by computing 3D gradients and performing PCA on the resulting `3 x 3` covariance matrix within volumetric searchlights. Visualization is more challenging but may be relevant for subcortical structures.
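The local-PCA step of RFM is a small eigendecomposition per window. An illustrative numpy sketch, assuming the tangential gradients for one geodesic window have already been stacked; we use the uncentered second moment of the gradients so that opposite-sign gradients reinforce rather than cancel (a modeling assumption, consistent with PCA over stacked gradient vectors):

```python
import numpy as np

def principal_flow(gradients):
    """Principal flow direction from stacked tangential gradients.

    gradients : (n_vertices * Q) x 2 array of 2D tangential gradient vectors
                collected within one geodesic window.
    Returns (e1, lam1): the leading eigenvector (unit 2-vector) and eigenvalue
    of the 2x2 gradient second-moment matrix.
    """
    G = np.asarray(gradients, dtype=float)
    cov = (G.T @ G) / len(G)                 # 2x2 second moment of gradients
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    e1 = eigvecs[:, -1]                      # principal flow direction
    lam1 = eigvals[-1]                       # flow strength
    return e1, lam1
```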
I. Robustness & Validation:
- Basis Robustness Checks:
  - Re-run key analyses (e.g., generating the `M_{q,v}` maps) with an alternative, plausible contrast matrix `C'` (e.g., MDS-derived if initially theory-driven, or vice versa).
  - Correlate the resulting voxel maps. Low correlations indicate basis-dependent interpretations.
  - Use diagnostics: compare searchlight R² for the different bases; run Canonical Correlation Analysis (CCA) between `C` and `C'`; assess the R² gain when using `[C | C']`; compare with non-linear/kernel RSA to probe model mismatch.
4. Data Analysis & Interpretation Strategy:
- `β_q` maps (from searchlight regression): indicate regions where the geometry predicted by contrast `q` is prevalent.
- `M_{q,v}` maps: reveal how individual voxels contribute (sign and magnitude) to each specific contrast `q`.
- `M_{q,v}_reliab` maps: highlight robust voxel contributions.
- `w_v` composite map: shows the net directional "pull" of voxels in the combined representational space.
- `r_v` maps: identify voxels whose multi-contrast tuning is critical for the local empirical geometry.
- Interaction maps (`β_{pq}Δ_{pq,v}`): uncover voxels involved in conjunctive coding.
- RFM visualizations: provide insights into large-scale topological organization, functional boundaries, and representational transformations across cortex. Analysis of `λ₁`, rotation, and scaling along flow lines will characterize the nature of these transformations.
- Representational Cross-Talk Analysis:
  - Compute the voxel-wise correlation between pairs of reliability-weighted contrast maps (`M_{q,v}_reliab` and `M_{p,v}_reliab`) across voxels within relevant brain regions or across the whole brain/surface.
  - High positive correlations suggest that shared neural populations contribute similarly to both contrasts.
  - High negative correlations suggest competitive coding or opponent populations.
  - Visualizing these correlation patterns (e.g., as a matrix, or by projecting strong correlations onto the brain) complements RFM by showing where different representational dimensions spatially co-localize or segregate.
- Group-Level Inference: To analyze results across participants, individual participant maps (`M_{q,v}`, `r_v`, RFM-derived metrics, etc.) should be aligned to a common space (e.g., MNI volume space or a surface template such as fsaverage). Standard group-level statistical approaches (e.g., mixed-effects models, t-tests on aggregated maps with appropriate permutation-based correction) can then be applied.
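The cross-talk computation itself is a single correlation matrix over map columns. A minimal numpy sketch (function name illustrative):

```python
import numpy as np

def cross_talk_matrix(M_reliab):
    """Representational cross-talk: pairwise voxel-wise map correlations.

    M_reliab : V x Q array; column q is the reliability-weighted map for
               contrast q over V voxels (within a region or the whole brain).
    Returns a Q x Q correlation matrix; off-diagonal entries quantify
    co-localization (positive) or opponency (negative) of the dimensions.
    """
    return np.corrcoef(np.asarray(M_reliab, dtype=float), rowvar=False)
```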
5. Expected Outcomes & Significance:
MS-ReVE will provide an unprecedentedly rich and interpretable view of distributed neural representations. Expected outcomes include:
- Detailed, signed voxel-level maps of multi-dimensional neural codes.
- Identification of robust and reliable voxel contributions.
- Discovery of conjunctive coding patterns.
- A quantitative understanding of how representations are organized and transform across cortical areas, linking local computations to large-scale network dynamics.
- This framework will significantly advance our ability to test nuanced theories of neural representation and bridge the gap between computational models and brain activity.
6. Potential Challenges & Mitigations:
- Computational Cost: Searchlight analyses, permutation testing, and RFM can be computationally intensive. Mitigation: Efficient coding, parallel processing, optimized algorithms.
- Interpretation Complexity: The wealth of generated maps requires careful interpretation. Mitigation: Clear guidelines, targeted research questions, development of standardized reporting.
- Choice of Contrasts: Results can be sensitive to `C`. Mitigation: explicit reporting of `C`, basis robustness checks, use of both theory-driven and data-driven contrasts.
- Multiple Comparisons: Extensive voxel-wise testing. Mitigation: rigorous permutation-based cluster-correction methods.
- Memory Usage: Especially during permutation testing. Mitigation: streaming computations, disk caching, efficient data structures (as noted in 3.G.2).
7. Implementation Plan (Conceptual - Language/Platform Agnostic):
The implementation will be modular:
- Module 1: Core RSA Engine:
  - Input: preprocessed fMRI data, condition labels, contrast matrix `C`, searchlight definitions.
  - Functions for: cross-validated mean estimation, `Ĝ` computation, multiple-regression RSA (with optional regularization), `Δ_q` calculation.
  - Output: `β_q` values per searchlight; raw `Δ_q` and `~Δ_q` vectors per searchlight.
- Module 2: Voxel-Level Metrics & Map Generation:
  - Input: outputs from Module 1.
  - Functions for: reliability (`ρ_{q,v}`) calculation, RDM reconstruction (`r_v`), map generation (`M_{q,v}`, `w_v`), searchlight aggregation.
- Module 3: Interaction Analysis:
  - Functions for: generating interaction contrasts, orthogonalizing `C_exp`, integrating with Modules 1 & 2 for interaction maps.
- Module 4: Statistical Inference & Aggregation:
  - Functions for: the permutation-testing framework (including memory-management considerations), cluster correction, group-level analysis preparation and execution.
- Module 5: Representational Flow Mapping (RFM):
  - Input: aggregated `β_qΔ_{q,v}` maps, cortical surface model (and/or volumetric data).
  - Functions for: surface projection (if applicable), gradient calculation (with smoothing options), local PCA (surface or volume), streamline generation, transformation analysis along streamlines.
  - Visualization tools (interfacing with existing surface/volume visualization libraries).
- Module 6: Cross-Talk & Diagnostics:
  - Functions for: basis robustness checks (CCA, R² comparisons), Representational Cross-Talk computation and visualization.
This modular design will facilitate development, testing, and future extensions. Each module will encapsulate specific mathematical operations and data transformations.
The rMVPA codebase summary clearly lays out the package's object-oriented structure, key functionalities, and dependencies, and it provides an excellent foundation for integrating the G-ReCa Phase 0 (and subsequently Phase 1) plan.
Based on that summary and the G-ReCa Phase 0 proposal, the integration and next steps can be conceptualized as follows.
Conceptual Integration of G-ReCa Phase 0 with rMVPA:
The core idea is to leverage rMVPA's existing capabilities for data handling, design specification, and potentially some pre-processing, and then build new modules or extend existing ones to perform the G-ReCa specific steps: MS-ReVE output generation, PCA pre-reduction, PPCA/lightweight manifold learning, ID estimation, and validation.
Proposed Workflow & rMVPA Integration Points:
- Data Preparation (leveraging rMVPA):
  - Dataset Creation (`mvpa_dataset`, `mvpa_surface_dataset`): use rMVPA to load and structure the fMRI data (volume or surface). This handles train/test splits if needed for initial pattern estimation.
  - Design Specification (`mvpa_design`): define the experimental conditions, blocking variables for cross-validation, etc., using rMVPA's design objects.
- MS-ReVE Output Generation (new module/extension):
  - This is the most significant new piece: we need a way to generate the `m_v` vectors (`[β₁Δ_{1,v}, ..., β_QΔ_{Q,v}]`).
  - Step 2a: Cross-Validated Condition Means (`Û`):
    - rMVPA's `mvpa_model` with a simple "model" (e.g., just averaging betas from a first-level GLM within each condition and cross-validation fold) could be adapted.
    - Alternatively, a new function might be needed that takes an `mvpa_dataset` and an `mvpa_design` (with a `cv_spec`) and returns the `K x V` matrix `Û` (cross-validated condition means for each voxel/vertex `V`).
  - Step 2b: Contrast Definition (`C`): user-defined outside rMVPA as a `K x Q` matrix.
  - Step 2c: Regression RSA (`β_q`):
    - This could potentially leverage `rsa_model` if adapted, or be a custom function. `rsa_model` currently seems focused on RDM-to-RDM regression; here we need to regress `vec_lower(ÛÛᵀ)` onto `vec_lower(c_q c_qᵀ)` for the `Q` contrasts. This might require a new `mvpa_mod_spec`, or a custom `process_roi` function for a searchlight approach if the `β_q` are to be searchlight-specific. For a whole-brain `m_v`, the `β_q` might instead be derived globally.
    - Decision point: are the `β_q` global or searchlight-specific when constructing `m_v`? The G-ReCa proposal implied searchlight aggregation, so the `β_q` would be local.
  - Step 2d: Voxel Contributions (`Δ_q = Ûᵀc_q`): a matrix multiplication.
  - Step 2e: Construct `m_v`: combine the local `β_q` and `Δ_q` for each voxel.
  - Output: a `NeuroVec` or `NeuroSurfaceVector` object containing the `m_v` vectors.
- Dimensionality Pre-Reduction (PCA - new or utility):
  - Input: the `m_v` `NeuroVec`/`NeuroSurfaceVector`.
  - rMVPA doesn't seem to have a dedicated top-level PCA function for this purpose, though `pcadist` implies PCA capability. A utility function using `stats::prcomp`, or a more scalable randomized PCA (e.g., from the `irlba` or `rsvd` packages), would be needed.
  - Output: PCA-reduced `m_v` matrix (`N_voxels` x ~256 components).
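The randomized pre-reduction logic (the R analogue would use `irlba` or `rsvd`, as noted above) can be sketched in plain numpy, following the Halko-style random range finder. Function and parameter names are illustrative:

```python
import numpy as np

def pca_prereduce(X, n_components=256, n_oversample=10, seed=0):
    """Randomized-SVD PCA pre-reduction (logic sketch).

    X : N_voxels x D matrix of MS-ReVE loading vectors.
    Returns an N_voxels x n_components matrix of PCA scores.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0, keepdims=True)           # center features
    k = min(n_components, min(Xc.shape))
    # Random range finder: project onto a small random subspace
    Omega = rng.normal(size=(Xc.shape[1], k + n_oversample))
    Q, _ = np.linalg.qr(Xc @ Omega)
    # SVD of the small projected matrix recovers the leading components
    B = Q.T @ Xc
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Xc @ Vt[:k].T                             # PCA scores
```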
- Phase 0 Manifold Learning (PPCA - new module):
  - Input: PCA-reduced `m_v` matrix.
  - Implement PPCA (e.g., via the EM algorithm, or by leveraging existing R packages such as `pcaMethods::ppca` if suitable and its uncertainty outputs are accessible).
  - Output: latent coordinates `z`, posterior covariance `Cov(z|m_v)`.
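As an alternative to EM, PPCA has a closed-form maximum-likelihood solution (Tipping & Bishop, 1999) that also yields the posterior uncertainty the plan calls for. An illustrative numpy sketch (names are ours; assumes `q < D`):

```python
import numpy as np

def ppca_fit(X, q):
    """Closed-form maximum-likelihood PPCA.

    X : N x D data matrix (e.g., PCA-reduced m_v vectors).
    q : latent dimensionality (must satisfy q < D).
    Returns (Z, post_cov, W, sigma2): N x q posterior-mean latents, the shared
    q x q posterior covariance Cov(z|x), the loading matrix, and noise variance.
    """
    N, D = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigendecomposition of the sample covariance
    evals, evecs = np.linalg.eigh(Xc.T @ Xc / N)
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean()                           # avg. discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    M = W.T @ W + sigma2 * np.eye(q)
    Minv = np.linalg.inv(M)
    Z = Xc @ W @ Minv                                   # posterior means E[z|x]
    post_cov = sigma2 * Minv                            # shared posterior covariance
    return Z, post_cov, W, sigma2
```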
- Intrinsic Dimensionality Estimation (new or utility):
  - Input: PCA-reduced `m_v` (or PPCA-whitened data).
  - Implement/wrap TwoNN and the Levina-Bickel MLE (e.g., using R packages such as `intrinsicDimension`, or custom code).
  - Use the scree plot from the PPCA likelihoods.
  - Output: ID estimates, plots.
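TwoNN in particular is compact enough to sketch directly. An illustrative brute-force numpy version (O(N²) distances; a KD-tree would be used at scale):

```python
import numpy as np

def twonn_id(X):
    """TwoNN intrinsic-dimension estimate (Facco et al., 2017) - minimal sketch.

    Uses mu_i = r2_i / r1_i, the ratio of each point's second to first
    nearest-neighbor distance; the maximum-likelihood estimate under the
    TwoNN model is d = N / sum(log mu_i).
    """
    X = np.asarray(X, dtype=float)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)           # exclude self-distances
    D.sort(axis=1)
    r1, r2 = D[:, 0], D[:, 1]
    mu = r2 / r1
    mu = mu[mu > 1.0]                     # drop exact ties/duplicates
    return len(mu) / np.sum(np.log(mu))
```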
- Validation (leveraging rMVPA utilities where possible, plus new code):
  - Reconstruction MSE: calculated from PPCA.
  - Trustworthiness/Continuity: use scikit-learn via `reticulate`, or find/implement R equivalents; rMVPA doesn't seem to have these directly.
  - External Gradient Correlations: standard R functions (`cor`).
  - Leave-One-Subject-Out: requires iterating the PPCA fitting and prediction steps.
- Output Storage & Visualization:
  - Store `z` and the uncertainty maps as `NeuroVec`/`NeuroSurfaceVector` for easy visualization with `neuroim2`/`neurosurf` tools.
  - Report generation (Markdown/Jupyter via R Markdown/`knitr`).
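If an R equivalent is implemented rather than calling scikit-learn through `reticulate`, the trustworthiness metric (Venna & Kaski) is small enough to write from its definition. An illustrative brute-force Python sketch of that definition:

```python
import numpy as np

def trustworthiness(X, Z, k=5):
    """Trustworthiness of an embedding: penalizes points that are k-NN in the
    embedding Z but not in the original space X. O(N^2) sketch; returns a
    value in (0, 1], higher is better.
    """
    N = len(X)

    def ranks(A):
        D = np.linalg.norm(A[:, None] - A[None, :], axis=-1)
        np.fill_diagonal(D, np.inf)        # self is never a neighbor
        order = np.argsort(D, axis=1)      # neighbors sorted by distance
        r = np.empty((N, N), dtype=int)
        r[np.arange(N)[:, None], order] = np.arange(1, N + 1)[None, :]
        return order, r

    orig_order, orig_rank = ranks(np.asarray(X, dtype=float))
    emb_order, _ = ranks(np.asarray(Z, dtype=float))
    penalty = 0.0
    for i in range(N):
        intruders = set(emb_order[i, :k]) - set(orig_order[i, :k])
        for j in intruders:                # k-NN in embedding, not in original
            penalty += orig_rank[i, j] - k
    return 1.0 - 2.0 * penalty / (N * k * (2 * N - 3 * k - 1))
```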
Key rMVPA Objects that Might be Extended or Reused:
- `mvpa_model_spec`: could we define a `g_reca_phase0_model_spec` that encapsulates the PPCA step and its parameters?
- `run_custom_regional`/`run_custom_searchlight`: these are very promising. The core MS-ReVE output generation (steps 2a-2e) could potentially be wrapped in a `custom_func` for either ROI or searchlight application. The `run_searchlight` machinery would handle the iteration and provide `sl_data` (voxel patterns within a sphere) to our custom function.
- `NeuroVec`/`NeuroSurfaceVector` (`nvec`, `nsvec`): the primary data containers for `m_v`, `z`, and the uncertainty maps.
Concrete Proposal for Initial Integration Steps (Focusing on MS-ReVE Output Generation):
Project: grecamvpa - G-ReCa Integration with rMVPA (Phase 0 Focus)
Module 1: MS-ReVE Output Generation
- `msreve_design` (new S3/S4 class):
  - Slots:
    - `mvpa_design`: the underlying `mvpa_des` object for condition/block info.
    - `contrast_matrix`: the user-defined `K x Q` matrix `C`.
    - `beta_estimation_method`: `char` (e.g., "global_rsa", "searchlight_rsa").
  - Purpose: encapsulates all necessary inputs for MS-ReVE.
- `compute_crossvalidated_means` (fun):
  - Input: `mvpa_dataset` (`ds`), `mvpa_design` (`des`), `cv_spec` (`cv`).
  - Process: iterates through the `cv_spec` folds. For each fold:
    - Identifies the training data for that fold.
    - Estimates condition means (`μ_k`) for all `K` conditions using the training data (e.g., simple averaging of voxel activities per condition, or betas from a simple GLM fit on training data).
    - Stores these means, associated with the test-fold conditions.
  - Output: a list structure or array containing `Û` (the `K x V` matrix of cross-validated condition means, where each row `k` is the mean for condition `k` estimated from data not including trials of condition `k` from the current test fold/run).
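The fold logic of `compute_crossvalidated_means` is language-agnostic; a minimal Python sketch of it (the rMVPA version would operate on `mvpa_dataset`/`mvpa_design` objects rather than raw arrays):

```python
import numpy as np

def crossvalidated_means(data, cond, fold):
    """Cross-validated condition means (logic sketch).

    data : T x V trial-by-voxel matrix.
    cond : length-T integer condition labels.
    fold : length-T integer fold labels (e.g., run numbers).
    Returns an F x K x V array: for each held-out fold f, the condition means
    estimated from all *other* folds.
    """
    conds, folds = np.unique(cond), np.unique(fold)
    U = np.full((len(folds), len(conds), data.shape[1]), np.nan)
    for fi, f in enumerate(folds):
        train = fold != f                       # leave fold f out
        for ki, k in enumerate(conds):
            sel = train & (cond == k)
            if sel.any():
                U[fi, ki] = data[sel].mean(axis=0)
    return U
```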
- `run_msreve_searchlight` (fun wrapping `run_custom_searchlight`):
  - Input: `mvpa_dataset` (`ds`), `msreve_design` (`msreve_des`), `radius` (`num`), `cv_spec` (`cv`).
  - Internal `custom_func` (`process_msreve_sphere`):
    - Receives `sl_data` (data for the current searchlight sphere) and `sl_info`.
    - Calls `compute_crossvalidated_means` on `sl_data`, using the provided `msreve_des$mvpa_design` and `cv_spec`, to get the local `Û_sl`.
    - Computes the local `Ĝ_sl = Û_sl Û_slᵀ`.
    - Performs multiple-regression RSA using `msreve_des$contrast_matrix` (`C`) to get the local `β_q_sl` vector (length `Q`).
    - Computes the local `Δ_q_sl = Û_slᵀ c_q` for each contrast `q`.
    - Constructs the local `m_v_sl` vector for the center voxel of the sphere: `m_v_center = [β_1_sl * Δ_1_center_sl, ..., β_Q_sl * Δ_Q_center_sl]`. (We need to decide whether `Δ_q` refers to the whole sphere or only the center voxel when forming `m_v`: the proposal's `Δ_q` is a `V`-dimensional vector, so `β_q * Δ_q` is too; for the Q-dimensional `m_v` per voxel, we would use `m_v[q] = β_q_sl * Δ_{q,center_voxel_sl}`.)
    - Returns this `Q`-dimensional `m_v_sl` vector for the center voxel.
  - Output: a `NeuroVec` or `NeuroSurfaceVector` where each voxel/vertex value is its `Q`-dimensional `m_v` vector (i.e., the output is effectively a `Q`-channel `NeuroVec`/`NeuroSurfaceVector`).
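The per-sphere computation inside `process_msreve_sphere` can be summarized in one short function. An illustrative numpy sketch of that logic (names are ours; the real implementation would receive `sl_data` from `run_custom_searchlight`):

```python
import numpy as np

def process_msreve_sphere(U_sl, C, center):
    """Per-searchlight m_v for the center voxel (logic sketch).

    U_sl   : K x V cross-validated condition means within the sphere.
    C      : K x Q centered contrast matrix.
    center : column index of the sphere's center voxel.
    Returns the Q-vector m_v[q] = beta_q_sl * Delta_{q, center_voxel_sl}.
    """
    K = U_sl.shape[0]
    tril = np.tril_indices(K, k=-1)
    y = (U_sl @ U_sl.T)[tril]                       # vec_lower(G_sl)
    X = np.stack([np.outer(c, c)[tril] for c in C.T], axis=1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # local beta_q
    Delta_center = U_sl[:, center] @ C              # Delta_{q, center}
    return beta * Delta_center
```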
Module 2: PPCA & ID Estimation (Can be more standalone R functions initially)
- Functions for PCA pre-reduction.
- Function for PPCA (EM algorithm or wrapper).
- Functions for TwoNN, Levina-Bickel MLE.
Module 3: Validation (Standalone R functions)
- Functions for Trustworthiness/Continuity (possibly via `reticulate`).
- Functions for correlation with external maps.
Phased Implementation Plan for grecamvpa (Phase 0):
- Develop `compute_crossvalidated_means`: this is foundational; test thoroughly.
- Develop the `msreve_design` object.
- Develop the `process_msreve_sphere` custom function:
  - First, implement the regression RSA and the `Δ_q` calculation.
  - Then, integrate with `compute_crossvalidated_means`.
  - Carefully define how `m_v` is constructed from the local `β_q` and `Δ_q`.
- Wrap `process_msreve_sphere` in `run_msreve_searchlight` using `run_custom_searchlight`: this generates the primary `m_v` maps.
- Implement PCA pre-reduction for the `m_v` maps.
- Implement the PPCA module.
- Implement the ID estimation module.
- Implement the validation metrics.
- End-to-end pipeline test and report generation.
This approach leverages rMVPA's strengths in data handling and searchlight iteration, while building the new MS-ReVE and manifold learning components in a modular way. The run_custom_searchlight function seems like a key enabler.