diff --git a/README.md b/README.md
index c1b31fe..04b18c4 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ many conditions (or many effects on many outcomes). The methods use
 Empirical Bayes methods to estimate patterns of similarity among
 conditions, and then exploit those patterns of similarity among
 conditions to improve accuracy of effect estimates.
-See [Urbut et al][mashr-paper] for details of the model and methods.
+See [Urbut et al][mash-paper] for details of the model and methods.

 Note that this R package is a refactoring of the code originally used
 to generate the results for the manuscript. The original package code is
@@ -85,10 +85,10 @@ the man directory),
 + These are the R commands to build the website (make sure you are
   connected to Internet while running these commands):

-```R
-library(pkgdown)
-build_site(mathjax = FALSE)
-```
+  ```R
+  library(pkgdown)
+  pkgdown::build_site(mathjax = FALSE)
+  ```

 [cran-docs]: https://cran.r-project.org/manuals.html
 [mash-paper]: https://doi.org/10.1038/s41588-018-0268-8
diff --git a/docs/articles/eQTL_outline.html b/docs/articles/eQTL_outline.html
index 8a6498b..21fe8ca 100644
--- a/docs/articles/eQTL_outline.html
+++ b/docs/articles/eQTL_outline.html
@@ -134,9 +134,9 @@

U.c = cov_canonical(data.random)
 m = mash(data.random, Ulist = c(U.ed,U.c), outputlevel = 1)
#  - Computing 5000 x 241 likelihood matrix.
-#  - Likelihood calculations took 0.14 seconds.
+#  - Likelihood calculations took 0.13 seconds.
 #  - Fitting model with 241 mixture components.
-#  - Model fitting took 2.10 seconds.
+#  - Model fitting took 2.14 seconds.

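For context, the hunk above comes from the eQTL workflow in which mash is first fit to a random subset of tests and the fitted prior is then re-used on the strong subset. A minimal sketch, assuming the vignette's objects (data.random, data.strong, U.ed, U.c) already exist:

```R
library(mashr)

# Fit the mixture model on a random subset of tests; outputlevel = 1 skips
# the posterior computation at this stage.
m.random = mash(data.random, Ulist = c(U.ed, U.c), outputlevel = 1)

# Re-use the fitted prior (g) to compute posteriors on the strong subset.
m.strong = mash(data.strong, g = get_fitted_g(m.random), fixg = TRUE)
head(get_lfsr(m.strong))
```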
diff --git a/docs/articles/flash_mash.html b/docs/articles/flash_mash.html
index 02cc9a6..d2f2407 100644
--- a/docs/articles/flash_mash.html
+++ b/docs/articles/flash_mash.html
@@ -227,7 +227,7 @@

#  - Computing 5000 x 256 likelihood matrix.
 #  - Likelihood calculations took 0.16 seconds.
 #  - Fitting model with 256 mixture components.
-#  - Model fitting took 2.34 seconds.
+#  - Model fitting took 2.35 seconds.

diff --git a/docs/articles/intro_correlations.html b/docs/articles/intro_correlations.html
index ba74f56..a1a1c93 100644
--- a/docs/articles/intro_correlations.html
+++ b/docs/articles/intro_correlations.html
@@ -93,9 +93,9 @@

U.c = cov_canonical(data.V) 
 m.c = mash(data.V, U.c) # fits with correlations because data.V includes correlation information 
#  - Computing 2000 x 151 likelihood matrix.
-#  - Likelihood calculations took 0.05 seconds.
+#  - Likelihood calculations took 0.04 seconds.
 #  - Fitting model with 151 mixture components.
-#  - Model fitting took 0.43 seconds.
+#  - Model fitting took 0.42 seconds.
 #  - Computing posterior matrices.
 #  - Computation allocated took 0.01 seconds.
print(get_loglik(m.c),digits=10) # log-likelihood of the fit with correlations set to V
@@ -103,9 +103,9 @@

We can also compare with the original analysis. (Note that the canonical covariances do not depend on the correlations, so we can use the same U.c here for both analyses. If we used data-driven covariances we might prefer to estimate these separately for each analysis as the correlations would affect them.)

m.c.orig = mash(data, U.c) # fits without correlations because data object was set up without correlations
#  - Computing 2000 x 151 likelihood matrix.
-#  - Likelihood calculations took 0.05 seconds.
+#  - Likelihood calculations took 0.06 seconds.
 #  - Fitting model with 151 mixture components.
-#  - Model fitting took 0.41 seconds.
+#  - Model fitting took 0.40 seconds.
 #  - Computing posterior matrices.
 #  - Computation allocated took 0.04 seconds.
print(get_loglik(m.c.orig),digits=10)
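The comparison above involves two data objects, one carrying an estimate of the residual correlation V and one without. A sketch of how such objects are typically set up, assuming mashr's estimate_null_correlation_simple and mash_update_data with their usual signatures:

```R
library(mashr)

# 'data' is an existing mash data object (from mash_set_data); estimate the
# residual correlation from the null-ish tests and attach it.
V = estimate_null_correlation_simple(data)
data.V = mash_update_data(data, V = V)

m.c      = mash(data.V, U.c)  # fit accounting for correlations
m.c.orig = mash(data, U.c)    # fit ignoring correlations

# The correlation-aware fit typically has the higher log-likelihood.
get_loglik(m.c) - get_loglik(m.c.orig)
```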
diff --git a/docs/articles/intro_mash.html b/docs/articles/intro_mash.html
index a8bf9ee..c314f0a 100644
--- a/docs/articles/intro_mash.html
+++ b/docs/articles/intro_mash.html
@@ -128,7 +128,7 @@

#  - Computing 2000 x 151 likelihood matrix.
 #  - Likelihood calculations took 0.04 seconds.
 #  - Fitting model with 151 mixture components.
-#  - Model fitting took 0.41 seconds.
+#  - Model fitting took 0.43 seconds.
 #  - Computing posterior matrices.
 #  - Computation allocated took 0.01 seconds.

This can take a little time. What this does is to fit a mixture model to the data, estimating the mixture proportions. Specifically the model is that the true effects follow a mixture of multivariate normal distributions: \(B \sim \sum_k \sum_l \pi_{kl} N(0, \omega_l U_k)\) where the \(\omega_l\) are scaling factors set by the “grid” parameter in mash and the \(U_k\) are the covariance matrices (here specified by U.c).

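To make the description above concrete, here is a minimal sketch of that fit: the grid of scaling factors \(\omega_l\) and the covariance matrices \(U_k\) define the mixture components, and mash estimates the mixture proportions \(\pi_{kl}\). Bhat and Shat are assumed inputs.

```R
library(mashr)

# Bhat: matrix of effect estimates; Shat: matrix of their standard errors.
data = mash_set_data(Bhat, Shat)

# Canonical covariance matrices U_k; the grid of scaling factors omega_l is
# chosen automatically by mash from the data.
U.c = cov_canonical(data)

m = mash(data, U.c)
barplot(get_estimated_pi(m), las = 2)  # estimated mixture proportions pi
```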
diff --git a/docs/articles/intro_mashcommonbaseline.html b/docs/articles/intro_mashcommonbaseline.html
index ce34061..5f9bb96 100644
--- a/docs/articles/intro_mashcommonbaseline.html
+++ b/docs/articles/intro_mashcommonbaseline.html
@@ -101,11 +101,11 @@

U.c = cov_canonical(data.L)
 mashcontrast.model = mash(data.L, U.c, algorithm.version = 'R')
#  - Computing 12000 x 181 likelihood matrix.
-#  - Likelihood calculations took 0.97 seconds.
+#  - Likelihood calculations took 1.18 seconds.
 #  - Fitting model with 181 mixture components.
-#  - Model fitting took 4.21 seconds.
+#  - Model fitting took 4.23 seconds.
 #  - Computing posterior matrices.
-#  - Computation allocated took 0.14 seconds.
+#  - Computation allocated took 0.15 seconds.
print(get_loglik(mashcontrast.model),digits=10)
# [1] -105525.1375

Use get_significant_results to find the indices of effects that are ‘significant’:

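A sketch of that call; the 0.05 threshold on the local false sign rate is, as far as I recall, the package default, so treat it as an assumption:

```R
# Indices of effects that are 'significant' at the default lfsr threshold.
idx = get_significant_results(mashcontrast.model)
length(idx)

# The threshold can also be given explicitly.
head(get_significant_results(mashcontrast.model, thresh = 0.05))
```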
@@ -121,11 +121,11 @@

data.wrong = mash_set_data(Bhat = simdata$Chat %*% t(L), Shat = 1)
m = mash(data.wrong, U.c)

#  - Computing 12000 x 181 likelihood matrix.
-#  - Likelihood calculations took 0.29 seconds.
+#  - Likelihood calculations took 0.30 seconds.
 #  - Fitting model with 181 mixture components.
-#  - Model fitting took 2.26 seconds.
+#  - Model fitting took 2.35 seconds.
 #  - Computing posterior matrices.
-#  - Computation allocated took 0.07 seconds.
+#  - Computation allocated took 0.09 seconds.
print(get_loglik(m),digits = 10)
# [1] -111355.1971

We can see that the log likelihood is lower, since it does not consider the induced correlation.

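A short sketch of the comparison described above: mashcontrast.model accounts for the correlation induced by the contrast matrix L, while m was fit to data.wrong and ignores it.

```R
# The difference should be positive: the fit that models the induced
# correlation explains the data better (roughly 5830.06 here, from the
# log-likelihoods printed above).
get_loglik(mashcontrast.model) - get_loglik(m)
```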
diff --git a/docs/articles/intro_mashnobaseline.html b/docs/articles/intro_mashnobaseline.html
index c58dadf..7183825 100644
--- a/docs/articles/intro_mashnobaseline.html
+++ b/docs/articles/intro_mashnobaseline.html
@@ -142,7 +142,7 @@

m = mash(data.L, c(U.c,U.ed), algorithm.version = 'R')
#  - Computing 2000 x 181 likelihood matrix.
-#  - Likelihood calculations took 0.36 seconds.
+#  - Likelihood calculations took 0.39 seconds.
 #  - Fitting model with 181 mixture components.
 #  - Model fitting took 0.51 seconds.
 #  - Computing posterior matrices.
diff --git a/docs/articles/mash_sampling.html b/docs/articles/mash_sampling.html
index 6fe4ffd..4652048 100644
--- a/docs/articles/mash_sampling.html
+++ b/docs/articles/mash_sampling.html
@@ -88,11 +88,11 @@ 

Here, we draw 100 samples from the posteriors of each effect.

m = mash(data, U.c, algorithm.version = 'R', posterior_samples = 100)
#  - Computing 2000 x 151 likelihood matrix.
-#  - Likelihood calculations took 0.27 seconds.
+#  - Likelihood calculations took 0.28 seconds.
 #  - Fitting model with 151 mixture components.
-#  - Model fitting took 0.43 seconds.
+#  - Model fitting took 0.42 seconds.
 #  - Computing posterior matrices.
-#  - Computation allocated took 3.11 seconds.
+#  - Computation allocated took 3.01 seconds.

Using get_samples(m), we have a \(2000 \times 5 \times 100\) array for samples.

If we fit the mash model without the posterior samples, we could use mash_compute_posterior_matrices to sample from the mash object.

m$result = mash_compute_posterior_matrices(m, data, algorithm.version = 'R',
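For reference, a sketch of working with the samples; the trailing arguments of mash_compute_posterior_matrices are an assumption completing the truncated call above.

```R
# 2000 effects x 5 conditions x 100 posterior samples.
dim(get_samples(m))

# If mash was run without posterior_samples, samples can be drawn afterwards
# from the fitted object (argument list assumed).
m$result = mash_compute_posterior_matrices(m, data, algorithm.version = 'R',
                                            posterior_samples = 100)
```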
diff --git a/docs/index.html b/docs/index.html
index 03ff313..b46a149 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -64,7 +64,7 @@
 mashr: Multivariate Adaptive Shrinkage in R

This package implements methods to estimate and test many effects in many conditions (or many effects on many outcomes).

-The methods use Empirical Bayes methods to estimate patterns of similarity among conditions, and then exploit those patterns of similarity among conditions to improve accuracy of effect estimates. See [Urbut et al][mashr-paper] for details of the model and methods.
+The methods use Empirical Bayes methods to estimate patterns of similarity among conditions, and then exploit those patterns of similarity among conditions to improve accuracy of effect estimates. See Urbut et al for details of the model and methods.

Note that this R package is a refactoring of the code originally used to generate the results for the manuscript. The original package code is here.

@@ -109,8 +109,7 @@

  • When any changes are made to roxygen2 markup or the C++ code in the src directory, simply run devtools::document() to update the RcppExports.cpp, the package namespaces (see NAMESPACE), and the package documentation files (in the man directory),

  • These are the R commands to build the website (make sure you are connected to Internet while running these commands):

-    library(pkgdown)
-    build_site(mathjax = FALSE)
+    R library(pkgdown) pkgdown::build_site(mathjax = FALSE)

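A minimal sketch of the developer steps listed above, assuming devtools and pkgdown are installed:

```R
# Regenerate NAMESPACE, the man/ pages and src/RcppExports.cpp from the
# roxygen2 markup.
devtools::document()

# Rebuild the pkgdown website (requires an Internet connection).
pkgdown::build_site(mathjax = FALSE)
```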
diff --git a/docs/reference/mash.html b/docs/reference/mash.html
index 1cfd46a..2060db5 100644
--- a/docs/reference/mash.html
+++ b/docs/reference/mash.html
@@ -195,7 +195,7 @@

 Examples
 res.mash = mashr::mash(data,U.c)
 #> - Computing 20 x 131 likelihood matrix.
 #> - Likelihood calculations took 0.00 seconds.
 #> - Fitting model with 131 mixture components.
-#> - Model fitting took 0.04 seconds.
+#> - Model fitting took 0.08 seconds.
 #> - Computing posterior matrices.
 #> - Computation allocated took 0.00 seconds.