Run a differential co-expression pipeline on data from a simulation experiment. A default pipeline, composed of methods in the package, can be used, or a custom pipeline can be provided.
dcPipeline(
  simulation,
  dc.func = "zscore",
  precomputed = FALSE,
  continuous = FALSE,
  cond.args = list(),
  ...
)
simulation: a list, storing data and results generated from simulations.

dc.func: a function or character. A character represents one of the method names from dcMethods, which is run with its default settings. A function can be used to provide a custom processing pipeline (see details).

precomputed: a logical, indicating whether the precomputed inference should be used or a new one computed (default FALSE).

continuous: a logical, indicating whether binary or continuous conditions should be used (default FALSE). No currently implemented methods use continuous conditions; this option exists to allow custom methods that require them.

cond.args: a list, containing condition-specific arguments for the DC inference pipeline (see details).

...: additional parameters passed on to dc.func.
a list of igraphs, representing the differential network for each independent condition (knock-out).
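As a brief sketch of working with the return value (the object name res and the use of the zscore method here are illustrative only):

#hypothetical: 'res' holds the list returned by dcPipeline(sim102, dc.func = 'zscore')
length(res)                #number of knock-out conditions
igraph::vcount(res[[1]])   #genes in the first differential network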
If dc.func is a character, the existing methods in the package will be run with their default parameters. The pipeline is as such: dcScore -> dcTest -> dcAdjust -> dcNetwork, resulting in an igraph object. Parameters to the independent processing steps can also be provided to this function as shown in the examples.
If precomputed is TRUE while dc.func is a character, pre-computed results will be used. These can then be evaluated using dcEvaluate.
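A minimal sketch of this workflow is shown below; the call to dcEvaluate with only the simulation and the result list assumes its remaining arguments are left at their defaults:

data(sim102)
#retrieve pre-computed z-score results and evaluate them (default dcEvaluate settings assumed)
resPre <- dcPipeline(sim102, dc.func = 'zscore', precomputed = TRUE)
perf <- dcEvaluate(sim102, resPre)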
Custom pipelines need to be coded into a function which can then be provided instead of a character. Functions must have the following structure:
function(emat, condition, ...)
They must return either an igraph object or an adjacency matrix stored in a base R 'matrix' or the S4 'Matrix' class, containing all genes in the expression matrix 'emat'. See examples for how the in-built functions are combined into a pipeline.
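To illustrate this structure, the following is a hypothetical custom pipeline (not part of the package) that builds a binary adjacency matrix from the absolute difference of Spearman correlations. It assumes genes are in the rows of emat; a real analysis would typically apply a statistical test rather than a fixed cutoff:

data(sim102)
#hypothetical custom pipeline: difference of correlation matrices, hard-thresholded
corDiffPipeline <- function(emat, condition, cutoff = 0.8, ...) {
  groups = unique(condition)
  #correlate genes within each condition (genes assumed to be in rows of 'emat')
  c1 = cor(t(emat[, condition == groups[1]]), method = 'spearman')
  c2 = cor(t(emat[, condition == groups[2]]), method = 'spearman')
  #binary adjacency matrix covering all genes in 'emat', returned as a base 'matrix'
  adjmat = (abs(c1 - c2) > cutoff) * 1
  diag(adjmat) = 0
  return(adjmat)
}
resCorDiff <- dcPipeline(sim102, dc.func = corDiffPipeline)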
If the pipeline (in-built or custom) requires condition-specific parameters to run, cond.args can be used to pass these. For instance, LDGM requires lambda OR the number of edges in the target network to be specified for each inference/condition. For the latter case and with 3 different conditions, this can be done by setting cond.args = list('ldgm.ntarget' = c(100, 140, 200)). Non-specific arguments should be passed directly to the dcPipeline function call.
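A sketch of such a call is given below; it assumes sim102 contains two knock-out conditions (as in the examples that follow), so two target edge counts are supplied, and the values themselves are arbitrary:

data(sim102)
#condition-specific target edge counts for the LDGM method (arbitrary values)
resLDGM <- dcPipeline(
  sim102,
  dc.func = 'ldgm',
  cond.args = list('ldgm.ntarget' = c(100, 140))
)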
data(sim102)
#run a standard pipeline
resStd <- dcPipeline(sim102, dc.func = 'zscore')
#run a standard pipeline and specify params
resParam <- dcPipeline(sim102, dc.func = 'zscore', cor.method = 'pearson')
#run a standard pipeline and specify condition-specific params
resParam <- dcPipeline(
  sim102,
  dc.func = 'diffcoex',
  #arguments for the conditions ADR1 knockdown and UME6 knockdown resp.
  cond.args = list(diffcoex.beta = c(6, 20))
)
#retrieve pre-computed results
resPrecomputed <- dcPipeline(sim102, dc.func = 'zscore', precomputed = TRUE)
#run a custom pipeline
analysisInbuilt <- function(emat, condition, dc.method = 'zscore', ...) {
  #compute scores
  score = dcScore(emat, condition, dc.method, ...)
  #perform statistical test
  pvals = dcTest(score, emat, condition, ...)
  #adjust tests for multiple testing
  adjp = dcAdjust(pvals, ...)
  #threshold and generate network
  dcnet = dcNetwork(score, adjp, ...)
  return(dcnet)
}
resCustom <- dcPipeline(sim102, dc.func = analysisInbuilt)
plot(resCustom[[1]])