
Meta Analysis

The Journal uses meta-analysis methods for two main purposes:

  1. Meta-Analytic Outputs. At the end of each simulation cycle, i.e., when it has collected max_pubs publications, the Journal may use the specified meta-analysis methods to combine the results of the studies in its publications list.
  2. Adaptive Selection Strategies. The Journal may use the meta-analytic outcomes to adjust its Selection Strategy and its acceptance rate. [under development]

Meta-Analytic Outputs

Collecting meta-analytic outputs is the main use case for adding meta-analysis methods to the Journal's configuration. This is done by listing each method in meta_analysis_metrics under the journal_parameters section. As before, any method-specific parameters can be added alongside each method. To run one method with two different sets of parameters, simply list the method twice with the modified parameters.

After each simulation cycle, when the list of accepted publications is ready, the Journal traverses the list of methods and computes their outcomes. Before restarting the simulation and preparing for a new set of submissions, the Journal saves all its publications and their corresponding meta-analyses to separate CSV files for further analysis by the user. The Simulation Configurations section lists all the available output parameters.

"journal_parameters": {
    "max_pubs": 24,
    "selection_strategy": {...},
    "meta_analysis_metrics": [
        {
            "name": "MethodName",
            "param": ...
        },
        {
            "name": "MethodName",
            "param": ...
        },
        ...
    ]
}
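
For example, to run the same method under two different parameter sets, list it twice. The sketch below uses Egger's test at two significance levels; the alpha values are illustrative:

```json
"meta_analysis_metrics": [
    { "name": "FixedEffectEstimator" },
    { "name": "EggersTestEstimator", "alpha": 0.05 },
    { "name": "EggersTestEstimator", "alpha": 0.1 }
]
```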

SAM offers a short list of meta-analysis methods and publication bias metrics, as presented in the rest of this section.

Fixed Effect Estimator

Fixed Effect Estimator Configurations

{
    "name": "FixedEffectEstimator"
}
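
The fixed-effect model pools the studies by inverse-variance weighting. SAM's internal implementation may differ; the following is a minimal sketch of the standard estimator:

```python
import math

def fixed_effect(effects, variances):
    """Inverse-variance weighted fixed-effect meta-analysis.

    Returns the pooled effect estimate and its standard error.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Example: three studies with equal variances pool to their plain mean.
est, se = fixed_effect([0.2, 0.4, 0.6], [0.04, 0.04, 0.04])
```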

Random Effect Estimator

Random Effect Estimator Configurations

{
    "name": "RandomEffectEstimator",
    "estimator": "DL"
}

The available estimators are:

  • DerSimonian-Laird Estimator [1]
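
The DerSimonian-Laird ("DL") estimator first computes the between-study variance tau^2 from Cochran's Q, then re-pools the studies with weights that include it. A minimal sketch of the textbook formula (SAM's implementation may differ in details):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate using the DerSimonian-Laird
    between-study variance (tau^2) estimator."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each study by its total (within + between) variance.
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, tau2, se

# Homogeneous studies: Q = 0, so tau^2 truncates to zero and the
# random-effects estimate coincides with the fixed-effect one.
pooled, tau2, se = dersimonian_laird([0.3, 0.3, 0.3], [0.1, 0.1, 0.1])
```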

Egger's Test Estimator [2]

Egger's Test Estimator Configurations

{
    "name": "EggersTestEstimator",
    "alpha": 0.1
}
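
Egger's test regresses the standardized effect (effect / SE) on precision (1 / SE); a non-zero intercept signals funnel-plot asymmetry, judged against a t distribution with n - 2 degrees of freedom at the configured alpha. A minimal ordinary-least-squares sketch (not SAM's actual code path):

```python
import math

def eggers_regression(effects, ses):
    """Return the Egger intercept and its standard error.

    The test statistic is intercept / se_intercept, compared against a
    t distribution with n - 2 degrees of freedom.
    """
    x = [1.0 / s for s in ses]                  # precision
    y = [e / s for e, s in zip(effects, ses)]   # standardized effect
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual variance and the intercept's standard error.
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se_intercept = math.sqrt(s2 * (1.0 / n + mx * mx / sxx))
    return intercept, se_intercept

# Data constructed to lie exactly on y = 1.5 + 2x, so the intercept is 1.5.
b0, se_b0 = eggers_regression([3.5, 2.75, 2.375, 2.3], [1.0, 0.5, 0.25, 0.2])
```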

Trim and Fill Method [3]

Trim and Fill Method Configurations

{
    "name": "TrimAndFill",
    "alpha": 0.1,
    "estimator": "R0",
    "side": "auto"
}
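
Trim and fill estimates the number of suppressed studies (k0) from the asymmetry of deviations around the pooled center, then imputes mirror-image studies. The sketch below illustrates only the R0 estimator named in the configuration, under the assumptions that there are no rank ties and that the asymmetric side is given explicitly (the "auto" setting presumably infers it); this is an interpretation of Duval & Tweedie, not SAM's code:

```python
def trim_and_fill_r0(effects, center, side="right"):
    """R0 estimator of the number of suppressed studies: rank the
    deviations from the pooled center by absolute size, and set
    R0 = (length of the run of positive signs among the largest
    ranks) - 1."""
    devs = [e - center for e in effects]
    if side == "left":            # mirror so the asymmetry is on the right
        devs = [-d for d in devs]
    # Order deviations by absolute size, largest first (assumes no ties).
    ordered = sorted(devs, key=abs, reverse=True)
    run = 0
    for d in ordered:
        if d > 0:
            run += 1
        else:
            break
    return max(run - 1, 0)

# Example: the two largest deviations are both positive, so R0 = 2 - 1 = 1.
k0 = trim_and_fill_r0([0.1, 0.5, 1.2, 2.0], center=0.4)
```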

Begg's Rank Correlation [4]

Rank Correlation Configurations

{
    "name": "RankCorrelation",
    "alpha": 0.1,
    "alternative": "TwoSided"
}
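
Begg and Mazumdar's test computes Kendall's tau between the standardized deviates and the sampling variances; a strong correlation suggests that smaller (higher-variance) studies report systematically different effects. A minimal sketch without tie correction or the normal approximation for the p-value (SAM's implementation may differ):

```python
import math

def begg_rank_correlation(effects, variances):
    """Kendall's tau between standardized deviates and variances."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Variance of (effect_j - pooled): v_j minus the pooled variance.
    v_star = [v - 1.0 / sum(w) for v in variances]
    z = [(e - pooled) / math.sqrt(vs) for e, vs in zip(effects, v_star)]
    # Kendall's tau: count concordant and discordant pairs (no tie correction).
    n = len(effects)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (z[i] - z[j]) * (variances[i] - variances[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Deviates and variances increase together here, so tau = 1.
tau = begg_rank_correlation([0.1, 0.2, 0.3], [0.01, 0.04, 0.09])
```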

Test of Excess of Significant Findings [5]

Test of Excess of Significant Findings Configurations

{
    "name": "TestOfObsOverExptSig",
    "alpha": 0.1
}

Adaptive Selection Strategies

This feature is under development: the Journal will use its meta-analytic outcomes to adjust its Selection Strategy and acceptance rate between simulation cycles.


  1. Rebecca DerSimonian and Nan Laird. Meta-analysis in clinical trials. Controlled Clinical Trials, 7(3):177–188, September 1986. URL: https://doi.org/10.1016%2F0197-2456%2886%2990046-2, doi:10.1016/0197-2456(86)90046-2

  2. Matthias Egger, George Davey Smith, Martin Schneider, and Christoph Minder. Bias in meta-analysis detected by a simple, graphical test. BMJ, 315(7109):629–634, 1997.

  3. Sue Duval and Richard Tweedie. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics, 56(2):455–463, 2000. URL: https://onlinelibrary.wiley.com/doi/abs/10.1111/j.0006-341X.2000.00455.x, doi:10.1111/j.0006-341X.2000.00455.x

  4. Colin B. Begg and Madhuchhanda Mazumdar. Operating characteristics of a rank correlation test for publication bias. Biometrics, 50(4):1088–1101, 1994. URL: http://www.jstor.org/stable/2533446

  5. John P. A. Ioannidis and Thomas A. Trikalinos. An exploratory test for an excess of significant findings. Clinical Trials: Journal of the Society for Clinical Trials, 4(3):245–253, June 2007. URL: https://doi.org/10.1177%2F1740774507079441, doi:10.1177/1740774507079441


Last update: 2021-09-18