# sam::FrankenbachStrategy
Inherits from sam::HackingProbabilityStrategy
## Public Classes

| | Name |
| -- | -- |
| struct | Parameters |
## Public Functions

| | Name |
| -- | -- |
| | FrankenbachStrategy() =default |
| | FrankenbachStrategy(Parameters & p) |
| virtual float | estimate(Experiment * experiment) override |
## Public Attributes

| | Name |
| -- | -- |
| Parameters | params |
## Additional inherited members

**Public Functions inherited from sam::HackingProbabilityStrategy**

| | Name |
| -- | -- |
| virtual | ~HackingProbabilityStrategy() =0 |
| | operator float() |
| std::unique_ptr< HackingProbabilityStrategy > | build(json & config) |
**Public Attributes inherited from sam::HackingProbabilityStrategy**

| | Name |
| -- | -- |
| float | prob |
| arma::Row< float > | probabilities |
## Public Functions Documentation

### function FrankenbachStrategy

```cpp
FrankenbachStrategy() =default
```
### function FrankenbachStrategy

```cpp
inline FrankenbachStrategy(
    Parameters & p
)
```
### function estimate

```cpp
virtual float estimate(
    Experiment * experiment
) override
```
**Reimplements**: sam::HackingProbabilityStrategy::estimate
We have something in the middle now: the estimate is based on the p-value. We check for significance; if the result is significant, we return 0; otherwise, we assign a probability.

I have a feeling this is a very inefficient implementation.

If the hacking probability is 1, then everything in this range is going to be hacked, i.e., hp = 1. Update: I think I had this wrong previously, where I assigned the probability to everything, while it should only be assigned to those studies that pass the effect test in the first place.

**Todo**: Remember that you should consider some options here. At the moment, I'm returning the maximum of all probabilities, but that's not necessarily the best thing to do. It works just fine in the Frankenbach simulation, though, because they have only one outcome anyway.
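The logic described in the notes above can be sketched as follows. This is a simplified, hypothetical reconstruction, not the actual implementation: the `Outcome` struct, `estimate_sketch` function, and the `alpha` threshold are illustrative stand-ins for what `estimate()` reads from SAM's `Experiment` object.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical, simplified record of one outcome's test results;
// the real code reads these from the Experiment object.
struct Outcome {
  float pvalue;            // p-value of the outcome's significance test
  bool passes_effect_test; // e.g., effect is in the expected direction
};

// Sketch of the estimate() logic described above: a significant outcome
// contributes probability 0 (no need to hack); a non-significant outcome
// that passes the effect test gets the configured hacking probability;
// the final estimate is the maximum over all outcomes.
float estimate_sketch(const std::vector<Outcome> &outcomes,
                      float hacking_probability, float alpha = 0.05f) {
  float max_prob = 0.0f;
  for (const auto &o : outcomes) {
    float p = 0.0f;
    if (o.pvalue >= alpha && o.passes_effect_test) {
      p = hacking_probability; // candidate for hacking
    }
    max_prob = std::max(max_prob, p);
  }
  return max_prob;
}
```

With a single outcome, as in the Frankenbach simulation, the maximum reduces to that one outcome's probability, which is why the max works fine there despite the caveat in the todo note.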
## Public Attributes Documentation

### variable params

```cpp
Parameters params;
```
Updated on 29 June 2021 at 16:13:46 CEST