29 September 2016

Ethics review of public health studies

There is no need to reach back to the Nazi human experiments or the Tuskegee Syphilis Study to justify research ethics: it is now standard practice that all public health (and other) studies involving humans or human biological materials be conducted in accordance with accepted ethical principles. To ensure the ethical soundness of such studies, study protocols are required to undergo rigorous ethics review by research ethics committees (RECs) or institutional review boards (IRBs).

But which parts of a research protocol should undergo review by RECs or IRBs? Some researchers are unhappy when RECs or IRBs review parts of the protocol other than the ethics statement section. Should RECs or IRBs review only the ethics statement of public health research protocols?

RECs or IRBs should review a protocol in its entirety, not just the ethics statement section. Anything from the identification of the research problem to the design of the study methodology to the analysis, interpretation and reporting of findings can raise ethical issues. It is therefore crucial that public health research protocols be reviewed in their entirety (from title to annexes). In general, the review of a public health research protocol should address three components: 1) scientific review, 2) conflict of interest review and 3) ethics review.

The three types of review are briefly discussed below.

Scientific review. Scientific review is required to ensure the scientific soundness of the proposed study. This includes examining whether the study has a clear research question and objectives, whether it will generate new knowledge, whether the methodology is appropriate, and so on. A study that is not scientifically sound cannot be ethical. For example, a study that is unlikely to produce new knowledge and merely “reinvents the wheel” needlessly burdens study participants and wastes meager resources, and is hence unethical.

Conflict of interest review. This is required to ensure that parties directly or indirectly involved in the research do not have vested interests that could influence the researchers. Such influences may include, for example, a funding agency's influence on how the research is conducted, on whether the results are published, on where and when the findings are published, and on cherry-picking of findings for publication. A review by Barnes and Bero[1] of review articles on the health effects of passive smoking published between 1980 and 1995 found that articles written by tobacco industry-affiliated authors had about 88 times higher odds of concluding that passive smoking has no health effect than articles by authors without tobacco industry affiliation, demonstrating the influence funding agencies can have on authors. Conducting public health studies under such conflicts of interest is unethical, and RECs or IRBs should therefore ensure that proposed studies are free from them.

Ethics review. This is required to ensure that the proposed study will be conducted in accordance with the three basic ethical principles, viz. respect for persons, beneficence and justice.[2] A study that violates any of these basic principles is unethical.

To conclude, ethics review of public health research protocols should not be restricted to the ethics statement section. Reviewers should rather examine protocols in their entirety and make sure that planned studies are scientifically sound, free from conflicts of interest and compliant with the basic ethical principles of studies involving humans or human biological materials.
  
References
1. Barnes DE, Bero LA. Why Review Articles on the Health Effects of Passive Smoking Reach Different Conclusions. JAMA. 1998;279(19):1566-70.
2. Council for International Organizations of Medical Sciences (CIOMS). International Ethical Guidelines for Biomedical Research Involving Human Subjects. Geneva; 2002.

24 September 2016

Weighted versus unweighted estimates: Is it wrong if they are different?

A while back, a colleague and I submitted a manuscript for publication in a peer-reviewed journal. The manuscript was based on a weighted analysis of the data, with the weights intended to attenuate bias due to unequal probabilities of selection of sub-populations[1]. For transparency, we reported both the weighted and unweighted frequency distributions of the population by important variables.

After the manuscript underwent a thorough anonymous peer review, we received a report stating that, because the weighted and unweighted frequencies differed substantially, the analysis weight we had used must be wrong. Does obtaining different weighted and unweighted frequencies render the assigned analysis weight invalid?

As pointed out above, if weighting serves to attenuate bias due to disproportionate sampling of sub-populations, then weighted and unweighted frequencies (and other estimates, such as means and regression coefficients) are expected to differ. The magnitude of the difference depends on the extent of the disproportion in the sampling.

To clarify the issue, consider an example. Say we have a population of 100 comprising 25 males and 75 females; that is, males make up 25% of the population and females 75%. Suppose we draw a sample of 30 by sampling males and females separately, randomly selecting 15 from each group (disproportionate stratified sampling). Based on the sample alone, males appear to comprise 50% of the population and females the remaining 50%. However, the probability of selection was 60% for males (15/25*100) and only 20% for females (15/75*100). The sampling is clearly disproportionate, so the analysis must take this disproportion into account; otherwise, the results will be biased.

In this simple scenario, the analysis weight is calculated as the inverse of the selection probability: 1/0.6 ≈ 1.67 for males and 1/0.2 = 5 for females (non-normalized). If we declare a complex sample survey design (using svyset in Stata) and redo the analysis as a complex sample analysis, the weighted frequency of males comes out at 25% and that of females at 75%, matching the population.
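For readers who want to verify the arithmetic, below is a minimal Stata sketch of this example; the variable names (female, pw) are illustrative only.

* Hypothetical data: 15 males and 15 females sampled from a population
* of 25 males and 75 females.
clear
set obs 30
generate byte female = _n > 15                   // first 15 males, last 15 females
generate double pw = cond(female, 75/15, 25/15)  // inverse selection probability: 5 vs 1.67

tabulate female                                  // unweighted: 50% male, 50% female

svyset [pweight=pw]                              // declare the survey design
svy: proportion female                           // weighted: 25% male, 75% female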

Hence, there is nothing abnormal about weighted and unweighted frequencies differing. Provided the weight variable has been computed properly, such a difference is exactly what should be expected.

Reference
1. Heeringa SG, West BT, Berglund PA. Applied Survey Data Analysis. Boca Raton, FL: Chapman & Hall/CRC; 2010.

17 September 2016

Do not control a variable belonging to a sub-sample

In public health research, multivariable models are often used to control confounders while investigating the effect of an exposure on an outcome. To use a given model, certain assumptions must be fulfilled; a model cannot be valid if its assumptions are fatally violated. Some assumptions are checked before the model is used. For example, one would not choose a linear regression model for a nominal dependent variable, because the assumption of a continuous dependent variable is violated. Other assumptions are checked post hoc (for example, normality of residuals in linear regression models), as illustrated below. Assumptions are model-specific.
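As a small illustration of a post-hoc assumption check, the following Stata sketch fits a linear regression on Stata's built-in auto dataset and then examines the residuals for normality; the choice of dataset and variables is purely illustrative.

* Fit a linear model on the built-in auto dataset, then check
* the normality of the residuals post hoc.
sysuse auto, clear
regress price mpg weight
predict double res, residuals   // store the residuals
qnorm res                       // normal quantile plot of the residuals
swilk res                       // Shapiro-Wilk test of normality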

There is also a prerequisite that must be checked for all models but that is seldom taught to students and not clearly addressed in books. A variable being controlled in a multivariable model should be one that has been measured on all study units, so that every study unit has a meaningful value for it. A control variable should not be one that is measured only on a sub-sample. For example, in a study among mothers, a variable on the outcome of the previous pregnancy applies only to mothers who have had at least one pregnancy. It is not uncommon to find researchers who enter such variables into multivariable models and end up with models that “behave wildly”. This happens because all study units for which the variable does not apply are excluded from the analysis, so the analysis is silently limited to a sub-sample. In effect, estimates will be biased, precision will be lost as manifested by very wide confidence intervals (sometimes the lower and upper bounds of the confidence intervals cannot be estimated at all), and model goodness-of-fit statistics cannot be determined.
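A rough simulated illustration in Stata is given below; the variables (ever_pregnant, prev_loss, anaemia) are hypothetical, and the point is simply to show how the sample shrinks once a sub-sample variable enters the model.

* Simulated data: 500 mothers; the outcome of the previous pregnancy
* is defined only for those with at least one prior pregnancy.
clear
set seed 12345
set obs 500
generate byte ever_pregnant = runiform() < 0.6
generate byte prev_loss = runiform() < 0.2 if ever_pregnant   // missing for the rest
generate byte anaemia = runiform() < 0.3
generate byte outcome = runiform() < 0.25

logistic outcome anaemia              // uses all 500 observations
logistic outcome anaemia prev_loss    // silently restricted to the sub-sample
display e(N)                          // far fewer observations actually used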
          
A variable that applies only to a sub-sample can be controlled only if one is deliberately analyzing that sub-sample to make sub-population inferences. In that case, the analysis should be well planned from the outset, including ensuring that the sample size is adequate for sub-population inference.

In conclusion, while working with multivariable models, it is necessary to make sure that all control variables (and, for that matter, exposure variables) apply to all study units, unless one is purposely analyzing a sub-sample for sub-population inference. Otherwise, the results could lack both validity and precision.

9 September 2016

Collider-stratification bias

In public health research, adjusting for confounders using multivariable methods is a popular practice, and it is necessary to deal with the menace of confounding. However, adjustment should be done with the utmost care, because a poorly thought-out adjustment can take the data “from the frying pan into the fire”. Multivariable analysis should not be a laissez-faire exercise in which one plugs as many variables as possible into the model and then sees what comes out. It should rather be carefully guided by a conceptual framework or causal diagrams, with the mechanism by which each variable operates duly considered and the analysis done accordingly. Otherwise, an unwise attempt to control so-called confounders can end up biasing the estimates.
      
Among such biases is collider-stratification bias. A collider is “a variable directly affected by two or more variables in a causal diagram”[1] (Figure 1). If variable C is directly affected by both X and Y, then stratifying on (or controlling for) C while assessing the association of X with Y, or the association of both X and Y with some other variable Z, tends to change the association between X and Y[1, 2]. It may cause a spurious association to emerge between X and Y where there is none[2, 3]; it may also induce a statistical association between the outcome of interest and an unmeasured factor U that also causes C, even though the two are actually independent[4], thereby turning the unmeasured variable U into a confounder. Controlling for the collider may even make a risk factor appear protective[4]. Biases that result from controlling a collider variable during data analysis are called collider-stratification biases[2] or biases due to conditioning on a collider[3].

Fig. 1: A collider variable. Variable C is directly affected by both variables X and Y.
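To see the bias in action, here is a minimal simulation sketch in Stata; x, y and c are generic simulated variables, with x and y generated independently and c a collider caused by both.

* Simulation: x and y are independent; both directly affect the collider c.
clear
set seed 2016
set obs 10000
generate double x = rnormal()
generate double y = rnormal()             // independent of x by construction
generate double c = x + y + rnormal()     // collider: caused by both x and y

regress y x      // coefficient on x is near zero, as expected
regress y x c    // conditioning on the collider: x now appears associated with y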

Ipso facto, to avoid collider-stratification bias, collider variables should not be controlled for (or adjusted for) in multivariable models[2].

References and additional reading materials:
1. Porta M, editor. A Dictionary of Epidemiology. 5th ed. New York: Oxford University Press; 2008.
2. Greenland S. Quantifying Biases in Causal Models: Classical Confounding vs Collider-Stratification Bias. Epidemiology. 2003;14(3):300-6.
3. Cole SR, Platt RW, Schisterman EF, Chu H, Westreich D, Richardson D, et al. Illustrating bias due to conditioning on a collider. International Journal of Epidemiology. 2010;39:417-20.
4. Whitcomb BW, Schisterman EF, Perkins NJ, Platt RW. Quantification of collider-stratification bias and the birthweight paradox. Paediatric and Perinatal Epidemiology. 2009;23:394-402.