Year: 2018 | Volume: 8 | Issue: 3 | Page: 115-116
What's new in critical illness and injury science?: Revalidation of vasoactive–ventilation–renal scoring in predicting outcome in postcardiac surgery children and the importance of replicating studies
Thomas John Papadimos
Department of Anesthesiology, University of Toledo College of Medicine and Life Sciences, Toledo, Ohio 43614, USA
Date of Web Publication: 27-Aug-2018
Correspondence Address: Dr. Thomas John Papadimos, Department of Anesthesiology, University of Toledo College of Medicine and Life Sciences, 3000 Arlington Avenue, Toledo, Ohio 43614, USA
Source of Support: None, Conflict of Interest: None
How to cite this article:
Papadimos TJ. What's new in critical illness and injury science?: Revalidation of vasoactive–ventilation–renal scoring in predicting outcome in postcardiac surgery children and the importance of replicating studies. Int J Crit Illn Inj Sci 2018;8:115-6
In this issue, Alam et al. have revalidated the work done by Miletic et al. regarding the effectiveness of the vasoactive–ventilation–renal (VVR) score as an appropriate tool for predicting postcardiac surgery outcome in children. Miletic et al.'s work had also been previously validated by Scherer et al. Here, however, Alam et al. accumulated a denominator nearly four times that of Miletic et al. (the two studies of Miletic et al. had 222 and 92 patients, whereas Alam et al. had 1097 patients). In their work, Alam et al. compared the VVR score with the vasoactive–inotropic score (VIS) and the ventilation index (VI). The VVR score is equal to VIS + VI + (Δ creatinine × 10), where VI = respiratory rate × (peak inspiratory pressure [PIP] – positive end-expiratory pressure [PEEP]) × PaCO2/1000, and Δ creatinine is calculated by subtracting the preoperative serum creatinine (mg/dl) from the serum creatinine at the time of each measurement.
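For readers who wish to compute the score at the bedside or in a dataset, the published formula can be expressed as a minimal sketch; the function and parameter names below are the author's illustrative choices, not part of the original studies, and units follow the text (pressures in cm H2O, PaCO2 in mmHg, creatinine in mg/dl):

```python
def ventilation_index(resp_rate, pip, peep, paco2):
    """VI = respiratory rate x (PIP - PEEP) x PaCO2 / 1000."""
    return resp_rate * (pip - peep) * paco2 / 1000.0


def vvr_score(vis, resp_rate, pip, peep, paco2, creatinine, creatinine_preop):
    """VVR = VIS + VI + (delta creatinine x 10), where delta creatinine is
    the serum creatinine (mg/dl) at measurement minus the preoperative value."""
    delta_creatinine = creatinine - creatinine_preop
    return vis + ventilation_index(resp_rate, pip, peep, paco2) + delta_creatinine * 10.0
```

For example, a child on a VIS of 10, ventilated at 20 breaths/min with PIP 25 and PEEP 5 cm H2O, a PaCO2 of 40 mmHg, and a creatinine rise from 0.5 to 0.6 mg/dl would have VI = 16 and a VVR score of 27.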
While this tool has been previously used and validated on several occasions, the position of this journal is that works which validate a clinical tool or situation/study should be replicated and reported, especially when the revalidation is supported by a much larger sample than was used in the original validation studies. In their work, Alam et al. demonstrated the robustness of the VVR score, and its application to the evaluation of critical illness at the bedside should be seriously considered. Scherer et al. found that the VVR score obtained 12 h after Intensive Care Unit arrival was a strong predictor of clinical outcome. Alam et al., on the other hand, reported that the VVR score at 48 h best correlated with a prolonged length of stay (LOS) and mortality; this is consistent with Miletic et al.'s work. The authors also attempted to find correlations with other variables, such as age, cardiopulmonary bypass time, the Aristotle Basic Complexity score, and the Risk Adjustment for Congenital Heart Surgery (RACHS) score, but none demonstrated superiority to the 24-h or 48-h VVR or VIS scores. However, RACHS category >3 and acute kidney injury stage >2 were associated with mortality, and RACHS category >3 and postoperative rhythm abnormalities were associated with an increased LOS. In regard to RACHS, Alam et al. had a lower percentage of patients with RACHS 4–6 than Miletic et al. and pointed out that this could simply be due to a difference in the cohorts studied.
The authors correctly indicated that the efficacy of the VVR score compared to other disease severity indices, such as the Pediatric Logistic Organ Dysfunction score, the Pediatric Risk of Mortality III score, and the Pediatric Index of Mortality II score, still needs to be studied. These scores are complex calculations, whereas the VVR score is much simpler to perform and thus may be better to use. However, the VVR score has come under scrutiny by some because of the way renal function is calculated in the equation and the way VI is determined. As it concerns serum creatinine, Alam et al. stated that serum creatinine varies greatly with age, does not accurately predict/indicate renal injury, and may actually underestimate such injury. Therefore, use of the percentage change in estimated glomerular filtration rate may assist in eliminating this shortcoming. In regard to VI, they suggest that the use of the difference between PIP and PEEP may need to be supplanted by the inclusion of plateau pressure (PLAT) in the place of PIP. The use of PLAT instead of PIP may improve the accuracy of VI and will require investigation, as Alam et al. rightly pointed out.
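The suggested substitutions can be illustrated with a brief sketch. The function names here are hypothetical, and because the commentary does not specify how an eGFR-based renal term would be weighted within the score, the eGFR helper below is illustrative only, not a validated replacement for the Δ creatinine term:

```python
def ventilation_index_plat(resp_rate, plat, peep, paco2):
    """Proposed VI variant: plateau pressure (PLAT) in place of PIP."""
    return resp_rate * (plat - peep) * paco2 / 1000.0


def egfr_percent_change(egfr, egfr_preop):
    """Percentage change in estimated GFR from the preoperative baseline
    (negative values indicate a decline, i.e., possible renal injury)."""
    return 100.0 * (egfr - egfr_preop) / egfr_preop
```

Because PLAT excludes the resistive component of airway pressure, a PLAT-based VI would read lower than a PIP-based VI for the same patient; whether it tracks outcome more accurately is precisely the open question the authors raise.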
The VVR score remains an effective tool regardless of these criticisms. Adjustments to the creatinine term and use of a PLAT-based VI will only strengthen the importance of the VVR score. It would have been novel and extremely interesting if Alam et al. had incorporated these changes, the revised creatinine calculation and the inclusion of PLAT in VI, into their study, thereby making this report pivotal as opposed to a solid validation of previous work.
On that note, some positive comments on the value of validation or replication of studies should be made. In his brilliant 2005 work, "Why Most Published Research Findings Are False," John Ioannidis made the claim that "it can be proven that most research findings are false." He actually went on to prove this mathematically. Ioannidis explains why study power, bias, the quantity of other studies on the same topic, and the ratio of "true to no relationships among the relationships probed in each scientific field" are extremely important.
It is worth cautioning that, instead of chasing statistical significance, authors should commit to understanding the entire range of values regarding a subject or finding, or what Ioannidis calls the prestudy odds. Researchers need to evaluate their thinking process and decide whether they are testing a true or nontrue relationship. Ioannidis goes on to state that, "whenever ethically acceptable, large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed." More recently, Begley and Ioannidis have rechallenged basic and preclinical research regarding reproducibility. They expressed concern about the need for robustness and reliability as a sound platform for further advances in medical science, observed that we are now in the midst of a "reproducibility crisis," and argued that our problem as researchers, authors, teachers, and mentors is our "failure to adhere to good scientific practice and the desperation to publish or perish." This problem involves many stakeholders and has many perspectives that need to be considered.
Lest there be any concern as to whether this also applies to the social sciences, it certainly does. Simons, Professor of Psychology in the Visual Cognition Laboratory at the University of Illinois, enthusiastically supports the value of direct replication. He stated that "reproducibility is the cornerstone of science." He emphasized that any competent researcher should be able to replicate a result with adequate statistical power. He delved into the topics of "trust, but verify," limitations of scope, and the dangers of assumed moderation. He summarized his thoughts as follows: "only with direct replication by multiple laboratories will our theories make useful, testable, and generalizable predictions."
In this issue, Alam et al. presented us with a study that was large compared to previous studies on the topic, examined research findings that are considered relatively established (multiple papers on the topics of VVR, VIS, and VI), and explained its statistics well. Their work is exactly what should be presented: not necessarily new, but confirmatory. I believe that reproducibility of data is truly a cornerstone of science, whether it is basic science, clinical science, or social science. As believers in best practice evidence and the scientific method, it is incumbent on us to publish manuscripts that replicate the work of others to confirm or deny those findings. Anything less would be an injustice to ourselves, our patients, and the study of science.
References
Alam S, Akunuri S, Jain A, Mazahir R, Hegde R. Vasoactive-ventilation-renal score in predicting outcome postcardiac surgery. Int J Crit Illn Inj Sci 2018;89.
Miletic KG, Spiering TJ, Delius RE, Walters HL 3rd, Mastropietro CW. Use of a novel vasoactive-ventilation-renal score to predict outcomes after paediatric cardiac surgery. Interact Cardiovasc Thorac Surg 2015;20:289-95.
Miletic KG, Delius RE, Walters HL 3rd, Mastropietro CW. Prospective validation of a novel vasoactive-ventilation-renal score as a predictor of outcomes after pediatric cardiac surgery. Ann Thorac Surg 2016;101:1558-63.
Scherer B, Moser EA, Brown JW, Rodefeld MD, Turrentine MW, Mastropietro CW, et al. Vasoactive-ventilation-renal score reliably predicts hospital length of stay after surgery for congenital heart disease. J Thorac Cardiovasc Surg 2016;152:1423-90.
Gaies MG, Jeffries HE, Niebler RA, Pasquali SK, Donohue JE, Yu S, et al. Vasoactive-inotropic score is associated with outcome after infant cardiac surgery: An analysis from the pediatric cardiac critical care consortium and virtual PICU system registries. Pediatr Crit Care Med 2014;15:529-37.
Colombo J. Predictors of outcomes after pediatric cardiac surgery: A proposal to improve the vasoactive-ventilation-renal score. Ann Thorac Surg 2016;102:1413.
Karamlou T. Vasoactive-ventilation-renal score…a preliminary report. J Thorac Cardiovasc Surg 2016;152:1430-1.
Ioannidis JP. Why most published research findings are false. PLoS Med 2005;2:e124.
Wacholder S, Chanock S, Garcia-Closas M, El Ghormli L, Rothman N. Assessing the probability that a positive report is false: An approach for molecular epidemiology studies. J Natl Cancer Inst 2004;96:434-42.
Begley CG, Ioannidis JP. Reproducibility in science: Improving the standard for basic and preclinical research. Circ Res 2015;116:116-26.
Simons DJ. The value of direct replication. Perspect Psychol Sci 2014;9:76-80.