Sandra Porter 
RE: Discussion – Week 5
Hello All,
In week 2, my selected topic was diabetes, more specifically type 2. I wanted to complete further research on the National Diabetes Prevention Program. From a public health lens, cost-effectiveness analysis is the comparison of the costs and effectiveness of two or more health interventions to determine whether the value of an intervention justifies its costs; effectiveness is measured in similar units (Marseille, Larson, Kazi, Kahn, & Rosen, 2014). It is an economic principle that compares the relative costs of an intervention and its outcome; these data guide policy-makers in prioritizing and allocating resources to health interventions (Getzen, 2013).
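To make that comparison concrete, here is a minimal sketch in Python of the incremental cost-effectiveness ratio (ICER) that such an analysis typically reports. All of the cost and QALY figures below are invented for illustration and are not drawn from the NDPP literature.

```python
# Hypothetical illustration of a cost-effectiveness comparison (all numbers invented).
# ICER = (cost_new - cost_old) / (effect_new - effect_old), with effect measured in QALYs.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Invented example values: a lifestyle program versus usual care.
program_cost, usual_cost = 1_500.0, 400.0   # cost per participant (hypothetical)
program_qaly, usual_qaly = 8.25, 8.10       # QALYs per participant (hypothetical)

ratio = icer(program_cost, usual_cost, program_qaly, usual_qaly)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # about $7,333 per QALY in this made-up case
```

A decision-maker would then compare that ratio against a willingness-to-pay threshold of the kind Marseille et al. (2014) discuss.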
Kuvaja-Köllner, Valtonen, Komulainen, Hassinen, and Rauramaa (2013) discussed the allocation of time to various activities based on the opportunity cost of time. The motivation toward one activity versus another is observed through behavior. For example, a person who decides not to attend a healthy lifestyle class is devoting that hour to another activity or event. Attending the National Diabetes Prevention Program (NDPP) likewise competes with alternative uses of that time. The assumption for this project is that time spent attending NDPP sessions socially jumpstarts a healthier lifestyle and supports weight loss through the program's goals.
Maintaining both mental and physical health is a major public health concern. There is no shortage of evidence to support the claim that there are multiple benefits of physical activity, and more particularly, green exercise/activity (Farrell & Price, 2013; Gladwell, Brown, Wood, Sandercock, & Barton, 2013b; Pretty et al., 2005). While there is no definitive guidance on the quantity or frequency of green exercise required for the best health outcomes, some researchers utilize a dose framework to examine the relationship between duration, frequency, and intensity of exposure to green exercise (Cox et al., 2017; Shanahan et al., 2016).
Employing a cost-effectiveness analysis can measure the overall public health benefit of the program by comparing what the preventive health services cost with the reduction in risk or the gain in quality-adjusted life years (QALYs) they produce (Getzen, 2013). The program should be able to demonstrate success by providing evidence of improved quality-adjusted life years.
After reviewing this week's learning resources as well as supplemental readings, it appears more cost-effective to allocate resources to public health programs that employ evidence-informed interventions and/or promising practices to support their outcomes (Getzen, 2013).
In a previous study, the cost-effectiveness analysis (CEA) results were inconclusive. First, many of the population-based approaches did not conduct formal CEA. As a result, the cost-effectiveness results from these studies need to be better understood, as they were a simple comparison of costs given a certain level of health benefit. Second, many of the cost-effectiveness results were estimated from governmental or health care system perspectives rather than a societal perspective. Third, the societal perspective defined in population-based approaches was not as inclusive as it was for high-risk approaches (Zhou et al., 2020). Some cost categories, such as productivity loss or time cost, were not included in the societal perspective.
References
Cox, D. T., Shanahan, D. F., Hudson, H. L., Fuller, R. A., Anderson, K., Hancock, S., & Gaston, K. J. (2017). Doses of nearby nature simultaneously associated with multiple health benefits. Multidisciplinary Digital Publishing Institute.
Farrell, H. C., & Price, L. (2013). The (unintended) benefits of green exercise. International Centre for Research in Events, Tourism, and Hospitality, Leeds ….
Getzen, T. E. (2013). Health economics and financing (5th ed.). Hoboken, NJ: John Wiley and Sons.
Kuvaja-Köllner, V., Valtonen, H., Komulainen, P., Hassinen, M., & Rauramaa, R. (2013). The impact of time cost of physical exercise on health outcomes by older adults: The DR's EXTRA study. Springer.
Marseille, E., Larson, B., Kazi, D. S., Kahn, J. G., & Rosen, S. (2014). Thresholds for the cost-effectiveness of interventions: Alternative approaches. SciELO Public Health.
Public Health Management Corporation (PHMC). (2017). Public health management corporation | Philadelphia public health, healthcare nonprofits Pennsylvania, public health resources. Retrieved from http://www.phmc.org/site/index.php
Zhou, X., Siegel, K. R., Ng, B. P., Jawanda, S., Proia, K. K., Zhang, X., Albright, A. L., & Zhang, P. (2020). Cost-effectiveness of diabetes prevention interventions targeting high-risk individuals and whole populations: A systematic review. Diabetes Care, 43(7). https://care.diabetesjournals.org/content/43/7/1593
Isaac Yah
RE: Discussion – Week 5
Week 5 Discussion: Cost-Effectiveness of Public Decisions
Description of Community and Intervention
In week two, I selected the Maryland Million Hearts Initiative, funded by the Centers for Disease Control and Prevention (CDC) at the state level, which aims to prevent 1 million heart attacks and strokes over a five-year period (cdc.gov, n.d.). The strategy of the initiative was to implement a small set of evidence-based programs at the community level to improve cardiovascular health. The Maryland Million Hearts Initiative will use community health workers deployed in communities to take part in a network of referral and monitoring of hypertension, improve identification of undiagnosed hypertension, and improve treatment to decrease emergency room visits for hypertension in Maryland by 5% (Maryland.gov, n.d.).
Tailoring and Description of Cost-Effectiveness Analysis for the Maryland Million Hearts Initiative
Cardiovascular disease is the leading cause of death in the United States, which prompted the CDC and the Centers for Medicare & Medicaid Services to launch the Million Hearts initiative in 2012 to reverse the trend (Kottke & Horst, 2019). Improving heart health and associated conditions and reducing heart attack and stroke rates positively impact the country financially. The initiative's components, including self-management education, community providers, clinical quality reporting, and many others, require costs to implement (Maryland.gov, n.d.). However, Getzen (2013) indicated that cost-benefit analysis looks at the trade-offs in the dollars spent to save lives and prevent diseases in decision-making, while cost-effectiveness analysis examines the cost side of an initiative.
Use of Cost-Effectiveness Analysis to Measure the Benefits of the Million Hearts Initiative
In cost-effectiveness analysis, we examine the cost, while cost-benefit analysis weighs the benefit against the cost. For example, in the Million Hearts initiative, strategies such as smoking cessation, dietary reduction of sodium consumption, hypertension self-care, and nutrition and physical education, to name a few, would cost millions of dollars to implement. However, according to the Million Hearts initiative report, there are 1.5 million heart attacks, 800,000 deaths, and $316.6 billion in health care costs and lost productivity each year in the United States (millionhearts.hhs.gov, 2011). Compared with the annual cost of treating heart attack and stroke patients, the benefit of preventing 1 million events over a five-year period is overwhelming.
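As a rough illustration of that comparison, the Python sketch below sets the cited national burden figures against a purely hypothetical five-year implementation budget; the $500 million program cost is an assumption made up for this example, not a figure from the Million Hearts reports.

```python
# Back-of-envelope comparison. The burden figures are the ones cited above;
# the program budget is HYPOTHETICAL and used only to show the arithmetic.
annual_burden_usd = 316.6e9          # yearly health care cost + lost productivity (cited)
events_prevented_goal = 1_000_000    # heart attacks/strokes to prevent over five years (cited)
program_cost_usd = 500e6             # assumed five-year implementation budget (hypothetical)

cost_per_event_prevented = program_cost_usd / events_prevented_goal
print(f"Hypothetical cost per event prevented: ${cost_per_event_prevented:,.0f}")
print(f"Program cost as a share of one year's national burden: "
      f"{program_cost_usd / annual_burden_usd:.2%}")
```

Even under these made-up budget numbers, the program cost is a small fraction of a single year's burden, which is the intuition behind calling the benefits incomparable.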
Is It Cost-Effective to Allocate Resources to One Public Health Program Versus Another?
Considering the magnitude of cardiovascular disease and associated conditions, it is cost-effective to allocate resources to the Million Hearts initiative. For example, Kottke and Horst (2019) reported that the Million Hearts goals of controlling risk factors by promoting a healthy diet, physical activity, abstinence from tobacco, hypertension treatment, and dyslipidemia management would prevent or postpone more than 50% of all deaths in the US middle-aged population. Allocating resources to the Million Hearts initiative would not only prevent deaths from hypertension and strokes but also affect diabetes, kidney disease, and other chronic diseases. Heart disease kills roughly the same number of people each year in the US as cancer, pneumonia, and accidents combined (millionhearts.hhs.gov, 2011).
References
Centers for Disease Control and Prevention (CDC.gov). (n.d.). Million Hearts partnerships 2012–2016: Key evaluation findings and successes. Retrieved from cdc.gov/dhdsp/evaluation_resources/mh-partner-network-evaluation.htm
Getzen, T. E. (2013). Health economics and financing (5th ed.). Hoboken, NJ: John Wiley and Sons.
Kottke, T. E., & Horst, S. (2019). We can save million hearts. The Permanente Journal, 23, 18-289. https://doi.org/10.7812/TPP/18-289
Maryland Department of Health and Mental Hygiene (Maryland.gov). (n.d.). Maryland Million Hearts implementation guide: Aligning and guiding statewide efforts. Retrieved from https://chronicdisease.org/resources/resmgr/diabetes_webinar/million_hearts_implementation.pdf
Million Hearts. (2011). Costs and consequences: Key facts. Retrieved from https://millionhearts.hhs.gov/learn-prevent/cost-consequences.html
Ijeoma Nwazuruokeh 
RE: Discussion – Week 5
Statistical Significance and Meaningfulness
Statistically significant relationships in research do not necessarily make a meaningful contribution to the literature (Laureate Education, 2016f). Research studies with statistically significant variables will not always have theoretical importance; instead, hypothesis testing remains the primary model used to derive statistical inferences (Frankfort-Nachmias et al., 2020). According to the American Statistical Association (ASA), the importance of a result cannot be measured only by a p-value or statistical significance (ASA, 2016).
It is essential to know what is being tested and to relate this to the population in the study (Walden University, 2021). When the research work is deemed exploratory, the researcher is examining a new area of research, and the results will not be able to drive policy decisions. A Type I error occurs when a true null hypothesis is rejected. In their footnote, the researchers were letting readers know that they dropped the confidence level from 95%, which is standard for social research, to 90%, so the study accepts a 10% risk of rejecting a true null hypothesis (Frankfort-Nachmias et al., 2020). Relaxing the criterion to the .10 level increases the chance that incorrect decisions are made. As a reader, the footnote tells me that this research cannot be relied on as a definitive reference, because the reported statistical significance is more open to Type I and Type II errors. Nevertheless, the work should not be overlooked, because it can be the beginning of a research question worthy of attention.
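A small simulation can show what that relaxation costs. The Python sketch below uses an assumed two-group t-test setup (not the design of the study in the scenario): both groups are drawn from the same distribution, so the null hypothesis is true, and the code counts how often it is nonetheless rejected at the .05 versus the .10 level.

```python
# Simulation sketch (assumed setup): with the null hypothesis true, the share of
# "significant" results approximates the Type I error rate, so relaxing alpha
# from .05 to .10 roughly doubles the rate of false rejections.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 10_000, 50
p_values = np.empty(n_sims)

for i in range(n_sims):
    a = rng.normal(0, 1, n)   # two samples from the SAME population,
    b = rng.normal(0, 1, n)   # i.e., the null hypothesis is true
    p_values[i] = stats.ttest_ind(a, b).pvalue

print(f"Rejections at alpha = .05: {np.mean(p_values < 0.05):.3f}")  # close to 0.05
print(f"Rejections at alpha = .10: {np.mean(p_values < 0.10):.3f}")  # close to 0.10
```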
References
American Statistical Association. (2016). American Statistical Association releases statement on statistical significance and p-values. Retrieved from https://www.amstat.org/asa/files/pdfs/P-ValueStatement.pdf
Frankfort-Nachmias, C., Leon-Guerrero, A., & Davis, G. (2020). Social statistics for a diverse society (9th ed.). Sage.
Laureate Education (Producer). (2016f). Meaningfulness vs. statistical significance [Video]. Baltimore, MD: Author.
Walden University. (2021). Academic Skills Center: Hypothesis testing [Video]. Retrieved from https://academicguides.waldenu.edu/academic-skills-center/tutors/stats-drop-in-rsch-8210
Joy Garba 
RE: Discussion – Week 5
Statistical Significance and Meaningfulness
According to Bhandari et al. (2016), a hypothesis is statistical when it concerns the parameters or the probability distribution of the population that generates the observations. The Type I error rate, or actual alpha level, increases as a function of the number of tests conducted (Stroebe & Strack, 2014). An inflated alpha level occurs because the researcher's nominal alpha level was intended to cover only one opportunity to reject the null hypothesis, yet it is applied to two opportunities (Stroebe & Strack, 2014).
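Under the common assumption of independent tests, that inflation can be computed directly from the family-wise error rate, FWER = 1 - (1 - alpha)^k. The short Python sketch below shows that a nominal alpha of .05 applied to two tests yields an actual rate of about .0975.

```python
# Family-wise error rate across k independent tests at nominal level alpha.
def familywise_error_rate(alpha: float, k: int) -> float:
    """Probability of at least one Type I error in k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 2, 5, 10):
    print(f"{k} test(s) at alpha = .05 -> FWER = {familywise_error_rate(0.05, k):.4f}")
# 1 -> 0.0500, 2 -> 0.0975, 5 -> 0.2262, 10 -> 0.4013
```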
Also, statistical tools such as p-values and confidence intervals are meaningful in confirmatory analyses (Rubin, 2017). In turn, preregistration is one of the very few ways to verify that the presented analyses were indeed confirmatory. Researchers who do exploratory work cannot interpret the outcomes of their statistical tests in a meaningful way because the number of comparisons is unknown, while researchers who wish to interpret their tests' outcomes meaningfully are expected to preregister their analyses; preregistration is the price one pays for being allowed anywhere near a statistical test.
Misconception and Misuse of the p-value
P-values can indicate how incompatible the data are with a specified statistical model. A common misuse of p-values is that they are often turned into statements about the truth of the null hypothesis. P-values do not measure the probability that the studied hypothesis is true or the probability that the data were produced by random chance alone (Greenland et al., 2016). The American Statistical Association (2016) states that p-values can indicate how incompatible the data are with a specified statistical model and are frequently misused or misinterpreted. Ranstam (2012) concluded that p-values can be highly misleading measures of evidence because their use makes it relatively easy to obtain statistically significant findings, such that p = .05 can indicate essentially no evidence against H0.
A p-value or statistical significance does not measure the size of an effect or the importance of a result (Greenland et al., 2016). A smaller p-value does not indicate a more robust association or a larger effect, and it tells little about the magnitude of the association.
Response to Scenario
Exploratory research is carried out to investigate a timely phenomenon that has not been studied before or has not been well explained, in order to provide detail where only a small amount of information exists. From the scenario, given that this research was exploratory, the traditional level of significance used to reject the null hypothesis was relaxed to the .10 level.
Given that H0: the relationship between the predictor and the response variable equals 0, and HA: the relationship does not equal 0.
To reject the null hypothesis (H0), p must be ≤ .10, which would indicate a statistically significant relationship between the predictor and response variables (Warner, 2012). If the null hypothesis (H0) is rejected when it is true, a Type I error is committed. On the other hand, if the null hypothesis (H0) is not rejected when it is false, a Type II error is committed.
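The flip side of the Type I risk is the Type II risk, and a brief simulation can illustrate the trade-off behind the footnote. In the Python sketch below, the effect size, sample size, and two-group t-test design are invented for illustration, not taken from the scenario; because a real effect is present, the .10 level rejects the (false) null more often than the .05 level, so the Type II error rate falls.

```python
# Simulation sketch (hypothetical effect size and sample size): when the
# alternative hypothesis is true, a looser alpha raises power and lowers
# the Type II error rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n, true_effect = 10_000, 40, 0.4   # invented values
p_values = np.empty(n_sims)

for i in range(n_sims):
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)    # the groups genuinely differ here
    p_values[i] = stats.ttest_ind(a, b).pvalue

for alpha in (0.05, 0.10):
    power = np.mean(p_values < alpha)
    print(f"alpha = {alpha:.2f}: power = {power:.2f}, Type II error rate = {1 - power:.2f}")
```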
The study results are significant because they provide evidence for the researcher to conclude that a relationship exists between the predictor and response variables.
References
American Statistical Association. (2016). American Statistical Association Release Statement on Statistical Significance and P-Values. Retrieved from https://www.amstat.org/asa/files/pdfs/P-ValueStatement.pdf
Bhandari, M., Montori, V. M., & Schemitsch, E. H. (2016). The undue influence of significant p-values on the perceived importance of study results. Acta Orthopaedica, 76(3), 291-295.
Warner, R. M. (2012). Applied statistics from bivariate through multivariate techniques (2nd ed.). Thousand Oaks, CA: Sage Publications. Chapter 3, "Statistical significance testing" (pp. 81-124).
Greenland, S., Senn, S. J., Rothman, K. J., Carlin, J. B., Poole, C., Goodman, S. N., & Altman, D. G. (2016). Statistical tests, p values, confidence intervals, and power: A guide to misinterpretations. European Journal of Epidemiology, 31(4), 337-350.
Nuzzo, R. (2014). Scientific method: Statistical errors. Nature, 506(7487), 150-152.
Ranstam, J. (2012). Why the p-value culture is bad and confidence intervals a better alternative. Osteoarthritis and Cartilage, 20(8), 805-808. https://doi.org/10.1016/j.joca.2012.04.001
Rubin, M. (2017). Do p values lose their meaning in exploratory analyses? It depends on how you define the family-wise error rate. Review of General Psychology, 21, 269-275. https://doi.org/10.1037/gpr0000123
Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science, 9, 59-71. https://doi.org/10.1177/1745691613514450
