Degrees of freedom (df) measure the amount of information in the data that can be used to estimate unknown population parameters and to assess the variability of those estimates. Df can simply be defined as the number of scores or values in a sample or final statistic that are free to vary. The first step in calculating df is determining the type of statistical test to be used, e.g. a chi-squared test or a t-test, since each test statistic follows its own sampling distribution. The second step is to determine the number of independent values in the data: a sample of N independent values carries N degrees of freedom, and if the sample mean is subtracted from each value (so that one quantity is estimated from the data itself), the remaining degrees of freedom become N - 1.
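The two-step rule above can be sketched in Python; the helper names are illustrative, not from any particular library:

```python
# Degrees of freedom for two common tests. Estimating a parameter from
# the data "uses up" one value, leaving the rest free to vary.

def df_one_sample_t(n):
    # One-sample t-test: the mean is estimated from the data, so n - 1.
    return n - 1

def df_chi_squared(rows, cols):
    # Chi-squared test of independence on an r x c contingency table:
    # row and column totals are fixed, leaving (r - 1) * (c - 1) free cells.
    return (rows - 1) * (cols - 1)

print(df_one_sample_t(30))   # 29
print(df_chi_squared(3, 4))  # 6
```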
Inferential statistics allow a researcher to draw conclusions about a large population from sample data; for example, they can be used to assess the significance of a difference observed between two groups, or to determine the strength of the relationship between variables.
The General Linear Model (GLM) is an ANOVA-type procedure that is used to determine whether the means of two or more groups differ. It models the relationship between a continuous response variable and a set of covariates and factors using least squares regression. The model enhances the ability to analyze and accurately summarize what is occurring in the data (Trochim & Donnelly, 2008). The model is important because of its generality, which makes it central to social research. It serves as the basis of many other multivariate methods such as ANCOVA, ANOVA, and the t-test, and it has been the foundation for more advanced models such as Hierarchical Linear Modeling (HLM) and Multilevel Structural Equation Modeling (SEM).
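The least-squares machinery at the heart of the GLM can be sketched for the simplest case, a single continuous predictor, using only the standard library (the function name and data are illustrative):

```python
# Minimal sketch of least squares: fit y = b0 + b1*x by the closed-form
# solution (slope = covariance of x and y divided by variance of x).
from statistics import mean

def ols_fit(x, y):
    mx, my = mean(x), mean(y)
    b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
         sum((xi - mx) ** 2 for xi in x)
    b0 = my - b1 * mx            # intercept from the point of means
    return b0, b1

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 8.0, 9.8]
b0, b1 = ols_fit(x, y)
print(round(b1, 2))  # 1.95
```

ANOVA and the t-test arise as special cases of this same model when the predictors are group-membership indicators rather than continuous covariates.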
A test is referred to as parametric if knowledge about the population is expressed through its parameters, e.g. the z-test and t-test, while a test is referred to as non-parametric if information about the population is not derived from its parameters, e.g. the Kruskal-Wallis test. A non-parametric test makes no assumptions about the population distribution, whereas a parametric test makes assumptions about the population under study. In non-parametric tests the null hypothesis is free of population parameters, while in parametric tests the null hypothesis is defined in terms of the population distribution. Parametric tests apply only to measured variables, whereas non-parametric tests can be based on both variables and attributes, since the test statistic is distribution-free. When the data consist of rankings, when preferences are being assessed, or when the assumptions required for a parametric test cannot be met, a non-parametric test should be used instead.
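The core move of rank-based non-parametric tests such as Kruskal-Wallis is to replace raw scores with their ranks, so that no assumption about the population distribution is needed. A minimal sketch of that ranking step (illustrative helper; tied values share the average of their ranks):

```python
# Convert raw scores to ranks (1 = smallest). Ties receive the average
# of the rank positions they occupy, as in the Kruskal-Wallis procedure.

def ranks(values):
    order = sorted(values)
    return [sum(i + 1 for i, v in enumerate(order) if v == x) /
            order.count(x) for x in values]

print(ranks([10, 50, 20]))   # [1.0, 3.0, 2.0]
print(ranks([10, 10, 20]))   # tied 10s share ranks 1 and 2: [1.5, 1.5, 3.0]
```

The test statistic is then built from these ranks rather than from the raw scores, which is why outliers and skew have little influence on the result.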
Numerous statistical tests have assumptions that must be met to ensure that the data collected are appropriate for the kinds of analyses one wants to carry out. Common assumptions for parametric tests include linearity, normality, homoscedasticity, and independence (Osborne & Waters, 2002). Failing to meet these assumptions, among others, can produce inaccurate results. Running analyses on data that violate the assumptions of a statistical test can yield wrong conclusions, depending on which assumption is not met. If the dependent variable is not normally distributed, a transformed variable that satisfies the normality criterion can be used, e.g. a square root, inverse, or log transformation (Myers, Well & Lorch, 2010). If none of these transformations satisfies normality, the untransformed variable can be used, with a caution added about the violated assumption.
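The effect of such a transformation can be sketched by applying a log transform to a right-skewed variable and checking sample skewness before and after (the data are illustrative; skewness near zero is one indicator of approximate normality):

```python
# Compare skewness of a right-skewed variable before and after a log
# transform. Skewness = third central moment / (population SD cubed).
import math
from statistics import mean, pstdev

def skewness(xs):
    m, s = mean(xs), pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

raw = [1, 2, 2, 3, 4, 5, 8, 15, 40]      # strongly right-skewed
logged = [math.log(x) for x in raw]       # log transform

print(round(skewness(raw), 2))     # large positive skew
print(round(skewness(logged), 2))  # much closer to 0 after transforming
```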
A p-value is the probability of obtaining the observed data, or more extreme data, if the null hypothesis is correct; thus p = 0.05 means there is a 5% probability of rejecting the null hypothesis when it is in fact true, which is referred to as a Type I error or a false positive. One common misconception concerns the word 'significant': it does not mean that a truly important finding, or an especially large difference or relationship, was found. It means only that the probability of the data occurring by chance is low enough to cast doubt on the null hypothesis. A finding that falls below 0.02 is not necessarily larger, smaller, or more important than one that falls just below 0.05, yet this is a common and surprisingly easy inference to make. A second misconception is that the p-value gives the probability that the null hypothesis is correct. It gives the probability of the data, assuming the null is correct. One may reject the null hypothesis because the data fell below the 0.05 level and because the low probability of such a result casts suspicion on it, but that does not mean the null hypothesis is correct 5% of the time (Schmitz, 2007). Schmidt (2010) lists the main misconceptions as follows: that significance indicates the size of a relationship, that it guarantees reliable replication, that a non-significant result indicates no relationship, that significance guarantees impartiality, and that significant results are essential for research to contribute to the field.
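The meaning of p < .05 as a Type I error rate can be illustrated by simulation: when the null hypothesis is true, roughly 5% of experiments still reject it. A sketch using a two-sided z-test with known variance (the setup and sample sizes are illustrative):

```python
# Simulate many experiments in which the null (mu = 0) is actually true
# and count how often p < .05 rejects it anyway: the Type I error rate.
import random
from statistics import NormalDist, mean

random.seed(1)
n, trials, alpha = 50, 2000, 0.05
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # null is true here
    z = mean(sample) / (1 / n ** 0.5)                # z statistic (sigma = 1)
    p = 2 * (1 - NormalDist().cdf(abs(z)))           # two-sided p-value
    if p < alpha:
        rejections += 1

print(rejections / trials)  # close to 0.05, as alpha promises
```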
When multiple null hypotheses are being tested, the significance level should be lowered so that the overall risk of a false positive across the analysis does not become too large. While a significance level of 0.05 is conventional, there is nothing sacred about that particular level of Type I error probability. Faul et al. (2007) concluded that there is no reason why the exact significance level of results should not be reported in reports and articles.
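The problem and one common remedy can be sketched numerically; the original text does not name a specific correction, so the Bonferroni adjustment below is an illustrative choice:

```python
# Familywise error: the chance of at least one false positive grows
# quickly with the number of independent tests at a fixed alpha.

def familywise_error(alpha, m):
    return 1 - (1 - alpha) ** m

# Bonferroni correction: divide alpha by the number of tests so the
# familywise rate stays near the nominal level.
def bonferroni_alpha(alpha, m):
    return alpha / m

print(round(familywise_error(0.05, 10), 2))  # about 0.4 across 10 tests
print(bonferroni_alpha(0.05, 5))             # 0.01 per-test threshold
```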
Cohen (1992) identifies four components of statistical power analysis: effect size, sample size, significance level, and power. Effect size and statistical significance work together to determine the power of a study: as the significance criterion becomes stricter or the expected effect size becomes smaller, the minimum sample size needed to maintain sufficient power increases. An effect size refers to the magnitude of the result as it occurs, or would be found, in nature or in a population. Although effects can be observed in the artificial environment of a laboratory or a sample, effect sizes exist in the real world (Ellis, 2010). An effect size is used to show the variance accounted for in a particular study. Statistical significance, on the other hand, is the maximum allowable risk of rejecting the null hypothesis when it is correct, i.e. of committing a Type I error.
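The interplay of these four components can be sketched with the standard large-sample (z) approximation for a two-group comparison; this is a simplification of the exact power calculations Cohen describes:

```python
# Minimum n per group for a two-sided, two-group test with standardized
# effect size d, significance level alpha, and desired power.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    # Normal-approximation formula: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

print(n_per_group(0.5))   # medium effect: 63 per group
print(n_per_group(0.2))   # small effect: 393 per group
```

Halving the effect size roughly quadruples the required sample, which is exactly the trade-off described above.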
The main difference between a statistically significant result and a clinically significant result lies in the interpretation of the outcome. Statistical significance only indicates whether the p-value falls below the alpha level; it provides limited information about the result itself, for instance it says nothing about the magnitude of the change, i.e. the effect size. Clinical significance goes beyond the difference between scores to determine the meaningfulness of the outcome by considering clinical relevance and effect size. For example,
“A large study might find that a new antihypertensive drug lowered BP, on average, 1 mm Hg more than conventional treatments. The results were statistically significant with a P Value of less than .05 because the study was large enough to detect a very small difference. However, most clinicians would not find the 1 mm Hg difference in blood pressure large enough to justify changing to a new drug. This would be a case where the results were statistically significant (p value less than .05) but clinically insignificant” (Guyatt et al., 2008).
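The blood-pressure example can be sketched numerically; the sample size and standard deviation below are illustrative assumptions, not figures from the quoted study:

```python
# With a very large sample, a 1 mm Hg difference is statistically
# significant (p < .05) yet the effect size is clinically negligible.
from statistics import NormalDist

n, diff, sd = 10000, 1.0, 20.0       # per-group n, mean difference, SD
se = sd * (2 / n) ** 0.5             # standard error of the difference
z = diff / se
p = 2 * (1 - NormalDist().cdf(z))    # two-sided p-value
d = diff / sd                        # Cohen's d, the effect size

print(p < 0.05)  # True: statistically significant
print(d)         # 0.05: far below even a "small" effect (d = 0.2)
```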
Null hypothesis significance testing (NHST) is a statistical procedure used to determine whether one or more variables have an effect on another variable; the t-test is one example. According to Kirk (2003), NHST assumes that any difference in means under the null hypothesis is due principally to chance, or sampling variance. Other assumptions depend on the type of test, e.g. that the data come from a normal distribution with equal variance in each group, and that any missing data are missing completely at random.
The first criticism of NHST is that it creates confusion between substantive and statistical significance, because the importance of a result actually depends on the effect size found and its ability to be replicated. The second criticism is that NHST is widely misunderstood, as the misconceptions described above illustrate. The third criticism is what Cohen (1994) called the 'inverse probability error': NHST gives the opposite of what the researcher wants to know, i.e. it provides the probability of the data given the null hypothesis, rather than the probability of the hypothesis given the data.
An alternative to NHST is Bayesian methods, which, contrary to NHST, tell the researcher what they actually want to know, such as the probability distribution of a parameter and the probability that a hypothesis is true. Their main advantage is that they allow prior knowledge to be incorporated, which is not possible in NHST. A second alternative is confidence intervals, which provide an optimal approach to integrating findings across the literature (McGrath, 2011). The advantage of confidence intervals is that they are more objective and present more information than NHST.
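The extra information a confidence interval provides can be sketched with the standard library, using the large-sample z approximation (the data below are illustrative):

```python
# A 95% confidence interval for a mean reports both an estimate and its
# precision, rather than a bare reject/retain decision.
from statistics import NormalDist, mean, stdev

data = [4.1, 5.2, 6.0, 5.5, 4.8, 5.9, 5.1, 4.6, 5.3, 5.7]
m = mean(data)
se = stdev(data) / len(data) ** 0.5      # standard error of the mean
z = NormalDist().inv_cdf(0.975)          # 1.96 for a 95% interval
lo, hi = m - z * se, m + z * se

print(round(lo, 2), round(hi, 2))  # plausible range for the true mean
```

Unlike a lone p-value, the width of the interval conveys how precisely the parameter has been estimated, which is what makes intervals useful for integrating results across studies.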