Conducting hypothesis tests and reporting findings as significant when the p-value is less than .05 is often deemed a sufficient analysis of research data, but this practice is widely overused and misused. In an effort to address such p-value abuse, this tutorial suggests an alternative method for assessing significance. Specifically, it provides an introduction to computing effect size (ES), conducting meta-analysis, and using the resulting information to aid scientific inference. Calculating an effect size, which takes the samples' own variability into account, allows the magnitude of the treatment effect to be determined. ES is calculated in a variety of ways, depending on the type of research being conducted and the type of data available. This tutorial focuses on explaining the different calculations of ES and how they may help the researcher decide whether a statistically significant result actually matters in practice. With the aid of numerical examples, the two main approaches, the standardized difference between means (including Cohen's d, Hedges's g, and Glass's delta) and the effect size correlation, are discussed and interpreted in detail, extending the usual evaluation of “significance.” As a technique for combining and summarizing the results of several related studies, meta-analysis offers a useful “synthesis” approach that takes sampling and study variability into consideration when making a scientific inference. Rather than basing a decision on a single small study, meta-analysis estimates the overall effect of an intervention or treatment. Together with ES, meta-analysis provides accumulated evidence beyond individual significance tests, which in turn aids researchers in making appropriate inferences and judgments about the “practical significance” of an intervention or treatment. Finally, this tutorial introduces related software for ES calculation and meta-analysis.

Keywords: assessment, measurement/evaluation, research
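
As a rough, self-contained illustration of the quantities named above (a minimal sketch in Python, not code from the tutorial itself; the function names and sample data are hypothetical), the standardized-mean-difference measures, the d-to-r conversion, and a simple fixed-effect pooling can be computed as follows:

import math

def cohens_d(x, y):
    # Cohen's d: mean difference divided by the pooled standard deviation.
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # unbiased sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

def hedges_g(x, y):
    # Hedges's g: Cohen's d with the usual small-sample bias correction
    # J = 1 - 3 / (4N - 9), where N is the total sample size.
    n_total = len(x) + len(y)
    return cohens_d(x, y) * (1 - 3 / (4 * n_total - 9))

def glass_delta(treatment, control):
    # Glass's delta: mean difference divided by the control group's SD only,
    # useful when the treatment may have changed the variance.
    mt = sum(treatment) / len(treatment)
    mc = sum(control) / len(control)
    vc = sum((v - mc) ** 2 for v in control) / (len(control) - 1)
    return (mt - mc) / math.sqrt(vc)

def effect_size_r(d, nx, ny):
    # Effect size correlation r converted from d; for equal group sizes
    # the correction term a reduces to 4, giving r = d / sqrt(d^2 + 4).
    a = (nx + ny) ** 2 / (nx * ny)
    return d / math.sqrt(d ** 2 + a)

def fixed_effect_pool(effects, variances):
    # Minimal fixed-effect meta-analysis: inverse-variance weighted mean of
    # per-study effect sizes, plus the standard error of the pooled estimate.
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Hypothetical usage: two small samples, then a toy pooling of three studies.
treated = [7.2, 6.8, 8.1, 7.5, 6.9]
control = [6.1, 5.9, 6.5, 6.3, 6.0]
d = cohens_d(treated, control)
print(d, hedges_g(treated, control), glass_delta(treated, control))
print(effect_size_r(d, len(treated), len(control)))
print(fixed_effect_pool([0.42, 0.55, 0.31], [0.04, 0.02, 0.05]))

Read against the conventional benchmarks (roughly 0.2 small, 0.5 medium, 0.8 large for d), such numbers help a researcher judge whether a “significant” result is also practically meaningful.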