Advances in Social Sciences Research Journal – Vol.7, No.11
Publication Date: November 25, 2020
DOI: 10.14738/assrj.711.9146
Ritter, N. L. (2020). Statistical Methods Used In Educational Technology Research 2012-2013. Advances in Social Sciences Research Journal, 7(11), 31-46.
Statistical Methods Used In Educational Technology Research 2012-2013
Nicola L. Ritter
ABSTRACT
This article provides a content analysis of the research methodologies
used in quantitative and mixed-methods articles in the top five
educational technology journals between 2012 and 2013. These articles
represented a total of 32,131 sampling procedures and statistical
techniques recorded from 1,171 articles – the largest synthesis of
research methodologies in the field of educational technology to date.
Results indicate quantitative methods continue to dominate the field as
a whole, yet specific journals appear to favor certain research methods
over others. Most authors did not report the type of sampling procedure
used in their investigations (617 articles). Fewer researchers reported
score reliability estimates based on their own data, with only 420 articles
reporting reliability coefficients. Findings also suggest that few authors
reported informationally-adequate statistics. Recommendations for
best statistical practices and implications for the field of educational
technology are discussed.
INTRODUCTION
Is educational technology research pseudo-scientific or filled with statistical blunders? Mitchell
(1997) and Reeves (2000) suggested this over two decades ago. However, Mitchell and Reeves did
not report any empirical evidence at the time to support these claims. The purpose of the current
study is to determine whether these claims are accurate. Research syntheses of research
methodologies are commonly published in various fields, including education. Methodological
reviews come in multiple forms, such as investigations of specific techniques published within a
single journal or across multiple journals. For example, Willson (1980) reviewed articles published
in American Educational Research Journal (AERJ) with respect to statistical methodology. More
recently, disciplines within education have also begun to evaluate studies across multiple journals.
For example, Warne, Lazo, Ramos, and Ritter (2012) reported the statistical techniques in five gifted
education journals.
Educational technology researchers have also conducted methodological reviews. Most researchers
identified the types of research methods used in educational technology literature (e.g., Koble &
Bunker, 1997; Wang & Lockee, 2010). Other researchers extended their investigations beyond
identifying research methods to synthesize the types of research designs (e.g., Cheung & Hew, 2009;
Şimşek, Özdamar, Uysal, Kobak, Berk, Kılıçer, & Çiğdem, 2009). Likewise, reviews may investigate
other methodological issues such as sampling method, reliability, and statistical techniques (e.g.,
Alper & Gülbahar, 2009).
LITERATURE REVIEW
Previous reviews adequately investigated the empirical nature, research methods, and research
designs used in educational technology. Educational technology researchers captured information
about the literature’s empirical and non-empirical nature (e.g., Chen & Hirschheim, 2004;
Farhoomand & Drury, 1999; Hrastinski & Keller, 2007). Empirical articles are published more often
than non-empirical articles and the ratio between the two has remained constant over the past
decade. Similarly, many researchers identified the research methods used in educational technology
(Koble & Bunker, 1997; Rourke & Szabo, 2002; Wang & Lockee, 2010). Historically, quantitative
methods dominated the field, yet today the field has seen a more balanced use of quantitative,
qualitative, and mixed-methodologies. Lastly, researchers identified the experimental designs used,
despite the lack of explicit reporting in the original articles. Identifying trends in experimental
designs is particularly difficult given the variation in classification schemes. Specifically, a
synthesis among five reviews resulted in 14 different experimental design categories, with some
overlapping others (Cheung & Hew, 2009; Koble & Bunker, 1997; Peterson-Karlan, 2011; Randolph,
Julnes, Sutinen, & Lehman, 2008; Shih, Feng, & Tsai, 2008; Şimşek et al., 2009).
Despite the variety of previous reviews in educational technology, few researchers have reviewed
sampling procedures, score reliability, or statistical techniques. Researchers found convenience
samples to plague the field, yet the actual sampling procedures are often not directly reported
(Alper & Gülbahar, 2009; Randolph et al., 2008). Likewise, the few researchers who reviewed score
reliability information found poor reporting practices (Lee, Driscoll, & Nelson, 2004, 2007;
Randolph, 2008). Finally, researchers who evaluated statistical techniques simply reported a brief
list due to small sample sizes. For example, the list from Lee et al. (2004) was compiled from 47
articles, while the list from Lee et al. (2007) was compiled from 88 articles. The difficulty with the
findings from the aforementioned studies is their small sample sizes and the lack of discussion of
these specific areas. As such, more information and discussion about sampling procedures, score
reliability, and statistical techniques, based on a large sample, is needed.
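Score reliability, in this context, refers to coefficients estimated from a researcher's own data rather than values borrowed from prior studies. As a minimal illustration of what such reporting involves, the following Python sketch computes Cronbach's alpha for a hypothetical respondents-by-items score matrix; the data and function are illustrative and not drawn from the reviewed studies.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: five respondents answering a four-item scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # reported with the authors' own data
```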
Researchers have a clear picture of the empirical nature, research methods, and research designs
used in educational technology. Nevertheless, more information is needed about sampling
procedures, score reliability, and statistical techniques used in the field. The need for this review
arises due to numerous claims that the educational technology field is pseudo-scientific and lends
itself to unsupported conclusions based on poor measurement practice and statistical blunders (Mitchell,
1997; Reeves, 2000). However, little to no empirical evidence is used to support such claims.
Additionally, some researchers examined trends over time and across journals (e.g., Alper &
Gülbahar, 2009; Chen & Hirschheim, 2004; Hrastinski & Keller, 2007; Koble & Bunker, 1997; Rourke
& Szabo, 2002; Şimşek et al., 2009), while others did not observe trends (e.g., Cheung & Hew, 2009;
Farhoomand & Drury, 1999; Lee et al., 2004, 2007; Randolph, 2008; Randolph et al., 2008). Although
these topics are explored to some extent, little is known about the statistical trends in educational
technology. Moreover, few reviewed the educational technology field as a whole. For these reasons,
a thorough synthesis of the field regarding the use of sampling procedures, score reliability, and
statistical techniques is needed.
Methodological reviews benefit the publishing, research, and teaching communities. The current
study can offer authors, editors, and reviewers insight to the publishing trends in high impact
educational technology journals. This study also identifies some of the strengths and weaknesses of
current statistical techniques used in educational technology research. Lastly, educational
technology doctoral programs can use this information to determine the types of statistical
techniques their graduate students need to interpret and conduct research. By reviewing the use of
sampling methods, score reliability, and statistical techniques in particular, the field may begin to
determine the extent to which, as other researchers claim, educational technology has “not yet
held to the standards of experimental research in other fields of social science” (Lee et al., 2007, p.
40).
The purpose of the present article is to offer a comparable review of articles in educational
technology, identifying which sampling procedures and statistical techniques are used and whether
discernible trends emerged within journals and across the discipline over the past two years. Essentially, this
review models Warne et al. (2012), who reviewed articles across multiple journals. Where other
educational technology reviews are limited to specific areas, this study reviews the field as a whole.
Moreover, this study moves beyond classifying research methods and examines the sampling
procedures and statistical techniques used in educational technology research.
METHODS
The present methodological review examined all articles published in five influential
educational technology journals over a two-year period, from 2012 to 2013. The journals were
selected for review based on their impact factors in the 2011 Journal Citation Reports®
(JCR®) Social Science Edition. The journals, in rank order, were: Computers & Education (C&E),
International Journal of Computer-Supported Collaborative Learning (ijCSCL), British Journal of
Educational Technology (BJET), Australasian Journal of Educational Technology (AJET), and
Educational Technology Research and Development (ETR&D).
A coding scheme was created based on three sources: previous reviews (Skidmore & Thompson,
2010; Warne et al., 2012), publication standards (American Psychological Association, 2010; APA
Publications & Communications Board Working Group on Journal Article Reporting Standards,
2008; Wilkinson & the Task Force on Statistical Inference, 1999), and meta-analytic coding
suggestions (Cooper, 2010; Lipsey & Wilson, 2001). First, previous reviews were consulted, which
led to an initial coding scheme. Based on the findings of previous reviews, it was observed that the
original articles under review did not follow publication standards. As a result, the statistical
reporting standards from the American Psychological Association (2010), APA Publications and
Communications Board Working Group on Journal Article Reporting Standards (2008), and
Wilkinson and the Task Force on Statistical Inference (1999) were appraised, which expanded the
coding scheme to include specific statistics such as effect sizes. Lastly, works on meta-analytic
coding were reviewed, which expanded the coding scheme to include the type of information to
report in original articles to best synthesize information for future use (e.g., reporting standard
deviations with means or accompanying means and standard deviations with correlation matrices).
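The rationale for such informationally-adequate reporting is that summary statistics are only synthesizable when they travel together: a standardized mean difference, for instance, can be recomputed by later reviewers only if means, standard deviations, and sample sizes are all reported. A minimal Python sketch of that computation follows; the summary values are hypothetical, chosen only to illustrate the calculation.

```python
import math

def cohens_d(m1: float, sd1: float, n1: int,
             m2: float, sd2: float, n2: int) -> float:
    """Standardized mean difference between two groups using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical treatment/control summary statistics, as an original article
# following the reporting standards would present them
print(f"d = {cohens_d(m1=78.2, sd1=10.1, n1=30, m2=72.5, sd2=9.4, n2=32):.2f}")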
The coding sheet and definitions are located in the supplementary materials section. Note that
although some statistical analysis techniques are computationally equivalent to one another, each
technique was coded according to how the author of the article referred to it.
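Because the coding scheme yields nominal category labels, agreement between two coders on such a scheme is commonly summarized with a chance-corrected index such as Cohen's kappa. The sketch below shows one way such an index can be computed; the coder labels and categories are hypothetical, not taken from the actual coding data.

```python
from collections import Counter

def cohens_kappa(coder_a: list, coder_b: list) -> float:
    """Chance-corrected agreement between two coders rating the same items."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if both coders assigned labels independently at
    # their observed marginal rates
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical statistical-technique labels assigned independently by two coders
a = ["t-test", "ANOVA", "ANOVA", "regression", "t-test", "ANOVA"]
b = ["t-test", "ANOVA", "regression", "regression", "t-test", "t-test"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```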
A total of 32,131 items from 1,171 articles were recorded. The inter-rater reliability between two,