
New advances in technology are upending education, from the recent debut of new artificial intelligence (AI) chatbots like ChatGPT to the growing accessibility of virtual-reality tools that expand the boundaries of the classroom. For educators, at the heart of it all is the hope that every learner gets an equal chance to develop the skills they need to succeed. But that promise is not without its pitfalls.

“Technology is a game-changer for education – it offers the prospect of universal access to high-quality learning experiences, and it creates fundamentally new ways of teaching,” said Dan Schwartz, dean of the Stanford Graduate School of Education (GSE), who is also a professor of educational technology at the GSE and faculty director of the Stanford Accelerator for Learning. “But there are a lot of ways we teach that aren’t great, and a big fear with AI in particular is that we just get more efficient at teaching badly. This is a moment to pay attention, to do things differently.”

For K-12 schools, this year also marks the end of the Elementary and Secondary School Emergency Relief (ESSER) funding program, which has provided pandemic recovery funds that many districts used to invest in educational software and systems. With these funds running out in September 2024, schools are trying to determine their best use of technology as they face the prospect of diminishing resources.

Here, Schwartz and other Stanford education scholars weigh in on some of the technology trends taking center stage in the classroom this year.

AI in the classroom

In 2023, the big story in technology and education was generative AI, following the introduction of ChatGPT and other chatbots that produce text seemingly written by a human in response to a question or prompt. Educators immediately worried that students would use the chatbot to cheat by trying to pass its writing off as their own. As schools move to adopt policies around students’ use of the tool, many are also beginning to explore potential opportunities – for example, to generate reading assignments or coach students during the writing process.

AI can also help automate tasks like grading and lesson planning, freeing teachers to do the human work that drew them into the profession in the first place, said Victor Lee, an associate professor at the GSE and faculty lead for the AI + Education initiative at the Stanford Accelerator for Learning. “I’m heartened to see some movement toward creating AI tools that make teachers’ lives better – not to replace them, but to give them the time to do the work that only teachers are able to do,” he said. “I hope to see more on that front.”

He also emphasized the need to teach students now to begin questioning and critiquing the development and use of AI. “AI is not going away,” said Lee, who is also director of CRAFT (Classroom-Ready Resources about AI for Teaching), which provides free resources to help teach AI literacy to high school students across subject areas. “We need to teach students how to understand and think critically about this technology.”

Immersive environments

The use of immersive technologies like augmented reality, virtual reality, and mixed reality is also expected to surge in the classroom, especially as new high-profile devices integrating these realities hit the marketplace in 2024.

The educational possibilities now go beyond putting on a headset and experiencing life in a distant location. With new technologies, students can create their own local interactive 360-degree scenarios, using just a cell phone or inexpensive camera and simple online tools.

“This is an area that’s really going to explode over the next couple of years,” said Kristen Pilner Blair, director of research for the Digital Learning initiative at the Stanford Accelerator for Learning, which runs a program exploring the use of virtual field trips to promote learning. “Students can learn about the effects of climate change, say, by virtually experiencing the impact on a particular environment. But they can also become creators, documenting and sharing immersive media that shows the effects where they live.”

Integrating AI into virtual simulations could also soon take the experience to another level, Schwartz said. “If your VR experience brings me to a redwood tree, you could have a window pop up that allows me to ask questions about the tree, and AI can deliver the answers.”

Gamification

Another trend expected to intensify this year is the gamification of learning activities, often featuring dynamic videos with interactive elements to engage and hold students’ attention.

“Gamification is a good motivator, because one key aspect is reward, which is very powerful,” said Schwartz. The downside? Rewards are specific to the activity at hand, which may not extend to learning more generally. “If I get rewarded for doing math in a space-age video game, it doesn’t mean I’m going to be motivated to do math anywhere else.”

Gamification sometimes tries to make “chocolate-covered broccoli,” Schwartz said, by adding art and rewards to make speeded response tasks involving single-answer, factual questions more fun. He hopes to see more creative play patterns that give students points for rethinking an approach or adapting their strategy, rather than only rewarding them for quickly producing a correct response.

Data-gathering and analysis

The growing use of technology in schools is producing massive amounts of data on students’ activities in the classroom and online. “We’re now able to capture moment-to-moment data, every keystroke a kid makes,” said Schwartz – data that can reveal areas of struggle and different learning opportunities, from solving a math problem to approaching a writing assignment.

But outside of research settings, he said, that type of granular data – now owned by tech companies – is more likely to be used to refine the design of the software than to provide teachers with actionable information.

The promise of personalized learning is being able to generate content aligned with students’ interests and skill levels, and to make lessons more accessible for multilingual learners and students with disabilities. Realizing that promise requires that educators can make sense of the data that’s being collected, said Schwartz – and while advances in AI are making it easier to identify patterns and findings, the data also needs to be in a system and form educators can access and analyze for decision-making. Developing a usable infrastructure for that data, Schwartz said, is an important next step.

With the accumulation of student data come privacy concerns: How is the data being collected? Are there regulations or guidelines around its use in decision-making? What steps are being taken to prevent unauthorized access? In 2023, K-12 schools experienced a rise in cyberattacks, underscoring the need to implement strong systems to safeguard student data.

Technology is “requiring people to check their assumptions about education,” said Schwartz, noting that AI in particular is very efficient at replicating biases and automating the way things have been done in the past, including poor models of instruction. “But it’s also opening up new possibilities for students producing material, and for being able to identify children who are not average so we can customize toward them. It’s an opportunity to think of entirely new ways of teaching – this is the path I hope to see.”


Perspective | Open access | Published: 11 August 2020

Schooling and Covid-19: lessons from recent research on EdTech

Robert Fairlie & Prashant Loyalka

npj Science of Learning, volume 5, Article number: 13 (2020)


The wide-scale global movement of school education to remote instruction due to Covid-19 is unprecedented. The use of educational technology (EdTech) offers an alternative to in-person learning and reinforces social distancing, but there is limited evidence on whether and how EdTech affects academic outcomes. Recently, we conducted two large-scale randomized experiments, involving ~10,000 primary school students in China and Russia, to evaluate the effectiveness of EdTech as a substitute for traditional schooling. In China, we examined whether EdTech improves academic outcomes relative to paper-and-pencil workbook exercises of identical content. We found that EdTech was a perfect substitute for traditional learning. In Russia, we further explored how much EdTech can substitute for traditional learning. We found that EdTech substitutes only to a limited extent. The findings from these large-scale trials indicate that we need to be careful about using EdTech as a full-scale substitute for the traditional instruction received by schoolchildren.


The wide-scale global movement of school education to remote instruction due to Covid-19 is unprecedented. The use of educational technology (EdTech) offers an alternative to in-person learning and reinforces social distancing, but there is limited evidence on whether and how EdTech affects academic outcomes, and that limited evidence is mixed.1,2 For example, previous studies examining the performance of students in online courses generally find that they do not perform as well as in traditional courses. On the other hand, recent large-scale evaluations of supplemental computer-assisted learning programs show large positive effects on test scores. One concern, however, is that EdTech is often evaluated as a supplemental after-school program instead of as a direct substitute for traditional learning. Supplemental programs inherently have an advantage in that they provide more time spent learning the material.

Recently, we conducted two large-scale randomized experiments, involving ~10,000 primary school students in China and Russia, to evaluate the effectiveness of EdTech as a substitute for traditional schooling.3,4 In both, we focused on whether and how EdTech can substitute for in-person instruction (being careful to control for time on task). In China, we examined whether EdTech improves academic outcomes relative to paper-and-pencil workbook exercises of identical content. We followed students ages 9–13 for several months over the academic year. When we examined the impacts of each supplemental program, we found that EdTech and workbook exercise sessions of equal time and content outside of school hours had the same effect on standardized math test scores and grades in math classes. As such, EdTech appeared to be a perfect substitute for traditional learning.

In Russia, we built on these findings by further exploring how much EdTech can substitute for traditional learning. We examined whether providing students ages 9–11 with no EdTech, a base level of EdTech (~45 min per week), or a doubling of that level of EdTech can improve standardized test scores and grades. We found that EdTech can substitute for traditional learning only to a limited extent. There is a diminishing marginal rate of substitution for traditional learning from doubling the amount of EdTech use (that is, when we double the amount of EdTech used, we do not find that test-score gains double). We even find that additional time on EdTech decreases schoolchildren’s motivation and engagement in the subject material.

The findings from the large-scale trials indicate that we need to be careful about using EdTech as a full-scale substitute for the traditional instruction received by schoolchildren. There are two general takeaways: First, to a certain extent, EdTech can successfully substitute for traditional learning. Second, there are limits on how much EdTech may be beneficial. Admittedly, we need to be careful about extrapolating from the smaller amount of technology substitution in our experiments to the full-scale substitution in the face of the coronavirus pandemic. However, these studies may offer important lessons. For example, a balanced approach to learning in which schoolchildren intermingle work on electronic devices and work with traditional materials might be optimal. Schools could mail workbooks to students or recommend that students print out exercises to break up the amount of continuous time schoolchildren spend on devices. This might keep students engaged throughout the day and avoid problems associated with removing the structure of classroom schedules. Schools and families can devise creative remote learning solutions that include a combination of EdTech and more traditional forms of learning. Activities such as reading books, running at-home experiments, and art projects can also be used to break up extensive use of technology in remote instruction.

References

1. Bulman, G. & Fairlie, R. W. in Handbook of the Economics of Education (eds Hanushek, E., Machin, S. & Woessmann, L.) 239–280 (North-Holland, 2016).

2. Escueta, M., Quan, V., Nickow, A. J. & Oreopoulos, P. Education Technology: An Evidence-Based Review (National Bureau of Economic Research Working Paper No. 23744, 2017).

3. Bettinger, E. et al. Does EdTech Substitute for Traditional Learning? Experimental Estimates of the Educational Production Function (National Bureau of Economic Research Working Paper, 2020).

4. Ma, Y., Fairlie, R. W., Loyalka, P. & Rozelle, S. Isolating the “Tech” from EdTech: Experimental Evidence on Computer Assisted Learning in China (National Bureau of Economic Research Working Paper, 2020).


Acknowledgements

We would like to thank the numerous people who helped us with this research.

Author information

Authors and affiliations

Department of Economics, University of California, Santa Cruz, USA

Robert Fairlie

Graduate School of Education/Freeman Spogli Institute for International Studies, Stanford University, Stanford, USA

Prashant Loyalka


Contributions

R.F.: contributed to analysis and writing. P.L.: contributed to analysis and writing. Both authors are accountable for the accuracy and integrity of all of the work. The authors are co-first authors, having contributed equally to the work.

Corresponding author

Correspondence to Robert Fairlie.

Ethics declarations

Competing interests

The authors declare no competing interests.


About this article

Cite this article: Fairlie, R. & Loyalka, P. Schooling and Covid-19: lessons from recent research on EdTech. npj Sci. Learn. 5, 13 (2020). https://doi.org/10.1038/s41539-020-00072-6

Received: 04 May 2020 | Accepted: 08 July 2020 | Published: 11 August 2020



Review article | Open access | Published: 22 January 2020

Mapping research in student engagement and educational technology in higher education: a systematic evidence map

Melissa Bond (ORCID: orcid.org/0000-0002-8267-031X), Katja Buntins, Svenja Bedenlier, Olaf Zawacki-Richter & Michael Kerres

International Journal of Educational Technology in Higher Education, volume 17, Article number: 2 (2020)


Digital technology has become a central aspect of higher education, inherently affecting all aspects of the student experience. It has also been linked to an increase in behavioural, affective and cognitive student engagement, the facilitation of which is a central concern of educators. In order to delineate the complex nexus of technology and student engagement, this article systematically maps research from 243 studies published between 2007 and 2016. Research within the corpus was predominantly undertaken within the United States and the United Kingdom, with only limited research undertaken in the Global South, and largely focused on the fields of Arts & Humanities, Education, and Natural Sciences, Mathematics & Statistics. Studies most often used quantitative methods, followed by mixed methods, with qualitative methods employed far less often. Few studies provided a definition of student engagement, and fewer than half were guided by a theoretical framework. The courses investigated used blended learning and text-based tools (e.g. discussion forums) most often, with undergraduate students as the primary target group. Stemming from the use of educational technology, behavioural engagement was by far the most often identified dimension, followed by affective and cognitive engagement. This mapping article provides the grounds for further exploration into discipline-specific use of technology to foster student engagement.

Introduction

Over the past decade, the conceptualisation and measurement of ‘student engagement’ has received increasing attention from researchers, practitioners, and policy makers alike. Seminal works such as Astin’s (1999) theory of involvement, Fredricks, Blumenfeld, and Paris’s (2004) conceptualisation of the three dimensions of student engagement (behavioural, emotional, cognitive), and sociocultural theories of engagement such as Kahu (2013) and Kahu and Nelson (2018), have done much to shape and refine our understanding of this complex phenomenon. However, criticism about the strength and depth of student engagement theorising remains (e.g. Boekaerts, 2016; Kahn, 2014; Zepke, 2018), the quality of which has had a direct impact on the rigour of subsequent research (Lawson & Lawson, 2013; Trowler, 2010), prompting calls for further synthesis (Azevedo, 2015; Eccles, 2016).

In parallel to this increased attention on student engagement, digital technology has become a central aspect of higher education, inherently affecting all aspects of the student experience (Barak, 2018; Henderson, Selwyn, & Aston, 2017; Selwyn, 2016). International recognition of the importance of ICT skills and digital literacy has been growing, alongside mounting recognition of its importance for active citizenship (Choi, Glassman, & Cristol, 2017; OECD, 2015a; Redecker, 2017), and the development of interdisciplinary and collaborative skills (Barak & Levenberg, 2016; Oliver & de St Jorre, 2018). Using technology has the potential to make teaching and learning processes more intensive (Kerres, 2013), improve student self-regulation and self-efficacy (Alioon & Delialioğlu, 2017; Bouta, Retalis, & Paraskeva, 2012), increase participation and involvement in courses as well as the wider university community (Junco, 2012; Salaber, 2014), and predict increased student engagement (Chen, Lambert, & Guidry, 2010; Rashid & Asghar, 2016). There is, however, no guarantee of active student engagement as a result of using technology (Kirkwood, 2009), with Tamim, Bernard, Borokhovski, Abrami, and Schmid’s (2011) second-order meta-analysis finding only a small to moderate impact on student achievement across 40 years. Rather, careful planning, sound pedagogy and appropriate tools are vital (Englund, Olofsson, & Price, 2017; Koehler & Mishra, 2005; Popenici, 2013), as “technology can amplify great teaching, but great technology cannot replace poor teaching” (OECD, 2015b, p. 4).

Due to the nature of its complexity, educational technology research has struggled to find a common definition and terminology with which to talk about student engagement, which has resulted in inconsistency across the field. For example, whilst 77% of articles reviewed by Henrie, Halverson, and Graham (2015) operationalised engagement from a behavioural perspective, most of the articles did not have a clearly defined statement of engagement, which is no longer considered acceptable in student engagement research (Appleton, Christenson, & Furlong, 2008; Christenson, Reschly, & Wylie, 2012). Linked to this, educational technology research has also lacked theoretical guidance (Al-Sakkaf, Omar, & Ahmad, 2019; Hew, Lan, Tang, Jia, & Lo, 2019; Lundin, Bergviken Rensfeldt, Hillman, Lantz-Andersson, & Peterson, 2018). A review of 44 random articles published in 2014 in the journals Educational Technology Research & Development and Computers & Education, for example, revealed that more than half had no guiding conceptual or theoretical framework (Antonenko, 2015), and only 13 out of 62 studies in a systematic review of flipped learning in engineering education reported theoretical grounding (Karabulut-Ilgu, Jaramillo Cherrez, & Jahren, 2018). Therefore, calls have been made for a greater understanding of the role that educational technology plays in affecting student engagement, in order to strengthen teaching practice and lead to improved outcomes for students (Castañeda & Selwyn, 2018; Krause & Coates, 2008; Nelson Laird & Kuh, 2005).

A reflection upon prior research undertaken in the field is a necessary first step to engage in meaningful discussion on how to foster student engagement in the digital age. In support of this aim, this article provides a synthesis of student engagement theory and research, and systematically maps empirical higher education research published between 2007 and 2016 on student engagement and educational technology. Synthesising the vast body of literature on student engagement (for previous literature and systematic reviews, see Additional file 1), this article develops “a tentative theory” in the hope of “plot[ting] the conceptual landscape…[and chart] possible routes to explore it” (Antonenko, 2015, pp. 57–67) for researchers, practitioners, learning designers, administrators and policy makers. It then discusses student engagement against the background of educational technology research, exploring prior literature and systematic reviews that have been undertaken. The systematic review search method is then outlined, followed by the presentation and discussion of findings.

Literature review

What is student engagement?

Student engagement has been linked to improved achievement, persistence and retention (Finn, 2006; Kuh, Cruce, Shoup, Kinzie, & Gonyea, 2008), with disengagement having a profound effect on student learning outcomes and cognitive development (Ma, Han, Yang, & Cheng, 2015), and being a predictor of student dropout in both secondary school and higher education (Finn & Zimmer, 2012). Student engagement is a multifaceted and complex construct (Appleton et al., 2008; Ben-Eliyahu, Moore, Dorph, & Schunn, 2018), which some have called a ‘meta-construct’ (e.g. Fredricks et al., 2004; Kahu, 2013) and likened to blind men describing an elephant (Baron & Corbin, 2012; Eccles, 2016). There is ongoing disagreement about whether there are three components (affective/emotional, cognitive and behavioural; e.g. Eccles, 2016) or four, with the recently suggested additions of agentic engagement (Reeve, 2012; Reeve & Tseng, 2011) and social engagement (Fredricks, Filsecker, & Lawson, 2016). There has also been confusion as to whether the terms ‘engagement’ and ‘motivation’ can and should be used interchangeably (Reschly & Christenson, 2012), especially when used by policy makers and institutions (Eccles & Wang, 2012). However, the prevalent understanding across the literature is that motivation is an antecedent to engagement; it is the intent and unobservable force that energises behaviour (Lim, 2004; Reeve, 2012; Reschly & Christenson, 2012), whereas student engagement is energy and effort in action, an observable manifestation (Appleton et al., 2008; Eccles & Wang, 2012; Kuh, 2009; Skinner & Pitzer, 2012), evidenced through a range of indicators.

Whilst it is widely accepted that no one definition exists that will satisfy all stakeholders (Solomonides, 2013 ), and no one project can be expected to possibly examine every sub-construct of student engagement (Kahu, 2013 ), it is important for each research project to begin with a clear definition of their own understanding (Boekaerts, 2016 ). Therefore, in this project, student engagement is defined as follows:

Student engagement is the energy and effort that students employ within their learning community, observable via any number of behavioural, cognitive or affective indicators across a continuum. It is shaped by a range of structural and internal influences, including the complex interplay of relationships, learning activities and the learning environment. The more students are engaged and empowered within their learning community, the more likely they are to channel that energy back into their learning, leading to a range of short and long term outcomes that can likewise further fuel engagement.

Dimensions and indicators of student engagement

There are three widely accepted dimensions of student engagement: affective, cognitive and behavioural. Within each dimension there are several indicators of engagement (see Additional file 2), as well as of disengagement (see Additional file 2), which is now seen as a separate and distinct construct from engagement. It should be stated, however, that whilst these have been drawn from a range of literature, this is not a finite list, and it is recognised that students might experience these indicators on a continuum at varying times (Coates, 2007; Payne, 2017), depending on their valence (positive or negative) and activation (high or low) (Pekrun & Linnenbrink-Garcia, 2012). There has also been disagreement in terms of which dimension the indicators align with. For example, Järvelä, Järvenoja, Malmberg, Isohätälä, and Sobocinski (2016) argue that ‘interaction’ extends beyond behavioural engagement, covering both cognitive and/or emotional dimensions, as it involves collaboration between students, and Lawson and Lawson (2013) believe that ‘effort’ and ‘persistence’ are cognitive rather than behavioural constructs, as they “represent cognitive dispositions toward activity rather than an activity unto itself” (p. 465), which is represented in the table through the indicator ‘stay on task/focus’ (see Additional file 2). Further consideration of these disagreements represents an area for future research, however, as it is beyond the scope of this paper.

Student engagement within educational technology research

The potential of educational technology to improve student engagement has long been recognised (Norris & Coutas, 2014); however, it is not merely a case of technology plus students equals engagement. Without careful planning and sound pedagogy, technology can promote disengagement and impede rather than help learning (Howard, Ma, & Yang, 2016; Popenici, 2013). Whilst still a young area, most of the research undertaken to gain insight into this has focused on undergraduate students (e.g. Henrie et al., 2015; Webb, Clough, O’Reilly, Wilmott, & Witham, 2017), with Chen et al. (2010) finding a positive relationship between the use of technology and student engagement, particularly earlier in university study. Research has also been predominantly STEM and medicine focused (e.g. Li, van der Spek, Feijs, Wang, & Hu, 2017; Nikou & Economides, 2018), with at least five literature or systematic reviews published in the last 5 years focused on medicine, and nursing in particular (see Additional file 3). This indicates that further synthesis is needed of research in other disciplines, such as Arts & Humanities and Education, as well as further investigation into whether research continues to focus on undergraduate students.

The five most researched technologies in Henrie et al.’s (2015) review were online discussion boards, general websites, learning management systems (LMS), general campus software and videos, in contrast to Schindler, Burkholder, Morad, and Marsh’s (2017) literature review, which concentrated on social networking sites (Facebook and Twitter), digital games, wikis, web-conferencing software and blogs. Schindler et al. found that most of these technologies had a positive impact on multiple indicators of student engagement across the three dimensions of engagement, with digital games, web-conferencing software and Facebook the most effective. However, it must be noted that they considered only seven indicators of student engagement, and their analysis could be extended by considering further indicators. Other reviews that have found at least a small positive impact on student engagement include those focused on audience response systems (Hunsu, Adesope, & Bayly, 2016; Kay & LeSage, 2009), mobile learning (Kaliisa & Picard, 2017), and social media (Cheston, Flickinger, & Chisolm, 2013). Specific indicators of engagement that increased as a result of technology include interest and enjoyment (Li et al., 2017), improved confidence (Smith & Lambert, 2014) and attitudes (Nikou & Economides, 2018), as well as enhanced relationships with peers and teachers (e.g. Alrasheedi, Capretz, & Raza, 2015; Atmacasoy & Aksu, 2018).

Literature and systematic reviews focused on student engagement and technology do not always include information on where studies were conducted. Out of 27 identified reviews (see Additional file 3), only 14 report the countries included, and two of these were explicitly focused on a specific region or country, namely Africa and Turkey. Most of the research has been conducted in the USA, followed by the UK, Taiwan, Australia and China. Table 1 depicts the three countries from which most studies originated in the respective reviews, and highlights a clear lack of research conducted within mainland Europe, South America and Africa. Whilst this could be due to the choice of databases in which the literature was searched for, it nevertheless highlights a substantial gap in the literature, and to that end, it will be interesting to see whether this review is able to substantiate or contradict these trends.

Research into student engagement and educational technology has predominantly used a quantitative methodology (see Additional file 3), with 11 literature and systematic reviews reporting that surveys, particularly self-report Likert-scale instruments, are the most used source of measurement (e.g. Henrie et al., 2015). Reviews that have included research using a range of methodologies have found a limited number of studies employing qualitative methods (e.g. Connolly, Boyle, MacArthur, Hainey, & Boyle, 2012; Kay & LeSage, 2009; Lundin et al., 2018). This has led to calls for further qualitative research exploring student engagement and technology, as well as for more rigorous research designs (e.g. Li et al., 2017; Nikou & Economides, 2018), including sampling strategies and data collection, in experimental studies in particular (Cheston et al., 2013; Connolly et al., 2012). However, not all reviews included information on the methodologies used. Crook (2019), in his recent editorial in the British Journal of Educational Technology, stated that research methodology is a “neglected topic” (p. 487) within educational technology research, and stressed its importance for conducting studies that delve deeper into phenomena (e.g. longitudinal studies).

Therefore, this article presents an initial “evidence map” (Miake-Lye, Hempel, Shanman, & Shekelle, 2016, p. 19) of systematically identified literature on student engagement and educational technology within higher education, undertaken through a systematic review, in order to address the issues raised by prior research and to identify research gaps. These issues include the disparity between the fields of study and study levels researched, the geographical distribution of studies, the methodologies used, and the theoretical fuzziness surrounding student engagement. This article, however, is intended to provide an initial overview of the systematic review method employed, as well as of the overall corpus. Further synthesis of possible correlations between student engagement and disengagement indicators and the co-occurrence of technology tools will be undertaken within field-of-study-specific articles (e.g. Bedenlier, 2020b; Bedenlier, 2020a), allowing more meaningful guidance on applying the findings in practice.

The following research questions guide this enquiry:

1. How do the studies in the sample ground student engagement and align with theory?

2. Which indicators of cognitive, behavioural and affective engagement were identified in studies where educational technology was used? Which indicators of student disengagement?

3. What are the learning scenarios, modes of delivery and educational technology tools employed in the studies?

Overview of the study

With the intent to systematically map empirical research on student engagement and educational technology in higher education, we conducted a systematic review. A systematic review is an explicitly and systematically conducted literature review that answers a specific question through applying a replicable search strategy, with studies then included or excluded based on explicit criteria (Gough, Oliver, & Thomas, 2012). Studies included for review are then coded and synthesised into findings that shed light on gaps, contradictions or inconsistencies in the literature, as well as providing guidance on applying findings in practice. This contribution maps the research corpus of 243 studies that were identified through a systematic search and ensuing random parameter-based sampling.

Search strategy and selection procedure

The initial inclusion criteria for the systematic review were peer-reviewed articles in the English language, empirically reporting on students and student engagement in higher education, and making use of educational technology. The search was limited to records between 1995 and 2016, chosen due to the implementation of the first Virtual Learning Environments and Learning Management Systems within higher education (see Bond, 2018). Articles were limited to those published in peer-reviewed journals, due to the rigorous process under which they are published and their trustworthiness in academia (Nicholas et al., 2015), although concerns within the scientific community about the peer-review process are acknowledged (e.g. Smith, 2006).

Discussion arose on how to approach the “hard-to-detect” (O’Mara-Eves et al., 2014, p. 51) concept of student engagement with regard to sensitivity versus precision (Brunton, Stansfield, & Thomas, 2012), particularly in light of engagement being Henrie et al.’s (2015) most important search term. The decision was made that the concept ‘student engagement’ would be identified from titles and abstracts at a later stage, during the screening process. In this way, articles that are indeed concerned with student engagement, but which use different terms to describe the concept, would still be included. Given the nature of student engagement as a meta-construct (e.g. Appleton et al., 2008; Christenson et al., 2012; Kahu, 2013), limiting the search to only articles including the term engagement might miss important research on other elements of student engagement. Hence, we opted for recall over precision. According to Gough et al. (2012, p. 13), “electronic searching is imprecise and captures many studies that employ the same terms without sharing the same focus”, and a narrower search would disregard studies that analyse the construct but use different terms to describe it.

With this in mind, the search strategy to identify relevant studies was developed iteratively with support from the University Research Librarian. As outlined in O’Mara-Eves et al. (2014) as a standard approach, we used reviewer knowledge (in this case supported not only by reviewer knowledge but by certified expertise) and previous literature (e.g. Henrie et al., 2015; Kahu, 2013) to elicit concepts with potential importance under the topics student engagement, higher education and educational technology. The final search string (see Fig. 1) encompasses clusters of different educational technologies that were searched for separately, in order to avoid an overly long search string. It was decided not to include any brand names (e.g. Facebook, Twitter, Moodle), because it was again reasoned that scientific publications would use the broader term (e.g. social media). The final search string was slightly adapted, e.g. the format required for truncations or wildcards, according to the settings of each database used.

[Figure 1: Final search terms used in the systematic review]
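For illustration only, a clustered Boolean string of the kind described above might look like the following sketch. This is a hypothetical example, not the authors' actual query (which is reproduced in Fig. 1), and the truncation syntax would need adapting per database:

```
("student engagement" OR participation OR involvement OR motivat*)
AND ("higher education" OR universit* OR college* OR undergraduate*)
AND ("educational technolog*" OR "e-learning" OR "learning management system*"
     OR "social media" OR "mobile learning" OR "virtual learning environment*")
```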

Four databases (ERIC, Web of Science, Scopus and PsycINFO) were searched in July 2017, and three researchers and a student assistant screened the abstracts and titles of the retrieved references between August and November 2017, using EPPI Reviewer 4.0. An initial 77,508 references were retrieved, and with the elimination of duplicate records, 53,768 references remained (see Fig. 2). A first cursory screening of records revealed that older research was more concerned with technologies that are now considered outdated (e.g. overhead projectors, floppy disks). Therefore, we opted to adjust the period to include research published between 2007 and 2016, a phase of research and practice labeled ‘online learning in the digital age’ (Bond, 2018). Whilst we initially opted for recall over precision, the decision was then made to search for specific facets of the student engagement construct (e.g. deep learning, interest and persistence) within EPPI-Reviewer, in order to further refine the corpus. These adaptations left 18,068 records.

[Figure 2: Systematic review PRISMA flow chart (slightly modified after Brunton et al., 2012, p. 86; Moher, Liberati, Tetzlaff, & Altman, 2009, p. 8)]

Four researchers screened the first 150 titles and abstracts, in order to iteratively establish a joint understanding of the inclusion criteria. The remaining references were distributed equally amongst the screening team, which resulted in the inclusion of 4152 potentially relevant articles. Given the large number of articles for screening on full text, and facing constrained time as a condition of project-based, funded work, it was decided that a sample of articles would be drawn from this corpus for further analysis. With the intention of drawing a sample that estimates the population parameters within a predetermined error range, we used methods of sample size estimation from the social sciences (Kupper & Hafner, 1989), implemented in the R package MBESS (Kelley, 2018). Accepting a 5% error range, an expected proportion of one half and an alpha of 5%, 349 articles were sampled, with the sample then stratified by publishing year, as student engagement has become much more prevalent (Zepke, 2018) and educational technology has become more differentiated within the last decade (Bond, 2018). Two researchers screened the first 100 articles on full text, reaching an agreement of 88% on inclusion/exclusion. The researchers then discussed the discrepancies and came to an agreement on the remaining 12%. It was decided that further comparison screening was needed to increase the level of reliability. After screening the sample on full text, 232 articles remained for data extraction, which contained 243 studies.
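As a rough plausibility check on the reported sample size (a sketch using the standard Cochran formula with a finite-population correction for the 4152 screened articles, not the authors' actual MBESS computation, whose routine differs slightly):

$$n_0 = \frac{z^2\,p(1-p)}{e^2} = \frac{1.96^2 \times 0.5 \times 0.5}{0.05^2} \approx 384.2, \qquad n = \frac{n_0}{1 + (n_0 - 1)/N} = \frac{384.2}{1 + 383.2/4152} \approx 352,$$

which is in the neighbourhood of the 349 articles reported.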

Data extraction process

In order to extract the article data, an extensive coding system was developed, including codes to extract information on the set-up and execution of the study (e.g. methodology, study sample) as well as information on the learning scenario, the mode of delivery and the educational technology used. Learning scenarios included broader pedagogies, such as social collaborative learning and self-determined learning, but also specific pedagogies such as flipped learning, given the increasing number of studies and interest in these approaches (e.g. Lundin et al., 2018). Specific examples of student engagement and/or disengagement were coded under cognitive, affective or behavioural (dis)engagement. The facets of student (dis)engagement were identified based on the literature review undertaken (see Additional file 2), and applied in this detailed manner to capture not only the overarching dimensions of the concept but also their diverse sub-meanings. New indicators also emerged during the coding process, which had not initially been identified from the literature review, including ‘confidence’ and ‘assuming responsibility’. The 243 studies were coded with this extensive code set, and any disagreements that occurred between the coders were reconciled.

As over 50 individual educational technology applications and tools were identified in the 243 studies, in line with results found in other large-scale systematic reviews (e.g. Lai & Bower, 2019), concerns were raised over how the research team could meaningfully analyse and report the results. The decision was therefore made to employ Bower’s (2016) typology of learning technologies (see Additional file 4), in order to channel the tools into groups that share the same characteristics or “structure of information” (Bower, 2016, p. 773). Whilst it is acknowledged that some of the technology could be classified into more than one type within the typology (e.g. wikis can be used for individual composition, for collaborative tasks, or for knowledge organisation and sharing), “the type of learning that results from the use of the tool is dependent on the task and the way people engage with it rather than the technology itself”; therefore “the typology is presented as descriptions of what each type of tool enables and example use cases rather than prescriptions of any particular pedagogical value system” (Bower, 2016, p. 774). For further elaboration on each category, please see Bower (2015).

Study characteristics

Geographical characteristics

The systematic mapping reveals that the 243 studies were set in 33 different countries, whilst seven studies investigated settings in an international context and three studies did not indicate their country setting. In 2% of the studies, the country was allocated based on the authors’ country of origin, where the authors came from the same country. The top five countries account for 158 studies (see Fig. 3), with 35.4% (n = 86) of studies conducted in the United States (US), 10.7% (n = 26) in the United Kingdom (UK), 7.8% (n = 19) in Australia, 7.4% (n = 18) in Taiwan, and 3.7% (n = 9) in China. Across the corpus, studies from countries employing English as the official language or one of the official languages total 59.7% of the entire sample, followed by East Asian countries, which in total account for 18.8% of the sample. With the exception of the UK, European countries are largely absent from the sample: only 7.3% of the articles originate from this region, with countries such as France, Belgium, Italy and Portugal having no studies, and countries such as Germany and the Netherlands having one each. Thus, with eight articles, Spain is the most prolific European country outside of the UK. The geographical distribution of study settings also clearly shows an almost complete absence of studies undertaken within African contexts, with five studies from South Africa and one from Tunisia. Studies from South-East Asia, the Middle East, and South America are likewise low in number in this review. Whilst the global picture evokes an imbalance, this might be partially due to our search and sampling strategy, which focused on English-language journals indexed in four primarily Western-focused databases.

[Figure 3: Percentage deviation from the average relative frequencies of the different data collection formats per country (≥ 3 articles). NS = not stated; AUS = Australia; CAN = Canada; CHN = China; HKG = Hong Kong; inter = international; IRI = Iran; JAP = Japan; MYS = Malaysia; SGP = Singapore; ZAF = South Africa; KOR = South Korea; ESP = Spain; SWE = Sweden; TWN = Taiwan; TUR = Turkey; GBR = United Kingdom; USA = United States of America]

Methodological characteristics

Within this literature corpus, 103 studies (42%) employed quantitative methods, 84 (35%) mixed methods, and 56 (23%) qualitative methods. Relating these numbers back to the contributing countries, different preferences for and frequencies of the methods used become apparent (see Fig. 3). As a general tendency, mixed methods and qualitative research occur more often in Western countries, whereas quantitative research is the preferred method in East Asian countries. For example, studies originating from Australia employ mixed methods 28% more often than average, whereas Singapore is far below average in mixed methods research, at 34.5% less than the other countries in the sample. In Taiwan, mixed methods studies are conducted 23.5% below average and qualitative research 6.4% less often than average, whereas quantitative research occurs 29.8% above average.

Amongst the qualitative studies, qualitative content analysis (n = 30) was the most frequently used analysis approach, followed by thematic analysis (n = 21) and grounded theory (n = 12). In many cases, however, the exact analysis approach was not reported (n = 37), could not be allocated to a specific classification (n = 22), or no method of analysis was identifiable (n = 11). Within studies using quantitative methods, mean comparison was used in 100 studies, frequency data was collected and analysed in 83 studies, and regression models were used in 40 studies. Furthermore, looking at the correlations between the different analysis approaches, only one significant correlation can be identified, between mean comparison and frequency data (−.246). Beyond that, correlations are small; for example, in only 14% of the studies were both mean comparisons and regression models employed.
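A minimal sketch of how such correlations between analysis approaches can be computed, assuming each study is coded with binary use/non-use indicators per approach (the five studies and their codes below are hypothetical, not the review's data):

```python
# Hypothetical coding: 1 = study used the approach, 0 = it did not.
import pandas as pd

studies = pd.DataFrame(
    {
        "mean_comparison": [1, 1, 0, 1, 0],
        "frequency_data":  [0, 0, 1, 0, 1],
        "regression":      [1, 0, 0, 1, 0],
    }
)

# Pearson correlation on binary indicators (the phi coefficient);
# a negative value means two approaches tend not to co-occur.
print(studies.corr())
```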

Study population characteristics

Research in the corpus focused on universities as the prime institution type (n = 191, 79%), followed by 24 (10%) non-specified institution types, and colleges (n = 21, 8.2%) (see Fig. 4). Five studies (2%) included institutions classified as ‘other’, and two studies (0.8%) included both college and university students. The most frequently studied population was undergraduate students (60%, n = 146), as opposed to 33 studies (14%) focused on postgraduate students (see Fig. 6). A combination of undergraduate and postgraduate students was the subject of interest in 23 studies (9%), with 41 studies (17%) not specifying the level of study of research participants.

[Figure 4: Relative frequencies of study field in dependence of countries with ≥ 3 articles. Country abbreviations are as per Figure 3. A&H = Arts & Humanities; BA&L = Business, Administration and Law; EDU = Education; EM&C = Engineering, Manufacturing & Construction; H&W = Health & Welfare; ICT = Information & Communication Technologies; ID = interdisciplinary; NS,M&S = Natural Science, Mathematics & Statistics; NS = Not specified; SoS = Social Sciences, Journalism & Information]

Based on the UNESCO (2015) ISCED classification, eight broad study fields are covered in the sample, with Arts & Humanities (42 studies), Education (42) and Natural Sciences, Mathematics & Statistics (37) being the top three study fields, followed by Health & Welfare (30), Social Sciences, Journalism & Information (22), Business, Administration & Law (19), Information & Communication Technologies (13), Engineering, Manufacturing & Construction (11), and another 26 studies of an interdisciplinary character. One study did not specify a field of study.

An expected value was calculated for how the studies in each country should be distributed across disciplines. The actual deviation from this value then showed that several Asian countries are home to more articles in the field of Arts & Humanities than was expected: Japan with 3.3 articles more, China with 5.4 and Taiwan with 5.9. Furthermore, internationally located research also shows 2.3 more interdisciplinary studies than expected, whereas studies in the Social Sciences occur more often than expected in the UK (5.7 more articles) and Australia (3.3 articles) but less often than expected across all other countries. Interestingly, the USA has 9.9 fewer studies in Arts & Humanities than was expected, but 5.6 more articles than expected in Natural Sciences.
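A minimal sketch of the expectancy-value calculation described above, assuming it follows the standard contingency-table logic where the expected count for each country-discipline cell is (row total × column total) / grand total (the observed counts below are hypothetical, not the review's data):

```python
import numpy as np

# Rows: countries; columns: disciplines (e.g. A&H, EDU, NS,M&S).
# These observed study counts are invented for illustration.
observed = np.array(
    [
        [20, 30, 36],
        [10,  8,  8],
        [12,  4,  2],
    ]
)

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / observed.sum()

# Positive deviation: more studies in that cell than expected
# under independence of country and discipline.
print(observed - expected)
```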

Question 1: How do the studies in the sample ground student engagement and align with theory?

Defining student engagement

It is striking that almost all of the studies (n = 225, 93%) in this corpus lack a definition of student engagement, with only 18 (7%) articles attempting to define the concept. However, this is not too surprising, as the search strategy was set up with the assumption that researchers investigating student engagement (dimensions and indicators) would not necessarily label it as such. When developing their definitions, authors in these 18 studies referenced 22 different sources, with the work of Kuh and colleagues (e.g. Hu & Kuh, 2002; Kuh, 2001; Kuh et al., 2006), as well as Astin (1984), the only authors referred to more than once. The most popular definition of student engagement within these studies was that of active participation and involvement in learning and university life (e.g. Bolden & Nahachewsky, 2015; Fukuzawa & Boyd, 2016), which was also found by Joksimović et al. (2018) in their review of MOOC research. Interaction, especially between peers and with faculty, was the next most prevalent definition (e.g. Andrew, Ewens, & Maslin-Prothero, 2015; Bigatel & Williams, 2015). Time and effort was given as a definition in four studies (Gleason, 2012; Hatzipanagos & Code, 2016; Price, Richardson, & Jelfs, 2007; Sun & Rueda, 2012), with expending physical and psychological energy (Ivala & Gachago, 2012) another. This variance in definitions and sources reflects the ongoing complexity of the construct (Zepke, 2018), and serves to reinforce the need for a clearer understanding across the field (Schindler et al., 2017).

Theoretical underpinnings

Reflecting findings from other systematic and literature reviews on the topic (Abdool, Nirula, Bonato, Rajji, & Silver, 2017; Hunsu et al., 2016; Kaliisa & Picard, 2017; Lundin et al., 2018), 59% (n = 143) of studies did not employ a theoretical model in their research. Of the 41% (n = 100) that did, 18 studies drew on social constructivism, followed by the Community of Inquiry model (n = 8), sociocultural learning theory (n = 5), and Community of Practice models (n = 4). These findings also reflect the state of the field in general (Al-Sakkaf et al., 2019; Bond, 2019b; Hennessy, Girvan, Mavrikis, Price, & Winters, 2018).

Another interesting finding of this research is that whilst 144 studies (59%) provided research questions, 99 studies (41%) did not. Although it is recognised that not all studies have research questions (Bryman, 2007), or that some only develop them throughout the research process, as in grounded theory (Glaser & Strauss, 1967), a surprising number of quantitative studies (36%, n = 37) did not have research questions. This reflects the wider lack of theoretical guidance, as 30 of these 37 studies also did not draw on a theoretical or conceptual framework.

Question 2: Which indicators of cognitive, behavioural and affective engagement were identified in studies where educational technology was used? Which indicators of student disengagement?

Student engagement indicators

Within the corpus, the behavioural engagement dimension was documented in some form in 209 studies (86%), whereas the dimension of affective engagement was reported in 163 studies (67%) and the cognitive dimension in only 136 studies (56%). However, the ten most often identified student engagement indicators across the studies overall (see Table 2) were evenly distributed over all three dimensions (see Table 3). The indicators participation/interaction/involvement, achievement and positive interactions with peers and teachers each appear in at least 100 studies, almost double the frequency of the next most common student engagement indicator.

Across the 243 studies in the corpus, 117 (48%) showed all three dimensions of affective, cognitive and behavioural student engagement (e.g., Szabo & Schwartz, 2011), including six studies that used established student engagement questionnaires, such as the NSSE (e.g., Delialioglu, 2012), or self-developed questionnaires addressing these three dimensions. Another 54 studies (22%) displayed two student engagement dimensions (e.g., Hatzipanagos & Code, 2016), including six questionnaire studies. A further 71 studies (29%) exhibited only one student engagement dimension (e.g., Vural, 2013).

Student disengagement indicators

Indicators of student disengagement (see Table 4) were identified considerably less often across the corpus. This could be explained by the primary purpose of the studies being to address or measure positive engagement, but it could also be due to a form of self-selection or publication bias, with studies reporting negative results published less frequently. The three disengagement indicators most often identified were frustration (n = 33, 14%; e.g., Ikpeze, 2007), opposition/rejection (n = 20, 8%; e.g., Smidt, Bunk, McGrory, Li, & Gatenby, 2014), and disappointment (e.g., Granberg, 2010) together with other affective disengagement (n = 18, 7% each).

Technology tool typology and engagement/disengagement indicators

Across the 243 studies, over 50 individual educational technology tools were employed. The top five most frequently researched tools were LMS (n = 89), discussion forums (n = 80), videos (n = 44), recorded lectures (n = 25), and chat (n = 24). Following a slightly modified version of Bower's (2016) educational tools typology, 17 broad categories of tools were identified (see Additional file 4 for the classification, and Section 3.2 for further information). The frequency with which tools from the respective groups were employed in studies varied considerably (see Additional file 4), with the top five categories being text-based tools (n = 138), followed by knowledge organisation & sharing tools (n = 104), multimodal production tools (n = 89), assessment tools (n = 65) and website creation tools (n = 29).
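As an illustration of how such typology-based counts can be produced, the sketch below rolls individual tools up into broad categories and counts each study once per category. The mapping shown is a hypothetical fragment for illustration only; the full 17-category classification is given in Additional file 4.

```python
from collections import Counter

# Hypothetical fragment of the tool-to-category mapping; the full classification
# (17 categories, adapted from Bower, 2016) is provided in Additional file 4.
CATEGORY = {
    "LMS": "knowledge organisation & sharing tools",
    "discussion forum": "text-based tools",
    "chat": "text-based tools",
    "video": "multimodal production tools",
    "recorded lecture": "multimodal production tools",
    "quiz": "assessment tools",
}

# Tools coded per study (illustrative). A study counts once per category,
# even if it used several tools from that category.
studies = [
    {"LMS", "discussion forum"},
    {"video", "quiz", "LMS"},
    {"discussion forum", "chat"},
]

category_counts = Counter()
for study_tools in studies:
    category_counts.update({CATEGORY[t] for t in study_tools})

print(category_counts.most_common())
```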

Figure 5 shows what percentage of each engagement dimension (e.g., affective engagement or cognitive disengagement) was fostered through each specific technology type. Given the results in Section 4.2.1 on student engagement, it was somewhat unsurprising to see text-based tools, knowledge organisation & sharing tools, and multimodal production tools having the highest proportions of affective, behavioural and cognitive engagement. For example, affective engagement was identified in 163 studies, with 63% of these studies using text-based tools (e.g., Bulu & Yildirim, 2008), and cognitive engagement was identified in 136 studies, with 47% of those using knowledge organisation & sharing tools (e.g., Shonfeld & Ronen, 2015). However, further analysis of studies employing discussion forums (a text-based tool) revealed that, whilst the top affective and behavioural engagement indicators were found in almost two-thirds of studies (see Additional file 5), there was a substantial gap between these and the next most prevalent engagement indicator, with the exact same pattern (and indicators) emerging for wikis. This represents an area for future research.

Fig. 5 Engagement and disengagement by tool typology. Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning

Interestingly, studies using website creation tools reported more disengagement than engagement indicators across all three domains (see Fig. 5), with studies using assessment tools and social networking tools also reporting increased instances of disengagement across two domains (affective and cognitive, and behavioural and cognitive, respectively). Twenty-three of the studies (79%) using website creation tools used blogs, with students showing, for example, disinterest in the topics chosen (e.g., Sullivan & Longnecker, 2014), anxiety over their lack of blogging knowledge and skills (e.g., Mansouri & Piki, 2016), and, in some cases, continued avoidance of blogs despite introductory training (e.g., Keiller & Inglis-Jassiem, 2015). In studies where assessment tools were used, students found timed assessments stressful, particularly when trying to complete complex mathematical solutions (e.g., Gupta, 2009), as well as quizzes given at the end of lectures, with some students preferring time to absorb the content first (e.g., DePaolo & Wilkinson, 2014). Disengagement in studies where social networking tools were used indicated that some students found it difficult to express themselves in short posts (e.g., Cook & Bissonnette, 2016), that conversations lacked authenticity (e.g., Arnold & Paulus, 2010), and that some did not want to mix personal and academic spaces (e.g., Ivala & Gachago, 2012).

Question 3: What are the learning scenarios, modes of delivery and educational technology tools employed in the studies?

Learning scenarios.

Social-collaborative learning (SCL) was the scenario most often employed (n = 142, 58.4% of the sample), followed by self-directed learning (SDL; n = 105, 43.2%) and game-based learning (GBL; n = 14, 5.8%) (see Fig. 6). Studies coded as SCL included those exploring social learning (Bandura, 1971) and social constructivist approaches (Vygotsky, 1978). Personal learning environments (PLE) were found in 2.9% of studies, 1.3% of studies used other scenarios (n = 3), and another 13.2% did not specify their learning scenario (n = 32). It is noteworthy that in 45% of the possible cases employing SDL scenarios, SCL was also used, and other learning scenarios were likewise used mostly in combination with SCL and SDL. Given the rising number of higher education studies exploring flipped learning (Lundin et al., 2018), studies exploring this approach were also specifically coded (3%, n = 7).

Fig. 6 Co-occurrence of learning scenarios across the sample (n = 243). Note. SDL = self-directed learning; SCL = social collaborative learning; GBL = game-based learning; PLE = personal learning environments; other = other learning scenario

Modes of delivery

In 84% of studies (n = 204), a single mode of delivery was used, with blended learning the most researched (109 studies), followed by distance education (72 studies) and face-to-face instruction (55 studies). Of the remaining 39 studies, 12 did not indicate their mode of delivery, whilst the other 27 combined or compared modes of delivery, e.g. comparing face-to-face courses to blended learning, such as the study on using iPads in undergraduate nursing education by Davies (2014).

Educational technology tools investigated

Most studies in this corpus (55%) used technology asynchronously, with 12% of studies researching synchronous tools and 18% using both asynchronous and synchronous tools. This heavy reliance on asynchronous technology is not surprising. However, when looking at tool usage in face-to-face contexts, the proportion of synchronous tools (31%) is almost as high as that of asynchronous tools (41%), whilst synchronous tool use is surprisingly low within distance education studies (7%).

Tool categories were often used in combination, with text-based tools most frequently combined with other technology types (see Fig. 7): for example, in 60% of all possible multimodal production tool cases, 69% of all possible synchronous collaboration tool cases, 72% of all possible knowledge organisation & sharing tool cases, a striking 89% of all possible learning software cases, and 100% of all possible MOOC cases. By contrast, text-based tools were never used in combination with games or data analysis tools. Games, however, were combined with assessment tools in 67% of possible cases. Assessment tools nevertheless constitute somewhat of a special case where studies using website creation tools are concerned, with only 7% of possible cases employing assessment tools.
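Reading "x% of all possible cases" as a conditional co-occurrence - of the studies using tool B, the share that also used tool A - a minimal sketch over an assumed binary study-by-tool coding might look like this (tool abbreviations as per Fig. 7; the rows are illustrative):

```python
import pandas as pd

# Binary study-by-tool matrix (1 = the study used the tool); rows are illustrative.
tools = pd.DataFrame(
    {"TBT": [1, 1, 0, 1], "MPT": [1, 0, 0, 1], "AT": [0, 1, 1, 0]},
    index=["study_1", "study_2", "study_3", "study_4"],
)

def co_occurrence_pct(df: pd.DataFrame, a: str, b: str) -> float:
    """Share of studies using tool `b` (the 'possible cases') that also used tool `a`."""
    b_cases = df[df[b] == 1]
    return 100 * b_cases[a].mean() if len(b_cases) else 0.0

# e.g. text-based tools within all possible multimodal production tool cases
print(f"TBT in MPT cases: {co_occurrence_pct(tools, 'TBT', 'MPT'):.0f}%")
```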

Fig. 7 Co-occurrence of tools across the sample (n = 243). Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning

In order to gain further understanding of how educational technology was used, we examined how often a combination of two variables should occur in the sample and how often it actually occurred, with deviations described as either 'more than' or 'less than' the expected value. This provides further insight into potential gaps in the literature, which can inform future research. For example, an analysis of educational technology tool usage amongst study populations (see Fig. 8) reveals that 5.0 more studies than expected looked at knowledge organisation & sharing tools for graduate students, but 5.0 fewer studies than expected investigated assessment tools for this group. By contrast, 5 more studies than expected researched assessment tools for unspecified study levels, and 4.3 fewer studies than expected employed knowledge organisation & sharing tools for undergraduate students.
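The same observed-minus-expected logic extends to any pair of coded variables, such as tool category by study level. A short sketch using SciPy's expected-frequency helper, again with illustrative counts rather than the review's data:

```python
import numpy as np
from scipy.stats.contingency import expected_freq

# Illustrative contingency table: tool categories (rows) by study level
# (columns: undergraduate, graduate, not specified).
observed = np.array([
    [30, 15, 5],   # knowledge organisation & sharing tools
    [20, 4, 12],   # assessment tools
])

# expected_freq() returns the cell counts implied by the row and column margins;
# the deviation is then read as 'more than' or 'less than' expected.
deviation = observed - expected_freq(observed)
print(deviation.round(1))
```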

Fig. 8 Relative frequency of educational technology tools used according to study level. Note. Tool abbreviations are explained in Fig. 7

Educational technology tools were also used differently from the expected pattern within various fields of study (see Fig. 9), most obviously for the top five tools, but also for virtual worlds, found in 5.8 more studies in Health & Welfare than expected, and learning software, used in 6.4 more studies in Arts & Humanities than expected; in all other disciplines, learning software was used less often than assumed. Text-based tools were used more often than expected in fields of study that are already text-intensive, including Arts & Humanities, Education, Business, Administration & Law, as well as Social Sciences - but less often than expected in fields such as Engineering, Health & Welfare, and Natural Sciences, Mathematics & Statistics. Multimodal production tools were used more often only in Health & Welfare, ICT and Natural Sciences, and less often than assumed across all other disciplines. Assessment tools deviated most clearly, with 11.9 more studies in Natural Sciences, Mathematics & Statistics than assumed, but 5.2 fewer studies in both Education and Arts & Humanities.

Fig. 9 Relative frequency of educational technology tools used according to field of study. Note. TBT = text-based tools; MPT = multimodal production tools; WCT = website creation tools; KO&S = knowledge organisation and sharing tools; DAT = data analysis tools; DST = digital storytelling tools; AT = assessment tools; SNT = social networking tools; SCT = synchronous collaboration tools; ML = mobile learning; VW = virtual worlds; LS = learning software; OL = online learning

In regard to mode of delivery and the educational technology tools used, it is interesting to see that, of the top five tools, all except assessment tools were used in face-to-face instruction less often than expected (see Fig. 10), ranging from 1.6 fewer studies for website creation tools to 14.5 fewer studies for knowledge organisation & sharing tools. Assessment tools, however, were used in 3.3 more studies than expected in face-to-face instruction - but (moderately) less often than assumed in blended learning and distance education formats. Text-based tools, multimodal production tools and knowledge organisation & sharing tools were employed more often than expected in blended and distance learning, most obviously with 13.1 more studies on text-based tools and 8.2 more studies on knowledge organisation & sharing tools in distance education. Contrary to what one would perhaps expect, social networking tools were used in 4.2 fewer studies than expected for this mode of delivery.

Fig. 10 Relative frequency of educational technology tools used according to mode of delivery. Note. Tool abbreviations are as per Fig. 7. BL = Blended learning; DE = Distance education; F2F = Face-to-face; NS = Not stated

Discussion

The findings of this study confirm those of previous research, with the most prolific countries being the US, UK, Australia, Taiwan and China. This is rather representative of the field, with an analysis of instructional design and technology research from 2007 to 2017 listing the most productive countries as the US, Taiwan, UK, Australia and Turkey (Bodily, Leary, & West, 2019). Likewise, an analysis of 40 years of research in Computers & Education (CAE) found that the US, UK and Taiwan accounted for 49.9% of all publications (Bond, 2018). By contrast, a lack of African research was apparent in this review, which is also evident in educational technology research in top-tier peer-reviewed journals, with only 4% of articles published in the British Journal of Educational Technology (BJET) in the past decade (Bond, 2019b) and 2% of articles in the Australasian Journal of Educational Technology (AJET) (Bond, 2018) hailing from Africa. Similar results were also found in previous literature and systematic reviews (see Table 1), which again raises questions about literature search and inclusion strategies; these are discussed further in the limitations section.

Whilst other reviews of educational technology and student engagement have found studies to be largely STEM-focused (Boyle et al., 2016; Li et al., 2017; Lundin et al., 2018; Nikou & Economides, 2018), this corpus features a more balanced scope of research, with the fields of Arts & Humanities (42 studies, 17.3%) and Education (42 studies, 17.3%) together constituting roughly one third of all studies in the corpus, and Natural Sciences, Mathematics & Statistics ranking third with 38 studies (15.6%). Beyond these three fields, further research is needed within underrepresented fields of study, in order to gain more comprehensive insights into the usage of educational technology tools (Kay & LeSage, 2009; Nikou & Economides, 2018).

Results of the systematic map further confirm the focus that prior educational technology research has placed on undergraduate students as the target group and participants in technology-enhanced learning settings (e.g., Cheston et al., 2013; Henrie et al., 2015). With an overwhelming 146 studies researching undergraduate students - compared to 33 studies on graduate students and 23 studies investigating both study levels - further investigation into the graduate student experience is clearly needed. Furthermore, the fact that 41 studies did not report the study level of their participants is interesting albeit problematic, as implications cannot easily be drawn for one's own specific teaching context if the target group under investigation is not clearly specified. More precise reporting of participants' details, as well as specification of the study context (country, institution and study level, to name a few), is needed to transfer and apply study results to practice - making it possible to take into account why some interventions succeed and others do not.

In line with other studies (e.g., Henrie et al., 2015), this review has also demonstrated that student engagement remains an under-theorised concept that is often considered only in fragments in research. Whilst studies in this review have often focused on isolated aspects of student engagement, their results are nevertheless interesting and valuable. However, it is important to relate these individual facets to the larger framework of student engagement, by considering how they are connected and linked to each other. This is especially helpful when integrating research findings into practice, given that student engagement and disengagement are rarely one-dimensional; it is not enough to focus on one aspect of engagement alone, without also looking at the aspects adjacent to it (Pekrun & Linnenbrink-Garcia, 2012). It is also vital, therefore, that researchers develop and refine an understanding of student engagement, and make this explicit in their research (Appleton et al., 2008; Christenson et al., 2012).

Reflective of current conversations in the field of educational technology (Bond, 2019b; Castañeda & Selwyn, 2018; Hew et al., 2019), as well as other reviews (Abdool et al., 2017; Hunsu et al., 2016; Kaliisa & Picard, 2017; Lundin et al., 2018), a substantial number of studies in this corpus did not have any theoretical underpinnings. Kaliisa and Picard (2017) argue that, without theory, research can result in disorganised accounts and issues with interpreting data, with research effectively "sit[ting] in a void if it's not theoretically connected" (Kara, 2017, p. 56). Therefore, framing research in educational technology with a stronger theoretical basis can assist with locating the "field's disciplinary alignment" (Crook, 2019, p. 486) and further drive conversations forward.

The application of methods in this corpus was interesting in two ways. First, quantitative studies are prevalent across the 243 articles in the sample; the number of studies employing qualitative research methods was comparatively low (56 studies, as opposed to 84 mixed methods studies and 103 quantitative studies). This is also reflected in the educational technology field at large: a review of articles published in BJET and Educational Technology Research & Development (ETR&D) from 2002 to 2014 revealed that 40% of articles used quantitative methods, 26% qualitative and 13% mixed (Baydas, Kucuk, Yilmaz, Aydemir, & Goktas, 2015), and likewise a review of educational technology research from Turkey from 1990 to 2011 revealed that 53% of articles used quantitative methods, 22% qualitative and 10% mixed (Kucuk, Aydemir, Yildirim, Arpacik, & Goktas, 2013). Quantitative studies primarily show whether or not an intervention has worked when applied to, for example, a group of students in a certain setting, as in the study on mobile apps and student performance in engineering education by Jou, Lin, and Tsai (2016); however, not all student engagement indicators can actually be measured in this way. Second, the lower numbers of affective and cognitive engagement found in the studies in the corpus reflect a wider call to the field to increase research on these two domains (Henrie et al., 2015; Joksimović et al., 2018; O'Flaherty & Phillips, 2015; Schindler et al., 2017). Whilst these two are arguably more difficult to measure than behavioural engagement, the use of more rigorous and accurate surveys could be one possibility, as they can "capture unobservable aspects" (Henrie et al., 2015, p. 45) such as student feelings and information about the cognitive strategies they employ (Finn & Zimmer, 2012). However, surveys are often lengthy and onerous, or subject to the limitations of self-selection.

Whilst low numbers of qualitative studies researching student engagement and educational technology were previously identified in other student engagement and technology reviews (Connolly et al., 2012; Kay & LeSage, 2009; Lundin et al., 2018), it is studies like that by Lopera Medina (2014) in this sample which reveal how people perceive the educational experience and how the process actually unfolds. Therefore, more qualitative and ethnographic measures should also be employed, such as student observations with thick descriptions, which can help shed light on the complexity of teaching and learning environments (Fredricks et al., 2004; Heflin, Shewmaker, & Nguyen, 2017). Conducting observations can be costly, however, both in time and money, so this is suggested in combination with computerised learning analytics data, which can provide measurable, objective and timely insight into how certain manifestations of engagement change over time (Henrie et al., 2015; Ma et al., 2015).

Whereas other results of this review have confirmed previous findings in the field, the technology tools used in the studies, considered in relation to student engagement, deviate from them. Whilst Henrie et al. (2015) found that the most frequently researched tools were discussion forums, general websites, LMS, general campus software and videos, the studies here focused predominantly on LMS, discussion forums, videos, recorded lectures and chat. Furthermore, whilst Schindler et al. (2017) found that digital games, web-conferencing software and Facebook were the most effective tools at enhancing student engagement, this review found that it was rather text-based tools, knowledge organisation & sharing tools, and multimodal production tools.

Limitations

During the execution of this systematic review, we tried to adhere to the method as rigorously as possible. However, several challenges were encountered - some of which are addressed and discussed in another publication (Bedenlier et al., 2020b) - resulting in limitations to this study. Four large, general educational research databases were searched, which are international in scope. However, by applying the criterion of articles published in English, research published on this topic in other languages was not included in this review. The same applies to research documented in, for example, grey literature, book chapters or monographs, or articles from journals that are not indexed in the four databases searched. Another limitation is that only research published within the period 2007–2016 was investigated. Whilst we are cognisant of this restriction, we also think that the technological advances and the implications to be drawn from this time-frame relate more meaningfully to the current situation than would have been the case for technologies used in the 1990s (see Bond, 2019b). The sampling strategy also most likely accounts for the low number of studies from certain countries, e.g. in South America and Africa.

Studies included in this review represent various academic fields, and they also vary in the rigour with which they were conducted. Harden and Gough (2012) stress that the appraisal of quality and relevance of studies "ensure[s] that only the most appropriate, trustworthy and relevant studies are used to develop the conclusions of the review" (p. 154); we therefore included being a peer-reviewed contribution as a formal inclusion criterion from the beginning. In doing so, we reason that studies met a baseline of quality applicable to published research in a specific field - otherwise they would not have been accepted for publication by the respective community. Finally, whilst the studies were diligently read and coded, and disagreements discussed and reconciled, the human flaw of having overlooked or misinterpreted information provided in the individual articles cannot fully be excluded.

Finally, the results presented here provide an initial window into the overall body of research identified during the search, and further research is being undertaken to provide deeper insight into discipline-specific use of technology and resulting student engagement, using subsets of this sample (Bedenlier et al., 2020a; Bond, M., Bedenlier, S., Buntins, K., Kerres, M., & Zawacki-Richter, O.: Facilitating student engagement through educational technology: A systematic review in the field of education, forthcoming).

Recommendations for future work and implications for practice

Whilst the evidence map presented in this article has confirmed previous research on the nexus of educational technology and student engagement, it has also elucidated a number of areas that further research is invited to address. Although these findings are similar to those of previous reviews, in order to more fully and comprehensively understand student engagement as a multi-faceted construct, it is not enough to focus only on indicators of engagement that can easily be measured; the more complex endeavour of uncovering and investigating those indicators that reside below the surface is also required. This includes the careful alignment of theory and methodological design, in order both to adequately analyse the phenomenon under investigation and to contribute to a soundly executed body of research within the field of educational technology. Further research is invited in particular into how educational technology affects cognitive and affective engagement, whilst considering how this fits within the broader sociocultural framework of engagement (Bond, 2019a). Further research is also invited into how educational technology affects student engagement within fields of study beyond Arts & Humanities, Education and Natural Sciences, Mathematics & Statistics, as well as within graduate-level courses. The use of more qualitative research methods is particularly encouraged.

The findings of this review suggest that research gaps exist for particular combinations of tools, study levels and modes of delivery. With respect to study level, the use of assessment tools with graduate students, as well as of knowledge organisation & sharing tools with undergraduate students, are topics researched far less than expected. The use of text-based tools in Engineering, Health & Welfare and Natural Sciences, Mathematics & Statistics, as well as the use of multimodal production tools outside of these disciplines, are also areas for future research, as is the use of assessment tools in the fields of Education and Arts & Humanities in particular.

With 109 studies in this systematic review using a blended learning design, this confirms the argument that online distance education and traditional face-to-face education are becoming increasingly integrated with one another. Whilst this indicates that many educators have made the move from face-to-face teaching to technology-enhanced learning, it also makes a case for further professional development, so that educators can apply these tools effectively within their own teaching contexts; this review indicates that further research is needed in particular into the use of social networking tools in online/distance education. Questions also need to be asked not only about why the number of published studies is low within certain countries and regions, but also about the reasons behind this. This entails questioning the conditions under which research is being conducted, potentially criticising the publication policies of major Western-based journals, but also ultimately reflecting on one's own search strategy and research assumptions as a Western educator-researcher.

Based on the findings of this review, educators within higher education institutions are encouraged to use text-based tools, knowledge organisation & sharing tools, and multimodal production tools in particular and, whilst any technology can lead to disengagement if not employed effectively, to be mindful that website creation tools (blogs and ePortfolios), social networking tools and assessment tools were found to be more disengaging than engaging in this review. Educators are therefore encouraged to ensure that students receive sufficient and ongoing training for any new technology used, including those that might appear straightforward, e.g. blogs, and to recognise that students may require extra writing support. Discussion and blog topics should be interesting, allow student agency, and be authentic to students, including through the use of social media. Social networking tools that augment student professional learning networks are particularly useful. Educators should also be aware, however, that some students do not want to mix their academic and personal lives, and so the decision to use certain social platforms could be made together with students.

Availability of data and materials

All data will be made publicly available, as part of the funding requirements, via https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn .

The detailed search strategy, including the modified search strings according to the individual databases, can be retrieved from https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn .

The full code set can be retrieved from the review protocol at https://www.researchgate.net/project/Facilitating-student-engagement-with-digital-media-in-higher-education-ActiveLeaRn .

References

Abdool, P. S., Nirula, L., Bonato, S., Rajji, T. K., & Silver, I. L. (2017). Simulation in undergraduate psychiatry: Exploring the depth of learner engagement. Academic Psychiatry : the Journal of the American Association of Directors of Psychiatric Residency Training and the Association for Academic Psychiatry , 41 (2), 251–261. https://doi.org/10.1007/s40596-016-0633-9 .

Alioon, Y., & Delialioğlu, Ö. (2017). The effect of authentic m-learning activities on student engagement and motivation. British Journal of Educational Technology , 32 , 121. https://doi.org/10.1111/bjet.12559 .

Alrasheedi, M., Capretz, L. F., & Raza, A. (2015). A systematic review of the critical factors for success of mobile learning in higher education (university students’ perspective). Journal of Educational Computing Research , 52 (2), 257–276. https://doi.org/10.1177/0735633115571928 .

Al-Sakkaf, A., Omar, M., & Ahmad, M. (2019). A systematic literature review of student engagement in software visualization: A theoretical perspective. Computer Science Education , 29 (2–3), 283–309. https://doi.org/10.1080/08993408.2018.1564611 .

Andrew, L., Ewens, B., & Maslin-Prothero, S. (2015). Enhancing the online learning experience using virtual interactive classrooms. Australian Journal of Advanced Nursing , 32 (4), 22–31.

Antonenko, P. D. (2015). The instrumental value of conceptual frameworks in educational technology research. Educational Technology Research and Development , 63 (1), 53–71. https://doi.org/10.1007/s11423-014-9363-4 .

Appleton, J. J., Christenson, S. L., & Furlong, M. J. (2008). Student engagement with school: Critical conceptual and methodological issues of the construct. Psychology in the Schools , 45 (5), 369–386. https://doi.org/10.1002/pits.20303 .

Arnold, N., & Paulus, T. (2010). Using a social networking site for experiential learning: Appropriating, lurking, modeling and community building. Internet and Higher Education , 13 (4), 188–196. https://doi.org/10.1016/j.iheduc.2010.04.002 .

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Development , 25 (4), 297–308.

Astin, A. W. (1999). Student involvement: A developmental theory for higher education. Journal of College Student Development , 40 (5), 518–529. https://www.researchgate.net/publication/220017441 (Original work published July 1984).

Atmacasoy, A., & Aksu, M. (2018). Blended learning at pre-service teacher education in Turkey: A systematic review. Education and Information Technologies , 23 (6), 2399–2422. https://doi.org/10.1007/s10639-018-9723-5 .

Azevedo, R. (2015). Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educational Psychologist , 50 (1), 84–94. https://doi.org/10.1080/00461520.2015.1004069 .

Bandura, A. (1971). Social learning theory . New York: General Learning Press.

Barak, M. (2018). Are digital natives open to change? Examining flexible thinking and resistance to change. Computers & Education , 121 , 115–123. https://doi.org/10.1016/j.compedu.2018.01.016 .

Barak, M., & Levenberg, A. (2016). Flexible thinking in learning: An individual differences measure for learning in technology-enhanced environments. Computers & Education , 99 , 39–52. https://doi.org/10.1016/j.compedu.2016.04.003 .

Baron, P., & Corbin, L. (2012). Student engagement: Rhetoric and reality. Higher Education Research and Development , 31 (6), 759–772. https://doi.org/10.1080/07294360.2012.655711 .

Baydas, O., Kucuk, S., Yilmaz, R. M., Aydemir, M., & Goktas, Y. (2015). Educational technology research trends from 2002 to 2014. Scientometrics , 105 (1), 709–725. https://doi.org/10.1007/s11192-015-1693-4 .

Bedenlier, S., Bond, M., Buntins, K., Zawacki-Richter, O., & Kerres, M. (2020a). Facilitating student engagement through educational technology in higher education: A systematic review in the field of arts & humanities. Australasian Journal of Educational Technology , 36 (4), 27–47. https://doi.org/10.14742/ajet.5477 .

Bedenlier, S., Bond, M., Buntins, K., Zawacki-Richter, O., & Kerres, M. (2020b). Learning by Doing? Reflections on Conducting a Systematic Review in the Field of Educational Technology. In O. Zawacki-Richter, M. Kerres, S. Bedenlier, M. Bond, & K. Buntins (Eds.), Systematic Reviews in Educational Research (Vol. 45 , pp. 111–127). Wiesbaden: Springer Fachmedien Wiesbaden. https://doi.org/10.1007/978-3-658-27602-7_7 .

Ben-Eliyahu, A., Moore, D., Dorph, R., & Schunn, C. D. (2018). Investigating the multidimensionality of engagement: Affective, behavioral, and cognitive engagement across science activities and contexts. Contemporary Educational Psychology , 53 , 87–105. https://doi.org/10.1016/j.cedpsych.2018.01.002 .

Betihavas, V., Bridgman, H., Kornhaber, R., & Cross, M. (2016). The evidence for ‘flipping out’: A systematic review of the flipped classroom in nursing education. Nurse Education Today , 38 , 15–21. https://doi.org/10.1016/j.nedt.2015.12.010 .

Bigatel, P., & Williams, V. (2015). Measuring student engagement in an online program. Online Journal of Distance Learning Administration , 18 (2), 9.

Bodily, R., Leary, H., & West, R. E. (2019). Research trends in instructional design and technology journals. British Journal of Educational Technology , 50 (1), 64–79. https://doi.org/10.1111/bjet.12712 .

Boekaerts, M. (2016). Engagement as an inherent aspect of the learning process. Learning and Instruction , 43 , 76–83. https://doi.org/10.1016/j.learninstruc.2016.02.001 .

Bolden, B., & Nahachewsky, J. (2015). Podcast creation as transformative music engagement. Music Education Research , 17 (1), 17–33. https://doi.org/10.1080/14613808.2014.969219 .

Bond, M. (2018). Helping doctoral students crack the publication code: An evaluation and content analysis of the Australasian Journal of Educational Technology. Australasian Journal of Educational Technology , 34 (5), 168–183. https://doi.org/10.14742/ajet.4363 .

Bond, M., & Bedenlier, S. (2019a). Facilitating student engagement through educational technology: Towards a conceptual framework. Journal of Interactive Media in Education , 2019 (1), 1–14. https://doi.org/10.5334/jime.528 .

Bond, M., Zawacki-Richter, O., & Nichols, M. (2019b). Revisiting five decades of educational technology research: A content and authorship analysis of the British Journal of Educational Technology. British Journal of Educational Technology , 50 (1), 12–63. https://doi.org/10.1111/bjet.12730 .

Bouta, H., Retalis, S., & Paraskeva, F. (2012). Utilising a collaborative macro-script to enhance student engagement: A mixed method study in a 3D virtual environment. Computers & Education , 58 (1), 501–517. https://doi.org/10.1016/j.compedu.2011.08.031 .

Bower, M. (2015). A typology of web 2.0 learning technologies . EDUCAUSE Digital Library Retrieved 20 June 2019, from http://www.educause.edu/library/resources/typology-web-20-learning-technologies .

Bower, M. (2016). Deriving a typology of web 2.0 learning technologies. British Journal of Educational Technology , 47 (4), 763–777. https://doi.org/10.1111/bjet.12344 .

Boyle, E. A., Connolly, T. M., Hainey, T., & Boyle, J. M. (2012). Engagement in digital entertainment games: A systematic review. Computers in Human Behavior , 28 (3), 771–780. https://doi.org/10.1016/j.chb.2011.11.020 .

Boyle, E. A., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., … Pereira, J. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education , 94 , 178–192. https://doi.org/10.1016/j.compedu.2015.11.003 .

Broadbent, J., & Poon, W. L. (2015). Self-regulated learning strategies & academic achievement in online higher education learning environments: A systematic review. The Internet and Higher Education , 27 , 1–13. https://doi.org/10.1016/j.iheduc.2015.04.007 .

Brunton, G., Stansfield, C., & Thomas, J. (2012). Finding relevant studies. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews , (pp. 107–134). Los Angeles: Sage.

Bryman, A. (2007). The research question in social research: What is its role? International Journal of Social Research Methodology , 10 (1), 5–20. https://doi.org/10.1080/13645570600655282 .

Bulu, S. T., & Yildirim, Z. (2008). Communication behaviors and trust in collaborative online teams. Educational Technology & Society , 11 (1), 132–147.

Bundick, M., Quaglia, R., Corso, M., & Haywood, D. (2014). Promoting student engagement in the classroom. Teachers College Record , 116 (4) Retrieved from http://www.tcrecord.org/content.asp?contentid=17402 .

Castañeda, L., & Selwyn, N. (2018). More than tools? Making sense of the ongoing digitizations of higher education. International Journal of Educational Technology in Higher Education , 15 (1), 211. https://doi.org/10.1186/s41239-018-0109-y .

Chen, P.-S. D., Lambert, A. D., & Guidry, K. R. (2010). Engaging online learners: The impact of web-based learning technology on college student engagement. Computers & Education , 54 (4), 1222–1232. https://doi.org/10.1016/j.compedu.2009.11.008 .

Cheston, C. C., Flickinger, T. E., & Chisolm, M. S. (2013). Social media use in medical education: A systematic review. Academic Medicine : Journal of the Association of American Medical Colleges , 88 (6), 893–901. https://doi.org/10.1097/ACM.0b013e31828ffc23 .

Choi, M., Glassman, M., & Cristol, D. (2017). What it means to be a citizen in the internet age: Development of a reliable and valid digital citizenship scale. Computers & Education , 107 , 100–112. https://doi.org/10.1016/j.compedu.2017.01.002 .

Christenson, S. L., Reschly, A. L., & Wylie, C. (Eds.) (2012). Handbook of research on student engagement . Boston: Springer US.

Coates, H. (2007). A model of online and general campus-based student engagement. Assessment & Evaluation in Higher Education , 32 (2), 121–141. https://doi.org/10.1080/02602930600801878 .

Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education , 59 (2), 661–686. https://doi.org/10.1016/j.compedu.2012.03.004 .

Cook, M. P., & Bissonnette, J. D. (2016). Developing preservice teachers’ positionalities in 140 characters or less: Examining microblogging as dialogic space. Contemporary Issues in Technology and Teacher Education (CITE Journal) , 16 (2), 82–109.

Crompton, H., Burke, D., Gregory, K. H., & Gräbe, C. (2016). The use of mobile learning in science: A systematic review. Journal of Science Education and Technology , 25 (2), 149–160. https://doi.org/10.1007/s10956-015-9597-x .

Crook, C. (2019). The “British” voice of educational technology research: 50th birthday reflection. British Journal of Educational Technology , 50 (2), 485–489. https://doi.org/10.1111/bjet.12757 .

Davies, M. (2014). Using the apple iPad to facilitate student-led group work and seminar presentation. Nurse Education in Practice , 14 (4), 363–367. https://doi.org/10.1016/j.nepr.2014.01.006 .

Delialioglu, O. (2012). Student engagement in blended learning environments with lecture-based and problem-based instructional approaches. Educational Technology & Society , 15 (3), 310–322.

DePaolo, C. A., & Wilkinson, K. (2014). Recurrent online quizzes: Ubiquitous tools for promoting student presence, participation and performance. Interdisciplinary Journal of E-Learning and Learning Objects , 10 , 75–91 Retrieved from http://www.ijello.org/Volume10/IJELLOv10p075-091DePaolo0900.pdf .

Doherty, K., & Doherty, G. (2018). Engagement in HCI. ACM Computing Surveys , 51 (5), 1–39. https://doi.org/10.1145/3234149 .

Eccles, J. (2016). Engagement: Where to next? Learning and Instruction , 43 , 71–75. https://doi.org/10.1016/j.learninstruc.2016.02.003 .

Eccles, J., & Wang, M.-T. (2012). Part I commentary: So what is student engagement anyway? In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 133–145). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_6 .

Englund, C., Olofsson, A. D., & Price, L. (2017). Teaching with technology in higher education: Understanding conceptual change and development in practice. Higher Education Research and Development , 36 (1), 73–87. https://doi.org/10.1080/07294360.2016.1171300 .

Fabian, K., Topping, K. J., & Barron, I. G. (2016). Mobile technology and mathematics: Effects on students’ attitudes, engagement, and achievement. Journal of Computers in Education , 3 (1), 77–104. https://doi.org/10.1007/s40692-015-0048-8 .

Filsecker, M., & Kerres, M. (2014). Engagement as a volitional construct. Simulation & Gaming , 45 (4–5), 450–470. https://doi.org/10.1177/1046878114553569 .

Finn, J. (2006). The adult lives of at-risk students: The roles of attainment and engagement in high school (NCES 2006-328) . Washington, DC: U.S. Department of Education, National Center for Education Statistics. Retrieved from https://nces.ed.gov/pubs2006/2006328.pdf .

Finn, J., & Zimmer, K. (2012). Student engagement: What is it? Why does it matter? In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 97–131). Boston: Springer US. https://doi.org/10.1007/978-1-4614-2018-7_5 .

Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research , 74 (1), 59–109. https://doi.org/10.3102/00346543074001059 .

Fredricks, J. A., Filsecker, M., & Lawson, M. A. (2016). Student engagement, context, and adjustment: Addressing definitional, measurement, and methodological issues. Learning and Instruction , 43 , 1–4. https://doi.org/10.1016/j.learninstruc.2016.02.002 .

Fredricks, J. A., Wang, M.-T., Schall Linn, J., Hofkens, T. L., Sung, H., Parr, A., & Allerton, J. (2016). Using qualitative methods to develop a survey measure of math and science engagement. Learning and Instruction , 43 , 5–15. https://doi.org/10.1016/j.learninstruc.2016.01.009 .

Fukuzawa, S., & Boyd, C. (2016). Student engagement in a large classroom: Using technology to generate a hybridized problem-based learning experience in a large first year undergraduate class. Canadian Journal for the Scholarship of Teaching and Learning , 7 (1). https://doi.org/10.5206/cjsotl-rcacea.2016.1.7 .

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research . Chicago: Aldine.

Gleason, J. (2012). Using technology-assisted instruction and assessment to reduce the effect of class size on student outcomes in undergraduate mathematics courses. College Teaching , 60 (3), 87–94.

Gough, D., Oliver, S., & Thomas, J. (2012). An introduction to systematic reviews . Los Angeles: Sage.

Granberg, C. (2010). Social software for reflective dialogue: Questions about reflection and dialogue in student Teachers’ blogs. Technology, Pedagogy and Education , 19 (3), 345–360. https://doi.org/10.1080/1475939X.2010.513766 .

Greenwood, L., & Kelly, C. (2019). A systematic literature review to explore how staff in schools describe how a sense of belonging is created for their pupils. Emotional and Behavioural Difficulties , 24 (1), 3–19. https://doi.org/10.1080/13632752.2018.1511113 .

Gupta, M. L. (2009). Using emerging technologies to promote student engagement and learning in agricultural mathematics. International Journal of Learning , 16 (10), 497–508. https://doi.org/10.18848/1447-9494/CGP/v16i10/46658 .

Harden, A., & Gough, D. (2012). Quality and relevance appraisal. In D. Gough, S. Oliver, & J. Thomas (Eds.), An introduction to systematic reviews , (pp. 153–178). London: Sage.

Hatzipanagos, S., & Code, J. (2016). Open badges in online learning environments: Peer feedback and formative assessment as an engagement intervention for promoting agency. Journal of Educational Multimedia and Hypermedia , 25 (2), 127–142.

Heflin, H., Shewmaker, J., & Nguyen, J. (2017). Impact of mobile technology on student attitudes, engagement, and learning. Computers & Education , 107 , 91–99. https://doi.org/10.1016/j.compedu.2017.01.006 .

Henderson, M., Selwyn, N., & Aston, R. (2017). What works and why? Student perceptions of ‘useful’ digital technology in university teaching and learning. Studies in Higher Education , 42 (8), 1567–1579. https://doi.org/10.1080/03075079.2015.1007946 .

Hennessy, S., Girvan, C., Mavrikis, M., Price, S., & Winters, N. (2018). Editorial. British Journal of Educational Technology , 49 (1), 3–5. https://doi.org/10.1111/bjet.12598 .

Henrie, C. R., Halverson, L. R., & Graham, C. R. (2015). Measuring student engagement in technology-mediated learning: A review. Computers & Education , 90 , 36–53. https://doi.org/10.1016/j.compedu.2015.09.005 .

Hew, K. F., & Cheung, W. S. (2013). Use of web 2.0 technologies in K-12 and higher education: The search for evidence-based practice. Educational Research Review , 9 , 47–64. https://doi.org/10.1016/j.edurev.2012.08.001 .

Hew, K. F., Lan, M., Tang, Y., Jia, C., & Lo, C. K. (2019). Where is the “theory” within the field of educational technology research? British Journal of Educational Technology , 50 (3), 956–971. https://doi.org/10.1111/bjet.12770 .

Howard, S. K., Ma, J., & Yang, J. (2016). Student rules: Exploring patterns of students’ computer-efficacy and engagement with digital technologies in learning. Computers & Education , 101 , 29–42. https://doi.org/10.1016/j.compedu.2016.05.008 .

Hu, S., & Kuh, G. D. (2002). Being (dis)engaged in educationally purposeful activities: The influences of student and institutional characteristics. Research in Higher Education , 43 (5), 555–575. https://doi.org/10.1023/A:1020114231387 .

Hunsu, N. J., Adesope, O., & Bayly, D. J. (2016). A meta-analysis of the effects of audience response systems (clicker-based technologies) on cognition and affect. Computers & Education , 94 , 102–119. https://doi.org/10.1016/j.compedu.2015.11.013 .

Ikpeze, C. (2007). Small group collaboration in peer-led electronic discourse: An analysis of group dynamics and interactions involving Preservice and Inservice teachers. Journal of Technology and Teacher Education , 15 (3), 383–407.

Ivala, E., & Gachago, D. (2012). Social media for enhancing student engagement: The use of Facebook and blogs at a university of technology. South African Journal of Higher Education , 26 (1), 152–167.

Järvelä, S., Järvenoja, H., Malmberg, J., Isohätälä, J., & Sobocinski, M. (2016). How do types of interaction and phases of self-regulated learning set a stage for collaborative engagement? Learning and Instruction , 43 , 39–51. https://doi.org/10.1016/j.learninstruc.2016.01.005 .

Joksimović, S., Poquet, O., Kovanović, V., Dowell, N., Mills, C., Gašević, D., … Brooks, C. (2018). How do we model learning at scale? A systematic review of research on MOOCs. Review of Educational Research , 88 (1), 43–86. https://doi.org/10.3102/0034654317740335 .

Jou, M., Lin, Y.-T., & Tsai, H.-C. (2016). Mobile APP for motivation to learning: An engineering case. Interactive Learning Environments , 24 (8), 2048–2057. https://doi.org/10.1080/10494820.2015.1075136 .

Junco, R. (2012). The relationship between frequency of Facebook use, participation in Facebook activities, and student engagement. Computers & Education , 58 (1), 162–171. https://doi.org/10.1016/j.compedu.2011.08.004 .

Kahn, P. (2014). Theorising student engagement in higher education. British Educational Research Journal , 40 (6), 1005–1018. https://doi.org/10.1002/berj.3121 .

Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education , 38 (5), 758–773. https://doi.org/10.1080/03075079.2011.598505 .

Kahu, E. R., & Nelson, K. (2018). Student engagement in the educational interface: Understanding the mechanisms of student success. Higher Education Research and Development , 37 (1), 58–71. https://doi.org/10.1080/07294360.2017.1344197 .

Kaliisa, R., & Picard, M. (2017). A systematic review on mobile learning in higher education: The African perspective. The Turkish Online Journal of Educational Technology , 16 (1) Retrieved from https://files.eric.ed.gov/fulltext/EJ1124918.pdf .

Kara, H. (2017). Research and evaluation for busy students and practitioners: A time-saving guide , (2nd ed., ). Bristol: Policy Press.

Karabulut-Ilgu, A., Jaramillo Cherrez, N., & Jahren, C. T. (2018). A systematic review of research on the flipped learning method in engineering education: Flipped learning in engineering education. British Journal of Educational Technology , 49 (3), 398–411. https://doi.org/10.1111/bjet.12548 .

Kay, R. H., & LeSage, A. (2009). Examining the benefits and challenges of using audience response systems: A review of the literature. Computers & Education , 53 (3), 819–827. https://doi.org/10.1016/j.compedu.2009.05.001 .

Keiller, L., & Inglis-Jassiem, G. (2015). A lesson in listening: Is the student voice heard in the rush to incorporate technology into health professions education? African Journal of Health Professions Education , 7 (1), 47–50. https://doi.org/10.7196/ajhpe.371 .

Kelley, K., & Lai, K. (2018). Package ‘MBESS’. Retrieved from https://cran.r-project.org/web/packages/MBESS/MBESS.pdf

Kerres, M. (2013). Mediendidaktik. Konzeption und Entwicklung mediengestützter Lernangebote . München: Oldenbourg.

Kirkwood, A. (2009). E-learning: You don’t always get what you hope for. Technology, Pedagogy and Education , 18 (2), 107–121. https://doi.org/10.1080/14759390902992576 .

Koehler, M., & Mishra, P. (2005). What happens when teachers design educational technology? The development of technological pedagogical content knowledge. Journal of Educational Computing Research , 32 (2), 131–152.

Krause, K.-L., & Coates, H. (2008). Students’ engagement in first-year university. Assessment & Evaluation in Higher Education , 33 (5), 493–505. https://doi.org/10.1080/02602930701698892 .

Kucuk, S., Aydemir, M., Yildirim, G., Arpacik, O., & Goktas, Y. (2013). Educational technology research trends in Turkey from 1990 to 2011. Computers & Education , 68 , 42–50. https://doi.org/10.1016/j.compedu.2013.04.016 .

Kuh, G. D. (2001). The National Survey of student engagement: Conceptual framework and overview of psychometric properties . Bloomington: Indiana University Center for Postsecondary Research Retrieved from http://nsse.indiana.edu/2004_annual_report/pdf/2004_conceptual_framework.pdf .

Kuh, G. D. (2009). What student affairs professionals need to know about student engagement. Journal of College Student Development , 50 (6), 683–706. https://doi.org/10.1353/csd.0.0099 .

Kuh, G. D., Cruce, T. M., Shoup, R., Kinzie, J., & Gonyea, R. M. (2008). Unmasking the effects of student engagement on first-year college grades and persistence. The Journal of Higher Education , 79 (5), 540–563. Retrieved from http://www.jstor.org/stable/25144692 .

Kuh, G. D., Kinzie, J., Buckley, J. A., Bridges, B. K., & Hayek, J. C. (2006). What matters to student success: A review of the literature . Washington, DC: National Postsecondary Education Cooperative.

Kupper, L. L., & Hafner, K. B. (1989). How appropriate are popular sample size formulas? The American Statistician , 43 (2), 101–105.

Lai, J. W. M., & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education , 133 , 27–42. https://doi.org/10.1016/j.compedu.2019.01.010 .

Lawson, M. A., & Lawson, H. A. (2013). New conceptual frameworks for student engagement research, policy, and practice. Review of Educational Research , 83 (3), 432–479. https://doi.org/10.3102/0034654313480891 .

Leach, L., & Zepke, N. (2011). Engaging students in learning: A review of a conceptual organiser. Higher Education Research and Development , 30 (2), 193–204. https://doi.org/10.1080/07294360.2010.509761 .

Li, J., van der Spek, E. D., Feijs, L., Wang, F., & Hu, J. (2017). Augmented reality games for learning: A literature review. In N. Streitz, & P. Markopoulos (Eds.), Lecture Notes in Computer Science. Distributed, Ambient and Pervasive Interactions , (vol. 10291, pp. 612–626). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-58697-7_46 .

Lim, C. (2004). Engaging learners in online learning environments. TechTrends , 48 (4), 16–23 Retrieved from https://link.springer.com/content/pdf/10.1007%2FBF02763440.pdf .

Lopera Medina, S. (2014). Motivation conditions in a foreign language reading comprehension course offering both a web-based modality and a face-to-face modality (Las condiciones de motivación en un curso de comprensión de lectura en lengua extranjera (LE) ofrecido tanto en la modalidad presencial como en la modalidad a distancia en la web). PROFILE: Issues in Teachers’ Professional Development , 16 (1), 89–104 Retrieved from https://search.proquest.com/docview/1697487398?accountid=12968 .

Lundin, M., Bergviken Rensfeldt, A., Hillman, T., Lantz-Andersson, A., & Peterson, L. (2018). Higher education dominance and siloed knowledge: A systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education , 15 (1), 1. https://doi.org/10.1186/s41239-018-0101-6 .

Ma, J., Han, X., Yang, J., & Cheng, J. (2015). Examining the necessary condition for engagement in an online learning environment based on learning analytics approach: The role of the instructor. The Internet and Higher Education , 24 , 26–34. https://doi.org/10.1016/j.iheduc.2014.09.005 .

Mahatmya, D., Lohman, B. J., Matjasko, J. L., & Farb, A. F. (2012). Engagement across developmental periods. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 45–63). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_3 .

Mansouri, A. S., & Piki, A. (2016). An exploration into the impact of blogs on students’ learning: Case studies in postgraduate business education. Innovations in Education and Teaching International , 53 (3), 260–273. https://doi.org/10.1080/14703297.2014.997777 .

Martin, A. J. (2012). Motivation and engagement: Conceptual, operational, and empirical clarity. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 303–311). Boston: Springer US. https://doi.org/10.1007/978-1-4614-2018-7_14 .

McCutcheon, K., Lohan, M., Traynor, M., & Martin, D. (2015). A systematic review evaluating the impact of online or blended learning vs. face-to-face learning of clinical skills in undergraduate nurse education. Journal of Advanced Nursing , 71 (2), 255–270. https://doi.org/10.1111/jan.12509 .

Miake-Lye, I. M., Hempel, S., Shanman, R., & Shekelle, P. G. (2016). What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Systematic Reviews , 5 , 28. https://doi.org/10.1186/s13643-016-0204-x .

Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ (Clinical Research Ed.) , 339 , b2535. https://doi.org/10.1136/bmj.b2535 .

Nelson Laird, T. F., & Kuh, G. D. (2005). Student experiences with information technology and their relationship to other aspects of student engagement. Research in Higher Education , 46 (2), 211–233. https://doi.org/10.1007/s11162-004-1600-y .

Nguyen, L., Barton, S. M., & Nguyen, L. T. (2015). iPads in higher education-hype and hope. British Journal of Educational Technology , 46 (1), 190–203. https://doi.org/10.1111/bjet.12137 .

Nicholas, D., Watkinson, A., Jamali, H. R., Herman, E., Tenopir, C., Volentine, R., … Levine, K. (2015). Peer review: Still king in the digital age. Learned Publishing , 28 (1), 15–21. https://doi.org/10.1087/20150104 .

Nikou, S. A., & Economides, A. A. (2018). Mobile-based assessment: A literature review of publications in major referred journals from 2009 to 2018. Computers & Education , 125 , 101–119. https://doi.org/10.1016/j.compedu.2018.06.006 .

Norris, L., & Coutas, P. (2014). Cinderella’s coach or just another pumpkin? Information communication technologies and the continuing marginalisation of languages in Australian schools. Australian Review of Applied Linguistics , 37 (1), 43–61 Retrieved from http://www.jbe-platform.com/content/journals/10.1075/aral.37.1.03nor .

OECD (2015a). Schooling redesigned. Educational Research and Innovation . OECD Publishing Retrieved from http://www.oecd-ilibrary.org/education/schooling-redesigned_9789264245914-en .

OECD (2015b). Students, computers and learning . PISA: OECD Publishing Retrieved from http://www.oecd-ilibrary.org/education/students-computers-and-learning_9789264239555-en .

O’Flaherty, J., & Phillips, C. (2015). The use of flipped classrooms in higher education: A scoping review. The Internet and Higher Education , 25 , 85–95. https://doi.org/10.1016/j.iheduc.2015.02.002 .

O’Gorman, E., Salmon, N., & Murphy, C.-A. (2016). Schools as sanctuaries: A systematic review of contextual factors which contribute to student retention in alternative education. International Journal of Inclusive Education , 20 (5), 536–551. https://doi.org/10.1080/13603116.2015.1095251 .

Oliver, B., & de St Jorre, Trina, J. (2018). Graduate attributes for 2020 and beyond: recommendations for Australian higher education providers. Higher Education Research and Development , 1–16. https://doi.org/10.1080/07294360.2018.1446415 .

O’Mara-Eves, A., Brunton, G., McDaid, D., Kavanagh, J., Oliver, S., & Thomas, J. (2014). Techniques for identifying cross-disciplinary and ‘hard-to-detect’ evidence for systematic review. Research Synthesis Methods , 5 (1), 50–59. https://doi.org/10.1002/jrsm.1094 .

Payne, L. (2017). Student engagement: Three models for its investigation. Journal of Further and Higher Education , 3 (2), 1–17. https://doi.org/10.1080/0309877X.2017.1391186 .

Pekrun, R., & Linnenbrink-Garcia, L. (2012). Academic emotions and student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 259–282). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_12 .

Popenici, S. (2013). Towards a new vision for university governance, pedagogies and student engagement. In E. Dunne, & D. Owen (Eds.), The student engagement handbook: Practice in higher education , (1st ed., pp. 23–42). Bingley: Emerald.

Price, L., Richardson, J. T., & Jelfs, A. (2007). Face-to-face versus online tutoring support in distance education. Studies in Higher Education , 32 (1), 1–20.

Quin, D. (2017). Longitudinal and contextual associations between teacher–student relationships and student engagement. Review of Educational Research , 87 (2), 345–387. https://doi.org/10.3102/0034654316669434 .

Rashid, T., & Asghar, H. M. (2016). Technology use, self-directed learning, student engagement and academic performance: Examining the interrelations. Computers in Human Behavior , 63 , 604–612. https://doi.org/10.1016/j.chb.2016.05.084 .

Redecker, C. (2017). European framework for the digital competence of educators . Luxembourg: Office of the European Union.

Redmond, P., Heffernan, A., Abawi, L., Brown, A., & Henderson, R. (2018). An online engagement framework for higher education. Online Learning , 22 (1). https://doi.org/10.24059/olj.v22i1.1175 .

Reeve, J. (2012). A self-determination theory perspective on student engagement. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 149–172). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_7 .

Reeve, J., & Tseng, C.-M. (2011). Agency as a fourth aspect of students’ engagement during learning activities. Contemporary Educational Psychology , 36 (4), 257–267. https://doi.org/10.1016/j.cedpsych.2011.05.002 .

Reschly, A. L., & Christenson, S. L. (2012). Jingle, jangle, and conceptual haziness: Evolution and future directions of the engagement construct. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 3–19). Boston: Springer US Retrieved from http://link.springer.com/10.1007/978-1-4614-2018-7_1 .

Salaber, J. (2014). Facilitating student engagement and collaboration in a large postgraduate course using wiki-based activities. The International Journal of Management Education , 12 (2), 115–126. https://doi.org/10.1016/j.ijme.2014.03.006 .

Schindler, L. A., Burkholder, G. J., Morad, O. A., & Marsh, C. (2017). Computer-based technology and student engagement: A critical review of the literature. International Journal of Educational Technology in Higher Education , 14 (1), 253. https://doi.org/10.1186/s41239-017-0063-0 .

Selwyn, N. (2016). Digital downsides: Exploring university students’ negative engagements with digital technology. Teaching in Higher Education , 21 (8), 1006–1021. https://doi.org/10.1080/13562517.2016.1213229 .

Shonfeld, M., & Ronen, I. (2015). Online learning for students from diverse backgrounds: Learning disability students, excellent students and average students. IAFOR Journal of Education , 3 (2), 13–29.

Skinner, E., & Pitzer, J. R. (2012). Developmental dynamics of student engagement, coping, and everyday resilience. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of research on student engagement , (pp. 21–44). Boston: Springer US.

Smidt, E., Bunk, J., McGrory, B., Li, R., & Gatenby, T. (2014). Student attitudes about distance education: Focusing on context and effective practices. IAFOR Journal of Education , 2 (1), 40–64.

Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine , 99 , 178–182.

Smith, T., & Lambert, R. (2014). A systematic review investigating the use of twitter and Facebook in university-based healthcare education. Health Education , 114 (5), 347–366. https://doi.org/10.1108/HE-07-2013-0030 .

Solomonides, I. (2013). A relational and multidimensional model of student engagement. In E. Dunne, & D. Owen (Eds.), The student engagement handbook: Practice in higher education , (1st ed., pp. 43–58). Bingley: Emerald.

Sosa Neira, E. A., Salinas, J., & de Benito, B. (2017). Emerging technologies (ETs) in education: A systematic review of the literature published between 2006 and 2016. International Journal of Emerging Technologies in Learning (IJET) , 12 (05), 128. https://doi.org/10.3991/ijet.v12i05.6939 .

Sullivan, M., & Longnecker, N. (2014). Class blogs as a teaching tool to promote writing and student interaction. Australasian Journal of Educational Technology , 30 (4), 390–401. https://doi.org/10.14742/ajet.322 .

Sun, J. C.-Y., & Rueda, R. (2012). Situational interest, computer self-efficacy and self-regulation: Their impact on student engagement in distance education. British Journal of Educational Technology , 43 (2), 191–204. https://doi.org/10.1111/j.1467-8535.2010.01157.x .

Szabo, Z., & Schwartz, J. (2011). Learning methods for teacher education: The use of online discussions to improve critical thinking. Technology, Pedagogy and Education , 20 (1), 79–94. https://doi.org/10.1080/1475939x.2010.534866 .

Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F. (2011). What forty years of research says about the impact of technology on learning: A second-order meta-analysis and validation study. Review of Educational Research , 81 (1), 4–28. https://doi.org/10.3102/0034654310393361 .

Trowler, V. (2010). Student engagement literature review . York: The Higher Education Academy Retrieved from website: https://www.heacademy.ac.uk/system/files/studentengagementliteraturereview_1.pdf .

Van Rooij, E., Brouwer, J., Fokkens-Bruinsma, M., Jansen, E., Donche, V., & Noyens, D. (2017). A systematic review of factors related to first-year students’ success in Dutch and Flemish higher education. Pedagogische Studien , 94 (5), 360–405 Retrieved from https://repository.uantwerpen.be/docman/irua/cebc4c/149722.pdf .

Vural, O. F. (2013). The impact of a question-embedded video-based learning tool on E-learning. Educational Sciences: Theory and Practice , 13 (2), 1315–1323.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes . Cambridge: Harvard University Press.

Webb, L., Clough, J., O’Reilly, D., Wilmott, D., & Witham, G. (2017). The utility and impact of information communication technology (ICT) for pre-registration nurse education: A narrative synthesis systematic review. Nurse Education Today , 48 , 160–171. https://doi.org/10.1016/j.nedt.2016.10.007 .

Wekullo, C. S. (2019). International undergraduate student engagement: Implications for higher education administrators. Journal of International Students , 9 (1), 320–337. https://doi.org/10.32674/jis.v9i1.257 .

Wimpenny, K., & Savin-Baden, M. (2013). Alienation, agency and authenticity: A synthesis of the literature on student engagement. Teaching in Higher Education , 18 (3), 311–326. https://doi.org/10.1080/13562517.2012.725223 .

Winstone, N. E., Nash, R. A., Parker, M., & Rowntree, J. (2017). Supporting learners’ agentic engagement with feedback: A systematic review and a taxonomy of recipience processes. Educational Psychologist , 52 (1), 17–37. https://doi.org/10.1080/00461520.2016.1207538 .

Zepke, N. (2014). Student engagement research in higher education: Questioning an academic orthodoxy. Teaching in Higher Education , 19 (6), 697–708. https://doi.org/10.1080/13562517.2014.901956 .

Zepke, N. (2018). Student engagement in neo-liberal times: What is missing? Higher Education Research and Development , 37 (2), 433–446. https://doi.org/10.1080/07294360.2017.1370440 .

Zepke, N., & Leach, L. (2010). Improving student engagement: Ten proposals for action. Active Learning in Higher Education , 11 (3), 167–177. https://doi.org/10.1177/1469787410379680 .

Zhang, A., & Aasheim, C. (2011). Academic success factors: An IT student perspective. Journal of Information Technology Education: Research , 10 , 309–331. https://doi.org/10.28945/1518 .

Download references

Acknowledgements

The authors thank the two student assistants who helped during the article retrieval and screening stage.

This research resulted from the ActiveLearn project, funded by the Bundesministerium für Bildung und Forschung (BMBF, the German Federal Ministry of Education and Research) [grant number 16DHL1007].

Author information

Authors and affiliations

Faculty of Education and Social Sciences (COER), Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany

Melissa Bond, Svenja Bedenlier & Olaf Zawacki-Richter

Learning Lab, Universität Duisburg-Essen, Essen, Germany

Katja Buntins & Michael Kerres


Contributions

All authors contributed to the design and conceptualisation of the systematic review. MB, KB and SB conducted the systematic review search and data extraction. MB undertook the literature review on student engagement and educational technology, and co-wrote the method, results, discussion and conclusion sections. KB designed and executed the sampling strategy, produced all of the graphs and tables, and assisted with the formulation of the article. SB co-wrote the method, results, discussion and conclusion sections, and proofread the introduction and literature review sections. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Melissa Bond.

Ethics declarations

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Literature reviews (LR) and systematic reviews (SR) on student engagement

Additional file 2.

Indicators of engagement and disengagement

Additional file 3.

Literature reviews (LR) and systematic reviews (SR) on student engagement and technology in higher education (HE)

Additional file 4.

Educational technology tool typology based on Bower (2016), and educational technology tools used

Additional file 5.

Text-based tool examples by engagement domain

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Bond, M., Buntins, K., Bedenlier, S. et al. Mapping research in student engagement and educational technology in higher education: A systematic evidence map. Int J Educ Technol High Educ 17, 2 (2020). https://doi.org/10.1186/s41239-019-0176-8


Received: 01 May 2019

Accepted: 17 December 2019

Published: 22 January 2020

DOI: https://doi.org/10.1186/s41239-019-0176-8


Keywords

  • Educational technology
  • Higher education
  • Systematic review
  • Evidence map
  • Student engagement


Realizing the promise: How can education technology improve learning for all?

Leading up to the 75th anniversary of the UN General Assembly, "Realizing the promise: How can education technology improve learning for all?" is the first playbook in a series from the Center for Universal Education to help improve education around the world.

It is intended as an evidence-based tool for ministries of education, particularly in low- and middle-income countries, to adopt and more successfully invest in education technology.

While there is no single education initiative that will achieve the same results everywhere—as school systems differ in learners and educators, as well as in the availability and quality of materials and technologies—an important first step is understanding how technology is used given specific local contexts and needs.

The surveys in this playbook are designed to be adapted to collect this information from educators, learners, and school leaders and guide decisionmakers in expanding the use of technology.  

Introduction

While technology has disrupted most sectors of the economy and changed how we communicate, access information, work, and even play, its impact on schools, teaching, and learning has been much more limited. We believe that this limited impact is primarily due to technology being used to replace analog tools, without much consideration given to technology's comparative advantages. These comparative advantages, relative to traditional "chalk-and-talk" classroom instruction, include helping to scale up standardized instruction, facilitate differentiated instruction, expand opportunities for practice, and increase student engagement. When schools use technology to enhance the work of educators and to improve the quality and quantity of educational content, learners will thrive.

Further, COVID-19 has laid bare that, in an environment where pandemics and the effects of climate change make school closures likely, schools cannot always provide in-person education—strengthening the case for investing in education technology.

Here we argue for a simple yet surprisingly rare approach to education technology that seeks to:

  • Understand the needs, infrastructure, and capacity of a school system—the diagnosis;
  • Survey the best available evidence on interventions that match those conditions—the evidence; and
  • Closely monitor the results of innovations before they are scaled up—the prognosis.


The framework

Our approach builds on a simple yet intuitive theoretical framework created two decades ago by two of the most prominent education researchers in the United States, David K. Cohen and Deborah Loewenberg Ball. They argue that what matters most to improve learning is the interactions among educators and learners around educational materials. We believe that the failed school-improvement efforts in the U.S. that motivated Cohen and Ball's framework resemble the ed-tech reforms in much of the developing world to date in their lack of clarity about how to improve the interactions among educators, learners, and educational materials. We build on their framework by adding parents as key agents who mediate the relationships between learners, educators, and the materials (Figure 1).

Figure 1: The instructional core

Adapted from Cohen and Ball (1999)

As the figure above suggests, ed-tech interventions can affect the instructional core in myriad ways. Yet just because technology can do something does not mean it should. School systems in developing countries differ along many dimensions, and each system is likely to have different needs for ed-tech interventions, as well as different infrastructure and capacity to enact such interventions.

The diagnosis: How can school systems assess their needs and preparedness?

A useful first step for any school system to determine whether it should invest in education technology is to diagnose its:

  • Specific needs to improve student learning (e.g., raising the average level of achievement, remediating gaps among low performers, and challenging high performers to develop higher-order skills);
  • Infrastructure to adopt technology-enabled solutions (e.g., electricity connection, availability of space and outlets, stock of computers, and Internet connectivity at school and at learners’ homes); and
  • Capacity to integrate technology in the instructional process (e.g., learners’ and educators’ level of familiarity and comfort with hardware and software, their beliefs about the level of usefulness of technology for learning purposes, and their current uses of such technology).

Before engaging in any new data collection exercise, school systems should take full advantage of existing administrative data that could shed light on these three main questions. This could include internal evaluations as well as international learner assessments, such as the Program for International Student Assessment (PISA), the Trends in International Mathematics and Science Study (TIMSS), and/or the Progress in International Reading Literacy Study (PIRLS), and the Teaching and Learning International Survey (TALIS). But if school systems lack information on their preparedness for ed-tech reforms, or if they seek to complement existing data with a richer set of indicators, we developed a set of surveys for learners, educators, and school leaders. Download the full report to see how we map out the main aspects covered by these surveys, in hopes of highlighting how they could be used to inform decisions around the adoption of ed-tech interventions.
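To make the diagnosis concrete, below is a minimal sketch of how responses to such surveys might be aggregated along the three dimensions above. Everything in it (the item groupings, the 1-5 scale, and the function names) is a hypothetical illustration, not the actual instruments included in the report's appendices.

```python
from statistics import mean

# Hypothetical 1-5 survey responses grouped by the three diagnostic
# dimensions discussed above; the real instruments are in the appendices.
responses = {
    "needs":          [3, 2, 4],  # e.g., items on gaps in student learning
    "infrastructure": [1, 2, 1],  # e.g., electricity, outlets, connectivity
    "capacity":       [2, 3, 2],  # e.g., comfort with hardware and software
}

def diagnose(survey: dict) -> dict:
    """Average the responses within each dimension to flag where the
    school system is least prepared for ed-tech interventions."""
    return {dimension: mean(items) for dimension, items in survey.items()}

scores = diagnose(responses)
weakest = min(scores, key=scores.get)
print(f"Least prepared dimension: {weakest}")  # infrastructure, in this example
```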

The evidence: How can school systems identify promising ed-tech interventions?

There is no single "ed-tech" initiative that will achieve the same results everywhere, simply because school systems differ in learners and educators, as well as in the availability and quality of materials and technologies. Instead, to realize the potential of education technology to accelerate student learning, decisionmakers should focus on four potential uses of technology that play to its comparative advantages and complement the work of educators (Figure 2). These comparative advantages include:

  • Scaling up quality instruction, such as through prerecorded quality lessons.
  • Facilitating differentiated instruction, through, for example, computer-adaptive learning and live one-on-one tutoring.
  • Expanding opportunities to practice.
  • Increasing learner engagement through videos and games.

Figure 2: Comparative advantages of technology

Here we review the evidence on ed-tech interventions from 37 studies in 20 countries*, organizing them by comparative advantage. It’s important to note that ours is not the only way to classify these interventions (e.g., video tutorials could be considered as a strategy to scale up instruction or increase learner engagement), but we believe it may be useful to highlight the needs that they could address and why technology is well positioned to do so.

When discussing specific studies, we report the magnitude of the effects of interventions using standard deviations (SDs). SDs are a widely used metric in research to express the effect of a program or policy with respect to a business-as-usual condition (e.g., test scores). There are several ways to make sense of them. One is to categorize the magnitude of the effects based on the results of impact evaluations. In developing countries, effects below 0.1 SDs are considered to be small, effects between 0.1 and 0.2 SDs are medium, and those above 0.2 SDs are large (for reviews that estimate the average effect of groups of interventions, called "meta-analyses," see, e.g., Conn, 2017; Kremer, Brannen, & Glennerster, 2013; McEwan, 2014; Snilstveit et al., 2015; Evans & Yuan, 2020).
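As a quick illustration, the rule of thumb above can be written out in a few lines of Python. This is only a sketch of the categorization described in this paragraph, using the 0.1 and 0.2 SD thresholds cited for developing countries; the function name and example value are ours.

```python
def categorize_effect(effect_sd: float) -> str:
    """Classify an effect size (in SDs) using the rule of thumb for
    impact evaluations in developing countries described above."""
    magnitude = abs(effect_sd)  # the sign indicates direction, not size
    if magnitude < 0.1:
        return "small"
    if magnitude <= 0.2:
        return "medium"
    return "large"

# Example: an intervention that improves test scores by 0.16 SDs.
print(categorize_effect(0.16))  # -> medium
```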

*In surveying the evidence, we began by compiling studies from prior general and ed-tech specific evidence reviews that some of us have written and from ed-tech reviews conducted by others. Then, we tracked the studies cited by the ones we had previously read and reviewed those, as well. In identifying studies for inclusion, we focused on experimental and quasi-experimental evaluations of education technology interventions from pre-school to secondary school in low- and middle-income countries that were released between 2000 and 2020. We only included interventions that sought to improve student learning directly (i.e., students’ interaction with the material), as opposed to interventions that have impacted achievement indirectly, by reducing teacher absence or increasing parental engagement. This process yielded 37 studies in 20 countries (see the full list of studies in Appendix B).

Scaling up standardized instruction

One of the ways in which technology may improve the quality of education is through its capacity to deliver standardized quality content at scale. This feature of technology may be particularly useful in three types of settings: (a) "hard-to-staff" schools (i.e., schools that struggle to recruit educators with the requisite training and experience—typically, in rural and/or remote areas) (see, e.g., Urquiola & Vegas, 2005); (b) those in which many educators are frequently absent from school (e.g., Chaudhury, Hammer, Kremer, Muralidharan, & Rogers, 2006; Muralidharan, Das, Holla, & Mohpal, 2017); and/or (c) those in which educators have low levels of pedagogical and subject matter expertise (e.g., Bietenbeck, Piopiunik, & Wiederhold, 2018; Bold et al., 2017; Metzler & Woessmann, 2012; Santibañez, 2006) and do not have opportunities to observe and receive feedback (e.g., Bruns, Costa, & Cunha, 2018; Cilliers, Fleisch, Prinsloo, & Taylor, 2018). Technology could address this problem by: (a) disseminating lessons delivered by qualified educators to a large number of learners (e.g., through prerecorded or live lessons); (b) enabling distance education (e.g., for learners in remote areas and/or during periods of school closures); and (c) distributing hardware preloaded with educational materials.

Prerecorded lessons

Technology seems to be well placed to amplify the impact of effective educators by disseminating their lessons. Evidence on the impact of prerecorded lessons is encouraging, but not conclusive. Some initiatives that have used short instructional videos to complement regular instruction, in conjunction with other learning materials, have raised student learning on independent assessments. For example, Beg et al. (2020) evaluated an initiative in Punjab, Pakistan, in which grade 8 classrooms received an intervention that included short videos to substitute live instruction, quizzes for learners to practice the material from every lesson, tablets for educators to learn the material and follow the lesson, and LED screens to project the videos onto a classroom screen. After six months, the intervention improved the performance of learners on independent tests of math and science by 0.19 and 0.24 SDs, respectively, but had no discernible effect on the math and science sections of Punjab's high-stakes exams.

One study suggests that far less technologically sophisticated approaches can also improve learning outcomes—especially if the business-as-usual instruction is of low quality. For example, Naslund-Hadley, Parker, and Hernandez-Agramonte (2014) evaluated a preschool math program in Cordillera, Paraguay that used audio segments and written materials four days per week for an hour per day during the school day. After five months, the intervention improved math scores by 0.16 SDs, narrowing gaps between low- and high-achieving learners, and between those with and without educators with formal training in early childhood education.

Yet the integration of prerecorded material into regular instruction has not always been successful. For example, de Barros (2020) evaluated an intervention that combined instructional videos for math and science with infrastructure upgrades (e.g., two "smart" classrooms, two TVs, and two tablets), printed workbooks for students, and in-service training for educators of learners in grades 9 and 10 in Haryana, India (all materials were mapped onto the official curriculum). After 11 months, the intervention negatively impacted math achievement (by 0.08 SDs) and had no effect on science (with respect to business-as-usual classes). It reduced the share of lesson time that educators devoted to instruction and negatively impacted an index of instructional quality. Likewise, Seo (2017) evaluated several combinations of infrastructure (solar lights and TVs) and prerecorded videos (in English and/or bilingual) for grade 11 students in northern Tanzania and found that none of the variants improved student learning, even when the videos were used. The study reports effects from the infrastructure component across variants, but as others have noted (Muralidharan, Romero, & Wüthrich, 2019), this approach to estimating impact is problematic.

A very similar intervention delivered after school hours, however, had sizeable effects on learners' basic skills. Chiplunkar, Dhar, and Nagesh (2020) evaluated an initiative in Chennai (the capital of the Indian state of Tamil Nadu), delivered by the same organization as above, that combined short videos explaining key concepts in math and science with worksheets, facilitator-led instruction, small groups for peer-to-peer learning, and occasional career counseling and guidance for grade 9 students. These lessons took place after school for one hour, five times a week. After 10 months, the intervention had large effects on learners' achievement as measured by tests of basic skills in math and reading, but no effect on a standardized high-stakes test in grade 10 or on socio-emotional skills (e.g., teamwork, decisionmaking, and communication).

Drawing general lessons from this body of research is challenging for at least two reasons. First, all of the studies above have evaluated the impact of prerecorded lessons combined with several other components (e.g., hardware, print materials, or other activities). Therefore, it is possible that the effects found are due to these additional components, rather than to the recordings themselves, or to the interaction between the two (see Muralidharan, 2017 for a discussion of the challenges of interpreting "bundled" interventions). Second, while these studies evaluate some type of prerecorded lessons, none examines the content of such lessons. Thus, it seems entirely plausible that the direction and magnitude of the effects depend largely on the quality of the recordings (e.g., the expertise of the educator recording them, the amount of preparation that went into planning the recording, and its alignment with best teaching practices).

These studies also raise three important questions worth exploring in future research. One is why none of the interventions discussed above had effects on high-stakes exams, even though their materials are typically mapped onto the official curriculum. It is possible that the official curricula are simply too challenging for learners in these settings, who are several grade levels behind expectations and who often need to reinforce basic skills (see Pritchett & Beatty, 2015). Another question is whether these interventions have long-term effects on teaching practices. It seems plausible that, if these interventions are deployed in contexts with low teaching quality, educators may learn something from watching the videos or listening to the recordings with learners. Yet another question is whether these interventions make it easier for schools to deliver instruction to learners whose native language differs from the official medium of instruction.

Distance education

Technology can also allow learners living in remote areas to access education. The evidence on these initiatives is encouraging. For example, Johnston and Ksoll (2017) evaluated a program that broadcast live instruction via satellite to rural primary school students in the Volta and Greater Accra regions of Ghana. For this purpose, the program also equipped classrooms with the technology needed to connect to a studio in Accra, including solar panels, a satellite modem, a projector, a webcam, microphones, and a computer with interactive software. After two years, the intervention improved the numeracy scores of students in grades 2 through 4, as well as performance on some foundational literacy tasks, but it had no effect on attendance or on classroom time devoted to instruction, as captured by school visits. The authors interpreted these results as suggesting that the gains in achievement may be due to improvements in the quality of instruction that children received (as opposed to increased instructional time). Naik, Chitre, Bhalla, and Rajan (2019) evaluated a similar program in the Indian state of Karnataka and also found positive effects on learning outcomes, but it is not clear whether those effects are due to the program or to differences in the groups of students compared to estimate its impact.

In one context (Mexico), this type of distance education had positive long-term effects. Navarro-Sola (2019) took advantage of the staggered rollout of the telesecundarias (i.e., middle schools with lessons broadcast through satellite TV) in 1968 to estimate its impact. The policy had short-term effects on students' enrollment in school: For every telesecundaria per 50 children, 10 students enrolled in middle school and two pursued further education. It also had a long-term influence on the educational and employment trajectory of its graduates. Each additional year of education induced by the policy increased average income by nearly 18 percent. This effect was attributable to more graduates entering the labor force and shifting out of agriculture and the informal sector. Similarly, Fabregas (2019) leveraged a later expansion of this policy in 1993 and found that each additional telesecundaria per 1,000 adolescents led to an average increase of 0.2 years of education and a decline in fertility for women, but no conclusive evidence of long-term effects on labor market outcomes.

It is crucial to interpret these results keeping in mind the settings where the interventions were implemented. As we mention above, part of the reason why they have proven effective is that the "counterfactual" conditions for learning (i.e., what would have happened to learners in the absence of such programs) were either to not have access to schooling or to be exposed to low-quality instruction. School systems interested in taking up similar interventions should assess the extent to which their learners (or parts of their learner population) find themselves in conditions similar to those of the subjects in the studies above. This illustrates the importance of assessing the needs of a system before reviewing the evidence.

Preloaded hardware

Technology also seems well positioned to disseminate educational materials. Specifically, hardware (e.g., desktop computers, laptops, or tablets) could also help deliver educational software (e.g., word processing, reference texts, and/or games). In theory, these materials could not only undergo a quality assurance review (e.g., by curriculum specialists and educators), but also draw on the interactions with learners for adjustments (e.g., identifying areas needing reinforcement) and enable interactions between learners and educators.

In practice, however, most initiatives that have provided learners with free computers, laptops, and netbooks do not leverage any of the opportunities mentioned above. Instead, they install a standard set of educational materials and hope that learners find them helpful enough to take them up on their own. Students rarely do so, and instead use the laptops for recreational purposes—often, to the detriment of their learning (see, e.g., Malamud & Pop-Eleches, 2011). In fact, free netbook initiatives have not only consistently failed to improve academic achievement in math or language (e.g., Cristia et al., 2017), but they have also had no impact on learners' general computer skills (e.g., Beuermann et al., 2015). Some of these initiatives have had small impacts on cognitive skills, but the mechanisms through which those effects occurred remain unclear.

To our knowledge, the only successful deployment of a free laptop initiative was one in which a team of researchers equipped the computers with remedial software. Mo et al. (2013) evaluated a version of the One Laptop per Child (OLPC) program for grade 3 students in migrant schools in Beijing, China, in which the laptops were loaded with remedial software mapped onto the national curriculum for math (similar to the software products that we discuss under "practice exercises" below). After nine months, the program improved math achievement by 0.17 SDs and computer skills by 0.33 SDs. If a school system decides to invest in free laptops, this study suggests that the quality of the software loaded onto them is crucial.

To date, however, the evidence suggests that children do not learn more from interacting with laptops than they do from textbooks. For example, Bando, Gallego, Gertler, and Romero (2016) compared the effects of free laptop and textbook provision in 271 elementary schools in disadvantaged areas of Honduras. After seven months, students in grades 3 and 6 who had received the laptops performed on par in math and language with those who had received the textbooks. Further, even though textbooks essentially become obsolete at the end of each school year, whereas laptops can be reloaded with new materials each year, the costs of laptop provision (not just the hardware, but also the technical assistance, Internet, and training associated with it) are not yet low enough to make them a more cost-effective way of delivering content to learners.

Evidence on the provision of tablets equipped with software is encouraging but limited. For example, de Hoop et al. (2020) evaluated a composite intervention for first grade students in Zambia's Eastern Province that combined infrastructure (electricity via solar power), hardware (projectors and tablets), and educational materials (lesson plans for educators and interactive lessons for learners, both loaded onto the tablets and mapped onto the official Zambian curriculum). After 14 months, the intervention had improved students' early-grade reading by 0.4 SDs, oral vocabulary scores by 0.25 SDs, and early-grade math by 0.22 SDs. It also improved students' achievement by 0.16 SDs on a locally developed assessment. The multifaceted nature of the program, however, makes it challenging to identify the components driving the positive effects. Pitchford (2015) evaluated an intervention that provided tablets equipped with educational "apps," to be used for 30 minutes per day for two months to develop early math skills among students in grades 1 through 3 in Lilongwe, Malawi. The evaluation found positive impacts on math achievement, but the main limitation of the study is that it was conducted in a single school.

Facilitating differentiated instruction

Another way in which technology may improve educational outcomes is by facilitating the delivery of differentiated or individualized instruction. Most developing countries massively expanded access to schooling in recent decades by building new schools and making education more affordable, both by defraying direct costs and by compensating for opportunity costs (Duflo, 2001; World Bank, 2018). These initiatives have not only rapidly increased the number of learners enrolled in school, but have also increased the variability in learners' preparation for schooling. Consequently, a large number of learners perform well below grade-based curricular expectations (see, e.g., Duflo, Dupas, & Kremer, 2011; Pritchett & Beatty, 2015). These learners are unlikely to get much from "one-size-fits-all" instruction, in which a single educator delivers instruction deemed appropriate for the middle (or top) of the achievement distribution (Banerjee & Duflo, 2011). Technology could potentially help these learners by providing them with: (a) instruction and opportunities for practice that adjust to the level and pace of preparation of each individual (known as "computer-adaptive learning" (CAL)); or (b) live, one-on-one tutoring.

Computer-adaptive learning

One of the main comparative advantages of technology is its ability to diagnose students' initial learning levels and assign them instruction and exercises of appropriate difficulty. No individual educator—no matter how talented—can be expected to provide individualized instruction to all learners in his/her class simultaneously. In this respect, technology is uniquely positioned to complement traditional teaching. This use of technology could help learners master basic skills and get more out of schooling.

Although many software products evaluated in recent years have been categorized as CAL, many rely on a relatively coarse level of differentiation at an initial stage (e.g., a diagnostic test) without further adjustment. We discuss these initiatives under the category of "increasing opportunities for practice" below. True CAL initiatives complement an initial diagnostic with dynamic adaptation (i.e., at each response or set of responses from learners), adjusting both the level of difficulty and the rate at which it increases or decreases, depending on whether learners' responses are correct or incorrect. A simple sketch of this logic follows.
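To make the distinction concrete, the sketch below shows one simple way such dynamic adaptation could work: a basic "staircase" rule that raises difficulty after correct answers (faster on a streak) and lowers it after incorrect ones. It is a hypothetical illustration of the logic described above, not the algorithm used by any of the products evaluated in these studies.

```python
class AdaptiveSession:
    """Minimal sketch of computer-adaptive practice: difficulty starts at
    the level set by a diagnostic test and is adjusted after each response."""

    def __init__(self, diagnostic_level: int, min_level: int = 1, max_level: int = 10):
        self.level = diagnostic_level  # starting point from the initial diagnostic
        self.min_level = min_level
        self.max_level = max_level
        self.streak = 0                # consecutive correct answers

    def record_response(self, correct: bool) -> int:
        """Update the difficulty level based on the learner's response."""
        if correct:
            self.streak += 1
            # Step up faster when the learner is on a streak.
            step = 2 if self.streak >= 3 else 1
            self.level = min(self.max_level, self.level + step)
        else:
            self.streak = 0
            self.level = max(self.min_level, self.level - 1)
        return self.level

# Example: a learner placed at level 4 by the diagnostic test.
session = AdaptiveSession(diagnostic_level=4)
for answer in [True, True, False, True]:
    session.record_response(answer)
print(session.level)  # difficulty after four responses (here: 6)
```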

Existing evidence on this specific type of program is highly promising. Most famously, Banerjee et al. (2007) evaluated CAL software in Vadodara, in the Indian state of Gujarat, in which grade 4 students were offered two hours of shared computer time per week before and after school, during which they played games that involved solving math problems whose level of difficulty adjusted based on students' answers. This program improved math achievement by 0.35 and 0.47 SDs after one and two years of implementation, respectively. Consistent with the promise of personalized learning, the software improved achievement for all students. In fact, one year after the end of the program, students assigned to the program still performed 0.1 SDs better than those assigned to a business-as-usual condition. More recently, Muralidharan et al. (2019) evaluated a "blended learning" initiative in which students in grades 4 through 9 in Delhi, India received 45 minutes of interaction with CAL software for math and language, and 45 minutes of small-group instruction, before or after going to school. After only 4.5 months, the program improved achievement by 0.37 SDs in math and 0.23 SDs in Hindi. While all learners benefited from the program in absolute terms, the lowest-performing learners benefited the most in relative terms, since they were learning very little in school.

We see two important limitations in this body of research. First, to our knowledge, none of these initiatives has been evaluated when implemented during the school day. Therefore, it is not possible to distinguish the effect of the adaptive software from that of additional instructional time. Second, given that most of these programs were facilitated by local instructors, attempts to distinguish the effect of the software from that of the instructors have been based mostly on noncausal evidence. A frontier challenge in this body of research is to understand whether CAL software can increase the effectiveness of school-based instruction by substituting for part of the regularly scheduled time for math and language instruction.

Live one-on-one tutoring

Recent improvements in the speed and quality of videoconferencing, as well as in the connectivity of remote areas, have enabled yet another way in which technology can help personalization: live (i.e., real-time) one-on-one tutoring. While the evidence on in-person tutoring is scarce in developing countries, existing studies suggest that this approach works best when it is used to personalize instruction (see, e.g., Banerjee et al., 2007; Banerji, Berry, & Shotland, 2015; Cabezas, Cuesta, & Gallego, 2011).

There are almost no studies on the impact of online tutoring—possibly due to the lack of hardware and Internet connectivity in low- and middle-income countries. One exception is Chemin and Oledan's (2020) recent evaluation of an online tutoring program in which grade 6 students in Kianyaga, Kenya learned English from volunteers at a Canadian university via Skype (videoconferencing software) for one hour per week after school. After 10 months, program beneficiaries performed 0.22 SDs better on a test of oral comprehension, improved their comfort using technology for learning, and became more willing to engage in cross-cultural communication. Importantly, while the tutoring sessions used the official English textbooks and sought in part to help learners with their homework, tutors were trained on several strategies to teach to each learner's individual level of preparation, focusing on basic skills if necessary. To our knowledge, similar initiatives within a country have not yet been rigorously evaluated.

Expanding opportunities for practice

A third way in which technology may improve the quality of education is by providing learners with additional opportunities for practice. In many developing countries, lesson time is primarily devoted to lectures, in which the educator explains the topic and the learners passively copy explanations from the blackboard. This setup leaves little time for in-class practice. Consequently, learners who did not understand the explanation of the material during lecture struggle when they have to solve homework assignments on their own. Technology could potentially address this problem by allowing learners to review topics at their own pace.

Practice exercises

Technology can help learners get more out of traditional instruction by providing them with opportunities to implement what they learn in class. This approach could, in theory, allow some learners to anchor their understanding of the material through trial and error (i.e., by realizing what they may not have understood correctly during lecture and by getting better acquainted with special cases not covered in-depth in class).

Existing evidence on practice exercises reflects both the promise and the limitations of this use of technology in developing countries. For example, Lai et al. (2013) evaluated a program in Shaanxi, China, where students in grades 3 and 5 were required to attend two 40-minute remedial sessions per week in which they first watched videos reviewing the material that had been introduced in their math lessons that week and then played games to practice the skills introduced in the video. After four months, the intervention improved math achievement by 0.12 SDs. Many other evaluations of comparable interventions have found similar small-to-moderate results (see, e.g., Lai, Luo, Zhang, Huang, & Rozelle, 2015; Lai et al., 2012; Mo et al., 2015; Pitchford, 2015). These effects, however, have been consistently smaller than those of initiatives that adjust the difficulty of the material based on students' performance (e.g., Banerjee et al., 2007; Muralidharan et al., 2019). We hypothesize that these programs do little for learners who perform several grade levels behind curricular expectations, and who would benefit more from a review of foundational concepts from earlier grades.

We see two important limitations in this research. First, most initiatives that have been evaluated thus far combine instructional videos with practice exercises, so it is hard to know whether their effects are driven by the former or the latter. In fact, the program in China described above allowed learners to ask their peers whenever they did not understand a difficult concept, so it potentially also captured the effect of peer-to-peer collaboration. To our knowledge, no studies have addressed this gap in the evidence.

Second, most of these programs are implemented before or after school, so we cannot distinguish the effect of additional instructional time from that of the actual opportunity for practice. The importance of this question was first highlighted by Linden (2008), who compared two delivery mechanisms for game-based remedial math software for students in grades 2 and 3 in a network of schools run by a nonprofit organization in Gujarat, India: one in which students interacted with the software during the school day and another in which students interacted with the software before or after school (in both cases, for three hours per day). After a year, the first version of the program had negatively impacted students' math achievement by 0.57 SDs and the second had a null effect. This study suggested that computer-assisted learning is a poor substitute for regular instruction when that instruction is of high quality, as was the case in this well-functioning private network of schools.

In recent years, several studies have sought to remedy this shortcoming. Mo et al. (2014) were among the first to evaluate practice exercises delivered during the school day. They evaluated an initiative in Shaanxi, China, in which students in grades 3 and 5 were required to interact with software similar to that in Lai et al. (2013) for two 40-minute sessions per week. The main limitation of this study, however, is that the program was delivered during regularly scheduled computer lessons, so it could not determine the impact of substituting regular math instruction. Similarly, Mo et al. (2020) evaluated self-paced and teacher-directed versions of a similar program for English for grade 5 students in Qinghai, China. Yet the key shortcoming of this study is that the teacher-directed version added several components that may also influence achievement, such as increased opportunities for teachers to provide students with personalized assistance when they struggled with the material. Ma, Fairlie, Loyalka, and Rozelle (2020) compared the effectiveness of additional remedial instruction for students in grades 4 to 6 in Shaanxi, China, delivered either through computer-assisted software or through workbooks. This study indicates whether additional instructional time is more effective when using technology, but it does not address the question of whether school systems may improve the productivity of instructional time during the school day by substituting educator-led instruction with computer-assisted instruction.

Increasing learner engagement

Another way in which technology may improve education is by increasing learners' engagement with the material. In many school systems, regular "chalk and talk" instruction prioritizes time for educators' exposition over opportunities for learners to ask clarifying questions and/or contribute to class discussions. This, combined with the fact that many developing-country classrooms include a very large number of learners (see, e.g., Angrist & Lavy, 1999; Duflo, Dupas, & Kremer, 2015), may partially explain why the majority of those students are several grade levels behind curricular expectations (e.g., Muralidharan et al., 2019; Muralidharan & Zieleniak, 2014; Pritchett & Beatty, 2015). Technology could potentially address these challenges by: (a) using video tutorials for self-paced learning and (b) presenting exercises as games and/or gamifying practice.

Video tutorials

Technology can potentially increase learner effort and understanding of the material by finding new and more engaging ways to deliver it. Video tutorials designed for self-paced learning—as opposed to videos for whole class instruction, which we discuss under the category of “prerecorded lessons” above—can increase learner effort in multiple ways, including: allowing learners to focus on topics with which they need more help, letting them correct errors and misconceptions on their own, and making the material appealing through visual aids. They can increase understanding by breaking the material into smaller units and tackling common misconceptions.

In spite of the popularity of instructional videos, there is relatively little evidence on their effectiveness. Yet two recent evaluations of different versions of the Khan Academy portal, which mainly relies on instructional videos, offer some insight into their impact. First, Ferman, Finamor, and Lima (2019) evaluated an initiative in 157 public primary and middle schools in five cities in Brazil in which teachers took their students in grades 5 and 9 to the computer lab to learn math from the platform for 50 minutes per week. The authors found that, while the intervention slightly improved learners' attitudes toward math, these changes did not translate into better performance in the subject. The authors hypothesized that this could be due to the reduction of teacher-led math instruction.

More recently, Büchel, Jakob, Kühnhanss, Steffen, and Brunetti (2020) evaluated an after-school, offline delivery of the Khan Academy portal in grades 3 through 6 in 302 primary schools in Morazán, El Salvador. Students in this study received 90 minutes per week of additional math instruction (effectively nearly doubling total math instruction per week) through teacher-led regular lessons, teacher-assisted Khan Academy lessons, or similar lessons assisted by technical supervisors with no content expertise. (Importantly, the first group provided differentiated instruction, which is not the norm in Salvadoran schools.) All three groups outperformed both schools without any additional lessons and classrooms in the same schools that did not receive additional lessons. The teacher-assisted Khan Academy lessons performed 0.24 SDs better, the supervisor-led lessons 0.22 SDs better, and the teacher-led regular lessons 0.15 SDs better, but the authors could not determine whether the effects across versions were statistically different from one another.

Together, these studies suggest that instructional videos work best when provided as a complement to, rather than as a substitute for, regular instruction. Yet the main limitation of these studies is the multifaceted nature of the Khan Academy portal, which also includes other components found to improve learner achievement, such as differentiated instruction by students' learning levels. While the software does not provide the type of personalization discussed above, learners are asked to take a placement test and, based on their score, educators assign them different work. Therefore, it is not clear from these studies whether the effects of Khan Academy are driven by its instructional videos or by the software's ability to provide differentiated activities when combined with placement tests.

Games and gamification

Technology can also increase learner engagement by presenting exercises as games and/or by encouraging learners to play and compete with others (e.g., using leaderboards and rewards)—an approach known as "gamification." Both approaches can increase learner motivation and effort by presenting learners with entertaining opportunities for practice and by leveraging peers as commitment devices. The sketch below illustrates the basic mechanics.
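As a simple illustration of these mechanics, the sketch below awards points for completed exercises and ranks learners on a class leaderboard. It is a hypothetical example of the gamified approach described above, not a description of any of the software evaluated in the studies that follow.

```python
from collections import defaultdict

class Leaderboard:
    """Minimal sketch of gamified practice: learners earn points for
    correct exercises and are ranked against their classmates."""

    def __init__(self):
        self.points = defaultdict(int)

    def record_exercise(self, learner: str, correct: bool, difficulty: int = 1):
        # Harder exercises are worth more points; wrong answers earn none
        # (but the learner still appears on the board).
        self.points[learner] += 10 * difficulty if correct else 0

    def standings(self):
        """Rank learners from most to fewest points."""
        return sorted(self.points.items(), key=lambda kv: kv[1], reverse=True)

# Example: three learners in one section.
board = Leaderboard()
board.record_exercise("Ana", correct=True, difficulty=2)
board.record_exercise("Luis", correct=True)
board.record_exercise("Rosa", correct=False)
print(board.standings())  # [('Ana', 20), ('Luis', 10), ('Rosa', 0)]
```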

There are very few studies on the effects of games and gamification in low- and middle-income countries. Recently, Araya, Arias Ortiz, Bottan, and Cristia (2019) evaluated an initiative in which grade 4 students in Santiago, Chile were required to participate in two 90-minute sessions per week during the school day with instructional math software featuring individual and group competitions (e.g., tracking each learner’s standing in his/her class and tournaments between sections). After nine months, the program led to improvements of 0.27 SDs in the national student assessment in math (it had no spillover effects on reading). However, it had mixed effects on non-academic outcomes. Specifically, the program increased learners’ willingness to use computers to learn math, but, at the same time, increased their anxiety toward math and negatively impacted learners’ willingness to collaborate with peers. Finally, given that one of the weekly sessions replaced regular math instruction and the other one represented additional math instructional time, it is not clear whether the academic effects of the program are driven by the software or the additional time devoted to learning math.

The prognosis:

How can school systems adopt interventions that match their needs?

Here are five specific and sequential guidelines for decisionmakers to realize the potential of education technology to accelerate student learning.

1. Take stock of how your schools, educators, and learners currently engage with technology.

Carry out a short in-school survey to understand current practices and potential barriers to the adoption of technology (we have included suggested survey instruments in the Appendices), and use this information in your decisionmaking process. For example, we learned from conversations with current and former ministers of education from various developing regions that a common limitation to technology use is regulations that hold school leaders accountable for damage to or loss of devices. Another common barrier is lack of access to electricity and the Internet, or even the availability of sufficient outlets for charging devices in classrooms. Understanding basic infrastructure and regulatory limitations on the use of education technology is a necessary first step. But addressing these limitations will not guarantee that introducing or expanding technology use will accelerate learning; the steps that follow are thus necessary.

“In Africa, the biggest limit is connectivity. Fiber is expensive, and we don’t have it everywhere. The continent is creating a digital divide between cities, where there is fiber, and the rural areas.  The [Ghanaian] administration put in schools offline/online technologies with books, assessment tools, and open source materials. In deploying this, we are finding that again, teachers are unfamiliar with it. And existing policies prohibit students to bring their own tablets or cell phones. The easiest way to do it would have been to let everyone bring their own device. But policies are against it.” H.E. Matthew Prempeh, Minister of Education of Ghana, on the need to understand the local context.
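
As a concrete illustration of guideline 1, the sketch below tabulates survey responses to surface the most common barriers. The fields and data are hypothetical; they stand in for, and do not reproduce, the instruments in the Appendices.

```python
# Minimal sketch: tabulating an in-school technology survey to rank
# adoption barriers. Fields and responses are hypothetical.

from collections import Counter

responses = [
    {"school": "A", "reliable_power": False, "internet": False, "byod_allowed": False},
    {"school": "B", "reliable_power": True,  "internet": False, "byod_allowed": False},
    {"school": "C", "reliable_power": True,  "internet": True,  "byod_allowed": False},
]

barriers = Counter()
for r in responses:
    if not r["reliable_power"]:
        barriers["no reliable electricity"] += 1
    if not r["internet"]:
        barriers["no internet access"] += 1
    if not r["byod_allowed"]:
        barriers["policies prohibit bring-your-own-device"] += 1

for barrier, n in barriers.most_common():
    print(f"{barrier}: {n}/{len(responses)} schools")
```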

2. Consider how the introduction of technology may affect the interactions among learners, educators, and content.

Our review of the evidence indicates that technology may accelerate student learning when it is used to scale up access to quality content, facilitate differentiated instruction, increase opportunities for practice, or when it increases learner engagement. For example, will adding electronic whiteboards to classrooms facilitate access to more quality content or differentiated instruction? Or will these expensive boards be used in the same way as the old chalkboards? Will providing one device (laptop or tablet) to each learner facilitate access to more and better content, or offer students more opportunities to practice and learn? Solely introducing technology in classrooms without additional changes is unlikely to lead to improved learning and may be quite costly. If you cannot clearly identify how the interactions among the three key components of the instructional core (educators, learners, and content) may change after the introduction of technology, then it is probably not a good idea to make the investment. See Appendix A for guidance on the types of questions to ask.

3. Once decisionmakers have a clear idea of how education technology can help accelerate student learning in a specific context, it is important to define clear objectives and goals and to establish ways to regularly assess progress and make course corrections in a timely manner.

For instance, is the education technology expected to ensure that learners in early grades excel in foundational skills—basic literacy and numeracy—by age 10? If so, will the technology provide quality reading and math materials, ample opportunities to practice, and engaging materials such as videos or games? Will educators be empowered to use these materials in new ways? And how will progress be measured and adjusted?

4. How this kind of reform is approached can matter immensely for its success.

It is easy to nod to issues of “implementation,” but that needs to be more than rhetorical. Keep in mind that good use of education technology requires thinking about how it will affect learners, educators, and parents. After all, giving learners digital devices will make no difference if the devices get broken, are stolen, or go unused. Classroom technologies only matter if educators feel comfortable putting them to work. Since good technology is generally about complementing or amplifying what educators and learners already do, it is almost always a mistake to mandate programs from on high. It is vital that technology be adopted with the input of educators and families and with attention to how it will be used. If technology goes unused, or if educators use it ineffectually, the results will disappoint—no matter the virtuosity of the technology. Indeed, unused education technology can be an unnecessary expenditure for cash-strapped education systems. This is why surveying the context, listening to voices in the field, examining how technology is used, and planning for course corrections are essential.

5. It is essential to communicate with a range of stakeholders, including educators, school leaders, parents, and learners.

Technology can feel alien in schools, confuse parents and (especially) older educators, or become an alluring distraction. Good communication can help address all of these risks. Taking care to listen to educators and families can help ensure that programs are informed by their needs and concerns. At the same time, deliberately and consistently explaining what technology is and is not supposed to do, and how it can most effectively be used, can make it more likely that programs work as intended. For instance, if teachers fear that technology is intended to reduce the need for educators, they will tend to be hostile; if they believe that it is intended to assist them in their work, they will be more receptive. Absent effective communication, it is easy for programs to “fail” not because of the technology but because of how it was used. In short, past experience in rolling out education programs indicates that it is as important to have a strong intervention design as it is to have a solid plan to socialize it among stakeholders.


Beyond reopening: A leapfrog moment to transform education?

On September 14, the Center for Universal Education (CUE) will host a webinar to discuss strategies, including around the effective use of education technology, for ensuring resilient schools in the long term and to launch a new education technology playbook “Realizing the promise: How can education technology improve learning for all?”

Downloads:

  • Full Playbook – Realizing the promise: How can education technology improve learning for all?
  • References
  • Appendix A – Instruments to assess availability and use of technology
  • Appendix B – List of reviewed studies
  • Appendix C – How may technology affect interactions among students, teachers, and content?

About the Authors

Alejandro J. Ganimian, Emiliana Vegas, and Frederick M. Hess


7 Research Findings About Technology and Education

Here’s what research shows about the effectiveness of technology for learning and when less tech can be more productive.


Do students perform better on digital or paper assessments? Does the amount of time spent on an app correlate with learning growth? How much valid and reliable research typically stands behind an educational application? These are questions that busy educators often wonder about, yet they may not have an easy way to find answers. Fortunately, research journals do contain studies of education apps and devices and their effects on learning growth and outcomes. Below are seven things educators should know about the research on the effectiveness of technology for learning; note that research findings evolve over time, and the points below are not definitively settled.

Advantages and disadvantages of tech

1. When screens are present but not being used for learning, students tend to learn less. Whether it’s a laptop or a smartphone, studies have found that the mere presence of these devices reduces available cognitive capacity in college students. Long-term recall and retention of information decrease when students at the university level have screens present during direct instructional time. Just having a laptop screen open or a cell phone next to a student (even unused) is enough to keep the brain from fully focusing on class activities.

Further, studies found that students in college who send off-task text or IM messages during class or engage with social media on their devices typically take lower-quality notes, and their overall academic performance is worse than that of those who didn’t engage in those activities during class. It’s important to note that when a student doesn’t have a device but is near another student who is using a device during class, both students’ grades will likely be negatively affected.

2. Literacy applications often have little valid and reliable research associated with them. Many applications in the app stores (such as Google Play) have little, if any, valid and reliable research behind them. According to a study looking at the top-rated early literacy applications, 77 percent had zero reliable research behind them. And the research behind the few apps that had any considered only the look and feel of the application (such as ease of navigation or visual appeal), rather than whether the child was likely to learn foundational literacy skills from the app.

There are apps that are effective, but finding them in the sea of all available apps—many of them poorly designed, with inadequate backing evidence—is a daunting task.

3. Neither the amount of time spent on an app nor the number of sessions in an app correlates with effectiveness. A recent study found that the “dosage” of the app, such as the number of sessions, time spent per session, and duration of the study, did not predict the effectiveness of the app. Thus, learning outcomes did not change if a student spent more or less time in an application. The quality of the application matters more in determining learning growth and outcomes than the amount of time or the number of times an application is used.
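
The “dosage” finding is, at bottom, a claim about correlation. The sketch below shows the kind of check involved, on hypothetical data; it is illustrative and not the cited study’s analysis (statistics.correlation requires Python 3.10 or later).

```python
# Minimal sketch: does time-in-app predict learning gains? Hypothetical data.

import statistics

minutes_in_app = [30, 45, 60, 90, 120, 150]
learning_gain  = [5, 12, 4, 9, 6, 8]  # e.g., post-test minus pre-test scores

r = statistics.correlation(minutes_in_app, learning_gain)  # Pearson's r
print(f"Pearson r = {r:.2f}")  # near zero: dosage does not predict gains here
```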

4. Students who read online tend to comprehend less than those who read on paper. Studies comparing comprehension when reading online versus on paper have shown that the type of text matters. One study discovered that for leisure reading, the more complex the text, the more likely students were to comprehend the content better when reading on paper.

Print reading over a long period of time could boost comprehension skills six to eight times more than digital reading. The same study found that younger children (ages 6–12) seem to benefit the most from print reading over online reading. Further, another recent study found that university students tend to annotate more when reading on paper than when reading digital text, though this does not improve their subsequent memory of the text.

5. Students tend to perform worse when testing online compared with those who test on paper. While many standardized tests have moved online, there’s research that doesn’t support this as the best medium for optimal outcomes. A 2018 study determined that students tend to score worse when testing online versus paper in both math and English language arts. In particular, English language learners, children from lower-income homes, and students on individualized education programs perform worse online than on paper.

Some studies are finding that the use of computers in formal assessments creates an obstacle for students who need special accommodations like text-to-speech readers or language translators. For example, students with visual impairments tended to perform worse on computer-based tests that provided a digital reader, compared with similar students who took paper tests with a human reader.

6. Online classes are best for students who can self-regulate and are independent learners. The Brookings Institution’s Executive Summary on online learning finds that online learning is best suited for students who are high achievers and self-motivated. The research they reviewed found that academically strong students can benefit from fully online courses, while students who are not academically strong tend to do worse in online courses than they would in in-person classes.

One example is the Back on Track study, which looked at ninth-grade students taking credit recovery algebra. The study compared students in a fully online algebra credit recovery course with students in an in-person credit recovery algebra course; the fully online students had worse overall academic outcomes and were less likely to recover credit. Additionally, students in fully online courses with no face-to-face instructor interaction typically fared worse than students in face-to-face classes. The good news is that students in blended courses (part online and part in-person) appear to do about the same as those in fully in-person classes.

7. The type of device matters. While schools often shop for the least expensive option for student devices, a recent study of remote learning found that the type and quality of student devices matter for learning outcomes. Students who used older devices with slower processors had lower-quality learning experiences than those with newer devices and stronger specifications.

These are some highlights from recent studies that can inform teachers and school districts as they make decisions about purchasing technology, setting policies, or devising alternative academic offerings. It is important to understand the evidence behind any edtech-related decision that could impact many students.


Impact of modern technology in education

  • Journal of Applied and Advanced Research 3(S1):33
  • CC BY-NC 4.0



Digital Technology in Nutrition Education and Behavior Change: Opportunities and Challenges

Alexandra L. MacMillan Uribe

Institute for Advancing Health Through Agriculture, Texas A&M AgriLife Research

Emily Welker Duffy

University of North Carolina at Chapel Hill, Gillings School of Global Public Health, Department of Nutrition

Basheerah Enahora

Department of Agricultural and Human Sciences, North Carolina State University, NC State Extension

Phrashiah Githinji

Texas A&M AgriLife Research, Institute for Advancing Health through Agriculture

Jared McGuirt

Department of Nutrition, University of North Carolina Greensboro

Gina L. Tripicchio

Center for Obesity Research and Education, College of Public Health, Temple University

The incorporation of digital technology (digitech) within nutrition education and behavior change interventions (NEBI) has markedly increased, and COVID-19 rapidly accelerated advancement and acceptability in this area.1 The proliferation of digitech, including devices and platforms, creates novel ways for end-users to engage with NEBI and presents unique opportunities for increasing reach and engagement of underrepresented populations.2-5 While a “digital divide” exists with some digitech, like desktop/laptop ownership and home broadband internet access, most people own smartphones (≥76%) or use social media (≥65%), regardless of income, race and ethnicity, or age.6-8 Furthermore, digitech can resolve common barriers to NEBI participation (e.g., lack of transportation or time) and can increase NEBI scalability.9 Prior to developing or adapting NEBI that incorporate digitech, it is important to consider challenges that might impact their effectiveness and approaches that enhance equitable access.

Digitech-specific, evidence-based frameworks are critical for developing effective NEBI. In user-centered design, for example, end users’ needs and preferences are prioritized and used to guide design processes,10 leading to improved participant engagement and an increased likelihood of an effective intervention.11 Researchers may also consider implementation process models to guide development and optimize sustained digitech utilization. For example, the Exploration, Adoption/Preparation, Implementation, Sustainment (EPIS) model helps identify whether NEBI-related digitech is feasible, adoptable, and relevant to the intended population.12 Classic theories, like diffusion of innovation, can also be applied to understand how digitech innovations are adopted. Engagement strategies like reminders, coaching, and personalized information13 are also important considerations in NEBI digitech. Digital inequities, such as inconsistent internet access or low digital literacy, disproportionately burden the same populations burdened by diet-related disease inequities. Employing user-centered design and leveraging digitech already adopted by the intended audience (e.g., among Hispanics, 80% use social media and 85% own smartphones)6,8 could help reach populations most at risk of diet-related diseases.

Another key challenge is the financial cost of developing and maintaining digitech. For example, a mobile application with simple features can cost $16,000 to $32,000, not including maintenance and updates.14 This cost pressure is compounded by the competition, money, and fast pace of the digitech industry, which are often misaligned with the careful, slower pace of research. Rigorous digitech-focused funding mechanisms could help support the development and maintenance of innovations. However, continued funding for maintenance and updates may require further testing or expansion of digitech as part of additional research proposals. Another strategy is leveraging industry’s financial assets and audience reach through collaborative projects that navigate the often-divergent interests of research and industry.

Ultimately, digitech holds great promise for enhancing NEBI reach and effectiveness, especially to address disparities, and warrants continued investigation by nutrition educators and researchers. The Society for Nutrition Education and Behavior (SNEB) DigiTech Division is well-positioned to lead this charge through educating and connecting SNEB members with NEBI digitech experts and resources.15 Digitech NEBI can effectively meet the specific needs and preferences of the intended audience while achieving desired outcomes in nutrition education and behavior change.



Research on the Design and Educational Application of Library Management System Based on SSM and MySQL Technology


Categories, themes and research evolution of the study of digital literacy: a bibliometric analysis

  • Published: 07 September 2024


  • Dongping Wu (ORCID: orcid.org/0009-0000-8049-4431)
  • Sheiladevi Sukumaran
  • Xiaomin Zhi (ORCID: orcid.org/0009-0006-1249-3457)
  • Wenjing Zhou (ORCID: orcid.org/0009-0001-4845-6790)
  • Lihua Li (ORCID: orcid.org/0009-0000-0912-5561)
  • Hongnan You (ORCID: orcid.org/0009-0003-8024-7256)

With the rise of online and digital products, scholars have taken a keen interest in digital literacy, opening new research dimensions. The purpose of this study is to present a bibliometric analysis of digital literacy using CiteSpace and to explore the categories, themes, and research evolution of the field. A total of 9,042 bibliographic records were retrieved from the WoS Core Collection for 1990 to 2024. Using CiteSpace, the paper conducted keyword co-occurrence analysis, reference co-citation analysis, category co-occurrence analysis, and landscape and timeline views to identify the themes, hotspots, and research evolution of digital literacy research. The results demonstrate that education & educational research; health care sciences & services; and public, environmental & occupational health are the top three research categories to which digital literacy research belongs. Combining the main clusters and their respective keywords yielded eight prominent themes. In the timeline view, clusters such as health literacy, digital literacy, and digital storytelling show strong vitality and good sustainability, especially the digital literacy cluster. The timeline visualization reveals three periods in the development of digital literacy research. This study can serve as fundamental support and a directional guide for the study of digital literacy, and can assist researchers and educators interested in digital teaching and learning or educational technology in future work in this field.
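
For readers unfamiliar with keyword co-occurrence analysis, the sketch below reimplements its core idea in plain Python on hypothetical bibliographic records; the study itself used CiteSpace rather than code like this. Keywords that frequently co-occur suggest thematic clusters of the kind the abstract describes.

```python
# Minimal sketch of keyword co-occurrence counting, the building block of
# co-occurrence maps in bibliometric tools such as CiteSpace.
# The records below are hypothetical.

from collections import Counter
from itertools import combinations

records = [
    {"keywords": ["digital literacy", "health literacy", "older adults"]},
    {"keywords": ["digital literacy", "digital storytelling"]},
    {"keywords": ["digital literacy", "health literacy", "COVID-19"]},
]

cooccurrence = Counter()
for record in records:
    for a, b in combinations(sorted(set(record["keywords"])), 2):
        cooccurrence[(a, b)] += 1  # one co-occurrence per record per pair

for (a, b), n in cooccurrence.most_common(3):
    print(f"{a} <-> {b}: {n}")
# 'digital literacy <-> health literacy' counts highest, hinting at a cluster.
```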


Data availability

Data will be made available upon request.


Acknowledgements

The researchers would like to thank SEGI University Malaysia.

This study was funded by the provincial humanities and social science project of the Jiangxi Provincial Department of Education of China, “Research on the Construction and Development of Teachers’ Digitalization Teaching Ability in the Post-pandemic Times” (No. JY20104); this article is a phased result of that project.

Author information

Authors and Affiliations

School of Chemistry and Materials, Jiangxi Agricultural University, Nanchang, Jiangxi, China

Dongping Wu

Faculty of Education, Language, Psychology, and Music, SEGI University, Kota Damansara, Selangor, Malaysia

Sheiladevi Sukumaran & Hongnan You

School of Foreign Languages, Jiangxi Agricultural University, Nanchang, Jiangxi, 330045, China

Xiaomin Zhi, Wenjing Zhou, Lihua Li & Hongnan You


Contributions

Hongnan You: Writing – review & editing, Writing – original draft, Validation, Supervision, Project administration, Methodology, Funding acquisition, Formal analysis, Conceptualization. Dongping Wu: Writing – review & editing, Writing – original draft, Validation, Methodology, Formal analysis, Conceptualization. Sheiladevi Sukumaran: Writing – review & editing, Validation, Supervision, Methodology, Conceptualization. Xiaomin Zhi: Writing – review & editing. Wenjing Zhou: Writing – review & editing. Lihua Li: Writing – review & editing.

Corresponding author

Correspondence to Hongnan You .

Ethics declarations

Conflict of Interest

There is no conflict of interest for any of the participants.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and Permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Wu, D., Sukumaran, S., Zhi, X. et al. Categories, themes and research evolution of the study of digital literacy: a bibliometric analysis. Educ Inf Technol (2024). https://doi.org/10.1007/s10639-024-12955-x

Download citation

Received: 30 April 2024

Accepted: 06 August 2024

Published: 07 September 2024

DOI: https://doi.org/10.1007/s10639-024-12955-x


Keywords

  • Research evolution
  • Digital literacy
  • Bibliometric analysis
