Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Writing a Literature Review

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research (scholarship) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis are
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)
Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological: The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic: If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological: If your sources come from fields or disciplines that use a variety of research methods, you can compare the results and conclusions that emerge from the different approaches: for example, qualitative versus quantitative research, empirical versus theoretical scholarship, or research divided by sociological, historical, or cultural sources.
  • Theoretical: In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our “Conducting Research” resources.

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).
  • Read more about synthesis here.

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


Literature Review: 3 Essential Ingredients

The theoretical framework, empirical research and research gap

By: Derek Jansen (MBA) | Reviewer: Eunice Rautenbach (DTech) | July 2023

Writing a comprehensive but concise literature review is no simple task. There’s a lot of ground to cover and it can be challenging to figure out what’s important and what’s not. In this post, we’ll unpack three essential ingredients that need to be woven into your literature review to lay a rock-solid foundation for your study.

This post is based on our popular online course, Literature Review Bootcamp. In the course, we walk you through the full process of developing a literature review, step by step.

Overview: Essential Ingredients

  • Ingredients vs structure
  • The theoretical framework (foundation of theory)
  • The empirical research
  • The research gap
  • Summary & key takeaways

Ingredients vs Structure

As a starting point, it’s important to clarify that the three ingredients we’ll cover in this post are things that need to feature within your literature review, as opposed to a set structure for your chapter. In other words, there are different ways you can weave these three ingredients into your literature review. Regardless of which structure you opt for, each of the three components will make an appearance in some shape or form. If you’re keen to learn more about structural options, we’ve got a dedicated post about that here.


1. The Theoretical Framework

Let’s kick off with the first essential ingredient: the theoretical framework, also called the foundation of theory.

The foundation of theory, as the name suggests, is where you’ll lay down the foundational building blocks for your literature review so that your reader can get a clear idea of the core concepts, theories and assumptions (in relation to your research aims and questions) that will guide your study. Note that this is not the same as a conceptual framework.

Typically you’ll cover a few things within the theoretical framework:

Firstly, you’ll need to clearly define the key constructs and variables that will feature within your study. In many cases, any given term can have multiple different definitions or interpretations – for example, different people will define the concept of “integrity” in different ways. This variation in interpretation can, of course, wreak havoc on how your study is understood. So, this section is where you’ll pin down what exactly you mean when you refer to X, Y or Z in your study, as well as why you chose that specific definition. It’s also a good idea to state any assumptions that are inherent in these definitions and why these are acceptable, given the purpose of your study.

Related to this, the second thing you’ll need to cover in your theoretical framework is the relationships between these variables and/or constructs. For example, how does one variable potentially affect another variable – does A have an impact on B, B on A, and so on? In other words, you want to connect the dots between the different “things” of interest that you’ll be exploring in your study. Note that you only need to focus on the key items of interest here (i.e. those most central to your research aims and questions) – not every possible construct or variable.

Lastly, and very importantly, you need to discuss the existing theories that are relevant to your research aims and research questions. For example, if you’re investigating the uptake/adoption of a certain application or software, you might discuss Davis’ Technology Acceptance Model and unpack what it has to say about the factors that influence technology adoption. More importantly, though, you need to explain how this impacts your expectations about what you will find in your own study. In other words, your theoretical framework should reveal some insights about what answers you might expect to find to your research questions.


2. The Empirical Research

Onto the second essential ingredient, which is empirical research. This section is where you’ll present a critical discussion of the existing empirical research that is relevant to your research aims and questions.

But what exactly is empirical research?

Simply put, empirical research includes any study that involves actual data collection and analysis, whether that’s qualitative data, quantitative data, or a mix of both. This contrasts with purely theoretical literature (the previous ingredient), which draws its conclusions based exclusively on logic and reason, as opposed to an analysis of real-world data.

In other words, theoretical literature provides a prediction or expectation of what one might find based on reason and logic, whereas empirical research tests the accuracy of those predictions using actual real-world data. This reflects the broader process of knowledge creation: first developing a theory, and then testing it out in the field.

Long story short, the second essential ingredient of a high-quality literature review is a critical discussion of the existing empirical research. Here, it’s important to go beyond description. You’ll need to present a critical analysis that addresses some (if not all) of the following questions:

  • What have different studies found in relation to your research questions?
  • What contexts have (and haven’t) been covered? For example, certain countries, cities, cultures, etc.
  • Are the findings across the studies similar, or is there a lot of variation? If so, why might this be the case?
  • What sorts of research methodologies have been used and how could these help me develop my own methodology?
  • What were the noteworthy limitations of these studies?

Simply put, your task here is to present a synthesis of what’s been done (and found) within the empirical research, so that you can clearly assess the current state of knowledge and identify potential research gaps, which leads us to our third essential ingredient.


3. The Research Gap

The third essential ingredient of a high-quality literature review is a discussion of the research gap (or gaps).

But what exactly is a research gap?

Simply put, a research gap is any unaddressed or inadequately explored area within the existing body of academic knowledge. In other words, a research gap emerges whenever there’s still some uncertainty regarding a certain topic or question.

For example, it might be the case that there are mixed findings regarding the relationship between two variables (e.g., job performance and work-from-home policies). Similarly, there might be a lack of research regarding the impact of a specific new technology on people’s mental health. On the other end of the spectrum, there might be a wealth of research regarding a certain topic within one country (say the US), but very little research on that same topic in a different social context (say, China).

These are just random examples, but as you can see, research gaps can emerge from many different places. What’s important to understand is that the research gap (or gaps) needs to emerge from your previous discussion of the theoretical and empirical literature. In other words, your discussion in those sections needs to start laying the foundation for the research gap.

For example, when discussing empirical research, you might mention that most studies have focused on a certain context, yet very few (or none) have focused on another context, and there’s reason to believe that findings may differ. Or you might highlight how there’s a fair deal of mixed findings and disagreement regarding a certain matter. In other words, you want to start laying a little breadcrumb trail in those sections so that your discussion of the research gap is firmly rooted in the rest of the literature review.

But why does all of this matter?

Well, the research gap should serve as the core justification for your study. Through your literature review, you’ll show what gaps exist in the current body of knowledge, and your study will then attempt to fill (or contribute towards filling) one of those gaps. In other words, you’re first explaining what the problem is (some sort of gap) and then proposing how you’ll solve it.


Key Takeaways

To recap, the three ingredients that need to be mixed into your literature review are:

  • The foundation of theory or theoretical framework
  • The empirical or evidence-based research
  • The research gap

As we mentioned earlier, these are components of a literature review and not (necessarily) a structure for your literature review chapter. Of course, you can structure your chapter in a way that reflects these three components (in fact, in some cases that works very well), but it’s certainly not the only option. The right structure will vary from study to study, depending on various factors.




Literature Review and Theoretical Framework: Understanding the Differences

Sumalatha G


A literature review and a theoretical framework are both important components of academic research. However, they serve different purposes and have distinct characteristics. In this article, we will examine the concepts of literature review and theoretical framework, explore their significance, and highlight the key differences between the two.

Defining the Concepts: Literature Review and Theoretical Framework

Before we dive into the details, let's clarify what a literature review and a theoretical framework actually mean.

What is a Literature Review?

A literature review is a critical analysis and synthesis of existing research and scholarly articles on a specific topic. It involves reviewing and summarizing the current knowledge and understanding of the subject matter. By examining previous studies, the scholar can identify knowledge gaps, assess the strengths and weaknesses of existing research, and present a comprehensive overview of the topic.

When conducting a literature review, the scholar delves into a vast array of sources, including academic journals, books, conference proceedings, and reputable online databases. This extensive exploration allows them to gather relevant information, theories, and methodologies related to their research topic.

Furthermore, a literature review provides a solid foundation for the research by establishing the context and significance of the study. It helps researchers identify the key concepts, theories, and variables that are relevant to their research objectives. By critically analyzing the existing literature, scholars can identify research gaps and propose new avenues for scientific investigation.

Moreover, a literature review is not merely a summary of previous studies. It requires a critical evaluation of the methodologies used, the quality of the data collected, and the validity of the conclusions drawn.

Researchers must assess the credibility and reliability of the sources they include in their review to ensure the accuracy and robustness of their analysis.

What is a Theoretical Framework?

A theoretical framework provides a conceptual explanation for the research problem or question being investigated. It serves as a foundation that guides the formulation of hypotheses and research objectives. A theoretical framework helps researchers to analyze and interpret their findings by establishing a set of assumptions, concepts, and relationships that underpin their study. It provides a structured framework for organizing and presenting research outcomes.

When developing a theoretical framework, researchers draw upon existing theories and concepts from relevant disciplines to create a conceptual framework that aligns with their research objectives. This framework helps researchers to define the variables they will study, establish the relationships between these variables, and propose hypotheses that can be tested through empirical research.

Furthermore, a theoretical framework provides a roadmap for researchers to navigate through the complexities of their study. It helps them to identify the key constructs and variables that need to be measured and analyzed. By providing a clear structure, the theoretical framework ensures that researchers stay focused on their research objectives and avoid getting lost in a sea of information.

Moreover, a theoretical framework allows researchers to make connections between their study and existing theories or models. By building upon established knowledge, researchers can contribute to the advancement of their field and provide new insights and perspectives. The theoretical framework also helps researchers interpret their findings in a meaningful way and draw conclusions that have theoretical and practical implications.

In summary, both a literature review and a theoretical framework play crucial roles in the research process. While a literature review provides a comprehensive overview of existing knowledge and identifies research gaps, a theoretical framework establishes the conceptual foundation for the study and guides the formulation of research objectives and hypotheses. Together, these two elements contribute to the development of a robust and well-grounded research study.

The Purpose and Importance of Literature Reviews

Now that we have a clear understanding of what a literature review is, let's explore its purpose and significance.

A literature review plays a crucial role in academic research. It serves several purposes, including:

  • Providing a comprehensive understanding of the existing literature in a particular field.
  • Identifying the gaps, controversies, or inconsistencies in the current knowledge.
  • Helping researchers to refine their research questions and objectives.
  • Ensuring that the research being conducted is novel and contributes to the existing body of knowledge.

The Benefits of Conducting a Literature Review

There are numerous benefits to conducting a literature review, such as:

  • Enhancing the researcher's knowledge and understanding of the subject area.
  • Providing a framework for developing research hypotheses and objectives.
  • Identifying potential research methodologies and approaches.
  • Informing the selection of appropriate data collection and analysis methods.
  • Guiding the interpretation and discussion of research findings.

The Purpose and Importance of Theoretical Frameworks

Moving on to theoretical frameworks, let us discuss their purpose and importance.

When conducting research, theoretical frameworks play a crucial role in providing a solid foundation for the study. They serve as a guiding tool for researchers, helping them navigate through the complexities of their research and providing a framework for understanding and interpreting their findings.

The Function of Theoretical Frameworks in Research

Theoretical frameworks serve multiple functions in research:

  • Providing a conceptual framework enables researchers to clearly define the scope and direction of their study.
  • Acting as a roadmap, guiding researchers in formulating their research objectives and hypotheses. It helps them identify the key variables and relationships they want to explore, providing a solid foundation for their research.
  • Helping researchers identify and select appropriate research methods and techniques. When it comes to selecting research methods and techniques, theoretical frameworks are invaluable. They provide researchers with a lens through which they can evaluate different methods and techniques, ensuring that they choose the most appropriate ones for their study. By aligning their methods with the theoretical framework, researchers can enhance the validity and reliability of their research.
  • Supporting the interpretation and explanation of research findings. Once the data has been collected, theoretical frameworks help researchers make sense of their findings. They provide a framework for interpreting and explaining the results, allowing researchers to draw meaningful conclusions. By grounding their analysis in a theoretical framework, researchers can provide a solid foundation for their findings and contribute to the existing body of knowledge.
  • Facilitating the integration of new knowledge with existing theories and concepts. Theoretical frameworks also play a crucial role in the advancement of knowledge. By integrating new findings with existing theories and concepts, researchers can contribute to the development of their field.

The Advantages of Developing a Theoretical Framework

Developing a theoretical framework offers several advantages:

  • Enhancing the researcher's understanding of the research problem. By developing a theoretical framework, researchers gain a deeper understanding of the research problem they are investigating. This enhanced understanding allows researchers to approach their study with clarity and purpose.
  • Facilitating the selection of an appropriate research design. Choosing the right research design is crucial for the success of a study. A well-developed theoretical framework helps researchers select the most appropriate research design by providing a clear direction and focus. It ensures that the research design aligns with the research objectives and hypotheses, maximizing the chances of obtaining valid and reliable results.
  • Helping researchers organize their thoughts and ideas systematically. This organization helps researchers stay focused and ensures that all aspects of the research problem are considered. By structuring their thoughts, researchers can effectively communicate their ideas and findings to others.
  • Guiding the analysis and interpretation of research findings. When it comes to analyzing and interpreting research findings, a theoretical framework provides researchers with a framework to guide their process. It helps researchers identify patterns, relationships, and themes within the data, allowing for a more comprehensive analysis.

Developing a theoretical framework is essential for ensuring the validity and reliability of a study. By aligning the research with established theories and concepts, researchers can enhance the credibility of their study. A well-developed theoretical framework provides a solid foundation for the research, increasing the chances of obtaining accurate and meaningful results.

Differences Between Literature Reviews and Theoretical Frameworks

Now, let's explore the key differences between literature reviews and theoretical frameworks.

Key Differences:

  • Focus: A literature review focuses on summarizing existing research, while a theoretical framework focuses on providing a conceptual foundation for the study.
  • Scope: A literature review covers a broad range of related research, while a theoretical framework is more specific to the research problem at hand.
  • Timing: A literature review is typically conducted early in the research process, while a theoretical framework is often developed alongside the research design.
  • Purpose: A literature review aims to inform the research and establish its context, while a theoretical framework aims to guide the interpretation and analysis of findings.

In conclusion

Understanding the distinction between a literature review and a theoretical framework is crucial for conducting effective and meaningful academic research. While a literature review provides an overview of existing research, a theoretical framework guides the formulation, analysis, and interpretation of research. Both components are essential for building a strong foundation of knowledge in any field. By comprehending their purpose, significance, and key differences, researchers can enhance the quality and rigor of their research endeavors.




NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.

Chapter 9. Methods for Literature Reviews

Guy Paré and Spyros Kitsiou.

9.1. Introduction

Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).

Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).

The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).

When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the-art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge, of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the field of medical informatics publish review articles of some type.

The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.

9.2. Overview of the Literature Review Process and Steps

As explained in Templier and Paré (2015) , there are six generic steps involved in conducting a review article:

  • formulating the research question(s) and objective(s),
  • searching the extant literature,
  • screening for inclusion,
  • assessing the quality of primary studies,
  • extracting data, and
  • analyzing data.

Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).

Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself ( Petticrew & Roberts, 2006 ), identify the review’s main objective(s) ( Okoli & Schabram, 2010 ), and define the concepts or variables at the heart of their synthesis ( Cooper & Hedges, 2009 ; Webster & Watson, 2002 ). Importantly, they also need to articulate the research question(s) they propose to investigate ( Kitchenham & Charters, 2007 ). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.

Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review ( Cooper, 1988 ). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field ( Paré et al., 2015 ). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate ( Cooper, 1988 ).

Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step ( Levy & Ellis, 2006 ; vom Brocke et al., 2009 ). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance ( Petticrew & Roberts, 2006 ). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process, and a procedure to resolve disagreements must also be in place ( Liberati et al., 2009 ; Shea et al., 2009 ).

Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings ( Petticrew & Roberts, 2006 ). Ascribing quality scores to each primary study, or considering through domain-based evaluations which study components have or have not been designed and executed appropriately, makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity ( Shea et al., 2009 ).

Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest ( Cooper & Hedges, 2009 ). Indeed, the type of data that should be recorded mainly depends on the initial research questions ( Okoli & Schabram, 2010 ). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results ( Cooper & Hedges, 2009 ).

Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature ( Jesson et al., 2011 ). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence ( Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005 ; Thomas & Harden, 2008 ).
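The screening and quality-assessment steps above both call for at least two independent coders, with a procedure for resolving disagreements. The chapter does not prescribe a particular agreement statistic, but Cohen's kappa is a common way to quantify how much two screeners agree beyond chance before disagreements are reconciled. A minimal sketch, with the reviewers' include/exclude decisions invented for illustration:

```python
from collections import Counter

def cohens_kappa(reviewer_a, reviewer_b):
    """Chance-corrected agreement between two screeners' include/exclude calls."""
    assert len(reviewer_a) == len(reviewer_b)
    n = len(reviewer_a)
    observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n
    # Expected agreement under independence, from each reviewer's marginal rates.
    freq_a, freq_b = Counter(reviewer_a), Counter(reviewer_b)
    labels = set(reviewer_a) | set(reviewer_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for six candidate studies.
a = ["include", "exclude", "include", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

A low kappa would signal that the inclusion criteria themselves need clarification before the team proceeds to full-text review.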

9.3. Types of Review Articles and Brief Illustrations

EHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired from Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.

9.3.1. Narrative Reviews

The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion and can lead to biased interpretations or inferences ( Green et al., 2006 ). There are several narrative reviews in the particular eHealth domain, as in all fields, which follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).

Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues ( Green et al., 2006 ).

Recently, there have been several efforts to introduce more rigour in narrative reviews that will elucidate common pitfalls and bring changes into their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows the systematic data processing approach comprised of three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.

Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health (m-health) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how development of these systems is carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar. The search was performed using several terms and free text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.

9.3.2. Descriptive or Mapping Reviews

The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).
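The frequency analysis described above amounts to treating each included study as a row in a coding sheet and tabulating its coded characteristics. A minimal sketch of this kind of tabulation; the study records and category values are invented for illustration:

```python
from collections import Counter

# Hypothetical coding sheet: one record per study included in the review.
studies = [
    {"year": 2012, "method": "survey",     "outcome": "positive"},
    {"year": 2014, "method": "RCT",        "outcome": "non-significant"},
    {"year": 2014, "method": "case study", "outcome": "positive"},
    {"year": 2015, "method": "RCT",        "outcome": "negative"},
]

# Frequency of each characteristic of interest across the sample:
# these counts are the quantitative results a descriptive review reports.
for field in ("method", "outcome"):
    counts = Counter(s[field] for s in studies)
    print(field, dict(counts))
```

Publication-year counts produced the same way are what a mapping review would plot to show growth trends in a topic area.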

In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.

An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in MEDLINE using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).

9.3.3. Scoping Reviews

Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.

Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).

One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011). These authors reviewed the existing literature on personal health record (PHR) systems including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to PHRs were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and satisfaction for patients, more research is needed to evaluate the results of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other studies of benefits realized through the use of PHRs. Hence, they suggested that more research is needed that addresses the current lack of understanding of optimal functionality and usability of these systems, and how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).

9.3.4. Forms of Aggregative Reviews

Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies, assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; DeShazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.

Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meet a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest to support evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:

  • Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
  • Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
  • Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
  • Analyzing data using quantitative or qualitative methods.
  • Presenting results in summary of findings tables.
  • Interpreting results and drawing conclusions.

Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate from each study by outcome of interest an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can create more precise and reliable estimates of intervention effects than those derived from individual studies alone, when these are examined independently as discrete sources of information.
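The weighted average described above can be made concrete with a fixed-effect, inverse-variance sketch: each study contributes an effect estimate and its standard error, the weight is 1/SE², and the summary effect and its 95% confidence interval follow directly. The per-study values below are invented for illustration; real meta-analyses would typically use dedicated tooling (e.g., R's metafor or Python's statsmodels) and would also assess heterogeneity before choosing between fixed- and random-effects models.

```python
import math

# Hypothetical per-study effect estimates (e.g., log odds ratios) and standard errors.
effects = [0.30, 0.10, 0.25]
ses     = [0.10, 0.15, 0.20]

weights = [1 / se ** 2 for se in ses]  # inverse-variance weights: precise studies count more
summary = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se_summary = math.sqrt(1 / sum(weights))  # standard error of the pooled estimate
ci = (summary - 1.96 * se_summary, summary + 1.96 * se_summary)

print(f"summary effect = {summary:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note how the pooled confidence interval is narrower than any single study's: this is the gain in precision the chapter attributes to statistically combining data from multiple studies.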

The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery, with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all RCTs that were eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.

Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods simply because there is extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods such as vote counting, content analysis, classification schemes and tabulations, as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as qualitative systematic review.
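Vote counting, the simplest of the qualitative synthesis techniques just listed, tallies how many included studies report effects in each direction when heterogeneity precludes pooling. A minimal sketch with invented direction-of-effect codes (vote counting is widely cautioned against as a substitute for meta-analysis, since it ignores study size and precision):

```python
from collections import Counter

# Hypothetical direction-of-effect codes for studies too heterogeneous to pool.
directions = ["positive", "positive", "non-significant", "positive", "negative"]

tally = Counter(directions)
majority = tally.most_common(1)[0]  # (direction, count) of the most frequent result
print(dict(tally), "-> most frequent direction:", majority[0])
```

In a qualitative systematic review such a tally would accompany, not replace, a narrative account of why the studies point in different directions.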

A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO ( www.crd.york.ac.uk/prospero/ ) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. Consequently, the authors resorted to using narrative analysis and synthesis to describe the effectiveness of handheld computers on accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.

In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).

9.3.5. Realist Reviews

Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).

To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how “complex interventions” work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable ( Shepperd et al., 2009 ). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories ( Rousseau et al., 2008 ).

The main objective pursued in the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways, which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search of four databases, covering the period from 2003 to 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized the information to bring forward the mechanisms through which patient portals contribute to outcomes, as well as the variation in outcomes across different contexts.

9.3.6. Critical Reviews

Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results ( Baumeister & Leary, 1997 ; Kirkevold, 1997 ). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement ( Kirkevold, 1997 ).

Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search on multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. To this end, they provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.

9.4. Summary

Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.

Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).


As shown in Table 9.1, each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles (Green et al., 2006). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an “a priori” review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research (Paré et al., 2015). Hence, in order to select the most appropriate type of review, it is critical to know, before embarking on a review project, why the research synthesis is being conducted and what types of methods are best aligned with the pursued goals.

9.5. Concluding Remarks

In light of the increased use of evidence-based practice and research generating stronger evidence (Grady et al., 2011; Lyden et al., 2013), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews we used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.

We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another (Paré et al., 2015). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review — be it qualitative, quantitative or mixed — is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. For one thing, reliability is related to the reproducibility of the review process and steps, which is facilitated by comprehensive documentation of the literature search, extraction, coding and analysis performed in the review. Whether or not the search is comprehensive, and whether or not it involves a methodical approach to data extraction and synthesis, it is important that the review documents, in an explicit and transparent manner, the steps and approach used in the process of its development. Next, validity characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches (vom Brocke et al., 2009). In short, the rigour of any review article is reflected by the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015), which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.

To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.

  • Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
  • Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
  • Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of the American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
  • Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
  • Bandara W., Miskon S., Fielt E. A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems (ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
  • Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
  • Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
  • Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, NJ: John Wiley & Sons Inc; 2009.
  • Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
  • Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
  • Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
  • Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
  • Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
  • Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology. 2013; 13 :48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
  • Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
  • Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, nj : John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
  • Deshazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in MeSH. BMC Medical Informatics and Decision Making. 2009; 9 :7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
  • Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
  • Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
  • Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
  • Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards (RAMESES). BMC Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
  • Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews. 2013;(12):CD007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
  • Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, NJ: Wiley-Blackwell; 2008.
  • Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
  • King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
  • Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
  • Kitchenham B., Charters S. Guidelines for performing systematic literature reviews in software engineering. EBSE Technical Report Version 2.3. Keele & Durham, UK: Keele University & University of Durham; 2007.
  • Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
  • Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
  • Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
  • Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
  • Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151 (4):W-65. [ PubMed : 19622512 ]
  • Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
  • Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. BMC Medical Informatics and Decision Making. 2014; 14 :56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
  • Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 :f5040. [ PubMed : 23945367 ] [ CrossRef ]
  • Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. BMC Medicine. 2003; 1 :2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
  • Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
  • Oates B. J. Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems (ECIS 2011); 2011. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1221&context=ecis2011
  • Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. SSRN Electronic Journal. 2010.
  • Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of the American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
  • Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
  • Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
  • Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
  • Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
  • Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
  • Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Publishing Co; 2006.
  • Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
  • Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
  • Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62 (10):1013–1020. [ PubMed : 19230606 ]
  • Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
  • Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
  • Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology. 2011; 11 (1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
  • Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
  • Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
  • Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. bmc Medical Research Methodology. 2008; 8 (1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
  • vom Brocke J., Simons A., Niehaves B., Riemer K., Plattfaut R., Cleven A. Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems (ECIS 2009); Verona, Italy. 2009.
  • Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
  • Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

  • Cite this page: Paré G., Kitsiou S. Chapter 9: Methods for Literature Reviews. In: Lau F., Kuziemsky C., editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Module 2 Chapter 3: What is Empirical Literature & Where can it be Found?

In Module 1, you read about the problem of pseudoscience. Here, we revisit the issue in addressing how to locate and assess scientific or empirical literature. In this chapter you will read about:

  • distinguishing between what IS and IS NOT empirical literature
  • how and where to locate empirical literature for understanding diverse populations, social work problems, and social phenomena.

Probably the most important take-home lesson from this chapter is that one source is not sufficient to be well-informed on a topic. It is important to locate multiple sources of information and to critically appraise the points of convergence and divergence in the information acquired from different sources. This is especially true for emerging and poorly understood topics, as well as for answering complex questions.

What Is Empirical Literature

Social workers often need to locate valid, reliable information concerning the dimensions of a population group or subgroup, a social work problem, or social phenomenon. They might also seek information about the way specific problems or resources are distributed among the populations encountered in professional practice. Or, social workers might be interested in finding out about the way that certain people experience an event or phenomenon. Empirical literature resources may provide answers to many of these types of social work questions. In addition, resources containing data regarding social indicators may also prove helpful. Social indicators are the “facts and figures” statistics that describe the social, economic, and psychological factors that have an impact on the well-being of a community or other population group. The United Nations (UN) and the World Health Organization (WHO) are examples of organizations that monitor social indicators at a global level: dimensions of population trends (size, composition, growth/loss), health status (physical, mental, behavioral, life expectancy, maternal and infant mortality, fertility/child-bearing, and diseases like HIV/AIDS), housing and quality of sanitation (water supply, waste disposal), education and literacy, and work/income/unemployment/economics, for example.


Three characteristics stand out in empirical literature compared to other types of information available on a topic of interest: systematic observation and methodology, objectivity, and transparency/replicability/reproducibility. Let’s look a little more closely at these three features.

Systematic Observation and Methodology. The hallmark of empiricism is “repeated or reinforced observation of the facts or phenomena” (Holosko, 2006, p. 6). In empirical literature, established research methodologies and procedures are systematically applied to answer the questions of interest.

Objectivity. Gathering “facts,” whatever they may be, drives the search for empirical evidence (Holosko, 2006). Authors of empirical literature are expected to report the facts as observed, whether or not these facts support the investigators’ original hypotheses. Research integrity demands that the information be provided in an objective manner, reducing sources of investigator bias to the greatest possible extent.

Transparency and Replicability/Reproducibility.   Empirical literature is reported in such a manner that other investigators understand precisely what was done and what was found in a particular research study—to the extent that they could replicate the study to determine whether the findings are reproduced when repeated. The outcomes of an original and replication study may differ, but a reader could easily interpret the methods and procedures leading to each study’s findings.

What is NOT Empirical Literature

By now, it is probably obvious to you that literature based on “evidence” that is not developed in a systematic, objective, transparent manner is not empirical literature. On one hand, non-empirical types of professional literature may have great significance to social workers. For example, social work scholars may produce articles that are clearly identified as describing a new intervention or program without evaluative evidence, critiquing a policy or practice, or offering a tentative, untested theory about a phenomenon. These resources are useful in educating ourselves about possible issues or concerns. But, even if they are informed by evidence, they are not empirical literature. Here is a list of several sources of information that do not meet the standard of being called empirical literature:

  • your course instructor’s lectures
  • political statements
  • advertisements
  • newspapers & magazines (journalism)
  • television news reports & analyses (journalism)
  • many websites, Facebook postings, Twitter tweets, and blog postings
  • the introductory literature review in an empirical article

You may be surprised to see the last two included in this list. Like the other sources of information listed, these sources also might lead you to look for evidence. But, they are not themselves sources of evidence. They may summarize existing evidence, but in the process of summarizing (like your instructor’s lectures), information is transformed, modified, reduced, condensed, and otherwise manipulated in such a manner that you may not see the entire, objective story. These are called secondary sources, as opposed to the original, primary source of evidence. In relying solely on secondary sources, you sacrifice your own critical appraisal and thinking about the original work—you are “buying” someone else’s interpretation and opinion about the original work, rather than developing your own interpretation and opinion. What if they got it wrong? How would you know if you did not examine the primary source for yourself? Consider the following as an example of “getting it wrong” being perpetuated.

Example: Bullying and School Shootings. One result of the heavily publicized April 1999 school shooting at Columbine High School (Colorado) was a heavy emphasis on bullying as a causal factor in these incidents (Mears, Moon, & Thielo, 2017), “creating a powerful master narrative about school shootings” (Raitanen, Sandberg, & Oksanen, 2017, p. 3). Naturally, with an identified cause, a great deal of effort was devoted to anti-bullying campaigns and interventions for enhancing resilience among youth who experience bullying. However important these strategies might be for promoting positive mental health, preventing poor mental health, and possibly preventing suicide among school-aged children and youth, it is a mistaken belief that they can prevent school shootings (Mears, Moon, & Thielo, 2017). Many accounts of the perpetrators having been bullied come from potentially inaccurate third-party sources rather than the perpetrators themselves; bullying was not involved in all instances of school shooting; a perpetrator’s perception of being bullied or persecuted is not necessarily accurate; many who experience severe bullying do not perpetrate these incidents; bullies are the least targeted shooting victims; perpetrators of the shooting incidents were often bullying others; and bullying is only one of many important factors associated with perpetrating such an incident (Ioannou, Hammond, & Simpson, 2015; Mears, Moon, & Thielo, 2017; Newman & Fox, 2009; Raitanen, Sandberg, & Oksanen, 2017). While mass media reports deliver bullying as a means of explaining the inexplicable, the reality is not so simple: “The connection between bullying and school shootings is elusive” (Langman, 2014), and “the relationship between bullying and school shooting is, at best, tenuous” (Mears, Moon, & Thielo, 2017, p. 940).
The point is, when a narrative becomes this publicly accepted, it is difficult to sort out truth and reality without going back to original sources of information and evidence.


What May or May Not Be Empirical Literature: Literature Reviews

Investigators typically engage in a review of existing literature as they develop their own research studies. The review informs them about where knowledge gaps exist, methods previously employed by other scholars, limitations of prior work, and previous scholars’ recommendations for directing future research. These reviews may appear as a published article, without new study data being reported (see Fields, Anderson, & Dabelko-Schoeny, 2014 for example). Or, the literature review may appear in the introduction to their own empirical study report. These literature reviews are not considered to be empirical evidence sources themselves, although they may be based on empirical evidence sources. One reason is that the authors of a literature review may or may not have engaged in a systematic search process, identifying a full, rich, multi-sided pool of evidence reports.

There is, however, a type of review that applies systematic methods and is, therefore, considered to be more strongly rooted in evidence: the systematic review.

Systematic review of literature. A systematic review is a type of literature report in which established methods have been systematically applied, objectively, in locating and synthesizing a body of literature. The systematic review report is characterized by a great deal of transparency about the methods used and the decisions made in the review process, and is replicable. Thus, it meets the criteria for empirical literature: systematic observation and methodology, objectivity, and transparency/reproducibility. We will work a great deal more with systematic reviews in the second course, SWK 3402, since they are important tools for understanding interventions. They are somewhat less common, but not unheard of, in helping us understand diverse populations, social work problems, and social phenomena.

Locating Empirical Evidence

Social workers have available a wide array of tools and resources for locating empirical evidence in the literature. These can be organized into four general categories.

Journal Articles. A number of professional journals publish articles where investigators report on the results of their empirical studies. However, it is important to know how to distinguish between empirical and non-empirical manuscripts in these journals. A key indicator, though not the only one, involves a peer review process. Many professional journals require that manuscripts undergo a process of peer review before they are accepted for publication. This means that the authors’ work is shared with scholars who provide feedback to the journal editor as to the quality of the submitted manuscript. The editor then makes a decision based on the reviewers’ feedback:

  • Accept as is
  • Accept with minor revisions
  • Request that a revision be resubmitted (no assurance of acceptance)

When a “revise and resubmit” decision is made, the piece will go back through the review process to determine if it is now acceptable for publication and that all of the reviewers’ concerns have been adequately addressed. Editors may also reject a manuscript because it is a poor fit for the journal, based on its mission and audience, rather than sending it for review consideration.


Indicators of journal relevance. Not all journals are equally relevant to every type of question being asked of the literature. Journals may overlap to a great extent in the topics they cover; in other words, a topic might appear in multiple different journals, depending on how the topic is being addressed. For example, articles that might help answer a question about the relationship between community poverty and violence exposure might appear in several different journals, some with a focus on poverty, others with a focus on violence, and still others on community development or public health. Journal titles are sometimes a good starting point but may not give a broad enough picture of what a journal actually covers.

In focusing a literature search, it also helps to review a journal’s mission and target audience. For example, at least four different journals focus specifically on poverty:

  • Journal of Children & Poverty
  • Journal of Poverty
  • Journal of Poverty and Social Justice
  • Poverty & Public Policy

Let’s look at an example using the Journal of Poverty and Social Justice. Information about this journal is located on the journal’s webpage: http://policy.bristoluniversitypress.co.uk/journals/journal-of-poverty-and-social-justice . In the section headed “About the Journal” you can see that it is an internationally focused research journal, and that it addresses social justice issues in addition to poverty alone. The research articles are peer-reviewed (there appear to be non-empirical discussions published, as well). These descriptions of a journal are almost always available, sometimes listed as “scope” or “mission.” These descriptions also indicate the sponsorship of the journal—sponsorship may be institutional (a particular university or agency, such as Smith College Studies in Social Work), a professional organization, such as the Council on Social Work Education (CSWE) or the National Association of Social Workers (NASW), or a publishing company (e.g., Taylor & Francis, Wiley, or Sage).

Indicators of journal caliber. Despite engaging in a peer review process, not all journals are equally rigorous. Some journals have very high rejection rates, meaning that many submitted manuscripts are rejected; others have fairly high acceptance rates, meaning that relatively few manuscripts are rejected. This is not necessarily the best indicator of quality, however, since newer journals may not be sufficiently familiar to authors with high quality manuscripts, and some journals are very specific in terms of what they publish. Another index that is sometimes used is the journal’s impact factor. The impact factor is a quantitative indicator of how often articles published in the journal are cited in the reference lists of other journal articles: it is calculated as the number of citations in a given year to the articles a journal published in the preceding two years, divided by the number of articles published in those two years (the number that could have been cited). For example, the impact factor for the Journal of Poverty and Social Justice in our list above was 0.70 in 2017, and for the Journal of Poverty it was 0.30. These are relatively low figures compared to a journal like the New England Journal of Medicine, with an impact factor of 59.56! This means that articles published in that journal were cited, on average, more than 59 times within the next year or two.
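The arithmetic behind the impact factor can be shown in a short sketch. The citation and article counts below are invented for illustration; they are not actual data for any journal:

```python
# Standard two-year impact factor for year Y:
#   citations received in Y to articles published in Y-1 and Y-2,
#   divided by the number of citable articles published in Y-1 and Y-2.

def impact_factor(citations_to_prior_two_years: int, articles_prior_two_years: int) -> float:
    """Citations in year Y to items from Y-1 and Y-2, over citable items from those years."""
    return citations_to_prior_two_years / articles_prior_two_years

# Hypothetical journal: 140 citations in 2017 to the 200 articles it published in 2015-2016.
print(round(impact_factor(140, 200), 2))  # 0.7 -- comparable to the 0.70 figure cited above
```

A journal whose back-catalog articles are each cited dozens of times per year, as with the New England Journal of Medicine, would show a correspondingly large ratio.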

Impact factors are not necessarily the best indicator of caliber, however, since many strong journals are geared toward practitioners rather than scholars, so they are less likely to be cited by other scholars but may have a large impact on a large readership. This may be the case for a journal like the one titled Social Work, the official journal of the National Association of Social Workers. It is distributed free to all members: over 120,000 practitioners, educators, and students of social work world-wide. The journal has a recent impact factor of 0.790. Journals with social work relevant content have impact factors in the range of 1.0 to 3.0 according to Scimago Journal & Country Rank (SJR), particularly when they are interdisciplinary journals (for example, Child Development, Journal of Marriage and Family, Child Abuse and Neglect, Child Maltreatment, Social Service Review, and British Journal of Social Work). Once upon a time, a reader could locate different indexes comparing the “quality” of social work-related journals. However, the concept of “quality” is difficult to systematically define. These indexes have mostly been replaced by impact ratings, which are not necessarily the best, most robust indicators on which to rely in assessing journal quality. For example, new journals addressing cutting edge topics have not been around long enough to have been evaluated using this particular tool, and it takes a few years for articles to begin to be cited in other, later publications.

Beware of pseudo-, illegitimate, misleading, deceptive, and suspicious journals . Another side effect of living in the Age of Information is that almost anyone can circulate almost anything and call it whatever they wish. This goes for “journal” publications, as well. With the advent of open-access publishing in recent years (electronic resources available without subscription), we have seen an explosion of what are called predatory or junk journals . These are publications calling themselves journals, often with titles very similar to legitimate publications and often with fake editorial boards. These “publications” lack the integrity of legitimate journals. This caution is reminiscent of the discussions earlier in the course about pseudoscience and “snake oil” sales. The predatory nature of many apparent information dissemination outlets has to do with how scientists and scholars may be fooled into submitting their work, often paying to have their work peer-reviewed and published. There exists a “thriving black-market economy of publishing scams,” and at least two “journal blacklists” exist to help identify and avoid these scam journals (Anderson, 2017).

This issue is important to information consumers, because it creates a challenge in terms of identifying legitimate sources and publications. The challenge is particularly important to address when information from on-line, open-access journals is being considered. Open-access is not necessarily a poor choice—legitimate scientists may pay sizeable fees to legitimate publishers to make their work freely available and accessible as open-access resources. On-line access is also not necessarily a poor choice—legitimate publishers often make articles available on-line to provide timely access to the content, especially when publishing the article in hard copy will be delayed by months or even a year or more. On the other hand, stating that a journal engages in a peer-review process is no guarantee of quality—this claim may or may not be truthful. Pseudo- and junk journals may engage in some quality control practices, but may lack attention to important quality control processes, such as managing conflict of interest, reviewing content for objectivity or quality of the research conducted, or otherwise failing to adhere to industry standards (Laine & Winker, 2017).

One resource designed to assist with the process of deciphering legitimacy is the Directory of Open Access Journals (DOAJ). The DOAJ is not a comprehensive listing of all possible legitimate open-access journals, and does not guarantee quality, but it does help identify legitimate sources of information that are openly accessible and meet basic legitimacy criteria. It also is about open-access journals, not the many journals published in hard copy.

An additional caution: Search for article corrections. Despite all of the careful manuscript review and editing, sometimes an error appears in a published article. Most journals have a practice of publishing corrections in future issues. When you locate an article, it is helpful to also search for updates. Here is an example where data presented in an article’s original tables were erroneous, and a correction appeared in a later issue.

  • Marchant, A., Hawton, K., Stewart, A., Montgomery, P., Singaravelu, V., Lloyd, K., Purdy, N., Daine, K., & John, A. (2017). A systematic review of the relationship between internet use, self-harm and suicidal behaviour in young people: The good, the bad and the unknown. PLoS One, 12(8): e0181722. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5558917/
  • Marchant, A., Hawton, K., Stewart, A., Montgomery, P., Singaravelu, V., Lloyd, K., Purdy, N., Daine, K., & John, A. (2018). Correction: A systematic review of the relationship between internet use, self-harm and suicidal behaviour in young people: The good, the bad and the unknown. PLoS One, 13(3): e0193937. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0193937

Search Tools. In this age of information, it is all too easy to find items—the problem lies in sifting, sorting, and managing the vast numbers of items that can be found. For example, a simple Google® search for the topic “community poverty and violence” returned about 15,600,000 results! As a means of simplifying the process of searching for journal articles on a specific topic, a variety of helpful tools have emerged. One type of search tool has already applied a filtering process for you: abstracting and indexing databases. These resources provide the user with search results in which records have already passed through one or more filters. For example, PsycINFO is managed by the American Psychological Association and is devoted to peer-reviewed literature in behavioral science. It contains almost 4.5 million records and is growing every month. However, it may not be available to users who are not affiliated with a university library. Conducting a basic search for our topic of “community poverty and violence” in PsycINFO returned 1,119 articles. Still a large number, but far more manageable. Additional filters can be applied, such as limiting the range of publication dates, selecting only peer reviewed items, limiting the language of the published piece (English only, for example), and specifying types of documents (e.g., only book chapters, dissertations, or journal articles). Adding the filters for English, peer-reviewed journal articles published between 2010 and 2017 resulted in 346 documents being identified.
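The successive filtering steps described above can be sketched in code. The record fields and values below are hypothetical stand-ins for the kinds of limiters a database such as PsycINFO offers, not its actual interface:

```python
# Hypothetical search results; each record carries the fields the filters act on.
records = [
    {"title": "A", "year": 2012, "language": "English", "peer_reviewed": True,  "doc_type": "journal article"},
    {"title": "B", "year": 2008, "language": "English", "peer_reviewed": True,  "doc_type": "journal article"},
    {"title": "C", "year": 2015, "language": "Spanish", "peer_reviewed": True,  "doc_type": "journal article"},
    {"title": "D", "year": 2016, "language": "English", "peer_reviewed": False, "doc_type": "dissertation"},
]

# Apply the filters named in the text: English, peer-reviewed journal articles, 2010-2017.
filtered = [
    r for r in records
    if 2010 <= r["year"] <= 2017
    and r["language"] == "English"
    and r["peer_reviewed"]
    and r["doc_type"] == "journal article"
]
print([r["title"] for r in filtered])  # ['A']
```

Each added condition shrinks the result set, which is exactly how 1,119 hits were narrowed to 346 in the example above.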

Just as was the case with journals, not all abstracting and indexing databases are equivalent. There may be overlap between them, but none is guaranteed to identify all relevant pieces of literature. Here are some examples to consider, depending on the nature of the questions asked of the literature:

  • Academic Search Complete—multidisciplinary index of 9,300 peer-reviewed journals
  • AgeLine—multidisciplinary index of aging-related content for over 600 journals
  • Campbell Collaboration—systematic reviews in education, crime and justice, social welfare, international development
  • Google Scholar—broad search tool for scholarly literature across many disciplines
  • MEDLINE/PubMed—the National Library of Medicine’s index, with access to over 15 million citations
  • Oxford Bibliographies—annotated bibliographies, each is discipline specific (e.g., psychology, childhood studies, criminology, social work, sociology)
  • PsycINFO/PsycLIT—international literature on material relevant to psychology and related disciplines
  • SocINDEX—publications in sociology
  • Social Sciences Abstracts—multiple disciplines
  • Social Work Abstracts—many areas of social work are covered
  • Web of Science—a “meta” search tool that searches other search tools, multiple disciplines

Placing our search for information about “community violence and poverty” into the Social Work Abstracts tool with no additional filters resulted in a manageable 54-item list. Finally, abstracting and indexing databases offer another way to gauge journal legitimacy: if a journal is indexed in one of these systems, it is likely a legitimate journal. However, the converse is not necessarily true: the fact that a journal is not indexed does not mean it is an illegitimate or pseudo-journal.

Government Sources. A great deal of information is gathered, analyzed, and disseminated by various governmental branches at the international, national, state, regional, county, and city level. Searching websites that end in .gov is one way to identify this type of information, often presented in articles, news briefs, and statistical reports. These government sources gather information in two ways: they fund external investigations through grants and contracts, and they conduct research internally through their own investigators. Here are some examples to consider, depending on the nature of the topic for which information is sought:

  • Agency for Healthcare Research and Quality (AHRQ) at https://www.ahrq.gov/
  • Bureau of Justice Statistics (BJS) at https://www.bjs.gov/
  • Census Bureau at https://www.census.gov
  • Morbidity and Mortality Weekly Report of the CDC (MMWR-CDC) at https://www.cdc.gov/mmwr/index.html
  • Child Welfare Information Gateway at https://www.childwelfare.gov
  • Children’s Bureau/Administration for Children & Families at https://www.acf.hhs.gov
  • Forum on Child and Family Statistics at https://www.childstats.gov
  • National Institutes of Health (NIH) at https://www.nih.gov , including (not limited to):
  • National Institute on Aging (NIA) at https://www.nia.nih.gov
  • National Institute on Alcohol Abuse and Alcoholism (NIAAA) at https://www.niaaa.nih.gov
  • National Institute of Child Health and Human Development (NICHD) at https://www.nichd.nih.gov
  • National Institute on Drug Abuse (NIDA) at https://www.nida.nih.gov
  • National Institute of Environmental Health Sciences at https://www.niehs.nih.gov
  • National Institute of Mental Health (NIMH) at https://www.nimh.nih.gov
  • National Institute on Minority Health and Health Disparities at https://www.nimhd.nih.gov
  • National Institute of Justice (NIJ) at https://www.nij.gov
  • Substance Abuse and Mental Health Services Administration (SAMHSA) at https://www.samhsa.gov/
  • United States Agency for International Development at https://usaid.gov

Each state and many counties or cities have similar data sources and analysis reports available, such as Ohio Department of Health at https://www.odh.ohio.gov/healthstats/dataandstats.aspx and Franklin County at https://statisticalatlas.com/county/Ohio/Franklin-County/Overview . Data are available from international/global resources (e.g., United Nations and World Health Organization), as well.

Other Sources. The Health and Medicine Division (HMD) of the National Academies—previously the Institute of Medicine (IOM)—is a nonprofit institution that aims to provide government and private sector policy and other decision makers with objective analysis and advice for making informed health decisions. For example, in 2018 they produced reports on topics in substance use and mental health concerning the intersection of opioid use disorder and infectious disease, the legal implications of emerging neurotechnologies, and a global agenda concerning the identification and prevention of violence (see http://www.nationalacademies.org/hmd/Global/Topics/Substance-Abuse-Mental-Health.aspx ). The exciting aspect of this resource is that it addresses many topics of current concern, since its reports aim to help inform emerging policy. The caution to consider is that the evidence it reports is often still emerging, as well.

Numerous “think tank” organizations exist, each with a specific mission. For example, the Rand Corporation is a nonprofit organization offering research and analysis to address global issues since 1948. The institution’s mission is to help improve policy and decision making “to help individuals, families, and communities throughout the world be safer and more secure, healthier and more prosperous,” addressing issues of energy, education, health care, justice, the environment, international affairs, and national security (https://www.rand.org/about/history.html). And, for example, the Robert Wood Johnson Foundation is a philanthropic organization supporting research and research dissemination concerning health issues facing the United States. The foundation works to build a culture of health across systems of care (not only medical care) and communities (https://www.rwjf.org).

While many of these have a great deal of helpful evidence to share, they also may have a strong political bias. Objectivity is often lacking in what information these organizations provide: they provide evidence to support certain points of view. That is their purpose—to provide ideas on specific problems, many of which have a political component. Think tanks “are constantly researching solutions to a variety of the world’s problems, and arguing, advocating, and lobbying for policy changes at local, state, and federal levels” (quoted from https://thebestschools.org/features/most-influential-think-tanks/ ). Helpful information about what this one source identified as the 50 most influential U.S. think tanks includes identifying each think tank’s political orientation. For example, The Heritage Foundation is identified as conservative, whereas Human Rights Watch is identified as liberal.

While not the same as think tanks, many mission-driven organizations also sponsor or report on research, as well. For example, the National Association for Children of Alcoholics (NACOA) in the United States is a registered nonprofit organization. Its mission, along with other partnering organizations, private-sector groups, and federal agencies, is to promote policy and program development in research, prevention and treatment to provide information to, for, and about children of alcoholics (of all ages). Based on this mission, the organization supports knowledge development and information gathering on the topic and disseminates information that serves the needs of this population. While this is a worthwhile mission, there is no guarantee that the information meets the criteria for evidence with which we have been working. Evidence reported by think tank and mission-driven sources must be utilized with a great deal of caution and critical analysis!

In many instances an empirical report has not appeared in the published literature, but in the form of a technical or final report to the agency or program providing the funding for the research that was conducted. One such example is presented by a team of investigators funded by the National Institute of Justice to evaluate a program for training professionals to collect strong forensic evidence in instances of sexual assault (Patterson, Resko, Pierce-Weeks, & Campbell, 2014): https://www.ncjrs.gov/pdffiles1/nij/grants/247081.pdf . Investigators may serve in the capacity of consultant to agencies, programs, or institutions, and provide empirical evidence to inform activities and planning. One such example is presented by Maguire-Jack (2014) as a report to a state’s child maltreatment prevention board: https://preventionboard.wi.gov/Documents/InvestmentInPreventionPrograming_Final.pdf .

When Direct Answers to Questions Cannot Be Found. Sometimes social workers are interested in finding answers to complex questions or questions related to an emerging, not-yet-understood topic. This does not mean giving up on empirical literature. Instead, it requires a bit of creativity in approaching the literature. A Venn diagram might help explain this process. Consider a scenario where a social worker wishes to locate literature to answer a question concerning issues of intersectionality. Intersectionality is a social justice term applied to situations where multiple categorizations or classifications come together to create overlapping, interconnected, or multiplied disadvantage. For example, women with a substance use disorder and who have been incarcerated face a triple threat in terms of successful treatment for a substance use disorder: intersectionality exists between being a woman, having a substance use disorder, and having been in jail or prison. After searching the literature, little or no empirical evidence might have been located on this specific triple-threat topic. Instead, the social worker will need to seek literature on each of the threats individually, and possibly will find literature on pairs of topics (see Figure 3-1). There exists some literature about women’s outcomes for treatment of a substance use disorder (a), some literature about women during and following incarceration (b), and some literature about substance use disorders and incarceration (c). Despite not having a direct line on the center of the intersecting spheres of literature (d), the social worker can develop at least a partial picture based on the overlapping literatures.

Figure 3-1. Venn diagram of intersecting literature sets.
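The pairwise strategy shown in Figure 3-1 can be sketched with set operations. The article identifiers below are hypothetical; real searches would return bibliographic records, but the logic of intersecting result sets is the same:

```python
# Hypothetical article-ID sets for each single-topic search (labels follow Figure 3-1).
women_sud = {"a1", "a2", "a3"}            # (a) women + substance use disorder treatment
women_incarceration = {"a2", "b1", "b2"}  # (b) women + incarceration
sud_incarceration = {"a3", "b2", "c1"}    # (c) substance use disorders + incarceration

# (d) the triple intersection -- often empty for an emerging "triple threat" topic.
direct_hits = women_sud & women_incarceration & sud_incarceration
print(direct_hits)  # set() -- no article addresses all three at once

# Pairwise overlaps still offer a partial picture when (d) is empty.
partial = (
    (women_sud & women_incarceration)
    | (women_sud & sud_incarceration)
    | (women_incarceration & sud_incarceration)
)
print(sorted(partial))  # ['a2', 'a3', 'b2']
```

The social worker's partial picture is precisely this union of pairwise intersections: literature that speaks to two of the three threats at a time.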



Social Work 3401 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Literature Review

  • What is a Literature Review?
  • What is NOT a Literature Review?
  • Purposes of a Literature Review
  • Types of Literature Reviews
  • Literature Reviews vs. Systematic Reviews
  • Systematic vs. Meta-Analysis

A Literature Review is a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works.

Also, we can define a literature review as the collected body of scholarly works related to a topic:

  • Summarizes and analyzes previous research relevant to a topic
  • Includes scholarly books and articles published in academic journals
  • Can be a specific scholarly paper or a section in a research paper

The objective of a literature review is to find previously published scholarly works relevant to a specific topic, in order to:

  • Help gather ideas or information
  • Keep up to date on current trends and findings
  • Help develop new questions

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Helps focus your own research questions or problems.
  • Discovers relationships between research studies/ideas.
  • Suggests unexplored ideas or populations.
  • Identifies major themes, concepts, and researchers on a topic.
  • Tests assumptions; may help counter preconceived ideas and remove unconscious bias.
  • Identifies critical gaps, points of disagreement, or potentially flawed methodology or theoretical approaches.
  • Indicates potential directions for future research.

All content in this section is from Literature Review Research from Old Dominion University 

Keep in mind the following, a literature review is NOT:

Not an essay.

Not an annotated bibliography, in which you summarize each article that you have reviewed. A literature review goes beyond basic summarizing to focus on the critical analysis of the reviewed works and their relationship to your research question.

Not a research paper, where you select resources to support one side of an issue versus another. A lit review should explain and consider all sides of an argument in order to avoid bias, and areas of agreement and disagreement should be highlighted.

A literature review serves several purposes. For example, it

  • provides thorough knowledge of previous studies; introduces seminal works.
  • helps focus one’s own research topic.
  • identifies a conceptual framework for one’s own research questions or problems; indicates potential directions for future research.
  • suggests previously unused or underused methodologies, designs, quantitative and qualitative strategies.
  • identifies gaps in previous studies; identifies flawed methodologies and/or theoretical approaches; avoids replication of mistakes.
  • helps the researcher avoid repetition of earlier research.
  • suggests unexplored populations.
  • determines whether past studies agree or disagree; identifies controversy in the literature.
  • tests assumptions; may help counter preconceived ideas and remove unconscious bias.

As Kennedy (2007) notes*, it is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the original studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally and become part of the lore of the field. In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews.

Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are several approaches to how they can be done, depending upon the type of analysis underpinning your study. Listed below are definitions of types of literature reviews:

Argumentative Review      This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews.

Integrative Review      Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication.

Historical Review      Few things rest in isolation from historical precedent. Historical reviews are focused on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review      A review does not always focus on what someone said [content], but on how they said it [method of analysis]. This approach provides a framework of understanding at different levels (i.e., those of theory, substantive fields, research approaches, and data collection and analysis techniques). It enables researchers to draw on a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in areas such as ontological and epistemological consideration, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. It also helps highlight many ethical issues of which we should be aware as we go through our study.

Systematic Review      This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyse data from the studies that are included in the review. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?"

Theoretical Review      The purpose of this form is to concretely examine the corpus of theory that has accumulated in regard to an issue, concept, or phenomenon. The theoretical literature review helps establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and it helps develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or to reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or on a whole theory or framework.

* Kennedy, Mary M. "Defining a Literature."  Educational Researcher  36 (April 2007): 139-147.

All content in this section is from The Literature Review created by Dr. Robert Larabee USC

Robinson, P. and Lowe, J. (2015),  Literature reviews vs systematic reviews.  Australian and New Zealand Journal of Public Health, 39: 103-103. doi: 10.1111/1753-6405.12393


What's in the name? The difference between a Systematic Review and a Literature Review, and why it matters . By Lynn Kysh from University of Southern California


Systematic review or meta-analysis?

A  systematic review  answers a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria.

A  meta-analysis  is the use of statistical methods to summarize the results of these studies.

Systematic reviews, just like other research articles, can be of varying quality. They are a significant piece of work (the Centre for Reviews and Dissemination at York estimates that a team will take 9-24 months), and to be useful to other researchers and practitioners they should have:

  • clearly stated objectives with pre-defined eligibility criteria for studies
  • explicit, reproducible methodology
  • a systematic search that attempts to identify all studies
  • assessment of the validity of the findings of the included studies (e.g. risk of bias)
  • systematic presentation, and synthesis, of the characteristics and findings of the included studies

Not all systematic reviews contain meta-analysis. 

Meta-analysis is the use of statistical methods to summarize the results of independent studies. By combining information from all relevant studies, meta-analysis can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.  More information on meta-analyses can be found in  Cochrane Handbook, Chapter 9 .

A meta-analysis goes beyond critique and integration and conducts secondary statistical analysis on the outcomes of similar studies.  It is a systematic review that uses quantitative methods to synthesize and summarize the results.

An advantage of a meta-analysis is the ability to be completely objective in evaluating research findings.  Not all topics, however, have sufficient research evidence to allow a meta-analysis to be conducted.  In that case, an integrative review is an appropriate strategy. 
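As a concrete (hypothetical) illustration of "statistical methods to summarize the results of independent studies," here is a sketch of one of the simplest approaches, fixed-effect inverse-variance pooling. The function name and all numbers are invented for illustration and are not drawn from any of the guides cited here:

```python
# A minimal sketch of fixed-effect (inverse-variance) meta-analysis:
# each study's effect size is weighted by the inverse of its variance,
# so larger, more precise studies contribute more to the pooled estimate.

def pooled_effect(effects, variances):
    """Return the inverse-variance-weighted mean effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three hypothetical studies; the middle study is the most precise.
estimate, pooled_var = pooled_effect([0.30, 0.50, 0.10], [0.04, 0.01, 0.09])
# The pooled estimate (~0.43) sits closest to the most precise study's 0.50.
```

This is only the simplest fixed-effect model; in practice, random-effects models that account for between-study heterogeneity are often preferred, as discussed in the Cochrane Handbook chapter mentioned above.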

Some of the content in this section is from Systematic reviews and meta-analyses: step by step guide created by Kate McAllister.

  • Last Updated: Jul 15, 2024 10:34 AM
  • URL: https://guides.lib.udel.edu/researchmethods

Frequently asked questions

What is the difference between a literature review and a theoretical framework?

A literature review and a theoretical framework are not the same thing and cannot be used interchangeably. While a theoretical framework describes the theoretical underpinnings of your work, a literature review critically evaluates existing research relating to your topic. You’ll likely need both in your dissertation .

Frequently asked questions: Dissertation

Dissertation word counts vary widely across different fields, institutions, and levels of education:

  • An undergraduate dissertation is typically 8,000–15,000 words
  • A master’s dissertation is typically 12,000–50,000 words
  • A PhD thesis is typically book-length: 70,000–100,000 words

However, none of these are strict guidelines – your word count may be lower or higher than the numbers stated here. Always check the guidelines provided by your university to determine how long your own dissertation should be.

A dissertation prospectus or proposal describes what or who you plan to research for your dissertation. It delves into why, when, where, and how you will do your research, as well as helps you choose a type of research to pursue. You should also determine whether you plan to pursue qualitative or quantitative methods and what your research design will look like.

It should outline all of the decisions you have taken about your project, from your dissertation topic to your hypotheses and research objectives , ready to be approved by your supervisor or committee.

Note that some departments require a defense component, where you present your prospectus to your committee orally.

A thesis is typically written by students finishing a bachelor's or master's degree. Some educational institutions, particularly in the liberal arts, require a thesis, but one is often optional for bachelor's degrees; a thesis is more commonly a graduation requirement for a master's degree.

Even if not mandatory, you may want to consider writing a thesis if you:

  • Plan to attend graduate school soon
  • Have a particular topic you’d like to study more in-depth
  • Are considering a career in research
  • Would like a capstone project to tie together your academic experience

The conclusion of your thesis or dissertation should include the following:

  • A restatement of your research question
  • A summary of your key arguments and/or results
  • A short discussion of the implications of your research

The conclusion of your thesis or dissertation shouldn’t take up more than 5–7% of your overall word count.

For a stronger dissertation conclusion , avoid including:

  • Important evidence or analysis that wasn’t mentioned in the discussion section and results section
  • Generic concluding phrases (e.g. “In conclusion …”)
  • Weak statements that undermine your argument (e.g., “There are good points on both sides of this issue.”)

Your conclusion should leave the reader with a strong, decisive impression of your work.

While it may be tempting to present new arguments or evidence in your thesis or dissertation conclusion , especially if you have a particularly striking argument you’d like to finish your analysis with, you shouldn’t. Theses and dissertations follow a more formal structure than this.

All your findings and arguments should be presented in the body of the text (more specifically, in the discussion section and results section ). The conclusion is meant to summarize and reflect on the evidence and arguments you have already presented, not introduce new ones.

A theoretical framework can sometimes be integrated into a  literature review chapter , but it can also be included as its own chapter or section in your dissertation . As a rule of thumb, if your research involves dealing with a lot of complex theories, it’s a good idea to include a separate theoretical framework chapter.

While a theoretical framework describes the theoretical underpinnings of your work based on existing research, a conceptual framework allows you to draw your own conclusions, mapping out the variables you may use in your study and the interplay between them.

A thesis or dissertation outline is one of the most critical first steps in your writing process. It helps you to lay out and organize your ideas and can provide you with a roadmap for deciding what kind of research you’d like to undertake.

Generally, an outline contains information on the different sections included in your thesis or dissertation , such as:

  • Your anticipated title
  • Your abstract
  • Your chapters (sometimes subdivided into further topics like literature review , research methods , avenues for future research, etc.)

When you mention different chapters within your text, it’s considered best to use Roman numerals for most citation styles. However, the most important thing here is to remain consistent whenever using numbers in your dissertation .

In most styles, the title page is used purely to provide information and doesn’t include any images. Ask your supervisor if you are allowed to include an image on the title page before doing so. If you do decide to include one, make sure to check whether you need permission from the creator of the image.

Include a note directly beneath the image acknowledging where it comes from, beginning with the word “ Note .” (italicized and followed by a period). Include a citation and copyright attribution . Don’t title, number, or label the image as a figure , since it doesn’t appear in your main text.

Definitional terms often fall into the category of common knowledge , meaning that they don’t necessarily have to be cited. This guidance can apply to your thesis or dissertation glossary as well.

However, if you’d prefer to cite your sources , you can follow guidance for citing dictionary entries in MLA or APA style for your glossary.

A glossary is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. In contrast, an index is a list of the contents of your work organized by page number.

The title page of your thesis or dissertation goes first, before all other content or lists that you may choose to include.

The title page of your thesis or dissertation should include your name, department, institution, degree program, and submission date.

Glossaries are not mandatory, but if you use a lot of technical or field-specific terms, it may improve readability to add one to your thesis or dissertation. Your educational institution may also require them, so be sure to check their specific guidelines.

A glossary or “glossary of terms” is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. Your glossary only needs to include terms that your reader may not be familiar with, and is intended to enhance their understanding of your work.

A glossary is a collection of words pertaining to a specific topic. In your thesis or dissertation, it’s a list of all terms you used that may not immediately be obvious to your reader. In contrast, dictionaries are more general collections of words.

An abbreviation is a shortened version of an existing word, such as Dr. for Doctor. In contrast, an acronym uses the first letter of each word to create a wholly new word, such as UNESCO (an acronym for the United Nations Educational, Scientific and Cultural Organization).

As a rule of thumb, write the explanation in full the first time you use an acronym or abbreviation. You can then proceed with the shortened version. However, if the abbreviation is very common (like PC, USA, or DNA), then you can use the abbreviated version from the get-go.

Be sure to add each abbreviation in your list of abbreviations !

If you only used a few abbreviations in your thesis or dissertation , you don’t necessarily need to include a list of abbreviations .

If your abbreviations are numerous, or if you think they won’t be known to your audience, it’s never a bad idea to add one. They can also improve readability, minimizing confusion about abbreviations unfamiliar to your reader.

A list of abbreviations is a list of all the abbreviations that you used in your thesis or dissertation. It should appear at the beginning of your document, with items in alphabetical order, just after your table of contents .

Your list of tables and figures should go directly after your table of contents in your thesis or dissertation.

Lists of figures and tables are often not required, and aren’t particularly common. They specifically aren’t required in APA Style, though you should be careful to follow its other guidelines for figures and tables .

If you have many figures and tables in your thesis or dissertation, including one may help you stay organized. Your educational institution may also require them, so be sure to check their guidelines.

A list of figures and tables compiles all of the figures and tables that you used in your thesis or dissertation and displays them with the page number where they can be found.

The table of contents in a thesis or dissertation always goes between your abstract and your introduction .

You may acknowledge God in your dissertation acknowledgements , but be sure to follow academic convention by also thanking the members of academia, as well as family, colleagues, and friends who helped you.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An  annotated bibliography is a list of  source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a  paper .  

In a thesis or dissertation, the discussion is an in-depth exploration of the results, going into detail about the meaning of your findings and citing relevant sources to put them in context.

The conclusion is shorter and more general: it concisely answers your main research question and makes recommendations based on your overall findings.

In the discussion , you explore the meaning and relevance of your research results , explaining how they fit with existing research and theory. Discuss:

  • Your interpretations : what do the results tell us?
  • The implications : why do the results matter?
  • The limitations : what can’t the results tell us?

The results chapter or section simply and objectively reports what you found, without speculating on why you found these results. The discussion interprets the meaning of the results, puts them in context, and explains why they matter.

In qualitative research , results and discussion are sometimes combined. But in quantitative research , it’s considered important to separate the objective results from your interpretation of them.

Results are usually written in the past tense , because they are describing the outcome of completed actions.

The results chapter of a thesis or dissertation presents your research results concisely and objectively.

In quantitative research , for each question or hypothesis , state:

  • The type of analysis used
  • Relevant results in the form of descriptive and inferential statistics
  • Whether or not the alternative hypothesis was supported

In qualitative research , for each question or theme, describe:

  • Recurring patterns
  • Significant or representative individual responses
  • Relevant quotations from the data

Don’t interpret or speculate in the results chapter.

To automatically insert a table of contents in Microsoft Word, follow these steps:

  • Apply heading styles throughout the document.
  • On the References tab of the ribbon, locate the Table of Contents group.
  • Click the arrow next to the Table of Contents icon and select Custom Table of Contents.
  • Select which levels of headings you would like to include in the table of contents.

Make sure to update your table of contents if you move text or change headings. To update, simply right click and select Update Field.

All level 1 and 2 headings should be included in your table of contents . That means the titles of your chapters and the main sections within them.

The contents should also include all appendices and the lists of tables and figures, if applicable, as well as your reference list .

Do not include the acknowledgements or abstract in the table of contents.

The abstract appears on its own page in the thesis or dissertation , after the title page and acknowledgements but before the table of contents .

An abstract for a thesis or dissertation is usually around 200–300 words. There’s often a strict word limit, so make sure to check your university’s requirements.

In a thesis or dissertation, the acknowledgements should usually be no longer than one page. There is no minimum length.

The acknowledgements are generally included at the very beginning of your thesis , directly after the title page and before the abstract .

Yes, it’s important to thank your supervisor(s) in the acknowledgements section of your thesis or dissertation .

Even if you feel your supervisor did not contribute greatly to the final product, you must acknowledge them, if only for a very brief thank you. If you do not include your supervisor, it may be seen as a snub.

In the acknowledgements of your thesis or dissertation, you should first thank those who helped you academically or professionally, such as your supervisor, funders, and other academics.

Then you can include personal thanks to friends, family members, or anyone else who supported you during the process.



Literature Review: Types of Literature Reviews


Types of Literature Reviews


It is important to think of knowledge in a given field as consisting of three layers.

  • First, there are the primary studies that researchers conduct and publish.
  • Second, there are the reviews of those studies that summarize and offer new interpretations built from, and often extending beyond, the original studies.
  • Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally that become part of the lore of the field.

In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews.

Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are several approaches to how they can be done, depending upon the type of analysis underpinning your study. Listed below are definitions of types of literature reviews:

Argumentative Review      This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews.

Integrative Review      Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication.

Historical Review      Few things rest in isolation from historical precedent. Historical reviews focus on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify likely directions for future research.

Methodological Review      A review does not always focus on what someone said [content] but on how they said it [method of analysis]. This approach provides a framework of understanding at different levels (i.e., theory, substantive fields, research approaches, and data collection and analysis techniques). It enables researchers to draw on a wide range of knowledge, from the conceptual level to practical documents for use in fieldwork, in areas such as ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. It also helps highlight many ethical issues that we should be aware of and consider throughout a study.

Systematic Review      This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyze data from the studies that are included in the review. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?"

Theoretical Review      The purpose of this form is to concretely examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and it can be used to develop new hypotheses to be tested. Often this form is used to establish a lack of appropriate theories or to reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

* Kennedy, Mary M. "Defining a Literature." Educational Researcher 36 (April 2007): 139-147.

All content in this section is from The Literature Review, created by Dr. Robert Larabee, USC.

  • Last Updated: Oct 19, 2023 12:07 PM
  • URL: https://uscupstate.libguides.com/Literature_Review


What is an Empirical Study?


An empirical article reports the findings of a study conducted by the authors and uses data gathered from an experiment or observation. An empirical study is verifiable and "based on facts, systematic observation, or experiment, rather than theory or general philosophical principle" ( APA Databases Methodology Field Values ). In other words, it tells the story of a piece of research in detail. The study may utilize quantitative research methods to produce numerical data and seek a causal relationship between two or more variables. Conversely, it may use qualitative research methods, which involve collecting non-numerical data to analyze concepts, opinions, or experiences.

Key parts of an empirical article:

  • Abstract  - Provides a brief overview of the research.
  • Introduction  - The introduction provides a review of previous research on the topic and states the hypothesis. 
  • Methods  - The methods area describes how the research was conducted, identifies the design of the study, the participants, and any measurements that were taken during the study.
  • Results  - The results section describes the outcome of the study. 
  • Discussion (or conclusion)  - The discussion section addresses the researchers' interpretations of their study and any future implications from their findings.
  • References  - A list of works that were cited in the study.

A review of the published resources related to a specific issue, area of research, or specific theory. It provides a summary, description, and critical evaluation of each resource.

A literature review:

  •  Synthesizes and places into context the research and scholarly literature relevant to the topic.
  • Maps the different approaches to a given question and reveals patterns.
  • Forms the foundation for subsequent research 
  • Justifies the significance of the new investigation.
  • Contains the most pertinent studies and points to important past and current research and practices.

A Lit. Review provides background and context; it shows how your research will contribute to the field. 

There are generally five parts to a literature review:

  • Introduction
  • Bibliography

A literature review should: 

  • Provide a comprehensive and updated review of the literature
  • Explain why this review has taken place
  • Articulate a position or hypothesis
  • Acknowledge and account for conflicting and corroborating points of view


A lit. review's purpose is to offer an overview of the significant works published on a topic. It can be written as an introduction to a study in order to:

  • Demonstrate how a study fills a gap in research
  • Compare a study with other research that's been done

It could be a separate work (a research article on its own) that:

  • Organizes or describes a topic
  • Describes variables within a particular issue/problem

Some limitations of a literature review include:

  • It's a snapshot in time. Unlike other reviews, this one has a beginning, a middle, and an end. Future developments could make your work less relevant.
  • It may be too focused. Some niche studies may miss the bigger picture.
  • It can be difficult to be comprehensive. There is no way to ensure that all the literature on a topic was considered.
  • It is easy to be biased if you stick to top tier journals. There may be other places where people are publishing exemplary research. Look to open access publications and conferences to reflect a more inclusive collection. Also, make sure to include opposing views (and not just supporting evidence).

Non-Empirical Research articles focus more on theories, methods, and their implications for research. Non-Empirical Research can include comprehensive reviews and articles that focus on methodology. These articles draw on the empirical research literature as well, but they do not need to be data-driven themselves.

Write a Literature Review (UCSC)

  • Literature Review (Purdue)
  • Overview: Lit Reviews (UNC)
  • Review of Literature (UW-Madison)
  • Last Updated: Aug 15, 2024 7:09 PM
  • URL: https://libguides.macalester.edu/psyc


Navigating Spatial Ability for Mathematics Education: a Review and Roadmap

  • REVIEW ARTICLE
  • Open access
  • Published: 17 August 2024
  • Volume 36 , article number  90 , ( 2024 )




  • Kelsey E. Schenck   ORCID: orcid.org/0000-0002-3777-2085 1 &
  • Mitchell J. Nathan   ORCID: orcid.org/0000-0003-2058-7016 2  

Spatial skills can predict mathematics performance, with many researchers investigating how and why these skills are related. However, a literature review on spatial ability revealed a multiplicity of spatial taxonomies and analytical frameworks that lack convergence, presenting a confusing terrain for researchers to navigate. We expose two central challenges: (1) many of the ways spatial ability is defined and subdivided are often not based in well-evidenced theoretical and analytical frameworks, and (2) the sheer variety of spatial assessments. These challenges impede progress in designing spatial skills interventions for improving mathematics thinking based on causal principles, selecting appropriate metrics for documenting change, and analyzing and interpreting student outcome data. We offer solutions by providing a practical guide for navigating and selecting among the various major spatial taxonomies and instruments used in mathematics education research. We also identify current limitations of spatial ability research and suggest future research directions.


Introduction

Spatial ability can be broadly defined as imagining, maintaining, and manipulating spatial information and relations. Over the past several decades, researchers have found reliable associations between spatial abilities and mathematics performance (e.g., Newcombe, 2013 ; Young et al., 2018a ). However, the sheer plurality of spatial taxonomies and analytical frameworks that scholars use to describe spatial skills, the lack of theoretical spatial taxonomies, and the variety of spatial assessments available make it very difficult for education researchers to select appropriate spatial measures for their investigations. Education researchers also face the daunting task of selecting the ideal spatial skills to design studies and interventions to enhance student learning and the development of reasoning in STEM (science, technology, engineering, and mathematics) more broadly. To address these needs, we have provided a review that focuses on the relationship between spatial skills and mathematical thinking and learning. Our specific contribution is to offer a guide for educational researchers who recognize the importance of measuring spatial skills but who are themselves not spatial skills scholars. This guide will help researchers navigate and select among the various major taxonomies on spatial reasoning and among the various instruments for assessing spatial skills for use in mathematics education research.

We offer three central objectives for this paper. First, we aim to provide an updated review of the ways spatial ability is defined and subdivided. Second, we list some of the currently most widely administered instruments used to measure subcomponents of spatial ability. Third, we propose an organizational framework that acknowledges this complex picture and — rather than offer overly optimistic proposals for resolving long-standing complexities — offers ways for math education researchers to operate within this framework from an informed perspective. This review offers guidance through this complicated state of the literature to help STEM education researchers select appropriate spatial measures and taxonomies for their investigations, assessments, and interventions. We review and synthesize several lines of the spatial ability literature and provide researchers exploring the link between spatial ability and mathematics education with a guiding framework for research design. To foreshadow, this framework identifies three major design decisions that can help guide scholars and practitioners seeking to use spatial skills to enhance mathematics education research. The framework provides a theoretical basis to select: (1) a spatial ability taxonomy, (2) corresponding analytical frameworks, and (3) spatial tasks for assessing spatial performance (Fig.  1 ). This guiding framework is intended to provide educational researchers and practitioners with a common language and decision-making process for conducting research and instruction that engages learners’ spatial abilities. The intent is that investigators’ use of this framework may enhance their understanding of the associative and causal links between spatial and mathematical abilities, and thereby improve the body of mathematics education research and practice.

figure 1

Major elements of an investigation into the role of spatial reasoning

The Importance of Spatial Reasoning for Mathematics and STEM Education

Spatial ability has been linked to the entrance into, retention in, and success within STEM fields (e.g., Shea et al., 2001 ; Wolfgang et al., 2003 ), while deficiencies in spatial abilities have been shown to create obstacles for STEM education (Harris et al., 2013 ; Wai et al., 2009 ). Although spatial skills are not typically taught in the general K-16 curriculum, these lines of research have led some scholars to make policy recommendations for explicitly teaching children about spatial thinking as a viable way to increase STEM achievement and retention in STEM education programs and career pathways (Sorby, 2009 ; Stieff & Uttal, 2015 ). Combined, the findings suggest that spatial ability serves as a gateway for entry into STEM fields (Uttal & Cohen, 2012 ) and that educational institutions should consider the importance of explicitly training students’ spatial thinking skills as a way to further develop students’ STEM skills.

Findings from numerous studies have demonstrated that spatial ability is critical for many domains of mathematics education, including basic numeracy and arithmetic (Case et al., 1996 ; Gunderson et al., 2012 ; Hawes et al., 2015 ; Tam et al., 2019 ) and geometry (Battista et al., 2018 ; Davis, 2015 ), as well as more advanced topics such as algebra word problem-solving (Oostermeijer et al., 2014 ), calculus (Sorby et al., 2013 ), and interpreting complex quantitative relationships (Tufte, 2001 ). For example, scores on the mathematics portion of the Program for International Student Assessment (PISA) are significantly positively correlated with scores on tests of spatial cognition (Sorby & Panther, 2020 ). Broadly, studies have found evidence of the connections between success on spatial tasks and mathematics tasks in children and adults. For example, first grade girls’ spatial skills were correlated with the frequency of retrieval and decomposition strategies when solving arithmetic problems (Laski et al., 2013 ), and these early spatial ability scores were the strongest predictors of their sixth-grade mathematics reasoning abilities (Casey et al., 2015 ). In adults ( n  = 101), spatial ability scores were positively associated with mathematics abilities measured through PISA mathematics questions (Schenck & Nathan, 2020 ).

Though there is a clear connection between spatial and mathematical abilities, understanding the intricacies of this relationship is difficult. Some scholars have sought to determine which mathematical concepts engage spatial thinking. For example, studies on specific mathematical concepts found spatial skills were associated with children’s one-to-one mapping (Gallistel & Gelman, 1992), missing-term problems (Cheng & Mix, 2014), mental computation (Verdine et al., 2014), and various geometry concepts (Hannafin et al., 2008). Schenck and Nathan (2020) identified associations between several specific sub-components of spatial reasoning and specific mathematics skills of adults. Specifically, adults’ mental rotation skills correlated with performance on questions about change and relationships, spatial orientation skills correlated with quantity questions, and spatial visualization skills correlated with questions about space and shape. Burte and colleagues (2017) proposed categories of mathematical concepts, such as problem type, problem context, and spatial thinking level, to target math improvements following spatial intervention training. Their study concluded that mathematics problems that included visual representations and real-world contexts, and that involved spatial thinking, were more likely to show improvement after embodied spatial training.

However, these lines of work are complicated by the variety of problem-solving strategies students employ when solving mathematics problems and by issues with generalizability. While some students may rely on a specific spatial ability to solve a particular mathematics problem, others may use non-spatial approaches or apply spatial thinking differently for the same assessment item. For example, some students solving graphical geometric problem-solving tasks utilized their spatial skills by constructing and manipulating mental images of the problem, while others created external representations such as isometric sketches, alleviating the need for some aspects of spatial reasoning (Buckley et al., 2019). Though this difference could be attributed to lower spatial abilities in the students who used external representations, it could also be attributed to the high levels of discipline-specific knowledge seen in domains such as geoscience (Hambrick et al., 2012), physics (Kozhevnikov & Thornton, 2006), and chemistry (Stieff, 2007). Though some amount of generalization is needed in spatial and mathematics education research, investigators should take care not to overgeneralize findings about connections between specific spatial abilities and specific mathematical domains.

This selective review shows ample reasons to attend to spatial abilities in mathematics education research and the design of effective interventions. However, studies across this vast body of work investigating the links between spatial abilities and mathematics performance use different spatial taxonomies, employ different spatial measures, and track improvement across many different topics of mathematics education. This variety makes it difficult for mathematics education scholars to draw clear causal lines between specific spatial skills interventions and specific improvements in mathematics education, and for educators to find clear guidance on how to improve mathematical reasoning through spatial skills development.

The Varieties of Approaches to Explaining the Spatial-Mathematics Connection

Meta-analyses have suggested that domain-general reasoning skills such as fluid reasoning and verbal skills may mediate the relationships between spatial and mathematical skills (Atit et al., 2022), and that mathematical domain moderates the relationship, with logical reasoning showing the strongest association with spatial skills (Xie et al., 2020). Despite these efforts, the specific nature of these associations remains largely unknown. Several lines of research have suggested that processing requirements shared among mathematical and spatial tasks could account for these associations. Brain imaging studies have shown similar brain activation patterns in both spatial and mathematics tasks (Amalric & Dehaene, 2016; Hawes & Ansari, 2020; Hubbard et al., 2005; Walsh, 2003). Hawes and Ansari’s (2020) review of psychology, neuroscience, and education spatial research described four possible explanatory accounts (spatial representations of numbers, shared neuronal processing, spatial modeling, and working memory) for how spatial visualization is linked to numerical competencies. They suggest integrating the four accounts into a single underlying mechanism that explains the lasting neural and behavioral correlations between spatial and numerical processes. In a study of spatial and mathematical thinking, Mix et al. (2016) showed a strong within-domain factor structure and overlapping variance irrespective of task-specificity. They proposed that the ability to recognize and decompose objects (i.e., form perception), visualize spatial information, and relate distances in one space to another (i.e., spatial scaling) are shared processes required when individuals perform a range of spatial reasoning and mathematical reasoning tasks.

Efforts to date to document the relationship between mathematics performance and spatial skills, or to enhance mathematics through spatial skills interventions, show significant limitations in their theoretical framing. One significant issue is theoretical: there is currently no commonly accepted definition of spatial ability or its exact sub-components in the literature (Carroll, 1993; Lohman, 1988; McGee, 1979; Michael et al., 1957; Yilmaz, 2009). For example, many studies designed to investigate and improve spatial abilities have tended to focus on either a particular spatial sub-component or a particular mathematical skill. Much of the research has primarily focused on measuring only specific aspects of object-based spatial ability, such as mental rotation. Consequently, there is insufficient guidance for mathematics and STEM education researchers to navigate the vast landscape of spatial taxonomies and analytical frameworks, select the most appropriate measures for documenting student outcomes, design potential interventions targeting spatial abilities, select appropriate metrics, and analyze and interpret outcome data.

One notable program of research that has been particularly attentive to the spatial qualities of mathematical reasoning is the work by Battista et al. (2018). They collected think-aloud data about emerging spatial descriptions from individual interviews and teaching experiments with elementary and middle-grade students to investigate the relationship between spatial reasoning and geometric reasoning. Across several studies, the investigators seldom observed the successful application of generalized object-based spatial skills of the type typically measured by psychometric instruments of spatial ability. Rather, they found that students’ geometric reasoning succeeded when “spatial visualization and spatial analytic reasoning [were] based on operable knowledge of relevant geometric properties of the spatial-geometric objects under consideration” (Battista et al., p. 226; emphasis added). By highlighting the ways that one’s reasoning aligns with geometric properties, Battista and colleagues shifted the analytic focus away from general psychological constructs that can be vague and overly broad, and away from narrow sets of task-specific skills, toward an intermediate level of description that is relevant for characterizing topic- and task-specific performance while identifying forms of reasoning that may generalize beyond the specific tasks and objects at hand. For example, property-based spatial analytic reasoning might focus on an invariant geometric property, such as the property of rectangles that their diagonals always bisect each other, to guide the decomposition and transformation of rectangles and their component triangles in service of a geometric proof. Establishing bridges and analytic distinctions between education domain-centric analyses of this sort and traditional psychometric accounts of domain-general spatial abilities is central to our review and to our broader aim of relating mathematical reasoning processes to spatial processes.

Selecting a Spatial Taxonomy

As noted, a substantial body of empirical evidence indicates that students’ spatial abilities figure into their mathematical reasoning, offering promising pathways toward interventions designed to improve math education. To capitalize on this association, one of the first decisions mathematics education researchers must make is selecting a spatial taxonomy that suits the data collected and analyzed. A spatial taxonomy is an organizational system for classifying spatial abilities and thus serves an important role in shaping the theoretical framework for any inquiry, as well as in interpreting and generalizing findings from empirical investigations. However, the manner in which spatial abilities are subdivided, defined, and named has changed over the decades of research on this topic. In practice, deciding how to define and select spatial abilities is often difficult for researchers who are not specialists, given the expansive literature in this area.

In an attempt to make the vast number of spatial definitions and subcomponents more navigable for mathematics researchers and educators, we describe three general types of spatial taxonomies reflected in the current literature: those that (1) classify according to different specific spatial abilities, (2) distinguish between different broad spatial abilities, or (3) treat spatial abilities as derived from a single, or unitary, factor structure. Although this is not a comprehensive account, these spatial taxonomies were chosen to highlight the main sub-factor dissociations in the literature.

Specific-Factor Structures

Since the earliest conceptualizations (e.g., Galton, 1879), the communities of researchers studying spatial abilities have struggled to converge on one all-encompassing definition of spatial ability or a complete list of its subcomponents. Though the literature provides a variety of definitions of spatial ability that focus on the capacity to visualize and manipulate mental images (e.g., Battista, 2007; Gaughran, 2002; Lohman, 1979; Sorby, 1999), some scholars posit that it may be more precise to define spatial ability as a constellation of quantifiably measurable skills based on performance on tasks that load on specific individual spatial factors (Buckley et al., 2018). Difficulties in directly observing the cognitive processes and neural structures involved in spatial reasoning have, in practice, spurred substantive research focused on uncovering the nature of spatial ability and its subcomponents. Historically, scholars have used psychometric methods to identify a variety of specific spatial subcomponents, including closure flexibility/speed (Carroll, 1993), field dependence/independence (McGee, 1979; Witkin, 1950), spatial relations (Carroll, 1993; Lohman, 1979), spatial orientation (Guilford & Zimmerman, 1948), spatial visualization (Carroll, 1993; McGee, 1979), and speeded rotation (Lohman, 1988). However, attempts to dissociate subfactors were often met with difficulty due to differing factor analytic techniques and variations in the spatial ability tests that were used (D'Oliveira, 2004). The subsequent lack of cohesion in this field of study led different camps of researchers to adopt inconsistent names for spatial subcomponents (Cooper & Mumaw, 1985; McGee, 1979) and divergent factorial frameworks (Hegarty & Waller, 2005; Yilmaz, 2009). Such a lack of convergence is clearly problematic for the scientific study of spatial ability and its application to mathematics education research.

In the last few decades, several attempts have been made to further dissociate subcomponents of spatial ability. Yilmaz (2009) combined aspects of the models described above with studies identifying dynamic spatial abilities and environmental spatial abilities to divide spatial ability into eight factors, acknowledging several spatial skills (e.g., environmental ability and spatiotemporal ability) needed in real-life situations. More recently, Buckley et al. (2018) proposed an extended model of spatial ability. This model combines many ideas from the previously described literature with the spatial factors identified in the Cattell-Horn-Carroll theory of intelligence (see Schneider & McGrew, 2012). It currently includes 25 factors, which can also be divided into the two broader categories of static and dynamic, with the authors acknowledging that additional factors may be added as research warrants. It is unclear how a model with this many subfactors could be practically applied in empirical research, which we regard as an important goal for bridging theory and research practices.

Dissociation Between Spatial Orientation and Rotational Spatial Visualization

Though specific definitions vary, many authors of the models discussed above agree on a dissociation between spatial orientation and visualization skills. While perspective-taking (a subfactor of spatial orientation) and rotational spatial visualization tasks often both involve a form of rotation, several studies have indicated that these skills are psychometrically separable. Measures for these skills often ask participants to anticipate the appearance of arrays of objects after either a rotation of the objects (visualization) or a change in the observer’s viewpoint (perspective-taking). Findings show that visualization and perspective-taking tasks have different error patterns and activate different neural processes (e.g., Huttenlocher & Presson, 1979; Kozhevnikov & Hegarty, 2001; Wraga et al., 2000). Perspective rotation tasks often lead to egocentric errors, such as reflection errors when trying to reorient perspectives, while object rotation task errors are not as systematic (Kozhevnikov & Hegarty, 2001; Zacks et al., 2000). For example, to solve a spatial orientation/perspective-taking task (Fig. 2A), participants may imagine their bodies moving to a new position or viewpoint with the objects of interest remaining stationary. In contrast, the objects in a spatial visualization task are often rotated in one’s imagination (Fig. 2B). Behavioral and neuroscience evidence is consistent with these findings, suggesting a dissociation between an object-to-object representational system and a self-to-object representational system (Hegarty & Waller, 2004; Kosslyn et al., 1998; Zacks et al., 1999). Thus, within the specific-factor structure of spatial ability, spatial orientation/perspective-taking can be considered a separate factor from spatial visualization/mental rotation (Thurstone, 1950).

figure 2

Exemplars of spatial orientation, mental rotation, and non-rotational spatial visualization tasks. The spatial orientation task (A) is adapted from Hegarty and Waller’s (2004) Object Perception/Spatial Orientation Test. The mental rotation task (B) is adapted from Vandenberg and Kuse’s (1978) Mental Rotation Test. The non-rotational spatial visualization task (C) is adapted from Ekstrom et al.’s (1976) Paper Folding Task

Dissociation Between Mental Rotation and Non-rotational Spatial Visualization

The boundaries between specific factors of spatial ability are often blurred and context dependent. To address this, Ramful and colleagues (2017) created a three-factor framework that clarifies the distinctions between spatial visualization and spatial orientation (see the “Dissociation Between Spatial Orientation and Rotational Spatial Visualization” section) by treating mental rotation as a separate factor. Their framework is unique in that they used mathematics curricula, rather than solely basing their analysis on a factor analysis, to identify three sub-factors of spatial ability: (1) mental rotation, (2) spatial orientation, and (3) spatial visualization. Mental rotation describes imagining how a two-dimensional or three-dimensional object would appear after it has been turned (Fig. 2B); it is a cognitive process that has received considerable attention from psychologists (Bruce & Hawes, 2015; Lombardi et al., 2019; Maeda & Yoon, 2013). Spatial orientation, in contrast, involves egocentric representations of objects and locations and includes the notion of perspective-taking (Fig. 2A). Spatial visualization in their classification system (previously an umbrella term for many spatial skills that included mental rotation) describes mental transformations that do not require mental rotation or spatial orientation (Linn & Peterson, 1985) and can be measured through tasks like those shown in Fig. 2C, which involve operations such as paper folding and unfolding. Under this definition, spatial visualization may involve complex sequences in which intermediate steps may need to be stored in spatial working memory (Shah & Miyake, 1996). In mathematics, spatial visualization skills often correlate with symmetry, geometric translations, part-to-whole relationships, and geometric nets (Ramful et al., 2017).

Summary and Implications

As described above, decades of research on spatial ability have involved scholars using factor-analytic methods to identify and define various spatial sub-components. The results of these efforts have produced a multitude of specific-factor structures, with models identifying anywhere from two to 25 different spatial subcomponents. However, two dissociations may be particularly important for mathematics education research. The first is the dissociation between spatial orientation and spatial visualization abilities: spatial orientation tasks typically involve rotating one’s perspective for viewing an object or scene, while spatial visualization tasks require imagining object rotation. The second dissociation is between mental rotation and non-rotational spatial visualization. While this distinction is relatively recent, it separates the larger spatial visualization sub-component into tasks that involve rotating imagined objects and sequences of visualization tasks that require neither mental rotation nor spatial orientation. The historical focus on psychometric accounts of spatial ability strove to identify constructs that could apply generally to various forms of reasoning, yet it has contributed to a complex literature that scholars who are not steeped in the intricacies of spatial reasoning research may find difficult to parse and apply effectively to mathematics education.

Studies of mathematical reasoning and learning that rely on specific-factor structures can yield different results and interpretations depending on their choices of factors. For example, Schenck et al. (2022) fit several models using different spatial sub-factors to predict undergraduates’ production of verbal mathematical insights. The authors demonstrated that combining mental rotation and non-rotational spatial visualization into a single factor (per McGee, 1979) rather than separating them (per Ramful et al., 2017) can lead to conflicting interpretations of the relevance of these skills for improving mathematics. Some scholars argue that a weakness of many traditional specific-factor structures of spatial ability is that they rely on exploratory factor analysis rather than confirmatory factor analyses informed by a clear theoretical basis of spatial ability (Uttal et al., 2013; Young et al., 2018b). That small but reasonable analytic choices can produce differing results presents a serious problem for reaching convergence on the role of particular spatial abilities in particular mathematics concepts.

Broad-Factor Structures

Alternative approaches to factor-analytic methods rely on much broader distinctions between spatial ability subcomponents. We refer to these alternatives as broad-factor structure approaches, since their categorizations align with theoretically motivated combinations of specific spatial ability subfactors. Some scholars who draw on broad-factor structures have argued for a partial dissociation between large-scale and small-scale spatial abilities (Ferguson et al., 2015; Hegarty et al., 2006, 2018; Jansen, 2009; Potter, 1995). Large-scale spatial abilities involve reasoning about larger-scale objects and spaces, such as physical navigation and environmental maps. Small-scale spatial abilities are defined as those that predominantly rely on mental transformations of shapes or objects (e.g., mental rotation tasks). A meta-analysis (Wang et al., 2014) examining the relationship between small- and large-scale abilities provided further evidence that these two factors should be defined separately. Hegarty et al. (2018) recommend measuring large-scale abilities through sense-of-direction measures and navigation activities. These scholars suggest that small-scale abilities, such as mental rotation, may be measured through typical spatial ability tasks like those discussed in the “Choosing Spatial Tasks in Mathematics Education Research” section of this paper.

Other lines of research that use broad-factor structures have drawn on linguistic, cognitive, and neuroscientific findings to develop a 2 × 2 classification system that distinguishes between intrinsic and extrinsic information along one dimension, and between static and dynamic tasks along an orthogonal dimension (Newcombe & Shipley, 2015; Uttal et al., 2013). Intrinsic spatial skills involve attention to a single object's spatial properties, while extrinsic spatial skills predominately rely on attention to the spatial relationships between objects. Along the second dimension, static tasks are those that involve recognizing and thinking about objects and their relations as given, whereas dynamic tasks move beyond static coding of the spatial features of an object and its relations to imagining spatial transformations of one or more objects.

Uttal and colleagues (2013) describe how this 2 × 2 broad-factor classification framework can be mapped onto Linn and Peterson’s (1985) three-factor model, which breaks spatial ability into spatial perception, mental rotation, and spatial visualization sub-factors. Spatial visualization tasks fall into the intrinsic classification and can address static or dynamic reasoning depending on whether the objects are unchanged or require spatial transformations. The Embedded Figures Test (Fig. 3A; Witkin et al., 1971) is an example of the intrinsic-static classification, while Ekstrom and colleagues’ (1976) Form Board Test and Paper Folding Test (Fig. 3B) are two examples of spatial visualization tasks that measure the intrinsic-dynamic classification. Mental rotation tasks (e.g., the Mental Rotations Test of Vandenberg & Kuse, 1978) also represent the intrinsic-dynamic category. Spatial perception tasks (e.g., water level tasks; Fig. 3C; see Inhelder & Piaget, 1958) capture the extrinsic-static category in the 2 × 2 because they require coding spatial relations between objects, or between an object and gravity, without manipulating them. Furthermore, Uttal et al. (2013) address a limitation of Linn and Peterson’s (1985) model by including the extrinsic-dynamic classification, which they note can be measured through spatial orientation and navigation instruments such as the Guilford-Zimmerman Spatial Orientation Task (Fig. 3D; Guilford & Zimmerman, 1948).

figure 3

Exemplar tasks that map to Uttal and colleagues’ (2013) framework. The intrinsic-static task (A) is adapted from Witkin and colleagues’ (1971) Embedded Figures Test. The intrinsic-dynamic task (B) is adapted from Ekstrom and colleagues’ (1976) Paper Folding Task. The extrinsic-static task (C) is adapted from Piaget and Inhelder’s (1956) water level tasks. The extrinsic-dynamic task (D) is adapted from Guilford and Zimmerman’s (1948) Spatial Orientation Survey Test
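To make the 2 × 2 classification concrete, the mapping of exemplar tasks to cells can be sketched as a simple lookup table. This is an illustrative sketch only: the task names and their cell assignments follow the descriptions in this section, not any published instrument coding.

```python
# Illustrative encoding of Uttal et al.'s (2013) 2x2 classification as a
# lookup table: each task maps to a (relation, process) cell. Task names and
# assignments follow the examples discussed in the text.
CLASSIFICATION = {
    "Embedded Figures Test": ("intrinsic", "static"),
    "Paper Folding Test": ("intrinsic", "dynamic"),
    "Mental Rotations Test": ("intrinsic", "dynamic"),
    "Water Level Task": ("extrinsic", "static"),
    "Guilford-Zimmerman Spatial Orientation Task": ("extrinsic", "dynamic"),
}

def tasks_in_cell(relation: str, process: str) -> list[str]:
    """Return all tasks classified under a given (relation, process) cell."""
    return [t for t, cell in CLASSIFICATION.items() if cell == (relation, process)]
```

A lookup like `tasks_in_cell("extrinsic", "static")` then surfaces the under-represented cells discussed later in the review, since sparsely populated cells simply return short lists.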

Though Uttal et al.’s (2013) classification provides a helpful framework for investigating spatial ability and its links to mathematics (Young et al., 2018b), it faces several challenges. Some critics posit that spatial tasks often require a combination of spatial subcomponents and cannot be easily mapped onto one domain in the framework (Okamoto et al., 2015). For example, a think-aloud task might ask students to describe a different viewpoint of an object. The student may imagine a rotated object (intrinsic-dynamic), imagine moving their body to the new viewpoint (extrinsic-dynamic), use a combination of strategies, or employ a non-spatial strategy such as logical deduction. Additionally, an experimental study by Mix et al. (2018) testing the 2 × 2 classification framework, using confirmatory factor analysis on data from children in kindergarten, 3rd, and 6th grades, failed to find evidence for the static-dynamic dimension at any age or for the overall 2 × 2 classification framework. This study demonstrates that the framework has limitations in practice and suggests that frameworks with less dimensionality may be more appropriate for understanding children's spatial abilities.

Even in light of these challenges, broad-factor taxonomies can benefit researchers who do not expect specific sub-factors of spatial ability to be relevant for their data or those controlling for spatial ability as part of an investigation of a related construct. Currently, no validated and reliable instruments have been explicitly designed to assess these broad-factor taxonomies. Instead, the scholars proposing these broad-factor taxonomies suggest mapping existing spatial tasks, which are usually tied to specific sub-factors of spatial ability, to the broader categories.

Unitary-Factor Structure

Many scholars understand spatial ability to be composed of a set of specific or broad factors. Neuroimaging studies have even provided preliminary evidence of a distinction between object-based abilities, such as mental rotation, and orientation skills (e.g., Kosslyn & Thompson, 2003). However, there is also empirical support for considering spatial ability as a unitary construct. Early studies (Spearman, 1927; Thurstone, 1938) identified spatial ability as one factor, separate from general intelligence, that mentally operates on spatial or visual images. Evidence for a unitary model of spatial ability includes proposals of a common genetic network that supports all spatial abilities (Malanchini et al., 2020; Rimfeld et al., 2017). When a battery of 10 gamified measures of spatial abilities was given to 1,367 twin pairs, results indicated that the tests assessed a single spatial ability factor and that the one-factor model of spatial ability fit better than the two-factor model, even when controlling for a common genetic factor (Rimfeld et al., 2017). In another study, Malanchini et al. (2020) administered 16 spatial tests clustered into three main sub-components: Visualization, Navigation, and Object Manipulation. They then conducted a series of confirmatory factor analyses to fit one-factor (Spatial Ability), two-factor (Spatial Orientation and Object Manipulation), and three-factor (Visualization, Navigation, and Object Manipulation) models. The one-factor model gave the best fit, even when controlling for general intelligence.
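The logic of such model comparisons can be illustrated with a minimal sketch. The studies above used confirmatory factor analysis on real test batteries; the toy version below instead generates synthetic scores from a single latent factor and compares one-, two-, and three-factor models by an approximate BIC using scikit-learn. All names, parameter counts, and data choices here are illustrative assumptions, not the authors' methods.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_subjects, n_tests = 500, 6

# Simulate scores on six hypothetical spatial tests driven by ONE latent factor
g = rng.normal(size=(n_subjects, 1))                    # latent "spatial ability"
loadings = rng.uniform(0.6, 0.9, size=(1, n_tests))
X = g @ loadings + 0.5 * rng.normal(size=(n_subjects, n_tests))

def bic(n_factors: int) -> float:
    """Approximate BIC for a k-factor model (rough parameter count,
    ignoring rotational indeterminacy)."""
    fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(X)
    total_ll = fa.score(X) * n_subjects     # score() returns mean log-likelihood
    n_params = n_tests * n_factors + n_tests  # loadings + unique variances
    return -2 * total_ll + n_params * np.log(n_subjects)

# With data generated by one factor, the one-factor model should win (lowest BIC)
best_k = min([1, 2, 3], key=bic)
```

The point of the sketch is the comparison step: when the extra factors of a larger model do not buy enough likelihood to offset the BIC penalty, the simpler (here, unitary) model is preferred, which mirrors the reported outcome of the twin-study analyses.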

A unitary structure is beneficial for researchers interested in questions about general associations between mathematics and spatial ability, or for those using spatial ability as a moderator in their analyses. However, to date, no valid and reliable instruments, such as batteries spanning varied spatial item types, have been created to fit within the unitary taxonomy. Instead, researchers who discuss spatial ability as a unitary construct often choose one or more well-known spatial measures based on a particular sub-factor of spatial ability (e.g., Boonen et al., 2013; Burte et al., 2017). This issue motivates the need for an evidence-based, theory-grounded task selection procedure as well as the need to develop a unitary spatial cognition measure. In the absence of a single spatial cognition measure designed to assess spatial ability from a unitary perspective, researchers will need to think critically about selecting measures and analytic frameworks that cover a range of spatial ability sub-factors, and to address the limitations of those decisions.

This section reviewed ways spatial abilities have been historically defined and subdivided, with a focus on three of the most widely reported taxonomies: specific-factor structure, broad-factor structure, and unitary structure. The specific-factor structure taxonomy includes subcomponents, such as spatial orientation and rotational and non-rotational spatial visualization, that primarily arise from factor-analytic methods such as exploratory factor analyses. However, discrepancies in factor analytic techniques and test variations led to divergent nomenclature and factorial frameworks. A few dissociations in spatial skills arose from these well-supported methods, such as the distinction between spatial orientation/perspective-taking and spatial visualization. The broad-factor structure taxonomy dissociates spatial abilities based on theoretically motivated categories, such as large-scale and small-scale spatial abilities. While these classifications may be helpful for investigating the links between spatial abilities and mathematics, there is currently little empirical evidence to support using these frameworks in practice. The unitary structure taxonomy is based on factor-analytic evidence for a single, overarching spatial ability factor that is separate from general intelligence. Despite the potential advantage of simplicity, there are currently no valid and reliable instruments for measuring a single spatial factor, so unitary estimates must be based on instruments designed for specific factors or imputed across multiple instruments. An additional complexity in directly applying existing measures to mathematics education research is that mathematical task performance often involves a variety of spatial and non-spatial skills.

Choosing Spatial Tasks in Mathematics Education Research

The context of mathematical reasoning and learning often leads to scenarios where the choice of spatial sub-components influences interpretations. Given the complex nature of spatial ability and the reliance on exploratory rather than confirmatory analyses, there is a need for dissociation approaches with clearer theoretical foundations. In the absence of comprehensive spatial cognition measures that address the possible broad-factor and unitary structures of spatial ability, researchers often resort to well-established spatial measures focusing on specific sub-factors, necessitating critical consideration in task selection and analytical frameworks. Thus, there is a need for evidence-based, theory-grounded task selection procedures to help address the current limitations in spatial ability research as it relates to mathematics education.

With so many spatial ability taxonomies to choose from, education researchers must carefully select tasks and surveys that match their stated research goals and theoretical frameworks, the spatial ability skills of interest, and the populations under investigation. As mentioned, mathematics education researchers often select spatial tasks based on practical motivations, such as access or familiarity, rather than theoretical ones. These decisions can be complicated by the vast number of spatial tasks and the little guidance available for which ones best align with the various spatial taxonomies. In recent years, there has been a concerted effort by groups such as the Spatial Intelligence and Learning Center (spatiallearning.org) to collect and organize a variety of spatial measurements in one place. However, there is still work to be done to create a list of spatial instruments that researchers can easily navigate. To help guide researchers with these decisions, we have compiled a list of spatial instruments referenced in this paper and matched them with their associated spatial sub-components and intended populations (Table 1). These instruments primarily consist of psychometric tests initially designed to determine suitability for occupations such as military service before being adapted for use with university and high school students (Hegarty & Waller, 2005). As such, the majority of instruments are intended to test specific spatial sub-components derived from factor-analytic methods and were created by psychologists for use in controlled laboratory-based studies rather than in classroom contexts (Atit et al., 2020; Lowrie et al., 2020). Therefore, we have organized Table 1 by the specific spatial sub-components described in the “Specific-Factor Structures” section that overlap with skills found in mathematics curricula, as proposed by Ramful and colleagues (2017).

Comparing the instruments in these ways reveals several vital gaps that must be addressed to measure spatial cognition in a way that relates to mathematics across the lifespan. In particular, this analysis reveals an over-representation of certain spatial sub-components, such as mental rotation and spatial visualization, which map to only some quadrants of the 2 × 2 (intrinsic-extrinsic/static-dynamic) classification system described in the “Broad-Factor Structures” section. It shows a pressing need for more tasks explicitly designed for other broad sub-components, such as the extrinsic-static classification. It also reveals that the slate of available instruments is dominated by tasks that have only been tested on adults, with few measures that test more than one sub-component. These disparities have important implications for education and are taken up in the final section.

Due to the sheer number of spatial tasks, the observation that these tasks may not load consistently on distinct spatial ability factors, and the lack of tasks that address broad and unitary factor structures, it is not possible within the scope of this review to discuss every task-factor relationship. As a practical alternative, we have grouped spatial ability tasks into three aggregated categories based on their specific-factor dissociations, as discussed in the previous section: spatial orientation tasks, non-rotational spatial visualization tasks, and mental rotation tasks (for examples, see Fig. 2). We have chosen these three categories for two reasons: (1) there is empirical evidence linking these spatial sub-categories to mathematical thinking outcomes, and (2) these categories align with Ramful et al.'s (2017) three-factor framework, one of the only spatial frameworks designed with links to mathematical thinking in mind. We acknowledge that other scholars may identify different aggregations of spatial reasoning tasks, including those involving mechanical reasoning and abstract reasoning tasks (e.g., Tversky, 2019; Wai et al., 2009). In our aggregated categories, mechanical reasoning tasks would align with either mental rotation or non-rotational tasks depending on the specific task demands, whereas abstract reasoning tasks would align most closely with non-rotational spatial visualization tasks.

As there are no universally accepted measures of spatial ability for each spatial factor, we have narrowed our discussion to exemplars of validated, cognitive, paper-and-pencil spatial ability tasks. These tasks have been historically associated with various spatial ability factors rather than merely serving as measures of general intelligence or visuospatial working memory (Carroll, 1993), and they are easily implemented and scored by educators and researchers without specialized software or statistical knowledge. Notably, this discussion of spatial ability tasks and instruments excludes self-report questionnaires such as the Navigational Strategy Questionnaire (Zhong & Kozhevnikov, 2016) and the Santa Barbara Sense of Direction Scale (Hegarty et al., 2002); navigation simulations such as the Virtual SILC Test of Navigation (Weisberg et al., 2014) and SOIVET-Maze (da Costa et al., 2018); and tasks that involve physical manipulation such as the Test of Spatial Ability (Verdine et al., 2014). Under these inclusion criteria, we were unable to find any published, validated instruments for large-scale spatial orientation, a sub-factor of spatial orientation.

Additionally, we would like to highlight one instrument that does not fit into the categories presented in the following sections but may be of use to researchers. The Spatial Reasoning Instrument (SRI; Ramful et al., 2017) is a multiple-choice test that consists of three spatial subscales (spatial orientation, spatial visualization, and mental rotation). Notably, the questions that measure spatial visualization are specifically designed not to require mental rotation or spatial orientation. Unlike the previously mentioned instruments, the SRI is not a speeded test, though students are given a total time limit. This instrument targets middle school students and was designed to align more closely with students' mathematical curricular experiences than with a traditional psychological orientation. Mathematical connections in the SRI include visualizing lines of symmetry, using two-dimensional nets to answer questions about corresponding three-dimensional shapes, and reflecting objects.

In the next sections, we detail the types of tasks and instruments commonly used to measure spatial orientation, non-rotational spatial visualization, and mental rotation. Ultimately, these help form a guide for navigating and selecting among the various instruments for assessing spatial skills in relation to mathematical reasoning.

Spatial Orientation Tasks

Much like spatial ability more generally, spatial orientation skills fall into the broad distinction between large-scale (e.g., wayfinding, navigation, and scaling abilities) and small-scale (e.g., perspective-taking and directional sense) skills, and small-scale spatial orientation skills have been shown to correlate with large-scale ones (Hegarty & Waller, 2004; Hegarty et al., 2002). Aspects of mathematical thinking that may involve spatial orientation include scaling, reading maps and graphs, identifying orthogonal views of objects, and determining position and location. Although few empirical studies have attempted to determine statistical associations between spatial orientation and mathematics, spatial orientation has been correlated with some forms of scholastic mathematical reasoning. One line of inquiry has shown associations between spatial orientation and early arithmetic and number line estimation (Cornu et al., 2017; Zhang & Lin, 2015). In another, spatial orientation skills were statistically associated with problem-solving strategies and flexible strategy use during high school-level geometric and non-geometric tasks (Tartre, 1990). Studies of disoriented children as young as three years old show that they reorient themselves based on the Euclidean geometric properties of distance and direction, which may contribute to children's developing abstract geometric intuitions (Izard et al., 2011; Lee et al., 2012; Newcombe et al., 2009).

Historically, the Guilford-Zimmerman (GZ) Spatial Orientation Test (1948) was used to measure spatial orientation. Critics have shown that this test may be too complicated and confusing for participants (Kyritsis & Gulliver, 2009) and that the task involves both spatial orientation and spatial visualization (Lohman, 1979; Schultz, 1991). To address these problems, Kozhevnikov and Hegarty (2001) developed the Object Perspective Taking Test, which was later revised into the Object Perspective/Spatial Orientation Test (see Fig. 2A; Hegarty & Waller, 2004). Test takers are prevented from physically moving the test booklet, and all items involve an imagined perspective change of at least 90°. Unlike previous instruments, the Object Perspective/Spatial Orientation Test showed a dissociation between spatial orientation and spatial visualization factors (though they were highly correlated), and its results correlated with self-reported judgments of large-scale spatial cognition. A similar instrument, the Perspective Taking Test for Children, has been developed for younger children (Frick et al., 2014a, 2014b). Additionally, simpler versions of these tasks, which ask participants to match an object to one drawn from an alternative point of view, have also been used, such as those in the Spatial Reasoning Instrument (Ramful et al., 2017).

Non-Rotational Spatial Visualization Tasks

Given differing definitions of spatial visualization, measures of this sub-component often include tasks that evaluate other spatial skills, such as cross-sectioning tasks (e.g., the Mental Cutting Test; CEEB, 1939, and the Santa Barbara Solids Test; Cohen & Hegarty, 2012), which may require elements of spatial orientation or mental rotation. Though such tasks may be relevant for mathematical thinking, this section focuses on tasks that do not overtly require mental rotation. Non-rotational spatial visualization may be involved in several aspects of mathematical thinking, including reflections and visualizing symmetry (Ramful et al., 2015), visual-spatial geometry (Hawes et al., 2017; Lowrie et al., 2019), symbolic comparison (Hawes et al., 2017), and imagining problem spaces (Fennema & Tartre, 1985). A recent study by Lowrie and Logan (2023) posits that developing students' non-rotational spatial visualization abilities may lead to better mathematics scores by improving students' generalized mathematical reasoning skills and spatial working memory.

Three widely used tests of non-rotational spatial visualization come from the Kit of Factor-Referenced Cognitive Tests developed by Educational Testing Service (Ekstrom et al., 1976). These instruments were developed for research on cognitive factors in adult populations. The first is the Paper Folding Test (PFT), one of the most commonly used tests for measuring spatial visualization (see Fig. 2C). In this test, participants view diagrams of a square sheet of paper being folded and then punched with a hole. They are asked to select the picture that correctly shows the resulting holes after the paper is unfolded. Though this task assumes participants imagine unfolding the paper without needing to rotate it, studies have shown that problem attributes (e.g., the number and type of folds and fold occlusions) impact PFT accuracy and strategy use (Burte et al., 2019a).

The second instrument is the Form Board Test. Participants are shown an outline of a complete geometric figure along with a row of five shaded pieces. The task is to decide which of the shaded pieces can be put together to make the complete figure. During the task, participants are told that the pieces can be turned but not flipped, and they may sketch how the pieces fit together.

The third instrument, the Surface Development Test, asks participants to match the sides of a net of a figure to the sides of a drawing of the corresponding three-dimensional figure. As with the PFT, strategy use may impact accuracy on these two measures. This concern led to the development of the similar Make-A-Dice test (Burte et al., 2019b), which increases difficulty through the number of squares in a row and consecutive folds in different directions rather than simply through more folds. Additionally, none of these three instruments was explicitly designed to test non-rotational spatial visualization; rather, they target a broader definition of spatial visualization that includes mental rotation. Thus, some participants' strategies may include mental rotation or spatial orientation.

Other common types of spatial visualization tasks include embedded figures adapted from the Gottschaldt Figures Test (Gottschaldt, 1926). These tasks measure spatial perception, field independence, and the ability to disembed shapes from a background, which may be a necessary problem-solving skill (Witkin et al., 1977). One instrument, the Embedded Figures Test, originally consisted of 24 trials in which a participant is presented with a complex figure, then a simple figure, and is then shown the complex figure again with instructions to locate the simple figure within it (Witkin, 1950). Others have used Witkin's (1950) stimuli as a basis for developing various embedded figures tests, including the Children's Embedded Figures Test (Karp & Konstadt, 1963) and the Group Embedded Figures Test (Oltman et al., 1971).

Mental Rotation Tasks

Mental rotation can be broadly defined as a cognitive operation in which a mental image is formed and rotated in space. Though mental rotation skills are often subsumed under the spatial visualization or spatial relations sub-components, they can be treated as a skill separate from spatial orientation and spatial visualization (Linn & Petersen, 1985; Shepard & Metzler, 1971). As many definitions of general spatial ability include a “rotation” aspect, several studies have investigated the links between mental rotation and mathematics. For young children, cross-sectional studies have shown mixed results. In some studies, significant correlations were found between mental rotation and both calculation and arithmetic skills (Bates et al., 2021; Cheng & Mix, 2014; Gunderson et al., 2012; Hawes et al., 2015). Conversely, Carr et al. (2008) found no significant associations between mental rotation and standardized mathematics performance in similar populations. In middle school-aged children (11–13 years), mental rotation skill has been positively associated with geometry knowledge (Battista, 1990; Casey et al., 1999) and problem-solving (Delgado & Prieto, 2004; Hegarty & Kozhevnikov, 1999). Studies of high school students and adults have indicated that mental rotation is associated with increased accuracy on mental arithmetic problems (Geary et al., 2000; Kyttälä & Lehto, 2008; Reuhkala, 2001).

Behavioral and imaging evidence suggests that mental rotation tasks invoke visuospatial representations that correspond to object rotation in the physical world (Carpenter et al., 1999; Shepard & Metzler, 1971). This ability develops from 3 to 5 years of age with large individual differences (Estes, 1998), and performance varies across individuals irrespective of other intelligence measures (Borst et al., 2011). Several studies have also demonstrated significant gender differences, with males typically outperforming females (e.g., Voyer et al., 1995). However, this gap may be narrowing across generations (Richardson, 1994), suggesting it is due at least in part to sociocultural factors such as educational experiences rather than exclusively to genetic factors. Historically, three-dimensional mental rotation ability has been subsumed under the spatial visualization skill, while two-dimensional mental rotation has occasionally been placed under a separate spatial relations skill (e.g., Carroll, 1993; Lohman, 1979). Thus, mental rotation measures often include either three-dimensional or two-dimensional stimuli rather than a mixture of both.

Three-Dimensional Mental Rotation Tasks

In one of the earliest studies of three-dimensional mental rotation, Shepard and Metzler (1971) presented participants with pictures of pairs of objects and asked them to answer as quickly as possible whether the two objects were the same or different, regardless of differences in orientation. The stimuli showed objects that were either identical but differing in orientation, or mirror images of each other. The mirror images provided a useful control, since they had comparable visual complexity but could not be rotated to match the original. Results revealed a positive linear association between reaction time and the angular difference in the orientation of the objects. In combination with post-task interviews, this finding indicated that participants perceived the two-dimensional pictures as three-dimensional objects and, to make an accurate comparison, first imagined one object rotated into the same orientation as the other. Additional studies have replicated these findings over the last four decades (Uttal et al., 2013). Shepard and Metzler-type stimuli have been used in many different instruments, including the Purdue Spatial Visualization Test: Rotations (Guay, 1976) and the Mental Rotation Test (see Fig. 2B; Vandenberg & Kuse, 1978). However, recent studies have shown that some items on the Mental Rotation Test can be solved using analytic strategies, such as a global-shape strategy to eliminate answer choices, rather than mental rotation strategies (Hegarty, 2018).
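Shepard and Metzler's signature finding, reaction time increasing linearly with angular disparity, amounts to fitting a line RT = a + b·angle. The sketch below shows an ordinary least-squares fit of that line; the reaction times and angles are hypothetical, idealized values for illustration only, not Shepard and Metzler's data.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical reaction times (seconds) at several angular disparities (degrees);
# made perfectly linear here so the fitted slope is easy to read off.
angles = [0, 40, 80, 120, 160]
rts = [1.0, 1.8, 2.6, 3.4, 4.2]

intercept, slope = fit_line(angles, rts)  # slope: added seconds of RT per degree of rotation
```

In real data the points scatter around the line; the slope is then interpreted as the participant's mental rotation rate, and a reliably positive slope is the behavioral signature of an analog rotation process.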

One common critique of Shepard and Metzler-type stimuli is that the classic cube configurations' complex design is not appropriate for younger populations, which has led to few mental rotation studies in this population. Studies have shown that children under 5 years of age have severe difficulties solving standard mental rotation tasks, and that children between the ages of 5 and 9 often perform at chance (Frick et al., 2014a, 2014b). To address this, studies with pre-school-age children often lower task demands by reducing the number of answer choices, removing mirrored and incongruent stimuli, and using exclusively images of two-dimensional objects (Krüger, 2018; Krüger et al., 2013). In response, some scholars have begun developing age-appropriate three-dimensional mental rotation instruments for elementary school students, such as the Rotated Colour Cube Test (Lütke & Lange-Küttner, 2015). In this instrument, participants are presented with a stimulus consisting of a single cube with differently colored sides and are asked to identify an identical cube that has been rotated. Notably, studies of both three-dimensional and two-dimensional rotation have found that cognitive load depends more on the angular orientation of the stimulus than on the object's complexity or dimensionality (Cooper, 1975; Jolicoeur et al., 1985).

Two-Dimensional Mental Rotation Tasks

Tasks measuring two-dimensional mental rotation feature similar stimuli across populations. These tasks, often referred to as spatial relations or speeded rotation tasks, typically involve single-step mental rotation (Carroll, 1993). One common instrument is the Card Rotation Test (Ekstrom et al., 1976), which presents an initial figure and asks participants to select the items that are rotated, but not reflected, versions of it. Importantly, these tasks can be modified for various populations (Krüger et al., 2013). One standardized instrument for pre-school and early primary school-age children, the Picture Rotation Test, demonstrates how easily these two-dimensional stimuli can be modified (Quaiser-Pohl, 2003).

This section aimed to provide an updated review of the various ways spatial ability has historically been measured and to critically evaluate these assessment tools. As the majority of these measures were designed based on the specific-factor structures outlined in the “Specific-Factor Structures” section, we organized our discussion by grouping assessments according to the specific factor each was intended to capture. We also focused on the spatial sub-components that have been suggested to be linked to mathematical thinking: spatial orientation, spatial visualization, and mental rotation. Ultimately, we found that although there are many spatial measures to choose from, there is a need for additional measures that address gaps in population coverage and include more than one spatial sub-component. Additionally, there is a critical need for spatial assessments usable outside controlled laboratory and one-on-one settings, to more deeply understand the complex connections between spatial ability and mathematics education in authentic learning settings such as classrooms.

A Guiding Framework

We contend that decisions regarding the choice of spatial subdivisions, analytical frameworks, and spatial measures will impact both the results and the interpretations of findings from controlled studies on the nature of mathematical reasoning. One way these decisions affect outcomes is by changing which specific spatial ability sub-components reliably predict mathematics performance. Some factors of spatial ability have been shown to be more strongly associated with certain sub-domains of mathematics than with others (Delgado & Prieto, 2004; Schenck & Nathan, 2020), but it is unclear how generalizable these findings are, as students may use a variety of spatial and non-spatial strategies. Additionally, some models and classifications of spatial ability, such as Uttal et al.'s (2013) classification and the unitary model of spatial ability, currently have no validated instruments. Thus, selecting a spatial skills instrument poorly suited to the mathematical skills or population under investigation may fail to show predictive value. This can weaken the overall model of the dependent variable and lead the research team to conclude that spatial reasoning is not relevant to the mathematical domain of interest. These limitations are often not discussed in the publications we reviewed and, perhaps, may not even be recognized by many education researchers. However, as noted, it can be difficult for education researchers to select an appropriate framework among the many alternatives that match their specific domains of study.

Given the various spatial taxonomies and the assumptions and design decisions involved in choosing accompanying analytical frameworks, we assert that most education researchers who do not identify as spatial cognition researchers should avoid attempting to create a specific, universal taxonomy of spatial ability. Evidence about the ways individuals interact with spatial information through the various spatial sub-components often reflects a particular scholar's perspective on spatial ability; researchers should instead let their study goals inform their choices of spatial taxonomies, analytical frameworks, and measures.

To help education researchers who may be unfamiliar with the vast literature on spatial ability navigate this large and potentially confusing landscape in service of their study objectives, we have designed a guide in the form of a flowchart that enables them to match spatial taxonomies to analytic frameworks (Fig.  4 ). Our guide, understandably, does not include every possible spatial taxonomy or study aim. Instead, it offers a helpful starting point for incorporating spatial skills into an investigation of mathematical reasoning by focusing on how researchers can draw on specific factor taxonomies and current validated measures of spatial ability in controlled studies.

figure 4

Flowchart for selecting the appropriate spatial taxonomy and analytic framework for one’s investigation

The first question in the flowchart, Q1, asks researchers to decide how spatial ability will be used in their investigation: either as a covariate or as the main variable of interest. If spatial ability is a covariate, the most appropriate taxonomy is the unitary model, which captures the many possible ways participants could utilize spatial thinking during mathematical reasoning. However, as mentioned above, this model has no validated measure. Thus, we recommend researchers select several measures that cover a variety of specific spatial sub-components, or a measure designed to test more than one spatial sub-component, such as the Spatial Reasoning Instrument (Ramful et al., 2017). We then suggest using an analytical framework with a single composite score across multiple tasks to combat issues such as task-related biases (Moreau & Wiebels, 2021).
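The composite-score approach can be illustrated with a minimal sketch (the scores below are hypothetical, not drawn from any cited study): standardize each task's raw scores to z-scores, then average each participant's z-scores across tasks so that no single task dominates because of its scale.

```python
from statistics import mean, stdev

def z_scores(raw):
    """Standardize one task's raw scores to mean 0, SD 1 (sample SD)."""
    m, s = mean(raw), stdev(raw)
    return [(x - m) / s for x in raw]

def composite(task_scores):
    """Average each participant's z-scores across tasks into one composite."""
    standardized = [z_scores(task) for task in task_scores]
    return [mean(per_participant) for per_participant in zip(*standardized)]

# Hypothetical raw scores for four participants on three spatial tasks
mental_rotation = [12, 15, 9, 18]
paper_folding = [7, 10, 6, 11]
perspective_taking = [20, 25, 18, 27]

spatial_composite = composite([mental_rotation, paper_folding, perspective_taking])
```

The resulting composite is itself centered at zero, and each participant's value can then enter a regression model as a single covariate representing overall spatial ability.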

If spatial ability is the main variable of interest, Q2 in the flowchart directs the researcher to consider whether they are investigating the role of spatial ability as a general concept or as one or more specific sub-components. Suppose, for instance, that the researcher is interested in understanding links between general spatial ability and a specific mathematical domain. In that case, we recommend using the unitary model of spatial ability and following the recommendations outlined above for using spatial ability as a covariate. Casey et al. (2015), for example, found that children's early spatial skills were long-term predictors of later mathematical reasoning skills. In their analysis, the authors identified two key spatial skills, mental rotation and spatial visualization, that previous work by Mix and Cheng (2012) had found to be highly associated with mathematics performance. To measure these constructs, Casey and colleagues administered three spatial tasks: a spatial visualization measure, a 2-D mental rotation measure, and a 3-D mental rotation measure. Because the authors were interested in the impact of overall spatial ability on analytical mathematical reasoning, and in partially replicating previous findings, rather than in whether these two factors separately impacted mathematics performance, they combined the three spatial scores into a single composite score.

For investigations centering around one or more specific spatial sub-components, we recommend novice researchers use sub-components from specific factor taxonomies (e.g., mental rotation, spatial visualization, spatial orientation). Specific-factor taxonomies are used in a variety of lines of research, including mathematics education. Studies exploring the association between spatial ability and mathematics often focus on a particular sub-factor. For example, some studies have focused on the association between mental rotation and numerical representations (e.g., Rutherford et al., 2018 ; Thompson et al., 2013 ), while others have focused on spatial orientation and mathematical problem solving (e.g., Tartre, 1990 ). Similarly, scholars investigating spatial training efficacy often use spatial tasks based on a single factor or a set of factors as pre- and post-test measures and in intervention designs (e.g., Bruce & Hawes, 2015 ; Gilligan et al., 2019 ; Lowrie et al., 2019 ; Mix et al., 2021 ).

The third question in the flowchart, Q3, asks researchers to select whether their investigation will focus on one spatial sub-component or several, to provide guidance for analytic frameworks. In new studies, the sub-components of interest may be selected based on prior studies for confirmatory analyses or on a theoretical basis for exploratory studies. If only a single spatial sub-component is of interest, we suggest an analytic approach that uses a single score from one task. If multiple spatial sub-components are relevant, we recommend using a single score from one task for each sub-component of interest.
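Taken together, the Q1-Q3 branching can be summarized as a small decision function. This is our own condensed restatement of the flowchart's logic, not code from the paper, and the argument names and recommendation strings are ours.

```python
def recommend(spatial_role, focus=None, n_subcomponents=None):
    """Map flowchart answers (Q1-Q3) to a taxonomy and analytic framework.

    spatial_role: "covariate" or "main_variable" (Q1)
    focus: "general" or "specific", used when spatial ability is the main variable (Q2)
    n_subcomponents: how many sub-components are of interest when focus is "specific" (Q3)
    """
    if spatial_role == "covariate" or (spatial_role == "main_variable" and focus == "general"):
        # Unitary model: no validated single measure exists, so combine several.
        return ("unitary model",
                "composite score across several sub-component measures "
                "(or a multi-subcomponent instrument such as the SRI)")
    if spatial_role == "main_variable" and focus == "specific":
        if n_subcomponents == 1:
            return ("specific-factor taxonomy", "single score from one task")
        return ("specific-factor taxonomy",
                "one score from one task per sub-component of interest")
    raise ValueError("unrecognized combination of flowchart answers")
```

For example, `recommend("covariate")` returns the unitary-model/composite-score recommendation, while `recommend("main_variable", focus="specific", n_subcomponents=1)` returns the single-task recommendation.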

Task selection, the final step in the flow chart, will depend on practical considerations such as which spatial sub-components are relevant, population age, and time constraints. Though thousands of spatial tasks are available, the tasks listed in Table  1 , which also identifies corresponding broad and specific spatial sub-components, can be a useful starting point for designing a study. We recommend that researchers acknowledge that students may solve mathematical problems in various spatial and non-spatial ways and, thus, their results may not generalize to all students or all mathematical tasks and domains. We also remind researchers that the majority of the measures described in the “Choosing Spatial Tasks in Mathematics Education Research” section are designed as psychometric instruments for use in tightly controlled studies. The guidance above is not intended for studies that involve investigating spatial ability in classrooms or other in situ contexts.

Conclusions and Lingering Questions

Researchers largely agree that spatial ability is essential for mathematical reasoning and success in STEM fields (National Research Council, 2006). The two goals of this review were, first, to summarize the relevant spatial ability literature, including the various factor structures and measures, in an attempt to more clearly understand the elements of spatial ability that may relate most closely to mathematics education; and second, to provide recommendations for education researchers and practitioners for selecting appropriate theoretical taxonomies, analytical frameworks, and specific instruments for measuring, interpreting, and improving spatial reasoning for mathematics education. Our review exposed a wide array of spatial taxonomies and analytical frameworks developed by spatial ability scholars for understanding and measuring spatial reasoning. However, this review shows no convergence on a definition of spatial ability or agreement regarding its sub-components, no universally accepted set of standardized measures for assessing spatial skills, and, most importantly, no consensus on the nature of the link between mathematical reasoning and spatial ability. Thus, this review exposes several challenges to understanding the relationship between spatial skills and performance in mathematics. One is that the connections between mathematical reasoning and spatial skills, while supported, are complicated by the divergent descriptions of spatial taxonomies and analytical frameworks, the sheer volume of spatial measures one encounters as a potential consumer, and the lack of a universally accepted means of mapping spatial measures to mathematical reasoning processes. These challenges are the shared responsibility of the educational psychology research community. The lack of progress on these issues impedes the design of effective spatial skills interventions for improving mathematical thinking and learning based on clear causal principles, the selection of appropriate metrics for documenting change, and the analysis and interpretation of student outcome data.

Our primary contribution in the context of these challenges is to provide a guide, well situated in the research literature, for navigating and selecting among the major spatial taxonomies and validated instruments for assessing spatial skills in mathematics education research and instructional design. To anchor our recommendations, we first summarized much of the history and major findings of spatial ability research as it relates to education (“Selecting a Spatial Taxonomy” section). In this summary, we identified three major types of spatial taxonomies: specific, broad, and unitary, and provided recommendations for associated analytical frameworks. We then discussed the plethora of spatial ability tasks that investigators and educators must navigate (“Choosing Spatial Tasks in Mathematics Education Research” section). To make the landscape more tractable, we divided these tasks into three categories shown to be relevant to mathematics education — spatial orientation, mental rotation, and non-rotational spatial visualization (see Table 1) — and mapped these tasks to their intended populations. We acknowledge that researchers and educators often select spatial tasks and analytic frameworks for practical rather than theoretical reasons, which can undermine the validity of their research and assessment efforts. To provide educators with a stronger evidence-based foundation, we then offered a guiding framework (“A Guiding Framework” section) in the form of a flowchart to assist investigators in selecting appropriate spatial taxonomies and analytic frameworks as a precursor to making well-suited task selections for their particular needs. A guide of this sort offers practical steps toward utilizing existing resources for understanding and improving education through the lens of spatial abilities. Our focus was on providing a tool to guide the decision-making of investigators seeking to relate spatial skills to mathematics performance based on existing resources, empirical findings, and the currently dominant theoretical frameworks.

Several limitations remain, however. One is that the vast majority of published studies administered spatial skills assessments using paper-and-pencil instruments. In recent years, testing has moved online, posing new challenges regarding the applicability and reliability of past instruments and findings. Updating these assessments will take time as research using online instruments and new immersive technologies catches up (see Uttal et al., 2024, for discussion). A second limitation is that studies investigating the associations between spatial ability and mathematics have often focused on a particular spatial ability or a particular mathematical skill, leaving many unknowns about which spatial abilities map to which areas of mathematics performance. This limitation can only be addressed through careful, systematic, large-scale studies. A third limitation is that many of the instruments in the published literature were developed for and tested on adult populations, which greatly limits their applicability to school-aged populations. Again, this limitation can only be addressed through research that extends this work across a broader developmental range. Fourth, many spatial ability instruments reported in the literature include tasks that may be solved using various strategies, some of them non-spatial, calling into question whether these instruments measure the specific spatial skills they claim to measure. For example, some tasks in assessments such as the Paper Folding Test may be effectively solved through non-spatial methods such as logic or counting rather than pure spatial visualization. Thus, there is a pressing need for process-level data, such as immediate retrospective reports and eye tracking (cf. Just & Carpenter, 1985), to accurately describe the various psychological processes involved and how they vary by age, individual differences, and assessment context.
A fifth limitation relates to the 2 × 2 classification system using intrinsic and extrinsic information along one dimension and static and dynamic tasks along the other (Newcombe & Shipley, 2015; Uttal et al., 2013). In mapping existing tasks to this system, it became clear that there is a need for further development of extrinsic-static tasks and instruments. We found no studies investigating the link between mathematical reasoning and extrinsic-static spatial abilities, perhaps because of the lack of appropriate assessments. The sixth, and arguably greatest, limitation is that scholarly research on spatial ability still lacks a convergent taxonomy and offers no clear picture as to which aspects of spatial thinking are most relevant to STEM thinking and learning. More research is needed to test additional models of spatial ability, such as the unitary model, and to expand spatial ability assessment tools to capture the complex and multifaceted nature of spatial thinking needed in mathematics education environments.

The objectives of this paper were to provide researchers with an updated review of spatial ability and its measures and to offer researchers new to spatial cognition a guide for navigating this vast literature when making study design decisions. Overall, research to understand the structure of spatial ability more deeply is at a crossroads. Spatial ability is demonstrably relevant for the development of mathematics reasoning and offers a malleable factor that may have a profound impact on the design of future educational interventions and assessments. Synthesizing these lines of research highlighted several areas that remain unexplored and in need of future research and development. STEM education and workforce development remain essential for scientific and economic advancement, and spatial skills are an important aspect of success and retention in technical fields. Thus, it is critical to further understand the connections between spatial and mathematical abilities, both to advance the science of learning and to inform the design of future curricular interventions that transfer skills across science, technology, engineering, and mathematics.

Amalric, M., & Dehaene, S. (2016). Origins of the brain networks for advanced mathematics in expert mathematicians. Proceedings of the National Academy of Sciences, 113 (18), 4909–4917. https://doi.org/10.1073/pnas.1603205113

Atit, K., Power, J. R., Pigott, T., Lee, J., Geer, E. A., Uttal, D. H., Ganley, C. M., & Sorby, S. A. (2022). Examining the relations between spatial skills and mathematical performance: A meta-analysis. Psychonomic Bulletin & Review, 29 , 699–720. https://doi.org/10.3758/s13423-021-02012-w

Atit, K., Uttal, D. H., & Stieff, M. (2020). Situating space: Using a discipline-focused lens to examine spatial thinking skills. Cognitive Research: Principles and Implications, 5 (19), 1–16.

Bates, K. E., Gilligan-Lee, K., & Farran, E. K. (2021). Reimagining mathematics: The role of mental imagery in explaining mathematical calculation skills in childhood. Mind, Brain, and Education, 15 (2), 189–198.

Battista, M. T. (1990). Spatial visualization and gender differences in high school geometry. Journal for Research in Mathematics Education, 21 (1), 47–60. https://doi.org/10.2307/749456

Battista, M. T. (2007). The development of geometric and spatial thinking. In F. K. Lester (Ed.), Second handbook of research on mathematics teaching and learning (pp. 843–908). Information Age Publishing.

Battista, M. T., Frazee, L. M., & Winer, M. L. (2018). Analyzing the relation between spatial and geometric reasoning for elementary and middle school students. In K. S. Mix & M. T. Battista (Eds.), Visualizing mathematics: Research in mathematics education (pp. 195–228). Springer. https://doi.org/10.1007/978-3-319-98767-5_10

Boonen, A. J. H., van der Schoot, M., van Wesel, F., de Vries, M. H., & Jolles, J. (2013). What underlies successful word problem solving? A path analysis in sixth grade students. Contemporary Educational Psychology, 38 , 271–279. https://doi.org/10.1016/j.cedpsych.2013.05.001

Borst, G., Ganis, G., Thompson, W. L., & Kosslyn, S. M. (2011). Representations in mental imagery and working memory: Evidence from different types of visual masks. Memory & Cognition, 40 (2), 204–217. https://doi.org/10.3758/s13421-011-0143-7

Bruce, C. D., & Hawes, Z. (2015). The role of 2D and 3D mental rotation in mathematics for young children: What is it? Why does it matter? And what can we do about it? ZDM, 47 (3), 331–343. https://doi.org/10.1007/s11858-014-0637-4

Buckley, J., Seery, N., & Canty, D. (2018). A heuristic framework of spatial ability: A review and synthesis of spatial factor literature to support its translation into STEM education. Educational Psychology Review, 30 , 947–972. https://doi.org/10.1007/s10648-018-9432-z

Buckley, J., Seery, N., & Canty, D. (2019). Investigating the use of spatial reasoning strategies in geometric problem solving. International Journal of Technology and Design Education, 29 , 341–362. https://doi.org/10.1007/s10798-018-9446-3

Burte, H., Gardony, A. L., Hutton, A., & Taylor, H. A. (2017). Think3d!: Improving mathematical learning through embodied spatial training. Cognitive Research: Principles and Implications, 2 (13), 1–8. https://doi.org/10.1186/s41235-017-0052-9

Burte, H., Gardony, A. L., Hutton, A., & Taylor, H. A. (2019). Knowing when to fold ‘em: Problem attributes and strategy differences in the Paper Folding Test. Personality and Individual Differences, 146 , 171–181.

Burte, H., Gardony, A. L., Hutton, A., & Taylor, H. A. (2019). Make-A-Dice test: Assessing the intersection of mathematical and spatial thinking. Behavior Research Methods, 51 (2), 602–638. https://doi.org/10.3758/s13428-018-01192-z

Carpenter, P. A., Just, M. A., Keller, T. A., Eddy, W., & Thulborn, K. (1999). Graded functional activation in the visuospatial system with the amount of task demand. Journal of Cognitive Neuroscience, 11 (1), 9–24. https://doi.org/10.1162/089892999563210

Carr, M., Steiner, H. H., Kyser, B., & Biddlecomb, B. (2008). A comparison of predictors of early emerging gender differences in mathematics competency. Learning and Individual Differences, 18 (1), 61–75. https://doi.org/10.1016/j.lindif.2007.04.005

Carroll, J. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge University Press. https://doi.org/10.1017/CBO9780511571312

Case, R., Okamoto, Y., Griffin, S., McKeough, A., Bleiker, C., Henderson, B., Stephenson, K. M., Siegler, R. S., & Keating, D. P. (1996). The role of central conceptual structures in the development of children’s thought. Monographs of the Society for Research in Child Development, 61 (1/2), i–295. https://doi.org/10.2307/1166077

Casey, B. M., Nuttall, R. L., & Pezaris, E. (1999). Evidence in support of a model that predicts how biological and environmental factors interact to influence spatial skills. Developmental Psychology, 35 (5), 1237–1247. https://doi.org/10.1037/0012-1649.35.5.1237

Casey, B. M., Pezaris, E., Fineman, B., Pollock, A., Demers, L., & Dearing, E. (2015). A longitudinal analysis of early spatial skills compared to arithmetic and verbal skills as predictors of fifth-grade girls’ math reasoning. Learning and Individual Differences, 40 , 90–100. https://doi.org/10.1016/j.lindif.2015.03.028

Cheng, Y. L., & Mix, K. S. (2014). Spatial training improves children’s mathematics ability. Journal of Cognition and Development, 15 (1), 2–11. https://doi.org/10.1080/15248372.2012.725186

Cohen, C. A., & Hegarty, M. (2012). Inferring cross sections of 3D objects: A new spatial thinking test. Learning and Individual Differences, 22 (6), 868–874.

College Entrance Examination Board (CEEB). (1939).  Special aptitude test in spatial relations . College Entrance Examination Board, New York.

Cooper, L. A. (1975). Mental rotation of random two-dimensional shapes. Cognitive Psychology, 7 (1), 20–43. https://doi.org/10.1016/0010-0285(75)90003-1

Cooper, L. A., & Mumaw, R. J. (1985). Spatial aptitude. In R. F. Dillon (Ed.), Individual differences in cognition (Vol. 2, pp. 67–94). Academic Press.

Cornu, V., Hornung, C., Schiltz, C., & Martin, R. (2017). How do different aspects of spatial skills relate to early arithmetic and number line estimation? Journal of Numerical Cognition, 3 (2), 309–343. https://doi.org/10.5964/jnc.v3i2.36

da Costa, R., Pompeu, J. E., de Mello, D. D., Moretto, E., Rodrigues, F. Z., Dos Santos, M. D., Nitrini, R., Morganti, F., & Brucki, S. (2018). Two new virtual reality tasks for the assessment of spatial orientation: Preliminary results of tolerability, sense of presence and usability. Dementia & Neuropsychologia, 12 (2), 196–204. https://doi.org/10.1590/1980-57642018dn12-020013

Davis, B. (2015). Spatial reasoning in the early years: Principles, assertions, and speculations . Routledge.

Delgado, A. R., & Prieto, G. (2004). Cognitive mediators and sex-related differences in mathematics. Intelligence, 32 , 25–32. https://doi.org/10.1016/S0160-2896(03)00061-8

D’Oliveira, T. (2004). Dynamic spatial ability: An exploratory analysis and a confirmatory study. The International Journal of Aviation Psychology, 14 (1), 19–38. https://doi.org/10.1207/s15327108ijap1401_2

Ekstrom, R. B., French, J. W., & Harman, H. H. (1976). Manual for kit of factor-referenced cognitive tests . Educational Testing Service.

Estes, D. (1998). Young children’s awareness of their mental activity: The case of mental rotation. Child Development, 69 (5), 1345–1360. https://doi.org/10.2307/1132270

Fennema, E., & Tartre, L. A. (1985). The use of spatial visualization in mathematics by girls and boys. Journal for Research in Mathematics Education, 16 (3), 184–206.

Ferguson, A. M., Maloney, E. A., Fugelsang, J., & Risko, E. F. (2015). On the relation between math and spatial ability: The case of math anxiety. Learning and Individual Differences, 39 , 1–12.

Frick, A., Hanson, M. A., & Newcombe, N. S. (2014). Development of mental rotation in 3- to 5-year-old children. Cognitive Development, 28 (4), 386–399. https://doi.org/10.1016/j.cogdev.2013.06.002

Frick, A., Möhring, W., & Newcombe, N. S. (2014). Picturing perspectives: Development of perspective-taking abilities in 4- to 8-year-olds. Frontiers in Psychology, 5 , 386. https://doi.org/10.3389/fpsyg.2014.00386

Gallistel, C. R., & Gelman, R. (1992). Preverbal and verbal counting and computation. Cognition, 44 (1/2), 43–74. https://doi.org/10.1016/0010-0277(92)90050-R

Galton, F. (1879). Generic Images. The Nineteenth Century, 6 (1), 157–169.

Gaughran, W. (2002). Cognitive modelling for engineers. In 2002 American Society for Engineering Education annual conference and exposition . Montréal, Canada: American Society for Engineering Education.

Geary, D. C., Saults, S. J., Liu, F., & Hoard, M. K. (2000). Sex differences in spatial cognition, computational fluency, and arithmetical reasoning. Journal of Experimental Child Psychology, 77 (4), 337–353. https://doi.org/10.1006/jecp.2000.2594

Gilligan, K. A., Thomas, M. S. C., & Farran, E. K. (2019). First demonstration of effective spatial training for near transfer to spatial performance and far transfer to a range of mathematics skills at 8 years. Developmental Science, 23 (4), e12909. https://doi.org/10.1111/desc.12909

Gottschaldt, K. (1926). Über den Einfluss der Erfahrung auf die Wahrnehmung von Figuren. Psychologische Forschung, 8 , 261–318. https://doi.org/10.1007/BF02411523

Guay, R. B. (1976). Purdue spatial visualization test . Purdue Research Foundation.

Guilford, J. P., & Zimmerman, W. S. (1948). The Guilford-Zimmerman aptitude survey. Journal of Applied Psychology, 32 (1), 24–34. https://doi.org/10.1037/h0063610

Gunderson, E. A., Ramirez, G., Beilock, S. L., & Levine, S. C. (2012). The relation between spatial skill and early number knowledge: The role of the linear number line. Developmental Psychology, 48 (5), 1229–1241. https://doi.org/10.1037/a0027433

Hambrick, D. Z., Libarkin, J. C., Petcovic, H. L., Baker, K. M., Elkins, J., Callahan, C. N., Turner, S. P., Rench, T. A., & LaDue, N. D. (2012). A test of the circumvention-of-limits hypothesis in scientific problem solving: The case of geological bedrock mapping. Journal of Experimental Psychology: General, 141 (3), 397–403. https://doi.org/10.1037/a0025927

Hannafin, R. D., Truxaw, M. P., Vermillion, J. R., & Liu, Y. (2008). Effects of spatial ability and instructional program on geometry achievement. Journal of Educational Research, 101 (3), 148–157. https://doi.org/10.3200/JOER.101.3.148-157

Harris, J., Hirsh-Pasek, K., & Newcombe, N. S. (2013). A new twist on studying the development of dynamic spatial transformations: Mental paper folding in young children. Mind, Brain, and Education, 7 (1), 49–55. https://doi.org/10.1111/mbe.12007

Hawes, Z., & Ansari, D. (2020). What explains the relationship between spatial and mathematical skills? A review of the evidence from brain and behavior. Psychonomic Bulletin & Review, 27 , 465–482. https://doi.org/10.3758/s13423-019-01694-7

Hawes, Z. C. K., Moss, J., Caswell, B., Naqvi, S., & MacKinnon, S. (2017). Enhancing children’s spatial and numerical skills through a dynamic spatial approach to early geometry instruction: Effects of a 32-week intervention. Cognition and Instruction, 35 , 236–264.

Hawes, Z., Moss, J., Caswell, B., & Poliszczuk, D. (2015). Effects of mental rotation training on children’s spatial and mathematics performance: A randomized controlled study. Trends in Neuroscience and Education, 4 (3), 60–68. https://doi.org/10.1016/j.tine.2015.05.001

Hegarty, M. (2018). Ability and sex differences in spatial thinking: What does the mental rotation test really measure? Psychonomic Bulletin & Review, 25 , 1212–1219.

Hegarty, M., & Kozhevnikov, M. (1999). Types of visual-spatial representations and mathematical problem solving. Journal of Educational Psychology, 91 (4), 684–689. https://doi.org/10.1037/0022-0663.91.4.684

Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial layout learning. Intelligence, 34 (2), 151–176. https://doi.org/10.1016/j.intell.2005.09.005

Hegarty, M., Richardson, A. E., Montello, D. R., Lovelace, K., & Subbiah, I. (2002). Development of a self-report measure of environmental spatial ability. Intelligence, 30 , 425–447. https://doi.org/10.1016/S0160-2896(02)00116-2

Hegarty, M., & Waller, D. (2004). A dissociation between mental rotation and perspective-taking spatial abilities. Intelligence, 32 (2), 175–191. https://doi.org/10.1016/j.intell.2003.12.001

Hegarty, M. & Waller, D. A. (2005). Individual differences in spatial abilities. In P. Shah, & A. Miyake (Eds.), The Cambridge handbook of visuospatial thinking (pp. 121–169). Cambridge University Press. https://doi.org/10.1017/CBO9780511610448.005

Hubbard, E. M., Piazza, M., Pinel, P., & Dehaene, S. (2005). Interactions between number and space in parietal cortex. Nature Reviews Neuroscience, 6 (6), 435–448. https://doi.org/10.1038/nrn1684

Huttenlocher, J., & Presson, C. C. (1979). The coding and transformation of spatial information. Cognitive Psychology, 11 (3), 375–394. https://doi.org/10.1016/0010-0285(79)90017-3

Inhelder, B. & Piaget, J. (1958). The growth of logical thinking: From childhood to adolescence. Basic Books. https://doi.org/10.1037/10034-000

Izard, V., Pica, P., Spelke, E. S., & Dehaene, S. (2011). Flexible intuitions of Euclidean geometry in an Amazonian indigene group. Proceedings of the National Academy of Sciences, 108 (24), 9782–9787. https://doi.org/10.1073/pnas.1016686108

Jansen, P. (2009). The dissociation of small-and large-scale spatial abilities in school-age children. Perceptual and Motor Skills, 109 (2), 357–361.

Jolicoeur, P., Regehr, S., Smith, L. B. J. P., & Smith, G. N. (1985). Mental rotation of representations of two-dimensional and three-dimensional objects. Canadian Journal of Psychology, 39 (1), 100–129.

Just, M. A., & Carpenter, P. A. (1985). Cognitive coordinate systems: Accounts of mental rotation and individual differences in spatial ability. Psychological Review, 92 (2), 137–172.

Karp, S. A., & Konstadt, N. L. (1963). Manual for the children’s embedded figures test . Cognitive Tests.

Kosslyn, S. M., Koenig, O., Barrett, A., Cave, C. B., Tang, J., & Gabrieli, J. D. E. (1989). Evidence for two types of spatial representations: Hemispheric specialization for categorical and coordinate relations. Journal of Experimental Psychology: Human Perception and Performance, 15 (4), 723–735. https://doi.org/10.1037/0096-1523.15.4.723

Kosslyn, S. M., & Thompson, W. L. (2003). When is early visual cortex activated during visual mental imagery? Psychological Bulletin, 129 (5), 723–746. https://doi.org/10.1037/0033-2909.129.5.723

Kozhevnikov, M., & Hegarty, M. (2001). A dissociation between object manipulation spatial ability and spatial orientation ability. Memory & Cognition, 29 (5), 745–756. https://doi.org/10.3758/BF03200477

Kozhevnikov, M., & Thornton, R. (2006). Real-time data display, spatial visualization ability, and learning force and motion concepts. Journal of Science Education and Technology, 15 (1), 111–132. https://doi.org/10.1007/s10956-006-0361-0

Krüger, M. (2018). Mental rotation and the human body: Children’s inflexible use of embodiment mirrors that of adults. British Journal of Developmental Psychology, 36 (3), 418–437. https://doi.org/10.1111/bjdp.12228

Krüger, M., Kaiser, M., Mahler, K., Bartels, W., & Krist, H. (2013). Analogue mental transformations in 3-year-olds: Introducing a new mental rotation paradigm suitable for young children. Infant and Child Development, 23 , 123–138. https://doi.org/10.1002/icd.1815

Kyritsis, M., & Gulliver, S. R. (2009). Guilford-Zimmerman orientation survey: A validation. In 2009 7th International Conference on Information, Communications and Signal Processing (ICICS) (pp. 1–4).

Kyttälä, M., & Lehto, J. E. (2008). Some factors underlying mathematical performance: The role of visuospatial working memory and non-verbal intelligence. European Journal of Psychology of Education, 23 (1), 77–94. https://doi.org/10.1007/BF03173141

Laski, E. V., Casey, B. M., Yu, Q., Dulaney, A., Heyman, M., & Dearing, E. (2013). Spatial skills as a predictor of first grade girls’ use of higher level arithmetic strategies. Learning and Individual Differences, 23 , 123–130. https://doi.org/10.1016/j.lindif.2012.08.001

Lee, S. A., Sovrano, V. A., & Spelke, E. S. (2012). Navigation as a source of geometric knowledge: Young children’s use of length, angle, distance, and direction in a reorientation task. Cognition, 123 (1), 144–161. https://doi.org/10.1016/j.cognition.2011.12.015

Linn, M., & Petersen, A. C. (1985). Emergence and characterization of sex differences in spatial ability: A meta-analysis. Child Development, 56 (6), 1479–1498. https://doi.org/10.2307/1130467

Lohman, D. F. (1979).  Spatial ability: A review and re-analysis of the correlational literature (Technical Report No. 8). Stanford, CA: Aptitudes Research Project, School of Education, Stanford University.

Lohman, D. F. (1988). Spatial abilities as traits, processes, and knowledge. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (pp. 181–248). Lawrence Erlbaum.

Lombardi, C. M., Casey, B. M., Pezaris, E., Shadmehr, M., & Jong, M. (2019). Longitudinal analysis of associations between 3-d mental rotation and mathematics reasoning skills during middle school: Across and within genders. Journal of Cognition and Development, 20 (4), 487–509. https://doi.org/10.1080/15248372.2019.1614592

Lowrie, T., & Logan, T. (2023). Spatial visualization supports students’ math: Mechanisms for spatial transfer. Journal of Intelligence, 11 (6), 127. https://doi.org/10.3390/jintelligence11060127

Lowrie, T., Logan, T., & Hegarty, M. (2019). The influence of spatial visualization training on students’ spatial reasoning and mathematics performance. Journal of Cognition and Development, 20 (5), 729–751. https://doi.org/10.1080/15248372.2019.1653298

Lowrie, T., Resnick, I., Harris, D., & Logan, T. (2020). In search of the mechanisms that enable transfer from spatial reasoning to mathematics understanding. Mathematics Educational Research Journal, 32 , 175–188. https://doi.org/10.1007/s13394-020-00336-9

Lütke, N., & Lange-Küttner, C. (2015). Keeping it in three dimensions: Measuring the development of mental rotation in children with the Rotated Colour Cube Test (RCCT). International Journal of Developmental Science, 9 (2), 95–114. https://doi.org/10.3233/DEV-14154

Maeda, Y., & Yoon, S. Y. (2013). A meta-analysis on gender differences in mental rotation ability measured by the Purdue spatial visualization tests: Visualization of rotations (PSVT: R). Educational Psychology Review, 25 (1), 69–94. https://doi.org/10.1007/s10648-012-9215-x

Malanchini, M., Rimfeld, K., Shakeshaft, N. G., McMillan, A., Schofield, K. L., Rodic, M., Rossi, V., Kovas, Y., Dale, P. S., Tucker-Drob, E. M., & Plomin, R. (2020). Evidence for a unitary structure of spatial cognition beyond general intelligence. npj Science of Learning, 5 (1), 9. https://doi.org/10.1038/s41539-020-0067-8

McGee, M. (1979). Human spatial abilities: Psychometric studies and environmental, genetic, hormonal, and neurological influences. Psychological Bulletin, 86 (5), 889–918. https://doi.org/10.1037/0033-2909.86.5.889

Michael, W. B., Guilford, J. P., Fruchter, B., & Zimmerman, W. S. (1957). The description of spatial-visualization abilities. Educational and Psychological Measurement, 17 , 185–199. https://doi.org/10.1177/001316445701700202

Mix, K. S., & Cheng, Y. L. (2012). The relation between space and math: Developmental and educational implications. Advances in Child Development and Behavior, 42 , 197–243. https://doi.org/10.1016/b978-0-12-394388-0.00006-x

Mix, K. S., Hambrick, D. Z., Satyam, V. R., Burgoyne, A. P., & Levine, S. C. (2018). The latent structure of spatial skill: A test of the 2 × 2 typology. Cognition, 180 , 268–278. https://doi.org/10.1016/j.cognition.2018.07.012

Mix, K. S., Levine, S. C., Cheng, Y. L., Stockton, J. D., & Bower, C. (2021). Effects of spatial training on mathematics in first and sixth grade children. Journal of Educational Psychology, 113 (2), 304–314. https://doi.org/10.1037/edu0000494

Mix, K. S., Levine, S. C., Cheng, Y. L., Young, C., Hambrick, D. Z., Ping, R., & Konstantopoulos, S. (2016). Separate but correlated: The latent structure of space and mathematics across development. Journal of Experimental Psychology: General, 145 (9), 1206–1227. https://doi.org/10.1037/xge0000182

Moreau, D., & Wiebels, K. (2021). Assessing change in intervention research: The benefits of composite outcomes. Advances in Methods and Practices in Psychological Science, 4 (1), 2515245920931930. https://doi.org/10.1177/2515245920931930

Newcombe, N. S. (2013). Seeing relationships: Using spatial thinking to teach science, mathematics, and social studies. American Educator, 37 , 26–40.

Newcombe, N. S., Ratliff, K. R., Shallcross, W. L., & Twyman, A. D. (2009). Young children’s use of features to reorient is more than just associative: Further evidence against a modular view of spatial processing. Developmental Science, 13 (1), 213–220. https://doi.org/10.1111/j.1467-7687.2009.00877.x

Newcombe, N. S. & Shipley, T. F. (2015). Thinking about spatial thinking: new typology, new assignments. In J. S. Gero (Ed), Studying Visual and Spatial Reasoning for Design Creativity (pp. 179–192). Springer, Dordrecht . https://doi.org/10.1007/978-94-017-9297-4_10

Okamoto, Y., Kotsopoulos, D., McGarvey, L., & Hallowell, D. (2015). The development of spatial reasoning in young children. In B. Davis (Ed.), Spatial reasoning in the early years: Principles, assertions, and speculations (pp. 25–38). Routledge.

Oltman, P. K., Raskin, E., & Witkin, H. A. (1971). Group embedded figure test . Consulting Psychologists Press.

Oostermeijer, M., Boonen, A. J. H., & Jolles, J. (2014). The relation between children’s constructive play activities, spatial ability, and mathematical work problem-solving performance: A mediation analysis in sixth-grade students. Frontiers in Psychology, 5 , 782. https://doi.org/10.3389/fpsyg.2014.00782

Piaget, J., & Inhelder, B. (1956). The child’s conception of space. London: Routledge & Kegan Paul.

Potter, L. E. (1995). Small-scale versus large-scale spatial reasoning: Educational implications for children who are visually impaired. Journal of Visual Impairment & Blindness, 89 (2), 142–152.

National Research Council. (2006). Learning to think spatially . Washington, DC: The National Academies Press. https://doi.org/10.17226/11019

Quaiser-Pohl, C. (2003). The mental cutting test “Schnitte” and the Picture Rotation Test – Two new measures to assess spatial ability. International Journal of Testing, 3 (3), 219–231. https://doi.org/10.1207/S15327574IJT0303_2

Ramful, A., Ho, S. Y., & Lowrie, T. (2015). Visual and analytical strategies in spatial visualisation: Perspectives from bilateral symmetry and reflection. Mathematics Education Research Journal, 27 , 443–470. https://doi.org/10.1007/s13394-015-0144-0

Ramful, A., Lowrie, T., & Logan, T. (2017). Measurement of spatial ability: Construction and validation of the Spatial Reasoning Instrument for Middle School Students. Journal of Psychoeducational Assessment, 35 (7), 709–727. https://doi.org/10.1177/0734282916659207

Reuhkala, M. (2001). Mathematical skills in ninth-graders: Relationship with visuo-spatial abilities and working memory. Educational Psychology, 21 (4), 387–399. https://doi.org/10.1080/01443410120090786

Richardson, J. T. E. (1994). Gender differences in mental rotation. Perceptual and Motor Skills, 78 (2), 435–448. https://doi.org/10.2466/pms.1994.78.2.435

Rimfeld, K., Shakeshaft, N. G., Malanchini, M., Rodic, M., Selzam, S., Schofield, K., Dale, P. S., Kovas, Y., & Plomin, R. (2017). Phenotypic and genetic evidence for a unifactorial structure of spatial abilities. Proceedings of the National Academy of Sciences, 114 (10), 2777–2782. https://doi.org/10.1073/pnas.1607883114

Rutherford, T., Karamarkovich, S. M., & Lee, D. S. (2018). Is the spatial/math connection unique? Associations between mental rotation and elementary mathematics and English achievement. Learning and Individual Differences, 62 , 180–199. https://doi.org/10.1016/j.lindif.2018.01.014

Schenck, K. E., Kim, D., Swart, M. I., & Nathan, M. J. (2022, April). With no universal consensus, spatial system perspective affects model fitting and interpretation for mathematics . [Paper Presentation]. American Educational Research Association Conference, San Diego, CA.

Schenck, K. E. & Nathan, M. J. (2020, April). Connecting mathematics, spatial ability, and spatial anxiety . [Paper Presentation]. American Educational Research Association Conference, San Francisco, CA.

Schneider, J., & McGrew, K. (2012). The Cattell–Horn–Carroll model of intelligence. In D. Flanagan & P. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed., pp. 99–144). Guilford Press.

Schultz, K. (1991). The contribution of solution strategy to spatial performance. Canadian Journal of Psychology, 45 (4), 474–491. https://doi.org/10.1037/h0084301

Shah, P., & Miyake, A. (1996). The separability of working memory resources for spatial thinking and language processing: An individual differences approach. Journal of Experimental Psychology: General, 125 (1), 4–27. https://doi.org/10.1037/0096-3445.125.1.4

Shea, D. L., Lubinski, D., & Benbow, C. P. (2001). Importance of assessing spatial ability in intellectually talented young adolescents: A 20-year longitudinal study. Journal of Educational Psychology, 93 (3), 604–614. https://doi.org/10.1037/0022-0663.93.3.604

Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171 (3972), 701–703. https://doi.org/10.1126/science.171.3972.701

Sorby, S. (1999). Developing 3-D spatial visualization skills. Engineering Design Graphics Journal, 63 (2), 21–32.

Sorby, S. A. (2009). Educational research in developing 3-D spatial skills for engineering students. International Journal of Science Education, 31 (3), 459–480. https://doi.org/10.1080/09500690802595839

Sorby, S., Casey, B., Veurink, N., & Dulaney, A. (2013). The role of spatial training in improving spatial and calculus performance in engineering students. Learning and Individual Differences, 26 , 20–29. https://doi.org/10.1016/j.lindif.2013.03.010

Sorby, S. A., & Panther, G. C. (2020). Is the key to better PISA math scores improving spatial skills? Mathematics Education Research Journal, 32 (2), 213–233. https://doi.org/10.1007/s13394-020-00328-9

Spearman, C. (1927). The abilities of man, their nature and measurement . Macmillan.

Stieff, M. (2007). Mental rotation and diagrammatic reasoning in science. Learning and Instruction, 17 (2), 219–234.

Stieff, M., & Uttal, D. (2015). How much can spatial training improve STEM achievement? Educational Psychology Review, 27 (4), 607–615.

Tam, Y. P., Wong, T. T., & Chan, W. W. L. (2019). The relation between spatial skills and mathematical abilities: The mediating role of mental number line representation. Contemporary Educational Psychology, 56 , 14–24. https://doi.org/10.1016/j.cedpsych.2018.10.007

Tartre, L. A. (1990). Spatial orientation skill and mathematical problem solving. Journal for Research in Mathematics Education, 21 (3), 216–229.

Thompson, J. M., Nuerk, H.-C., Moeller, K., & Cohen Kadosh, R. (2013). The link between mental rotation ability and basic numerical representations. Acta Psychologica, 144 , 324–331. https://doi.org/10.1016/j.actpsy.2013.05.009

Thurstone, L. L. (1938). Primary mental abilities . University of Chicago Press.

Thurstone, L. L. (1950). Some primary abilities in visual thinking. Proceedings of the American Philosophical Society, 94 (6), 517–521.

Tufte, E. R. (2001). The visual display of quantitative information (2nd ed.). Graphics Press.

Tversky, B. (2019). Transforming thought. In Mind in motion (pp. 85–106). Basic Books.

Uttal, D. H., & Cohen, C. A. (2012). Spatial thinking and STEM education: When, why, and how? In B. Ross (Ed.), Psychology of learning and motivation (Vol. 57, pp. 147–181). Academic Press. https://doi.org/10.1016/B978-0-12-394293-7.00004-2

Uttal, D. H., McKee, K., Simms, N., Hegarty, M., & Newcombe, N. S. (2024). How can we best assess spatial skills? Practical and conceptual challenges. Journal of Intelligence, 12 (1), 8.

Uttal, D. H., Meadow, N. G., Tipton, E., Hand, L. L., Alden, A. R., Warren, C., & Newcombe, N. S. (2013). The malleability of spatial skills: A meta-analysis of training studies. Psychological Bulletin, 139 (2), 352–402. https://doi.org/10.1037/a0028446

Vandenberg, S. G., & Kuse, A. R. (1978). Mental rotations, a group test of three-dimensional spatial visualization. Perceptual and Motor Skills, 47 , 599–604. https://doi.org/10.2466/pms.1978.47.2.599

Verdine, B. N., Golinkoff, R. M., Hirsh-Pasek, K., Newcombe, N. S., Filipowicz, A. T., & Chang, A. (2014). Deconstructing building blocks: Preschoolers’ spatial assembly performance relates to early mathematical skills. Child Development, 85 (3), 1062–1076. https://doi.org/10.1111/cdev.12165

Voyer, D., Voyer, S., & Bryden, M. P. (1995). Magnitude of sex differences in spatial abilities: A meta-analysis and consideration of critical variables. Psychological Bulletin, 117 (2), 250–270. https://doi.org/10.1037/0033-2909.117.2.250

Wai, J., Lubinski, D., & Benbow, C. P. (2009). Spatial ability for STEM domains: Aligning over 50 years of cumulative psychological knowledge solidifies its importance. Journal of Educational Psychology, 101 (4), 817–835. https://doi.org/10.1037/a0016127

Walsh, V. (2003). A theory of magnitude: Common cortical metrics of time, space, and quantity. Trends in Cognitive Sciences, 7 , 483–488. https://doi.org/10.1016/j.tics.2003.09.002

Wang, L., Cohen, A. S., & Carr, M. (2014). Spatial ability at two scales of representation: A meta-analysis. Learning and Individual Differences, 36 , 140–144. https://doi.org/10.1016/j.lindif.2014.10.006

Weisberg, S. M., Schinazi, V. R., Newcombe, N. S., Shipley, T. F., & Epstein, R. A. (2014). Variations in cognitive maps: Understanding individual differences in navigation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40 (3), 669–682. https://doi.org/10.1037/a0035261

Witkin, H. A. (1950). Individual differences in ease of perception of embedded figures. Journal of Personality, 19 , 1–15. https://doi.org/10.1111/j.1467-6494.1950.tb01084.x

Witkin, H. A., Moore, C. A., Goodenough, D. R., & Cox, P. W. (1977). Field-dependent and field-independent cognitive styles and their educational implications. Review of Educational Research, 47 (1), 1–64. https://doi.org/10.3102/00346543047001001

Witkin, H. A., Oltman, P. K., Raskin, E., & Karp, S. A. (1971). A manual for the embedded figures test. Consulting Psychologist Press . https://doi.org/10.1007/978-0-387-79948-3_1361

Wolfgang, C., Stannard, L., & Jones, I. (2003). Advanced constructional play with LEGOs among preschoolers as a predictor of later school achievement in mathematics. Early Child Development and Care, 173 (5), 467–475. https://doi.org/10.1080/0300443032000088212

Wraga, M., Creem, S. H., & Profitt, D. R. (2000). Updating displays after imagined object and viewer rotations. Journal of Experimental Psychology: Learning, Memory, & Cognition, 26 (1), 151–168. https://doi.org/10.1037/0278-7393.26.1.151

Xie, F., Zhang, L., Chen, X., & Xin, Z. (2020). Is spatial ability related to mathematical ability: A meta-analysis. Educational Psychology Review, 32 , 113–155.

Yilmaz, B. (2009). On the development and measurement of spatial ability. International Electronic Journal of Elementary Education, 1 (2), 1–14.

Young, C. J., Levine, S. C., & Mix, K. S. (2018). The connection between spatial and mathematical ability across development. Frontiers in Psychology, 9 , 775. https://doi.org/10.3389/fpsyg.2018.00755

Young, C. J., Levine, S. C., & Mix, K. S. (2018). What processes underlie the relation between spatial skill and mathematics? In K. S. Mix & M. T. Battista (Eds.), Visualizing Mathematics. Research in Mathematics Education (pp. 195–228). Springer, Cham. https://doi.org/10.1007/978-3-319-98767-5_10

Zacks, J. M., Mires, J., Tversky, B., & Hazeltine, E. (2000). Mental spatial transformations of objects and perspective. Spatial Cognition and Computation, 2 , 315–332. https://doi.org/10.1023/A:1015584100204

Zacks, J., Rypma, B., Gabrieli, J. D. E., Tversky, B., & Glover, G. H. (1999). Imagined transformations of bodies: An fMRI investigation. Neuropsychologia, 37 (9), 1029–1040. https://doi.org/10.1016/S0028-3932(99)00012-3

Zhang, X., & Lin, D. (2015). Pathways to arithmetic: The role of visual-spatial and language skills in written arithmetic, arithmetic word problems, and nonsymbolic arithmetic. Contemporary Educational Psychology, 41 , 188–197. https://doi.org/10.1016/j.cedpsych.2015.01.005

Zhong, J. Y., & Kozhevnikov, M. (2016). Relating allocentric and egocentric survey-based representations to the self-reported use of a navigation strategy of egocentric spatial updating. Journal of Environmental Psychology, 46 , 154–175. https://doi.org/10.1016/j.jenvp.2016.04.0


Acknowledgements

We thank Dr. Martha W. Alibali and Dr. Edward M. Hubbard for their extensive and valuable feedback as part of the preliminary examination committee. We also thank Dr. Michael I. Swart for his feedback on initial and subsequent drafts and for lending graphic design knowledge. Last, we thank Dr. Mary Hegarty and Monica Mendoza for their feedback on the initial drafts of this work.

Open access funding provided by SCELC, Statewide California Electronic Library Consortium. No funding, grants, or other support was received for the submitted work.

Author information

Authors and affiliations

Department of Teaching and Learning, Southern Methodist University, Dallas, TX, USA

Kelsey E. Schenck

Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI, USA

Mitchell J. Nathan


Contributions

This work is primarily based on Kelsey E. Schenck’s preliminary examination thesis, written under her advisor, Mitchell J. Nathan. The idea for the article was Kelsey E. Schenck’s, developed under the guidance of Mitchell J. Nathan. Kelsey E. Schenck performed the initial literature search and drafted the initial work. Mitchell J. Nathan critically revised and contributed to subsequent drafts.

Corresponding author

Correspondence to Kelsey E. Schenck.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Schenck, K.E., Nathan, M.J. Navigating Spatial Ability for Mathematics Education: a Review and Roadmap. Educ Psychol Rev 36, 90 (2024). https://doi.org/10.1007/s10648-024-09935-5


Accepted: 05 August 2024

Published: 17 August 2024

DOI: https://doi.org/10.1007/s10648-024-09935-5


  • Spatial Ability
  • Mathematics Education
  • Student Cognition
  • Factor Analysis
