
Conceptualizations of E-recruitment: A Literature Review and Analysis

Mike Abia

Department of Computer Science, Namibia University of Science and Technology, 13 Jackson Kaujeua Street, Windhoek, Namibia

Irwin Brown

Department of Information Systems, University of Cape Town, Rondebosch, Cape Town, 7701, South Africa

There is diversity in the understanding of electronic recruitment (e-recruitment), which results in confusion about the meaning and use of the term. The purpose of this paper is to bring conceptual clarity by investigating the alternative conceptualizations of e-recruitment in academic literature. Using Grounded Theory Methodology (GTM) techniques, we analyzed the literature to reveal five alternative conceptualizations: (1) E-recruitment as a Technology Tool, (2) E-recruitment as a System, (3) E-recruitment as a Process, (4) E-recruitment as a Service, and (5) E-recruitment as a Proxy. The conceptualizations map out the scope of the definition and utilization of e-recruitment. Identifying conceptualizations of e-recruitment sets a platform for further research, which may include determining the relationships between the conceptualizations and identifying the conceptualizations present in different settings, among many other possible research topics.

Introduction

E-recruitment has many labels, including internet recruitment, online recruitment, web recruitment and many others. Unlike traditional recruitment, e-recruitment makes use of information technology to handle the recruitment process. Breaugh et al. [1] defined a recruitment model that presents the recruitment process at a macro level with the following activities: setting recruitment objectives, developing a strategy, performing the recruitment activity, and obtaining and evaluating recruitment results. Recruiters compete with each other for candidates (jobseekers suitable for available jobs), while jobseekers compete for jobs; this competition drives both groups to adopt information technologies at accelerated rates in order to take the strain out of some of the recruitment activities [2-7]. “For most job seekers, the Internet is where the action is” [3, p. 140]. Thus, to secure candidates, recruiters need to move swiftly to locate and hire, which may require the use of a multitude of information technologies in the process [8, p. 130].

There is evidence in research papers that academic disciplines and stakeholders define e-recruitment in varied ways. This variety of definitions is expected because e-recruitment is part of e-HRM (electronic Human Resource Management), which itself has different definitions depending on the context [6, p. 26], [9, p. 98]. Studies based on these definitions tend to produce overlapping or contradictory results because of the overlaps and differences in the definitions [9, p. 100]. The differences in definitions, aside from being problematic, are evidence of the variety in conceptualizations of e-recruitment. Thus, to arrive at a standard definition of e-recruitment, the conceptualizations of e-recruitment need to be known. To our knowledge, no research paper in e-recruitment has focused on conceptualization of e-recruitment; however, there are studies in other areas of information systems (IS) that have focused on conceptualization [10-19]. Most view conceptualization as the formulation of a view about the nature of a phenomenon. The research questions to be answered are:

  • What conceptualizations of e-recruitment exist in literature?
  • How can the conceptualizations be described and explained?

Methodology for Reviewing Literature

Because of the large number of research papers on e-recruitment, we aimed to select papers for review that would embrace the full variety of conceptualizations of e-recruitment. We also wanted a flexible review methodology that would allow papers to be selected and analyzed simultaneously, as the conceptualizations emerged, rather than a sequential methodology requiring all research papers to be selected beforehand. Such flexibility is provided by applying grounded theory methodology (GTM) as a review methodology [20]. GTM techniques used in this study included open coding to identify concepts, constant comparative analysis to refine and differentiate conceptualizations, and theoretical sampling to identify further relevant literature [21, 22].

Figure  1 is a flowchart depicting how the literature was processed from search until conceptualizations of e-recruitment were identified, saturated and completed.

Fig. 1. GTM for reviewing literature

Searching for Articles

We used the web search engine Google Scholar to search for articles electronically, feeding in keywords synonymous with e-recruitment: e-recruiting, e-recruitment, e-HRM, e-Human Resource Management, electronic HRM, electronic Human Resource Management, internet recruiting, internet recruitment, online recruiting, online recruitment, recruiting online, recruiting on the internet, recruiting on the web, recruitment online, web-based recruiting, web-based recruitment, web recruiting, web recruitment [20].

After an initial search on Google Scholar and filtering of articles for relevance based on titles and abstracts, we had 445 journal articles and conference papers published between 1998 and 2019 in approximately 145 sources. The search process provided a large set of articles, but not all of them were useful for the review; a selection process had to take place to sample useful and relevant articles.

Theoretical Sampling of Articles

Ideally, all papers on e-recruitment would be included in the review. Alternatively, the papers included in the analysis had to be a representative sample of all e-recruitment papers relevant to the developing conceptualizations. Given the vast amount of research in e-recruitment and the large number of articles from our search and filtering, it would have been difficult and time consuming to include every relevant article. The alternative of a representative sample was viable, and GTM’s theoretical sampling [21] made it feasible to meet the objectives of this research.

An initial article for analysis was picked from the population of 445 articles. The picking of subsequent articles for inclusion in the sample was informed by the emerging conceptualizations. Theoretical sampling was performed until all the conceptualizations were saturated and complete. Glaser [22] defines saturation as a state where new data does not bring new properties to the concepts. In an effort to attain completeness, a check was done to make sure all conceptualizations were included. Theoretical sampling ended when saturation and completeness were achieved; at this point the number of research articles involved in identifying conceptualizations was counted. In the end, 26 research articles were relevant for identifying and explaining conceptualizations of e-recruitment.
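To make the sampling loop of Fig. 1 concrete, the toy sketch below illustrates saturation-driven sampling in code. It is our illustration only, not the authors’ tooling: articles are reduced to invented conceptualization “indicators”, and sampling stops once a run of articles adds no new properties. It also shows why the completeness check above matters, since a purely saturation-based stop can miss a rare conceptualization.

```python
# Toy illustration (invented data, not the study's tooling) of GTM-style
# theoretical sampling with a saturation stop.

def theoretical_sampling(articles, saturation_window=3):
    conceptualizations = {}   # label -> set of properties seen so far
    sampled = []
    unsaturated_streak = 0    # consecutive articles adding nothing new

    for article in articles:  # in GTM the order would be memo-guided, not fixed
        new_property_found = False
        for label, props in article.items():           # open coding result
            known = conceptualizations.setdefault(label, set())
            fresh = set(props) - known                  # constant comparison
            if fresh:
                known |= fresh
                new_property_found = True
        sampled.append(article)
        unsaturated_streak = 0 if new_property_found else unsaturated_streak + 1
        if unsaturated_streak >= saturation_window:     # saturation reached
            break
    return conceptualizations, sampled

# Invented corpus: each dict maps a conceptualization to indicators in one article.
corpus = [
    {"tool": ["ranking software"]},
    {"system": ["IT + organization"], "process": ["activities"]},
    {"service": ["repository", "medium"]},
    {"tool": ["ranking software"]},     # nothing new -> streak 1
    {"process": ["activities"]},        # nothing new -> streak 2
    {"system": ["IT + organization"]},  # nothing new -> streak 3: stop
    {"proxy": ["company image"]},       # never reached: a completeness check would catch this
]
concepts, sample = theoretical_sampling(corpus)
print(sorted(concepts), len(sample))    # ['process', 'service', 'system', 'tool'] 6
```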

Analyzing Articles

The analysis of articles that let conceptualizations of e-recruitment emerge (see Fig. 1) required constant comparison: codes were compared to codes and concepts to concepts to find and note their relationships and to further develop the labelled conceptualizations [21, 22]. The emerging conceptualizations served as a framework for further selection of articles, and through systematic deduction from them, possibilities and probabilities were determined to guide the next cycle of article selection. Memos were created to capture emergent ideas; these memos also served to direct which article to sample next.

Every sampled article was investigated for its perspective on the essence of e-recruitment, that is, the most essential or vital part that embodied its conceptualization of e-recruitment. Indicators in the article brought forth the conceptualizations. We moved on to retrieve another article only after the current article had been fully analyzed. The resulting conceptualizations are detailed in the next section.

Conceptualizations of E-recruitment

Five conceptualizations of e-recruitment emerged from the extant literature, namely: e-recruitment as a technology tool, e-recruitment as a system, e-recruitment as a process, e-recruitment as a service, and e-recruitment as a proxy. Although many of the articles exhibited a mixture of conceptualizations, one or two stood out in each article. Table 1 gives example research articles for each conceptualization. Following Table 1, each of the conceptualizations is described and explained in the sub-sections below.

Table 1. Conceptualizations of e-recruitment, with descriptions and example articles.

1. E-recruitment as a Technology Tool: E-recruitment is viewed in some studies as a technology tool. Articles: [ ]

2. E-recruitment as a System: E-recruitment is a group of independent but interrelated elements comprising a unified whole. These elements include technology, society, organizations, etc. Articles: [ – ]

3. E-recruitment as a Process: E-recruitment is a set of systematic, well-coordinated activities. The activities are performed by information technology or traditionally. Articles: [ , – ]

4. E-recruitment as a Service: E-recruitment is a service to recruitment. It cannot be entrusted to do all that is needed for successful recruitment, therefore it only provides certain functionalities. Articles: [ , , ]
   a. E-recruitment as a Repository: E-recruitment provides storage facilities for recruitment data. Articles: [ , , ]
   b. E-recruitment as a Medium: E-recruitment is a communications conduit between stakeholders in recruitment. Articles: [ , – ]
   c. E-recruitment as a Program (E-recruitment as an Implemented Algorithm): E-recruitment is a set of precise rules for solving a problem. Articles: [ , , ]

5. E-recruitment as a Proxy: E-recruitment acts on behalf of organizational and societal entities. Articles: [ , , ]

E-recruitment as a Technology Tool

E-recruitment as a technology tool is a conceptualization of e-recruitment as a technical artefact [19]. This view is demonstrated by Faliagka et al. [23], who presented a tool to automate the ranking of applicants in recruitment.

E-recruitment as a System

Studies that view e-recruitment as a system conceptually divide e-recruitment into independent but interrelated elements, at the core of which are information technology, society, organizations, etc. The system view allows each component to receive input from other elements and to produce output that serves as input for other components [25]. The system view of e-recruitment assigns all automating functions to the IT artefact of the system, while organizational recruitment experts evaluate the outcome [24]. While some stakeholders view e-recruitment as a system, others view it as a process.

E-recruitment as a Process

Instead of focusing on entities, the process view of e-recruitment focuses on e-recruitment activities [37]. There is no attempt to set boundaries between the IT artefact, society and organization; rather, activities are clearly identified and can be performed either by the IT artefact or by human actors. An example is e-recruitment seen as a data collection activity using an online system [33], although recruitment activities can be performed by human actors too [37]. With the process view of e-recruitment, the end goal is the execution of all the recruitment activities.

E-recruitment as a Service

The view exists that e-recruitment is a service to recruiters and job-seekers. Many e-recruitment platforms are independent of the organizations or societies they serve. Sub-views of e-recruitment as a service include: e-recruitment as a repository, e-recruitment as a medium, and e-recruitment as a program.

E-recruitment as a repository is a view in which e-recruitment serves as a store of data about jobs, recruiters and employers [40]. In one study, online forms were filled in by jobseekers and the data provided on the forms was stored for recruiters and other stakeholders to retrieve [33]. While the view of e-recruitment as a repository is usually held when e-recruitment is newly adopted, other service views tend to follow as adoption matures.

E-recruitment as a medium is another view held; e.g., Bartram [41] portrays e-recruitment as a facilitator of communication between jobseekers and organizations. Traditional media like newspapers [42] are sometimes found inconvenient, so e-recruitment takes their place. Some organizations employ e-recruiters who form part of e-recruitment and serve to link the IT artefact and other elements in recruitment. Although e-recruitment as a medium improves communication speed, it also comes with downsides, e.g. information overload [37].

E-recruitment as a program is a view that associates e-recruitment with calculation and the logical interpretation and processing of data. One study included, as an algorithmic module, a Pre-screening Management System to automatically assess the extent of the match between an applicant’s qualifications and the job requirements [25]. Such modules are found in many e-recruitment systems, given the high volumes of applications associated with e-recruitment. Therefore, many studies espouse the view that e-recruitment serves to provide a convenient matching program.
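To make the program view concrete, the sketch below shows one naive way such a pre-screening module might score an applicant against a job. This is an invented illustration, not the Pre-screening Management System from [25]; the fields and weights are assumptions.

```python
# Invented illustration of a pre-screening matcher (not the system from [25]):
# score an applicant against a job by required-skill coverage and experience.

def prescreen_score(applicant_skills, years_experience, job):
    """Return a 0..1 match score; job is a dict with 'required_skills'
    (a set of strings) and 'min_years' (an int)."""
    required = job["required_skills"]
    coverage = len(required & applicant_skills) / len(required) if required else 1.0
    if years_experience >= job["min_years"]:
        experience_ok = 1.0
    else:
        experience_ok = years_experience / job["min_years"]
    return 0.7 * coverage + 0.3 * experience_ok   # weights are arbitrary

job = {"required_skills": {"sql", "python", "etl"}, "min_years": 3}
print(prescreen_score({"python", "sql"}, 2, job))  # ~0.67
```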

E-recruitment as a Proxy

Orlikowski and Iacono [19] reveal the pervasiveness of the proxy view of the IT artefact in IS literature. E-recruitment may act to present the image of the company, the culture of the company, etc. Braddy et al. [45] examined the effects of website content features on people’s perceptions of organizational culture. Their study implies that e-recruitment, especially the IT artefact (the website), acts on behalf of some corporate image management entity in the organization. Some studies focused on website content [45], while others focused on website characteristics [46].

Contribution and Implications of Conceptualizations of E-recruitment

Conceptualizations of e-recruitment contribute to the understanding of e-recruitment and have implications for both practice and research, as discussed in this section.

Contribution of the Research

This study mapped the scope of the definition of e-recruitment by explaining the diversity in its understanding. This mapping was done by identifying five conceptualizations of e-recruitment and labelling them: E-recruitment as a Technology Tool, E-recruitment as a System, E-recruitment as a Process, E-recruitment as a Service and E-recruitment as a Proxy. Taking note of these conceptualizations provides practitioners with a tool to enhance productivity while allowing researchers to focus their research more precisely.

Implications of Conceptualizations of E-recruitment

The implications of conceptualizations of e-recruitment stem from stakeholders being able to attach a label to their conceptualization and apply it in their trade or scholarly pursuits. Labelling conceptualizations provides a pathway to the standardization of e-recruitment. The benefits of such standardization include a common understanding of concepts and ease of communication. While these are overarching implications, some implications are specific to practice or research.

Implications for Practice.

Labelled and well-defined conceptualizations of e-recruitment set bounds on what practitioners should expect in their practice and strive towards when they adopt a particular conceptualization. Well-defined conceptualizations, such as the ones in this study, provide alternative options that practitioners can adopt depending on their needs; practitioners can always adopt the conceptualization that best reflects their situation. As there are implications for practice, there are implications for research as well.

Implications for Research.

Through this identification, description and explanation of conceptualizations of e-recruitment, researchers now have a number of conceptualizations to consider. Focus on one specific conceptualization, or on a subset of them, is therefore possible. Such focus allows the researcher to delimit their research.

Conclusion and Further Research

The study highlighted the problem of diversity in the understanding of e-recruitment, which has received little explicit attention in the literature, and proposed that identifying and labelling the varied conceptualizations of e-recruitment can contribute to better articulation of that diversity. Using GTM, the literature on e-recruitment was reviewed and conceptualizations of e-recruitment were identified. Taking note of conceptualizations provides practitioners with a tool to enhance productivity while allowing researchers to focus their research. In addition, this study provides insight into directions for potential further study.

Further Research

While this research contributes to the understanding of e-recruitment, further research can address several issues not covered herein. Understanding the relationships between conceptualizations helps to avoid conceptual chaos; therefore, further research aimed at relating the conceptualizations is essential. Conceptualizations of e-recruitment may also be compared to conceptualizations of other e-phenomena, contributing to the development of a more general understanding of IS and the IT artefact.

Contributor Information

Marié Hattingh, Email: [email protected] .

Machdel Matthee, Email: [email protected] .

Hanlie Smuts, Email: [email protected] .

Ilias Pappas, Email: [email protected] .

Yogesh K. Dwivedi, Email: ykdwivedi@gmail.com.

Matti Mäntymäki, Email: [email protected] .

Mike Abia, Email: abiamike@gmail.com.

Irwin Brown, Email: [email protected] .

Edit for Scribbr

Join the top 2% of academic editors worldwide. Apply to become a Scribbr editor now:

  • Pass the Scribbr Academy
  • Double your editing speed and learn new skills
  • Exchange knowledge with a community of editors
  • Help students become stronger writers
  • Make extra money


About Scribbr

We believe that all students deserve guidance as they learn how to communicate effectively and put their ideas on paper. With every edit, we provide personalized feedback to help students learn from their mistakes and become better writers.

Did you know that we have a Trustpilot score of 4.7 out of 5? We take pride in the quality of our service and our highly skilled editors.

What makes Scribbr unique?

  • Passionate team of experts
  • New orders every day, all year round
  • 24/7 support by phone, email, and chat
  • Courses, mentorship, and skills development
  • Open community and community events


Flexible work on your terms

As a Scribbr editor, you’ll help students all over the world from your home office. All you need is a laptop and a wifi connection.

How does it work?

  • Set your availability calendar.
  • Accept or decline assignments.
  • Edit with track changes.
  • Share constructive feedback.
  • Work whenever, wherever.

We have transparent per-word fees and pay you every two weeks. You can earn an average of €145 for reviewing a 10,000-word thesis.

Our support team and community of editors are available every day to help you. You’ll have the best of both worlds: the flexibility you’ve always dreamed of and an awesome team that has your back.


Develop and grow

We value growth, and it’s at the heart of everything we do. As part of your application process, you will join the Scribbr Academy, where you’ll learn how to perform our services and receive personalized coaching. If you’re successful, it doesn’t stop there. A lot more is waiting for you!

Learning culture at Scribbr

  • Skills roadmap for editors
  • Courses and mentorship opportunities
  • Bi-weekly newsletters with valuable tips
  • 24/7 help with editing questions


Freelancing simplified

As a Scribbr editor, you’ll receive work directly in your inbox. Our HQ team works behind the scenes to keep the work flowing.

Here’s what we do for you:

  • Our support team handles customer communication and logistics
  • Our marketing team drives new customers to our platform
  • Our finance team does the invoicing to pay you every two weeks
  • Our operations team collects and implements your feedback

Let us handle the business, so you can focus on editing.


Make a real impact

See how you help students every day!

After students review your feedback, they can send you a digital thank you note. We receive words of gratitude from students all the time.

As a qualified editor, you’ll be able to see the positive difference you make in students’ lives on your thank you wall. Until then, check out these Trustpilot reviews from happy customers.


Join the community!

Welcome to the Scribbr community!

As part of this community, you’ll be in the top 2% of academic editors worldwide.

Tap into this network:

  • Ask questions and get advice
  • Meet fellow editors all over the world
  • Participate in workshops and events
  • Help other editors realize their untapped potential

We are on a mission to make Scribbr a place where freelance editors love to work. When you join our team, you join a supportive and thriving community of like-minded editors from all over the world!


Qualifications

Do you want to join our editor team? We’d love to invite you to start the application process!

Requirements

  • A bachelor’s degree or higher
  • Interest in a wide range of subjects
  • Microsoft Word skills and tech skills
  • Availability to edit 10,000 words per week
  • Prior academic editing experience
  • Freelance and remote work experience
  • Interest in a long-term collaboration

Why the top 2%?

We promise students that we work with highly skilled editors—and to keep this promise, we’ve developed a unique (and admittedly demanding) application process for our editors.

Out of every 100 applicants, we only qualify 2 new editors. We use our challenging language quiz to identify the applicants who are the best fit for our team. As you advance through the process, you’ll receive more support and feedback from our Academy Coaches. With every step, you’ll get closer to becoming a qualified editor.

Do you have what it takes? We look forward to welcoming you to our team!


Language quiz

Only 2% of applicants pass our challenging grammar and style quiz. If you’re up for the challenge and make the grade, we’ll invite you to apply.


Test Document

You’ve proven that you know your grammar — now, we want to see how you apply that knowledge through three short sample edits.


Scribbr Academy

This is the final stage of your application process. During the Scribbr Academy you will learn how to edit according to Scribbr guidelines, and get tested via simulation orders (the quantity depends on your performance). Throughout this stage you’ll receive support from our Academy Coaches every step of the way.

Thank you for your interest in working as an editor! Unfortunately, we are not recruiting at the moment, but we will make sure to update this page whenever we start our recruitment process again.

Ask our team

Want to contact us directly? No problem. We are always here for you.


Frequently asked questions

All Scribbr editors are native speakers, which means that they have spoken English since early childhood. We only work with native speakers because these editors understand the cadence of the language and have mastered its idiomatic forms.

It is difficult for non-native speakers to acquire these traits, even if they are fluent and have spoken the language throughout their adult lives. Since our customers wish to work only with native speakers, we have no flexibility in this requirement.

We don’t rely solely on nativeness. Our editors are vetted through a rigorous application process, through which they are asked to demonstrate technical grammar knowledge, familiarity with academic writing, and an understanding of editing principles. All successful applicants also complete our Scribbr Academy training program, where they learn how to edit for students.

Editor application process

Editing for students is different from other kinds of editing. The Scribbr Academy is the last step of your application process, but it comes with some benefits. During your time in the Academy, we will train you on how to perform Scribbr’s services and edit in the Scribbr style. The training also features practical elements, such as one-on-one coaching, that are beneficial to you. In this way, we will prepare you to face real-life student orders and jump-start your editing career at Scribbr.

In order to become a qualified editor at Scribbr, you will need to apply the learnings from the Academy and pass 2–5 simulation orders (the exact number depends on performance).

You must apply through our website and complete all the steps in the Scribbr editor application process.

It is not possible to see the answers to our quiz. If you’re looking for more insights on related issues, check out Scribbr’s language articles and websites that address grammar and writing issues.

If you’re accepted as an editor in the Scribbr Academy, the information package you gain access to includes an article with the correct answers to a previous version of our language quiz.

On average, you can expect to earn approximately €20 to €30 per hour as a Scribbr editor.

The earnings are calculated based on fixed per-word rates that we have set for different kinds of assignments. We will communicate these rates to you as soon as you are in your Scribbr Academy.

The per-word rate for each order is determined by:

  • The editing deadline (the shorter the deadline, the higher the rate); and
  • The services purchased (whether a Structure or Clarity Check is booked on top of standard proofreading and editing).

In our Scribbr Academy, we train you to edit as efficiently as possible—which will help you to increase the speed at which you work. For example, we include a Scribbr Word macro that you can use to easily utilize standardized in-text comments.

Incoming editors should be highly knowledgeable regarding grammar, academic style, and the conventions of both US English and UK English.

You’ll need this knowledge to not only edit student papers but also provide individualized feedback for students.

We also expect new editors to have read widely in a variety of fields and to feel comfortable editing academic texts in a range of subjects, including the hard sciences and the social sciences. New editors likewise need to be familiar with the structure of different types of academic texts, including dissertations, theses, reports, and essays.

Finally, you’ll need to be comfortable working with Microsoft Word, including its Track Changes feature.

Since Scribbr specializes in academic editing for students, we require our editors to be thesis and academic writing experts. Every Scribbr editor has a thorough understanding of academic writing conventions and research concepts used in higher education.

Most of the papers we receive are theses and dissertations. We prefer to work with editors who know first-hand how difficult it is to write a thesis, as they can offer constructive and relevant advice to our students.

Therefore, we only accept applications from editors with a university degree.

However, a university degree alone is not enough. All applicants must also demonstrate technical grammar knowledge and fundamental editing skills during the application process. Applicants must also complete our rigorous Scribbr Academy training program before they can join the editor team.

Incoming editors must be familiar with the conventions of both US English and UK English and able to consistently follow the related rules. However, if you’re only familiar with one of these dialects, don’t despair! Getting up to speed on the major differences between the two systems is definitely doable. Our Knowledge Base is a good place to start.

If you are familiar with the conventions of US and UK English and can edit according to them, you are more than welcome to apply. At this time, however, we cannot accept applicants who are only familiar with other English dialects. The reason is that the vast majority of our clients require their papers to be written in US or UK English.

The application process consists of three steps that you must successfully complete to become an active editor within our system:

  • Scribbr language quiz: 30 minutes
  • Scribbr application assignment: 2.5 hours
  • Scribbr Academy: 2–3 weeks (the Academy has to be completed within 4 weeks to pass)

The speed at which you are able to complete the application process depends on your availability and the level of the work you submit. For us, it is important not only that you edit according to our guidelines, but also that you feel you have enough experience with us to make the leap to being an active Scribbr editor.

Throughout the process, you will receive feedback from experienced editors – so no matter what happens, you won’t be wasting your time!

When we receive a new order, we choose the most suitable Scribbr editor based on the following factors:

  • Availability. If you would like to receive a lot of orders, you can indicate that you are available immediately. We will then try to send more assignments your way.
  • Interest in the subject. We will not be as fast to send you orders from fields you have not marked as preferred.
  • Returning client. We will automatically send you orders from a returning student whose work you have already edited, unless he or she specifically requests otherwise.

The moment we have a new order for you, we will send you an email, an SMS and a notification via your Scribbr account on our website. You may then choose to accept or decline that assignment. You make this decision for every order we send you.
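As a hypothetical illustration only (not Scribbr’s actual system), the three factors above could be expressed as a simple scoring rule like the sketch below; every name and weight in it is invented.

```python
# Hypothetical sketch of the order-routing factors described above
# (returning client, availability, subject interest); not Scribbr's real system.

def rank_editors(order, editors):
    def score(editor):
        s = 0
        if order["client_id"] in editor["past_clients"]:
            s += 100                      # returning clients go back to the same editor
        if editor["available_now"]:
            s += 10                       # available editors get more orders
        if order["field"] in editor["preferred_fields"]:
            s += 5                        # preferred fields are matched faster
        return s
    return sorted(editors, key=score, reverse=True)

editors = [
    {"name": "A", "available_now": True,  "preferred_fields": {"Law and Policy"}, "past_clients": set()},
    {"name": "B", "available_now": False, "preferred_fields": {"Economics"},      "past_clients": {42}},
]
order = {"client_id": 42, "field": "Economics"}
print([e["name"] for e in rank_editors(order, editors)])  # ['B', 'A']
```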

All orders are classified into 1 of 9 categories:

  • Business and Management: Business Administration, Hotel Management, Accounting, Marketing
  • Economics: Commercial Economics, Econometrics, Finance
  • IT and Engineering: ICT, Computer Science, Artificial Intelligence, Applied Mathematics, Civil Engineering, Industrial Design, Electrical Engineering
  • Natural and Life Sciences: Biomedical Sciences, Biology, Chemistry
  • Geography, Agriculture and Environment: Ecology, Earth Sciences, Environmental Studies, Urban Planning
  • Health and Medical Sciences: Medicine, Obstetrics, Pharmacy, Nutrition, Dentistry
  • Arts and Humanities: Philosophy, History, Literature, Cultural Studies, Theology
  • Law and Policy: Law, Political Science, Public Policy, Human Rights
  • Social and Behavioral Sciences: Psychology, Sociology, Anthropology, Communication Science, Education

You can specify the fields that you are interested in. When we send you an order, we always take your preferences into account.


No, you don’t.

As an editor, you are affiliated with us on a freelance basis. You can work for us from anywhere in the world and from any time zone.

It is important that you are frequently online and have a phone with Internet access, as we will send you both an e-mail and an SMS as soon as we have a new assignment for you.

A Comprehensive Framework for Online Job Portals for Job Recommendation Strategies Using Machine Learning Techniques

  • Conference paper
  • First Online: 08 November 2022


  • Kamal Upreti 12 ,
  • Shikha Mittal 13 ,
  • Prakash Divakaran 14 ,
  • Prashant Vats 15 ,
  • Manpreet Bajwa 15 &
  • Sandeep Singh 15  

Part of the book series: Lecture Notes in Networks and Systems ((LNNS,volume 520))


The employment market in today’s society is growing increasingly active, which makes identifying a clear opportunity a difficult endeavor, particularly for newcomers unfamiliar with the numerous possible professions. As a result, the need for employment recommendation systems has been steadily increasing. Many systems employ suggestions to provide consumers with personalized solutions. By examining job recommendation articles, we take into account the various machine learning algorithms and models considered in this study. The information in the student’s résumé is compared to the specifications of the job openings. Users’ abilities, knowledge, previous employment, demographic data, and other necessary details are extracted by recommendation apps. Based on the extracted information, the applicant is also presented with fresh positions unrelated to the one being sought. We found that, when applying content-based filtering with classification methods such as SVM, KNN, and random forest, the random forest approach delivers the best outcomes for our application. Python is used to construct the recommendation engine.
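As a rough illustration of the approach the abstract describes (content-based features plus a random forest classifier), the sketch below uses scikit-learn on invented toy data; it is not the authors’ implementation, and all feature choices here are assumptions.

```python
# Minimal illustration (invented data, not the paper's code) of content-based
# job matching with a random forest: TF-IDF features from resume/job text,
# label = whether the pairing was a good match.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

pairs = [
    ("python sql data pipelines", "data engineer python sql etl"),
    ("retail sales customer service", "data engineer python sql etl"),
    ("java spring microservices", "backend developer java spring"),
    ("graphic design illustrator", "backend developer java spring"),
]
labels = [1, 0, 1, 0]  # 1 = good match

# One document per resume/job pair; TF-IDF ignores the "||" separator.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(resume + " || " + job for resume, job in pairs)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, labels)

query = vectorizer.transform(["python sql warehousing || data engineer python sql etl"])
print(model.predict_proba(query))  # probability of a good match
```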




Author information

Authors and Affiliations

Department of Computer Science and Engineering, Dr. Akhilesh Das Gupta Institute of Technology and Management, New Delhi, India

Kamal Upreti

Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India

Shikha Mittal

Department of Business Administration, Himalayan University, Itanagar, Arunachal Pradesh, India

Prakash Divakaran

Department of Computer Science and Engineering, SGT University, Gurugram, Haryana, India

Prashant Vats, Manpreet Bajwa & Sandeep Singh


Corresponding author

Correspondence to Kamal Upreti.

Editor information

Editors and Affiliations

Singidunum University, Belgrade, Serbia

Milan Tuba

ITM University, Gwalior, Madhya Pradesh, India

Shyam Akashe

Global Knowledge Research Foundation, Ahmedabad, India

Amit Joshi


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper.

Upreti, K., Mittal, S., Divakaran, P., Vats, P., Bajwa, M., Singh, S. (2023). A Comprehensive Framework for Online Job Portals for Job Recommendation Strategies Using Machine Learning Techniques. In: Tuba, M., Akashe, S., Joshi, A. (eds) ICT Infrastructure and Computing. Lecture Notes in Networks and Systems, vol 520. Springer, Singapore. https://doi.org/10.1007/978-981-19-5331-6_74


DOI: https://doi.org/10.1007/978-981-19-5331-6_74

Published: 08 November 2022

Publisher Name: Springer, Singapore

Print ISBN: 978-981-19-5330-9

Online ISBN: 978-981-19-5331-6

eBook Packages: Intelligent Technologies and Robotics (R0)



Measuring Bias in Job Recommender Systems: Auditing the Algorithms

We audit the job recommender algorithms used by four Chinese job boards by creating fictitious applicant profiles that differ only in their gender. Jobs recommended uniquely to the male and female profiles in a pair differ modestly in their observed characteristics, with female jobs advertising lower wages, requesting less experience, and coming from smaller firms. Much larger differences are observed in these ads’ language, however, with women’s jobs containing 0.58 standard deviations more stereotypically female content than men’s. Using our experimental design, we can conclude that these gender gaps are generated primarily by content-based matching algorithms that use the worker’s declared gender as a direct input. Action-based processes like item-based collaborative filtering and recruiters’ reactions to workers’ resumes contribute little to these gaps.
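The headline figure (e.g., 0.58 standard deviations more stereotypically female content) is a standardized mean difference across profile pairs. The toy computation below, with invented numbers rather than the study’s data, shows the shape of that calculation:

```python
# Toy illustration (invented numbers, not the study's data) of the audit's
# core comparison: standardized gender gap in an ad attribute across pairs.
import statistics

# "Female-content" score of ads recommended to the female vs male profile, per pair.
female_profile_ads = [0.62, 0.55, 0.71, 0.58]
male_profile_ads   = [0.31, 0.29, 0.38, 0.33]

gaps = [f - m for f, m in zip(female_profile_ads, male_profile_ads)]
pooled_sd = statistics.pstdev(female_profile_ads + male_profile_ads)
print(statistics.mean(gaps) / pooled_sd)  # gap in standard-deviation units
```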

We thank Sarah Bana, Kelly Bedard, Clément de Chaisemartin, Joanna Lahey, Heather Royer, Benjamin Villena Roldan, Kim Weeden, Catherine Weinberger, seminar participants at the University of Oregon and UC San Diego, and participants at the Chinese Labor Economists Society Conference (Shandong University) and LM2C2 Workshop on Gender and Labor Market Mismatches, Santiago, Chile for many helpful comments. This study was approved under UCSB IRB No.17-20-0451 and is in the AEA RCT Registry under AEARCTR-0006101. The authors declare that they have no relevant financial relationships or other conflicts of interest affecting this manuscript. The views expressed herein are those of the authors and do not necessarily reflect the views of the National Bureau of Economic Research.




A Review Study on Online Job Portal


2019, International Journal of Scientific Research in Computer Science, Engineering and Information Technology

Gaining knowledge and explicit job skills have become fundamental objectives for students in universities. Knowledge is important for making informed decisions, particularly in critical situations. Learning and knowledge management in any organization are pivotal to giving it a competitive edge in today's challenging and globalized environment. In this paper we present the design of an online recruitment system that enables businesses to post their job advertisements, which job seekers can refer to when searching for jobs. This job portal can capture job requirements based on industry needs.

Related Papers


IRJET Journal

Software for the training and placement cell of a college is needed by the students and the institute management for proper placement and training of the institute's students. It helps students provide their profiles to the training and placement cell and update them as they progress towards the end of their course. The students also get to know about the companies coming for the on-campus/off-campus/pool/group-pool categories of campus interviews. This paper emphasizes the significance of an on-campus online job recruitment system and its function in assisting students in obtaining available employment. It highlights the issues with traditional employment practices, particularly for college students. The system also allows teachers and placement officers to view placement statistics. The work discussed in this paper is based on an e-recruitment system developed for the Dayananda Sagar College of Engineering campus, one of Bangalore's major engineering institutes. This system proves valuable for everybody, including firms, students, and the university, with features such as curriculum vitae ranking, job recommendation based on various levels of talent, smart multi-criteria search, and graduate tracking.

This paper presents and emphasizes the need for an online employment posting platform for colleges and the efficacy of such a system in connecting students with job possibilities. Historically, Human Resource management has utilized employment websites for candidate sourcing and placement. The current project is an employment website developed for one of the most prestigious engineering schools, a variant of such job boards tailored to serve the students of the institution. With functions such as job suggestions, which offer learners recommendations based on their skills, and candidate filtering, which aids employers in application matchmaking, the platform is anticipated to be useful both for students investigating job opportunities and for employers locating candidates appropriate to the position.

International Journal of Computer Theory and Engineering

norizan mohd yasin

International Journal of Scientific Research in Computer Science, Engineering and Information Technology

International Journal of Scientific Research in Computer Science, Engineering and Information Technology IJSRCSEIT

Searching for the jobs that best suit their interests and skill set is quite a challenging task for job seekers. The problem arises from not having proper knowledge of an organization's objectives, its work culture and its current job openings. In addition, finding workers with the desired skill set to fill current job openings is an important task for the recruiters of any organization, agency, company, contractor, etc. Online job search applications have made job seeking convenient on both sides. A job search application is the solution where the recruiter as well as the job seeker meet to fulfil their individual requirements. It is the cheapest as well as the fastest mode of communication, reaching a wide range of desired workers with a single click irrespective of their geographical distance. The application "Job Search Application" provides a convenient and easy search application for job seekers to find their desired jobs and for recruiters to find the candidates that suit them best. Skilled job seekers from any background can look for current job openings. Job seekers can register with the application and update their details and skill set, along with how long they have been working. They can search for the availability of jobs and apply to the positions they want to work in. Android, being open source, has already made its mark in mobile application development, so the user functionalities are developed as an Android application. Recruiters can register with the application and post their current openings. They can also view the job applicants and screen them for the best fit. Workers can provide a review about an organization and share their interview/work experience, which can be viewed by employers.

IJCRT - International Journal of Creative Research Thoughts (IJCRT)

In today's world, every profession is a competitive race, and the same goes for online portals. From food ordering to recruiting, today's tech-savvy users turn to the internet for everything. Indeed, people now rely more on the internet than on any other source, such as a newspaper or their network. An online job search starts with signing up on a job portal, which everyone looking for a job does. Then comes the turnaround, in which a select few receive faster responses and assignments, while others are reduced to a single record on a portal website. This is because most job seekers ignore the importance of understanding job portals and their features, which can help them simplify and speed up their job search. In this work we try to close the gap between the job seeker and the employer. This is achieved by considering the information provided by both the job seeker and the employer, as well as using various filters to narrow the results.

Assoc. Prof. Dr. Rashad Yazdanifard

Abstract: This paper elaborates on simplifying the recruitment and selection process by filtering for the best candidates who fulfil the basic requirements: language proficiency, interpersonal skills and confidence. Essentially, this research is about transforming the way recruiters screen and filter applicants through a simple and fast method with cost-saving elements, which would benefit many people, including organizations. Keywords: online recruitment, screening, interview

Finding jobs that match one's interests and skill set is a difficult undertaking for job searchers. The issues stem from a lack of understanding of the organization's mission, work culture, and current employment opportunities. Furthermore, for any organization's recruiters, identifying the perfect individual with the needed qualities to fill their current job opportunities is a critical duty. Online Job Portals have made job searching much more convenient for both parties. Job Portal is a solution that brings together recruiters and job seekers in order to meet their specific needs. They are the cheapest and fastest means of communication, reaching a vast range of people with only a single click, regardless of their geographical location.

International Journal for Research in Applied Science and Engineering Technology

JOHN SAGAPSAPAN

This study aimed to develop an online jobs publication system that caters to both employers and job seekers, allowing them to post job-related information and qualifications, respectively, through the internet. Employers may post job vacancies online in their respective industries so that job seekers can view and apply for positions electronically. Job seekers may also attach their educational qualifications and job experiences. The study also aimed to assess the software's quality assurance in terms of job seekers' security, usability, functionality, and reliability. It utilized the project method through the software development life cycle model. The main assessment tool was an evaluation sheet prepared by the researchers, with 30 respondents included during testing and evaluation. The study revealed that the online jobs publication system is an effective tool for communication between employers/recruitment agencies and job seekers through communication and information technology. Electronic...




What a Thesis Paper is and How to Write One


From choosing a topic and conducting research to crafting a strong argument, writing a thesis paper can be a rewarding experience.

It can also be a challenging experience. If you've never written a thesis paper before, you may not know where to start. You may not even be sure exactly what a thesis paper is. But don't worry; the right support and resources can help you navigate this writing process.

What is a Thesis Paper?


A thesis paper is a type of academic essay that you might write as a graduation requirement for certain bachelor's, master's or honors programs. Thesis papers present your own original research or analysis on a specific topic related to your field.

“In some ways, a thesis paper can look a lot like a novella,” said Shana Chartier , director of information literacy at Southern New Hampshire University (SNHU). “It’s too short to be a full-length novel, but with the standard size of 40-60 pages (for a bachelor’s) and 60-100 pages (for a master’s), it is a robust exploration of a topic, explaining one’s understanding of a topic based on personal research.”

Chartier has worked in academia for over 13 years and at SNHU for nearly eight. In her role as an instructor and director, Chartier has helped to guide students through the writing process, like editing and providing resources.

Chartier has written and published academic papers such as "Augmented Reality Gamifies the Library: A Ride Through the Technological Frontier" and "Going Beyond the One-Shot: Spiraling Information Literacy Across Four Years." Both of these academic papers required Chartier to have hands-on experience with the subject matter. Like a thesis paper, they also involved hypothesizing and doing original research to come to a conclusion.

“When writing a thesis paper, the importance of staying organized cannot be overstated,” said Chartier. “Mapping out each step of the way, making firm and soft deadlines... and having other pairs of eyes on your work to ensure academic accuracy and clean editing are crucial to writing a successful paper.”

How Do I Choose a Topic For My Thesis Paper?


What your thesis paper is for will determine some of the specific requirements and steps you might take, but the first step is usually the same: Choosing a topic.

“Choosing a topic can be daunting," said Rochelle Attari, a peer tutor at SNHU. "But if (you) stick with a subject (you're) interested in... choosing a topic is much more manageable.”

Similar to a thesis, Attari recently finished the capstone for her bachelor’s in psychology. Her bachelor’s concentration is in forensics, and her capstone focused on the topic of using a combined therapy model for inmates who experience substance abuse issues to reduce recidivism.

“The hardest part was deciding what I wanted to focus on,” Attari said. “But once I nailed down my topic, each milestone was more straightforward.”

In her own writing experience, Attari said brainstorming was an important step when choosing her topic. She recommends writing down different ideas on a piece of paper and doing some preliminary research on what’s already been written on your topic.

By doing this exercise, you can narrow or broaden your ideas until you’ve found a topic you’re excited about. “Brainstorming is essential when writing a paper and is not a last-minute activity,” Attari said.

How Do I Structure My Thesis Paper?


Thesis papers tend to have a standard format with common sections as the building blocks.

While the structure Attari describes below will work for many theses, it’s important to double-check with your program to see if there are any specific requirements. Writing a thesis for a Master of Fine Arts, for example, might actually look more like a fiction novel.

According to Attari, a thesis paper is often structured with the following major sections:

  • Introduction
  • Literature review
  • Methods
  • Results
  • Discussion
  • Conclusion

Now, let’s take a closer look at what each different section should include.

Introduction

Your introduction is your opportunity to present the topic of your thesis paper. In this section, you can explain why that topic is important. The introduction is also the place to include your thesis statement, which shows your stance in the paper.

Attari said that writing an introduction can be tricky, especially when you're trying to capture your reader’s attention and state your argument.

“I have found that starting with a statement of truth about a topic that pertains to an issue I am writing about typically does the trick,” Attari said. She demonstrated this advice in an example introduction she wrote for a paper on the effects of daylight in Alaska:

In the continental United States, we can always count on the sun rising and setting around the same time each day, but in Alaska, during certain times of the year, the sun rises and does not set for weeks. Research has shown that the sun provides vitamin D and is an essential part of our health, but little is known about how daylight twenty-four hours a day affects the circadian rhythm and sleep.

In the example Attari wrote, she introduces the topic and informs the reader what the paper will cover. Somewhere in her intro, she said she would also include her thesis statement, which might be:

Twenty-four hours of daylight over an extended period does not affect sleep patterns in humans and is not the cause of daytime fatigue in northern Alaska.

Literature Review

In the literature review, you'll look at what information is already out there about your topic. “This is where scholarly articles about your topic are essential,” said Attari. “These articles will help you find the gap in research that you have identified and will also support your thesis statement."

Telling your reader what research has already been done will help them see how your research fits into the larger conversation. Most university libraries offer databases of scholarly/peer-reviewed articles that can be helpful in your search.

Methods

In the methods section of your thesis paper, you get to explain how you learned what you learned. This might include what experiment you conducted as a part of your independent research.

“For instance,” Attari said, “if you are a psychology major and have identified a gap in research on which therapies are effective for anxiety, your methods section would consist of the number of participants, the type of experiment and any other particulars you would use for that experiment.”

Results

In this section, you'll explain the results of your study. For example, building on the psychology example Attari outlined, you might share self-reported anxiety levels for participants trying different kinds of therapies. To help you communicate your results clearly, you might include data, charts, tables or other visualizations.

Discussion

The discussion section of your thesis paper is where you will analyze and interpret the results you presented in the previous section. This is where you can discuss what your findings really mean or compare them to the research you found in your literature review.

The discussion section is your chance to show why the data you collected matters and how it fits into bigger conversations in your field.

Conclusion

The conclusion of your thesis paper is your opportunity to sum up your argument and leave your reader thinking about why your research matters.

Attari breaks the conclusion down into simple parts. “You restate the original issue and thesis statement, explain the experiment's results and discuss possible next steps for further research,” she said.


Resources to Help Write Your Thesis Paper


While your thesis paper may be based on your independent research, writing it doesn’t have to be a solitary process. Asking for help and using the resources that are available to you can make the process easier.

If you're writing a thesis paper, some resources Chartier encourages you to use are:

  • Citation Handbooks: An online citation guide or handbook can help you ensure your citations are correct. APA, MLA and Chicago styles have all published their own guides.
  • Citation Generators: There are many citation generator tools that help you to create citations. Some — like RefWorks — even let you directly import citations from library databases as you research.
  • Your Library's Website: Many academic and public libraries allow patrons to access resources like databases or FAQs. Some FAQs at the SNHU library that might be helpful in your thesis writing process include “How do I read a scholarly article?” or “What is a research question and how do I develop one?”

It can also be helpful to check out what coaching or tutoring options are available through your school. At SNHU, for example, the Academic Support Center offers writing and grammar workshops, and students can access 24/7 tutoring and 1:1 sessions with peer tutors, like Attari.

"Students can even submit their papers and receive written feedback... like revisions and editing suggestions," she said.

If you are writing a thesis paper, there are many resources available to you. It's a long paper, but with the right mindset and support, you can successfully navigate the process.

“Pace yourself,” said Chartier. “This is a marathon, not a sprint. Setting smaller goals to get to the big finish line can make the process seem less daunting, and remember to be proud of yourself and celebrate your accomplishment once you’re done. Writing a thesis is no small task, and it’s important work for the scholarly community.”

A degree can change your life. Choose your program from 200+ SNHU degrees that can take you where you want to go.

Meg Palmer ’18 is a writer and scholar by trade who loves reading, riding her bike and singing in a barbershop quartet. She earned her bachelor’s degree in English, language and literature at Southern New Hampshire University (SNHU) and her master’s degree in writing, rhetoric and discourse at DePaul University (’20). While attending SNHU, she served as the editor-in-chief of the campus student newspaper, The Penmen Press, where she deepened her passion for writing. Meg is an adjunct professor at Johnson and Wales University, where she teaches first year writing, honors composition, and public speaking. Connect with her on LinkedIn.


About Southern New Hampshire University


SNHU is a nonprofit, accredited university with a mission to make high-quality education more accessible and affordable for everyone.

Founded in 1932, and online since 1995, we’ve helped countless students reach their goals with flexible, career-focused programs. Our 300-acre campus in Manchester, NH is home to over 3,000 students, and we serve over 135,000 students online. Visit our about SNHU page to learn more about our mission, accreditations, leadership team, national recognitions and awards.


O*NET OnLine features

Introduction

Welcome to your tool for career exploration and job analysis!

O*NET OnLine has detailed descriptions of the world of work for use by job seekers, workforce development and HR professionals, students, developers, researchers, and more!

Find, search, or browse across 900+ occupations based on your goals and needs. Then use comprehensive reports to learn about requirements, characteristics, and available opportunities for your selected occupation.

Take advantage of the customized OnLine Help feature available throughout the site. Or, use the available Desk Aid.

Build your future with O*NET OnLine!

Occupation Keyword Search

Find Occupations

Bright Outlook occupations are expected to grow rapidly in the next several years, will have large numbers of job openings, or are new and emerging occupations.

Career Clusters contain occupations in the same field of work that require similar skills. They can be used to focus education plans towards obtaining the necessary knowledge, competencies, and training for success in a particular career pathway.

Hot technologies are software and technology skills frequently included in employer job postings.

Discover hot technologies now

Industries are broad groups of businesses or organizations with similar activities, products, or services. Occupations are included based on the percentage of workers employed in that industry.

Job Families are groups of occupations based on work performed, skills, education, training, and credentials.

Job Zones group occupations into one of five categories based on levels of education, experience, and training necessary to perform the occupation.

Occupations are listed that require education in science, technology, engineering, and mathematics (STEM) disciplines.

There are 1,016 occupation titles and codes within the current O*NET system.

Find an occupation in the list

Advanced Searches

Use your job duties to find occupations that perform similar work. The search uses the O*NET database of over 19,000 occupation-specific task statements.

Professional associations are a great source of additional information on jobs, specialties, and industries. They also serve as an excellent starting point for networking in your career of choice. Get seamless access to professional associations by searching the O*NET database of almost 3,000 organizations related to the occupations in the U.S. economy.

Use activities performed across different types of jobs to find occupations. The search uses the O*NET database of over 2,000 detailed work activities performed across a small to moderate number of occupations.

Many employers value workers with soft skills—interpersonal and thinking skills needed to interact successfully with people and to perform efficiently and effectively in the workplace.

Find occupations using your current and/or future soft skills.

Build your skills list now

Find occupations based on software used on the job. Learn about the technology and related skills needed to successfully perform in today’s world of work.

Browse by O*NET Data


More career sites & resources

Our O*NET information portal has data and tools for workforce professionals and developers, including:

  • Current O*NET data files
  • Interest Profiler
  • License agreements
  • O*NET Content Model
  • O*NET-SOC occupation taxonomy
  • Reports and documents
  • Web Services

Find more options on the home page, or search the site.

Stay up to date with product releases, new features, database updates, and other important O*NET project developments.


Open access | Published: 28 August 2024

AI generates covertly racist decisions about people based on their dialect

Valentin Hofmann (ORCID: 0000-0001-6603-3428), Pratyusha Ria Kalluri, Dan Jurafsky (ORCID: 0000-0002-6459-7745) & Sharese King

Nature (2024)

Hundreds of millions of people now interact with language models, with uses ranging from help with writing 1 , 2 to informing hiring decisions 3 . However, these language models are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans 4 , 5 , 6 , 7 . Although previous research has focused on overt racism in language models, social scientists have argued that racism with a more subtle character has developed over time, particularly in the United States after the civil rights movement 8 , 9 . It is unknown whether this covert racism manifests in language models. Here, we demonstrate that language models embody covert racism in the form of dialect prejudice, exhibiting raciolinguistic stereotypes about speakers of African American English (AAE) that are more negative than any human stereotypes about African Americans ever experimentally recorded. By contrast, the language models’ overt stereotypes about African Americans are more positive. Dialect prejudice has the potential for harmful consequences: language models are more likely to suggest that speakers of AAE be assigned less-prestigious jobs, be convicted of crimes and be sentenced to death. Finally, we show that current practices of alleviating racial bias in language models, such as human preference alignment, exacerbate the discrepancy between covert and overt stereotypes, by superficially obscuring the racism that language models maintain on a deeper level. Our findings have far-reaching implications for the fair and safe use of language technology.


Language models are a type of artificial intelligence (AI) that has been trained to process and generate text. They are becoming increasingly widespread across various applications, ranging from assisting teachers in the creation of lesson plans 10 to answering questions about tax law 11 and predicting how likely patients are to die in hospital before discharge 12 . As the stakes of the decisions entrusted to language models rise, so does the concern that they mirror or even amplify human biases encoded in the data they were trained on, thereby perpetuating discrimination against racialized, gendered and other minoritized social groups 4 , 5 , 6 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 .

Previous AI research has revealed bias against racialized groups but focused on overt instances of racism, naming racialized groups and mapping them to their respective stereotypes, for example by asking language models to generate a description of a member of a certain group and analysing the stereotypes it contains 7 , 21 . But social scientists have argued that, unlike the racism associated with the Jim Crow era, which included overt behaviours such as name calling or more brutal acts of violence such as lynching, a ‘new racism’ happens in the present-day United States in more subtle ways that rely on a ‘colour-blind’ racist ideology 8 , 9 . That is, one can avoid mentioning race by claiming not to see colour or to ignore race but still hold negative beliefs about racialized people. Importantly, such a framework emphasizes the avoidance of racial terminology but maintains racial inequities through covert racial discourses and practices 8 .

Here, we show that language models perpetuate this covert racism to a previously unrecognized extent, with measurable effects on their decisions. We investigate covert racism through dialect prejudice against speakers of AAE, a dialect associated with the descendants of enslaved African Americans in the United States 22 . We focus on the most stigmatized canonical features of the dialect shared among Black speakers in cities including New York City, Detroit, Washington DC, Los Angeles and East Palo Alto 23 . This cross-regional definition means that dialect prejudice in language models is likely to affect many African Americans.

Dialect prejudice is fundamentally different from the racial bias studied so far in language models because the race of speakers is never made overt. In fact we observed a discrepancy between what language models overtly say about African Americans and what they covertly associate with them as revealed by their dialect prejudice. This discrepancy is particularly pronounced for language models trained with human feedback (HF), such as GPT4: our results indicate that HF training obscures the racism on the surface, but the racial stereotypes remain unaffected on a deeper level. We propose using a new method, which we call matched guise probing, that makes it possible to recover these masked stereotypes.

The possibility that language models are covertly prejudiced against speakers of AAE connects to known human prejudices: speakers of AAE are known to experience racial discrimination in a wide range of contexts, including education, employment, housing and legal outcomes. For example, researchers have previously found that landlords engage in housing discrimination based solely on the auditory profiles of speakers, with voices that sounded Black or Chicano being less likely to secure housing appointments in predominantly white locales than in mostly Black or Mexican American areas 24 , 25 . Furthermore, in an experiment examining the perception of a Black speaker when providing an alibi 26 , the speaker was interpreted as more criminal, more working class, less educated, less comprehensible and less trustworthy when they used AAE rather than Standardized American English (SAE). Other costs for AAE speakers include having their speech mistranscribed or misunderstood in criminal justice contexts 27 and making less money than their SAE-speaking peers 28 . These harms connect to themes in broader racial ideology about African Americans and stereotypes about their intelligence, competence and propensity to commit crimes 29 , 30 , 31 , 32 , 33 , 34 , 35 . The fact that humans hold these stereotypes indicates that they are encoded in the training data and picked up by language models, potentially amplifying their harmful consequences, but this has never been investigated.

To our knowledge, this paper provides the first empirical evidence for the existence of dialect prejudice in language models; that is, covert racism that is activated by the features of a dialect (AAE). Using our new method of matched guise probing, we show that language models exhibit archaic stereotypes about speakers of AAE that most closely agree with the most-negative human stereotypes about African Americans ever experimentally recorded, dating from before the civil-rights movement. Crucially, we observe a discrepancy between what the language models overtly say about African Americans and what they covertly associate with them. Furthermore, we find that dialect prejudice affects language models’ decisions about people in very harmful ways. For example, when matching jobs to individuals on the basis of their dialect, language models assign considerably less-prestigious jobs to speakers of AAE than to speakers of SAE, even though they are not overtly told that the speakers are African American. Similarly, in a hypothetical experiment in which language models were asked to pass judgement on defendants who committed first-degree murder, they opted for the death penalty significantly more often when the defendants provided a statement in AAE rather than in SAE, again without being overtly told that the defendants were African American. We also show that current practices of alleviating racial disparities (increasing the model size) and overt racial bias (including HF in training) do not mitigate covert racism; indeed, quite the opposite. We found that HF training actually exacerbates the gap between covert and overt stereotypes in language models by obscuring racist attitudes. Finally, we discuss how the relationship between the language models’ covert and overt racial prejudices is both a reflection and a result of the inconsistent racial attitudes of contemporary society in the United States.

Probing AI dialect prejudice

To explore how dialect choice impacts the predictions that language models make about speakers in the absence of other cues about their racial identity, we took inspiration from the ‘matched guise’ technique used in sociolinguistics, in which subjects listen to recordings of speakers of two languages or dialects and make judgements about various traits of those speakers 36 , 37 . Applying the matched guise technique to the AAE–SAE contrast, researchers have shown that people identify speakers of AAE as Black with above-chance accuracy 24 , 26 , 38 and attach racial stereotypes to them, even without prior knowledge of their race 39 , 40 , 41 , 42 , 43 . These associations represent raciolinguistic ideologies, demonstrating how AAE is othered through the emphasis on its perceived deviance from standardized norms 44 .

Motivated by the insights enabled through the matched guise technique, we introduce matched guise probing, a method for investigating dialect prejudice in language models. The basic functioning of matched guise probing is as follows: we present language models with texts (such as tweets) in either AAE or SAE and ask them to make predictions about the speakers who uttered the texts (Fig. 1 and Methods ). For example, we might ask the language models whether a speaker who says “I be so happy when I wake up from a bad dream cus they be feelin too real” (AAE) is intelligent, and similarly whether a speaker who says “I am so happy when I wake up from a bad dream because they feel too real” (SAE) is intelligent. Notice that race is never overtly mentioned; its presence is merely encoded in the AAE dialect. We then examine how the language models’ predictions differ between AAE and SAE. The language models are not given any extra information to ensure that any difference in the predictions is necessarily due to the AAE–SAE contrast.

Figure 1

a , We used texts in SAE (green) and AAE (blue). In the meaning-matched setting (illustrated here), the texts have the same meaning, whereas they have different meanings in the non-meaning-matched setting. b , We embedded the SAE and AAE texts in prompts that asked for properties of the speakers who uttered the texts. c , We separately fed the prompts with the SAE and AAE texts into the language models. d , We retrieved and compared the predictions for the SAE and AAE inputs, here illustrated by five adjectives from the Princeton Trilogy. See Methods for more details.

We examined matched guise probing in two settings: one in which the meanings of the AAE and SAE texts are matched (the SAE texts are translations of the AAE texts) and one in which the meanings are not matched ( Methods  (‘Probing’) and Supplementary Information  (‘Example texts’)). Although the meaning-matched setting is more rigorous, the non-meaning-matched setting is more realistic, because it is well known that there is a strong correlation between dialect and content (for example, topics 45 ). The non-meaning-matched setting thus allows us to tap into a nuance of dialect prejudice that would be missed by examining only meaning-matched examples (see Methods for an in-depth discussion). Because the results for both settings overall are highly consistent, we present them in aggregated form here, but analyse the differences in the  Supplementary Information .

We examined GPT2 (ref. 46 ), RoBERTa 47 , T5 (ref. 48 ), GPT3.5 (ref. 49 ) and GPT4 (ref. 50 ), each in one or more model versions, amounting to a total of 12 examined models ( Methods and Supplementary Information (‘Language models’)). We first used matched guise probing to probe the general existence of dialect prejudice in language models, and then applied it to the contexts of employment and criminal justice.

Covert stereotypes in language models

We started by investigating whether the attitudes that language models exhibit about speakers of AAE reflect human stereotypes about African Americans. To do so, we replicated the experimental set-up of the Princeton Trilogy 29 , 30 , 31 , 34 , a series of studies investigating the racial stereotypes held by Americans, with the difference that instead of overtly mentioning race to the language models, we used matched guise probing based on AAE and SAE texts ( Methods ).

Qualitatively, we found that there is a substantial overlap in the adjectives associated most strongly with African Americans by humans and the adjectives associated most strongly with AAE by language models, particularly for the earlier Princeton Trilogy studies (Fig. 2a ). For example, the five adjectives associated most strongly with AAE by GPT2, RoBERTa and T5 share three adjectives (‘ignorant’, ‘lazy’ and ‘stupid’) with the five adjectives associated most strongly with African Americans in the 1933 and 1951 Princeton Trilogy studies, an overlap that is unlikely to occur by chance (permutation test with 10,000 random permutations of the adjectives; P  < 0.01). Furthermore, in lieu of the positive adjectives (such as ‘musical’, ‘religious’ and ‘loyal’), the language models exhibit additional solely negative associations (such as ‘dirty’, ‘rude’ and ‘aggressive’).

Figure 2

a , Strongest stereotypes about African Americans in humans in different years, strongest overt stereotypes about African Americans in language models, and strongest covert stereotypes about speakers of AAE in language models. Colour coding as positive (green) and negative (red) is based on ref. 34 . Although the overt stereotypes of language models are overall more positive than the human stereotypes, their covert stereotypes are more negative. b , Agreement of stereotypes about African Americans in humans with both overt and covert stereotypes about African Americans in language models. The black dotted line shows chance agreement using a random bootstrap. Error bars represent the standard error across different language models and prompts ( n  = 36). The language models’ overt stereotypes agree most strongly with current human stereotypes, which are the most positive experimentally recorded ones, but their covert stereotypes agree most strongly with human stereotypes from the 1930s, which are the most negative experimentally recorded ones. c , Stereotype strength for individual linguistic features of AAE. Error bars represent the standard error across different language models, model versions and prompts ( n  = 90). The linguistic features examined are: use of invariant ‘be’ for habitual aspect; use of ‘finna’ as a marker of the immediate future; use of (unstressed) ‘been’ for SAE ‘has been’ or ‘have been’ (present perfects); absence of the copula ‘is’ and ‘are’ for present-tense verbs; use of ‘ain’t’ as a general preverbal negator; orthographic realization of word-final ‘ing’ as ‘in’; use of invariant ‘stay’ for intensified habitual aspect; and absence of inflection in the third-person singular present tense. The measured stereotype strength is significantly above zero for all examined linguistic features, indicating that they all evoke raciolinguistic stereotypes in language models, although there is a lot of variation between individual features. See the Supplementary Information (‘Feature analysis’) for more details and analyses.

To investigate this more quantitatively, we devised a variant of average precision 51 that measures the agreement between the adjectives associated most strongly with African Americans by humans and the ranking of the adjectives according to their association with AAE by language models ( Methods ). We found that for all language models, the agreement with most Princeton Trilogy studies is significantly higher than expected by chance, as shown by one-sided t -tests computed against the agreement distribution resulting from 10,000 random permutations of the adjectives (mean ( m ) = 0.162, standard deviation ( s ) = 0.106; Extended Data Table 1 ); and that the agreement is particularly pronounced for the stereotypes reported in 1933 and falls for each study after that, almost reaching the level of chance agreement for 2012 (Fig. 2b ). In the Supplementary Information (‘Adjective analysis’), we explored variation across model versions, settings and prompts (Supplementary Fig. 2 and Supplementary Table 4 ).
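To illustrate the agreement measure, a standard average-precision computation over such a ranking might look like the sketch below; the model ranking and the human top-five set are invented examples, and the authors' exact variant of average precision may differ in detail.

```python
# A sketch of average precision as an agreement score: given the adjectives
# most strongly associated with African Americans by humans (here, a top-5 set)
# and a model's ranking of adjectives by association with AAE, the score is
# high when the human top-5 appear early in the model ranking.
def average_precision(model_ranking: list[str], human_top5: set[str]) -> float:
    hits, precision_sum = 0, 0.0
    for rank, adjective in enumerate(model_ranking, start=1):
        if adjective in human_top5:
            hits += 1
            precision_sum += hits / rank  # precision at each relevant hit
    return precision_sum / len(human_top5)

# Invented example: the human top-5 occupy the first five ranks, so AP = 1.0.
ranking = ["lazy", "ignorant", "stupid", "superstitious", "musical", "loyal"]
print(average_precision(ranking, {"lazy", "ignorant", "stupid", "superstitious", "musical"}))
```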

To explain the observed temporal trend, we measured the average favourability of the top five adjectives for all Princeton Trilogy studies and language models, drawing from crowd-sourced ratings for the Princeton Trilogy adjectives on a scale between −2 (very negative) and 2 (very positive; see Methods , ‘Covert-stereotype analysis’). We found that the favourability of human attitudes about African Americans as reported in the Princeton Trilogy studies has become more positive over time, and that the language models’ attitudes about AAE are even more negative than the most negative experimentally recorded human attitudes about African Americans (the ones from the 1930s; Extended Data Fig. 1 ). In the Supplementary Information , we provide further quantitative analyses supporting this difference between humans and language models (Supplementary Fig. 7 ).

Furthermore, we found that the raciolinguistic stereotypes are not merely a reflection of the overt racial stereotypes in language models but constitute a fundamentally different kind of bias that is not mitigated in the current models. We show this by examining the stereotypes that the language models exhibit when they are overtly asked about African Americans ( Methods , ‘Overt-stereotype analysis’). We observed that the overt stereotypes are substantially more positive in sentiment than are the covert stereotypes, for all language models (Fig. 2a and Extended Data Fig. 1 ). Strikingly, for RoBERTa, T5, GPT3.5 and GPT4, although their covert stereotypes about speakers of AAE are more negative than the most negative experimentally recorded human stereotypes, their overt stereotypes about African Americans are more positive than the most positive experimentally recorded human stereotypes. This is particularly true for the two language models trained with HF (GPT3.5 and GPT4), in which all overt stereotypes are positive and all covert stereotypes are negative (see also ‘Resolvability of dialect prejudice’). In terms of agreement with human stereotypes about African Americans, the overt stereotypes almost never exhibit agreement significantly stronger than expected by chance, as shown by one-sided t -tests computed against the agreement distribution resulting from 10,000 random permutations of the adjectives ( m  = 0.162, s  = 0.106; Extended Data Table 2 ). Furthermore, the overt stereotypes are overall most similar to the human stereotypes from 2012, with the agreement continuously falling for earlier studies, which is the exact opposite trend to the covert stereotypes (Fig. 2b ).

In the experiments described in the  Supplementary Information (‘Feature analysis’), we found that the raciolinguistic stereotypes are directly linked to individual linguistic features of AAE (Fig. 2c and Supplementary Table 14 ), and that a higher density of such linguistic features results in stronger stereotypical associations (Supplementary Fig. 11 and Supplementary Table 13 ). Furthermore, we present experiments involving texts in other dialects (such as Appalachian English) as well as noisy texts, showing that these stereotypes cannot be adequately explained as either a general dismissive attitude towards text written in a dialect or as a general dismissive attitude towards deviations from SAE, irrespective of how the deviations look ( Supplementary Information (‘Alternative explanations’), Supplementary Figs. 12 and 13 and Supplementary Tables 15 and 16 ). Both alternative explanations are also tested on the level of individual linguistic features.

Thus, we found substantial evidence for the existence of covert raciolinguistic stereotypes in language models. Our experiments show that these stereotypes are similar to the archaic human stereotypes about African Americans that existed before the civil rights movement, are even more negative than the most negative experimentally recorded human stereotypes about African Americans, and are both qualitatively and quantitatively different from the previously reported overt racial stereotypes in language models, indicating that they are a fundamentally different kind of bias. Finally, our analyses demonstrate that the detected stereotypes are inherently linked to AAE and its linguistic features.

Impact of covert racism on AI decisions

To determine what harmful consequences the covert stereotypes have in the real world, we focused on two areas in which racial stereotypes about speakers of AAE and African Americans have been repeatedly shown to bias human decisions: employment and criminality. There is a growing impetus to use AI systems in these areas. Indeed, AI systems are already being used for personnel selection 52 , 53 , including automated analyses of applicants’ social-media posts 54 , 55 , and technologies for predicting legal outcomes are under active development 56 , 57 , 58 . Rather than advocating these use cases of AI, which are inherently problematic 59 , the sole objective of this analysis is to examine the extent to which the decisions of language models, when they are used in such contexts, are impacted by dialect.

First, we examined decisions about employability. Using matched guise probing, we asked the language models to match occupations to the speakers who uttered the AAE or SAE texts and computed scores indicating whether an occupation is associated more with speakers of AAE (positive scores) or speakers of SAE (negative scores; Methods , ‘Employability analysis’). The average score of the occupations was negative ( m  = –0.046,  s  = 0.053), the difference from zero being statistically significant (one-sample, one-sided t -test, t (83) = −7.9, P  < 0.001). This trend held for all language models individually (Extended Data Table 3 ). Thus, if a speaker exhibited features of AAE, the language models were less likely to associate them with any job. Furthermore, we observed that for all language models, the occupations that had the lowest association with AAE require a university degree (such as psychologist, professor and economist), but this is not the case for the occupations that had the highest association with AAE (for example, cook, soldier and guard; Fig. 3a ). Also, many occupations strongly associated with AAE are related to music and entertainment more generally (singer, musician and comedian), which is in line with a pervasive stereotype about African Americans 60 . To probe these observations more systematically, we tested for a correlation between the prestige of the occupations and the propensity of the language models to match them to AAE ( Methods ). Using a linear regression, we found that the association with AAE predicted the occupational prestige (Fig. 3b ; β  = −7.8, R 2 = 0.193, F (1, 63) = 15.1, P  < 0.001). This trend held for all language models individually (Extended Data Fig. 2 and Extended Data Table 4 ), albeit in a less pronounced way for GPT3.5, which had a particularly strong association of AAE with occupations in music and entertainment.
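As a sketch of this regression, one could regress occupational prestige on the AAE-association score with scipy; the occupation-level values below are hypothetical stand-ins for the authors' data, so only the direction of the effect (the paper reports β = −7.8) is meant to carry over.

```python
# Hypothetical per-occupation data: association with AAE (positive) or SAE
# (negative), and an occupational prestige score. Values are illustrative only.
import numpy as np
from scipy import stats

aae_association = np.array([-0.12, -0.08, -0.03, 0.02, 0.06, 0.09])
prestige = np.array([80.0, 72.0, 65.0, 55.0, 48.0, 40.0])

fit = stats.linregress(aae_association, prestige)
print(f"beta = {fit.slope:.1f}, R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}")
# A negative slope means occupations associated with AAE tend to be less
# prestigious, mirroring the direction of the reported finding.
```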

Figure 3

a , Association of different occupations with AAE or SAE. Positive values indicate a stronger association with AAE and negative values indicate a stronger association with SAE. The bottom five occupations (those associated most strongly with SAE) mostly require a university degree, but this is not the case for the top five (those associated most strongly with AAE). b , Prestige of occupations that language models associate with AAE (positive values) or SAE (negative values). The shaded area shows a 95% confidence band around the regression line. The association with AAE or SAE predicts the occupational prestige. Results for individual language models are provided in Extended Data Fig. 2 . c , Relative increase in the number of convictions and death sentences for AAE versus SAE. Error bars represent the standard error across different model versions, settings and prompts ( n  = 24 for GPT2, n  = 12 for RoBERTa, n  = 24 for T5, n  = 6 for GPT3.5 and n  = 6 for GPT4). In cases of small sample size ( n  ≤ 10 for GPT3.5 and GPT4), we plotted the individual results as overlaid dots. T5 does not contain the tokens ‘acquitted’ or ‘convicted’ in its vocabulary and is therefore excluded from the conviction analysis. Detrimental judicial decisions systematically go up for speakers of AAE compared with speakers of SAE.

We then examined decisions about criminality. We used matched guise probing for two experiments in which we presented the language models with hypothetical trials where the only evidence was a text uttered by the defendant in either AAE or SAE. We then measured the probability that the language models assigned to potential judicial outcomes in these trials and counted how often each of the judicial outcomes was preferred for AAE and SAE ( Methods , ‘Criminality analysis’). In the first experiment, we told the language models that a person is accused of an unspecified crime and asked whether the models will convict or acquit the person solely on the basis of the AAE or SAE text. Overall, we found that the rate of convictions was greater for AAE ( r  = 68.7%) than SAE ( r  = 62.1%; Fig. 3c , left). A chi-squared test found a strong effect ( χ 2 (1,  N  = 96) = 184.7,  P  < 0.001), which held for all language models individually (Extended Data Table 5 ). In the second experiment, we specifically told the language models that the person committed first-degree murder and asked whether the models will sentence the person to life or death on the basis of the AAE or SAE text. The overall rate of death sentences was greater for AAE ( r  = 27.7%) than for SAE ( r  = 22.8%; Fig. 3c , right). A chi-squared test found a strong effect ( χ 2 (1,  N  = 144) = 425.4,  P  < 0.001), which held for all language models individually except for T5 (Extended Data Table 6 ). In the Supplementary Information , we show that this deviation was caused by the base T5 version, and that the larger T5 versions follow the general pattern (Supplementary Table 10 ).
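Chi-squared tests of this kind can be reproduced with scipy along the following lines; the counts are hypothetical, chosen only to mirror the reported conviction rates of 68.7% for AAE and 62.1% for SAE.

```python
# 2x2 contingency table of hypothetical trial outcomes by dialect.
from scipy.stats import chi2_contingency

#           convicted  acquitted
table = [[687, 313],   # AAE texts
         [621, 379]]   # SAE texts

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}")
# A small p-value indicates the judicial outcome is not independent of dialect.
```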

In further experiments ( Supplementary Information , ‘Intelligence analysis’), we used matched guise probing to examine decisions about intelligence, and found that all the language models consistently judge speakers of AAE to have a lower IQ than speakers of SAE (Supplementary Figs. 14 and 15 and Supplementary Tables 17 – 19 ).

Resolvability of dialect prejudice

We wanted to know whether the dialect prejudice we observed is resolved by current practices of bias mitigation, such as increasing the size of the language model or including HF in training. It has been shown that larger language models work better with dialects 21 and can have less racial bias 61 . Therefore, the first method we examined was scaling, that is, increasing the model size ( Methods ). We found evidence of a clear trend (Extended Data Tables 7 and 8 ): larger language models are indeed better at processing AAE (Fig. 4a , left), but they are not less prejudiced against speakers of it. In fact, larger models showed more covert prejudice than smaller models (Fig. 4a , right). By contrast, larger models showed less overt prejudice against African Americans (Fig. 4a , right). Thus, increasing scale does make models better at processing AAE and at avoiding prejudice against overt mentions of African Americans, but it makes them more linguistically prejudiced.

Figure 4

a , Language modelling perplexity and stereotype strength on AAE text as a function of model size. Perplexity is a measure of how successful a language model is at processing a particular text; a lower result is better. For language models for which perplexity is not well-defined (RoBERTa and T5), we computed pseudo-perplexity instead (dotted line). Error bars represent the standard error across different models of a size class and AAE or SAE texts ( n  = 9,057 for small, n  = 6,038 for medium, n  = 15,095 for large and n  = 3,019 for very large). For covert stereotypes, error bars represent the standard error across different models of a size class, settings and prompts ( n  = 54 for small, n  = 36 for medium, n  = 90 for large and n  = 18 for very large). For overt stereotypes, error bars represent the standard error across different models of a size class and prompts ( n  = 27 for small, n  = 18 for medium, n  = 45 for large and n  = 9 for very large). Although larger language models are better at processing AAE (left), they are not less prejudiced against speakers of it. Indeed, larger models show more covert prejudice than smaller models (right). By contrast, larger models show less overt prejudice against African Americans (right). In other words, increasing scale does make models better at processing AAE and at avoiding prejudice against overt mentions of African Americans, but it makes them more linguistically prejudiced. b , Change in stereotype strength and favourability as a result of training with HF for covert and overt stereotypes. Error bars represent the standard error across different prompts ( n  = 9). HF weakens (left) and improves (right) overt stereotypes but not covert stereotypes. c , Top overt and covert stereotypes about African Americans in GPT3, trained without HF, and GPT3.5, trained with HF. Colour coding as positive (green) and negative (red) is based on ref. 34 . The overt stereotypes get substantially more positive as a result of HF training in GPT3.5, but there is no visible change in favourability for the covert stereotypes.

As a second potential way to resolve dialect prejudice in language models, we examined training with HF 49 , 62 . Specifically, we compared GPT3.5 (ref. 49 ) with GPT3 (ref. 63 ), its predecessor that was trained without using HF ( Methods ). Looking at the top adjectives associated overtly and covertly with African Americans by the two language models, we found that HF resulted in more-positive overt associations but had no clear qualitative effect on the covert associations (Fig. 4c ). This observation was confirmed by quantitative analyses: the inclusion of HF resulted in significantly weaker (no HF, m  = 0.135,  s  = 0.142; HF, m  = −0.119,  s  = 0.234;  t (16) = 2.6,  P  < 0.05) and more favourable (no HF, m  = 0.221,  s  = 0.399; HF, m  = 1.047,  s  = 0.387;  t (16) = −6.4,  P  < 0.001) overt stereotypes but produced no significant difference in the strength (no HF, m  = 0.153,  s  = 0.049; HF, m  = 0.187,  s  = 0.066;  t (16) = −1.2, P  = 0.3) or unfavourability (no HF, m  = −1.146, s  = 0.580; HF, m = −1.029, s  = 0.196; t (16) = −0.5, P  = 0.6) of covert stereotypes (Fig. 4b ). Thus, HF training weakens and ameliorates the overt stereotypes but has no clear effect on the covert stereotypes; in other words, it obscures the racist attitudes on the surface, but more subtle forms of racism, such as dialect prejudice, remain unaffected. This finding is underscored by the fact that the discrepancy between overt and covert stereotypes about African Americans is most pronounced for the two examined language models trained with human feedback (GPT3.5 and GPT4; see ‘Covert stereotypes in language models’). Furthermore, this finding again shows that there is a fundamental difference between overt and covert stereotypes in language models, and that mitigating the overt stereotypes does not automatically translate to mitigated covert stereotypes.
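The comparisons here rest on two-sample t-tests across prompts; the sketch below shows the shape of such a test with hypothetical favourability values, nine per condition, which yields the paper's 16 degrees of freedom.

```python
# Hypothetical overt-stereotype favourability per prompt, without and with HF.
from scipy import stats

no_hf = [0.1, 0.3, 0.2, 0.4, 0.1, 0.2, 0.3, 0.2, 0.2]    # e.g. a GPT3-style model
with_hf = [0.9, 1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 1.2, 1.1]  # e.g. an HF-trained model

t, p = stats.ttest_ind(no_hf, with_hf)  # df = 9 + 9 - 2 = 16
print(f"t(16) = {t:.1f}, p = {p:.4f}")
# A significant difference here would indicate that HF training changed the
# favourability of overt stereotypes; the paper finds no analogous change
# for covert stereotypes.
```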

To sum up, neither scaling nor training with HF as applied today resolves the dialect prejudice. The fact that these two methods effectively mitigate racial performance disparities and overt racial stereotypes in language models indicates that this form of covert racism constitutes a different problem that is not addressed by current approaches for improving and aligning language models.

The key finding of this article is that language models maintain a form of covert racial prejudice against African Americans that is triggered by dialect features alone. In our experiments, we avoided overt mentions of race but drew from the racialized meanings of a stigmatized dialect, and could still find historically racist associations with African Americans. The implicit nature of this prejudice, that is, the fact it is about something that is not explicitly expressed in the text, makes it fundamentally different from the overt racial prejudice that has been the focus of previous research. Strikingly, the language models’ covert and overt racial prejudices are often in contradiction with each other, especially for the most recent language models that have been trained with HF (GPT3.5 and GPT4). These two language models obscure the racism, overtly associating African Americans with exclusively positive attributes (such as ‘brilliant’), but our results show that they covertly associate African Americans with exclusively negative attributes (such as ‘lazy’).

We argue that this paradoxical relation between the language models’ covert and overt racial prejudices manifests the inconsistent racial attitudes present in the contemporary society of the United States 8 , 64 . In the Jim Crow era, stereotypes about African Americans were overtly racist, but the normative climate after the civil rights movement made expressing explicitly racist views distasteful. As a result, racism acquired a covert character and continued to exist on a more subtle level. Thus, most white people nowadays report positive attitudes towards African Americans in surveys but perpetuate racial inequalities through their unconscious behaviour, such as their residential choices 65 . It has been shown that negative stereotypes persist, even if they are superficially rejected 66 , 67 . This ambivalence is reflected by the language models we analysed, which are overtly non-racist but covertly exhibit archaic stereotypes about African Americans, showing that they reproduce a colour-blind racist ideology. Crucially, the civil rights movement is generally seen as the period during which racism shifted from overt to covert 68 , 69 , and this is mirrored by our results: all the language models overtly agree the most with human stereotypes from after the civil rights movement, but covertly agree the most with human stereotypes from before the civil rights movement.

Our findings raise the question of how dialect prejudice got into the language models. Language models are pretrained on web-scraped corpora such as WebText 46 , C4 (ref. 48 ) and the Pile 70 , which encode raciolinguistic stereotypes about AAE. A drastic example of this is the use of ‘mock ebonics’ to parody speakers of AAE 71 . Crucially, a growing body of evidence indicates that language models pick up prejudices present in the pretraining corpus 72 , 73 , 74 , 75 , which would explain how they become prejudiced against speakers of AAE, and why they show varying levels of dialect prejudice as a function of the pretraining corpus. However, the web also abounds with overt racism against African Americans 76 , 77 , so we wondered why the language models exhibit much less overt than covert racial prejudice. We argue that the reason for this is that the existence of overt racism is generally known to people 32 , which is not the case for covert racism 69 . Crucially, this also holds for the field of AI. The typical pipeline of training language models includes steps such as data filtering 48 and, more recently, HF training 62 that remove overt racial prejudice. As a result, much of the overt racism on the web does not end up in the language models. However, there are currently no measures in place to curtail covert racial prejudice when training language models. For example, common datasets for HF training 62 , 78 do not include examples that would train the language models to treat speakers of AAE and SAE equally. As a result, the covert racism encoded in the training data can make its way into the language models in an unhindered fashion. It is worth mentioning that the lack of awareness of covert racism also manifests during evaluation, where it is common to test language models for overt racism but not for covert racism 21 , 63 , 79 , 80 .

As well as the representational harms, by which we mean the pernicious representation of AAE speakers, we also found evidence for substantial allocational harms. This refers to the inequitable allocation of resources to AAE speakers 81 (Barocas et al., unpublished observations), and adds to known cases of language technology putting speakers of AAE at a disadvantage by performing worse on AAE 82 , 83 , 84 , 85 , 86 , 87 , 88 , misclassifying AAE as hate speech 81 , 89 , 90 , 91 or treating AAE as incorrect English 83 , 85 , 92 . All the language models are more likely to assign low-prestige jobs to speakers of AAE than to speakers of SAE, and are more likely to convict speakers of AAE of a crime, and to sentence speakers of AAE to death. Although the details of our tasks are constructed, the findings reveal real and urgent concerns because business and jurisdiction are areas for which AI systems involving language models are currently being developed or deployed. As a consequence, the dialect prejudice we uncovered might already be affecting AI decisions today, for example when a language model is used in application-screening systems to process background information, which might include social-media text. Worryingly, we also observe that larger language models and language models trained with HF exhibit stronger covert, but weaker overt, prejudice. Against the backdrop of continually growing language models and the increasingly widespread adoption of HF training, this has two risks: first, that language models, unbeknownst to developers and users, reach ever-increasing levels of covert prejudice; and second, that developers and users mistake ever-decreasing levels of overt prejudice (the only kind of prejudice currently tested for) for a sign that racism in language models has been solved. There is therefore a realistic possibility that the allocational harms caused by dialect prejudice in language models will increase further in the future, perpetuating the racial discrimination experienced by generations of African Americans.

Methods

Matched guise probing examines how strongly a language model associates certain tokens, such as personality traits, with AAE compared with SAE. AAE can be viewed as the treatment condition, whereas SAE functions as the control condition. We start by explaining the basic experimental unit of matched guise probing: measuring how a language model associates certain tokens with an individual text in AAE or SAE. Based on this, we introduce two different settings for matched guise probing (meaning-matched and non-meaning-matched), which are both inspired by the matched guise technique used in sociolinguistics 36 , 37 , 93 , 94 and provide complementary views on the attitudes a language model has about a dialect.

The basic experimental unit of matched guise probing is as follows. Let θ be a language model, t be a text in AAE or SAE, and x be a token of interest, typically a personality trait such as ‘intelligent’. We embed the text in a prompt v , for example v ( t ) = ‘a person who says t tends to be’, and compute P ( x ∣ v ( t );  θ ), which is the probability that θ assigns to x after processing v ( t ). We calculate P ( x ∣ v ( t );  θ ) for equally sized sets T a of AAE texts and T s of SAE texts, comparing various tokens from a set X as possible continuations. It has been shown that P ( x ∣ v ( t );  θ ) can be affected by the precise wording of v , so small modifications of v can have an unpredictable effect on the predictions made by the language model 21 , 95 , 96 . To account for this fact, we consider a set V containing several prompts ( Supplementary Information ). For all experiments, we have provided detailed analyses of variation across prompts in the  Supplementary Information .
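As a concrete illustration of this experimental unit, the following sketch computes P ( x ∣ v ( t ); θ ) for a causal language model with the Hugging Face transformers library. The prompt wording, the traits and the use of each trait's first sub-token are illustrative assumptions rather than the authors' exact materials.

```python
# Minimal sketch of the basic experimental unit of matched guise probing:
# the probability that a trait adjective continues a prompt embedding the text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def trait_probability(text: str, trait: str) -> float:
    """Approximates P(x | v(t); theta) for one prompt v and trait x."""
    prompt = f'A person who says "{text}" tends to be'  # one illustrative prompt v
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # next-token distribution
    probs = torch.softmax(logits, dim=-1)
    # Use the first sub-token of the trait (leading space matters for GPT-2 BPE).
    trait_id = tokenizer.encode(" " + trait)[0]
    return probs[trait_id].item()

aae = "I be so happy when I wake up from a bad dream cus they be feelin too real"
sae = "I am so happy when I wake up from a bad dream because they feel too real"
for trait in ["intelligent", "lazy"]:
    print(trait, trait_probability(aae, trait), trait_probability(sae, trait))
```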

We conducted matched guise probing in two settings. In the first setting, the texts in T a and T s formed pairs expressing the same underlying meaning, that is, the i -th text in T a (for example, ‘I be so happy when I wake up from a bad dream cus they be feelin too real’) matches the i -th text in T s (for example, ‘I am so happy when I wake up from a bad dream because they feel too real’). For this setting, we used the dataset from ref. 87 , which contains 2,019 AAE tweets together with their SAE translations. In the second setting, the texts in T a and T s did not form pairs, so they were independent texts in AAE and SAE. For this setting, we sampled 2,000 AAE and SAE tweets from the dataset in ref. 83 and used tweets strongly aligned with African Americans for AAE and tweets strongly aligned with white people for SAE ( Supplementary Information (‘Analysis of non-meaning-matched texts’), Supplementary Fig. 1 and Supplementary Table 3 ). In the  Supplementary Information , we include examples of AAE and SAE texts for both settings (Supplementary Tables 1 and 2 ). Tweets are well suited for matched guise probing because they are a rich source of dialectal variation 97 , 98 , 99 , especially for AAE 100 , 101 , 102 , but matched guise probing can be applied to any type of text. Although we do not consider it here, matched guise probing can in principle also be applied to speech-based models, with the potential advantage that dialectal variation on the phonetic level could be captured more directly, which would make it possible to study dialect prejudice specific to regional variants of AAE 23 . However, note that a great deal of phonetic variation is reflected orthographically in social-media texts 101 .

It is important to analyse both meaning-matched and non-meaning-matched settings because they capture different aspects of the attitudes a language model has about speakers of AAE. Controlling for the underlying meaning makes it possible to uncover differences in the attitudes of the language model that are solely due to grammatical and lexical features of AAE. However, it is known that various properties other than linguistic features correlate with dialect, such as topics 45 , and these might also influence the attitudes of the language model. Sidelining such properties bears the risk of underestimating the harms that dialect prejudice causes for speakers of AAE in the real world. For example, in a scenario in which a language model is used in the context of automated personnel selection to screen applicants’ social-media posts, the texts of two competing applicants typically differ in content and do not come in pairs expressing the same meaning. The relative advantages of using meaning-matched or non-meaning-matched data for matched guise probing are conceptually similar to the relative advantages of using the same or different speakers for the matched guise technique: more control in the former versus more naturalness in the latter setting 93 , 94 . Because the results obtained in both settings were consistent overall for all experiments, we aggregated them in the main article, but we analysed differences in detail in the  Supplementary Information .

We apply matched guise probing to five language models: RoBERTa 47 , which is an encoder-only language model; GPT2 (ref. 46 ), GPT3.5 (ref. 49 ) and GPT4 (ref. 50 ), which are decoder-only language models; and T5 (ref. 48 ), which is an encoder–decoder language model. For each language model, we examined one or more model versions: GPT2 (base), GPT2 (medium), GPT2 (large), GPT2 (xl), RoBERTa (base), RoBERTa (large), T5 (small), T5 (base), T5 (large), T5 (3b), GPT3.5 (text-davinci-003) and GPT4 (0613). Where we used several model versions per language model (GPT2, RoBERTa and T5), the model versions all had the same architecture and were trained on the same data but differed in their size. Furthermore, we note that GPT3.5 and GPT4 are the only language models examined in this paper that were trained with HF, specifically reinforcement learning from human feedback 103 . When it is clear from the context what is meant, or when the distinction does not matter, we use the term ‘language models’, or sometimes ‘models’, in a more general way that includes individual model versions.

Regarding matched guise probing, the exact method for computing P ( x ∣ v ( t );  θ ) varies across language models and is detailed in the  Supplementary Information . For GPT4, for which computing P ( x ∣ v ( t );  θ ) for all tokens of interest was often not possible owing to restrictions imposed by the OpenAI application programming interface (API), we used a slightly modified method for some of the experiments, and this is also discussed in the  Supplementary Information . Similarly, some of the experiments could not be done for all language models because of model-specific constraints, which we highlight below. We note that there was at most one language model per experiment for which this was the case.

Covert-stereotype analysis

In the covert-stereotype analysis, the tokens x whose probabilities are measured for matched guise probing are trait adjectives from the Princeton Trilogy 29 , 30 , 31 , 34 , such as ‘aggressive’, ‘intelligent’ and ‘quiet’. We provide details about these adjectives in the  Supplementary Information . In the Princeton Trilogy, the adjectives are provided to participants in the form of a list, and participants are asked to select from the list the five adjectives that best characterize a given ethnic group, such as African Americans. The studies that we compare in this paper, which are the original Princeton Trilogy studies 29 , 30 , 31 and a more recent reinstallment 34 , all follow this general set-up and observe a gradual improvement of the expressed stereotypes about African Americans over time, but the exact interpretation of this finding is disputed 32 . Here, we used the adjectives from the Princeton Trilogy in the context of matched guise probing.

Specifically, we first computed P ( x ∣ v ( t );  θ ) for all adjectives, for both the AAE texts and the SAE texts. The method for aggregating the probabilities P ( x ∣ v ( t );  θ ) into association scores between an adjective x and AAE varies for the two settings of matched guise probing. Let $t_{\mathrm{a}}^{i}$ be the i -th AAE text in T a and $t_{\mathrm{s}}^{i}$ be the i -th SAE text in T s . In the meaning-matched setting, in which $t_{\mathrm{a}}^{i}$ and $t_{\mathrm{s}}^{i}$ express the same meaning, we computed the prompt-level association score for an adjective x as

$$q(x; v, \theta) = \frac{1}{n} \sum_{i=1}^{n} \log \frac{P(x \mid v(t_{\mathrm{a}}^{i}); \theta)}{P(x \mid v(t_{\mathrm{s}}^{i}); \theta)},$$

where n = ∣ T a ∣ = ∣ T s ∣ . Thus, we measure for each pair of AAE and SAE texts the log ratio of the probability assigned to x following the AAE text and the probability assigned to x following the SAE text, and then average the log ratios of the probabilities across all pairs. In the non-meaning-matched setting, we computed the prompt-level association score for an adjective x as

$$q(x; v, \theta) = \log \frac{\frac{1}{n} \sum_{i=1}^{n} P(x \mid v(t_{\mathrm{a}}^{i}); \theta)}{\frac{1}{n} \sum_{i=1}^{n} P(x \mid v(t_{\mathrm{s}}^{i}); \theta)},$$

where again n = ∣ T a ∣ = ∣ T s ∣ . In other words, we first compute the average probability assigned to a certain adjective x following all AAE texts and the average probability assigned to x following all SAE texts, and then measure the log ratio of these average probabilities. The interpretation of q ( x ;  v ,  θ ) is identical in both settings; q ( x ;  v , θ ) > 0 means that for a certain prompt v , the language model θ associates the adjective x more strongly with AAE than with SAE, and q ( x ;  v ,  θ ) < 0 means that for a certain prompt v , the language model θ associates the adjective x more strongly with SAE than with AAE. In the  Supplementary Information (‘Calibration’), we show that q ( x ;  v , θ ) is calibrated 104 , meaning that it does not depend on the prior probability that θ assigns to x in a neutral context.
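
Given such probabilities, both aggregation schemes reduce to a few lines. The sketch below assumes NumPy arrays p_aae and p_sae holding P ( x ∣ v ( t );  θ ) for the AAE and SAE texts (aligned by pair index in the meaning-matched setting); the names are ours.

```python
import numpy as np

def q_meaning_matched(p_aae: np.ndarray, p_sae: np.ndarray) -> float:
    # Average of the per-pair log probability ratios.
    return float(np.mean(np.log(p_aae / p_sae)))

def q_non_meaning_matched(p_aae: np.ndarray, p_sae: np.ndarray) -> float:
    # Log ratio of the average probabilities over independent texts.
    return float(np.log(p_aae.mean() / p_sae.mean()))
```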

The prompt-level association scores q ( x ;  v ,  θ ) are the basis for further analyses. We start by averaging q ( x ;  v ,  θ ) across model versions, prompts and settings, and this allows us to rank all adjectives according to their overall association with AAE for individual language models (Fig. 2a ). In this and the following adjective analyses, we focus on the five adjectives that exhibit the highest association with AAE, making it possible to consistently compare the language models with the results from the Princeton Trilogy studies, most of which do not report the full ranking of all adjectives. Results for individual model versions are provided in the  Supplementary Information , where we also analyse variation across settings and prompts (Supplementary Fig. 2 and Supplementary Table 4 ).

Next, we wanted to measure the agreement between language models and humans through time. To do so, we considered the five adjectives most strongly associated with African Americans for each study and evaluated how highly these adjectives are ranked by the language models. Specifically, let R l  = [ x 1 , …,  x ∣ X ∣ ] be the adjective ranking generated by a language model and $R_{h}^{5}$ = [ x 1 , …, x 5 ] be the ranking of the top five adjectives generated by the human participants in one of the Princeton Trilogy studies. A typical measure to evaluate how highly the adjectives from $R_{h}^{5}$ are ranked within R l is average precision, AP 51 . However, AP does not take the internal ranking of the adjectives in $R_{h}^{5}$ into account, which is not ideal for our purposes; for example, AP does not distinguish whether the top-ranked adjective for humans is on the first or on the fifth rank for a language model. To remedy this, we computed the mean average precision, MAP, for different subsets of $R_{h}^{5}$ ,

$$\mathrm{MAP} = \frac{1}{5} \sum_{i=1}^{5} \mathrm{AP}(R_{l}, R_{h}^{i}),$$

where $R_{h}^{i}$ denotes the top i adjectives from the human ranking. MAP = 1 if, and only if, the top five adjectives from $R_{h}^{5}$ have an exact one-to-one correspondence with the top five adjectives from R l , so, unlike AP, it takes the internal ranking of the adjectives into account. We computed an individual agreement score for each language model and prompt, averaging the q ( x ;  v ,  θ ) association scores across all model versions of a language model (GPT2, for example) and the two settings (meaning-matched and non-meaning-matched) to generate R l . Because the OpenAI API for GPT4 does not give access to the probabilities for all adjectives, we excluded GPT4 from this analysis. Results are presented in Fig. 2b and Extended Data Table 1 . In the Supplementary Information (‘Agreement analysis’), we analyse variation across model versions, settings and prompts (Supplementary Figs. 3 – 5 ).
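
For illustration, the sketch below implements standard average precision over a ranked list and the MAP variant just described; the function names are ours.

```python
def average_precision(model_ranking: list[str], relevant: set[str]) -> float:
    # Standard AP: precision at each rank where a relevant adjective occurs,
    # averaged over the relevant set.
    hits, precision_sum = 0, 0.0
    for rank, adjective in enumerate(model_ranking, start=1):
        if adjective in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant)

def mean_average_precision(model_ranking: list[str], human_top5: list[str]) -> float:
    # Average AP over the human top-1, top-2, ..., top-5 subsets, so the
    # internal ranking of the human top five is taken into account.
    return sum(average_precision(model_ranking, set(human_top5[:i]))
               for i in range(1, 6)) / 5
```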

To analyse the favourability of the stereotypes about African Americans, we drew from crowd-sourced favourability ratings collected previously 34 for the adjectives from the Princeton Trilogy that range between −2 (‘very unfavourable’, meaning very negative) and 2 (‘very favourable’, meaning very positive). For example, the favourability rating of ‘cruel’ is −1.81 and the favourability rating of ‘brilliant’ is 1.86. We computed the average favourability of the top five adjectives, weighting the favourability ratings of individual adjectives by their association scores with AAE and African Americans. More formally, let R 5 = [ x 1 , …, x 5 ] be the ranking of the top five adjectives generated by either a language model or humans. Furthermore, let f ( x ) be the favourability rating of adjective x as reported in ref. 34 , and let q ( x ) be the overall association score of adjective x with AAE or African Americans that is used to generate R 5 . For the Princeton Trilogy studies, q ( x ) is the percentage of participants who have assigned x to African Americans. For language models, q ( x ) is the average value of q ( x ;  v ,  θ ). We then computed the weighted average favourability, F , of the top five adjectives as

$$F = \frac{\sum_{i=1}^{5} q(x_{i}) \, f(x_{i})}{\sum_{i=1}^{5} q(x_{i})}.$$

As a result of the weighting, the top-ranked adjective contributed more to the average than the second-ranked adjective, and so on. Results are presented in Extended Data Fig. 1 . To check for consistency, we also computed the average favourability of the top five adjectives without weighting, which yields similar results (Supplementary Fig. 6) .
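
Read directly off this formula, the weighted favourability can be computed as in the following sketch, where q maps adjectives to the association scores used to generate the ranking and f maps them to their favourability ratings.

```python
def weighted_favourability(top5: list[str], q: dict[str, float], f: dict[str, float]) -> float:
    # Weighted average of the favourability ratings, with the association
    # scores as weights, so higher-ranked adjectives contribute more.
    total_weight = sum(q[x] for x in top5)
    return sum(q[x] * f[x] for x in top5) / total_weight
```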

Overt-stereotype analysis

The overt-stereotype analysis closely followed the methodology of the covert-stereotype analysis, with the difference being that instead of providing the language models with AAE and SAE texts, we provided them with overt descriptions of race (specifically, ‘Black’/‘black’ and ‘White’/‘white’). This methodological difference is also reflected by a different set of prompts ( Supplementary Information ). As a result, the experimental set-up is very similar to existing studies on overt racial bias in language models 4 , 7 . All other aspects of the analysis (such as computing adjective association scores) were identical to the analysis for covert stereotypes. This also holds for GPT4, for which we again could not conduct the agreement analysis.

We again present average results for the five language models in the main article. Results broken down for individual model versions are provided in the  Supplementary Information , where we also analyse variation across prompts (Supplementary Fig. 8 and Supplementary Table 5 ).

Employability analysis

The general set-up of the employability analysis was identical to the stereotype analyses: we fed text written in either AAE or SAE, embedded in prompts, into the language models and analysed the probabilities that they assigned to different continuation tokens. However, instead of trait adjectives, we considered occupations for X and also used a different set of prompts ( Supplementary Information ). We created a list of occupations, drawing from previously published lists 6 , 76 , 105 , 106 , 107 . We provide details about these occupations in the  Supplementary Information . We then computed association scores q ( x ;  v ,  θ ) between individual occupations x and AAE, following the same methodology as for computing adjective association scores, and ranked the occupations according to q ( x ;  v ,  θ ) for the language models. To probe the prestige associated with the occupations, we drew from a dataset of occupational prestige 105 that is based on the 2012 US General Social Survey and measures prestige on a scale from 1 (low prestige) to 9 (high prestige). For GPT4, we could not conduct the parts of the analysis that require scores for all occupations.
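
As a sketch of the prestige analysis, a simple correlation can stand in for the regressions shown in Extended Data Fig. 2; here q and prestige are assumed to map occupations to their AAE association scores and survey prestige ratings.

```python
from scipy.stats import pearsonr

def prestige_correlation(q: dict[str, float], prestige: dict[str, float]) -> tuple[float, float]:
    # Pearson correlation between association with AAE and occupational prestige;
    # a negative r means AAE-associated occupations tend to have lower prestige.
    occupations = sorted(set(q) & set(prestige))
    r, p_value = pearsonr([q[o] for o in occupations],
                          [prestige[o] for o in occupations])
    return r, p_value
```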

We again present average results for the five language models in the main article. Results for individual model versions are provided in the  Supplementary Information , where we also analyse variation across settings and prompts (Supplementary Tables 6 – 8 ).

Criminality analysis

The set-up of the criminality analysis is different from the previous experiments in that we did not compute aggregate association scores between certain tokens (such as trait adjectives) and AAE but instead asked the language models to make discrete decisions for each AAE and SAE text. More specifically, we simulated trials in which the language models were prompted to use AAE or SAE texts as evidence to make a judicial decision. We then aggregated the judicial decisions into summary statistics.

We conducted two experiments. In the first experiment, the language models were asked to determine whether a person accused of committing an unspecified crime should be acquitted or convicted. The only evidence provided to the language models was a statement made by the defendant, which was an AAE or SAE text. In the second experiment, the language models were asked to determine whether a person who committed first-degree murder should be sentenced to life or death. Similarly to the first (general conviction) experiment, the only evidence provided to the language models was a statement made by the defendant, which was an AAE or SAE text. Note that the AAE and SAE texts were the same texts as in the other experiments and did not come from a judicial context. Rather than testing how well language models could perform the tasks of predicting acquittal or conviction and life imprisonment or the death penalty (an application of AI that we do not support), we were interested in the extent to which the decisions of the language models, made in the absence of any real evidence, were impacted by dialect. Although providing the language models with extra evidence as well as the AAE and SAE texts would have made the experiments more similar to real trials, it would have confounded the effect that dialect has on its own (the key effect of interest), so we did not consider this alternative set-up here. We focused on convictions and death penalties specifically because these are the two areas of the criminal justice system for which racial disparities have been described in the most robust and indisputable way: African Americans represent about 12% of the adult population of the United States, but they represent 33% of inmates 108 and more than 41% of people on death row 109 .

Methodologically, we used prompts that asked the language models to make a judicial decision ( Supplementary Information ). For a specific text, t , which is in AAE or SAE, we computed P ( x ∣ v ( t );  θ ) for the tokens x that correspond to the judicial outcomes of interest (‘acquitted’ or ‘convicted’, and ‘life’ or ‘death’). T5 does not contain the tokens ‘acquitted’ and ‘convicted’ in its vocabulary, so it was excluded from the conviction analysis. Because the language models might assign different prior probabilities to the outcome tokens, we calibrated them using their probabilities in a neutral context following v , meaning without text t 104 . Whichever outcome had the higher calibrated probability was counted as the decision. We aggregated the detrimental decisions (convictions and death penalties) and compared their rates (percentages) between AAE and SAE texts. An alternative approach would have been to generate the judicial decision by sampling from the language models, which would have allowed us to induce the language models to generate justifications of their decisions. However, this approach has three disadvantages: first, encoder-only language models such as RoBERTa do not lend themselves to text generation; second, it would have been necessary to apply jail-breaking for some of the language models, which can have unpredictable effects, especially in the context of socially sensitive tasks; and third, model-generated justifications are frequently not aligned with actual model behaviours 110 .
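
The following sketch shows one such calibrated decision, reusing token_probability from the earlier sketch; the prompt wording and outcome tokens are illustrative, and outcomes spanning several subtokens would need full-sequence scoring.

```python
def judicial_decision(statement: str) -> str:
    prompt = f'The defendant said: "{statement}" The defendant should be'
    neutral = "The defendant should be"  # the same prompt without the text t
    calibrated = {
        outcome: token_probability(prompt, outcome) / token_probability(neutral, outcome)
        for outcome in ("acquitted", "convicted")
    }
    # The outcome with the higher calibrated probability counts as the decision.
    return max(calibrated, key=calibrated.get)
```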

We again present average results on the level of language models in the main article. Results for individual model versions are provided in the  Supplementary Information , where we also analyse variation across settings and prompts (Supplementary Figs. 9 and 10 and Supplementary Tables 9 – 12 ).

Scaling analysis

In the scaling analysis, we examined whether increasing the model size alleviated the dialect prejudice. Because the content of the covert stereotypes is quite consistent and does not vary substantially between models with different sizes, we instead analysed the strength with which the language models maintain these stereotypes. We split the model versions of all language models into four groups according to their size using the thresholds of 1.5 × 10^8, 3.5 × 10^8 and 1.0 × 10^10 parameters (Extended Data Table 7 ).

To evaluate the familiarity of the models with AAE, we measured their perplexity on the datasets used for the two evaluation settings 83 , 87 . Perplexity is defined as the exponentiated average negative log-likelihood of a sequence of tokens 111 , with lower values indicating higher familiarity. Perplexity requires the language models to assign probabilities to full sequences of tokens, which is only the case for GPT2 and GPT3.5. For RoBERTa and T5, we resorted to pseudo-perplexity 112 as the measure of familiarity. Results are only comparable across language models with the same familiarity measure. We excluded GPT4 from this analysis because it is not possible to compute perplexity using the OpenAI API.
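
For the autoregressive models, the familiarity measure can be sketched as follows, again reusing the GPT2 model and tokenizer loaded in the first sketch; masked language models such as RoBERTa require pseudo-perplexity instead.

```python
def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels supplied, the model returns the average token-level
        # negative log-likelihood as its loss.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()
```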

To evaluate the stereotype strength, we focused on the stereotypes about African Americans reported in ref. 29 , which the language models’ covert stereotypes agree with most strongly. We split the set of adjectives X into two subsets: the set of stereotypical adjectives in ref. 29 , X s , and the set of non-stereotypical adjectives, X n  =  X \ X s . For each model with a specific size, we then computed the average value of q ( x ;  v ,  θ ) for all adjectives in X s , which we denote as q s ( θ ), and the average value of q ( x ;  v ,  θ ) for all adjectives in X n , which we denote as q n ( θ ). The stereotype strength of a model θ , or more specifically the strength of the stereotypes about African Americans reported in ref. 29 , can then be computed as

$$\delta(\theta) = q_{s}(\theta) - q_{n}(\theta).$$

A positive value of δ ( θ ) means that the model associates the stereotypical adjectives in X s more strongly with AAE than the non-stereotypical adjectives in X n , whereas a negative value of δ ( θ ) indicates anti-stereotypical associations, meaning that the model associates the non-stereotypical adjectives in X n more strongly with AAE than the stereotypical adjectives in X s . For the overt stereotypes, we used the same split of adjectives into X s and X n because we wanted to directly compare the strength with which models of a certain size endorse the stereotypes overtly as opposed to covertly. All other aspects of the experimental set-up are identical to the main analyses of covert and overt stereotypes.
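
Given per-adjective association scores, the stereotype strength is a difference of two means, as in this short sketch (q maps adjectives to their average association scores and stereotypical corresponds to X s ).

```python
import numpy as np

def stereotype_strength(q: dict[str, float], stereotypical: set[str]) -> float:
    # Mean association of the stereotypical adjectives minus the mean
    # association of all remaining adjectives; positive values indicate
    # stereotypical and negative values anti-stereotypical associations.
    q_s = np.mean([v for x, v in q.items() if x in stereotypical])
    q_n = np.mean([v for x, v in q.items() if x not in stereotypical])
    return float(q_s - q_n)
```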

HF analysis

We compared GPT3.5 (ref. 49 ; text-davinci-003) with GPT3 (ref. 63 ; davinci), its predecessor language model that was trained without HF. Similarly to other studies that compare these two language models 113 , this set-up allowed us to examine the effects of HF training as done for GPT3.5 in isolation. We compared the two language models in terms of favourability and stereotype strength. For favourability, we followed the methodology we used for the overt-stereotype analysis and evaluated the average weighted favourability of the top five adjectives associated with AAE. For stereotype strength, we followed the methodology we used for the scaling analysis and evaluated the average strength of the stereotypes as reported in ref.  29 .

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

All the datasets used in this study are publicly available. The dataset released as ref. 87 can be found at https://aclanthology.org/2020.emnlp-main.473/ . The dataset released as ref. 83 can be found at http://slanglab.cs.umass.edu/TwitterAAE/ . The human stereotype scores used for evaluation can be found in the published articles of the Princeton Trilogy studies 29 , 30 , 31 , 34 . The most recent of these articles 34 also contains the human favourability scores for the trait adjectives. The dataset of occupational prestige that we used for the employability analysis can be found in the corresponding paper 105 . The Brown Corpus 114 , which we used for the  Supplementary Information (‘Feature analysis’), can be found at http://www.nltk.org/nltk_data/ . The dataset containing the parallel AAE, Appalachian English and Indian English texts 115 , which we used in the  Supplementary Information (‘Alternative explanations’), can be found at https://huggingface.co/collections/SALT-NLP/value-nlp-666b60a7f76c14551bda4f52 .

Code availability

Our code is written in Python and draws on the Python packages openai and transformers for language-model probing, as well as numpy, pandas, scipy and statsmodels for data analysis. The feature analysis described in the  Supplementary Information also uses the VALUE Python library 88 . Our code is publicly available on GitHub at https://github.com/valentinhofmann/dialect-prejudice .

Zhao, W. et al. WildChat: 1M ChatGPT interaction logs in the wild. In Proc. Twelfth International Conference on Learning Representations (OpenReview.net, 2024).

Zheng, L. et al. LMSYS-Chat-1M: a large-scale real-world LLM conversation dataset. In Proc. Twelfth International Conference on Learning Representations (OpenReview.net, 2024).

Gaebler, J. D., Goel, S., Huq, A. & Tambe, P. Auditing the use of language models to guide hiring decisions. Preprint at https://arxiv.org/abs/2404.03086 (2024).

Sheng, E., Chang, K.-W., Natarajan, P. & Peng, N. The woman worked as a babysitter: on biases in language generation. In Proc. 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing (eds Inui. K. et al.) 3407–3412 (Association for Computational Linguistics, 2019).

Nangia, N., Vania, C., Bhalerao, R. & Bowman, S. R. CrowS-Pairs: a challenge dataset for measuring social biases in masked language models. In Proc. 2020 Conference on Empirical Methods in Natural Language Processing (eds Webber, B. et al.) 1953–1967 (Association for Computational Linguistics, 2020).

Nadeem, M., Bethke, A. & Reddy, S. StereoSet: measuring stereotypical bias in pretrained language models. In Proc. 59th Annual Meeting of the Association for Computational Linguistics and 11th International Joint Conference on Natural Language Processing (eds Zong, C. et al.) 5356–5371 (Association for Computational Linguistics, 2021).

Cheng, M., Durmus, E. & Jurafsky, D. Marked personas: using natural language prompts to measure stereotypes in language models. In Proc. 61st Annual Meeting of the Association for Computational Linguistics (eds Rogers, A. et al.) 1504–1532 (Association for Computational Linguistics, 2023).

Bonilla-Silva, E. Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in America 4th edn (Rowman & Littlefield, 2014).

Golash-Boza, T. A critical and comprehensive sociological theory of race and racism. Sociol. Race Ethn. 2 , 129–141 (2016).

Kasneci, E. et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 103 , 102274 (2023).

Nay, J. J. et al. Large language models as tax attorneys: a case study in legal capabilities emergence. Philos. Trans. R. Soc. A 382 , 20230159 (2024).

Jiang, L. Y. et al. Health system-scale language models are all-purpose prediction engines. Nature 619 , 357–362 (2023).

Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V. & Kalai, A. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Adv. Neural Inf. Process. Syst. 30 , 4356–4364 (2016).

Caliskan, A., Bryson, J. J. & Narayanan, A. Semantics derived automatically from language corpora contain human-like biases. Science 356 , 183–186 (2017).

Basta, C., Costa-jussà, M. R. & Casas, N. Evaluating the underlying gender bias in contextualized word embeddings. In Proc. First Workshop on Gender Bias in Natural Language Processing (eds Costa-jussà, M. R. et al.) 33–39 (Association for Computational Linguistics, 2019).

Kurita, K., Vyas, N., Pareek, A., Black, A. W. & Tsvetkov, Y. Measuring bias in contextualized word representations. In Proc. First Workshop on Gender Bias in Natural Language Processing (eds Costa-jussà, M. R. et al.) 166–172 (Association for Computational Linguistics, 2019).

Abid, A., Farooqi, M. & Zou, J. Persistent anti-muslim bias in large language models. In Proc. 2021 AAAI/ACM Conference on AI, Ethics, and Society (eds Fourcade, M. et al.) 298–306 (Association for Computing Machinery, 2021).

Bender, E. M., Gebru, T., McMillan-Major, A. & Shmitchell, S. On the dangers of stochastic parrots: can language models be too big? In Proc. 2021 ACM Conference on Fairness, Accountability, and Transparency 610–623 (Association for Computing Machinery, 2021).

Li, L. & Bamman, D. Gender and representation bias in GPT-3 generated stories. In Proc. Third Workshop on Narrative Understanding (eds Akoury, N. et al.) 48–55 (Association for Computational Linguistics, 2021).

Tamkin, A. et al. Evaluating and mitigating discrimination in language model decisions. Preprint at https://arxiv.org/abs/2312.03689 (2023).

Rae, J. W. et al. Scaling language models: methods, analysis & insights from training Gopher. Preprint at https://arxiv.org/abs/2112.11446 (2021).

Green, L. J. African American English: A Linguistic Introduction (Cambridge Univ. Press, 2002).

King, S. From African American Vernacular English to African American Language: rethinking the study of race and language in African Americans’ speech. Annu. Rev. Linguist. 6 , 285–300 (2020).

Purnell, T., Idsardi, W. & Baugh, J. Perceptual and phonetic experiments on American English dialect identification. J. Lang. Soc. Psychol. 18 , 10–30 (1999).

Massey, D. S. & Lundy, G. Use of Black English and racial discrimination in urban housing markets: new methods and findings. Urban Aff. Rev. 36 , 452–469 (2001).

Dunbar, A., King, S. & Vaughn, C. Dialect on trial: an experimental examination of raciolinguistic ideologies and character judgments. Race Justice https://doi.org/10.1177/21533687241258772 (2024).

Rickford, J. R. & King, S. Language and linguistics on trial: Hearing Rachel Jeantel (and other vernacular speakers) in the courtroom and beyond. Language 92 , 948–988 (2016).

Grogger, J. Speech patterns and racial wage inequality. J. Hum. Resour. 46 , 1–25 (2011).

Katz, D. & Braly, K. Racial stereotypes of one hundred college students. J. Abnorm. Soc. Psychol. 28 , 280–290 (1933).

Gilbert, G. M. Stereotype persistence and change among college students. J. Abnorm. Soc. Psychol. 46 , 245–254 (1951).

Karlins, M., Coffman, T. L. & Walters, G. On the fading of social stereotypes: studies in three generations of college students. J. Pers. Soc. Psychol. 13 , 1–16 (1969).

Devine, P. G. & Elliot, A. J. Are racial stereotypes really fading? The Princeton Trilogy revisited. Pers. Soc. Psychol. Bull. 21 , 1139–1150 (1995).

Madon, S. et al. Ethnic and national stereotypes: the Princeton Trilogy revisited and revised. Pers. Soc. Psychol. Bull. 27 , 996–1010 (2001).

Bergsieker, H. B., Leslie, L. M., Constantine, V. S. & Fiske, S. T. Stereotyping by omission: eliminate the negative, accentuate the positive. J. Pers. Soc. Psychol. 102 , 1214–1238 (2012).

Ghavami, N. & Peplau, L. A. An intersectional analysis of gender and ethnic stereotypes: testing three hypotheses. Psychol. Women Q. 37 , 113–127 (2013).

Lambert, W. E., Hodgson, R. C., Gardner, R. C. & Fillenbaum, S. Evaluational reactions to spoken languages. J. Abnorm. Soc. Psychol. 60 , 44–51 (1960).

Ball, P. Stereotypes of Anglo-Saxon and non-Anglo-Saxon accents: some exploratory Australian studies with the matched guise technique. Lang. Sci. 5 , 163–183 (1983).

Thomas, E. R. & Reaser, J. Delimiting perceptual cues used for the ethnic labeling of African American and European American voices. J. Socioling. 8 , 54–87 (2004).

Atkins, C. P. Do employment recruiters discriminate on the basis of nonstandard dialect? J. Employ. Couns. 30 , 108–118 (1993).

Payne, K., Downing, J. & Fleming, J. C. Speaking Ebonics in a professional context: the role of ethos/source credibility and perceived sociability of the speaker. J. Tech. Writ. Commun. 30 , 367–383 (2000).

Rodriguez, J. I., Cargile, A. C. & Rich, M. D. Reactions to African-American vernacular English: do more phonological features matter? West. J. Black Stud. 28 , 407–414 (2004).

Billings, A. C. Beyond the Ebonics debate: attitudes about Black and standard American English. J. Black Stud. 36 , 68–81 (2005).

Kurinec, C. A. & Weaver, C. III “Sounding Black”: speech stereotypicality activates racial stereotypes and expectations about appearance. Front. Psychol. 12 , 785283 (2021).

Rosa, J. & Flores, N. Unsettling race and language: toward a raciolinguistic perspective. Lang. Soc. 46 , 621–647 (2017).

Salehi, B., Hovy, D., Hovy, E. & Søgaard, A. Huntsville, hospitals, and hockey teams: names can reveal your location. In Proc. 3rd Workshop on Noisy User-generated Text (eds Derczynski, L. et al.) 116–121 (Association for Computational Linguistics, 2017).

Radford, A. et al. Language models are unsupervised multitask learners. OpenAI https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf (2019).

Liu, Y. et al. RoBERTa: a robustly optimized BERT pretraining approach. Preprint at https://arxiv.org/abs/1907.11692 (2019).

Raffel, C. et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 21 , 1–67 (2020).

Ouyang, L. et al. Training language models to follow instructions with human feedback. In Proc. 36th Conference on Neural Information Processing Systems (eds Koyejo, S. et al.) 27730–27744 (NeurIPS, 2022).

OpenAI et al. GPT-4 technical report. Preprint at https://arxiv.org/abs/2303.08774 (2023).

Zhang, E. & Zhang, Y. Average precision. In Encyclopedia of Database Systems (eds Liu, L. & Özsu, M. T.) 192–193 (Springer, 2009).

Black, J. S. & van Esch, P. AI-enabled recruiting: what is it and how should a manager use it? Bus. Horiz. 63 , 215–226 (2020).

Hunkenschroer, A. L. & Luetge, C. Ethics of AI-enabled recruiting and selection: a review and research agenda. J. Bus. Ethics 178 , 977–1007 (2022).

Upadhyay, A. K. & Khandelwal, K. Applying artificial intelligence: implications for recruitment. Strateg. HR Rev. 17 , 255–258 (2018).

Tippins, N. T., Oswald, F. L. & McPhail, S. M. Scientific, legal, and ethical concerns about AI-based personnel selection tools: a call to action. Pers. Assess. Decis. 7 , 1 (2021).

Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D. & Lampos, V. Predicting judicial decisions of the European Court of Human Rights: a natural language processing perspective. PeerJ Comput. Sci. 2 , e93 (2016).

Surden, H. Artificial intelligence and law: an overview. Ga State Univ. Law Rev. 35 , 1305–1337 (2019).

Medvedeva, M., Vols, M. & Wieling, M. Using machine learning to predict decisions of the European Court of Human Rights. Artif. Intell. Law 28 , 237–266 (2020).

Weidinger, L. et al. Taxonomy of risks posed by language models. In Proc. 2022 ACM Conference on Fairness, Accountability, and Transparency 214–229 (Association for Computing Machinery, 2022).

Czopp, A. M. & Monteith, M. J. Thinking well of African Americans: measuring complimentary stereotypes and negative prejudice. Basic Appl. Soc. Psychol. 28 , 233–250 (2006).

Chowdhery, A. et al. PaLM: scaling language modeling with pathways. J. Mach. Learn. Res. 24 , 11324–11436 (2023).

Bai, Y. et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. Preprint at https://arxiv.org/abs/2204.05862 (2022).

Brown, T. B. et al. Language models are few-shot learners. In  Proc. 34th International Conference on Neural Information Processing Systems  (eds Larochelle, H. et al.) 1877–1901 (NeurIPS, 2020).

Dovidio, J. F. & Gaertner, S. L. Aversive racism. Adv. Exp. Soc. Psychol. 36 , 1–52 (2004).

Schuman, H., Steeh, C., Bobo, L. D. & Krysan, M. (eds) Racial Attitudes in America: Trends and Interpretations (Harvard Univ. Press, 1998).

Crosby, F., Bromley, S. & Saxe, L. Recent unobtrusive studies of Black and White discrimination and prejudice: a literature review. Psychol. Bull. 87 , 546–563 (1980).

Terkel, S. Race: How Blacks and Whites Think and Feel about the American Obsession (New Press, 1992).

Jackman, M. R. & Muha, M. J. Education and intergroup attitudes: moral enlightenment, superficial democratic commitment, or ideological refinement? Am. Sociol. Rev. 49 , 751–769 (1984).

Bonilla-Silva, E. The New Racism: Racial Structure in the United States, 1960s–1990s. In Race, Ethnicity, and Nationality in the United States: Toward the Twenty-First Century 1st edn (ed. Wong, P.) Ch. 4 (Westview Press, 1999).

Gao, L. et al. The Pile: an 800GB dataset of diverse text for language modeling. Preprint at https://arxiv.org/abs/2101.00027 (2021).

Ronkin, M. & Karn, H. E. Mock Ebonics: linguistic racism in parodies of Ebonics on the internet. J. Socioling. 3 , 360–380 (1999).

Dodge, J. et al. Documenting large webtext corpora: a case study on the Colossal Clean Crawled Corpus. In Proc. 2021 Conference on Empirical Methods in Natural Language Processing (eds Moens, M.-F. et al.) 1286–1305 (Association for Computational Linguistics, 2021).

Steed, R., Panda, S., Kobren, A. & Wick, M. Upstream mitigation is not all you need: testing the bias transfer hypothesis in pre-trained language models. In Proc. 60th Annual Meeting of the Association for Computational Linguistics (eds Muresan, S. et al.) 3524–3542 (Association for Computational Linguistics, 2022).

Feng, S., Park, C. Y., Liu, Y. & Tsvetkov, Y. From pretraining data to language models to downstream tasks: tracking the trails of political biases leading to unfair NLP models. In Proc. 61st Annual Meeting of the Association for Computational Linguistics (eds Rogers, A. et al.) 11737–11762 (Association for Computational Linguistics, 2023).

Köksal, A. et al. Language-agnostic bias detection in language models with bias probing. In Findings of the Association for Computational Linguistics: EMNLP 2023 (eds Bouamor, H. et al.) 12735–12747 (Association for Computational Linguistics, 2023).

Garg, N., Schiebinger, L., Jurafsky, D. & Zou, J. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proc. Natl Acad. Sci. USA 115 , E3635–E3644 (2018).

Ferrer, X., van Nuenen, T., Such, J. M. & Criado, N. Discovering and categorising language biases in Reddit. In Proc. Fifteenth International AAAI Conference on Web and Social Media (eds Budak, C. et al.) 140–151 (Association for the Advancement of Artificial Intelligence, 2021).

Ethayarajh, K., Choi, Y. & Swayamdipta, S. Understanding dataset difficulty with V-usable information. In Proc. 39th International Conference on Machine Learning (eds Chaudhuri, K. et al.) 5988–6008 (Proceedings of Machine Learning Research, 2022).

Hoffmann, J. et al. Training compute-optimal large language models. Preprint at https://arxiv.org/abs/2203.15556 (2022).

Liang, P. et al. Holistic evaluation of language models. Transactions on Machine Learning Research https://openreview.net/forum?id=iO4LZibEqW (2023).

Blodgett, S. L., Barocas, S., Daumé III, H. & Wallach, H. Language (technology) is power: A critical survey of “bias” in NLP. In Proc. 58th Annual Meeting of the Association for Computational Linguistics (eds Jurafsky, D. et al.) 5454–5476 (Association for Computational Linguistics, 2020).

Jørgensen, A., Hovy, D. & Søgaard, A. Challenges of studying and processing dialects in social media. In Proc. Workshop on Noisy User-generated Text (eds Xu, W. et al.) 9–18 (Association for Computational Linguistics, 2015).

Blodgett, S. L., Green, L. & O’Connor, B. Demographic dialectal variation in social media: a case study of African-American English. In Proc. 2016 Conference on Empirical Methods in Natural Language Processing (eds Su, J. et al.) 1119–1130 (Association for Computational Linguistics, 2016).

Jørgensen, A., Hovy, D. & Søgaard, A. Learning a POS tagger for AAVE-like language. In Proc. 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (eds Knight, K. et al.) 1115–1120 (Association for Computational Linguistics, 2016).

Blodgett, S. L. & O’Connor, B. Racial disparity in natural language processing: a case study of social media African-American English. Preprint at https://arxiv.org/abs/1707.00061 (2017).

Blodgett, S. L., Wei, J. & O’Connor, B. Twitter universal dependency parsing for African-American and mainstream American English. In Proc. 56th Annual Meeting of the Association for Computational Linguistics (eds Gurevych, I. & Miyao, Y.) 1415–1425 (Association for Computational Linguistics, 2018).

Groenwold, S. et al. Investigating African-American vernacular English in transformer-based text generation. In Proc. 2020 Conference on Empirical Methods in Natural Language Processing (eds Webber, B. et al.) 5877–5883 (Association for Computational Linguistics, 2020).

Ziems, C., Chen, J., Harris, C., Anderson, J. & Yang, D. VALUE: Understanding dialect disparity in NLU. In Proc. 60th Annual Meeting of the Association for Computational Linguistics (eds Muresan, S. et al.) 3701–3720 (Association for Computational Linguistics, 2022).

Davidson, T., Bhattacharya, D. & Weber, I. Racial bias in hate speech and abusive language detection datasets. In Proc. Third Workshop on Abusive Language Online (eds Roberts, S. T. et al.) 25–35 (Association for Computational Linguistics, 2019).

Sap, M., Card, D., Gabriel, S., Choi, Y. & Smith, N. A. The risk of racial bias in hate speech detection. In Proc. 57th Annual Meeting of the Association for Computational Linguistics (eds Korhonen, A. et al.) 1668–1678 (Association for Computational Linguistics, 2019).

Harris, C., Halevy, M., Howard, A., Bruckman, A. & Yang, D. Exploring the role of grammar and word choice in bias toward African American English (AAE) in hate speech classification. In Proc. 2022 ACM Conference on Fairness, Accountability, and Transparency 789–798 (Association for Computing Machinery, 2022).

Gururangan, S. et al. Whose language counts as high quality? Measuring language ideologies in text data selection. In Proc. 2022 Conference on Empirical Methods in Natural Language Processing (eds Goldberg, Y. et al.) 2562–2580 (Association for Computational Linguistics, 2022).

Gaies, S. J. & Beebe, J. D. The matched-guise technique for measuring attitudes and their implications for language education: a critical assessment. In Language Acquisition and the Second/Foreign Language Classroom (ed. Sadtano, E.) 156–178 (SEAMEO Regional Language Centre, 1991).

Hudson, R. A. Sociolinguistics (Cambridge Univ. Press, 1996).

Delobelle, P., Tokpo, E., Calders, T. & Berendt, B. Measuring fairness with biased rulers: a comparative study on bias metrics for pre-trained language models. In Proc. 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (eds Carpuat, M. et al.) 1693–1706 (Association for Computational Linguistics, 2022).

Mattern, J., Jin, Z., Sachan, M., Mihalcea, R. & Schölkopf, B. Understanding stereotypes in language models: Towards robust measurement and zero-shot debiasing. Preprint at https://arxiv.org/abs/2212.10678 (2022).

Eisenstein, J., O’Connor, B., Smith, N. A. & Xing, E. P. A latent variable model for geographic lexical variation. In Proc. 2010 Conference on Empirical Methods in Natural Language Processing (eds Li, H. & Màrquez, L.) 1277–1287 (Association for Computational Linguistics, 2010).

Doyle, G. Mapping dialectal variation by querying social media. In Proc. 14th Conference of the European Chapter of the Association for Computational Linguistics (eds Wintner, S. et al.) 98–106 (Association for Computational Linguistics, 2014).

Huang, Y., Guo, D., Kasakoff, A. & Grieve, J. Understanding U.S. regional linguistic variation with Twitter data analysis. Comput. Environ. Urban Syst. 59 , 244–255 (2016).

Eisenstein, J. What to do about bad language on the internet. In Proc. 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (eds Vanderwende, L. et al.) 359–369 (Association for Computational Linguistics, 2013).

Eisenstein, J. Systematic patterning in phonologically-motivated orthographic variation. J. Socioling. 19 , 161–188 (2015).

Jones, T. Toward a description of African American vernacular English dialect regions using “Black Twitter”. Am. Speech 90 , 403–440 (2015).

Christiano, P. F. et al. Deep reinforcement learning from human preferences. In Proc. 31st International Conference on Neural Information Processing Systems (eds von Luxburg, U. et al.) 4302–4310 (NeurIPS, 2017).

Zhao, T. Z., Wallace, E., Feng, S., Klein, D. & Singh, S. Calibrate before use: Improving few-shot performance of language models. In Proc. 38th International Conference on Machine Learning (eds Meila, M. & Zhang, T.) 12697–12706 (Proceedings of Machine Learning Research, 2021).

Smith, T. W. & Son, J. Measuring Occupational Prestige on the 2012 General Social Survey (NORC at Univ. Chicago, 2014).

Zhao, J., Wang, T., Yatskar, M., Ordonez, V. & Chang, K.-W. Gender bias in coreference resolution: evaluation and debiasing methods. In Proc. 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (eds Walker, M. et al.) 15–20 (Association for Computational Linguistics, 2018).

Hughes, B. T., Srivastava, S., Leszko, M. & Condon, D. M. Occupational prestige: the status component of socioeconomic status. Collabra Psychol. 10 , 92882 (2024).

Gramlich, J. The gap between the number of blacks and whites in prison is shrinking. Pew Research Centre https://www.pewresearch.org/short-reads/2019/04/30/shrinking-gap-between-number-of-blacks-and-whites-in-prison (2019).

Walsh, A. The criminal justice system is riddled with racial disparities. Prison Policy Initiative Briefing https://www.prisonpolicy.org/blog/2016/08/15/cjrace (2016).

Röttger, P. et al. Political compass or spinning arrow? Towards more meaningful evaluations for values and opinions in large language models. Preprint at https://arxiv.org/abs/2402.16786 (2024).

Jurafsky, D. & Martin, J. H. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition (Prentice Hall, 2000).

Salazar, J., Liang, D., Nguyen, T. Q. & Kirchhoff, K. Masked language model scoring. In Proc. 58th Annual Meeting of the Association for Computational Linguistics (eds Jurafsky, D. et al.) 2699–2712 (Association for Computational Linguistics, 2020).

Santurkar, S. et al. Whose opinions do language models reflect? In Proc. 40th International Conference on Machine Learning (eds Krause, A. et al.) 29971–30004 (Proceedings of Machine Learning Research, 2023).

Francis, W. N. & Kucera, H. Brown Corpus Manual (Brown Univ., 1979).

Ziems, C. et al. Multi-VALUE: a framework for cross-dialectal English NLP. In Proc. 61st Annual Meeting of the Association for Computational Linguistics (eds Rogers, A. et al.) 744–768 (Association for Computational Linguistics, 2023).

Acknowledgements

V.H. was funded by the German Academic Scholarship Foundation. P.R.K. was funded in part by the Open Phil AI Fellowship. This work was also funded by the Hoffman-Yee Research Grants programme and the Stanford Institute for Human-Centered Artificial Intelligence. We thank A. Köksal, D. Hovy, K. Gligorić, M. Harrington, M. Casillas, M. Cheng and P. Röttger for feedback on an earlier version of the article.

Author information

Authors and Affiliations

Allen Institute for AI, Seattle, WA, USA

Valentin Hofmann

University of Oxford, Oxford, UK

LMU Munich, Munich, Germany

Stanford University, Stanford, CA, USA

Pratyusha Ria Kalluri & Dan Jurafsky

The University of Chicago, Chicago, IL, USA

Sharese King

Contributions

V.H., P.R.K., D.J. and S.K. designed the research. V.H. performed the research and analysed the data. V.H., P.R.K., D.J. and S.K. wrote the paper.

Corresponding authors

Correspondence to Valentin Hofmann or Sharese King.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature thanks Rodney Coates and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Weighted average favourability of top stereotypes about African Americans in humans and top overt as well as covert stereotypes about African Americans in language models (LMs).

The overt stereotypes are more favourable than the reported human stereotypes, except for GPT2. The covert stereotypes are substantially less favourable than the least favourable reported human stereotypes from 1933. Results without weighting, which are very similar, are provided in Supplementary Fig. 6 .

Extended Data Fig. 2 Prestige of occupations associated with AAE (positive values) versus SAE (negative values), for individual language models.

The shaded areas show 95% confidence bands around the regression lines. The association with AAE versus SAE is negatively correlated with occupational prestige, for all language models. We cannot conduct this analysis with GPT4 since the OpenAI API does not give access to the probabilities for all occupations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Hofmann, V., Kalluri, P.R., Jurafsky, D. et al. AI generates covertly racist decisions about people based on their dialect. Nature (2024). https://doi.org/10.1038/s41586-024-07856-5

Received : 08 February 2024

Accepted : 19 July 2024

Published : 28 August 2024

DOI : https://doi.org/10.1038/s41586-024-07856-5

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

By submitting a comment you agree to abide by our Terms and Community Guidelines . If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing: AI and Robotics newsletter — what matters in AI and robotics research, free to your inbox weekly.

online job research paper

American Psychological Association Logo

Writing for the Web

  • Mid-Senior Career
  • Marketing and Advertising
  • Social Media and Internet

Supercharge Your Presence

December 2018

  • Slides (PDF, 4MB)
  • Transcript (DOC, 59KB)

This content is disabled due to your privacy settings. To re-enable, please adjust your cookie preferences.

Are you a private practice clinician interested in learning how to market yourself through online publication? Are you a researcher or academic interested in learning how to share your ideas with the public via the internet? Or, are you a graduate student looking to sow SEO seeds by publishing on the web?

Our latest webinar will teach you:

  • The value of online publishing outside of academia.
  • Web-friendly writing techniques and formatting.
  • Strategies for creating excellent content.
  • Methods and venues for getting published.

Writing for the Web: Resources (DOC, 15KB)

Blog template (DOC, 35KB)

This program does not offer CE credit.

Kyler Shumway, PsyD

Kyler Shumway, PsyD

President and chief clinical officer of Deep Eddy Psychotherapy , one of the leading outpatient mental health practices in Texas. He is also a bestselling author with his fourth book, Neurodiversity and the Myth of Normal , being released soon as an Amazon Audible Original. He has been featured by Forbes , The New York Times , CNN, and more for his work in combatting the loneliness epidemic. As a licensed psychologist, thought leader, and TEDx speaker who has spoken to audiences across the nation (as well as internationally), his mission is to help people learn to love themselves and others, build satisfying and meaningful relationships, and find their place to belong.

More in this series

This webinar covers marketing strategies that will help you attract readers to your book.

May 2019 On Demand Webinar

Learn how to become a public speaker as a mental health professional.

March 2019 On Demand Webinar

Learn about search engine optimization, what is it and how do you find SEO success when you're not super tech-savvy.

October 2018 On Demand Webinar

Learn about options for designing and hosting as website as well as basic web design principles.

September 2018 On Demand Webinar

U.S. flag

An official website of the United States government

Here's how you know

Official websites use .gov A .gov website belongs to an official government organization in the United States.

Secure .gov websites use HTTPS A lock ( Lock A locked padlock ) or https:// means you’ve safely connected to the .gov website. Share sensitive information only on official, secure websites.

Social Security Matters

Social security administration announces new efforts to simplify ssi applications.

August 27, 2024 • By Nate Osburn, Deputy Commissioner, Office of Communications

Last Updated: August 27, 2024

Social Security Administration Logo

The initial step – known as iClaim expansion – aims to establish a fully online, simplified iClaim application that leverages user-tested, plain-language questions, prepopulated answers where possible, seamless step-by-step transitions, and more. The online application aims to reduce the time spent applying as well as the processing time for initial claim decisions.

“Over the past year, we have asked many applicants and advocates – as well as our workforce – how we could make the SSI application process easier and simpler. Now, we are taking an important first step to do just that,” said Martin O’Malley, Commissioner of Social Security.

“People in our communities who need this crucial safety net deserve the dignity of an application process that is less burdensome and more accessible than what we now have, and we’re committed to achieving that vision over the next few years.”

The rollout of the iClaim expansion will generally be available to first-time applicants between 18 and almost 65 who never married and are concurrently applying for Social Security benefits and SSI. A goal of the second phase – currently targeted for late 2025 – is to expand this to all applicants.

The Federal Register Notice that supports this effort was published today and reflects changes based on what Social Security previously received. To read it, please visit Federal Register: Agency Information Collection Activities: Proposed Request .

Subsequent SSI simplification steps will incorporate lessons learned from the iClaim expansion into in-person, phone, mobile, and paper-based processes for SSI applications. As part of that, the agency plans to develop a separate simplified child SSI application.

All of these efforts will support and streamline the way Social Security’s staff technicians and applicants work together, providing an applicant journey that reflects continuous feedback gathered from the agency’s Customer Experience team, particularly from underserved communities.

Did you find this Information helpful?

Tags: online services , SSI , supplemental security income

About the Author

Nate Osburn, Deputy Commissioner, Office of Communications

Nate Osburn, Deputy Commissioner, Office of Communications

Related articles, progress with timely delivery of payments to people receiving ssi, supplemental security income (ssi) celebrates 50 years as a lifeline , expanding access to ssi: getting your application started online, social security administration expands outreach and access for supplemental security income.

September 3, 2024 8:55AM

Making the SSI application available online is a big win for accessibility. This will especially benefit those who may have difficulty with in-person or paper-based applications

September 3, 2024 5:37AM

Any chance we could get a post on the Amazon Vine program and how it can impact benefits? I know it can cause problems with SSI, but what about people who are solely on SSDI? It would be extremely helpful to have an official, written SSA stance. The program issues 1099-NEC forms (non-employee compensation) reporting the value of items a person in the program received (for free) to review. Tax officials vary in how they handle it, but some treat it as self-employment and others as hobby income. The Amazon Vine terms call these items “promotional”, so it’s a little weird they’re filling out tax forms that say it’s compensation.

I tried emails and got responses like an AI looked for buzzwords with no real understanding of what was asked. I’ve tried calling and was told that the person “didn’t think” it mattered to SSDI, but “didn’t think” isn’t a guarantee. I’ve tried digging through policy and found “§ 404.1575. Evaluation guides if you are self-employed.” But, I’m not sure this even counts as self-employment. I didn’t apply to participate in the program – Amazon just randomly invited me because of reviews I’ve left, like every customer can do. There’s no contract negotiation or anything like it. And, for the items received, we have to actually try the product AND must wait six months before we can sell if we choose to do that. That means that any item sold is sold “used” and worth less (if any worth at all) than the value reported on the 1099-NEC.

Some people report the 1099-NEC as self-employment income, others as hobby income. Others have contacted the IRS about how to file taxes and claim they were told by the IRS that the 1099-NEC is not how Amazon should be handling things given program details.

I’d be happy to provide further details on the program if someone emails me. It’s very frustrating when you’re trying your best to make sure you aren’t breaking any rules but can’t get a clear answer anywhere. I’m not the only person very worried that operating off of a random SSA employee’s assumption that it’s fine will lead to fraud accusations down the road. It shouldn’t be this hard to get a firm answer.

September 3, 2024 3:14AM

Dear sir, Rajesh Kumar in India. Dear sir, out-of-country interested person, U.S.A. Social Security protect safety available, India born. I am not false. Please, all the fields success. Help me. Thank you.

September 3, 2024 2:52AM

Dear sir, Rajesh Kumar in India. Social Security personal safety, all the fields success. Help me. Thank you.

August 31, 2024 12:28AM

The Social Security Administration or Social Security office at 1871 Rockaway Pkwy, Brooklyn, New York City 11236 in Canarsie took control of the Social Security website support contact; when I contact them by email, they send me automated replies with no work done, or "water damage" emails.

August 29, 2024 5:19PM

The asset limit rule for SSI needs to be increased. It was set at $2,000 in 1962 when SSI was created, and this is a hardship for those on SSI. Those on SSI who work are only allowed to earn $65 a week, and that needs to be raised too; that figure was also set decades ago and is not practical or realistic. A person on SSI who only gets $1,200 a month (combined SSA + SSI) cannot live on that unless they are in subsidized housing, and there is not enough subsidized housing for all those who need it. So serious reforms need to be done to raise the limits on what people on SSI can earn and what they can save.

August 29, 2024 2:36PM

I have tried multiple times to get on the ss.gov site. I made an ID.me account, and now every time I sign in with the ID.me account it takes me right back to the sign-in page. I tried a different way to sign in: I tried the ID.me sign-in and it told me to make a Login.gov account, so I did, and it took me right back to the sign-in page. I am frustrated with this new way to sign in, and I have yet to get back onto my Social Security page! What is going on?


August 30, 2024 10:24AM

Hello, TJ. Thank you for reading our blog. If you need help with transitioning your account, you can contact the Login.gov help center for assistance with Login.gov accounts, and the ID.me support center for assistance with ID.me accounts. Call our toll-free number at 1-800-772-1213 (TTY 1-800-325-0778). Our National 800 number is available Monday through Friday, 8:00 a.m. to 7:00 p.m. Please say "Help Desk" at the voice prompt. We also encourage you to visit our Frequently Asked Questions for Transitioning Your Social Security Username to Login.gov. We hope this helps.

August 28, 2024 4:08PM

Question: If you are going to receive an insurance claim benefit from your parent passing away, does Social Security receive this or part of this?

I'm only receiving retirement benefits from SSA.

August 29, 2024 2:39PM

Susie, your Social Security benefits won't be affected by the life insurance payout from your parent's passing. Social Security doesn't take any of the life insurance money. That money is paid directly to you as the beneficiary.

Life insurance payouts aren’t considered income as far as Social Security is concerned. And since you only get retirement benefits from Social Security, getting life insurance money won’t change that at all. The life insurance money also isn’t taxable income or counted as earnings that could impact your Social Security.

You can go ahead and claim the life insurance without it messing with your regular Social Security checks. Everything will keep coming in as usual for your retirement benefits. The life insurance payout won’t cause any deductions or changes to what Social Security sends you each month.


80 Impactful Research Topics for High School Students


By Rebekah Pierce

Educational writer and former teacher

3 minute read

Choosing the right research topic can be the secret ingredient to making your high school research paper not only impressive but also fun to write. Let's face it - no one wants to slog through a boring topic that has been done a million times before.

A good research topic is like the foundation of a strong building. It sets the stage for everything else - not to mention that it helps you develop critical thinking and analytical skills that you’ll need as you move into college and beyond. 

Here are some of the best research paper ideas (and some tips to help you get started with writing about these fun research topics for high school projects).

How to Choose the Right Research Paper Topic

Begin by identifying what interests you most. What do you want to learn more about? These don’t necessarily have to be controversial topics. Just think about what might be a good research topic for your interests.

Once you have a few ideas for a good topic, start the research process to hunt down resources and relevant literature. Aim for the best research paper topics that will allow for a comparative study, such as analyzing different perspectives on a social issue or contrasting historical events. 

Make sure your chosen topic is neither too broad nor too narrow. Finding the right balance is incredibly important if you want to produce a focused and impactful paper.

Do your own research through Polygence!

Polygence pairs you with an expert mentor in your area of passion. Together, you work to create a high quality research project that is uniquely your own.

How to Get Started with Your Research Paper Writing

First up, do a thorough literature review to gather existing research and insights relevant to your topic. This may even inspire new angles for you to explore!

Organize your findings and outline the structure of your paper to keep things clear, tight, and tidy. Write an abstract to break down your intentions.

As you write a research paper, critically analyze the information and present your arguments coherently, allowing your voice to shine through (objectively) while incorporating scholarly evidence. In the introduction, grab the reader with an enticing bit of information, like a hook, quote, or stat.

Edit, edit, and edit some more - then, get ready to publish!

Need some inspiration to get the creative juices flowing? Keep reading to discover the best research topics for high school students.

Technology Research Paper Topics

The Influence of Artificial Intelligence on Modern Society: Artificial Intelligence (AI) is no longer just a concept from sci-fi movies. What are the ethical considerations? 

Cybersecurity Threats and Measures in the Digital Age: With the rise of digital technology, cybersecurity is more important than ever. 

The Future of Renewable Energy Technologies: Solar panels, wind turbines, and electric cars are just the beginning. 

Impact of Social Media on Youth Behavior: Social media platforms like Instagram, TikTok, and Snapchat dominate the lives of teenagers - for better or worse.

The Role of Technology in Modern Education: How are digital tools and online platforms enhancing learning experiences?

Health and Medicine Topics

The Effects of Diet and Nutrition on Mental Health: What we eat doesn't just affect our physical health.

Advances in Cancer Research and Treatment: Explore the latest advances in cancer research.

The Impact of Vaccines on Public Health: Are vaccines safe? What does the future hold?

Mental Health Issues Among Teenagers: For these psychology research paper topics for high schoolers, explore the many factors leading to an increased incidence of mental health issues in teens, from academics to Snapchat and everything in between.

The Role of Genetics in Personalized Medicine: Take a closer look at how genetic studies are being used to create personalized, in-depth treatment plans for patients.

Making a difference starts with you

Interested in Environmental Science? We'll match you with an expert mentor who will help you explore your next project.

Environment Topics

Climate Change and Its Impact on Global Ecosystems: Climate change is affecting us all. Take a look at how melting ice caps and rising temperatures are impacting ecosystems around the world. 

Sustainable Practices in Urban Development: To minimize our environmental impact, we need to think green. But what does this mean for urban development?

The Effects of Pollution on Marine Life: How can we reduce the impact of pollution on marine life?

Renewable Energy Sources: Benefits and Challenges: Renewable energy sources like wind, solar, and hydroelectric power offer numerous benefits but also come with challenges. Explore these.

The Importance of Biodiversity Conservation: How can we incorporate strategies to protect endangered habitats?

Social Issues and Sociology Research Topics

The Impact of Social Media on Interpersonal Relationships: Social media is shaking up the way we interact with others. 

The Role of Education in Reducing Inequality: Education is the number one way to reduce inequality. Explore strategies and policies that can help with this.

Gender Equality in the Workplace: Gender equality remains a significant issue in workplaces worldwide - talk about why and how to address this.

The Effects of Poverty on Community Health: Explore how poverty has far-reaching impacts on nutrition, healthcare access, and overall health and well-being.

Immigration Policies and Their Social Implications: Immigration policies are far-reaching, impacting more than just immigrant communities. 

History Argumentative Essay Topics

The Causes and Effects of World War II: Research the causes and ripple effects of the Second World War.

The Impact of the Civil Rights Movement on Modern Society: Ask how the Civil Rights Movement impacted racial equality today - and look at the continuing challenges.

Ancient Civilizations and Their Contributions to the Modern World: How do these ancient achievements influence us today?

The History of Space Exploration: Space exploration has captivated humanity for decades - but what’s the background?

The Evolution of Democracy Throughout History: Democracy has evolved significantly over the centuries - detail this evolution.

Science Research Topics

The Exploration of Space: Past, Present, and Future: What are the scientific and societal benefits of exploring space?

Genetic Engineering and Its Ethical Implications: Are there ethical considerations (or risks) of genetic engineering? Take a look at them. 

The Impact of Climate Change on Natural Disasters: Climate change is increasing the frequency and severity of natural disasters. 

Advances in Renewable Energy Technology: Renewable energy technology is advancing rapidly - what innovations hold the most promise?

The Role of Science in Solving Global Problems: How can science help solve problems related to disease, poverty, and climate change? 

Literature Research Topics

The Influence of Classic Literature on Modern Writing: Ever wondered how Shakespeare still affects today's bestsellers? A research paper on how classic literature influences modern writing can uncover fascinating parallels and divergences.

Themes of Dystopia in Contemporary Literature: From "The Hunger Games" to "1984," dystopian themes have captivated readers for ages. 

The Role of Literature in Social Change: Literature has the power to inspire revolutions. Explore books like "Uncle Tom's Cabin" and "To Kill a Mockingbird" and how they created societal shifts.

Comparative Analysis of Major Literary Movements: Compare the themes, styles, and impacts on society of different literary movements like Romanticism, Realism, and Modernism. 

The Impact of Digital Media on Reading Habits: Is the Kindle killing books? If so, research how and why in this essay topic.

Economics Topics

The Effects of Globalization on Local Economies: Globalization is reshaping economies worldwide - explore its impacts on local businesses and job markets.

The Role of Technology in Transforming the Job Market: From AI to automation, technology is revolutionizing jobs. 

Economic Impacts of Climate Change: Climate change isn't just an environmental issue; it's an economic one too.

The Influence of Consumer Behavior on Market Trends: Ever bought something because it was trending? Study how consumer behavior shapes market trends.

The Future of Cryptocurrencies in the Global Economy: Bitcoin, Ethereum, Dogecoin - what's the deal? 

Education Research Paper Topics

The Impact of Online Learning on Student Performance: Online learning is more relevant now than ever, which you’ll explore in this education research topic.

The Role of Technology in Modern Education: How are smart boards and tablets changing classrooms for public schools? How can they improve academic achievement?

Comparative Analysis of Education Systems Around the World: Why do some countries excel in education while others lag? Compare different education systems to see what works and doesn’t.

The Effects of Standardized Testing on Student Learning: Standardized tests are controversial; research their impacts on student learning and whether they accurately measure academic performance and predict academic success, particularly related to special education, elementary school, and early childhood education.

Innovations in Educational Methodologies: From flipped classrooms in elementary education to gamification for middle school, explore different teaching methods with this research question.

Arts Research Project Ideas

The Evolution of Visual Arts Through Different Periods: Study how visual arts have evolved from the Renaissance to Postmodernism.

The Influence of Digital Media on Traditional Arts: Analyze how digital media is affecting traditional arts like painting and sculpture.

The Role of Art in Cultural Preservation: Art isn’t just for aesthetics; it preserves culture too. 

Comparative Study of Art Movements: Compare movements like Impressionism and Cubism.

The Impact of Public Art on Community Identity: Murals, sculptures, and public installations - how do they shape community identity and pride? 

Athletics Topics

The Impact of Sports on Academic Performance: Do athletes perform better academically? 

The Role of Athletics in College Admissions: Sports can be a ticket to higher education. Research how athletics influence college admissions and scholarships.

The Effects of Physical Activity on Mental Health: Exercise isn’t just for the body; it’s also for the mind. Explore that in these research ideas.

The Influence of Sports on Leadership Skills: Sports teach more than physical skills. Analyze how participation in sports cultivates leadership qualities.

The Future of Technology in Sports Training: From wearable tech to virtual reality, technology is revolutionizing sports training. 

Music Research Paper Topics

The Influence of Classical Music on Modern Genres: Ever heard classical elements in pop songs? Explore how classical music influences modern genres.

The Role of Music in Cultural Identity: Music defines cultures. Study how different genres contribute to cultural identity.

The Effects of Music Therapy on Mental Health: Music heals. Research why that is.

Evolution of Music Technology: From vinyl to Spotify, music tech has come a long way. 

The Impact of Music Education on Academic Performance: Does music make you smarter? 

Government and Politics Persuasive Essay Topics

The Impact of Government Policies on Economic Growth: Government policies can make or break economies. 

Comparative Analysis of Political Systems: Democracy, autocracy, and everything in between - compare different political systems and their effectiveness.

The Role of Youth in Political Movements: Young people are powerful when it comes to historical and current political movements. 

Government Response to Climate Change: How are governments tackling climate change? 

The Influence of Lobbying on Legislation: Lobbying shapes laws. Investigate how.

Writing and Communication Topics

The Evolution of Writing Styles Over the Centuries: Writing styles have changed dramatically. Study their evolution and what influenced these changes.

The Impact of Digital Media on Writing and Communication: Digital media is reshaping communication. 

Creative Writing Techniques for Young Authors: Explore techniques and tips to enhance creative writing.

The Role of Writing in Personal Expression: Research how writing can be a powerful tool for self-expression.

The Importance of Effective Communication Skills: Study why effective communication skills are crucial in various aspects of life.

Society, Culture, and Social Science Topics

The Effects of Social Media on Cultural Norms: Social media is changing culture. Research its impacts on cultural norms and behaviors.

The Role of Tradition in Modern Society: Traditions persist in modern times. Study the role of ancient traditions in contemporary society.

Comparative Analysis of Cultural Practices Around the World: Different cultures, different practices. Compare cultural practices and their meanings worldwide.

The Influence of Media on Public Perception: Media shapes how we see the world. 

The Impact of Globalization on Cultural Identity: Globalization is blending cultures. Research its effects on cultural identities.

Business and Entrepreneurship Topics

The Impact of Startups on the Economy: Startups are economic powerhouses. Study their impacts on local and global economies.

The Role of Innovation in Business Success: Research how innovation influences business achievements.

Ethical Considerations in Business Practices: Investigate ethical considerations and their impacts on business practices.

The Influence of Digital Marketing on Consumer Behavior: Analyze the effects of digital marketing on consumer behavior and purchasing decisions.

Strategies for Successful Entrepreneurship: Want to start a business? Explore strategies.

Engaging in Research with Polygence's Core Program

Picking the right research topic can set the tone for your entire project. It's not just about getting a good grade—it’s about developing critical thinking and enhancing your analytical skills. Your high school research paper topics can even set the stage for future academic pursuits or careers. 

Polygence’s Core Program offers a variety of resources to help you nail every aspect of your research paper. Sign up today!

By selecting an impactful research topic , you're not just writing a paper - you're developing research skills that will serve you for a lifetime. These skills can enhance your understanding of your current school curriculum and prepare you for the rigorous demands of higher education, setting a strong foundation for your academic future.

The Education Hub

https://educationhub.blog.gov.uk/2024/08/20/gcse-results-day-2024-number-grading-system/

GCSE results day 2024: Everything you need to know including the number grading system


Thousands of students across the country will soon be finding out their GCSE results and thinking about the next steps in their education.   

Here we explain everything you need to know about the big day, from when results day is, to the current 9-1 grading scale, to what your options are if your results aren’t what you’re expecting.  

When is GCSE results day 2024?  

GCSE results day will take place on Thursday 22 August.

The results will be made available to schools on Wednesday and will be ready to pick up from your school by 8am on Thursday morning.

Schools will issue their own instructions on how and when to collect your results.   

When did we change to a number grading scale?  

The shift to the numerical grading system was introduced in England in 2017, first in English language, English literature, and maths.

By 2020, all subjects had shifted to number grades. This means anyone with GCSE results from 2017 to 2020 may have a combination of both letters and numbers.

The numerical grading system was introduced to signal more challenging GCSEs and to better differentiate between students' abilities, particularly at the higher grades between A* and C. There used to be only 4 grades between A* and C (A*, A, B, and C); with the numerical grading scale there are now 6 (grades 9 down to 4).

What do the number grades mean?  

The grades are ranked from 1, the lowest, to 9, the highest.  

The grades don’t exactly translate, but the two grading scales meet at three points as illustrated below.  

[Image: a comparison chart from the UK Department for Education showing the new GCSE grades (9 to 1) alongside the old grades (A* to G). Grade 9 aligns with A*, grades 8 and 7 with A, and so on, down to U, which remains unchanged.]

The bottom of grade 7 is aligned with the bottom of grade A, while the bottom of grade 4 is aligned to the bottom of grade C.    

Meanwhile, the bottom of grade 1 is aligned to the bottom of grade G.  
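
For readers who prefer the anchor points written out explicitly, here is a minimal sketch in Python (purely illustrative; the function and names are our own, not from the Department for Education) encoding the three exact alignments described above.

# A minimal illustrative sketch of the three exact anchor points between
# the numerical and letter GCSE scales. Only these alignments are exact;
# the two scales do not otherwise translate grade-for-grade.
ANCHORS = {"A": 7, "C": 4, "G": 1}  # bottom of letter grade -> bottom of number grade

def at_least_old_grade(number_grade: int, letter: str) -> bool:
    """Return True if a numerical grade sits at or above the bottom of the
    given anchored letter grade. Only 'A', 'C', and 'G' are anchored."""
    if letter not in ANCHORS:
        raise ValueError("only grades A, C, and G have exact anchor points")
    return number_grade >= ANCHORS[letter]

# Example: a grade 5 covers at least the bottom of old grade C,
# but falls below the bottom of old grade A.
assert at_least_old_grade(5, "C")
assert not at_least_old_grade(5, "A")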

What to do if your results weren’t what you were expecting?  

If your results weren’t what you were expecting, firstly don’t panic. You have options.  

First things first, speak to your school or college – they could be flexible on entry requirements if you’ve just missed your grades.   

They’ll also be able to give you the best tailored advice on whether re-sitting while studying for your next qualifications is a possibility.   

If you’re really unhappy with your results you can enter to resit all GCSE subjects in summer 2025. You can also take autumn exams in GCSE English language and maths.  

Speak to your sixth form or college to decide when it’s the best time for you to resit a GCSE exam.  

Look for other courses with different grade requirements     

Entry requirements vary depending on the college and course. Ask your school for advice, and call your college or another one in your area to see if there’s a space on a course you’re interested in.    

Consider an apprenticeship    

Apprenticeships combine a practical training job with study too. They're open to you if you're 16 or over, living in England, and not in full-time education.

As an apprentice you’ll be a paid employee, have the opportunity to work alongside experienced staff, gain job-specific skills, and get time set aside for training and study related to your role.   

You can find out more about how to apply here.

Talk to a National Careers Service (NCS) adviser    

The National Careers Service is a free resource that can help you with your career planning. Give them a call to discuss potential routes into higher education, further education, or the workplace.

Whatever your results, if you want to find out more about all your education and training options, as well as get practical advice about your exam results, visit the National Careers Service page and Skills for Careers to explore your study and work choices.

You may also be interested in:

  • Results day 2024: What's next after picking up your A level, T level and VTQ results?
  • When is results day 2024? GCSEs, A levels, T Levels and VTQs

Tags: GCSE grade equivalent, GCSE number grades, GCSE results, GCSE results day 2024, GCSE grades old and new, new GCSE grades

About the Education Hub

The Education Hub is a site for parents, pupils, education professionals and the media that captures all you need to know about the education system. You’ll find accessible, straightforward information on popular topics, Q&As, interviews, case studies, and more.

Please note that for media enquiries, journalists should call our central Newsdesk on 020 7783 8300. This media-only line operates from Monday to Friday, 8am to 7pm. Outside of these hours the number will divert to the duty media officer.

Members of the public should call our general enquiries line on 0370 000 2288.




