TalVista FAQs

Reviewing Resumes

Why do I need to establish which screening criteria are most important before I start screening resumes?

A landmark study by Yale University psychologists unraveled the subtleties of job discrimination that result from stereotyping. It showed that even when we think we're reviewing a candidate against totally objective screening criteria like education or experience, a gut-level idea of who traditionally fills the job often influences who we select.

In the experiment, participants reviewed resumes for a male and a female candidate for police chief. The resumes were identical except for two alternating variables: whether the resume belonged to Michael or Michelle, and whether the candidate had more formal education or more experience. Michael's ratings consistently came out on top. When his resume showed more experience but no college education, participants said they picked him for "having more street smarts"; when the variables were flipped and Michelle had the experience while Michael had the degree, they still picked Michael, saying Michelle wasn't a good candidate because she lacked the education.

The good news is that the study also showed how to mitigate this predisposition. Asking reviewers to establish the importance of each screening criterion beforehand eliminated the shifting-merits effect. With TalVista, resume reviewers "pre-commit" to which qualities are most important for job performance, helping ensure they aren't swayed by other content in a particular candidate's resume, such as race, gender, or where the candidate went to school.

Why should I score an applicant's resume in an identity-blind or redacted fashion?

Howard versus Heidi, Greg versus Jamal: researchers have replicated countless times that an applicant's inferred race or gender (or parental or religious status) affects how we rate their qualifications, and this is especially true in resume screening. When study participants are shown identical resumes save one fact (one bears the name Heidi, the other Howard), Howard gets higher ratings. The same effect appears when a white-sounding name is compared with a black-sounding one: Greg gets more callbacks than Jamal. By having users score a resume before seeing identity information, TalVista helps ensure an added measure of objectivity.

What is the evidence to support blind resume screening?

If you watched season three of "Mozart in the Jungle", you'll recall New York Symphony candidates auditioning behind a white screen for the open oboe chair. In the 1970s, the musicians in major U.S. orchestras were only about 5% female. As orchestras became aware of gender bias in their hiring, most had adopted screened auditions by the end of the 1980s. Today women make up well above 30% of those orchestras, and researchers credit "the screen" with up to 55% of the increase in female new hires.

A similar rationale is at play in the popular reality TV show "The Voice". Judges vote on contestants with their backs turned, evaluating them purely on vocal talent, a self-proclaimed rejection of the superficiality that too often defines a singer's success.

The same holds for resume screening: when researchers manipulate "perceived race" by changing white-sounding names like "Emily" and "Greg" to black-sounding names like "Lakisha" and "Jamal" on otherwise identical resumes, the white-sounding names get 50% more callbacks.