Use sift questions, scrap CVs and cover letters
The best available evidence tells us that CVs and unstructured interviews are not effective predictors of on-the-job performance and are full of unconscious bias.
Applied works by:
- testing candidates on what they would do in the role, using sift questions
- designing out unconscious bias throughout the process
Using behavioural science to design out unconscious bias
Applied designs out unconscious bias using principles from behavioural science, so a candidate's score reflects how well they answered a particular question, not any irrelevant details or "noise".
During the first sift shortlist, we:
- anonymise candidate responses. You'll see candidates' answers to questions but no other details about them, so you can focus on the quality of their answers alone. This stops you from being influenced by details like where candidates went to university or what their name is.
- chunk applications up by answers to questions, not by candidate. This means you compare answers to the same question against each other; it's easier to compare answers to the same question objectively than it is to compare whole applications.
- randomise the order in which candidates are reviewed, so everyone gets a chance to be seen first, in the middle and last. There's evidence to suggest that candidates seen later on are scored more harshly, possibly due to "reviewer fatigue".
- crowdsource scores from a panel of reviewers across your team, who all score independently of each other. This stops them from influencing each other's judgement and lets you benefit from your team's diversity through the "wisdom of the crowd".
See the table below for more information:
| Design | Why? | What it means for you |
| --- | --- | --- |
| Anonymisation | Names and other details, like where someone went to university, can unconsciously influence how you see and score an application. | We remove all candidate personal details while you review their answers to sift questions, so you can focus on the quality of their answers. |
| Chunking | It's hard to compare candidates in multiple areas at once - it leads to cognitive overload. | Instead of reviewing a single candidate's application in full, one at a time, you'll review a batch of answers to question 1, then to question 2, and so on. |
| Randomisation | Our brains are heavily affected by 'ordering effects' and small contextual factors. For example, we tend to be more generous when scoring the first few candidates we see, or candidates who come just after a particularly bad application. If we're hungry or tired, we score more harshly. This leads to lots of "noise" and to scoring that is not objective. | We randomise the order of all candidate responses, so no candidate is disproportionately advantaged or disadvantaged by where they appear in the pile. |
| Crowdsourcing | We all have a slightly different way of seeing the world, and that means we rarely agree completely on what 'good' looks like. | Reviewers score candidates independently, and their scores are averaged for each candidate. Reviewers don't influence each other's scoring, and a candidate's overall score is a more holistic measure of their quality. |
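If it helps to picture how randomisation and score averaging fit together, here's a minimal, hypothetical sketch in Python. The candidate names, scores and ordering logic are made up for illustration; this is not Applied's actual implementation.

```python
import random
from statistics import mean

# Hypothetical illustration only: independent scores from three reviewers
# for each candidate's answer to question 1, on a 1-5 scale.
reviewer_scores = {
    "Candidate A": [4, 5, 3],
    "Candidate B": [2, 4, 3],
    "Candidate C": [5, 4, 4],
}

# Each reviewer sees the answers in their own shuffled order, so no candidate
# is always reviewed first or last.
order_for_this_reviewer = random.sample(list(reviewer_scores), k=len(reviewer_scores))
print("Review order:", order_for_this_reviewer)

# A candidate's score for the question is the average of the independent scores.
for candidate, scores in reviewer_scores.items():
    print(candidate, round(mean(scores), 2))
```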
If anything in this article piques your interest or isn't clear, don't hesitate to get in touch at hello@beapplied.com, where a member of our team will be on hand to give you support.