You can add a ‘written task’ stage at any point in your hiring process: ask candidates to answer questions after they have submitted their application, then review those answers using the same behavioural science principles as Sift questions, ensuring a fair and equal process throughout.
You can assess candidates on written answers throughout the hiring process:
- During the initial application stage, using Applied's Sift questions.
- Later in the recruitment process, as a written task that candidates complete alongside interviews or other hiring stages.
Just like Sift questions, written task answers have a maximum word count of 250.
In both of the instances detailed above, you and your team will be nudged by behavioural design principles to mitigate bias.
This article explains how to set a written task for candidates after their initial application has been assessed. If you want to learn how to ask Sift questions during the application stage, you can go here.
🗓 Written tasks are only available for roles created after 25 November 2022. Roles created before this date, or roles duplicated from roles created before 25 November 2022, won't have this functionality available.
Roles for which a written task can be helpful
- High-demand roles: keep the initial application light, with fewer Sift questions upfront, and ask written questions after candidates have first applied; this also works well for quick-apply candidates.
- High-volume roles, e.g. graduate schemes: ask Mapped, numerical skills test, or multiple-choice questions plus a few Sift questions during the application stage, then send a second round of questions to those who scored well.
- Roles for which written skills are key: ask one round of Sift questions at the application stage and a second set of questions close to the interview rounds.
Steps for assessing candidates on a written task
1. Setting up the questions
2. Sending the task to candidates
3. Candidates answer the questions by the due date
4. Scoring answers and mitigating bias
1. Setting up the questions
You can set up the questions in the hiring stages section of the role builder, under the interview stages.
First, add a work sample round, name it according to your preferences (e.g. 'written task'), and set the number of questions that will be part of the round.
Remember to set your role to draft before adding any questions.
As with sift questions, you can tag skills and add review guides for each question. This will help your team have a skills-oriented hiring process, and score candidate answers more objectively.
2. Sending the task to candidates
After an initial screening of submitted applications, you can ask candidates to answer the set of questions.
In a role's candidate list:
- Select the group of candidates who should answer the new set of questions
- Select the 'send assessment' action from the action menu
- In the drawer that opens after you select this action:
- Select the task you want to assign
- Set up a due date
- Update the email/instructions that candidates will receive
- Click send
3. Candidates answer the questions by the due date
The email candidates receive asks them to go to their application dashboard, where they can answer the questions at any time before the due date.
As a job admin, you'll see candidates' statuses change as they start submitting their answers. The status is visible in the same column that displays candidates' scores for this assessment.
4. Scoring answers and mitigating bias
This step works similarly to allocating Sift answers for review.
As a job admin:
- Select the group of candidates that you want your hiring team to review
- Select the 'send to anonymous review' action from the action menu
- In the menu that opens after you select this action:
- Select the question(s) you want to allocate to review
- Select the reviewers
- Select the number of reviews
- Add any extra notes you'd like to share with reviewers
- Send questions to review
Reviewers will get an email with a link to review the answers, which is also accessible via their personal tasks dashboard.
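To make the allocation mechanics concrete, here is a minimal sketch of how a set number of independent reviews per answer could be distributed across reviewers. It is purely illustrative: the function name and the round-robin strategy are assumptions, not Applied's actual allocation logic.

```python
import itertools

def allocate_reviews(answers, reviewers, reviews_per_answer):
    """Hypothetical helper: assign each answer to `reviews_per_answer`
    distinct reviewers, cycling round-robin to balance workload."""
    assert reviews_per_answer <= len(reviewers), "not enough reviewers"
    cycle = itertools.cycle(reviewers)
    allocation = {reviewer: [] for reviewer in reviewers}
    for answer in answers:
        assigned = set()
        while len(assigned) < reviews_per_answer:
            reviewer = next(cycle)
            if reviewer not in assigned:  # each review comes from a distinct reviewer
                assigned.add(reviewer)
                allocation[reviewer].append(answer)
    return allocation

# Example: 4 answers, 3 reviewers, 2 independent reviews per answer.
print(allocate_reviews(["answer 1", "answer 2", "answer 3", "answer 4"],
                       ["reviewer A", "reviewer B", "reviewer C"], 2))
```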
The review process is designed to mitigate bias using the following four behavioural design principles:
| Design | Why? | What it means for you |
| --- | --- | --- |
| Anonymisation | Names and other details, like where someone went to university, can unconsciously influence how you see and score an application. | We remove candidates' personal details so that you can focus on the quality of their answers. |
| Chunking | It's hard to compare candidates in multiple areas simultaneously - it leads to cognitive overload. | Instead of reviewing a single candidate's full set of answers one at a time, you'll review a batch of answers to question 1, then question 2, and so on. |
| Randomisation | Our brains are heavily affected by 'ordering effects' and small contextual factors. For example, we tend to be more generous when scoring the first few candidates we see, or candidates who come just after a particularly bad application. If we're hungry or tired, we score more harshly. This leads to lots of "noise" and scoring that isn't objective. | We randomise the order of all candidate responses, so no candidate is disproportionately advantaged or disadvantaged by where they appear in the pile. |
| Crowdsourcing | We all have slightly different ways of seeing the world, which means we rarely agree entirely on what 'good' looks like. | Reviewers score candidates independently, and each candidate's scores are averaged, so reviewers don't influence each other. A candidate's overall score is a more holistic measure of their quality. |
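For readers who find code clearer than prose, here is a conceptual sketch of how the four principles combine during scoring. Everything in it (the function names, the `collect_scores` callback, the demo data) is hypothetical and assumed for illustration, not Applied's implementation:

```python
import random
from statistics import mean

def review_round(answers_by_question, collect_scores):
    """Score answers under the four principles (illustrative only).

    answers_by_question: {question: [(candidate_id, answer_text), ...]}
    collect_scores: function(answer_text) -> list of independent scores
    """
    per_candidate = {}
    # Chunking: review every answer to one question before moving on.
    for question, answers in answers_by_question.items():
        batch = list(answers)
        # Randomisation: shuffle so ordering effects wash out.
        random.shuffle(batch)
        for candidate_id, text in batch:
            # Anonymisation: reviewers see only the answer text, no personal details.
            scores = collect_scores(text)
            # Crowdsourcing: independent reviewer scores are averaged.
            per_candidate.setdefault(candidate_id, []).append(mean(scores))
    # A candidate's overall score is the mean across all questions.
    return {cid: mean(qs) for cid, qs in per_candidate.items()}

# Demo with fixed scores so the example is reproducible.
demo = {"Q1": [("cand-1", "answer text"), ("cand-2", "answer text")]}
print(review_round(demo, lambda text: [3, 4, 4]))  # both average to ~3.67
```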