→ To make sure you focus on essentials over nice-to-haves: The main reason for the limit is that we want to nudge our users to rethink what’s really needed and what’s optional for the role. Often, being a little less strict on the must-haves (in your job description and in your assessment) opens up a wider and more diverse candidate pool. Some of our customers started out with very comprehensive lists of every skill a candidate would have in an ideal world, without realising how much this narrowed their candidate pool. We usually recommend asking yourself the following questions when deciding whether or not to assess a skill:
➤ How frequently is this skill needed?
➤ How critical for success is the skill?
➤ Does the hired candidate need the skill from day 1 or could they learn it on the job? Is it easy/quick to learn?
→ To avoid putting a heavy burden on candidates: Complex questions assessing multiple skills, long interviews with lots of questions, or many interview rounds may leave candidates feeling exhausted, performing below their best, and enjoying the process less. Some may even drop out of the process due to the burden, or come away feeling less positive about a potential employer and their job offer. We sometimes get feedback from customers that they have shortened their interview process after starting to use Applied, because they see greater overall quality in the shortlisted candidates and less need to test them on every single aspect they previously covered. See how you feel about this after you’ve hired a few roles with Applied.
That said, there are two ways you can assess additional skills within Applied:
→ Assess more than one skill per question: You may have skills that go well together in one question. For example, you could assess the ‘testing’ skill alongside ‘debugging and monitoring’ in a question that asks how candidates would go about debugging an issue in a feature that has just been released, and how they’d avoid this happening again in the future. Or maybe ‘prioritisation’ and ‘pragmatism’ could go together? You could then simply call your skills ‘testing, debugging & monitoring’ and ‘prioritisation & pragmatism’. Whether this approach is suitable really depends on the questions you’d like to ask: if you assess some skills in multiple questions and combine the skills in different ways, it wouldn’t really work. If a question has one ‘main’ skill and 1-2 ‘sub-skills’, you could also consider tagging just the main skill and adding the ‘sub-skills’ to your review guide. The only limitation is that these ‘sub-skills’ will not be integrated into the feedback ‘spider graph’ that candidates receive.
→ Add questions without tagging a skill: Alternatively, you can always fall back on adding a question without tagging any skill. When you’re prompted to ‘Select the skill(s) you want to test’, simply don’t select any and you should be able to proceed anyway.
Again, the limitation is that these questions will not be integrated into the feedback ‘spider graph’ that candidates receive.