There are four sequential steps to designing a job assessment: job analysis, hiring situation analysis, assessment content development, and assessment validation.
Job analysis identifies what to assess. It begins by asking:
- What work is performed on the job?
- What tasks are performed?
- What are the duties and responsibilities of the job?
- What competencies are needed to do the work?
- What skills are needed?
- What personality traits are needed?
- What knowledge must be possessed to do the work?
Hiring situation analysis adds real-world constraints to the assessment and weighs tradeoffs between the needs of candidates and hiring managers.
- What volume of applicants is expected?
- How motivated are the applicants to complete the job assessment?
- How much time will candidates devote to the job assessment?
- How much time will hiring managers allocate to reviewing each candidate’s assessment?
- Are the consumers of the job assessment results experts in the assessment content?
Assessment content development determines how to assess candidates.
- Choose a subset of job-related competencies from the job analysis.
  - Not all competencies related to job success can be readily assessed.
  - Focus on the most critical competencies required upon job entry.
- Construct authentic tasks.
  - What does good performance on this task look like?
  - Gather input from hiring managers and supervisors for this role.
  - Gather input from on-the-ground employees who are in this role.
- Choose assessment formats that match the authentic tasks, while considering the hiring situation. A hurdled (multi-stage) approach may be necessary, e.g.:
  - Hurdle 1: initial screening via multiple-choice questions to clear minimum requirements.
  - Hurdle 2: a rating method applied to performance-based work samples.
- Construct a scoring rubric.
  - How well should most candidates perform?
  - What is the minimum level of performance for qualified candidates?
  - How are exceptionally strong candidates identified?
  - Does the scoring rubric generate a normal (bell-curve) distribution of outcomes?
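The hurdled format and the rubric questions above can be sketched numerically. The following is a minimal illustration with invented pilot scores; the cutoffs (`SCREEN_CUTOFF`, `QUALIFIED_CUTOFF`) are assumptions for the example, not prescribed values. It applies the two hurdles in sequence, then makes a rough bell-curve check: for a roughly normal distribution, about two thirds of rubric scores should fall within one standard deviation of the mean.

```python
import statistics

# Hypothetical pilot data: (candidate id, hurdle-1 screening score 0-100,
# hurdle-2 work-sample rubric score 0-20). All values are illustrative.
candidates = [
    ("a", 82, 14), ("b", 45, 0), ("c", 91, 18), ("d", 77, 11),
    ("e", 68, 9), ("f", 88, 16), ("g", 59, 0), ("h", 73, 12),
    ("i", 80, 13), ("j", 95, 19),
]

SCREEN_CUTOFF = 65      # hurdle 1: minimum multiple-choice score (assumed)
QUALIFIED_CUTOFF = 10   # hurdle 2: minimum rubric score to qualify (assumed)

# Hurdle 1: only candidates clearing the screen advance to the work sample.
advanced = [c for c in candidates if c[1] >= SCREEN_CUTOFF]

# Hurdle 2: rubric scores of advancing candidates, and the qualified subset.
rubric_scores = [c[2] for c in advanced]
qualified = [c for c in advanced if c[2] >= QUALIFIED_CUTOFF]

# Rough distribution check: fraction of rubric scores within one
# standard deviation of the mean (about 68% for a normal distribution).
mean = statistics.mean(rubric_scores)
sd = statistics.stdev(rubric_scores)
within_one_sd = sum(1 for s in rubric_scores if abs(s - mean) <= sd) / len(rubric_scores)

print(f"advanced: {len(advanced)}, qualified: {len(qualified)}")
print(f"rubric mean={mean:.1f}, sd={sd:.1f}, within 1 SD: {within_one_sd:.0%}")
```

A fraction far below two thirds, or a pile-up at the minimum or maximum score, suggests the rubric is not discriminating across the candidate pool.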
Validation asks two core questions about the job assessment:
- Reliability. To what extent is the assessment consistent in what it is measuring?
- Validity. To what extent is the assessment measuring what it is intended to measure?

The validation process should involve some combination of:
- Subject-matter experts
  - Assessment researchers, line managers, and functional employees review the assessment content with the intent of exposing items that fail these checks:
    - Do the strongest candidates consistently score highly on each piece of assessment content?
    - Do the weakest candidates consistently score poorly on each piece of assessment content?
- Candidate benchmarking
  - Individuals who match the profile of future candidates for the role are recruited to complete the job assessment and provide question-specific feedback, which is used to evaluate its content validity and reliability.
- Randomized trial
  - During a defined trial period, all candidates complete the newly developed job assessment. A randomly selected portion are hired using its results, while the remainder are hired using the preexisting hiring approach. At the end of the trial, objective performance measures such as attrition rates are compared between the treatment and control groups.
- Existing employee benchmarking
  - A large pool of existing employees completes the new job assessment, and their scores are compared to objective job-performance measures using regression analysis.
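The reliability question can be quantified during any of the benchmarking steps above. Cronbach's alpha is one standard internal-consistency statistic; the sketch below computes it in plain Python over hypothetical item-level pilot scores (the data are invented for illustration):

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability of an assessment.

    item_scores: one list per item, each the same length (one score per
    candidate). Values near 1 suggest the items measure a consistent
    underlying construct; low values flag inconsistent items.
    """
    k = len(item_scores)
    item_variances = sum(statistics.pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    total_variance = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical pilot results: 4 items (rows), 6 candidates (columns).
items = [
    [3, 4, 3, 5, 2, 4],
    [2, 4, 3, 5, 2, 5],
    [3, 5, 4, 5, 3, 4],
    [2, 4, 4, 4, 2, 5],
]
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Conventional practice treats alpha around 0.7 or higher as acceptable for selection instruments, though the appropriate threshold depends on the stakes of the decision.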
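For the randomized trial, the end-of-trial comparison can be as simple as a two-proportion z-test on attrition. A minimal sketch, with invented group sizes and attrition counts:

```python
import math

# Hypothetical end-of-trial data: first-year attrition in each group.
# All counts are illustrative assumptions.
treatment_hires, treatment_attrition = 120, 18   # hired via new assessment
control_hires, control_attrition = 115, 31       # hired via preexisting approach

p1 = treatment_attrition / treatment_hires
p2 = control_attrition / control_hires

# Two-proportion z-test: is the difference in attrition rates larger
# than chance alone would explain?
pooled = (treatment_attrition + control_attrition) / (treatment_hires + control_hires)
se = math.sqrt(pooled * (1 - pooled) * (1 / treatment_hires + 1 / control_hires))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value

print(f"attrition: {p1:.1%} vs {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```

A significant negative z (lower attrition in the treatment group) is evidence that hiring on the new assessment outperforms the preexisting approach on this measure.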
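For existing-employee benchmarking, a simple least-squares fit of job performance on assessment score yields the regression slope and a validity coefficient (the Pearson correlation). The data below are illustrative, not real benchmarking results:

```python
import math

# Hypothetical benchmarking data: existing employees' scores on the new
# assessment paired with an objective performance measure (arbitrary units).
assessment = [55, 62, 70, 71, 78, 83, 85, 90]
performance = [4.1, 4.0, 5.2, 5.6, 5.9, 6.8, 6.5, 7.4]

n = len(assessment)
mean_x = sum(assessment) / n
mean_y = sum(performance) / n
sxx = sum((x - mean_x) ** 2 for x in assessment)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(assessment, performance))
syy = sum((y - mean_y) ** 2 for y in performance)

slope = sxy / sxx               # expected performance gain per score point
intercept = mean_y - slope * mean_x
r = sxy / math.sqrt(sxx * syy)  # Pearson correlation (validity coefficient)

print(f"performance = {intercept:.2f} + {slope:.3f} * score, r = {r:.2f}")
```

A positive, statistically meaningful slope supports the assessment's criterion-related validity; a flat or negative slope argues for revising its content.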