
Exploring the Equivalence and Rater Bias in AC Ratings


Presentation Transcript


  1. Exploring the Equivalence and Rater Bias in AC Ratings Prof Gert Roodt – Department of Industrial Psychology and People Management, University of Johannesburg Sandra Schlebusch – The Consultants ACSG Conference 17 – 19 March 2010

  2. Presentation Overview • Background and Objectives of the Study • Research Method • Results • Discussion and Conclusions • Recommendations

  3. Background • Construct Validity has Long been a Problem in ACs (Jones & Born, 2008) • Perhaps the Mental Models that Raters Use are Part of the Problem • However, Other Factors that Influence Reliability Should not be Neglected

  4. Background Continued • To Increase Reliability, Focus on All Aspects of the Design Model (Schlebusch & Roodt, 2007): • Analysis • Design • Implementation • Context • Participants: • Process Owners (Simulation Administrator; Raters; Role-players)

  5. Background Continued • Analysis (International Guidelines, 2009) • Competencies / Dimensions • Also Characteristics of Dimensions (Jones & Born, 2008) • Situations • Trends/Issues in Organisation • Technology

  6. Background Continued • Design of Simulations • Fidelity • Elicit Behaviour • Pilot

  7. Background Continued • Implementation • Context: • Purpose • Participants • Simulation Administration (Potosky, 2008) • Instructions • Resources • Test Room Conditions

  8. Background Continued • Raters • Background • Characteristics • “What are Raters Thinking About When Making Ratings?” (Jones & Born, 2008)

  9. Sources of Rater Bias • Rater Differences (background, experience, etc.) • Rater Predisposition (attitude, ability, knowledge, skills, etc.) • Mental Models

  10. Objective of the Study The Focus of this Study is on Equivalence and Rater Bias in AC Ratings More specifically on: • Regional Differences • Age Differences • Tenure Differences • Rater Differences

  11. Research Method Participants (Ratees) Region

  12. Research Method (cont.) Participants (Ratees) Age

  13. Research Method (cont.) Participants (Ratees) Tenure

  14. Research Method (cont.) • Measurement: • In-Basket Test • Measuring Six Dimensions: • Initiative; • Information Gathering; • Judgement; • Providing Direction; • Empowerment; • Management Control • Overall In-Basket Rating

  15. Research Method (cont.) Procedure: Ratings were Conducted by 3 Raters on 1057 Ratees
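
For readers who want to reproduce this kind of comparison, the sketch below shows one way the ratings could be laid out and a between-group test run. The file name, column names, and the use of a one-way ANOVA are illustrative assumptions only; the presentation does not specify the analysis code or the exact tests used.

```python
# Illustrative sketch only: the file name, column layout and the one-way ANOVA
# are assumptions for demonstration, not the study's actual analysis.
import pandas as pd
from scipy import stats

# Long-format ratings: one row per ratee x dimension, with rater and biographical fields.
ratings = pd.read_csv("in_basket_ratings.csv")  # hypothetical file
# Assumed columns: ratee_id, rater, region, age_group, tenure_group, dimension, score

def compare_groups(df: pd.DataFrame, dimension: str, group_col: str):
    """One-way ANOVA of one dimension's scores across the levels of group_col."""
    scores = df[df["dimension"] == dimension]
    groups = [g["score"].to_numpy() for _, g in scores.groupby(group_col)]
    return stats.f_oneway(*groups)

# E.g. do the three raters assign different Judgement scores?
print(compare_groups(ratings, "Judgement", "rater"))
# The same call with group_col="region", "age_group" or "tenure_group"
# covers the regional, age and tenure comparisons reported in the results.
```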

  16. Results Initiative

  17. Results (cont.) Initiative

  18. Results (cont.) Information Gathering

  19. Results (cont.) Information Gathering

  20. Results (cont.) Judgement

  21. Results (cont.) Judgement

  22. Results (cont.) Providing Direction

  23. Results (cont.) Providing Direction

  24. Results (cont.) Empowerment

  25. Results (cont.) Empowerment

  26. Results (cont.) Control

  27. Results (cont.) Control

  28. Results (cont.) Overall In-Basket Rating

  29. Results (cont.) Regional Differences

  30. Results (cont.) Age Differences

  31. Results (cont.) Tenure Differences

  32. Results (cont.) Rater Differences

  33. Results (cont.) Post Hoc Tests: Judgement

  34. Results (cont.)

  35. Results (cont.) Post Hoc Tests: Providing Direction

  36. Results (cont.)

  37. Results (cont.) Post Hoc Tests: Empowerment

  38. Results (cont.)

  39. Results (cont.) Post Hoc Tests: Control

  40. Results (cont.)

  41. Results (cont.) Post Hoc Tests: In-Basket

  42. Results (cont.)
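
Slides 33 to 42 report post hoc comparisons for Judgement, Providing Direction, Empowerment, Control and the overall In-Basket rating. A minimal sketch of one such comparison is given below; Tukey's HSD and the hypothetical file and column names are assumptions, since the presentation does not name the post hoc procedure it used.

```python
# Illustrative sketch: Tukey's HSD is assumed here; the presentation reports
# post hoc tests without naming the specific procedure.
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ratings = pd.read_csv("in_basket_ratings.csv")  # hypothetical long-format file
judgement = ratings[ratings["dimension"] == "Judgement"]

# Pairwise comparisons of the three raters on the Judgement dimension.
result = pairwise_tukeyhsd(endog=judgement["score"], groups=judgement["rater"], alpha=0.05)
print(result.summary())
```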

  43. Results (cont.) Non-Parametric Correlations
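
The non-parametric correlations could be reproduced along the lines sketched below. Spearman rank correlation is assumed here, since the presentation does not state which coefficient was reported, and the file and column names follow the same hypothetical layout as the earlier sketches.

```python
# Illustrative sketch: Spearman rank correlations between the dimension ratings;
# the choice of coefficient is an assumption, not stated in the presentation.
import pandas as pd

ratings = pd.read_csv("in_basket_ratings.csv")  # hypothetical long-format file
# pivot_table averages the raters' scores per ratee and dimension by default.
wide = ratings.pivot_table(index="ratee_id", columns="dimension", values="score")
print(wide.corr(method="spearman").round(2))
```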

  44. Discussion • Clear Regional, Age, and Tenure Differences Do Exist among Participants • Possible Sources of the Differences: • Regional Administration of the In-Basket • Thus Differences in Administration Medium (Potosky, 2008) • Different Administrators (Explaining Purpose; Giving Instructions; Answering Questions) • Different Resources • Different Test Room Conditions

  45. Discussion (cont.) • Differences Between Participants Regionally: • English Language Ability (not tested) • Motivation to Participate in the Assessment (not tested) • Differences in Employee Selection Processes as well as Training Opportunities (Burroughs et al., 1973) • Simulation Fidelity (not tested)

  46. Discussion (cont.) • Clear Regional, Age, and Tenure Differences Do Exist among Participants • Supporting Findings by Burroughs et al. (1973) • Age does Significantly Influence AC Performance • Participants from Certain Departments Perform Better

  47. Discussion (cont.) • Appropriateness of In-Basket for Ratees • Level of Complexity • Situation Fidelity Recommendations: • Ensure Documented Evidence (Analysis Phase in Design Model) • Pilot In-Basket on Target Ratees (Design Phase of Design Model) • Shared Responsibility of Service Provider and Client Organisation

  48. Discussion (cont.) • Context in Which the In-Basket is Administered • Purpose Communicated Recommendations: • Ensure Participants (Ratees) and Process Owners Understand and Buy into the Purpose

  49. Discussion (cont.) • Consistent Simulation Administration: • Instructions Given Consistently • Interaction with Administrator • Appropriate Resources Available During Administration • Test Room Conditions Appropriate for Testing Recommendations: • Ensure All Administrators are Trained • Standardise Test Room Conditions

  50. Discussion (cont.) • Rater Differences do Exist • Possible Sources of Rater Differences: • Background (All from a Psychology Background, with Management Experience) • Characteristics such as Personality (Bartels & Doverspike) • Owing to Cognitive Load on Raters • Owing to Differences in Mental Models (Jones & Born, 2008)
