Psychometric Services

Job Analysis

Job analysis is a fundamental part of a well-developed assessment program, providing the objective, legally defensible criteria needed to support it. Job analysis is the systematic study of a job that documents the knowledge, skills, and abilities necessary for successful performance of the job under evaluation. McCann has over 50 years of experience designing and developing job analysis survey instruments, collecting and analyzing job analysis data, and preparing job analysis reports. Our data collection methods range from surveys, focus groups, and document review (training materials, job descriptions, rules and regulations, etc.) to on-the-job observations and personal interviews. In addition, because the knowledge, skills, and abilities required to perform a specific job may change over time, we advise refreshing your job analysis every five years.

Test Specification Development

The test specification, sometimes referred to as the “test blueprint,” defines the structure of the test. Using the results of the job analysis, the blueprint defines the content areas and the appropriate weight of each, the number and type of items on the test for each content area, and may also include scoring and reporting procedures. Our methods of test specification development fortify content validity by relying on job analysis data and consultations with subject matter experts (SMEs).
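As a concrete illustration, a blueprint's content-area weights can be translated directly into item counts for a test of a given length. The content areas, weights, and test length below are hypothetical examples, not drawn from any actual blueprint:

```python
# A minimal sketch of a test blueprint: content areas mapped to the
# weights a job analysis assigned them. All names and numbers here are
# hypothetical illustrations.
blueprint = {
    "Patrol Procedures": 0.40,
    "Criminal Law": 0.25,
    "Report Writing": 0.20,
    "Community Relations": 0.15,
}

def items_per_area(blueprint, total_items):
    """Translate content-area weights into whole-number item counts."""
    return {area: round(weight * total_items)
            for area, weight in blueprint.items()}

# For a 100-item test, a 0.40 weight yields 40 items, and so on.
counts = items_per_area(blueprint, 100)
```

In practice the rounded counts are checked against the intended test length, since rounding can leave the total slightly off for less convenient weights.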

Item Writing Workshops

Our Item Writing Workshops are designed to help your organization create valid items aligned to your program’s test objectives, test blueprint, and job analysis. We will assist you in developing a test blueprint to ensure content validity and, with our expert test development staff, your new items will meet or exceed industry item development standards.

In addition to our face-to-face workshop services, our psychometrically trained staff is experienced in conducting virtual item writing workshops, which eliminate the time and financial expense associated with travel and accommodations for your item writers. Our workshops begin with a training session during which our test development experts orient the item writers to the characteristics of valid test items, particularly emphasizing alignment to your test blueprint to ensure content validity. Training includes a thorough review of how to write grammatically correct, clear items that conform to industry test standards, as well as the importance of referencing each item to the blueprint and job analysis (or other material as deemed relevant).

Together, the group reviews a training set of items and discusses whether the items meet industry standards and exhibit all the characteristics of a “good” item. Item writers then practice writing items, which the group critiques. Once training ends, item writing assignments are distributed based on each item writer’s area of expertise. Face-to-face item writing workshops generally last 3 to 5 days, while our virtual item writing workshops accommodate essentially any schedule. Once the new pool of items is generated, each item is psychometrically reviewed by our team of item writing experts.


  • Generate a pool of content-valid items that align with your test blueprint and meet industry item writing standards, ready for piloting
  • Gain access to secure, online, virtual item writing tools to empower your item writers to perform tasks at home, thereby eradicating substantial travel and business expenses incurred with face-to-face item writing workshops
  • Create items directly in the online item bank, thereby removing back-end work of re-entering items into a new system
  • Assign workflow roles such as item writer, item reviewer, and item approver
  • Easily generate item bank reports to identify content gaps, track item status (draft, in review, approved), and surface other pertinent item-level data


Test Review

The purpose of a test review is to examine the content covered by an existing test and to determine the extent to which it is aligned with the test blueprint. During the workshop, SMEs are trained in test construction guidelines and work through the process of matching each test item back to the test blueprint.
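The blueprint-matching step described above can be sketched as a simple coverage check: tally how many items map to each blueprint area and flag areas with no items or items that match no area. The item tags and blueprint areas below are hypothetical examples:

```python
from collections import Counter

# Hypothetical blueprint areas and the area each SME matched an item to.
blueprint_areas = {"Patrol Procedures", "Criminal Law",
                   "Report Writing", "Community Relations"}
item_tags = ["Criminal Law", "Patrol Procedures", "Patrol Procedures",
             "Report Writing", "Criminal Law", "Uncategorized"]

# Items per blueprint area.
coverage = Counter(tag for tag in item_tags if tag in blueprint_areas)

# Blueprint areas no item was matched to, and items matching no area.
uncovered = blueprint_areas - coverage.keys()
off_blueprint = [tag for tag in item_tags if tag not in blueprint_areas]
```

In this sketch, “Community Relations” would surface as an uncovered area and the “Uncategorized” item as one needing SME attention.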

Item Analysis Review

Item analysis review workshops are conducted to ensure all items on a test are performing in appropriate ways that contribute to the overall reliability and validity of a test. Prior to conducting the workshop, our psychometric staff uses a variety of statistical methods to describe item behaviors, detect items that may be adversely affecting the psychometric properties of the overall test, or items that are otherwise performing in unintended ways that can challenge the reliability and validity of the test for specific demographic subgroups.

These methodologies include classical test theory (CTT) and item response theory (IRT) measures of item difficulty and item discrimination, as well as measures of differential item functioning (DIF), such as Mantel-Haenszel, logistic regression, and IRT-based indices. Based on the results of these analyses, our staff flags any item that falls below acceptable performance levels.

Adverse impact analyses, such as the 80% rule, Fisher’s Exact Test, and the Binomial Test (which provide information on the test-level performance of demographic subgroups), may also be included in the item analysis review.
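The 80% (four-fifths) rule itself is straightforward to compute: the selection rate of one group is divided by the higher selection rate, and a ratio below 0.80 flags potential adverse impact. The pass counts below are hypothetical:

```python
# A minimal sketch of the four-fifths (80%) rule, using hypothetical
# pass counts for two demographic groups.
def adverse_impact_ratio(pass_a, total_a, pass_b, total_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a = pass_a / total_a
    rate_b = pass_b / total_b
    low, high = sorted([rate_a, rate_b])
    return low / high

# Group A: 30 of 50 pass (0.60); Group B: 45 of 50 pass (0.90).
ratio = adverse_impact_ratio(30, 50, 45, 50)
flagged = ratio < 0.80  # below four-fifths: potential adverse impact
```

Statistical tests such as Fisher's Exact Test are then used alongside this ratio, since the four-fifths rule alone ignores sample size.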

During the workshop, SMEs are trained on statistical concepts relating to determining the validity and reliability of test items and test and item bias. With our psychometric staff, the panel of SMEs reviews each flagged item to determine the most appropriate next steps to protect both the fairness and content validity of the test.

Essay and Constructed-Response Assessment Modeling

IntelliMetric® is the “gold standard” in automated essay scoring. More than 10 years of research studies show that IntelliMetric® equals or exceeds the accuracy of expert human scorers and is appropriate for both low- and high-stakes assessment environments and virtually any type of written content relevant to our clients’ assessment needs. IntelliMetric® analyzes more than 400 semantic, syntactic, and discourse characteristics and provides a holistic score as well as scores within five major domains: focus and meaning, organization, content and development, language use and style, and mechanics and conventions.

Oral Assessments

McCann’s psychometrics and test development teams have over 50 years of experience working with clients to develop and deliver structured oral interview assessments that measure the knowledge, skills, and abilities (KSAs) necessary for successful performance in managerial and supervisory positions. The content of our promotional oral exams focuses on critical incidents encountered on the job and is developed in conjunction with SMEs. Members of our oral board panels are experienced SMEs and are well trained in objectively and impartially applying promotional oral assessment rating scales.


  • Superior content validity, predictive validity, and reliability for the knowledge, skills, and abilities integral to successful managerial and supervisory job performance
  • Procedures that are fair to EEOC-protected groups, such as minorities and women
  • Expert evaluators with years of supervisory experience
  • 50 years of experience in designing, developing, and administering structured oral interviews


Assessment Centers

Assessment centers are designed to assess skills and abilities that paper-and-pencil tests simply can’t measure as accurately. Assessment centers are ideal in certain circumstances, for example, to measure critical-thinking, decision-making, problem-solving, and interpersonal skills. Assessment centers provide standardized and objective ways to evaluate candidates on these and other knowledge, skills, and abilities by using realistic scenarios or job simulations. Assessment center exercises can include role playing, leaderless discussions, fact-finding, and in-basket exercises. Candidates participate in these activities and are rated on their performance by a panel of highly trained SMEs.