Candidates will see pass/fail results from the “written” CoreCHI™ exam immediately after finishing the test. The *Score Report* is emailed to candidates within 48 hours of test submission. The official score is emailed within approximately two to four weeks of completing the CoreCHI™ examination.

The CoreCHI™ exam uses four-option, multiple-choice questions scored electronically. Of the 100 total questions, 85 are used to calculate the score that determines pass/fail status. (CCHI collects statistical data on the remaining 15 pre-test items to determine their use on future test forms.)

The *passing score (passing standard)* is determined by teams of Subject Matter Experts (SMEs) and the CCHI Commissioners under the guidance of a qualified psychometrician through a systematic and legally defensible standard-setting process (see its explanation below). The raw test score (the number of questions answered correctly) is scaled (via a mathematical formula) to a distribution of 300 to 600, with the passing score set at 450.

In the *Score Report*, CCHI provides the overall scaled score and the percentage of correct answers in each content domain.

Read your *Score Report* with the understanding that it states *two separate things*: the overall test score, and how well you did in specific parts of the test. There is *no* relationship between the percentages reported for the parts of the test (domains) and the overall test score.

Domain scores are reported as percentages of the *correct* answers within each test domain. The percentage correct for a domain (e.g., healthcare terminology) is computed as the portion of the points that you earned relative to the number of points it is possible to earn in that domain. For example, if the maximum number of points that it is possible to earn in a domain is 22 and you earned 15 points, the percentage on your score report would be 68% out of 100% possible in that domain.
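The domain percentage described above is a simple ratio of points earned to points possible. The sketch below illustrates the calculation with the example from the text (the function name is ours, not CCHI's):

```python
def domain_percentage(points_earned, points_possible):
    """Percentage of correct answers within a single test domain,
    rounded to a whole number as shown on the Score Report."""
    return round(100 * points_earned / points_possible)

# Example from the text: 15 points earned out of 22 possible -> 68%
print(domain_percentage(15, 22))  # 68
```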

Your total score is *not* the average of your scores in domains. Because each domain has a different number of questions, you cannot add up the domain percentages to obtain your overall score.

The total score is based on the full examination. There is no pass or fail status associated with an individual content domain. The percentages reported for domains are intended only as a guide and should be interpreted cautiously because of the small number of items in each content domain. If you failed the exam, to improve your score you need to review and study *all* content domains of the exam. For more information on the domains, see the *Test Content Outline*.

**Explanation of Standard Setting**

To establish the passing score for the CoreCHI exam, CCHI uses the *Modified Angoff* method, which has an established history of producing credible passing standards for credentialing examinations.

The method involves two basic elements: conceptualization of a minimally competent candidate and the probability, as assigned by SMEs, that a minimally competent candidate will answer an item correctly. A minimally competent candidate is described as an individual who would be able to demonstrate just enough knowledge to pass the examination. In general, such a candidate has enough knowledge to practice safely and competently, but does not demonstrate the knowledge level to be considered an expert.

SMEs provide ratings for each test item on whether a minimally competent candidate would get the item correct. Then they compare their ratings with empirical data collected during the pilot phase for each item and discuss their ratings as a group, with the goal to reach as close a consensus as possible. The SMEs’ ratings are then averaged, and this “provisional cut score” is further reviewed and validated to establish an operational cut score.

For more information about the Modified Angoff method, see:

- Angoff, W. H. (1971). Scales, norms, and equivalent scores. In R. L. Thorndike (Ed.), *Educational measurement* (2nd ed., pp. 508–600). Washington, DC: American Council on Education.
- Plake, B. S., & Cizek, G. J. (2012). Variations on a theme: The Modified Angoff, Extended Angoff, and Yes/No standard setting methods. In G. J. Cizek (Ed.), *Setting performance standards: Foundations, methods and innovations* (pp. 181–200). New York, NY: Routledge.

**Explanation of Equating**

Following best testing practices, CCHI administers several versions of the same exam (called *test forms*) to candidates. One reason for this is to be fair both to candidates taking the exam for the first time and to those retaking it. Ideally, each candidate should receive a version of the exam that is new to them.

Different test forms may be of slightly different difficulty, because of the natural variations in the language of the test items (e.g., scenarios, terms). And, again, it is important to be fair to all candidates regardless of which form they took. To achieve this fairness, the test forms undergo a procedure called *equating*.

Equating is a mathematical calculation that ensures that the test forms place the passing point at the same level of candidate performance, i.e., that the forms are “equal” and “fair.” Test forms are equated to the “standard.” The “standard” is the form that the SMEs used to establish the passing score, and all subsequently developed forms are equated to it. Let’s say the standard is Form 1, and Forms 2 and 3 are equated to Form 1. Because of this equating, Forms 2 and 3 will have different raw passing points, but those points will then be scaled to represent the same passing score of 450. As a result, a slightly easier form will require the candidate to answer more test items correctly (a higher “raw score”) to pass the exam, and a slightly more difficult test form will allow the candidate to pass with a lower raw score.

Equating calculations are done by psychometricians and then reviewed and approved by CCHI.
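CCHI does not publish its actual scaling formula, so the sketch below assumes a simple piecewise-linear mapping purely to illustrate the idea: each equated form has its own raw cut score, but every form's cut score maps to the same scaled passing score of 450 on the 300–600 scale. All numbers other than the 300/450/600 scale points are hypothetical.

```python
def scaled_score(raw, raw_cut, raw_max,
                 scaled_min=300, scaled_max=600, scaled_pass=450):
    """Illustrative piecewise-linear mapping of a raw score to the
    300-600 scale. The raw cut score (which differs across equated
    forms) always maps to the scaled passing score of 450."""
    if raw >= raw_cut:
        return scaled_pass + (raw - raw_cut) / (raw_max - raw_cut) * (scaled_max - scaled_pass)
    return scaled_min + raw / raw_cut * (scaled_pass - scaled_min)

# Two hypothetical equated forms of an 85-item exam: an easier form
# with a raw cut of 60 and a harder form with a raw cut of 55.
# A candidate exactly at the cut earns 450 on either form.
print(scaled_score(60, raw_cut=60, raw_max=85))  # 450.0
print(scaled_score(55, raw_cut=55, raw_max=85))  # 450.0
```

The point of the sketch is that the same scaled passing score can correspond to different raw scores on different forms, which is exactly what equating achieves.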

As an analogy, if a second-grade mathematics test included both addition and multiplication problems, you might expect the addition problems to be easier and the multiplication problems to be harder. Let’s say Class 1 has an exam with 75 addition questions and 25 multiplication questions, whereas Class 2 has an exam with 65 addition and 35 multiplication questions. Then, to be fair to both classes, the final grades on the two exams would have to be mathematically adjusted. Let’s say each addition question is worth 1 point and each multiplication question is worth 4 points. Now imagine these four students:

- *Student A* from *Class 1*, who correctly answers all addition questions and misses all multiplication problems, would have a final score of 75 and would have answered 75% of the questions correctly.
- *Student B* from *Class 1*, who misses all the addition questions but answers all the multiplication problems correctly, would have a test score of 100 but would have answered only 25% of the questions correctly.
- *Student C* from *Class 2*, who correctly answers all addition questions and misses all multiplication problems, would have a final score of 65 and would have answered 65% of the questions correctly.
- *Student D* from *Class 2*, who misses all the addition questions but answers all the multiplication problems correctly, would have a test score of 140 but would have answered only 35% of the questions correctly.
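The four students above can be checked with a few lines of arithmetic. The sketch below uses the point values from the analogy (1 point per addition question, 4 per multiplication question); the function names are ours:

```python
def exam_score(add_correct, mult_correct, add_points=1, mult_points=4):
    """Weighted exam score: addition and multiplication questions
    carry different point values."""
    return add_correct * add_points + mult_correct * mult_points

def percent_correct(correct, total):
    """Unweighted share of questions answered correctly."""
    return round(100 * correct / total)

# Class 1: 75 addition + 25 multiplication questions (100 total)
print(exam_score(75, 0), percent_correct(75, 100))  # Student A: 75 points, 75%
print(exam_score(0, 25), percent_correct(25, 100))  # Student B: 100 points, 25%
# Class 2: 65 addition + 35 multiplication questions (100 total)
print(exam_score(65, 0), percent_correct(65, 100))  # Student C: 65 points, 65%
print(exam_score(0, 35), percent_correct(35, 100))  # Student D: 140 points, 35%
```

Note how the weighted score and the percent-correct figure move independently, which is the analogy's point: a percentage of correct answers says nothing by itself about the weighted final score.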

To conclude, the domain percentages should be seen as an indication that you did better in one domain than in another on that particular test. You cannot compare percentages between tests because each test contains a mix of items with differing difficulties and, therefore, different weights toward the final overall exam score.

When CCHI applies for accrediting and re-accrediting its exams, the equating procedures are submitted for review to the accrediting body. Accreditation is a form of final review and confirmation that the accredited exam meets all the requirements to be fair and reliable.