The common (Verbal and Quantitative) multiple-choice portions of the exam currently use computer-adaptive testing (CAT), which automatically adjusts the difficulty of questions as the test taker proceeds through the exam, depending on whether the answers given so far were correct or incorrect. The test taker is not allowed to go back and change answers to previous questions, and an answer must be given before the next question is presented.
The first question given in a multiple-choice section is an "average level" question that roughly half of GRE test takers answer correctly. If it is answered correctly, subsequent questions become more difficult; if it is answered incorrectly, subsequent questions become easier until a question is answered correctly. This approach to administration yields scores of accuracy similar to a fixed-form test while using approximately half as many items. However, the effect is moderated on the GRE because the test has a fixed length; a true CAT is variable length, stopping itself once it has zeroed in on the candidate's ability level.
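To make the selection rule concrete, here is a minimal sketch in Python of a fixed-length adaptive loop. It is not ETS's actual algorithm: the item-bank format, starting difficulty, and step sizes are illustrative assumptions, and the only behavior taken from the description above is "harder after a correct answer, easier after a miss."

```python
def run_adaptive_section(item_bank, answer_fn, num_items=30):
    """Administer a fixed-length adaptive section (illustrative sketch only).

    item_bank -- list of (item_id, difficulty) pairs; difficulty is on an
                 arbitrary scale where 0.0 means an "average" item
    answer_fn -- callback taking an (item_id, difficulty) pair and returning
                 True if the examinee answers correctly
    """
    remaining = list(item_bank)
    target = 0.0          # start with an "average" item (~50% answer correctly)
    step = 1.0            # how far the target difficulty moves after each response
    responses = []

    for _ in range(min(num_items, len(remaining))):
        # Pick the unused item whose difficulty is closest to the current target.
        item = min(remaining, key=lambda it: abs(it[1] - target))
        remaining.remove(item)

        correct = answer_fn(item)
        responses.append((item, correct))

        # Core CAT rule: harder after a correct answer, easier after a miss.
        target += step if correct else -step
        step = max(0.25, step * 0.8)   # smaller adjustments as the estimate settles

    return responses
```

Because the length is fixed at `num_items`, the loop always runs to the end; a variable-length CAT would instead stop as soon as the difficulty estimate stabilized.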
The actual scoring of the test is done with item response theory (IRT). Although CAT is closely associated with IRT, IRT is also used to score non-CAT exams: the GRE Subject Tests, which are administered in the traditional paper-and-pencil format, use the same IRT scoring approach. What CAT adds is that items are selected dynamically, so the test taker sees only items of appropriate difficulty. Besides the psychometric benefits, this avoids wasting the examinee's time on items that are far too hard or too easy, as can happen in fixed-form testing.
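ETS's exact scoring model and scaling are not described here. Purely as an illustration, the sketch below uses the generic two-parameter logistic (2PL) IRT model, in which the probability of a correct answer is P(θ) = 1 / (1 + e^(−a(θ − b))) for examinee ability θ, item discrimination a, and item difficulty b, and estimates ability by a simple grid-search maximum likelihood. The function names and the grid range are arbitrary choices for the example.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability that an examinee with ability
    theta answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, grid=None):
    """Crude maximum-likelihood ability estimate over a grid of theta values.

    responses -- list of (a, b, correct) tuples for the administered items
    """
    if grid is None:
        grid = [x / 10.0 for x in range(-40, 41)]   # theta from -4.0 to 4.0
    best_theta, best_loglik = None, float("-inf")
    for theta in grid:
        loglik = 0.0
        for a, b, correct in responses:
            p = p_correct(theta, a, b)
            loglik += math.log(p if correct else 1.0 - p)
        if loglik > best_loglik:
            best_theta, best_loglik = theta, loglik
    return best_theta
```

The same estimation step works whether the items were chosen adaptively or came from a fixed paper-and-pencil form, which is why CAT and non-CAT exams can share an IRT scoring model.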
An examinee can miss one or more questions on a multiple-choice section and still receive the perfect score of 800. Likewise, even if no questions are answered correctly, the score cannot fall below 200, the lowest possible score.
Scaled score percentiles
The percentiles of the current test are as follows:
Scaled score | Verbal Reasoning % | Quantitative Reasoning % |
---|---|---|
800 | 99 | 94 |
780 | 99 | 89 |
760 | 99 | 85 |
740 | 99 | 80 |
720 | 98 | 75 |
700 | 97 | 71 |
680 | 96 | 66 |
660 | 94 | 62 |
640 | 92 | 57 |
620 | 89 | 52 |
600 | 86 | 48 |
580 | 82 | 44 |
560 | 77 | 39 |
540 | 72 | 35 |
520 | 67 | 31 |
500 | 62 | 28 |
480 | 57 | 24 |
460 | 52 | 21 |
440 | 46 | 18 |
420 | 40 | 16 |
400 | 35 | 13 |
380 | 29 | 11 |
360 | 24 | 9 |
340 | 19 | 7 |
320 | 13 | 6 |
300 | 8 | 4 |
280 | 5 | 3 |
260 | 2 | 2 |
240 | 1 | 1 |
220 | 0 | 0 |
200 | 0 | 0 |
mean | 457 | 586 |

Analytical Writing score | Writing % |
---|---|
6 | 98 |
5.5 | 92 |
5 | 81 |
4.5 | 63 |
4 | 41 |
3.5 | 23 |
3 | 10 |
2.5 | 3 |
2 | 1 |
1.5 | 0 |
1 | 0 |
0.5 | 0 |
mean | 3.9 |
Comparisons for "Intended Graduate Major" are "limited to those who earned their college degrees up to two years prior to the test date." ETS provides no score data for "non-traditional" students who have been out of school more than two years, although its own report "RR-99-16" indicated that 22% of all test takers in 1996 were over the age of 30.