User’s Manual for Iteman Classical Item and Test Analysis Version 4.1




Contact Information
Assessment Systems Corporation
2233 University Avenue, Suite 200
St. Paul, Minnesota 55114
Voice: (651) 647-9220
Fax: (651) 647-0412
E-Mail: [email protected]
www.assess.com

Bookmarks To view PDF Bookmarks for this manual, select the Bookmark tab on the left side of the Acrobat window. The bookmark entries are hyperlinks that will take you directly to any section of the manual that you select.

License Unless you have purchased multiple licenses for Iteman 4.1, your license is a single-user license. You may install Iteman 4.1 on a single computer and one additional computer (e.g., a desktop and a laptop), provided that there is no possibility that both copies will be in use simultaneously. Instructions for transferring your license between computers are in Appendix E.

Technical Assistance If you need technical assistance using Iteman 4.1, please visit the Support section of our Web site, www.assess.com. If the answer to your question is not posted, please email us at [email protected]. Technical assistance for Iteman 4.1 is provided for as long as you maintain the then-current version. Please provide us with the invoice number for your license purchase when you request technical assistance.

Citation Thompson, N.A., & Guyer, R. (2010). User’s Manual for Iteman 4.1. St. Paul MN: Assessment Systems Corporation.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means—electronic, mechanical, photocopying, recording, or otherwise—without the prior written consent of the publisher.

Copyright © 2010 by Assessment Systems Corporation All Rights Reserved

Iteman is the trademark of Assessment Systems Corporation


Table of Contents

1. Introduction
   Your Iteman 4.1 License and Unlocking Your Copy
2. Input Files
   The Data Matrix File
   Iteman 3 Data Format
   The Item Control File
3. Running the Program
   Using the Graphic User Interface
      The Settings Tab
      The Files Tab
      The Input Format Tab
      The Scoring Options Tab
      The Output Options Tab
   Using Multiple Runs Files
      Creating a Multiple Runs File
      Opening a Multiple Runs File
      A Sample MRF File
   Running the sample files
4. Interpreting the Output
   Test-Level Output (Examinee Scores)
   Test-Level Output (Reliability Analysis)
   Test-Level Output (Graphics)
   Conditional Standard Error of Measurement (CSEM)
   Item-Level Output
   What to Look For
      Item Difficulty
         The P value (Multiple Choice)
         The Item Mean (Polytomous)
      Item Correlations
         Multiple Choice Items
         Polytomous Items
      Option statistics
   Scores Output File
Appendix A: The Iteman 3 Header
   The Control Line
   The Keyed Responses
   The Number of Alternatives
   The Scale Inclusion Codes
Appendix B: Troubleshooting
   Please check the data file format specifications
   Please check the number of items or number of ID columns specified in the Iteman 3 Header
   Please select an input file with an Iteman 3 Header
   Valid item responses of 0 were identified. The Iteman 3 Header does not support item responses that begin at 0.
   At least one valid item response of 0 was identified.
   At least one unidentified response character was identified and will be scored as incorrect.
   Check the data matrix file, examinee XXX did not respond to all XXX items
Appendix C: Formulas
   Conditional Standard Error of Measurement Formulas
   Livingston Classification Consistency Index
Appendix D: Program Defaults File
Appendix E: License Transfer
   Summary
   Step 1 – Demo/Trial Program
   Step 2 – Licensed Program
   Step 3 – Demo/Trial Program


Iteman 4.1 Manual Page 1

1. Introduction
Iteman™ is a Windows® application designed to provide detailed item and test analysis reports using classical test theory (CTT). The purpose of these reports is to help testing programs evaluate the quality of test items by examining their psychometric characteristics.

Iteman has a friendly graphical user interface (GUI) that makes it easy to run the program, even if you are not familiar with psychometrics. The GUI is organized into five tabs: Settings, Files, Input Format, Scoring Options, and Output Options. These are discussed in detail in Chapter 3: Running the Program. Iteman 4.1 offers several substantial advantages over Iteman 3:

1. The most important advantage is the addition of graphics. It is now possible to produce an item quantile plot for each item. Moreover, you control the number of points in the plot.

2. Iteman 4.1 is able to handle pretest (trial or unscored) items—items that are not included in the final score but for which statistics are still desired.

3. More statistics are calculated, including the alpha (KR-20) reliability coefficient with each item deleted, several split-half reliability coefficients (both with and without Spearman-Brown correction) and subgroup P (proportion correct) statistics for up to seven ordered groups.

4. Instead of simple ASCII text files, the output is now in rich text file (RTF) format, prepared as a formal report, and also in a comma-separated values (CSV) format that can be manipulated (sorted, highlighted, etc.) in spreadsheet software. It additionally produces a CSV file of examinee scores.

5. Scaled scores and subscores can be added to the output.

6. Scores can be classified into two groups at a specified cut score, and the two groups can use your labels.

7. Items can be analyzed relative to an external score rather than the total score on a test.

8. The maximum number of items that can be analyzed has been increased to 10,000.

9. A “batch” type of capability, using a “Multiple Runs File,” has been added to allow you to run multiple data sets without having to use the graphic user interface for each run. Multiple Runs Files can be created outside Iteman in a text editor or interactively within Iteman.

Your Iteman 4.1 License and Unlocking Your Copy
Unless you have purchased a network or multiple-computer license, your license for Iteman 4.1 is a single-user license. Under this license you may install Iteman 4.1 on two computers (e.g., a desktop and a laptop) so long as there is no possibility that the two copies of the software will be in use simultaneously. If you would like to use Iteman 4.1 on a network or by more than one user, please contact us to arrange for the appropriate number of additional licenses.

Iteman 4.1 is shipped as a functionally limited demonstration copy. It is limited to no more than 50 items and 50 examinees, but has no expiration date. We can permanently convert your demo copy to the fully functioning software by email, phone, or fax once you have completed the license purchase. To unlock Iteman 4.1, please email/phone/fax to ASC:

1. Your name and email address.
2. Your organization or affiliation.
3. Your invoice number (in the top right corner of your invoice). You should make a record of your invoice number since you might be asked for it if you request technical support.
4. The “unlock codes,” which are two numeric codes that are unique to the installation of Iteman 4.1 on any given computer. To obtain these two codes, click on the “Unlock Program” button when Iteman 4.1 starts (Figure 1.1). This license window can also be reached by clicking on the License button and selecting “Unlock” when Iteman 4.1 is running in demo mode.

Figure 1.1: Screen Visible When Iteman 4 is Locked

From the unlock screen you will need to send us the two blue Computer ID and Session ID numbers (Figure 1.2). For your convenience, we have provided a “Copy IDs to Clipboard” button. This will copy both IDs to the Windows clipboard, along with a brief message and the email address to which to send your payment information. This can then be pasted into an email message, filled in, and sent to [email protected]. If you have already paid for your Iteman 4 license, be sure to add your invoice number to this message. When we receive these codes from you, we will respond with a single numeric Activation Code (if you have purchased a permanent license) or two codes (if you have purchased an annual subscription license) that you will need to enter into the same window from which you obtained your unlock codes (the red labels in Figure 1.2). Once you enter the code(s) that we send you, your copy will be unlocked and fully functional.

Figure 1.2: The Unlock Screen

Note that if you install Iteman 4.1 on a second computer, you will need to repeat this process for that computer since the unlock codes are specific to a given computer.

• Iteman 4.1 is permanently unlocked for academic use, but is an annual subscription for non-academic use. The license status box (see Figure 3.1) will display the current license status, including the number of days remaining for your subscription. As the subscription nears the end, the background color of the box will change to alert you to the need to renew your subscription for another year (red if you have less than 30 days remaining, yellow if 30-90 days, and green if more than 90 days).



2. Input Files Iteman 4.1 requires two input files: the Data Matrix File and an Item Control File. The formats for these files are described in the following sections.

The Data Matrix File The Data Matrix File is the file that contains examinee identification (ID) and the responses to each item. Responses can be alphabetical (A,B,C,D… or a,b,c,d…) or numerical (1,2,3,4…), where A = a = 1, etc. Each line presents the information for one person. An example of this is shown in Figure 2.1 for 10 items and 5 examinees. In this file, there are 9 columns of ID (the last two are blank) and 10 columns of responses.

Figure 2.1: Example of an Input Data File (No Ignored Columns)

Additional columns can be ignored, so it is not necessary to delete any data if your data file has information other than ID and responses. For example, your file might contain exam dates, locations, education level, or sensitive personal data that you do not want included in the output. An example of this is shown in Figure 2.2; you might want to include examinee ID numbers (the first six columns) in your output but not names. Chapter 3: Running the Program describes how to skip these columns.
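As an illustration only (Iteman reads these files itself), the fixed-column layout described above could be parsed like this; the function name and parameters are hypothetical, with column positions taken from the Figure 2.1 example:

```python
# Illustrative sketch, not part of Iteman: shows how the fixed-column
# layout works. Defaults follow Figure 2.1 (9 ID columns, 10 item
# responses beginning in column 10).
def parse_line(line, id_cols=9, resp_start=10, n_items=10):
    examinee_id = line[:id_cols].strip()
    start = resp_start - 1  # convert 1-based column number to 0-based index
    responses = list(line[start:start + n_items])
    return examinee_id, responses

eid, resp = parse_line("Person1  4213323412")
```

For the Figure 2.2 layout (7 ID columns, responses beginning in column 21), the same sketch would be called with id_cols=7 and resp_start=21, skipping the unwanted name columns.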

Figure 2.2: Example of an Input Data File (Columns to Ignore)

Data for Figure 2.1:

Person1  4213323412
Person2  1213323410
Person3  3323123413
Person4  1223323414
Person5  2214323411

Data for Figure 2.2:

6153425 John Doe    4213323412
5947824 Jane Doe    1213323410
5976281 Jack Hall   3323123413
1359687 Jim Hill    1223323414
9778236 Jen Smith   2214323411

Iteman 3 Data Format
Iteman 4.1 permits the analysis of a Data Matrix File in the format used with Iteman 3 (and other programs in the Item and Test Analysis package), which includes four header lines of control information in the data file rather than in a separate control file. If the Iteman 3 header is included in the Data Matrix File, the user should specify this with the checkbox on the “Files” tab of Iteman 4.1. Note that this method does not allow the use of Item IDs. See Appendix A for a description of the Iteman 3 header.

The Item Control File The previous version of Iteman required that the specifications for the test be provided on the first four lines of the data file, with all the data itself moved down. Iteman 4.1 provides the specifications as a separate Item Control File. This makes it easier to produce, as well as allows for the handling of a greater amount of information. This file is tab-delimited, which means that you can construct it in a spreadsheet program and then “Save As” a tab-delimited text file. There are six columns of information in the control file for each item. Begin each item on a new line:

1. Item ID (no spaces are allowed in the item ID);
2. Key(s) (correct answer(s) as ABCD or 1234 if dichotomous, + or – if polytomous);
3. Number of alternatives (maximum is 15);
4. Domain or content area (all “1” if there is only one domain on your test; maximum is 50);
5. Scoring status: Y = Yes (scored), N = No (not scored), P = Pretest;
6. Item type: M = Multiple choice (dichotomous), R = Rating scale items (responses begin at 1 or A and are polytomous), P = Partial credit items (responses begin at 0 and are polytomous or are scored dichotomously).

An example of the control file is shown in Figure 2.3. There are ten items: nine multiple choice items and one partial credit item. The first five are in Domain 1, while the latter five are in Domain 2. The first four items in each domain are scored, while the fifth item in each domain is a pretest item. The keyed answers are 1, 2, 3, or 4 for each multiple choice item, since each item has 4 alternatives. Keys can be alphabetical or numeric. Item 7 has two keyed responses (3 and 1). If an item is polytomously scored, the key should be “+” if positively scored and “-” if negatively (reverse) scored. Item 10 is a positively scored (+) partial credit item with item responses that begin at 0. For Item 10, the item responses will be 0, 1, 2, 3, and 4, since the item has five options.

The control file should have as many lines as there are items in the test. The program counts the lines of information in the control file, and that count serves as the total number of items in the test. There is a maximum of 10,000 items (lines) in Iteman 4.1.


Figure 2.3: Example of an Item Control File

Item01  1   4  1  Y  M
Item02  2   4  1  Y  M
Item03  3   4  1  Y  M
Item04  4   4  1  Y  M
Item05  1   4  1  P  M
Item06  2   4  2  Y  M
Item07  31  4  2  Y  M
Item08  4   4  2  Y  M
Item09  1   4  2  Y  M
Item10  +   5  2  P  P
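The six tab-delimited columns above can be illustrated with a short sketch (hypothetical code, not Iteman's own reader), parsing two lines in the Figure 2.3 layout:

```python
# Hypothetical sketch of parsing the six tab-delimited control-file
# columns described in this chapter; Iteman does this internally.
def parse_control_lines(lines):
    items = []
    for line in lines:
        if not line.strip():
            continue  # skip blank lines
        item_id, key, n_alts, domain, status, item_type = \
            line.rstrip("\n").split("\t")[:6]
        items.append({
            "id": item_id,                  # no spaces allowed
            "key": key,                     # e.g. "1", "31" (two keys), "+", "-"
            "n_alternatives": int(n_alts),  # maximum 15
            "domain": int(domain),          # maximum 50
            "status": status,               # Y = scored, N = not scored, P = pretest
            "type": item_type,              # M, R, or P
        })
    return items

items = parse_control_lines(["Item07\t31\t4\t2\tY\tM",
                             "Item10\t+\t5\t2\tP\tP"])
```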


3. Running the Program
Iteman’s interface is divided into five tabs.

• The Settings tab (Figure 3.1) is the only tab active when the program is loaded.
• The Files tab specifies the files to be used: Data Matrix, Item Control, Output, and an optional external score file.
• The Input Format tab specifies the columns of the Data Matrix File for IDs and item responses and permits you to specify the character code used in the Data Matrix to indicate omitted/skipped and not administered items.
• The Scoring Options tab enables you to perform scaled scoring and to perform dichotomous classification.
• The Output Options tab specifies options for the output.

This chapter describes these five tabs.

Using the Graphic User Interface

The Settings Tab
The Settings tab is active when the program opens. Selecting “Run Interactively from the Graphical User Interface” will enable the other four tabs (as shown in Figure 3.1). Selecting “Create a Multiple Runs File” opens a new window to interactively create a multiple runs file.

Saving an options file
You can save the current GUI settings to an external file of your choice on the Settings tab. To make changes to the GUI settings you will first need to select “Run Interactively from the GUI”. Saving an options file in this way is necessary for a multiple runs file, where the program settings are read in from an external file rather than selected using the GUI. For more information on the options file, see Appendix D.

Saving the current program settings to the program defaults file
This will overwrite the existing program defaults with the changes made during the current run of the program. These changes will appear the next time the program is loaded. For more information on the program defaults file, see Appendix D.


Figure 3.1: The Settings Tab

The Files Tab
To specify the files on the Files tab (Figure 3.2), click the file-selection button for each file. This will activate a standard dialog window to specify the path and name of each file. If the Data Matrix File has an Iteman 3 (ITAP) header, be sure to check the Iteman 3 Header box.

The Item Control File box will be disabled when the Iteman 3 Header box is checked, as will the options on the Input Format tab. The output file must have an .rtf extension. The fourth box is used if you have a file containing examinee scores that have been produced by some method other than number-correct that you wish to use as the basis for your statistics (for example, a scaled score reported by your testing vendor). The scores in this file, one line per examinee, must be in the same order as those in the examinee data file. The fifth box allows you to use a previously saved options file. The selected options file will override the current program defaults when opened. The last text box allows you to provide a title for your report.

Figure 3.2: The Files Tab

The Input Format Tab
The Input Format tab (Figure 3.3) contains six pieces of information about the Data Matrix File. First, specify the number of columns devoted to examinee ID information that you want to capture for your score output, then specify the column in which the IDs begin. Next, specify the column in which item responses begin. This column number can be increased to skip unwanted columns. Figure 2.2 in Chapter 2 has 7 ID columns and 13 unwanted columns before the responses start, so item responses begin in column 21. Note that the item responses must begin in column 1 at the earliest.


Figure 3.3: The Input Format Tab

If you have a special character in your data representing omitted responses or not-administered items, specify these next. These responses will be treated separately, with frequencies provided in the output. If all items were answered by all examinees, you can leave these characters at their default values, and of course no examinees will be noted as having such characters. If your Data Matrix File includes an ITAP header, the options on this tab will be deactivated and the following message will be displayed:


The Scoring Options Tab The Scoring Options tab (Figure 3.4) provides the ability to perform scaled scoring and/or dichotomous classification.

Figure 3.4: The Scoring Options Tab

♦ If your testing program reports scaled scores based on raw number-correct scores, these can be calculated directly. Scaled scores are computed using the scaling function (detailed below) for the total number-correct scores and/or the domain number-correct scores. Scaled scoring is often used to mask details about the test, such as the exact number of items or the raw cutoff score, or to express scores on a different scale than number correct.

o Linear scaling: The raw scores are first multiplied by the slope coefficient, then the intercept is added to the product. For example, if you want the scores to be reported on a scale of 100 to 200 for a test of 50 items, the scaled score could be specified as SCALE = RAW × 2 + 100.
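The linear scaling rule can be written out as a short function, using the slope and intercept from the 100-to-200 example above:

```python
# Linear scaling as described above: SCALE = RAW * slope + intercept.
def linear_scale(raw, slope=2.0, intercept=100.0):
    return raw * slope + intercept
```

With slope 2 and intercept 100, a 50-item test maps raw scores 0 through 50 onto the 100–200 scale.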


o Standardized scaling: The raw scores are converted to have a mean of X and a standard deviation of Y. This form of scaling is useful if you desire to center the mean of the test around a constant value (e.g., 50) for use in a report.
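Standardized scaling can be sketched as follows; the manual does not state here whether the population or sample standard deviation is used, so the population form below is an assumption:

```python
# Sketch of standardized scaling: linearly rescale raw scores so the
# scaled scores have a chosen mean and SD (population SD assumed).
from statistics import mean, pstdev

def standardized_scale(raw_scores, target_mean=50.0, target_sd=10.0):
    m, s = mean(raw_scores), pstdev(raw_scores)
    return [(r - m) / s * target_sd + target_mean for r in raw_scores]
```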

♦ If you want to perform dichotomous classification for the total number-correct scores, click the box next to that statement. It is possible to classify based on either the total number-correct or the scaled total number-correct scores.

o Cutpoint: The cutpoint is the value at which scores are classified as in the high group. Scores below the cutpoint are classified as being in the low group.
o Low group label: Label used in the Scores output file for those in the low group.
o High group label: Label used in the Scores output file for those in the high group.
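The classification rule above amounts to a one-line comparison; the default labels in this sketch are placeholders, since you supply your own labels in the GUI:

```python
# Sketch of the cutpoint rule: scores at or above the cutpoint get the
# high-group label, scores below it the low-group label. "Low"/"High"
# are placeholder labels.
def classify(score, cutpoint, low_label="Low", high_label="High"):
    return high_label if score >= cutpoint else low_label
```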

The Output Options Tab The Output Options tab (Figure 3.5) provides the ability to tailor the output report to your specific needs.

Figure 3.5: The Output Options Tab


♦ Item statistic flagging allows you to specify an acceptable range for a statistic. For example, if you want to identify all items that have a P (proportion correct) between 0.20 and 0.95, it can be specified here, and then the output will label items with low P as “LP” and high P as “HP.” Figure 3.5 is set up with these bounds, as well as a minimum point-biserial of 0.10. The “acceptable item mean” range is used to flag the item means of polytomous items to identify “outlier” items. Flags are further explained in Chapter 4.

♦ Selecting the “Exclude omits from option statistics” box will prevent omits from having the full complement of option statistics computed for them. The default of scoring omits as incorrect affects the reliability coefficients, and provides the full complement of option statistics for omits. For polytomous items, omits are automatically excluded from the option statistics.

♦ If you want to have the point-biserial and biserial correlations corrected for spuriousness, click the check box next to that statement. Spuriousness refers to the fact that an item’s scores are included in the total score, so correlating an item with the total score implies that it is being correlated with itself to some extent. This effect is negligible if there are a large number of items on the test (e.g., more than 30), but Iteman 4.1 provides the option to correct for this issue, which should be utilized for tests of 30 items or less.
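One standard form of this correction, assumed here for illustration (this section does not give Iteman's exact formula), correlates each item with the total score minus that item:

```python
# Illustrative sketch of a spuriousness correction: correlate the item
# with the total score minus the item, so the item is not correlated
# with itself. One common form; not necessarily Iteman's exact formula.
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def corrected_point_biserial(item_scores, total_scores):
    rest = [t - i for i, t in zip(item_scores, total_scores)]
    return pearson(item_scores, rest)
```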

♦ “Produce quantile plots for each item” will produce a graphical plot of the specified number of subgroups (up to 7) for each item; interpretation of these plots is discussed in Chapter 4: Interpreting the Output. The quantile plot will be produced for only the first 9 alternatives of an item. Click the check box for this option if you wish to produce quantile plots for each item, with every page of the output containing the plot and the statistics table for a given item.

♦ “Produce the quantile plot data table” will provide a table for each item that contains the proportions in each subgroup that are shown graphically in the quantile plot. The quantile plot data table will present the subgroup proportions for up to 15 alternatives plus the omit and not administered codes.

♦ “Use X points for quantile plots” allows you to increase or decrease the number X of examinee groups used for constructing the quantile plots. This number can range from 2 to 7. Larger numbers of points are recommended only for large sample sizes of at least 1,000 examinees.
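The grouping behind the quantile plots can be sketched as follows; Iteman's exact handling of ties and unequal group sizes is not documented here, so those details are assumptions:

```python
# Sketch of quantile-plot grouping: sort examinees by score, split them
# into n_groups roughly equal ordered groups, and compute the proportion
# in a group choosing a given option. Tie handling here is an assumption.
def quantile_groups(scores, n_groups=5):
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    size = len(scores) / n_groups
    groups = [[] for _ in range(n_groups)]
    for rank, idx in enumerate(order):
        groups[min(int(rank / size), n_groups - 1)].append(idx)
    return groups

def option_proportion(responses, group, option):
    return sum(1 for i in group if responses[i] == option) / len(group)
```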

♦ If you need to convert multiple-choice (ABCD) data into dichotomously-scored (0/1) data, Iteman 4.1 provides an option for this. In addition, if you have reverse-scored (4, 3, 2, 1) polytomous data, the saved scored item response matrix will have the item responses reversed (1, 2, 3, 4). This option is present because some psychometric software, such as PARSCALE, requires scored data. The scored item responses will be saved with the name of your primary output file, but with a .TXT extension.

o “Include omit codes in the data matrix” and “Include not administered codes in the data matrix” determine whether omit/not administered codes are kept in the scored matrix or scored as incorrect (0). Omit/not administered codes are automatically left in the data matrix for polytomous items.
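The conversion to 0/1 scored data can be sketched like this; the omit and not-administered codes are placeholders for whatever characters you specified on the Input Format tab:

```python
# Sketch of scoring one keyed multiple-choice response as 0/1, as
# described above. "O" and "N" are placeholder omit/not-administered codes.
def score_response(response, keys, omit_code="O", na_code="N",
                   keep_codes=False):
    if response in (omit_code, na_code):
        # either keep the special code in the matrix, or score it as 0
        return response if keep_codes else 0
    return 1 if response in keys else 0

scored = score_response("3", "31")  # Item07-style item with two keys
```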

♦ To save the Item Control File to an external file, check the corresponding box. The control file will be saved with the same name as the output file, but with ‘Control.txt’ appended to the end of the filename.


Using Multiple Runs Files

Creating a Multiple Runs File If you would like to perform multiple item analyses with a single run of the program, then you should create a Multiple Runs File (MRF). To interactively create an MRF, select the “Create a multiple runs file” button on the Settings tab. This will open the window shown in Figure 3.6 that allows the interactive setup of the multiple runs file. Note that the options are grayed out because no path has been selected. This interactive window contains the MRF text editor window which shows the files/options selected for the multiple runs file.

Figure 3.6: The Multiple Runs File Window

To create an MRF:

1. Select the folder where the files used for the analysis are stored. Click “Add Path” to add the path to the MRF. (You must complete steps 2, 3, and 4 to perform an analysis.)
2. Select the Options File:
   a. If you saved the program options to an external file, open this file and select “Add Options”. The Options File will be added to the MRF.
   b. If you wish to use the program defaults, select “Use Defaults.” The keyword “DEFAULTS” will appear in the MRF text editor next to OPTS.
3. Select the Item Control File (the data file(s) must follow the item control keyword):


   a. If you are using an Item Control File, use the file open icon to select the file, then select “Add Control”. The name of the control file will appear in the MRF box next to CTRL.
   b. If the data matrix includes an Iteman 3 Header, select the “Skip Control” box. A blank space will appear next to the CTRL statement in the MRF box.
4. Select the data file(s) and click “Add Data”. Note that if you enter a file name that does not exist in the selected folder and select “Add”, the program will not add the file to the MRF.

It is important to note that the options*, control**, and data files for a specific analysis must all reside within the same folder.
*Unless the defaults are used.
**Unless an Iteman 3 Header is used.

You may delete entries in the MRF text editor by clicking on the line and hitting “Delete” or “Backspace”. However, the following file sequence must be observed for the MRF to work correctly:

1. The first PATH keyword must be followed by the OPTS, CTRL, and DATA lines.
2. If you wish to use a different OPTS file, that file must appear after the PATH statement.
3. The CTRL statement must be followed by the DATA line(s).

To save the text in the MRF editor box to an external file, select the “Save MRF” button. This will allow you to save the MRF to a folder of your choosing. An example of a completed MRF file is shown below.

To run the MRF, select the “RUN MRF” box. Note that the text in the MRF editor box will automatically be saved to an external file when you run the MRF. The saved MRF text file will have the word ‘MRF’ appended to the end of the filename of the last selected data file. The following output files will be generated for each DATA file in the MRF:

1. DATA.rtf – The main rich text output file that includes the graphics and tables
2. DATA.csv – The comma-separated values output file
3. DATA Scores.csv – The scores saved as a comma-separated values file

The following output files are optional and will be generated for each DATA file in the MRF if requested in the Options File:

4. DATA Matrix.txt – The scored data matrix file
5. DATA Control.txt – The item control file, if the original data matrix file used an Iteman 3 Header and a scored data matrix was requested

Opening a Multiple Runs File

A previously saved multiple runs file can be opened on the Settings tab (shown in Figure 3.1). To do so, select the file and click “Open.” Iteman 4.1 will automatically run the opened multiple runs file.


The file can be one saved from the interactive window described above or one created in a text editor. The format of the MRF file is as follows:

1. Keyword “PATH”, separated by a tab, followed by the Windows path
2. Keyword “OPTS”, separated by a tab, followed by the options file name (if an external options file is used) or DEFAULTS if the program defaults are to be used
3. Keyword “CTRL”, separated by a tab, followed by the item control file name (if an item control file is used) or ITEMAN 3 if the data matrix includes an Iteman 3 Header
4. Keyword “DATA”, separated by a tab, followed by the data file name

MRFs may also be created or edited in a text editor. They must, however, be saved as plain text (not word processing) files.
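Because the MRF format is a simple tab-separated keyword file, it is easy to generate or check from a script. The following Python sketch is illustrative only (it is not Iteman’s own reader); it parses MRF text and enforces the ordering rules listed above, and the sample file names are hypothetical.

```python
# Illustrative sketch of a parser for the MRF keyword format described above.
# This is not Iteman's own code; it only checks the ordering rules given in
# the manual (PATH first; each DATA line preceded by a CTRL statement).

def parse_mrf(text):
    """Return a list of (keyword, value) pairs from MRF text."""
    entries = []
    for raw in text.splitlines():
        line = raw.rstrip()
        if not line:
            continue
        keyword, _, value = line.partition("\t")
        if keyword not in ("PATH", "OPTS", "CTRL", "DATA"):
            raise ValueError("Unknown keyword: " + repr(keyword))
        entries.append((keyword, value))
    if not entries or entries[0][0] != "PATH":
        raise ValueError("The first line must be a PATH statement")
    seen_ctrl = False
    for keyword, _ in entries:
        if keyword == "CTRL":
            seen_ctrl = True
        elif keyword == "DATA" and not seen_ctrl:
            raise ValueError("DATA lines must follow a CTRL statement")
    return entries

mrf = "PATH\tC:\\Sample Files\\\nOPTS\tDEFAULTS\nCTRL\tControl.txt\nDATA\tExam1.txt"
print(parse_mrf(mrf))
```

A check like this can catch an out-of-order MRF before Iteman reports an error at run time.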

A Sample MRF File

Figure 3.7 displays a sample Multiple Runs File:

Figure 3.7: A Sample Multiple Runs File

The data files ‘Exam1.txt’, ‘Exam2.txt’, and ‘Exam3.txt’ all make use of the control file ‘Control.txt’. The data file ‘Exam4.txt’ uses an Iteman 3 Header, so the CTRL line with ‘ITEMAN 3’ precedes its DATA line. The new CTRL line overrides the previous CTRL file ‘Control.txt’, and the keyword ‘ITEMAN 3’ deactivates the input of the control file. A new PATH statement at the end of this file would change the folder location of any following OPTS, CTRL, and DATA files to be analyzed. An MRF file can have any number of lines.

Figure 3.8 shows the multiple runs window following the successful completion of the multiple runs analysis. The window above the “Add Path” button reports the following information:

1. The dataset being analyzed
2. One of two things:

a. If no errors were encountered then “The analysis was completed successfully” will be reported

b. Any error messages will be reported here if any are encountered. See Figure 3.9 below for sample error messages that may be encountered.

3. If the analysis was completed successfully then the number of items and examinees is reported on the third line for that data file.

PATH	C:\Sample Files\
OPTS	Sample.options
CTRL	Control.txt
DATA	Exam1.txt
DATA	Exam2.txt
DATA	Exam3.txt
CTRL	ITEMAN 3
DATA	Exam4.txt


Figure 3.8: The Multiple Runs Window Following the Sample Analysis


Figure 3.9: Sample Error Messages in the Multiple Runs Window

Running the sample files

For a new user, the best way to start is by running the sample files that come with the software. This will provide experience with the steps needed to run the program once the input files have been successfully created. Iteman 4.1 is installed with three sets of sample files: multiple choice (MC) only, rating scale (RS) only, and a mixed test. The mixed test is intended to simulate an educational exam with a large number of multiple-choice items (40 in this case) and a few constructed-response items (2 in this case). To run the sample files, follow these steps, one step for each tab.

1. Specify your files. For the MC only sample files, the Data matrix file is Sample data file 1 (MC only).txt and the Item control file is Sample control file 1 (MC only).txt. You can name your output file whatever you like. Figure 3.10 shows what the Files tab should now look like.

2. The sample data file has 6 columns of ID information, beginning in column 1, while item responses begin in column 7. These are determined by counting columns in the data file (advanced text editors, such as PSPad (www.pspad.com), can count this for you). Specify these numbers on the Input Format tab, as shown in Figure 3.11. There is no missing data in the sample file, so you do not have to be concerned with the Omit or Not Administered characters.


Figure 3.10: The Files Tab with the Sample Files

Figure 3.11: The Input Format Tab for the Sample Files

3. Specify any Scoring Options and Output Options you wish. The program will run successfully even if you make no changes on the fourth and fifth tabs. Once the program has successfully run, you will be shown the message in Figure 3.12 telling you that the run is complete and where to find the output file. Clicking “Yes” will open the relevant directory.


Figure 3.12: “The run is complete.”


4. Interpreting the Output

Iteman 4.1 provides three default output files: (1) an RTF report, (2) a CSV file of item statistics, and (3) a CSV file of examinee scores. In addition, there is the optional output of scored item responses. The CSV file of test and item statistics includes the same statistics as the RTF report, but in CSV form so you can manipulate the data in a spreadsheet or easily upload it into item banking software such as the FastTEST Test Development System, the FastTEST Professional Testing System, or FastTEST Web.

The primary output, the RTF report, is presented as a formal report that can be provided to test developers. It begins with a title page, followed by summary information of the input specifications. This is important for historical purposes; if the report is read in the future, it will be evident how Iteman 4.1 was set up to produce the report.

If more than 300 items are analyzed, the item-level RTF report will be divided into separate files. The test-level output and the item-level output for the first 300 items will be saved in the first file. The second file will be comprised of the item-level output for items 301-600. Additional item-level RTF files will be created for all k items, with each RTF file containing the output for up to 300 items.

Test-Level Output (Examinee Scores)

Next, the report provides test-level summary statistics based on raw number-correct scores (or external scores if utilized). This is done for the total score (all items) as well as the actual score (scored items only), pretest items only, and all domains or content areas. The following are definitions of the columns in this table.

Label      Explanation
Score      Which portion of the test the row is describing
Items      Number of items in that portion of the test
Mean       Average number correct
SD         Standard deviation, a measure of dispersion (a range of ± two SDs from the mean includes approximately 95% of the examinees, if their number-correct scores are normally distributed)
Min score  The minimum number of items an examinee answered correctly
Max score  The maximum number of items an examinee answered correctly
Mean P     Average item difficulty statistic for that portion; also the average proportion-correct score if there are no omitted responses (not reported if there are no multiple-choice items)
Item Mean  Average of the item means for polytomous items (not reported if there are no polytomous items)
Mean R     Average item-total correlation for that portion of the test

The test-level summary table (Table 4.1) allows you to make important comparisons between these various parts of the test. For example, are the new pretest items of comparable difficulty to the current scored items? Are items in Domain 2 more difficult than Domain 1? Were the mean and standard deviation (SD) of the raw scores what should be expected?

Table 4.1: Example Summary Statistics

Score          Items  Mean    SD     Min Score  Max Score  Mean P  Item Mean  Mean R
All items      42     38.560  5.288  27         46         0.863   2.020      0.224
Scored items   36     33.600  4.703  23         40         0.869   2.020      0.223
Pretest items  6      4.960   1.087  2          6          0.827   0.000      0.230
Domain 1       8      7.360   0.776  5          8          0.920   0.000      0.130
Domain 2       16     13.600  2.185  7          16         0.850   0.000      0.259
Domain 3       12     12.640  2.926  7          17         0.860   2.020      0.239

Test-Level Output (Reliability Analysis)

The reliability analysis provides a table that summarizes the reliability statistics computed by Iteman 4.1. Coefficient α (alpha) and the SEM (based on α) are computed for all items, scored items only, pretest items only, and for each domain separately. Three forms of split-half reliability are computed. First, the test is randomly divided into two halves, and the Pearson product-moment correlation is computed between the total scores for the two halves. Pearson correlations are also provided between the total scores for the first and second halves of the test, and between the odd- and even-numbered items on the test. Since these correlations are computed using half the total number of items, the Spearman-Brown corrected correlations are also provided.
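The Spearman-Brown correction steps a half-length correlation up to the full test length via r_full = 2r / (1 + r). A one-line sketch, checked against the Split-Half (Random) value for all items in Table 4.2 (0.537 steps up to the tabled 0.699):

```python
# Spearman-Brown step-up: project a half-test correlation onto the full test.
def spearman_brown(r_half):
    return 2 * r_half / (1 + r_half)

# The Split-Half (Random) value for all items in Table 4.2 is 0.537;
# stepping it up reproduces the tabled S-B Random value of 0.699.
print(round(spearman_brown(0.537), 3))  # 0.699
```

The other tabled S-B values follow the same formula, with small differences attributable to rounding of the printed split-half correlations.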

Table 4.2: Example Reliability Analysis

Score          Alpha  SEM    Split-Half  Split-Half    Split-Half  S-B     S-B         S-B
                             (Random)    (First-Last)  (Odd-Even)  Random  First-Last  Odd-Even
All items      0.765  2.561  0.537       0.473         0.707       0.699   0.643       0.829
Scored items   0.731  2.439  0.462       0.434         0.682       0.632   0.605       0.811
Pretest items  0.519  0.754  -           -             -           -
Domain 1       0.073  0.747  0.014       0.182         -0.008      0.028   0.308       -0.016
Domain 2       0.642  1.307  0.607       0.380         0.328       0.755   0.551       0.494
Domain 3       0.590  1.874  0.209       0.149         0.600       0.345   0.259       0.750

If a dichotomous classification was performed, and all the scored items are multiple choice, the Livingston decision consistency index is computed at the cut-score (expressed as number-correct scores). The equation for the Livingston index is provided in Appendix C.


Test-Level Output (Graphics)

After the test-level statistical table, a grouped frequency distribution figure is presented, showing the distribution of number-correct scores for the scored items, as seen in Figure 4.1. Similar graphs are produced for each domain, if you have more than one domain.

Figure 4.1: Example Score Distribution

After the histograms for the scored items, histograms for the item statistics are provided, each followed by a table of numerical values corresponding to the histogram. If there are scored multiple-choice items, histograms of the item P values and Rpbis correlations are provided. If there are scored polytomous items, histograms of the item means and the Pearson r correlations are provided. Next, scatterplots are provided of the P value by Rpbis if there are scored multiple-choice items, and of the item mean by Pearson’s r if there are scored polytomous items.

Conditional Standard Error of Measurement (CSEM)

The classical CSEM function is plotted for observed number-correct scores between 0 and the total number of scored items. The CSEM plot is provided only if there are no scored polytomous items. The plot is computed using Lord’s (1984) Formula IV, which makes the explicit assumption that all items are scored dichotomously (0/1), so it cannot be computed for the total score when there are polytomous scored items. A sample CSEM plot is shown in Figure 4.2:


Figure 4.2: Example CSEM Function

If dichotomous classification was performed, then the CSEM is reported at the cutscore (expressed as number correct). If you used a scaled cutscore, this scaled cutscore is converted to the raw number-correct scale for reporting.
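Lord’s Formula IV itself is not reproduced in this manual, but the general behavior of a classical CSEM can be sketched with the familiar binomial-error form sqrt(x(n − x)/(n − 1)). Whether this matches the exact Formula IV that Iteman implements is an assumption, so treat this only as an illustration of the shape seen in Figure 4.2: zero at the extremes and largest near the middle of the score range.

```python
import math

# Illustration only: a simple binomial-error approximation to the CSEM for
# number-correct score x on an n-item dichotomously scored test. The exact
# Formula IV used by Iteman 4.1 (Lord, 1984) may differ from this form.

def csem_binomial(x, n):
    return math.sqrt(x * (n - x) / (n - 1))

# Zero at the extremes, largest near the middle of the score range:
for x in (0, 9, 18, 27, 36):
    print(x, round(csem_binomial(x, 36), 2))
```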

Item-Level Output

After the test-level statistics, a detailed table of the statistics for each item is provided, one item to a page. If the quantile plots option is selected, the plot is also provided on the same page, as shown in Figure 4.3 for a dichotomously scored item and Figure 4.4 for a polytomous item. These quantile plots can be pasted into the item record for test items that are stored in ASC’s FastTEST item banker, FastTEST Pro, or FastTEST Web.

The quantile plot, as seen in Figure 4.3, can be difficult to interpret, but is arguably the best way to graphically depict the performance of an item with classical test theory. It is constructed by dividing the sample into X groups based on overall number-correct score, or an external score if used, and then calculating the proportion of each group that selected each option. For a four-option multiple-choice item with three score groups, as in the example, there are 12 data points. The 3 points for a given option are connected by a colored line. A good item will typically have a positive slope on the line for the correct/keyed answer, while the slope for the incorrect options should be negative.

Note: Quantile plots might not be visible if the output is opened in WordPad.
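The grouping-and-proportion computation behind the quantile plot can be sketched as follows. All names and the toy data are illustrative, and Iteman’s own handling of ties and unequal group sizes may differ.

```python
# Sketch of the quantile-plot computation: split examinees into score groups
# by total score, then tabulate the proportion of each group choosing each
# option for one item. Toy data; not Iteman's own algorithm.

def quantile_table(total_scores, responses, n_groups=3):
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    size = len(order) / n_groups
    counts = [{} for _ in range(n_groups)]
    for rank, i in enumerate(order):
        g = min(int(rank / size), n_groups - 1)
        counts[g][responses[i]] = counts[g].get(responses[i], 0) + 1
    # convert counts to within-group proportions
    return [
        {opt: c / sum(group.values()) for opt, c in group.items()}
        for group in counts
    ]

scores = [10, 12, 15, 20, 22, 25, 30, 33, 35]            # total scores
answers = ["B", "C", "A", "A", "B", "A", "A", "A", "A"]  # one item's responses
table = quantile_table(scores, answers)
print(table[2])  # the top score group all chose option A: {'A': 1.0}
```

A well-behaved keyed option shows an increasing proportion across groups, which is exactly the positive slope described above.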


Figure 4.3: Example Item-Level Output for a Multiple-Choice Item


Figure 4.4: Example Item-Level Output for a Polytomous Item


The item information table in Figures 4.3 and 4.4 provides the item sequence number, item ID, keyed response, number of options, and the domain the item is in. The item statistics table provides item-level statistics and is described separately for multiple-choice and polytomous items.

Multiple-Choice Items

Label          Explanation
N              Number of examinees that responded to the item
P              Proportion correct
Domain Rpbis*  Point-biserial correlation of keyed response with domain score
Domain Rbis*   Biserial correlation of keyed response with domain score
Total Rpbis    Point-biserial correlation of keyed response with total score
Total Rbis     Biserial correlation of keyed response with total score
Alpha w/o      The coefficient alpha of the test if the item was removed
Flags          Any flags, given the bounds provided

Polytomous Items

Label         Explanation
N             Number of examinees that responded to the item
Mean          Average score for the item
Domain r*     Correlation of item (Pearson’s r) with domain score
Domain Eta*+  Coefficient eta from an ANOVA using item and domain scores
Total r       Correlation of item (Pearson’s r) with total score
Total Eta+    Coefficient eta from an ANOVA using item and total scores
Alpha w/o     The coefficient alpha of the test if the item was removed
Flags         Any flags, given the bounds provided

*Output provided if there are 2+ domains.
+Eta is reported if the item has 3+ categories; otherwise the biserial correlation will be reported.

The following table provides explanations for the option-level information in the third table seen in Figures 4.3 and 4.4, “Option statistics.”

Label   Explanation
Option  Letter/number of the option
Weight  Scoring weight for polytomous items
N       Number of examinees that selected the option
Prop.   Proportion of examinees that selected the option
Rpbis   Point-biserial correlation of option with total score
Rbis    Biserial correlation of option with total score
Mean    Average score of examinees that selected the option
Color   Color of the option on the quantile plot
(key)   The keyed answer will be denoted by **KEY** for multiple-choice items


The final table in Figures 4.3 and 4.4 presents the calculations for the quantile plots. The number of columns in this table will match the number of score groups you specified on the Output Options tab.

Iteman 4.1 was designed to produce RTF output instead of PDF output to allow you to make additions to the report. A very useful addition would be to paste item text and comments below the plot/table for each item (Figures 4.3 and 4.4). The report can then be delivered to content experts with an easy-to-read plot, detailed tables, and the item text neatly arranged on each page, one page for each item.

What to Look For

At a higher level, the use of Iteman 4.1 output has two steps: first, to identify which items perform poorly, and second, to diagnose the problems present in those items. The following are some definitions of, and considerations for, item statistics.

Item Difficulty

• The P value (Multiple Choice)

The P value is the proportion of examinees that answered an item correctly (or in the keyed direction). It ranges from 0.0 to 1.0. A high value means that the item is easy, and a low value means that the item is difficult.

The minimum P value bound represents what you consider the cut point for an item being too difficult. For a relatively easy test, you might specify 0.50 as a minimum, which means that 50% of the examinees have answered the item correctly. For a test where we expect examinees to do poorly, the minimum might be lowered to 0.4 or even 0.3. The minimum should take into account the possibility of guessing; if the item is multiple-choice with four options, there is a 25% chance of randomly guessing the answer, so the minimum should probably not be 0.20.

The maximum P value represents the cut point for what you consider to be an item that is too easy. The primary consideration here is that if an item is so easy that nearly everyone gets it correct, it is not providing much information about the examinees. In fact, items with a P of 0.95 or higher typically have very poor point-biserial correlations.
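As a concrete illustration, the P value is simply the mean of a 0/1 scored response vector:

```python
# P value: proportion of examinees answering the item in the keyed direction.
def p_value(scored_responses):
    """scored_responses holds 0 (incorrect) or 1 (correct) per examinee."""
    return sum(scored_responses) / len(scored_responses)

print(p_value([1, 1, 1, 0, 1, 0, 1, 1]))  # 6 of 8 correct -> 0.75
```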

• The Item Mean (Polytomous)

The item mean is the average of the item responses, converted to numeric values, across all examinees. The range of the item mean depends on the number of categories and whether the item responses begin at 0. The interpretation of the item mean depends on the type of item (rating scale or partial credit). A good rating scale item will have an item mean close to ½ of the maximum, as this means that, on average, examinees are not endorsing categories near the extremes of the continuum.

The minimum item mean bound represents what you consider the cut point for the item mean being too low, and the maximum item mean bound the cut point for the item mean being too high. The number of categories for the items must be considered when setting these bounds; otherwise, all items of a certain type (e.g., 3-category) might be flagged.

Item Correlations

• Multiple Choice Items

The item point-biserial (r-pbis) correlation. The Pearson point-biserial correlation (r-pbis) is a measure of the discrimination, or differentiating strength, of the item. It ranges from −1.0 to 1.0. A good item is able to differentiate between examinees of high and low ability, and will have a higher point-biserial, though rarely above 0.50. A negative point-biserial is indicative of a very poor item, because it means the high-ability examinees are answering incorrectly while the low-ability examinees are answering correctly. A point-biserial of 0.0 provides no differentiation between low-scoring and high-scoring examinees, essentially random “noise.”

The minimum item-total correlation bound represents the lowest discrimination you are willing to accept. This is typically a small positive number, like 0.10 or 0.20. If your sample size is small, it could possibly be reduced. The maximum item-total correlation bound is almost always 1.0, because it is typically desired that the r-pbis be as high as possible.

The item biserial (r-bis) correlation. The biserial correlation is also a measure of the discrimination, or differentiating strength, of the item. It ranges from −1.0 to 1.0. The biserial correlation is computed between the item and total score as if the item were a continuous measure of the trait. Since the biserial is an estimate of Pearson’s r, it will be larger in absolute magnitude than the corresponding point-biserial. The biserial makes the strict assumption that the score distribution is normal, so it is not recommended for traits where the score distribution is known to be non-normal (e.g., pathology).
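A minimal sketch of the point-biserial, computed as the Pearson correlation between a 0/1 item score and the total score. This is only illustrative; for instance, whether Iteman excludes the item itself from the total score before correlating is not specified here.

```python
import math

# Point-biserial: the Pearson correlation between a 0/1 item score and the
# total score. Illustrative; Iteman's exact computation may differ (e.g.,
# whether the item itself is excluded from the total).

def point_biserial(item, total):
    n = len(item)
    mean_i = sum(item) / n
    mean_t = sum(total) / n
    cov = sum((i - mean_i) * (t - mean_t) for i, t in zip(item, total)) / n
    sd_i = math.sqrt(sum((i - mean_i) ** 2 for i in item) / n)
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in total) / n)
    return cov / (sd_i * sd_t)

# High scorers answer correctly, low scorers do not -> strong positive r-pbis:
item = [0, 0, 0, 1, 1, 1]
total = [10, 12, 14, 28, 30, 32]
print(round(point_biserial(item, total), 3))  # 0.984
```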

• Polytomous Items

Pearson’s r correlation. The Pearson r correlation is the product-moment correlation between the item responses (as numeric values) and the total score. It ranges from −1.0 to 1.0. The r correlation indexes the linear relationship between item score and total score and assumes that the item responses form a continuous variable. The r correlation and the r-pbis are equivalent for a 2-category item.

The minimum item-total correlation bound represents the lowest discrimination you are willing to accept. Since the typical r correlation (0.5) will be larger than the typical r-pbis (0.3), you may wish to set the lower bound higher for a test with polytomous items (0.2 to 0.3). If your sample size is small, it could possibly be reduced. The maximum item-total correlation bound is almost always 1.0, because it is typically desired that the correlation be as high as possible.

Eta coefficient. The eta coefficient is computed using an analysis of variance with the item response as the independent variable and total score as the dependent variable. The eta coefficient is the square root of the ratio of the between-groups sum of squares to the total sum of squares and has a range of 0 to 1. The eta coefficient does not assume that the item responses are continuous, nor does it assume a linear relationship between the item response and total score. As a result, the eta coefficient will always be equal to or greater than the corresponding Pearson’s r. Note that the biserial correlation will be reported if the item has only 2 categories.
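A sketch of the computation, assuming the standard correlation-ratio definition (eta² = SS_between / SS_total, so taking the square root puts eta on the same scale as r). The toy data and names are illustrative.

```python
import math

# Eta from a one-way ANOVA: item response categories are the groups, total
# score is the dependent variable. eta = sqrt(SS_between / SS_total), which
# puts it on the same scale as Pearson's r. Illustrative sketch only.

def eta(item_responses, total_scores):
    grand_mean = sum(total_scores) / len(total_scores)
    groups = {}
    for resp, score in zip(item_responses, total_scores):
        groups.setdefault(resp, []).append(score)
    ss_total = sum((t - grand_mean) ** 2 for t in total_scores)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values()
    )
    return math.sqrt(ss_between / ss_total)

responses = [0, 1, 1, 2, 2, 2]     # a 3-category partial-credit item
totals = [8, 12, 14, 20, 22, 24]   # total scores
print(round(eta(responses, totals), 3))  # 0.974
```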

Option statistics

Each option has a P value and an r-pbis. The values for the keyed response serve as the statistics for the item as a whole, but it is the values for the incorrect options (the distractors) that provide the opportunity to diagnose issues with the item. A high P for a distractor means that many examinees are choosing that distractor; a high positive r-pbis means that many high-ability examinees are choosing it. Such a situation identifies a distractor that is too attractive, and could possibly be argued as correct.

Scores Output File

The CSV score output file provides the scores for each examinee, separated by item type. Figure 4.5 displays the scores for 10 examinees. The columns in this file are as follows:

1. Sequence – The row number of the examinee in the data file.
2. ID – Examinee ID from the data file.
3. Scored items – The total number-correct (raw) score for all scored items across all domains.
4. All items – The total number-correct score for all scored items across all domains plus all of the pretest items included in the test.
5. Pretest items – The number-correct scores for the pretest items only.
6. Scored Proportion Correct – The proportion of scored items correct across all domains. This value is only output if there are no scored polytomous items.
7. Rank – The rank of the examinee’s score, from 1 to N, calculated as the number of examinees with total scores for the scored items (column 3) greater than or equal to the given examinee’s. Examinees tied with the same score receive the lowest rank available (e.g., Examinees 4 and 8 have ranks of 4,167). The examinee(s) with the highest score will receive a rank of 1.
8. Percentile – The percentage of examinees whose total score falls below the given score.
9. Group – The score group an examinee is classified into, based on total score for all the scored items. The number of score groups is determined by the value set in the “Create X score groups for the quantile plots / summary table” box on the Output Options tab.
10. Domain X – The domain scores, separately for each domain. If more than one domain was specified in the Item Control File, the total number-correct scores calculated separately for each domain appear after the “Group” column. If the test has only one domain, this will not be included, as it would be the same as the “Scored items” column.
11. Scaled Total Score – If scaled scores were computed for the total score, these will appear here.
12. Scaled Domain Score X – If scaled scores were requested for each domain, and there is more than one domain, these will be provided.
13. Classification – If dichotomous classification was performed, the results of the classification will be provided. Classification can be performed with the raw total number-correct score or the scaled total number-correct score.


14. CSEM III – The conditional standard error of measurement Formula III from Lord (1984) if there are no scored polytomous items.

15. CSEM IV – The conditional standard error of measurement Formula IV from Lord (1984) if there are no scored polytomous items. Across all observed scores, this formula is most comparable to the classical SEM found in Table 4.2.
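The Rank and Percentile columns can be sketched as below. Note that the manual’s two statements about rank (the count of scores greater than or equal, versus ties receiving the lowest available rank) can disagree for tied scores; this sketch follows the tie rule, computing rank as one plus the number of strictly higher scores.

```python
# Sketch of the Rank and Percentile columns. Rank here is standard
# competition ranking (ties share the lowest available rank); Percentile is
# the percentage of examinees scoring strictly below. Illustrative only.

def rank_and_percentile(scores):
    n = len(scores)
    ranks = [1 + sum(1 for other in scores if other > s) for s in scores]
    pcts = [100.0 * sum(1 for other in scores if other < s) / n for s in scores]
    return ranks, pcts

ranks, pcts = rank_and_percentile([40, 35, 35, 30])
print(ranks, pcts)  # [1, 2, 2, 4] [75.0, 25.0, 25.0, 0.0]
```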

Figure 4.5: Sample Examinee Scores Output


Appendix A: The Iteman 3 Header

The input data file shown below is an ASCII/text file in the format required by the previous version of Iteman, version 3. All the item response data and all control information are contained in a single input file, with the control information in the first four lines of the file. An example of a data file of multiple-choice items that includes the Iteman 3 header is shown below:

Figure A.1: Example Input File With an Iteman 3 Header

30 O N 5
143534243521132435241342351423 KEY
555555555555555555555555555555 NO. ALTERNATIVES
YYYYYYYYYYYYYYYYYYYYYYYYYYYYYY ITEMS TO INCLUDE
EX001543542143554321542345134332413 EXAMINEE #1
EX002143534244522133OO2542531342513 EXAMINEE #2
EX003143534223521132435244342351233 EXAMINEE #3
EX004143534243521132435241342352NNN EXAMINEE #4
EX005143534243412132435452132341323 EXAMINEE #5

A data file with an Iteman 3 control header consists of five primary components:

1. A control line describing the data;
2. A line with keyed responses;
3. A line with the numbers of alternatives for the items;
4. A line specifying which items are to be included in the analysis; and
5. The examinee data.

Comments may also be included in the data file. Each of these elements is described in the following sections.

The Control Line

The first line of the data file must contain the following data:

1. Number of items for which responses are recorded for each examinee (maximum is 10,000)
2. One space or tab
3. Alphanumeric code for omitted responses
4. One space or tab
5. Alphanumeric code for items not reached by the examinee
6. One space or tab
7. Number of characters of identification data recorded for each examinee


The first entry in the Iteman 3 header file specifies the number of items to be scored. Unlike Iteman 3, Iteman 4.1 does not require that this number be located in a fixed position on the first line. A space or tab must separate the number of items from the next character, the omit code.

The column immediately following the space/tab must contain the alphanumeric code for items that the examinee has omitted. This may be a digit larger than the number of alternatives, a letter, or some other character, including a “blank.” For example, it might be “9” for a five-alternative item, an “O” for omitted, or a period. Following the omit character must be a space or tab. Immediately following the space/tab must be the alphanumeric code for items that the examinee did not reach and therefore did not have a chance to answer. Like the omission code, it may be a digit larger than the number of alternatives or any other character. In Figure A.1, the letter “O” indicates an omitted item, and “N” indicates a not-reached item.

A space or tab must separate the not-reached code from the number of ID columns. In Iteman 4 this value can now range from 0 to 1,000 columns of examinee identification. A zero must be placed on the control line when there is no examinee ID information provided. The example in Figure A.1 indicates that there are 5 characters of identification for each examinee; in the data lines (beginning on line 5 of the input file in Figure A.1), you will note that examinees are identified by characters “EX001” through “EX005.”

The Keyed Responses

The second line of the file contains the keyed response for each item in the data file. The code in column 1 corresponds to the key for item 1, and so forth. The entire key must be contained on a single line. Thus, for the example in Figure A.1, Item 1 is keyed “1,” Item 2 is keyed “4,” and the last item (Item 30) is keyed “3”. Note also the optional comment on the key line following item 30, which identifies the data on that line. For polytomous (e.g., rating scale) items, the entry on this line is a “+” if item scores in the data portion of the file are not to be reversed and a “-” if they are to be reverse scored.

The Number of Alternatives

The third line of the file must specify the number of alternatives for each item. For dichotomously scored items, this must be equal to the number of choices allowed for the item. In the example in Figure A.1, each of the items has five alternatives. In Iteman 4, the number of alternatives is used in computing the response-alternative statistics.

The Scale Inclusion Codes

The fourth line contains scale inclusion codes, which indicate whether an item should be included in the analysis. Items coded “Y” are included in the analysis; those coded “N” are not. In the example shown in Figure A.1, all of the items will be included in the analysis. The scoring status on the inclusion line can be specified as follows: Y = scored, N = not scored, and P = pretest. All scored items are assumed to belong to a single domain. If scoring for more than one domain is desired, the header should be converted to an Item Control File, which permits domain scoring.
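The four control lines described in this appendix can be read with a few lines of code. The sketch below is illustrative only: it assumes whitespace-separated fields on the control line, so a blank omit code (which the format allows) would need special handling, and it ignores the optional comments.

```python
# Sketch parser for the four control lines of an Iteman 3 header.
# Illustrative only; it does not handle a blank omit code or comments.

def parse_iteman3_header(lines):
    parts = lines[0].split()  # e.g. "30 O N 5"
    info = {
        "n_items": int(parts[0]),
        "omit": parts[1],
        "not_reached": parts[2],
        "id_columns": int(parts[3]),
    }
    info["key"] = lines[1][: info["n_items"]]
    info["alternatives"] = lines[2][: info["n_items"]]
    info["include"] = lines[3][: info["n_items"]]
    return info

header = [
    "30 O N 5",
    "143534243521132435241342351423 KEY",
    "555555555555555555555555555555 NO. ALTERNATIVES",
    "YYYYYYYYYYYYYYYYYYYYYYYYYYYYYY ITEMS TO INCLUDE",
]
h = parse_iteman3_header(header)
print(h["n_items"], h["omit"], h["key"][0], h["key"][-1])  # 30 O 1 3
```

Examinee records then follow, with the first `id_columns` characters as the ID and the next `n_items` characters as responses.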


Appendix B: Troubleshooting

The following sections document the different error messages you might encounter when you use Iteman 4.1.

Please check the data file format specifications

You will receive the error message shown below when the program reaches the end of a line before reading in the item responses for the first examinee. If you receive this error, you should check the following:

1. The number of items in the Item Control File versus the Data Matrix File.
2. The column in the data matrix where item responses begin versus the value in the “Item response begins in column” box.
3. Whether the Data Matrix File includes an Iteman 3 header. You should remove the Iteman 3 header from the Data Matrix File if you are using an Item Control File, because the four lines that make up the Iteman 3 header would otherwise be scored as the first four examinees.

Figure B.1: Data File Format Error

Please check the number of items or number of ID columns specified in the Iteman 3 Header

You will receive the error message shown below when the program reaches the end of a line before reading in the item responses for the first examinee, and you are using the Iteman 3 header rather than a control file. If you receive this error, you should check the following:

1. The number of items specified in the Iteman 3 header versus the Data Matrix File.
2. The column in the data matrix where item responses begin versus the value found on the first line of the Iteman 3 header.


Figure B.2: Iteman 3 Header Error

Please select an input file with an Iteman 3 Header

If you receive the error message shown below, you should check the following:

1. Whether the “Data matrix file includes an Iteman 3 Header” box is checked while you are not using an Iteman 3 header format. If so, make sure the box is not checked before running the program again.
2. The Data Matrix File, to see if the Iteman 3 header is included and formatted properly.

Figure B.3: Iteman 3 Header Missing Error

Valid item responses of 0 were identified.

The Iteman 3 Header does not support item responses that begin at 0. If you receive the error message shown below, check the following:

1. The omit and not reached characters.
2. The number of examinee ID characters. If too few examinee ID characters were specified, you might receive this error because ID characters can often include ‘0’.

You need to create an Item Control File if you wish to analyze item responses that begin at 0. The Item Control File provides additional flexibility and permits mixed-format tests with items that begin at both 0 and 1.


Figure B.4: Iteman 3 Header Responses of 0 Identified Error

At least one valid item response of 0 was identified.

If you receive the error message shown below, check the following:

1. The omit and not reached characters.
2. The item control file. If you have item responses of ‘0’ for an item, you must set the item type in the item control file to “P” for partial credit.
3. The column where item responses begin. If too few examinee ID characters were specified, you might receive this error because ID characters can often include ‘0’.

Figure B.5: Item Control File Responses of 0 Identified Error

At least one unidentified response character was identified and will be scored as incorrect.

If you receive the error message shown below, check the following:

1. The omit and not reached characters.
2. The format of the item responses. Item responses must be numbers from 0 to 9 or letters from “A” to “I”. Letters that occur later in the alphabet than “I” should not be used in the data matrix as item responses; letters after “I” or non-alphanumeric characters such as “#” can cause this error message.
3. The column where item responses begin. If too few examinee ID characters were specified, you might receive this error because ID characters are often not valid item responses (e.g., spaces, letters after “I”).


If you are using different characters for omitted responses within a single data set, you should consider consolidating them for use in Iteman 4.1. Unidentified responses will be scored as incorrect, but no option statistics will be calculated for them.

Figure B.6: Unidentified Response Character Error
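The valid-character rule above is easy to pre-check with a short script before running an analysis. A minimal sketch (not part of Iteman; pass in whatever omit and not-reached characters your own data set uses):

```python
# Characters Iteman accepts as item responses: digits 0-9 and letters A-I.
VALID_RESPONSES = set("0123456789ABCDEFGHI")

def invalid_positions(responses, omit_char, not_reached_char):
    """Return the 1-based positions of characters in a response string
    that are neither valid responses nor the omit / not-reached codes."""
    allowed = VALID_RESPONSES | {omit_char, not_reached_char}
    return [i for i, ch in enumerate(responses, start=1) if ch not in allowed]
```

For example, `invalid_positions("AB#DI", "o", "n")` flags position 3, where the non-alphanumeric character “#” appears.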

Check the data matrix file, examinee XXX did not respond to all XXX items

You will receive this error when Iteman 4.1 reaches the end of the line before all of the item responses are read in for any examinee other than the first. If you receive this error, check the following:

1. Whether one or more examinees have an incomplete identification record.
2. Whether one or more examinees are missing item responses (or did not respond to all of the items on the test and the missing responses were not coded as “not reached”).

Figure B.7: Examinee Did Not Respond to All Items Error

It should be noted that the examinee number reported in the dialog box is only the last examinee in the data matrix with an incomplete record; multiple examinees may have incomplete records.
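Because the dialog reports only the last incomplete record, a short script can help locate all of them at once. A minimal sketch assuming a fixed-column layout (the file name and column values in the usage note are hypothetical):

```python
def find_short_records(lines, response_start_column, num_items):
    """Return the 1-based record numbers of all lines too short to hold
    every item response. response_start_column is 1-based, matching the
    GUI's "Item response begins in column" box."""
    required_length = (response_start_column - 1) + num_items
    return [i for i, line in enumerate(lines, start=1)
            if len(line.rstrip("\n")) < required_length]
```

Running this over the data matrix, e.g. `find_short_records(open("data.txt"), 11, 50)`, lists every examinee with an incomplete record, not just the last one.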


Appendix C: Formulas

Conditional Standard Error of Measurement Formulas

\[
CSEM_{III} = \sqrt{\frac{x(n-x)}{n-1}} \tag{1}
\]

where:
x = number-correct score
n = number of items

\[
CSEM_{IV} = \sqrt{(1-K)\left[CSEM_{III}\right]^{2}} \tag{2}
\]

where:

\[
K = \frac{n(n-1)\,s_{P}^{2}}{\bar{x}(n-\bar{x}) - \left(s_{x}^{2} - n\,s_{P}^{2}\right)} \tag{3}
\]

and:
s_P^2 = variance of the proportion correct
x̄ = mean of the number-correct scores
s_x^2 = variance of the number-correct scores

Livingston Classification Consistency Index

\[
L = \frac{\alpha\,s_{x}^{2} + (\bar{x} - np_{c})^{2}}{s_{x}^{2} + (\bar{x} - np_{c})^{2}} \tag{4}
\]

where:
α = Cronbach’s alpha
p_c = proportion correct at the cutscore

Note: L equals α when the cutscore is at the mean of the number-correct scores.
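Formulas (1) through (4) can be sketched in code. The following is a minimal Python sketch, not Iteman's own implementation; the algebra follows the equations as reconstructed in this appendix, so verify it against your edition of the manual before relying on it:

```python
import math

def csem_iii(x, n):
    """Binomial-error CSEM, formula (1): sqrt(x(n - x) / (n - 1))."""
    return math.sqrt(x * (n - x) / (n - 1))

def k_factor(n, mean_x, var_x, var_p):
    """Correction factor K, formula (3)."""
    return (n * (n - 1) * var_p) / (mean_x * (n - mean_x) - (var_x - n * var_p))

def csem_iv(x, n, mean_x, var_x, var_p):
    """Corrected CSEM, formula (2): sqrt((1 - K) * CSEM_III(x)^2)."""
    return math.sqrt((1 - k_factor(n, mean_x, var_x, var_p)) * csem_iii(x, n) ** 2)

def livingston(alpha, var_x, mean_x, n, p_c):
    """Livingston classification consistency index L, formula (4)."""
    d2 = (mean_x - n * p_c) ** 2
    return (alpha * var_x + d2) / (var_x + d2)
```

Note that when the cutscore n·p_c equals the mean number-correct score, the squared-distance term vanishes and `livingston` returns α exactly, matching the note above.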


Appendix D: Program Defaults File

Default values for the program’s specifications are stored in the file Defaults.options. This configuration file can be opened and edited with a text editor (not a word processor). Figure D.1 shows the contents of this file.

Figure D.1: The Defaults File for Iteman 4.1

The defaults file allows you to change the values for the components of Iteman 4.1 listed below. The lines of the defaults file contain the following information (this information is case sensitive):

Line 1. Run title

Line 2. The following options control the starting values found on the Input Format tab and must be separated by a single space:

a) Number of examinee ID columns

b) Column where examinee IDs begin

c) Column where item responses begin

d) Omit character

e) Not administered character

f) Exclude omits from the option statistics (Y or N)

Line 3. Lower and upper bounds for acceptable P (multiple-choice items)

Line 4. Lower and upper bounds for acceptable item means (rating scale items)

Line 5. Lower and upper bounds for acceptable r (item-total correlation)

Line 6. Linear scaling slope and intercept; standard score scaling mean and standard deviation

The sample defaults file shown in Figure D.1 contains the following thirteen lines:

User Test 1
10 1 11 o n N
0.00 1.00
0.00 9.00
0.00 1.00
1.000 0.000 0.000 1.000
N N N N N
N N N 0.000
N
Y Y N N N 3 3
K LP HP LR HR LM HM
Low
High


Line 7. Compute scaled total score (Y or N); compute scaled domain scores (Y or N); correct for spuriousness (Y or N); perform linear scaling (Y or N); perform standardized scaling (Y or N)

Line 8. Perform classification (Y or N); use total scores for classification (Y or N); use scaled scores for classification (Y or N); numeric cutpoint for classification

Line 9. Dataset includes an Iteman 3 header (Y or N)

Line 10. Produce quantile plots (Y or N); produce quantile tables (Y or N); save matrix (Y or N); include omit codes (Y or N); include not admin codes (Y or N); number of digits of precision; number of score groups

Line 11. The following flag characters must each be separated by a single space:

a) Keyed response does not have the highest positive point-biserial correlation

b) P value is below the lower acceptable boundary

c) P value is above the upper acceptable boundary

d) r-pbis is below the lower acceptable boundary

e) r-pbis is above the upper acceptable boundary

f) Item mean is below the lower acceptable boundary

g) Item mean is above the upper acceptable boundary

Line 12. Low group label

Line 13. High group label

All of the program options, except the flags, can be saved to the defaults file by making changes to the options in the GUI and clicking “Save the current settings to the Defaults File”. You will be notified that the defaults file is missing upon start-up of Iteman 4.1 if you move, rename, or delete the file. If the defaults file is missing, you can easily save a new one by clicking the “Save current settings” button as described above.
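Because the file layout is fixed, individual lines are easy to inspect with a script. A minimal sketch that parses Line 2 (the Input Format options) into named fields; the field names are invented here for illustration and are not part of the Iteman file format:

```python
def parse_input_format(line):
    """Parse Line 2 of Defaults.options: six space-separated values
    controlling the starting values on the Input Format tab.
    The dictionary keys are illustrative names, not Iteman terms."""
    tokens = line.split()
    return {
        "num_id_columns": int(tokens[0]),
        "id_start_column": int(tokens[1]),
        "response_start_column": int(tokens[2]),
        "omit_char": tokens[3],
        "not_administered_char": tokens[4],
        "exclude_omits": tokens[5] == "Y",
    }
```

Applied to the sample line “10 1 11 o n N”, this yields 10 examinee ID columns starting in column 1, item responses starting in column 11, ‘o’ and ‘n’ as the omit and not-administered characters, and omits included in the option statistics.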


Appendix E: License Transfer

Summary

License transferring is a three-step process that takes a license from a licensed program on one computer and gives it to a program already installed in demo mode on another computer. The original demo program (new computer) becomes a licensed program, and the original licensed program (old computer) reverts to a demo. This process can transfer a license between PCs running the same program on different versions of Windows, such as XP and Vista.

The process starts with two computers: one with an unlicensed program (the original demo computer) and one with an already licensed program (the original licensed computer). It begins on the original demo computer, where the program creates a transfer file. This transfer file is taken to the original licensed computer, where the program there puts its license in the transfer file and reverts to a demo. The transfer file, now containing the license, is carried back to the original demo computer, where the program takes the license out of the transfer file and becomes licensed.

This process requires the use of a separate drive, such as:

• An external removable drive, such as a USB flash/thumb drive.

• A blank, formatted floppy disk.

• Other connected or networked drives.

This transfer drive carries the transfer file from the (new) original demo computer to the (old) original licensed computer to get the license from the licensed program, and back to the (new) original demo computer to give the license to the demo program.

Step 1 – Demo/Trial Program

Start with the unlicensed demo program on the original demo computer. Run the program in Administrative mode, logging in as administrator if necessary. Click on the License button (Figure E.1) to bring up a dialog with the transfer license menu in the upper left corner (Figure E.2). Select “Start Transfer” and follow the prompts. Be sure to connect the appropriate drive for use as the transfer drive when prompted, if it isn’t already connected (Figure E.3). Remember the drive letter assigned to this drive.

Figure E.1: License Button

Figure E.2: Transfer License Menu and “Start Transfer” Option


Figure E.3: Final Prompt to Connect Drive or Insert Disk

Once OK is clicked, the drive dialog is displayed (Figure E.4). “Removable (A:)” will always be the floppy drive. Internal hard drives are marked by their drive letter only. USB flash/thumb drives and other externally connected drives will be marked as “Removable”.

Figure E.4: Choose a Drive

Select the drive to carry the transfer file. Once the process is complete, if a USB flash/thumb drive or external hard drive is used, carefully disconnect it. If there is a problem during this step, an error message will be shown. Please note any error codes and report the error to Assessment Systems at [email protected].

Step 2 – Licensed Program

If a USB flash/thumb drive or external hard drive is carrying the transfer file, connect it to the original licensed computer. If a networked hard drive is carrying the transfer file, make sure it can be reached from the original licensed computer. Regardless of which type of drive is used for the transfer, it might have a different drive letter assignment on the original licensed computer than on the original demo computer. Run the program on the original licensed computer in Administrative mode, logging in as administrator if necessary. Click on the License button to bring up the license window, and click on the transfer license menu in the upper left again. Select the “Transfer This License” option (Figure E.5).


Figure E.5: Transfer This License

The program will ask for confirmation, then prompt once again to connect the drive or diskette carrying the transfer file (Figure E.6). If this has not been done already, please do so, and remember which drive letter Windows assigns to it.

Figure E.6: Drive Dialog

Follow the prompts to the drive dialog (Figure E.6), and select the appropriate drive, which might have a different drive letter on the original licensed computer than on the original demo computer. The program will transfer the license to the transfer file and will indicate that it is now in demo/trial mode (Figure E.7).

Figure E.7: Notification of Change in Mode

Carefully disconnect the drive once this step is complete. If there have been any errors, please note them along with any specific codes and report them to Assessment Systems at [email protected].


Step 3 – Demo/Trial Program

Connect the transfer drive to the original demo computer. Run the demo/trial program in Administrative mode, logging in as administrator if necessary, and click on the License button to bring up the license window. Then click on the transfer license menu in the upper left again and select the “Complete Transfer” option (Figure E.8).

Figure E.8: Complete Transfer

Follow the prompts to connect the transfer drive, if this hasn’t already been done, and to select the drive. If the license transfer was successful, a message will appear (Figure E.9).

Figure E.9: Successful Transfer

If there have been any errors, please note them along with any specific codes and report them to Assessment Systems at [email protected].