WIN Assessments Technical Manual
May 2018
© 2019. All rights reserved. winlearning.com
About This Technical Manual
IMPORTANT NOTE | UNTIL FURTHER NOTICE: This WIN Assessments Technical Manual is undergoing substantial revision and therefore may not yet include the most current data / analyses and may be subject to inadvertent errors and omissions. Please consult with WIN before reference or use.

This Technical Manual summarizes the design and lifecycle of the WIN Ready to Work Assessments
(Applied Mathematics, Reading for Information, Locating Information) and the WIN Essential Soft
Skills Assessment.
Chapter I provides general information about WIN Learning and the evolution of its career readiness
curriculum, assessments and certificates / credentials.
Chapter II provides a general overview of the WIN assessment development and quality assurance
process.
Chapter III includes specific information and data about the WIN Ready to Work Assessments
(Applied Mathematics, Reading for Information, Locating Information) including, but not limited to,
the underlying research and standards / learning objectives, current characteristics, development
process over time including field testing and validity and reliability analyses, and scoring.
Chapter IV includes specific information and data about the WIN Essential Soft Skills Assessment
including, but not limited to, the underlying research and standards / learning objectives, current
characteristics, development process over time including field testing and validity and reliability
analyses, and scoring.
Chapter V includes information about use of WIN assessments to support state-branded
foundational career readiness certification / credentialing.
Additional Resources: For information about the online assessment delivery platform, paper-based administration, accessibility and accommodations, security protocols, and reporting tools, refer to the WIN Assessment Administration Guide.
Chapter I | WIN Learning Overview
WIN Learning is a national career readiness solutions provider and a leading publisher of next
generation career readiness curriculum, assessments, and exploration tools. Since its inception in
1996, WIN has designed and implemented more than 2,000 education, workforce development and
economic development projects serving 10 million learners across all 50 states.
WIN began as a career readiness curriculum company, introducing the first version of its WIN Ready
to Work Courseware (originally branded the WIN Career Readiness Courseware 1.0) in workbook
form to support skill development in preparation for career and college readiness assessments
including, but not limited to, the original ACT WorkKeys®, TABE®, COMPASS® and GED®. In 1998,
WIN introduced the first computer-based version of WIN Ready to Work Courseware, and WIN
quickly became a primary source of online, career readiness instruction supporting various
statewide, regional and local K-12 education, adult education, workforce development, community
college, technical college, corrections, juvenile justice and other community-based talent
development initiatives.
The WIN Ready to Work Courseware was, and continues to be, directly aligned to the skill sets of the original ACT WorkKeys® System, teaching what is measured on the original ACT WorkKeys® assessments (including, but not limited to, Applied Mathematics, Reading for Information and Locating Information). The organization of the courseware content is a direct match to the organizational structure (content categories and levels) of the ACT WorkKeys® assessments, the ACT branded National Career Readiness Certificate® and numerous aligned state-branded career readiness certificates / credentials. WIN was the first, and longest continuously participating, instructional provider chosen by ACT® for its WorkKeys® Partners Program in the late 1990s, and for more than a decade WIN championed implementation of solutions around the ACT WorkKeys® System, administering more WorkKeys® assessments and profiling more jobs than any other single entity in the country. This partnership continued until ACT discontinued its partners program in 2010.
WIN continues to actively support implementation of the ACT WorkKeys® assessments and the ACT branded National Career Readiness Certificate® today in several states including Oregon, Indiana, Ohio, West Virginia, North Carolina, and South Carolina. The strength of this alignment is described
further in this manual. The WIN Ready to Work Assessments also currently serve as the primary
qualifying assessments for state-branded career readiness certificates in Florida, Arizona and
Kentucky.
Responding to customer demand, in 2012, WIN introduced a series of proctored capstone
assessments, now branded as the WIN Ready to Work Assessments (Applied Mathematics, Reading
for Information, Locating Information), to measure customer return on their investment in the WIN
Ready to Work Courseware and at the same time provide an opportunity for learners to prove their
individual readiness for career education / training programs, industry certification, apprenticeship
and/or employment. The assessments were developed by WIN in partnership with Measured
Progress (measuredprogress.org), a national nonprofit leader in the standards-based assessment
industry.
Responding again to customer demand for applied, career contextualized soft skills instruction, WIN
introduced the WIN Soft Skill Curriculum in 2012, and in 2015 introduced what is now referred to as
the WIN Essential Soft Skills Assessment (also branded the Situational Judgement Assessment) as a
capstone assessment for its soft skills curriculum. The WIN Essential Soft Skills Assessment was
originally developed by Castle Worldwide (castleworldwide.com), a national high-stakes industry
licensure testing and credentialing company, in partnership with the National Work Readiness
Council (nwrc.org) and the U.S. Chamber of Commerce.
In 1999, WIN collaborated with the state of Kentucky, including the Cabinet for Workforce
Development (CWD), Kentucky Community and Technical College System (KCTCS), Kentucky
Department of Education (KDE) and Empower Kentucky to:
▪ institutionalize a common language and common metrics around foundational workplace skills
among public sector partners and the business community;
▪ develop linkages and pathways from secondary education to adult education, postsecondary
education, apprenticeship, and ultimately employment and life-long learning;
▪ identify and fill workforce skill gaps; and
▪ implement a common assessment tool based upon U.S. Department of Labor SCANS
(Secretary’s Commission on Achieving Necessary Skills) competencies to measure and validate
mastery of skills.
The resulting framework conceptualized by WIN led to the introduction of the Kentucky
Employability Certificate, the nation’s first career readiness certificate and the credentialing model
upon which today’s ACT branded National Career Readiness Certificate® and numerous other
aligned state-branded foundational career readiness certificates / credentials are based. The WIN
Ready to Work Courseware was implemented as the primary diagnostic assessment and training
solution, and ACT WorkKeys® was adopted as the qualifying assessment tool for this pioneering
project.
For nearly two decades, WIN has continued to support the evolution of Kentucky Adult Education
and Kentucky Department of Workforce Investment career readiness initiatives. The WIN Ready to
Work Courseware continues to be a preferred training tool for foundational career readiness literacy,
numeracy and problem solving skill development statewide. The WIN Soft Skills Curriculum was
adopted in 2015 to support a new initiative focusing on essential soft skills development statewide.
And in 2017, the WIN Ready to Work Assessments (Applied Mathematics, Reading for Information,
Locating Information) and the WIN Essential Soft Skills Assessment were selected as the qualifying
assessments for the state-issued career readiness certificates.
The success of the early Kentucky initiative led to WIN engagement in career and college readiness
projects across the country. In the early 1990s, WIN partnered with the Alabama Department of
Education to conduct validation studies for a statewide basic skills certification test for teachers. WIN
conducted job profiling sessions in all 128 school districts in Alabama and developed and deployed
a survey to more than 51,000 teachers ranking the task list for relative time spent and job criticality.
The return rate was almost 90 percent, providing the foundational research for the validation of the
Alabama Teacher Test. A customized assessment was built based upon this WIN research, and the
program remained in place for more than 10 years. WIN also developed and implemented an
industry-specific initiative – Go Build Alabama – to educate and inspire Alabama jobseekers to
consider skilled trade careers. The project featured a series of WIN “test your metal” online
diagnostic career readiness assessments supported by skills gap training, an interest inventory, and
videos and other searchable information about in-demand careers including projected openings,
wages, and career specific training and apprenticeship resources by region.
In 2008, in partnership with the Oregon Department of Community Colleges and Workforce Development, WIN introduced a first-generation online assessment tool to benchmark the foundational career readiness skills of unemployment insurance claimants in five languages and provide targeted online instruction to remediate identified skills gaps.
Other current WIN projects include:
Florida Ready to Work – WIN has been directly engaged for 11 continuous years in all aspects of the
project design and statewide implementation of Florida Ready to Work, a foundational career
readiness training, assessment and credentialing program authorized by Florida Statute. The
program utilizes the WIN Ready to Work Courseware, and the WIN Ready to Work Assessments
(Applied Mathematics, Reading for Information, Locating Information) are the qualifying assessments
for the Florida Ready to Work Credential. Project stakeholders include on average 300 school
districts / schools, adult education programs, the state workforce development system career
centers, community colleges, technical centers, juvenile justice and corrections programs, and other
community-based partners. WIN also provides full-service project management for Florida Ready to
Work including, but not limited to, employer engagement, implementation partner planning, training
and coaching, customer services, data management, and reporting.
Florida Skills Assessment – WIN designed and launched the all-new custom Florida Skills
Assessment system in 2011, including integration with the state reemployment assistance system, in
accordance with Florida Statute. Since inception, the system has benchmarked the career readiness
skills of more than 1.7 million reemployment assistance claimants using a custom application of the
WIN Career Readiness System, the WIN Ready to Work Courseware, WIN Soft Skills Curriculum, WIN
MyStrategic Compass, and non-proctored versions of the WIN Ready to Work Assessments (Applied
Mathematics, Reading for Information, Locating Information) in English, Spanish and Creole. WIN also
provides user training, customer services, data management, and reporting for the project.
New York Career Development and Occupational Studies (CDOS) Credential – The WIN Essential
Skills Assessment was adopted in spring 2017 by the New York Board of Regents as a universal
foundational skills assessment for high school graduation that indicates student achievement of the
knowledge, skills and abilities necessary for entry-level employment across multiple industries and
occupations. The assessment is also utilized by the New York Department of Labor and other education
and workforce providers statewide.
Arizona At Work – The Arizona Office of Economic Opportunity partnered with WIN in 2017 as the
sole provider of career readiness training and assessments supporting the state-branded Arizona
Career Readiness Credential. The program is being implemented statewide in collaboration with high
schools, adult education, and the state workforce development system career centers. To earn the
credential, jobseekers must achieve a Level 3 score on each of the component WIN Ready to Work
Assessments (Applied Mathematics, Reading for Information, Locating Information) and pass the WIN
Essential Soft Skills Assessment. WIN also provides employer engagement, implementation partner
planning, training and coaching, customer services, data management, and reporting for the project.
WIN was founded and continues to be led by CEO and President Teresa Chasteen, Ph.D. Dr.
Chasteen has a doctorate degree in curriculum / instruction, measurement and evaluation, a
master’s degree in education and 16 years of higher education teaching and administrative
experience. WIN is incorporated in Tennessee as Worldwide Interactive Network, Inc. The corporate office is located just west of Knoxville in Kingston, Tennessee, and WIN has regional offices in Florida, Kentucky, North Carolina, West Virginia, Pennsylvania and Texas.
Chapter II | WIN Assessment Development and Quality Assurance
The WIN assessments are uniquely career contextualized to simulate on-the-job experiences and
application of foundational workplace skills.
WIN assessments are designed in accordance with the nationally accepted Standards for
Educational and Psychological Testing developed by the American Educational Research Association,
the American Psychological Association, and the National Council on Measurement in Education. To
support use of the assessments as an indicator of foundational skill readiness for career education /
training program placement, industry certification, apprenticeship and/or employment, the
assessments are further designed in accordance with the Uniform Guidelines for Employee Selection
Procedures adopted by the U.S. Equal Employment Opportunity Commission, the Civil Service
Commission, the U.S. Department of Labor, and the U.S. Department of Justice.
Key considerations in all phases of the WIN assessment development process include:
▪ Alignment of assessment and standards, which affects both validity and interpretability;
▪ Integration of assessment with curriculum;
▪ Quality of assessment including validity, the degree to which the assessment is aligned to the
specific standards or instructional objectives, and reliability, the degree to which the
assessment produces consistent results; and
▪ Clear definition of the purpose of the assessment, including high versus low stakes,
proctored versus self-administered, diagnostic curriculum placement test, and/or
summative curriculum posttest.
Assessment quality is measured in terms of reliability and validity. In the standards-referenced world,
validity is determined by how well a test corresponds to standards / learning objectives. Reliability is
the degree to which a test works in a predictable way. In principle, if you give a test to the same
examinee twice, a perfectly reliable test would always produce the same score. Thus, a valid test
must also have proven reliability. A comprehensive item testing and analysis process is used to
ensure WIN assessments achieve and maintain the quality necessary for national, state and local
implementation.
WIN adheres to the principles of Universal Design, a process to ensure accessible environments for
all people through equitable use, simple and intuitive design, effective communication, tolerance for
variability and minimal fatigue (Mace, 1998; Knecht, 2004). The application of Universal Design is
defended by research that links these principles to higher performance for all students and results in
assessments that are inclusive and amenable to accommodations because the items are accessible,
free of bias, and have maximum readability and legibility (Pisha & Coyne, 2001; Thompson,
Johnstone, & Thurlow, 2002; Johnstone, 2003).
All phases of the WIN assessment development process are driven by the principles of Universal
Design including, but not limited to:
▪ ensure the assessments themselves are not obstacles to improved learning;
▪ provide valid inferences about the performance of all examinees; and
▪ provide each examinee with a comparable opportunity to demonstrate understanding of the
content tested.
Universal Design further conforms with the 2004 reauthorization of the Individuals with Disabilities
Education Act (IDEA) (Public Law No. 108-446) and the United States Assistive Technology Act of
2004 (Public Law No. 108-364-ATA 2004).
WIN assessment items are carefully crafted to meet industry standards that ensure all students have
a fair opportunity to demonstrate their knowledge and skills (Frary, 1995; Haladyna, 2004; Haladyna &
Downing, 1998a, b; Kehoe, 1995). Each question typically includes a stem and four options. The
following criteria are applied to develop the item stems:
▪ Stems are in the form of a question or an incomplete sentence. If an incomplete sentence is
used, the options are always placed at the end of the statement.
▪ Stems provide clear direction; what is being asked is generally clear even without looking at
the options.
▪ Stems are concise and do not include irrelevant material.
▪ Stems are worded positively; use of negative phrasing, such as not or except, is minimized.
The following criteria are applied to develop the response options for each item:
▪ Options are as clear and concise as possible.
▪ Options are grammatically parallel to the stem; i.e., if the stem is a question, all the options
are answers. If the stem is an incomplete sentence, all the options complete the sentence.
▪ Options are uniform in length and grammatical structure. When uniformity of length is not
feasible, the options are written as pairs.
▪ Options are appropriate and logical answers to what is being asked.
▪ Options are plausible.
▪ Options do not include verbal associations with the stem that reveal the correct answer. Key
words from the stem do not appear in the options or are distributed equally among the
options.
▪ Options do not include only one positive or one negative choice.
▪ Options do not contain absolute qualifiers (e.g., “never” or “always”).
Following development, each item undergoes an extensive review using the following criteria:
▪ Content is accurately aligned to the assessed objectives.
▪ There is one correct or best answer.
▪ Graphics are clear, accurate and appropriate.
▪ Language is clear.
▪ Options are parallel.
▪ There are no bias or sensitivity issues.
The context for items and content of passages and supporting graphics are also carefully chosen
and reviewed to avoid potential bias and sensitivity issues. An item is biased if equally proficient
individuals from different groups do not have equal probabilities of answering the item correctly. Bias
is not the same as stereotyping, although both are undesirable properties of a test item. Stereotyping
is the consistent representation of a particular group in a certain light. Stereotyping does not
generally lead to differential performance. In contrast, bias is the presence of some characteristic of
an item that results in differential performance for two individuals of the same ability but from
different groups, such as Asian/Pacific Island Americans, Black Americans, Hispanic Americans,
Native Americans/American Indians, women and individuals with disabilities. Thus, the purpose of a
bias and sensitivity review is to assure that the test items contain no language, symbols, or content
considered potentially offensive or inappropriate for major subgroups of the test-taking population.
The following checklist is used for determining that no stereotyping is present:
▪ Does the test item contain material that is inflammatory, controversial or emotionally charged
for particular subgroups?
▪ Does the test item contain language or material that is demeaning or offensive to particular
subgroups?
▪ Does the test item depict members of particular subgroups as having stereotype
characteristics?
▪ Does the test item contain offensive or biased artwork?
The following checklist is used to determine if test items contain bias based on gender, ethnicity,
religion or class:
▪ Does the test item use situations or scenarios that are different or unfamiliar to some
subgroups?
▪ Does the test item use information that could make the question easier or harder for certain
subgroups?
▪ Does the test item use some characteristic or feature that could lead certain subgroups to
answer the question correctly or incorrectly for the wrong reason?
▪ Does the test item use words that have different or unfamiliar meaning for different
subgroups?
▪ Does the test item use choices in a multiple-choice item that are especially attractive to
members of certain subgroups for cultural reasons?
▪ Does the test item use a correct or best answer that changes for different subgroups?
After the stereotyping and bias reviews are conducted, items are reviewed to ensure that they do not
contain potential emotionally charged topics including, but not limited to:
▪ Child abuse / child neglect
▪ Death, including suicide
▪ Discrimination, including racism, sexism, ageism
▪ Family issues, including divorce
▪ Drugs, alcohol, tobacco
▪ Homelessness
▪ Politics
▪ Questioning parental authority
▪ Sex/sexuality, including sexual preference or orientation, gender roles, abortion, birth control,
pregnancy
▪ Use of animals, animal rights
▪ Violence and crime, including gun control
▪ Weight
▪ Sensitivity to different cultures, religions, ethnic and socio-economic groups, and disabilities
Live item statistics are continually monitored to ensure optimum performance of the assessment
instruments as a whole and of individual items. New items are initially spiraled into the live item pool
as non-scored items until sufficient data is collected to ensure validity and reliability.
Chapter III | WIN Ready to Work Assessments
The WIN Ready to Work Assessments (Applied Mathematics, Reading for Information, Locating
Information) were developed from 2010 to 2012 and first published in 2012 in direct response to a
request from the state of Florida for a more cost-effective and flexible alternative to the career
readiness assessment and credentialing system offered at the time by ACT WorkKeys®.
To ensure the highest level of quality, WIN partnered with Measured Progress
(measuredprogress.org), a national nonprofit leader in the standards based assessment industry, to
develop the initial assessments, independently verify the initial field test data and validity and
reliability analyses, and verify the initial scoring methodology. Measured Progress also supported
online and paper-based delivery and scoring of the assessments until 2016 when assessment
delivery and scoring was transitioned to a new state-of-the-art online platform maintained by WIN.
Continuous item development and quality assurance monitoring is managed by a WIN team of highly
experienced instructional design, assessment development, psychometric and technology
professionals. The development process is informed by industry research and subject matter experts
including educators, workforce developers and employers.
In fall 2017, WIN partnered with Scantron (scantron.com), an assessment and psychometric services
company with a long-standing reputation for delivery of large-scale, client-focused testing solutions,
to support ongoing psychometric analyses, including validity and reliability, for the WIN Ready to
Work Assessments.
General Characteristics
The WIN Ready to Work Assessments are based on the original ACT WorkKeys® Targets for
Instruction and are supported by 20 years of employer focused research including, but not limited to,
the U.S. Department of Labor Secretary’s Commission on Achieving Necessary Skills (SCANS); U.S.
Department of Labor Building Blocks Competency Model; U.S. Department of Education Employability
Skills Framework; and National Network of Business and Industry Associations Common Employability
Skills.
There are three component WIN Ready to Work Assessments – Applied Mathematics, Reading for
Information and Locating Information. Together these component assessments measure the ability
to apply the foundational workplace numeracy and literacy, communication, critical thinking, and
problem-solving skills that employers nationwide commonly define as essential to gain and maintain
employment.
Applied Mathematics Assessment – Measures foundational workplace mathematical reasoning and
problem-solving skills.
Reading for Information Assessment – Measures comprehension and critical thinking using written
workplace text including emails, websites, letters, contracts, signs, notices, policies, and regulations.
Locating Information Assessment – Measures comprehension and application of workplace
graphics such as charts, graphs, tables, forms, flowcharts, diagrams, maps, and instrument gauges.
The assessments are aligned to, and examinee skill development is supported by, the original WIN
Ready to Work Courseware (formerly branded the WIN Career Readiness Courseware 1.0) learning
objectives. The assessments are further aligned and comparable to the original ACT WorkKeys®
branded assessments that were ultimately discontinued by ACT® in 2017.
Correlated to grade levels, the assessments may be used as a benchmarking tool to satisfy federal
Workforce Innovation and Opportunity Act (WIOA) regulations requiring prioritization of workforce
development services for jobseekers who are “basic skills deficient.”
The WIN Ready to Work Assessments are criterion-referenced against an absolute standard or
“criterion” for performance. Thus, the assessments measure mastery of specific learning objectives
rather than comparing an individual’s scores to the performance of other examinees. The items are
career contextualized and are designed to measure the ability to apply the targeted career readiness
skills and attributes, not simply demonstrate knowledge of the related concepts.
The number of items for each component assessment is as follows:
● Applied Mathematics 34 items
● Reading for Information 34 items
● Locating Information 21 items
The items are multiple choice with one stem and four options.
The assessments are proctored. Standard administration time is 55 minutes for each assessment.
Accommodations, including extension of time, are permissible based on commonly accepted
policies and procedures.
The assessments are delivered online through the WIN Career Readiness System and are currently available in English only. A paper-based version of the assessments is in development and
scheduled for release in spring 2018. WIN has partnered with Scantron (scantron.com) to support
printing, distribution and scoring of the paper-based version of the assessments. Large print and
Braille versions of the assessments are available upon request.
As of fall 2017, more than 235,000 WIN Ready to Work Assessments have been administered
administered by more than 500 high schools, adult education programs, community colleges,
technical centers, state workforce development system career centers, juvenile justice and
corrections providers, community-based organizations, and employers.
Standards
The Applied Mathematics Assessment requires examinees to solve math problems in career
context, as opposed to straight computation, with a few exceptions. Items developed to assess
division of negative numbers are not contextualized.
Examinees are allowed to consult a math formula sheet including measurement equivalents (e.g., 1
foot = 12 inches), measurement conversions (e.g., 1 mile = 1.61 kilometers), and formulas (e.g., area =
length × width) and use a calculator.
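As a hypothetical illustration only (not an actual test item), a Level 5 style task aligned to the perimeter / area objective might describe a rectangular storage room measuring 12 feet by 9 feet; using the formula sheet entry area = length × width, the examinee would compute 12 × 9 = 108 square feet.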
The assessment targets 34 primary learning objectives as follows:
Table 1: Applied Mathematics | Learning Objectives
Level 3
1. Solve problems that require a single type of mathematical operation (addition, subtraction, multiplication, and division) using whole numbers
2. Add or subtract negative numbers
3. Change numbers from one form to another using whole numbers, fractions, decimals, or percentages
4. Convert simple money and time units (e.g., hours to minutes)
Level 4
5. Solve problems that require one or two operations
6. Multiply negative numbers
7. Calculate averages, simple ratios, simple proportions, or rates using whole numbers or decimals
8. Add up to three fractions that share a common denominator
9. Add commonly known fractions, decimals, or percentages (e.g., ½, .75, 25%)
10. Multiply a mixed number by a whole number or decimal
11. Put information in the right order before performing calculations
Level 5
12. Decide what information, calculations, or unit conversions to use to solve the problem
13. Look up a formula and perform a single-step conversion within or between systems of measurement
14. Calculate using mixed units (e.g., 3.5 hours and 4 hours 30 minutes)
15. Divide negative numbers
16. Find the best deals using one- and two-step calculations and then compare results
17. Calculate the perimeters and areas of basic shapes (circles and rectangles)
18. Calculate percent discounts or markups
Level 6
19. Use fractions, negative numbers, ratios, percentages, or mixed numbers
20. Rearrange a formula before solving a problem
21. Use two formulas to change from one unit to another within the same system of measurement
22. Use two formulas to change from one unit in one system of measurement to a unit in another system of measurement
23. Find mistakes in questions that belong at Levels 3, 4, and 5
24. Find the best deal and use the result for another calculation
25. Find areas of basic shapes when it may be necessary to rearrange the formula, convert units of measurement in the calculation, or use the result in further calculation
26. Find the volume of rectangular solids
27. Calculate multiple rates
Level 7
28. Solve problems that include nonlinear functions and/or that involve more than one unknown
29. Find mistakes in Level 6 questions
30. Convert between systems of measurement that involve fractions, mixed numbers, decimals, and/or percentages
31. Calculate multiple areas and volumes of spheres, cylinders, or cones
32. Set up and manipulate complex ratios or proportions
33. Find the best deal when there are several choices
34. Apply basic statistical concepts (calculate percent change)
Development of the Reading for Information Assessment begins with the creation of reading
passages designed to address the learning objectives for each level. Item passages are created or
adapted based on readability, opportunities to assess learning objectives at each level, topic and the
absence of bias or sensitivity issues.
Passages are generally adapted from work-related literature in the public domain to parallel the type
of reading generally required in the workplace. Passages are informational text, rather than literary or
persuasive, and include memoranda from employers to employees, instructions, or articles. The
majority of the passages are “how to” passages, instructing examinees on how to accomplish some
objective or use a product or system.
Three passages with similar characteristics are included for each level. Within a level, passages are
intentionally similar in length, readability and complexity.
To ensure criterion related validity, sample items and passages are aligned to both the WIN Ready to
Work Courseware and the original ACT WorkKeys® assessments. The items are carefully reviewed
and analyzed to determine and document the expectations for length, complexity, content and
readability. Sample items are further examined to clarify the intended interpretation of the learning
objectives and gauge the difficulty of items for each level.
Regarding readability, experts recommend the use of both subjective measures and readability or
leveling formulas to estimate the readability level of text (Gunning, 2003). To judge the readability of
passages, the length of text, word usage, sentence structure, and text density are considered. The
Dale-Chall readability formula is also applied. First developed in 1948 and revised in 1995 (Chall and
Dale, 1995), the Dale-Chall formula is considered the most accurate formula for texts above the
fourth-grade level (Koenke, 1971). The formula is unique because it calculates the grade level of a
text based on sentence length and the number of “hard words” that do not appear on an established
list of common words familiar to most fourth-grade students.
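For illustration, the following is a minimal sketch of the commonly cited Dale-Chall calculation, written in Python. The tokenization, word list and function name are assumptions made for this example; it is not the tooling WIN or Measured Progress used to level the passages.

def dale_chall_score(text, familiar_words):
    # Split into rough sentences and words; real tokenization would be more careful.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = text.split()
    # "Hard" words are those not on the established list of words familiar to most
    # fourth-grade students (supplied here as the set familiar_words).
    hard = [w for w in words if w.lower().strip(".,;:!?") not in familiar_words]
    pct_hard = 100.0 * len(hard) / len(words)
    avg_sentence_length = len(words) / len(sentences)
    score = 0.1579 * pct_hard + 0.0496 * avg_sentence_length
    if pct_hard > 5.0:
        score += 3.6365  # adjustment applied when more than 5% of the words are "hard"
    return score  # raw score maps to a grade-level band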
Reading passages are created or adapted to maximize opportunities to assess the learning
objectives at each level. For example, the learning objectives for level three include the following
expectations:
● Choose the correct meaning of a word that is clearly defined in the reading;
● Choose the correct meaning of common, everyday workplace words; and
● Choose when to perform each step in a short series of steps.
Based on these objectives, a level three passage must include the definition of a word that is
presumably unknown to test takers, one or more common workplace terms, and a series of steps or
directions.
The assessment targets 24 primary learning objectives as follows:
Table 2: Reading for Information | Learning Objectives
Level 3
1. Choose the correct meaning of common, everyday workplace words
2. Apply instructions to a situation that is the same as the one in the reading materials
3. Choose the correct meaning of a word that is clearly defined in the reading
4. Identify main idea and clearly stated details
Level 4
5. Choose when to perform each step in a short series of steps
6. Identify important details that may not be clearly stated
7. Choose what to do when changing conditions call for a different action (follow directions that contain "if-then" statements)
8. Use the reading material to figure out the meaning of words that are not defined
Level 5
9. Identify the paraphrased definition of a technical term or jargon that is defined in the document
10. Figure out the meaning of a word based on how the word is used
11. Identify the correct meaning of an acronym that is defined in the document
12. Apply technical terms and jargon and relate them to stated situations
13. Apply straightforward instructions to a new situation that is similar to the one described in the material
14. Apply complex instructions that include conditionals to situations described in the materials
Level 6
15. Identify implied details
16. Use technical terms and jargon in new situations
17. Figure out the less common meaning of a word based on context
18. Apply complicated instructions to new situations
19. Apply general principles from the materials to similar and new situations
20. Figure out the principles behind policies, rules, and procedures
21. Explain the rationale behind a procedure, policy, or communication
Level 7
22. Figure out the meaning of difficult, uncommon words based on how they are used
23. Figure out the meaning of jargon or technical terms based on how they are used
24. Figure out the general principles behind policies and apply them to situations that are quite different from any described in the materials
The Locating Information Assessment requires examinees to reference common workplace
graphics including, but not limited to, charts, graphs, spreadsheets, signs, and instrument gauges,
and then answer questions based on the graphics.
Graphics are generally adapted from work-related literature in the public domain to parallel the type
of graphics generally required in the workplace.
Graphics with varying characteristics are included for each level. Within a level, graphics are
intentionally similar in complexity.
To ensure criterion related validity, sample items are aligned to both the WIN Ready to Work
Courseware and the original ACT WorkKeys® assessments. The items are carefully reviewed and
analyzed to determine and document the expectations for complexity, content and readability.
Sample items are further examined to clarify the intended interpretation of the learning objectives
and gauge the difficulty of items for each level.
The assessment targets 11 primary learning objectives:
Table 3: Locating Information | Learning Objectives
Level 3
1. Fill in one or two pieces of information that are missing from a graphic
2. Find one or two pieces of information in a graphic
Level 4
3. Understand how graphics are related to each other
4. Find several pieces of information in one or two graphics
5. Summarize information from one or two straightforward graphics
6. Identify trends in one or two straightforward graphics
7. Compare information and trends shown in one or two straightforward graphics
Levels 5 & 6
8. Identify trends shown in one or more detailed or complicated graphics
9. Compare information and trends from one or more complicated graphics
10. Summarize information from one or more detailed graphics
11. Sort through distracting information
Field Testing
Initial field testing of the first version of the WIN Ready to Work Assessments (Applied Mathematics,
Reading for Information, Locating Information) was conducted in 2011.
The field testing was conducted at the content area level (Applied Mathematics, Reading for
Information, Locating Information) with items for each content area placed on the field test forms
sequentially, by level.
Examinees were instructed to answer as many questions as possible. The linear forms were administered online, with one of the forms randomly assigned to each examinee within a given site.
The number of forms and the number of items varied by content/skill area. The details for each skill
area follow. The field test item-to-form mapping document is provided in the Appendix.
For Applied Mathematics, items were distributed across three main forms A, B, and C (with 30 items
on each form), and three equating forms a, b, and c (with 8, 7, and 7 items, respectively). The main
forms were constructed to ensure that learning objectives were spread as evenly as possible across the forms,
with at least one item for each learning objective in each form. Forms were also constructed with the
goal of minimizing testing fatigue, and forms were reviewed to ensure that keys (correct answer
choices) were balanced across forms. Each examinee took one main form and one equating form,
yielding nine combined forms for administration (Aa, Ab, Ac, Ba, Bb, Bc, Ca, Cb, Cc).
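The sketch below illustrates, in Python, how the main and equating forms could be randomly combined for each examinee; the form labels mirror the description above, but the assignment logic is only an illustrative assumption about the delivery mechanics.

import random

MAIN_FORMS = ["A", "B", "C"]        # 30 items each
EQUATING_FORMS = ["a", "b", "c"]    # 8, 7, and 7 items, respectively

def assign_combined_form(rng):
    # One main form plus one equating form yields one of the nine combined forms.
    return rng.choice(MAIN_FORMS) + rng.choice(EQUATING_FORMS)

rng = random.Random()
print(assign_combined_form(rng))    # e.g., "Bc"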
For Reading for Information, there are numerous passages with associated items. Since all of the
items associated with a passage should be field tested together, four forms A, B, C and D were used,
with 33 items per form. As with the other content areas, the forms were constructed to ensure that
learning objectives were spread evenly across the forms, with at least one item for each learning
objective in each form. Forms were also constructed with the goal of minimizing testing fatigue, and
forms were reviewed to ensure that keys (correct answer choices) were balanced across forms. For
reading, there were no “equating forms” per se. Instead, equating was accomplished by spiraling so
that each passage and accompanying items appeared on two of the forms. Thus, each passage and
accompanying items were tested on at least two of the field test groups.
For Locating Information, items were distributed across two main forms A and B (with 16 items on
each form) and two equating forms, a and b (with 12 items on each form). The main forms for locating
information were constructed to ensure that learning objectives were spread evenly across the
forms, with at least one item for each learning objective in each form. Forms were also constructed
with the goal of minimizing testing fatigue, and forms were reviewed to ensure that keys (correct
answer choices) were balanced across forms. Each examinee took one main form and one equating
form, yielding four combined forms (Aa, Ab, Ba, Bb).
All examinees were presented with two online assessment batteries assigned in random order.
Battery A included the WIN Ready to Work Courseware (formerly branded the WIN Career
Readiness Courseware 1.0) placement tests and posttests, and Battery B was the field test form. All
examinees took the placement test. Based upon the highest skill level achieved on the placement test, each examinee then took the corresponding posttest; examinees who passed a posttest advanced to the next level's posttest, continuing until they failed a level, and the highest level passed was used for reporting. To ensure counter-balancing, some examinees received the field test form first, while others received the placement test/posttest combination first. This process yields higher classification consistency and greater reliability.
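A minimal Python sketch of the placement-test / posttest routing just described follows; the level list and the pass/fail callable are stand-ins, not the WIN platform's actual interfaces.

LEVELS = [3, 4, 5, 6, 7]

def highest_posttest_level_passed(placement_level, passed_posttest):
    # passed_posttest(level) stands in for the platform's pass/fail result at a level.
    highest_passed = None
    for level in LEVELS:
        if level < placement_level:
            continue                 # begin at the level indicated by the placement test
        if passed_posttest(level):
            highest_passed = level   # passed: move on to the next level's posttest
        else:
            break                    # stop at the level of failure
    return highest_passed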
The field testing was structured to produce at least 200 initial responses per item to ensure a sound
psychometric analysis on which the resulting fixed-form assessments were based.
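As a rough illustration of the adequacy of that sample size (an interpretation added here, not a statement from the original field test plan): for an item with a true p-value of 0.50, the standard error of the observed p-value with 200 responses is sqrt(0.50 × 0.50 / 200) ≈ 0.035, so the classical item statistics described below can be estimated with reasonable precision.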
The equating design was a common-person equating design, rather than a common-item design.
Because of this, the field test samples were selected to be representative of the total population of
interest. For example, if 10 percent of the population is high school students, then approximately 10
percent of the field test sample should be high school students. Thus, the following information was
collected about the examinees who participated in the field testing:
▪ Broad group of examinees (high school, adult learner, etc.);
▪ Gender;
▪ Ethnicity; and
▪ Academic achievement (credentials earned).
Because the WIN Ready to Work Assessments are used across multiple populations and
demographic groups, every effort was made to solicit a broad population for participation in the field
test. The target distribution across the field test population was:
▪ Population:
o High school age students = 56%
o Adult jobseekers / postsecondary education students = 42%
o Juvenile justice students = 2%
▪ Race:
o Caucasian = 50%
o Minority = 50%
▪ Gender:
o Male = 53%
o Female = 47%
Field test participants resided in Alabama, Florida, Georgia, Tennessee, or West Virginia, with the
majority (97%) in Tennessee and Florida. Table 4 shows the number and percent of participants from
each state by content area. All examinees who participated in the field test for Applied Mathematics
also responded to field test items for Locating Information, thus the distributions are the same for
these content areas.
Table 4: Geographic Location
State AM RI LI
Frequency percent frequency percent frequency percent
Alabama 4 0.5 3 0.5 4 0.5
Florida 219 25.8 186 30.0 219 25.8
Georgia 7 0.8 6 1.0 7 0.8
Tennessee 608 71.5 416 67.1 608 71.5
West Virginia 12 1.4 9 1.4 12 1.4
TOTAL 850 100.0 620 100.0 850 100.0
AM = Applied Mathematics RI = Reading for Information LI = Locating Information
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Slightly more males than females participated in the field test. Table 5 shows the gender distribution
of the field test population by content area.
Table 5: Gender
Gender AM RI LI
frequency percent frequency percent frequency percent
Female 338 39.8 256 41.3 338 39.8
Male 497 58.5 355 57.3 497 58.5
I prefer not to respond 15 1.8 9 1.5 15 1.8
TOTAL 850 100.0 620 100.0 850 100.0
AM = Applied Mathematics RI = Reading for Information LI = Locating Information
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering Ph.D.
Slightly more than half of the field test population identified themselves as Caucasian, with the
remainder distributed across a variety of minority groups. Table 6 shows the distribution of the field
test population across ethnic groups by content area.
Table 6: Ethnicity
Ethnic Group AM RI LI
Frequency percent frequency percent frequency percent
African-American 249 29.3 172 27.7 249 29.3
American Indian/Alaskan Native 5 0.6 4 0.6 5 0.6
Asian-American/Pacific Islander 12 1.4 6 1.0 12 1.4
Caucasian/White, Non-Hispanic 444 52.2 335 54.0 444 52.2
Cuban 9 1.1 7 1.1 9 1.1
Mexican 3 0.4 3 0.5 3 0.4
Mexican-American 8 0.9 7 1.1 8 0.9
Non-Hispanic 3 0.4 1 0.2 3 0.4
Other 41 4.8 28 4.5 41 4.8
Other Hispanic 16 1.9 13 2.1 16 1.9
Puerto Rican 21 2.5 20 3.2 21 2.5
I prefer not to respond 39 4.6 24 3.9 39 4.6
TOTAL 850 100.0 620 100.0 850 100.0
AM = Applied Mathematics RI = Reading for Information LI = Locating Information
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
The field test population included adult learners / jobseekers, high school students, and students in
the juvenile justice system. Table 7 shows the distribution of the population across these categories
/ levels by content area.
Table 7: Distribution by Educational Program
Program AM RI LI
frequency percent frequency percent frequency percent
Adult Learners / Jobseekers 619 72.8 425 68.5 619 72.8
Juvenile Justice 64 7.5 49 7.9 64 7.5
High School 167 19.6 146 23.5 167 19.6
TOTAL 850 100.0 620 100.0 850 100.0
AM = Applied Mathematics RI = Reading for Information LI = Locating Information
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
The majority of the field test population reported having attained a high school diploma or GED.
Table 8 shows the distribution of the field test population based on the highest degree attained by
content area.
Table 8: Highest Degree Attained
Degree AM RI LI
frequency percent frequency percent frequency percent
None 98 11.5 73 11.8 98 11.5
Elementary/Middle School 90 10.6 78 12.6 90 10.6
High School 397 46.7 299 48.2 397 46.7
GED 89 10.5 54 8.7 89 10.5
Associate 47 5.5 30 4.8 47 5.5
Trade/Proprietary School Certification 73 8.6 51 8.2 73 8.6
Bachelors 13 1.5 8 1.3 13 1.5
Masters 5 0.6 3 0.5 5 0.6
Doctorate 2 0.2 2 0.3 2 0.2
I prefer not to respond 36 4.2 22 3.5 36 4.2
TOTAL 850 100.0 620 100.0 850 100.0
AM = Applied Mathematics RI = Reading for Information LI = Locating Information
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Field Test Data Analyses
After field testing, items were analyzed. The item analysis from the field test data focused on three
primary measures: item difficulty, item discrimination, and distractor analysis.
The classical item difficulty estimate is the p-value, or the proportion of examinees who answer the
item correctly. Most achievement test developers prefer the item difficulties to range from 0.20 to
0.90. Items answered correctly by fewer than 20 percent of the examinees might be too difficult to
provide accurate information. Likewise, items answered correctly by more than 90 percent of the
examinees might be too easy to provide meaningful information.
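As a simple illustration of this screen, the Python sketch below computes classical p-values from a scored response matrix and flags items outside the 0.20 to 0.90 range. The data and variable names are hypothetical; this is not WIN's analysis code.

import numpy as np

# Rows = examinees, columns = items; 1 = correct, 0 = incorrect (fabricated data).
responses = np.random.binomial(1, 0.6, size=(200, 34))

p_values = responses.mean(axis=0)          # proportion answering each item correctly
too_hard = np.where(p_values < 0.20)[0]    # may be too difficult to provide information
too_easy = np.where(p_values > 0.90)[0]    # may be too easy to provide information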
For Applied Mathematics, 107 items were field tested. The item difficulty for each item is presented
in the Appendix. Six math items exhibited low item difficulties, less than 0.20. These items may be too
difficult to provide information about examinees and were excluded from the final forms.
For Reading for Information, 99 items were field tested. The item difficulty for each item is
presented in the Appendix. All item difficulties are in the acceptable range. No items were excluded
from the forms based on difficulty.
For Locating Information, 53 items were field tested. The item difficulty for each item is presented in the Appendix. One item in this content area (L.6.7.24.11) exhibited low item difficulty and was excluded
from the final forms.
Item discrimination provides information about how well the item differentiates between examinees
of high ability and low ability. The most common index of discrimination is the point-biserial
correlation.
Typically, a point biserial of 0.30 or higher is considered acceptable. Items with point biserials in the range of 0.10 to 0.30 are considered marginal but can be retained if they increase the validity of the test. Items with point biserials less than 0.10 are considered unacceptable and were removed from the item pool. Using these criteria, 73
Applied Mathematics, 12 Reading for Information, and 6 Locating Information items were classified
as marginal or unacceptable. The item discrimination for each field test item is presented in the
Appendix. Items flagged as problematic are indicated with an asterisk.
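The sketch below illustrates one common way to compute the point-biserial index, using the corrected item-total correlation (each item correlated with the total score on the remaining items). Whether WIN used the corrected or uncorrected total is not stated in this manual, so that detail, along with the data, is an assumption of the example.

import numpy as np

def point_biserials(responses):
    # responses: rows = examinees, columns = items; 1 = correct, 0 = incorrect.
    totals = responses.sum(axis=1)
    pbis = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = totals - responses[:, j]                      # total score excluding item j
        pbis[j] = np.corrcoef(responses[:, j], rest)[0, 1]   # Pearson r with the 0/1 item score
    return pbis

# Interpretation per the criteria above: < 0.10 unacceptable, 0.10 to 0.30 marginal, >= 0.30 acceptable.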
For multiple-choice items, the incorrect answer choices, or distractors, should be viable options for at
least some of the examinees. If no one selects a particular option, then the effective number of
answer options is reduced, leading to a greater probability of an examinee selecting the correct
answer by chance.
Field test items were considered problematic and subjected to further review if both of the following
criteria were met:
▪ One or more of the distractors was selected by less than 5 percent of examinees; and
▪ One or more of the distractors was selected by a higher percentage of examinees than the percentage of examinees who selected the correct answer.
In addition, the discrimination of each distractor was considered. For the distractors, negative discrimination values are desirable. When interpreting the distractor analysis, these criteria should be treated as guidelines rather than absolute rules. Therefore, the complete profile of the distractors was used to determine which items should be considered problematic and eliminated from the item pool.
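A minimal Python sketch of the two flagging criteria follows; the option labels, data and function name are hypothetical, and the sketch omits the distractor discrimination profile discussed above.

import numpy as np

def flag_distractor_problems(choices, keys, options=("A", "B", "C", "D")):
    # choices: rows = examinees, columns = items; entries are the selected option labels.
    # keys: the correct option label for each item.
    flags = []
    for j, key in enumerate(keys):
        pct = {opt: np.mean(choices[:, j] == opt) for opt in options}
        distractors = [opt for opt in options if opt != key]
        rarely_chosen = any(pct[opt] < 0.05 for opt in distractors)    # criterion 1
        beats_key = any(pct[opt] > pct[key] for opt in distractors)    # criterion 2
        flags.append(rarely_chosen and beats_key)  # flagged only if both criteria are met
    return flags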
The results of the distractor analysis for each content area are provided in the Appendix. For each
content area, three sets of analyses are included: the number of examinees who responded to each
answer option, the percent of examinees who responded to each answer option, and the
discrimination (point biserial) of each answer option.
Using these criteria, 16 Applied Mathematics, 16 Reading for Information, and 10 Locating Information items had one or more distractors selected by fewer than 5 percent of examinees and one or more distractors selected by a higher percentage of examinees than the percentage who selected the correct answer.
Items whose distractors also showed undesirable (non-negative) discrimination values, in addition to being flagged on the two criteria described above, were considered problematic. Using these criteria, 5 Applied Mathematics, 11 Reading for Information, and 10 Locating Information items were flagged on all three criteria.
Initial Form Construction
The field test statistics were used to ensure the initial final forms of the WIN Ready to Work
Assessments for each content area (Applied Mathematics, Reading for Information, Locating
Information) would be similar in difficulty, discrimination and reliability and to determine which items
should be eliminated from consideration in the construction of the forms. Items were discarded if they
did not meet the criteria for items with reasonable psychometric properties. Once the poorly
functioning items were eliminated, parallel test forms were constructed.
Two fixed forms were constructed for each content area. The forms were constructed so that the
two test forms are (1) equal in difficulty, meaning that the mean difficulty for the test forms is as close
as possible given the available items, (2) equal in reliability, meaning that the reliability of the two
forms is as close as possible given the available items, and (3) equivalent in content representation,
meaning that the two test forms have similar content coverage.
In constructing the two initial fixed forms, the goal was to achieve an acceptable level of reliability.
The purpose and use of an assessment determines what constitutes an acceptable level of reliability.
When decisions about individuals are made, the desired reliability is 0.90, while reliabilities in the
range of 0.85-0.90 might be considered acceptable. There are many different approaches to
estimate the reliability of a score. The context of the test administration and the item types are the
primary factors used to determine the most appropriate measure of reliability. When there is a single
administration of a test, the most common measures of reliability are those that assess the internal
consistency of the test. Internal consistency can be measured by calculating coefficient alpha (often
referred to as Cronbach’s alpha) or a split-half reliability. Since coefficient alpha is essentially the
average of all possible split-half reliabilities, it is preferred to the calculation of a split-half reliability.
The field test was originally designed to develop an item bank for a computerized adaptive test using
Item Response Theory. Given the change to use classical test theory to construct fixed forms, the
field test design was not optimal for some of the planned analyses. When the goal is to construct
fixed forms, the fixed forms typically are field tested and problematic items are subsequently
removed. This design allows the publisher to hold variables such as item position constant between
the field test and the operational test. Maintaining item position is psychometrically desirable, as item
statistics sometimes vary depending on context effects such as item position. Therefore, the final
forms were constructed using items from the same field test form when possible.
The best estimate of reliability requires the examinees to respond to all items on the test. Given the
number of items, this was not practical. There were not enough examinees who responded to the
fixed forms to complete a traditional reliability analysis. Instead, coefficient alpha was estimated
using the inter-item correlation matrix. Coefficient alpha was estimated using the following formula
where N represents the number of items, and 𝑟 represents the average inter-item correlation.
α = (N × r) / (1 + (N − 1) × r)
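The Python sketch below shows this estimate computed directly from an inter-item correlation matrix, following the formula above; the matrix itself would come from the field test data, and the function name is illustrative.

import numpy as np

def alpha_from_correlation_matrix(corr):
    # corr: square inter-item correlation matrix for the N items on a form.
    n = corr.shape[0]
    r_bar = corr[~np.eye(n, dtype=bool)].mean()   # average off-diagonal (inter-item) correlation
    return (n * r_bar) / (1 + (n - 1) * r_bar)    # coefficient alpha per the formula above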
The two fixed forms constructed for each content area were developed to be as similar as possible in
terms of item difficulty, discrimination, and content representation. Summary data for each content
area follows with additional detail included in the Appendix.
Table 9 provides the item difficulty and discrimination for the two fixed forms for Applied
Mathematics. Each performance level within a form includes a relatively small number of items.
Median values are reported along with the mean values since mean values could be misleading due
to the small number of items. Additional information regarding the form construction can be found in
the Appendix.
Table 9: Applied Mathematics | Average Item Difficulty and Discrimination for Each Performance Level for Each Form
Level Statistic Form 1 Form 2
3 Difficulty Mean 0.618 0.740
3 Difficulty Median 0.610 0.765
3 Discrimination Mean 0.250 0.303
3 Discrimination Median 0.255 0.280
4 Difficulty Mean 0.629 0.641
4 Difficulty Median 0.630 0.700
4 Discrimination Mean 0.274 0.274
4 Discrimination Median 0.280 0.280
5 Difficulty Mean 0.576 0.561
5 Difficulty Median 0.550 0.620
5 Discrimination Mean 0.219 0.249
5 Discrimination Median 0.230 0.290
6 Difficulty Mean 0.348 0.391
6 Difficulty Median 0.350 0.370
6 Discrimination Mean 0.164 0.172
6 Discrimination Median 0.180 0.190
7 Difficulty Mean 0.504 0.525
7 Difficulty Median 0.510 0.495
7 Discrimination Mean 0.208 0.223
7 Discrimination Median 0.205 0.205
Overall Difficulty Mean 0.504 0.525
Overall Difficulty Median 0.510 0.495
Overall Discrimination Mean 0.208 0.223
Overall Discrimination Median 0.205 0.205
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Overall, the mean and median difficulties and discriminations are quite similar across the two forms.
Within each of the levels, the average values also are similar across the two forms.
The content representation of the two forms was also evaluated. The learning objectives for each
form and level are presented in Table 10.
Table 10: Applied Mathematics | Number of Items from Each Learning Objective for Each Performance Level and Each Form
Level 3: Objectives 1-4 (one item per objective on each form)
Level 4: Objectives 5-11 (one item per objective on each form)
Level 5: Objectives 12-18 (one item per objective on each form)
Level 6: Objectives 19-27 (one item per objective on each form)
Level 7: Objectives 28-34 (one item per objective on each form)
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Across the performance levels there is a high degree of consistency in the content representation of
each form for mathematics. Each form contains one item for each objective.
The reliability of each form was assessed for each level on each form and across levels on each
form. Since each level has relatively few items, it was expected that the reliability of each level would
be somewhat low. For the entire form, a reliability in the range of 0.80-0.90 was expected. Table 11
provides the values for coefficient alpha for each level and form, as well as across all levels for each
form.
Table 11: Applied Mathematics | Reliability of Each Level and Form
Level     Form 1   Form 2
3         0.45     0.69
4         0.71     0.76
5         0.55     0.63
6         0.38     0.52
7         0.33     0.30
Overall   0.86     0.90
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
For each level, the reliabilities are fairly similar with some exceptions at levels 3 and 6. At the total
test form level, the reliabilities are quite close, and both are adequately reliable.
Table 12 provides the item difficulty and discrimination for each Reading for Information form. For
each performance level (3-7), there are relatively small numbers of items in each form. Median values
are reported along with the mean values since mean values could be misleading due to the small
number of items. Additional information regarding the form construction can be found in the
Appendix.
Table 12: Reading for Information | Average Item Difficulty and Discrimination for Each Performance Level for Each Form
Level     Difficulty Mean (Form 1 / Form 2)   Difficulty Median (Form 1 / Form 2)   Discrimination Mean (Form 1 / Form 2)   Discrimination Median (Form 1 / Form 2)
3         0.758 / 0.703                       0.795 / 0.715                         0.437 / 0.402                           0.425 / 0.400
4         0.617 / 0.600                       0.625 / 0.630                         0.430 / 0.415                           0.420 / 0.395
5         0.637 / 0.702                       0.620 / 0.680                         0.543 / 0.490                           0.570 / 0.500
6         0.651 / 0.551                       0.660 / 0.510                         0.472 / 0.453                           0.510 / 0.420
7         0.500 / 0.573                       0.460 / 0.575                         0.348 / 0.428                           0.320 / 0.415
Overall   0.642 / 0.629                       0.650 / 0.635                         0.463 / 0.444                           0.465 / 0.430
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Overall, the mean and median difficulties and discriminations are quite similar. Within the levels, the
average values are similar across the two forms, with greater discrepancy at the upper performance
levels (6 and 7). This is not surprising given the relatively small number of items per level. The degree of
consistency is within acceptable limits given the small number of items.
The content representation of the two forms was also evaluated. The learning objectives for each
form and level are presented in Table 13.
Table 13: Reading for Information | Number of Items Addressing Each Learning Objective for Each Performance Level and Form
Level   Objective (Form 1 items / Form 2 items)
3       1 (1/2), 2 (1/1), 3 (2/1), 4 (1/1), 5 (1/1)
4       5 (1/1), 6 (2/2), 7 (1/2), 8 (2/1)
5       9 (2/1), 10 (2/2), 11 (1/2), 12 (1/1), 13 (1/1), 14 (2/2)
6       15 (1/1), 16 (1/1), 17 (2/2), 18 (1/1), 19 (2/2), 20 (1/1), 21 (1/1)
7       22 (1/2), 23 (1/1), 24 (2/1)
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
The reliability of each form was assessed in two ways: for each level on each form, and across levels
on each form. Since each level has relatively few items, it was expected that the reliability of each
level would be somewhat low. For the entire form, a reliability in the range of 0.80-0.90 was
expected. Table 14 provides the values for coefficient alpha for each level and form, as well as across
all levels for each form.
Table 14: Reading for Information | Reliability of Each Level and Form
Level   Form 1   Form 2
3       0.71     0.50
4       0.61     0.62
5       0.79     0.72
6       0.70     0.67
7       0.30     0.50
Total   0.91     0.89
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
For each level, the reliabilities are fairly similar with some exceptions at levels 3 and 7. At the total
test form level, the reliabilities are quite close, and both are adequately reliable.
Table 15 provides the item difficulty and discrimination for each form of the Locating Information
assessment. For each performance level, there are
relatively small numbers of items in each form. Median values are reported along with the mean
values since mean values could be misleading due to the small number of items. Additional
information regarding the form construction can be found in the Appendix.
Table 15: Locating Information | Average Item Difficulty and Discrimination for Each Performance Level and Form
Level     Difficulty Mean (Form 1 / Form 2)   Difficulty Median (Form 1 / Form 2)   Discrimination Mean (Form 1 / Form 2)   Discrimination Median (Form 1 / Form 2)
3         0.694 / 0.616                       0.730 / 0.710                         0.418 / 0.396                           0.410 / 0.410
4         0.658 / 0.660                       0.680 / 0.680                         0.538 / 0.529                           0.560 / 0.540
5         0.483 / 0.475                       0.470 / 0.535                         0.478 / 0.403                           0.485 / 0.400
6         0.413 / 0.557                       0.420 / 0.530                         0.400 / 0.460                           0.380 / 0.420
Overall   0.598 / 0.600                       0.600 / 0.560                         0.478 / 0.463                           0.480 / 0.450
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Overall, the mean and median difficulties and discriminations are quite similar across the two forms.
Within each level, the average values are similar across the two forms, with more discrepancy at the
upper performance level (6). The degree of consistency is within acceptable limits given the small
number of items.
The content representation of the two forms was also evaluated. The learning objectives for each
form and level are presented in Table 16.
Table 16: Locating Information | Number of Items from Each Learning Objective for Each Performance Level and Form
Level   Objective (Form 1 items / Form 2 items)
3       1 (2/2), 2 (3/3)
4       3 (2/2), 4 (1/1), 5 (2/2), 6 (2/2), 7 (2/2)
5       8 (1/1), 10 (1/1), 11 (2/2)
6       7 (1/1), 9 (1/1), 10 (1/1)
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Across the performance levels, there is a high degree of consistency in the content representation.
Both forms contain the same number of items for each objective.
The reliability of each form was assessed in two ways: for each level on each form, and across levels
on each form. Since each level has relatively few items, it was expected that the reliability of each
level would be somewhat low. For the entire form, a reliability in the range of 0.80-0.90 was
expected. Table 17 provides the values for coefficient alpha for each level and form, as well as across
all levels for each form.
Table 17: Locating Information | Reliability of Each Level and Form
Level     Form 1   Form 2
3         0.56     0.50
4         0.79     0.80
5         0.53     0.33
6         0.35     0.35
Overall   0.85     0.83
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
For each level, the reliabilities are fairly similar. At the total test form level, the reliabilities are quite
close, and both are adequately reliable.
Initial Form Equating
The design for equating assessments is an important first step in the data collection process because
an understanding of the equating methodology is required to determine how data should be
collected. The WIN Ready to Work Assessments by content area (Applied Mathematics,
Reading for Information, Locating Information) and the corresponding WIN Ready to Work
Courseware posttests were equated directly to one another via a common-person design. That is,
the same or “common” people took both the field test and the posttests. The WIN Ready to Work
posttests were previously aligned to the various career readiness assessments including the original
ACT WorkKeys® assessments, college placement tests such as COMPASS and Accuplacer, and
other assessments such as the TABE, GED, and the ASVAB, so the posttests served as an
intermediate link between the WIN Ready to Work Assessments and these nationally recognized
assessments.
The common person equating design is a special case of random groups equating. When random
groups are used, the most general form of the equating function is obtained from the equipercentile
equating methods (Kolen & Brennan, 2004). The strength of equipercentile equating stems from the
fact that identical score distributions for the two test forms are created. The creation of identical
score distributions is a reasonable goal, since the same examinees responded to both tests.
The equipercentile equating function is defined such that scores on one test form have the same
percentile rank as scores on the form to which it is equated (Kolen & Brennan, 2004). This is
accomplished through the construction of cumulative probability distributions for the total test
scores on the forms to be equated. The cumulative probability function provides the probability of a
correct response, conditional on the total score of the examinee. One important feature of this
function is that it increases monotonically. Since total test scores were not available for the newly
constructed WIN Ready to Work Assessment forms, the process was modified, while maintaining the
goals of equipercentile equating.
For each item on the WIN Ready to Work Assessments, the conditional probability of a correct
response was computed by calculating the conditional proportion of examinees answering the item
correctly. Since total test scores were not available, the conditioning variable was the score on the
WIN Ready to Work Courseware posttests. For each score category of the posttest, the percent of
examinees in that category who answered the item correctly was calculated. This allowed the
construction of a cumulative probability function. However, the limited number of examinees at the
extreme score categories / levels diminished the stability and reliability of the function and could
lead to a function that is not monotonically increasing. This lack of monotonicity violates the
assumption that as the examinee’s ability increases, so does the probability of a correct response.
To correct for the instability, it is common to use a smoothing technique that ensures the
monotonicity of the function (Kolen & Brennan, 2004). The smoothing was accomplished using data
from the score categories / levels with the most examinees: score Levels 2, 3, 4, 5 on the WIN Ready
to Work Courseware posttests. Using the conditional probabilities, a smoothing function was
obtained with a least squares approach. This smoothing function was then used to calculate the
smoothed conditional probabilities for each score category, resulting in a smoothed conditional
probability distribution.
The probabilities for each item were summed across all items on the test form to produce an
expected score on the total test. Thus, a cumulative probability function was obtained for the total
test score. This function was used to establish the cut points on the WIN Ready to Work
Assessments that classify scores into the score categories / levels on the WIN Ready to Work
Courseware posttests. For each adjacent score category on the posttest, a cut score was obtained to
separate the scores on the WIN Ready to Work Assessments. The cut scores were taken to be
the average of the two expected scores on the WIN Ready to Work Assessments that corresponded to
the adjacent score categories / levels. For example, to find the cut score
between Level 1 and 2, the expected score on the WIN Ready to Work Assessment for Level 1 was
averaged with the expected score on the WIN Ready to Work Assessment for Level 2. This value was
then rounded to the nearest integer value.
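The computation can be sketched as follows (a simplified illustration with hypothetical inputs, not the operational analysis code; in particular, the linear least-squares smoothing over posttest Levels 2-5 shown here is an assumption standing in for the smoothing function actually used).

```python
# Illustrative sketch of the modified equipercentile procedure described above.
# item_by_level_pcorrect[i, L] = proportion of examinees at posttest level L who
# answered item i correctly (hypothetical data).
import numpy as np

def expected_score_by_level(item_by_level_pcorrect, levels=np.arange(8)):
    """Smooth each item's conditional proportions and sum them to get the
    expected raw score on the new form at each posttest level."""
    p_mat = np.asarray(item_by_level_pcorrect, dtype=float)
    smoothed = np.empty_like(p_mat)
    stable = (levels >= 2) & (levels <= 5)          # levels with the most examinees
    for i, p in enumerate(p_mat):
        slope, intercept = np.polyfit(levels[stable], p[stable], 1)   # least squares
        smoothed[i] = np.clip(slope * levels + intercept, 0.0, 1.0)
    return smoothed.sum(axis=0)

def cuts_from_expected(expected):
    """Cut between adjacent posttest levels = rounded average of the two
    adjacent expected scores, as described in the text."""
    return [int(round((a + b) / 2)) for a, b in zip(expected[:-1], expected[1:])]

# Hypothetical usage: 34 items, posttest score levels 0-7, yielding seven cut scores
rng = np.random.default_rng(0)
p = np.sort(rng.uniform(0.2, 0.9, size=(34, 8)), axis=1)
print(cuts_from_expected(expected_score_by_level(p)))
```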
Since the WIN Ready to Work Assessments at each level are short, and there are many score
categories / levels for the WIN posttests, the raw-score ranges on the WIN Ready to Work
Assessments that correspond to the various levels of the WIN posttests are quite narrow. Table 18
provides the cut scores on the WIN Ready to Work Assessments that correspond to the WIN Ready
to Work Courseware posttest scores categories / levels for each form and content area.
Table 18: WIN Ready to Work Assessment Cut Score Related To WIN Posttest Level Scores
Content Area / Form    0/1    1/2    2/3    3/4    4/5    5/6    6/7
AM Form 1              12     14     16     19     21     23     26
AM Form 2              11     14     17     19     22     25     28
RI Form 1              20     21     22     24     25     26     27
RI Form 2              16     19     21     23     25     28     30
LI Form 1              12     12     13     13     14     15     n/a
LI Form 2              11     12     13     13     14     15     n/a
AM = Applied Mathematics RI = Reading for Information LI = Locating Information
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
To equate the WIN Ready to Work Assessments to the ACT WorkKeys® Assessments, a common
person design was implemented, whereby the same examinees took both the WIN Ready to Work
Courseware posttests and the ACT WorkKeys® Assessments for each content area.
To equate the scores, an equipercentile approach was used, which utilized both pre-smoothing and
post-smoothing of the data. The pre-smoothing was done using log-linear models, and the post-
smoothing was conducted using cubic splines (Kolen & Brennan, 2004). This resulted in a conversion
that linked the scores on the WIN posttests and the ACT WorkKeys® Assessments. The WIN
posttests were one level harder than the ACT WorkKeys® Assessments for Levels 3 and 4 in all three
content areas and for Level 5 in Reading for Information. Level 5 was of equal difficulty in Applied
Mathematics and Locating Information.
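As a simplified illustration of the underlying idea, the sketch below performs an unsmoothed equipercentile linking on hypothetical common-person data; the log-linear pre-smoothing and cubic-spline post-smoothing used operationally (Kolen & Brennan, 2004) are omitted.

```python
# Bare-bones equipercentile linking sketch (hypothetical data, no smoothing).
import numpy as np

def equipercentile_link(scores_x, scores_y):
    """Map each possible score on test X to the score on test Y that has the same
    percentile rank, using common-person samples scores_x and scores_y."""
    scores_x, scores_y = np.asarray(scores_x), np.asarray(scores_y)
    x_points = np.arange(scores_x.min(), scores_x.max() + 1)
    # Cumulative proportion at or below each X score point ...
    p = np.array([np.mean(scores_x <= x) for x in x_points])
    # ... mapped to the Y score at the same cumulative proportion.
    y_equiv = np.round(np.quantile(scores_y, p)).astype(int)
    return dict(zip(x_points.tolist(), y_equiv.tolist()))
```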
Scoring Methodology | Scale Scores
There are several types of transformations that can be used to create scale scores, the primary
distinctions being linear, piece-wise linear (dog-legged), and non-linear. Depending on the use of the
score, the appropriate transformation should be used. Non-linear transformations create scale scores
that are not interval-level scores. What that means is that a two-point change in scale score means
different things at different points in the scale. This can be confusing for users of the scores,
especially in cases where changes in scores might be calculated. A special case of the non-linear
transformation is a piece-wise linear transformation, where multiple linear transformations are used
throughout the score range. These types of transformations also lead to scales that are non-interval
level scales. To maintain the interval-level nature that is desirable for most score scales, a linear
transformation should be used. In these instances, changes of two points (or any other change) mean
the same thing throughout the score scale. For this reason, linear transformations are typically used.
In some cases, it is impossible to use a strict linear transformation, and a piecewise linear
transformation is used.
Scale compression exists when multiple raw scores are given the same scale score. When a linear or
piecewise linear transformation is used, it is easy to control for score compression by choice of the
number of scale points used (Kolen & Brennan, 2004). For example, if a test has 20 raw points, and
scores are transformed to a scale with fewer than 20 points, scale compression will necessarily
occur. Since this is undesirable, it is easy to control scale compression by choosing a reporting scale
that has at least as many scale score points as there are raw score points. With non-linear
transformations, scale compression is a bigger issue and care must be taken to ensure that scale
compression does not occur.
Scale expansion exists when small changes in raw scores lead to large changes in scale scores
(Briggs & Weeks, 2009; Kolen & Brennan, 2004). Scale expansion is extremely common with non-
linear transformations. Piecewise linear transformations are also susceptible to scale expansion,
so care must be exercised when constructing piecewise linear scales. With linear
scale transformations, scale expansion is controlled by limiting the range of scores that is used. For
instance, if there are 20 points on a test, transforming this to a 100-point scale would necessitate a
five-point change in scale score for a one-point change in raw score. Conversely, if a 40-point scale is
chosen, a raw score change of one point would translate only to a two-point change in scale score.
Since Applied Mathematics and Reading for Information have the same number of items, the scales
would be the same for each test, and the options analyzed first apply to both of these tests. Locating
Information has fewer score points, and as a result, the score options for this test were analyzed
separately.
Applied Mathematics and Reading for Information Assessments: 34 Points
As noted above, using a fully linear transformation would result in a scale with desirable properties.
Given the range of possible scores on Applied Mathematics and Reading for Information, a fully linear
transformation would result in a reporting scale of 200-268, using the following transformation:
Scale Score = Raw Score *2 + 200
In this instance, all scale scores would be even integers, a change of one raw point would result
in a two-point change in the scale score, and there would be no scale compression. The potential
downside to this scale is that the top score is 268,
which is unappealing. To create the scale such that the top score is 270, which is more appealing and
a more accepted practice, a piecewise linear transformation, or dog-legged transformation, was
applied. If the preceding transformation were used for all scores except the maximum score, which
would be set to 270, then there would be a four-point difference for a change in one raw score, but
only for the change from a raw score of 33 to a score of 34. The magnitude of the change in the scale
score is twice that of the change anywhere else in the scale. This is not a desirable feature in a score
scale. Instead, the difference is split between the top score (34) and the bottom score (0), since these
scores are less likely to occur. In this case, we have the following transformations:
Minimum Score (0) = 200
Maximum Score (34) = 270
All Other Scores = Raw Score *2 + 201
The result is a scale score within the range of 200-270. There is no scale compression and some
minor scale expansion relative to the fully linear transformation presented above. The dog-legged
transformation scores, as follows, are used for Applied Mathematics and Reading for Information.
Table 19: Scale Scores | Applied Mathematics and Reading for Information
Raw Score Linear Transformation Dog-legged Transformation
0 200 200 1 202 203 2 204 205 3 206 207 4 208 209 5 210 211
6 212 213 7 214 215 8 216 217 9 218 219 10 220 221 11 222 223 12 224 225 13 226 227 14 228 229 15 230 231 16 232 233 17 234 235 18 236 237 19 238 239 20 240 241 21 242 243 22 244 245 23 246 247 24 248 249 25 250 251 26 252 253 27 254 255 28 256 257 29 258 259 30 260 261 31 262 263 32 264 265 33 266 267 34 268 270
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Locating Information: 21 Points
Locating Information has only 21 possible score points. Therefore, it is ideal to use a different scale
than that used for the other two tests, to both avoid confusion among consumers and scale
expansion. In this case, the scale can be either 200-242, in the case of a strictly linear scale
transformation, or 200-240, with the dog-legged transformation.
For the strictly linear case, the reporting scale would be 200-242, using the following transformation:
Scale Score = Raw Score *2 + 200
In this instance, all scale scores would be even integers, a change of one raw point would result
in a two-point change in the scale score, and there would be no scale compression. The potential
downside to this scale is that the top score is 242, which may be unappealing. To create the scale
such that the top score is 240, which is more appealing, a piecewise linear transformation, or
dog-legged transformation, was used. Therefore,
we have the following transformations:
Minimum Score (0) = 200
Maximum Score (21) = 240
All Other Scores = Raw Score *2 +199
The result is a scale score within the range of 200-240. There is no scale compression and some
minor scale expansion relative to the fully linear transformation presented above. The dog-legged
transformation scores, as follows, are used for Locating Information.
Table 20: Scale Scores | Locating Information
Raw Score Linear Transformation Dog-legged Transformation
0 200 200 1 202 201 2 204 203 3 206 205 4 208 207 5 210 209 6 212 211 7 214 213 8 216 215 9 218 217 10 220 219 11 222 221 12 224 223 13 226 225 14 228 227 15 230 229 16 232 231 17 234 233 18 236 235 19 238 237 20 240 239 21 242 240
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
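Both transformations can be expressed as one parameterized function; the sketch below is illustrative only, and the asserted values simply confirm that the stated parameters reproduce the dog-legged columns of Tables 19 and 20.

```python
# Dog-legged (piecewise linear) scale-score transformation described above.
def dog_legged_scale(raw, max_raw, offset, floor, ceiling):
    if raw == 0:
        return floor          # minimum raw score is pinned to the scale floor
    if raw == max_raw:
        return ceiling        # maximum raw score is pinned to the scale ceiling
    return raw * 2 + offset   # all other raw scores are transformed linearly

# Applied Mathematics / Reading for Information (34 raw points, scale 200-270)
am_ri = [dog_legged_scale(r, 34, 201, 200, 270) for r in range(35)]
# Locating Information (21 raw points, scale 200-240)
li = [dog_legged_scale(r, 21, 199, 200, 240) for r in range(22)]

assert am_ri[1] == 203 and am_ri[33] == 267 and am_ri[34] == 270   # matches Table 19
assert li[1] == 201 and li[20] == 239 and li[21] == 240            # matches Table 20
```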
WIN and Measured Progress continued to monitor scores for the WIN Ready to Work Assessments
and adjust the scoring methodology in 2012-13, using a categorical comparison process to compare
percentage categories of data at each level attained (<3 – 6/7) in the 2011-12 fiscal year to a
representative proportionate sample of the 2012-13 fiscal year to date. The following steps were
used:
1. Data were compiled from the 2011-12 Florida Ready to Work program, which administered the ACT WorkKeys® assessments to participants.
2. The percentage of examinees scoring at each level (<3 – 6/7) in 2011-2012 was calculated.
3. A proportionate representative sample assessed in 2012-13 was identified.
4. The proportionate sample data set was scrubbed to exclude clearly incomplete or non-representative data. This included removing assessments taken under the WIN Learning Assessment Center for internal testing of the Measured Progress system and removing data for proctors who were testing the Measured Progress system.
5. The proportionate sample data set was analyzed to determine the number and percentage of examinees who achieved each score point (1-21/34 items answered correctly) and cumulative percentages associated with each level within the test. The cumulative percentage categories were compared to those for the 2011-12 Florida Ready to Work program to determine cut scores that would result in the same score distribution as the 2011-12 assessments.
This process was chosen because an item-level data set was not available for the previous
assessments. However, the new items are considered aligned to prior items because the new items
are aligned to the WIN Ready to Work Courseware, which is aligned to the prior ACT WorkKeys®
assessments. Tables 21-23 summarize the analysis of the 2011-12 data.
Table 21: Applied Mathematics | Cumulative Percentages of Examinees Scoring at each Level (2011-12)
Level   Number of Records 2011-12   Percent of Total 2011-12   Cumulative Percentage 2011-12
<3      790                         5.76%                      5.76%
3       2085                        15.21%                     20.97%
4       3009                        21.95%                     42.92%
5       4334                        31.61%                     74.53%
6       2850                        20.79%                     95.32%
7       641                         4.68%                      100.00%
Total AM Scores: 13709              100.00%                    100.00%
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Table 22: Reading for Information | Cumulative Percentages of Examinees Scoring at each Level (2011-12)
Level   Number of Records 2011-12   Percent of Total 2011-12   Cumulative Percentage 2011-12
<3      346                         2.58%                      2.58%
3       376                         2.80%                      5.38%
4       3732                        27.83%                     33.21%
5       5462                        40.73%                     73.94%
6       2533                        18.89%                     92.83%
7       961                         7.17%                      100.00%
Total RI Scores: 13410              100.00%                    100.00%
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Table 23: Locating Information | Cumulative Percentages of Examinees Scoring at each Level (2011-12)
Level   Number of Records 2011-12   Percent of Total 2011-12   Cumulative Percentage 2011-12
<3      717                         5.06%                      5.06%
3       2171                        15.31%                     20.37%
4       9031                        63.69%                     84.06%
5       2232                        15.74%                     99.80%
6       28                          0.20%                      100.00%
Total LI Scores: 14179              100.00%                    100.00%
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
The tables show the total number of scores earned at each level, the percentage of the total at each
level, and the cumulative percentages for each assessment: Applied Mathematics, Reading for
Information, and Locating Information. Tables 21-23 established a proportionate percentage that
should be attained at each level.
Using the combined data collected to date (fiscal year 2012-13), a process to determine cut scores in
equivalent categories / levels of performance (<3 – 6/7) was applied consistently. Tables 24-26
display the results of this process.
Table 24: Applied Mathematics
Applied Mathematics
Number Correct Total # Participants Percent Cumulative Percentage
1 0 0.00% 0.00% 2 0 0.00% 0.00% 3 0 0.00% 0.00% 4 1 0.13% 0.13% 5 1 0.13% 0.25% 6 6 0.75% 1.00% 7 10 1.25% 2.25% 8 12 1.50% 3.75% 9 24 3.00% 6.76%
10 17 2.13% 8.89% 11 18 2.25% 11.14% 12 20 2.50% 13.64% 13 33 4.13% 17.77% 14 29 3.63% 21.40% 15 36 4.51% 25.91% 16 44 5.51% 31.41% 17 45 5.63% 37.05% 18 44 5.51% 42.55%
19 41 5.13% 47.68% 20 44 5.51% 53.19% 21 45 5.63% 58.82% 22 56 7.01% 65.83% 23 36 4.51% 70.34% 24 26 3.25% 73.59% 25 33 4.13% 77.72% 26 22 2.75% 80.48% 27 32 4.01% 84.48% 28 25 3.13% 87.61% 29 25 3.13% 90.74% 30 21 2.63% 93.37% 31 20 2.50% 95.87% 32 17 2.13% 98.00% 33 10 1.25% 99.25% 34 6 0.75% 100.00%
799 100.00% Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Table 25: Reading for Information
Reading for Information
Number Correct Total # Participants Percent Cumulative Percentage
1 0 0.00% 0.00% 2 0 0.00% 0.00% 3 0 0.00% 0.00% 4 0 0.00% 0.00% 5 0 0.00% 0.00% 6 0 0.00% 0.00% 7 0 0.00% 0.00% 8 1 0.14% 0.14% 9 1 0.14% 0.28%
10 1 0.14% 0.41% 11 3 0.41% 0.83% 12 2 0.28% 1.11% 13 7 0.97% 2.07% 14 7 0.97% 3.04% 15 8 1.11% 4.15% 16 10 1.38% 5.53% 17 9 1.24% 6.78% 18 10 1.38% 8.16% 19 18 2.49% 10.65% 20 17 2.35% 13.00% 21 22 3.04% 16.04% 22 25 3.46% 19.50% 23 31 4.29% 23.79% 24 28 3.87% 27.66% 25 39 5.39% 33.06% 26 42 5.81% 38.87% 27 50 6.92% 45.78% 28 59 8.16% 53.94% 29 54 7.47% 61.41%
30 69 9.54% 70.95% 31 70 9.68% 80.64% 32 63 8.71% 89.35% 33 49 6.78% 96.13% 34 28 3.87% 100.00%
723 100.00% Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Table 26: Locating Information
Locating Information
Number Correct Total # Participants Percent Cumulative Percentage
1 0 0.00% 0.00% 2 0 0.00% 0.00% 3 2 0.27% 0.27% 4 0 0.00% 0.27% 5 3 0.40% 0.67% 6 6 0.81% 1.48% 7 13 1.75% 3.23% 8 10 1.35% 4.58% 9 16 2.16% 6.74%
10 25 3.37% 10.11% 11 18 2.43% 12.53% 12 25 3.37% 15.90% 13 38 5.12% 21.02% 14 63 8.49% 29.51% 15 65 8.76% 38.27% 16 89 11.99% 50.27% 17 84 11.32% 61.59% 18 83 11.19% 72.78% 19 80 10.78% 83.56% 20 78 10.51% 94.07% 21 44 5.93% 100.00%
742 100.00%
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Tables 21-23 were extended using the score distribution in Tables 24-26 to produce the cut scores
displayed in the following tables.
Table 27: Applied Mathematics | 2012-13 Cut Scores
Level   Number of Records 2011-12   Percent of Total 2011-12   Cumulative Percentage 2011-12   Cut Score Range 2012-13 (# Correct)
<3      790                         5.76%                      5.76%                           1-9
3       2085                        15.21%                     20.97%                          10-14
4       3009                        21.95%                     42.92%                          15-18
5       4334                        31.61%                     74.53%                          19-24
6       2850                        20.79%                     95.32%                          25-31
7       641                         4.68%                      100.00%                         32-34
Total AM Scores: 13709              100.00%                    100.00%
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
The cumulative percentage for the 2011-12 Applied Mathematics test is 5.76%. By comparing this to
the 2012-13 sample it was determined that a Level 3 score begins when an examinee answers 10
questions correctly. This leads to a <3 score range of 1-9 correct. The same process was applied to
determine subsequent levels and to create similar cut score tables for Reading for Information and
Locating Information.
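One plausible implementation of this matching is sketched below. The "closest cumulative percentage" rule shown is an assumption for illustration: with the Applied Mathematics counts in Table 24 and the 2011-12 percentages in Table 21 it reproduces the Table 27 ranges, although boundaries at sparsely populated score points for the other content areas may also have involved judgment.

```python
# Illustrative sketch: derive level bands for a new form by matching its cumulative
# percentages to the 2011-12 reference cumulative percentages.
import numpy as np

def level_bands(counts_by_raw, reference_cum_pcts, min_raw=1):
    """counts_by_raw[i] = number of new-form examinees scoring (min_raw + i) correct.
    reference_cum_pcts = 2011-12 cumulative percentages through each band, e.g.
    [5.76, 20.97, 42.92, 74.53, 95.32] for Applied Mathematics (<3 through Level 6)."""
    counts = np.asarray(counts_by_raw, dtype=float)
    cum_pct = 100 * counts.cumsum() / counts.sum()
    # Top of each band = raw score whose cumulative percentage is closest to the target.
    tops = [min_raw + int(np.abs(cum_pct - t).argmin()) for t in reference_cum_pcts]
    lows = [min_raw] + [t + 1 for t in tops]
    highs = tops + [min_raw + len(counts) - 1]
    return list(zip(lows, highs))   # (low, high) raw-score range for each level
```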
Table 28: Reading for Information | 2012-13 Cut Scores
Level   Number of Records 2011-12   Percent of Total 2011-12   Cumulative Percentage 2011-12   Cut Score Range 2012-13 (# Correct)
<3      346                         2.58%                      2.58%                           1-14
3       376                         2.80%                      5.38%                           15-16
4       3732                        27.83%                     33.21%                          17-25
5       5462                        40.73%                     73.94%                          26-30
6       2533                        18.89%                     92.83%                          31-32
7       961                         7.17%                      100.00%                         33-34
Total RI Scores: 13410              100.00%                    100.00%
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Table 29: Locating Information | 2012-13 Cut Scores
Level   Number of Records 2011-12   Percent of Total 2011-12   Cumulative Percentage 2011-12   Cut Score Range 2012-13 (# Correct)
<3      717                         5.06%                      5.06%                           1-8
3       2171                        15.31%                     20.37%                          9-13
4       9031                        63.69%                     84.06%                          14-19
5       2232                        15.74%                     99.80%                          20
6       28                          0.20%                      100.00%                         21
Total LI Scores: 14179              100.00%                    100.00%
Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
The following scoring matrix was developed to align cut scores with scale scores.
Table 30: WIN Ready to Work Assessments | Scoring Matrix
WIN Ready to Work Assessments Scoring Matrix
Number
Correct
AM and RI Scale Score
AM Form 1
AM Form 2
RI Form 1
RI Form 2
LI Form 1
LI Form 2
LI Scale Score
0 200 0 0 0 0 0 0 200 1 203 0 0 0 0 0 0 201 2 205 0 0 0 0 0 0 203 3 207 0 0 0 0 0 0 205 4 209 0 0 0 0 0 0 207 5 211 0 0 0 0 0 0 209 6 213 0 0 0 0 0 0 211 7 215 0 0 0 0 0 0 213 8 217 0 0 0 0 0 0 215 9 219 0 0 0 0 3 3 217 10 221 3 3 0 0 3 3 219 11 223 3 3 0 0 3 3 221 12 225 3 3 0 0 3 3 223 13 227 3 3 0 0 3 3 225 14 229 3 3 0 0 4 4 227 15 231 4 4 3 3 4 4 229 16 233 4 4 3 3 4 4 231 17 235 4 4 4 4 4 4 233 18 237 4 4 4 4 4 4 235 19 239 5 5 4 4 4 4 237 20 241 5 5 4 4 5 5 239 21 243 5 5 4 4 6 6 240 22 245 5 5 4 4 23 247 5 5 4 4 24 249 5 5 4 4 25 251 6 6 4 4 26 253 6 6 5 5 27 255 6 6 5 5 28 257 6 6 5 5 29 259 6 6 5 5 30 261 6 6 5 5 31 263 6 6 6 6 32 265 7 7 6 6 33 267 7 7 7 7 34 270 7 7 7 7
AM = Applied Mathematics RI = Reading for Information LI = Locating Information
Adjusted cut scores implemented 8.17.2012. Prepared by WIN Senior Psychometrician Lisa Keller, Ed.D. and
independently verified by Measured Progress Psychometricians Stuart Kahl, Ph.D. and Michael Nering, Ph.D.
Ongoing Analyses and Development
Ongoing review of the relevance and rigor of the WIN Ready to Work Assessments, including the
standards / learning objectives and scoring methodology, is substantively and continuously
informed by current industry research, employers and customer feedback.
Live item statistics are continually monitored to ensure optimum performance of the assessment
instruments as a whole and of individual items.
A comprehensive refresh and replacement of items is currently underway with new items projected
to be ready for field testing in summer 2018 and new equated forms projected for release by winter
2018-19.
Chapter IV | WIN Essential Soft Skills Assessment
The WIN Essential Soft Skills Assessment (also known as the Situational Judgment Assessment)
was developed from 2004 to 2006 and first published in 2006 by the National Work Readiness
Council (nwrc.org), a national nonprofit workforce development, training and advocacy organization
founded by the U.S. Chamber of Commerce; the state departments of labor, workforce development
or equivalent for New York, New Jersey, Florida, Washington, Rhode Island, and the District of
Columbia; and Junior Achievement.
The NWRC contracted with Castle Worldwide (castleworldwide.com), a national high-stakes industry
licensure testing and credentialing company, to lead the initial design, development, and delivery of
the assessment. Education, workforce development, and business / industry subject matter experts
directly participated in every stage of development and validation of the assessment and included:
▪ National Institute for Literacy
▪ National Skills Standards Board
▪ U.S. Department of Education and U.S. Department of Labor
▪ New York State AFL-CIO representing labor
▪ National business / industry associations, including the National Retail Federation and the National Association of Manufacturers, and more than 100 employers, large and small, spanning industry sectors
The Human Resources Research Organization (humrro.org), a nonprofit organization with expertise in
human capital management, training, credentialing and program analysis, coordinated the initial field
test and development of the initial cut scores. SRI International (sri.com), an independent, nonprofit
research center with expertise in assessment design and evaluation, conducted and authored the
initial validity and reliability study.
To reconfirm content validity of the assessment, a role delineation study was conducted in 2007-08
and published in 2009 by Castle Worldwide. The work of the four role delineation panels was then
validated by a national sample of supervisors. The number and content of the assessment items are
linked directly to the role delineations. The number of each type of item is the direct result of how
important the validation sample felt the content is and how frequently the content is used by an
entry-level worker. The assessment items were developed and initially reviewed by adult educators.
The item count and content were driven by the assessment blueprints that were derived from the
role delineation validation surveys. The items were reviewed by English and psychometric editors
applying the assessment design and development principles described in Chapter II. Final reviews
were conducted by field reviewers designated by the National Work Readiness Council and included
educators, workforce developers and employers.
The assessment items were refreshed, and the scoring methodology was revalidated and adjusted in
2010 by Castle Worldwide. In 2015, WIN became the exclusive provider of the assessment. Castle
Worldwide continued to support online delivery, scoring and psychometric analyses of the
assessment until 2016, when delivery and scoring were transitioned to a new state-of-the-art online
platform maintained by WIN.
Continuous item development and quality assurance monitoring is now managed by a WIN team of
highly experienced instructional design, assessment development, psychometric and technology
professionals. The development process is informed by industry research and subject matter experts
including educators, workforce developers and employers.
General Characteristics
The WIN Essential Soft Skills Assessment is based on the Equipped for the Future standards
developed by the National Institute for Literacy (nifl.gov) in partnership with the Center for Literacy,
Education and Employment (clee.utk.edu) at the University of Tennessee and is supported by 20
years of employer-focused research including, but not limited to, the U.S. Department of Labor
Secretary’s Commission on Achieving Necessary Skills (SCANS); U.S. Department of Labor Building
Blocks Competency Model; U.S. Department of Education Employability Skills Framework; and
National Network of Business and Industry Associations Common Employability Skills.
The WIN Essential Soft Skills Assessment measures foundational work habits and employability
skills in demand by employers including, but not limited to, the ability to solve problems and make
decisions, cooperate with others, resolve conflict and negotiate, observe critically, and take
responsibility for learning.
The assessment is aligned to the WIN Soft Skills Curriculum. The curriculum is designed based on
the aforementioned research and a more recent soft skills specific study commissioned by the South
Carolina Workforce Investment Board and published in 2010 by Richard Nagle, Ph.D., in
partnership with the University of South Carolina. The study gathered input from the South Carolina
business community through 46 focus groups statewide and further defined essential soft skills
required for employment and job retention to include communicating effectively, conveying
professionalism, promoting teamwork and collaboration, and thinking critically and solving problems.
The WIN Essential Soft Skills Assessment is criterion-referenced against an absolute standard or
“criterion” for performance. Thus, the assessment measures mastery of specific learning objectives
rather than comparing an individual’s scores to the performance of other examinees. The items are
career contextualized and are designed to measure the ability to apply the targeted career readiness
skills and attributes, not simply demonstrate knowledge of the related concepts.
The assessment consists of 40 multiple-choice items. Each item presents a brief workplace situation,
followed by a question and four options. Examinees are prompted to make two answer selections for
each situation – the best option and the worst option – therefore yielding 80 potential responses.
The assessment is proctored. Standard administration time is 60 minutes. Accommodations,
including extension of time, are permissible based on commonly accepted policies and procedures.
The assessment is delivered online through the WIN Career Readiness System. The assessment is
currently available in English only. A paper-based version of the assessment is in development and
scheduled for release in spring 2018. WIN has partnered with Scantron (scantron.com) to support
printing, distribution and scoring of the paper-based version of the assessment. Large print and
Braille versions of the assessment are available upon request.
As of fall 2017, approximately 100,000 WIN Essential Soft Skills Assessments have been
administered by more than 1,000 implementation partners nationwide, including high
schools, adult education programs, community colleges, technical centers, state workforce
development system career centers, juvenile justice and corrections providers, community-based
organizations, and employers.
Standards
The primary essential employability / soft skills targeted by the WIN Essential Soft Skills
Assessment are summarized by Table 31.
Table 31: WIN Essential Soft Skills Assessment | Standards
Domain I: Resolve Conflict (7 items, 14 points)
Ability to identify the source of conflict, suggest options to resolve it, and assist the parties in conflict in reaching a mutually satisfactory agreement.
▪ Acknowledge conflict by defining conflict (1 item)
▪ Acknowledge conflict by identifying areas of agreement and disagreement (1 item)
▪ Acknowledge conflict by accurately restating the conflict with some detail and examples (1 item)
▪ Generate options for resolving the conflict that have a win/win potential (1 item)
▪ Negotiate an agreement that will satisfy the conflicted parties using a range of strategies to facilitate negotiation (1 item)
▪ Negotiate an agreement that will satisfy the conflicted parties by monitoring the process for its effectiveness and fairness (1 item)
▪ Evaluate the results of the negotiation (1 item)
Domain II: Cooperate with Others (9 items, 18 points)
Ability to interact and communicate with others in a friendly and courteous manner, show respect for others' ideas and opinions, and adjust actions to take into account the needs of others and/or the task to be accomplished.
▪ Interact with others in ways that are friendly, courteous, and tactful and that demonstrate respect for the ideas, opinions, and contributions of others (2-3 items)
▪ Use strategies to seek input from others in order to understand their actions and reactions (2-3 items)
▪ Offer clear input on personal interests and attitudes so that others can understand one's actions and reactions and to clarify one's position (2-3 items)
▪ Try to adjust one's actions to take into account the needs of others and/or the tasks to be accomplished (2-3 items)
Domain III: Take Responsibility for Learning (9 items, 18 points)
Ability to identify own strengths and weaknesses, set goals for learning, identify and pursue opportunities for learning, and monitor progress toward goals.
▪ Identify how you learn effectively (1-2 items)
▪ Identify a learning goal (1-2 items)
▪ Select and use strategies and information appropriate to the learning goal (1-2 items)
▪ Monitor / manage progress toward achieving the learning goal (1-2 items)
▪ Monitor effectiveness of the learning process (1-2 items)
Domain IV: Solve Problems and Make Decisions (8 items, 16 points)
Ability to identify the nature of a problem, evaluate ways to solve the problem, and select the best alternative.
▪ Identify the problem to be solved or decision to be made (1-2 items)
▪ Understand and communicate the root causes of the problem (1-2 items)
▪ Generate possible solutions that address the root causes of the problem (1-2 items)
▪ Evaluate options and select the one most likely to succeed based on apparent causal connection and appropriateness to the context (1-2 items)
▪ Plan and implement the chosen solution (1-2 items)
▪ Monitor effectiveness toward a solution (1-2 items)
Domain V: Observe Critically (7 items, 14 points)
Ability to look carefully at visual sources of information, evaluate the information for accuracy, and develop a clear understanding of the information.
▪ Focus attention on visual information and locate information relevant to a purpose (2-3 items)
▪ Analyze and interpret relevant information (2-3 items)
▪ Monitor comprehension of visual information to achieve purpose (2-3 items)
Field Testing
To evaluate the validity and reliability of the WIN Essential Soft Skills Assessment, a field test was
administered in nine states and the District of Columbia. Participants included entry level employees
and their supervisors and represented statistically valid samples based on race / ethnicity, gender
and industry. The resulting evaluation affirms that the Essential Soft Skills Assessment is
statistically valid and reliable.
Field test results were computed for the assessment and targeted subgroups to detect any
differential performance. Table 32 shows the descriptive statistics for the assessment. There were
adequate levels of variability in the scores on the assessment.
Table 32: WIN Essential Soft Skills Assessment | Field Test Total Test Statistics
N Minimum Maximum Mean SD
485 19 96 71.07 17.41
Based on 51 items; 4 poorly performing field test items were deleted from the analysis. Prepared by Castle Worldwide Psychometrician James Penny, Ph.D.
The assessment was then examined for group differences by race, gender, employment status, and
industry to determine if certain subgroups were more adversely affected by the assessment than
other groups. The following table presents comparisons by race/ethnicity and gender. The last
column of the table shows the effect size, or d statistic, which estimates how meaningful the
difference is in a practical sense.
Moderate differences occurred between whites and African-Americans (with whites scoring higher).
In terms of white/Hispanic comparisons, differences were relatively small (with whites scoring
higher). In terms of gender, females scored slightly higher than males.
Table 33: Race and Gender
Comparison                 First Group: Mean / SD / N   Second Group: Mean / SD / N   d (a)
White / African-American   75.14 / 13.74 / 220          64.70 / 20.04 / 169            .62
White / Hispanic           76.14 / 13.74 / 220          71.89 / 16.50 / 75             .29
Male / Female              67.90 / 19.77 / 202          73.34 / 15.18 / 283           -.32
(a) Effect size (d) conventions include .20 as small, .50 as medium, and .80 as large. Note it is common to find a 1.0 white/African-American effect size for cognitive tests. (Chan et al., 1997; Roth et al., 2001; and Rushton & Jensen, 2005)
Prepared by Castle Worldwide Psychometrician James Penny, Ph.D.
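The tabled effect sizes follow the standard pooled-standard-deviation formulation of Cohen's d; the brief sketch below reproduces the white / African-American comparison from the summary statistics above.

```python
# Pooled-standard-deviation effect size (Cohen's d).
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

print(round(cohens_d(75.14, 13.74, 220, 64.70, 20.04, 169), 2))   # 0.62, as in Table 33
```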
The following table shows the mean (average) differences in the scores by employment status.
Table 34: Mean Scores by Employed / Not Employed Status
Mean SD N
Employed 71.79 17.17 307
Not employed 69.84 17.86 178
Prepared by Castle Worldwide Psychometrician James Penny, Ph.D.
Scores from participants employed in each industry were compared with the “not employed” referent
group. Scores did not differ by industry.
Field Test Data Analyses
Criterion-related validation evidence was collected for each predictor and for the assessment. Table
36 shows the correlations between the assessment score and its corresponding supervisor skill
ratings. As expected, higher scores were associated with higher ratings on the respective skills.
Table 36: WIN Essential Soft Skills Assessment | Correlations
N Correlation Sig (p)
232 0.251 0.000
a Correlated with an average rating across five skills: Cooperate, Resolve Conflict, Solve Problems, Take Responsibility for Learning, and Observe Critically
Prepared by Castle Worldwide Psychometrician James Penny, Ph.D.
The following table presents criterion-related validity results for the assessment. When using high-
stakes tests (e.g., for selection purposes), it is also important to examine their effects in the best
possible light; that is, under conditions in which the test will perform ideally. Although it is common
practice to use supervisor ratings of performance as a criterion variable, supervisor ratings always
entail some amount of error or imperfection. For example, supervisors cannot see every aspect of
performance, and they are often susceptible to “rating errors” when evaluating employees (e.g.,
rating the same across competencies, rating everyone highly). Thus, we also corrected the criterion-
related validity estimates for unreliability in the criterion measure. The Corrected R was .54,
suggesting that under ideal conditions the full battery explains 29.1% of the variance in the criterion.
Table 37: WIN Essential Soft Skills Assessment | Criterion-Related Validity Results
Model   Instrument   R (a)   R Square (b)   Sig (p)
2       ESSA         .327    .107           .000
Corrected R (c): 0.540      Validity coefficient: 0.291
(a) R indicates the correlation between the assessments and the criterion. The value of a correlation is between -1 and +1. The farther away a correlation coefficient is from 0, the stronger the relationship between the two (or more) variables. The R includes all four assessments, with the assessment listed being the first assessment to be entered into the regression equation. The criterion consists of the supervisors' aggregated average rating across nine skills.
(b) R Square indicates how much of entry-level work readiness is accounted for or explained by the assessment.
(c) R is corrected for the unreliability of the criterion. The estimated reliability for single judgment ratings by supervisors is .52. The formula for correcting error in criterion measurement is: Corrected R = Rxy / √Ryy, where Rxy is the correlation between the predictors and the criterion and Ryy is the reliability of the criterion. (Hunter & Schmidt, 2004; Oswald & Converse, 2005; Viswesvaran, 1996) Prepared by Castle Worldwide Psychometrician James Penny, Ph.D.
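The correction itself is straightforward to compute; in the sketch below the value of Rxy is hypothetical and is not taken from the table above.

```python
# Correction of a validity coefficient for unreliability in the criterion:
# corrected R = Rxy / sqrt(Ryy).
from math import sqrt

def corrected_validity(r_xy, r_yy):
    return r_xy / sqrt(r_yy)

# Hypothetical example using the reported criterion reliability of .52: an observed
# Rxy of .39 would yield a corrected R of about .54 (explaining about 29% of variance).
r_c = corrected_validity(r_xy=0.39, r_yy=0.52)
print(round(r_c, 2), round(r_c**2, 2))
```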
Item statistics were computed separately for best and worst choices (i.e., as two separate 55-item
tests). IRT (Rasch) analysis was conducted. An item was dropped if either its best or its worst
question was a poor discriminator, or if it was too easy or too difficult. In addition, it was necessary
to keep a balanced number of items measuring each of the five domains; therefore, some items that
performed better than others were nonetheless dropped to maintain that balance.
Table 38: Item Selection
WIN Essential Soft Skills Assessment Item Selection
Items Field Tested   Reliability   Items Recommended Dropped   Items Included in Final Version   Reliability
55                   .90           15                          40                                .92
Prepared by Castle Worldwide Psychometrician James Penny, Ph.D.
Scoring Methodology
Each WIN Essential Soft Skills Assessment item contains two linked questions, each scored
dichotomously (0 for incorrect, 1 for correct). The first of the two linked questions asks the
examinee to identify the best response to the presented situation; the second asks the examinee to
identify the worst response. The scores on the two linked questions are added together to create the
score for the situation, which can be 0, 1, or 2 points. The sum of the situation scores is the total
score for the assessment. The passing score is 48.
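A minimal sketch of this scoring logic follows; the data structures and function names are illustrative and are not those of the WIN delivery platform.

```python
# Illustrative situation-level scoring: 1 point for the correct "best" choice plus
# 1 point for the correct "worst" choice, summed across 40 situations (80 points).
PASSING_SCORE = 48

def score_situation(best_key, worst_key, best_response, worst_response):
    return int(best_response == best_key) + int(worst_response == worst_key)

def score_assessment(keys, responses):
    """keys and responses are lists of (best, worst) pairs, one per situation."""
    total = sum(score_situation(bk, wk, br, wr)
                for (bk, wk), (br, wr) in zip(keys, responses))
    return {"raw_score": total, "passed": total >= PASSING_SCORE}
```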
The passing point was determined using a modified Angoff method (1971), a well-researched and
widely used method for setting criterion-referenced cut scores. Subject matter experts were asked
to take the assessment and then predict the proportion of minimally-qualified candidates who would
likely answer each item correctly. The panel consisted of educators, workforce development experts,
and employers.
After the Angoff study, the assessment was given to approximately 100 examinees. The score
distributions of the cohort groups were reviewed by field reviewers. Estimates for each question
were examined, and outliers (estimates greater than 1.5 standard deviations from the mean) were
deleted.
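A sketch of such a computation, using the outlier rule described above, follows; the panel ratings are hypothetical and the result is not the operational standard-setting analysis.

```python
# Illustrative modified-Angoff cut score: average the judges' proportion-correct
# estimates per question after dropping estimates more than 1.5 SD from the mean,
# then scale the sum of expectations to the total points available.
import numpy as np

def angoff_cut(ratings, max_points=80):
    """ratings: judges x questions matrix of proportion-correct estimates (0-1)."""
    kept_means = []
    for question in ratings.T:
        m, s = question.mean(), question.std(ddof=1)
        kept = question[np.abs(question - m) <= 1.5 * s] if s > 0 else question
        kept_means.append(kept.mean())
    return float(np.sum(kept_means) / ratings.shape[1] * max_points)

# Hypothetical panel: 12 judges rating 80 questions with estimates near 0.6
rng = np.random.default_rng(2)
panel = np.clip(rng.normal(0.6, 0.12, size=(12, 80)), 0, 1)
print(round(angoff_cut(panel)))   # a cut in the neighborhood of 48 for these ratings
```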
Table 39: WIN Essential Soft Skills Assessment | Scoring Matrix
WIN Essential Soft Skills Assessment Scoring Matrix
Number Correct
ESSA Scale Score
Number Correct
ESSA Scale Score
0 0 41 58 1 0 42 60 2 0 43 61 3 0 44 63 4 0 45 65 5 0 46 66 6 0 47 68 7 1 48 70 8 3 49 71 9 5 50 73 10 6 51 75 11 8 52 76 12 9 53 78 13 11 54 80 14 13 55 81
15 15 56 83 16 16 57 85 17 18 58 86 18 20 59 88 19 21 60 90 20 23 61 91 21 25 62 93 22 26 63 95 23 28 64 96 24 30 65 98 25 31 66 100 26 33 67 100 27 35 68 100 28 36 69 100 29 38 70 100 30 40 71 100 31 41 72 100 32 43 73 100 33 45 74 100 34 46 75 100 35 48 76 100 36 50 77 100 37 51 78 100 38 53 79 100 39 55 80 100 40 56
Prepared by Castle Worldwide Psychometrician James Penny, Ph.D.
Ongoing Analyses and Development
Ongoing review of the relevance and rigor of the WIN Essential Soft Skills Assessment, including
the standards / learning objectives and scoring methodology, is substantively and continuously
informed by current industry research, employers and customer feedback.
Live item statistics are continually monitored to ensure optimum performance of the assessment
instrument as a whole and of individual items.
A comprehensive refresh and replacement of items is currently underway with new items projected
to be ready for field testing in summer 2018 and new equated forms are projected for release by
winter 2018-19.
References
Frary, R.B. (1995). More multiple-choice item writing do's and don'ts. Practical Assessment, Research & Evaluation, 4(11). Retrieved September 19, 2006 from http://PAREonline.net/getvn.asp?v=4&n=11
Gunning, T.G. (2003). The role of readability in today's classrooms. Topics in Language Disorders, 23(3), 175-189.
Haladyna, T. (2004). Developing and validating multiple-choice test items. Mahwah, NJ: Lawrence Erlbaum.
Haladyna, T. & Downing, S. (1989a). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.
Haladyna, T. & Downing, S. (1989b). Validation of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.
Haladyna, T., Downing, S. & Rodriguez, M. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.
Johnstone, C.J. (2003). Improving validity of large-scale tests: Universal design and student performance (Technical Report 37). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved August 7, 2006 from http://education.umn.edu/NCEO/OnlinePubs/Technical37.htm
Kehoe, J. (1995). Writing multiple-choice test items. Practical Assessment, Research & Evaluation, 4(9). Retrieved September 19, 2006 from http://PAREonline.net/getvn.asp?v=4&n=9
Knecht, B. (2004). Accessibility regulations and a universal design philosophy inspire the design process instead of stifling creativity: A climate of access pushes architects to be inventive. Adaptive Environments. Retrieved January 27, 2007 from http://www.adaptenv.org/index.php?option=Resource&articleid=356&topicid=28
Koenke, K. (1971). Another practical note on readability formulas. Journal of Reading, 15(3), 203-208.
Mace, R. (1998). A perspective on universal design. An edited excerpt of a presentation at Designing for the 21st Century: An International Conference on Universal Design. Retrieved January 29, 2007 from http://www.adaptenv.org/index.php?option=Resource&articleid=156&topicid=28
Nagle, R. (2010, January). Hiring, retention and training: Employers' perspective on trade and soft skills in South Carolina. Columbia, SC: University of South Carolina.
Pisha, B. & Coyne, P. (2001). Smart from the start: The promise of universal design for learning. Remedial and Special Education, 22(4), 197-203.
Rivera, C., Stansfield, C.W., Scialdone, L. & Sharkey, M. (2000). An analysis of state policies for the inclusion and accommodation of English language learners in state assessment programs during 1998-1999. Arlington, VA: The George Washington University Center for Equity and Excellence in Education.
Stansfield, C.W. & Bowles, M. (2006). Study 2: Test translation and state assessment policies for English language learners. In C. Rivera & E. Collum (Eds.), State assessment policy and practice for English language learners: A national perspective. Mahwah, NJ: Lawrence Erlbaum.
Thompson, S.J., Johnstone, C.J. & Thurlow, M.L. (2002). Universal design applied to large-scale assessments (Synthesis Report 44). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
WIN Assessments Technical Manual | v5.2018 55
Appendix A
WIN Ready to Work Assessments | ACT WorkKeys® Alignment
Table A-1: WIN Ready to Work Assessments | ACT WorkKeys® Alignment
ACT WorkKeys® Skill Area and Level | ACT WorkKeys® Objectives | WIN Ready to Work Assessment Objectives
Applied Mathematics Level 3
● Solve problems that require a single type of mathematics operation
● Change numbers from one form to another
● Convert simple money and time units
● Solve problems that require a single type of mathematical operation (addition, subtraction, multiplication, and division) using whole numbers
● Add or subtract negative numbers
● Change numbers from one form to another using whole numbers, fractions, decimals, or percentages
● Convert simple money and time units (e.g., hours to minutes)
Applied Math Level 4
● Put the information in the right order before they perform calculations.
● Solve problems that require one or two operations
● Figure out averages, simple ratios, simple proportions, or rates using whole numbers and decimals
● Add commonly known fractions, decimals, or percentages
● Add three fractions that share a common denominator
● Multiply a mixed number by a whole number or decimal
● Solve problems that require one or two operations
● Multiply negative numbers
● Calculate averages, simple ratios, simple proportions, or rates using whole numbers or decimals
● Add up to three fractions that share a common denominator
● Add commonly known fractions, decimals, or percentages (e.g., ½, .75, 25%)
● Multiply a mixed number by a whole number or decimal
● Put information in the right order before performing calculations
Applied Math Level 5
● Decide what information, calculations, or unit conversions to use to find the answer to a problem
● Calculate perimeters and areas of basic shapes
● Look up a formula and change from one unit to another in a single step within a system of measurement or between systems of measurement
● Calculate using mixed units
● Divide negative numbers
● Calculate percent discounts or markups
● Identify the best deal by doing one- and two-step calculations
● Decide what information, calculations, or unit conversions to use to solve the problem
● Look up a formula and perform single-step conversions within or between systems of measurement
● Calculate using mixed units (e.g., 3.5 hours and 4 hours 30 minutes)
● Divide negative numbers
● Find the best deal using one- and two-step calculations and then comparing results
● Calculate perimeters and areas of basic shapes (rectangles and circles)
● Calculate percent discounts or markups
Applied Math Level 6
● Use fractions, negative numbers, ratios, percentages, or mixed numbers
● Rearrange a formula before solving a problem
● Look up and use two formulas to change from one unit to another unit within the same system of measurement
● Look up and use two formulas to change from one unit in one system of measurement to a unit in another system of measurement
● Find the area of basic shapes (rectangles and circles)
● Find the volume of rectangular solids
● Find the best deal and use the result for another calculation
● Find mistakes in Levels 3, 4, and 5 problems
● Use fractions, negative numbers, ratios, percentages, or mixed numbers
● Rearrange a formula before solving a problem
● Use two formulas to change from one unit to another within the same system of measurement
● Use two formulas to change from one unit in one system of measurement to a unit in another system of measurement
● Find mistakes in questions that belong at Levels 3, 4, and 5
● Find the best deal and use the result for another calculation
● Find areas of basic shapes when it may be necessary to rearrange the formula, convert units of measurement in the calculations, or use the result in further calculations
● Find the volume of rectangular solids
● Calculate multiple rates
Applied Math Level 7
● Solve problems that include nonlinear functions (such as rate of change) and/or that involve more than one unknown
● Convert between systems of measurement that involve fractions, mixed numbers, decimals, and/or percentages
● Calculate volumes of spheres, cylinders, or cones
● Calculate multiple areas and volumes
● Set up and manipulate complex ratios or proportions
● Find the best deal when they have several choices
● Find mistakes in Level 6 problems
● Apply basic statistical concepts
● Solve problems that include nonlinear functions and/or that involve more than one unknown
● Find mistakes in Level 6 questions
● Convert between systems of measurement that involve fractions, mixed numbers, decimals, and/or percentages
● Calculate multiple areas and volumes of spheres, cylinders, or cones
● Set up and manipulate complex ratios or proportions
● Find the best deal when there are several choices
● Apply basic statistical concepts
Reading for Information Level 3
● Pick out the main ideas and clearly stated details
● Choose the correct meaning of a word when the word is clearly defined in the reading
● Choose the correct meaning of common everyday and workplace words
● Choose when to perform each step in a short series of steps
● Apply instructions to a situation that is the same as the one they are reading about
● Identify main ideas and clearly stated details
● Choose the correct meaning of a word that is clearly defined in the reading
● Choose the correct meaning of common, everyday workplace words
● Choose when to perform each step in a short series of steps
● Apply instructions to a situation that is the same as the one in the reading materials
Reading for Information Level 4
● Identify important details that may not be clearly stated
● Use the reading material to figure out the meaning of words that are not defined for them
● Apply instructions with several steps to a situation that is the same as the situation in the reading materials
● Choose what to do when changing conditions call for a different action
● Recognize cause-effect relationships
● Identify important details that may not be clearly stated
● Use the reading material to figure out the meaning of words that are not defined
● Apply instructions with several steps to a situation that is the same as the situation in the reading materials
● Choose what to do when changing conditions call for a different action (follow directions that include "if-then" statements)
Reading for Information Level 5
● Figure out the correct meaning of a word based on how the word is used
● Identify the correct meaning of an acronym that is defined in the document
● Identify the paraphrased definition of a technical term or jargon that is defined in the document
● Apply technical terms and jargon and relate them to stated situations
● Apply straightforward instructions to a new situation that is similar to the one described in the material
● Apply complex instructions that include conditionals to situations described in the materials
● Figure out the correct meaning of a word based on how the word is used
● Identify the correct meaning of an acronym that is defined in the document
● Identify the paraphrased definition of a technical term or jargon that is defined in the document
● Apply technical terms and jargon and relate them to stated situations
● Apply straightforward instructions to a new situation that is similar to the one described in the material
● Apply complex instructions that include conditionals to situations described in the materials
Reading for Information Level 6
● Identify implied details
● Use technical terms and jargon in new situations
● Figure out the less common meaning of a word based on the context
● Apply complicated instructions to new situations
● Figure out the principles behind policies, rules, and procedures
● Apply general principles from the materials to similar and new situations
● Explain the rationale behind a procedure, policy, or communication
● Identify implied details
● Use technical terms and jargon in new situations
● Figure out the less common meaning of a word based on the context
● Apply complicated instructions to new situations
● Figure out the principles behind policies, rules, and procedures
● Apply general principles from the materials to similar and new situations
● Explain the rationale behind a procedure, policy, or communication
Reading for Information Level 7
● Figure out the definitions of difficult, uncommon words based on how they are used
● Figure out the meaning of jargon or technical terms based on how they are used
● Figure out the general principles behind the policies and apply them to situations that are quite different from any described in the materials
● Figure out the definitions of difficult, uncommon words based on how they are used
● Figure out the meaning of jargon or technical terms based on how they are used
● Figure out the general principles behind policies and apply them to situations that are quite different from any described in the materials
Locating Information Level 3
● Find one or two pieces of information in a graphic
● Fill in one or two pieces of information that are missing from a graphic
● Find one or two pieces of information in a graphic
● Fill in one or two pieces of information that are missing from a graphic
Locating Information Level 4
● Find several pieces of information in one or two graphics
● Understand how graphics are related to each other
● Summarize information from one or two straightforward graphics
● Identify trends shown in one or two straightforward graphics
● Compare information and trends shown in one or two straightforward graphics
● Find several pieces of information in one or two graphics
● Understand how graphics are related to each other
● Summarize information from one or two straightforward graphics
● Identify trends shown in one or two straightforward graphics
● Compare information and trends shown in one or two straightforward graphics
Locating Information Level 5
● Sort through distracting information
● Summarize information from one or more detailed graphics
● Identify trends shown in one or more detailed or complicated graphics
● Sort through distracting information
● Summarize information from one or more detailed graphics
● Identify trends shown in one or more detailed or complicated graphics
● Compare information and trends from one or more complicated graphics
● Compare information and trends from one or more complicated graphics
Locating Information Level 6
● Draw conclusions based on one complicated graphic or multiple related graphics
● Apply information from one or more complicated graphics to specific situations
● Use the information to make decisions
● Sort through distracting information
● Summarize information from one or more detailed graphics
● Identify trends shown in one or more detailed or complicated graphics
● Compare information and trends from one or more complicated graphics
Appendix B
WIN Ready to Work Assessments | Grade Equivalency
Applied Mathematics | Level - Grade Equivalencies
To determine an approximate grade equivalency (GE) for each level of the WIN Ready to Work
Courseware posttests and proctored WIN Ready to Work Assessment for Applied Mathematics, the
objectives and corresponding test items were compared to the Common Core State Standards
(CCSS) for Mathematics. The goal was to identify the CCSS standard most closely aligned to the
items for each WIN objective. The alignment relationships were used to estimate the grade or grade
span most closely associated with content at each Applied Mathematics level.
Table B-1 shows the results of this study. The CCSS mathematics codes come from CCSS
documentation. The first digit indicates the grade level. The second set of characters indicates the
domain, such as “Operations and Algebraic Thinking” (OA) or “Measurement and Data” (MD).
Figure B-1 provides a key to reading the CCSS codes.
Figure B-1: Key to K–8 Common Core Mathematics Codes
4.OA.1.a = Grade (4) . Domain (OA) . Standard (1) . Sub-standard (a)
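For readers who work with these codes programmatically, the short sketch below (Python; the parse_ccss_code helper is hypothetical and not part of the CCSS or WIN tooling) splits a code into the fields named in Figure B-1.

```python
# Illustrative helper only: split a CCSS mathematics code into the Figure B-1 fields.
def parse_ccss_code(code: str) -> dict:
    parts = code.split(".")
    fields = {"grade": parts[0], "domain": parts[1]}   # e.g. grade "4", domain "OA"
    if len(parts) > 2:
        fields["standard"] = parts[2]                  # e.g. "1"
    if len(parts) > 3:
        fields["sub_standard"] = parts[3]              # e.g. "a"
    return fields

print(parse_ccss_code("4.OA.1.a"))  # {'grade': '4', 'domain': 'OA', 'standard': '1', 'sub_standard': 'a'}
print(parse_ccss_code("7.NS.3"))    # {'grade': '7', 'domain': 'NS', 'standard': '3'}
```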
Table B-1: Applied Mathematics Objectives Compared to CCSS for Mathematics
WIN Level | “Best Fit” GE | WIN Applied Mathematics Objective | Corresponding CCSS for Mathematics
3 | Grade 4
Solve problems that require a single type of mathematics operation (addition, subtraction, multiplication, and division) using whole numbers
4.OA.3
Add or subtract negative numbers 7.NS.1.d Change numbers from one form to another using whole numbers, fractions, decimals, or percentages
4.NF.5 percentages: grade 6
Convert simple money and time units (e.g., hours to minutes) 4.MD.2
4 | Grades 4-7
Solve problems that require one or two operations 4.OA.1, 2, 3 Multiply negative numbers 7.NS.3 Calculate averages, simple ratios, simple proportions, or rates using whole numbers and decimals
6.RP.1, 2, 3
Add commonly known fractions, decimals, or percentages (e.g., 1/2, .75, 25%)
4.NF.2; 5.NF; 5.NO.7
Add up to three fractions that share a common denominator 4.NF.3.d Multiply a mixed number by a whole number or decimal 5.NF.6; 6.NS.3 Put the information in the right order before performing calculations
No corresponding standard
5 | Grades 5-7
Decide what information, calculations, or unit conversions to use to solve the problem
5.MD.1; 5.OA.1
Calculate using mixed units (e.g., 3.5 hours and 4 hours 30 minutes)
No corresponding standard
Divide negative numbers 7.NS.3 Find the best deal using one- and two-step calculations and then comparing results
6.RP.2, 3
Calculate perimeters and areas of basic shapes (rectangles and circles)
4.MD.4 (rectangles) 7.G.4 (circles)
Calculate percent discounts or markups 7.RP.3 Look up a formula and perform single-step conversions within or between systems of measurement
5.MD.1 (within systems) 6.RP.3.d (between systems)
6 | Grades 6-7
Use fractions, negative numbers, ratios, percentages, or mixed numbers
7.RP.3; 7.NS.3
Rearrange a formula before solving a problem 6.EE.2 Use two formulas to change from one unit to another within the same system of measurement
Relates to 6.EE and 7.EE
Use two formulas to change from one unit in one system of measurement to a unit in another system of measurement
Relates to 6.EE and 7.EE
Find mistakes in questions that belong at Levels 3, 4, and 5 No corresponding standard
Find the best deal and use the result for another calculation 6.RP.2, 3 Find areas of basic shapes when it may be necessary to rearrange the formula, convert units of measurement in the calculations, or use the result in further calculations
Relates to 6.EE and 7.EE
Find the volume of rectangular solids 6.G.2 Calculate multiple rates 6.RP.3
7 | Grades 7-8
Solve problems that include nonlinear functions and/or that involve more than one unknown
8.EE.8 non-linear equations: high school algebra
Find mistakes in Level 6 questions No corresponding standard
Convert between systems of measurement that involve fractions, mixed numbers, decimals, and/or percentages
6.EE; 7.EE
Calculate multiple areas and volumes of spheres, cylinders, or cones
8.G.9
Set up and manipulate complex ratios or proportions 7.G.1; 7.RP.3 Find the best deal when there are several choices 6.RP.2,3 Apply basic statistical concepts (compute percent change) 7.RP.3
Appendix C
WIN Ready to Work Assessments | Grade Equivalency
Reading for Information | Level – Lexile Measures
To determine an approximate grade equivalency (GE) for each level of the WIN Ready to Work
Courseware posttests and proctored WIN Ready to Work Assessment for Reading for Information,
the objectives and corresponding test items were compared to Lexile measures.
MetaMetrics® developed Lexiles to measure both an individual’s reading ability and the complexity
of a text. The measure was designed to match readers with appropriate books and to monitor a
reader's growth in reading ability over time. The Lexile measure is shown as a number with an "L"
after it — 950L is 950 Lexile. Higher Lexile measures represent a higher level of reading ability or a
greater text complexity. A Lexile measure can range from below 200L for beginning readers to above
1600L for advanced readers.
The measure was not intended to translate to specific grades. However, to understand and describe
the typical Lexile measures of texts and students at specific grades, MetaMetrics® studied the
ranges of Lexile text and reader measures at specific grades. Due to the wide range of student
reading abilities and texts, there is considerable overlap between the grades. The studies considered
only the middle 50% of readers and texts, so the ranges for each grade do not include the top and
bottom 25% of readers and texts.
MetaMetrics® also studied the ranges of Lexile text measures for sample texts provided in the 2012
Common Core State Standards (CCSS) for English Language Arts. The CCSS texts represent the
cognitive demand of texts that students should be reading at each grade to be “college and career
ready” by the end of grade 12. Table C-1 shows the Lexile ranges that resulted from these studies.
Table C-1 | Typical Reader and Text Lexile Measures by Grade
Grade | Reader Measures* (Range / Mean) | Classroom Text Measures** (Range / Mean) | 2012 CCSS Text Measures (Range / Mean)
1 | Up to 300L / n/a | 230L to 420L / 325L | 190L to 530L / 360L
2 | 140L to 500L / 320L | 450L to 570L / 510L | 420L to 650L / 535L
3 | 330L to 700L / 515L | 600L to 730L / 665L | 520L to 820L / 670L
4 | 445L to 810L / 628L | 640L to 780L / 710L | 740L to 940L / 840L
5 | 565L to 910L / 738L | 730L to 850L / 790L | 830L to 1010L / 920L
6 | 665L to 1000L / 833L | 860L to 920L / 890L | 925L to 1070L / 998L
7 | 735L to 1065L / 900L | 880L to 960L / 920L | 970L to 1120L / 1045L
8 | 805L to 1100L / 953L | 900L to 1010L / 955L | 1010L to 1185L / 1098L
9 | 855L to 1165L / 1010L | 960L to 1110L / 1035L | 1050L to 1260L / 1155L
10 | 905L to 1195L / 1050L | 920L to 1120L / 1020L | 1080L to 1335L / 1208L
11-12 | 940L to 1210L / 1075L | 1070L to 1220L / 1145L | 1185L to 1385L / 1286L
* Mid-year, 25th percentile to 75th percentile
** 25th percentile to 75th percentile
NOTE: WIN added the mean values for each grade to create the grade links shown in Table C-2.
Table C-2 shows the Lexile measures for passages used on the WIN end-of-course Reading for Information assessments. The grade ranges represent the span of grades where MetaMetrics® typically found texts or readers with the same Lexile measures as the WIN passages. The “best fit” grade was determined by comparing the mean Lexile measure of the passages with the mean Lexile measure of the students or texts at each grade and identifying the closest grade or grade span.
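The sketch below (Python) illustrates this matching idea in simplified form, using the mean mid-year reader measures from Table C-1. The best_fit_grade helper and the single-nearest-grade rule are illustrative assumptions rather than WIN's exact procedure, and averaging the rounded passage values may differ slightly from the mean Lexile measures reported in Table C-2.

```python
# Mean mid-year reader measures from Table C-1 (grades 11-12 keyed as 11); grade 1 is
# omitted because Table C-1 reports no reader mean for it.
READER_MEAN_LEXILE = {2: 320, 3: 515, 4: 628, 5: 738, 6: 833,
                      7: 900, 8: 953, 9: 1010, 10: 1050, 11: 1075}

def best_fit_grade(passage_lexiles):
    """Return the mean passage Lexile and the grade whose mean reader measure is closest."""
    mean_lexile = sum(passage_lexiles) / len(passage_lexiles)
    grade = min(READER_MEAN_LEXILE, key=lambda g: abs(READER_MEAN_LEXILE[g] - mean_lexile))
    return mean_lexile, grade

# Level 3 Reading for Information passage measures from Table C-2.
level_3_passages = [1060, 940, 910, 950, 1060, 1060]
mean_lexile, grade = best_fit_grade(level_3_passages)
print(f"Mean passage measure: {mean_lexile:.0f}L; closest reader-mean grade: {grade}")
```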
Table C-2: Reading for Information | Passage Lexile Measures

Level 3 passages: Community service memo (1060L); Security badge memo (940L); Culture committee memo (910L); Business team memo (950L); Evacuation memo (1060L); Retirement plan memo (1060L). Mean Lexile measure for Level 3: 985L.
Grade based on Lexile: Readers range 6-12 (best fit 8 or 9); Classroom texts range 6-10 (best fit 8 or 9); CCSS texts range 5-8 (best fit 6).

Level 4 passages: Changing a toner cartridge (1080L); Family and medical leave (1060L); Vacation and factory shut down (1120L); Resumes (1030L); Flu shots (1020L); Hard drive diagnosis (920L). Mean Lexile measure for Level 4: 1038L.
Grade based on Lexile: Readers range 7-12 (best fit 9 or 10); Classroom texts range 6-10 (best fit 9 or 10); CCSS texts range 5-9 (best fit 7).

Level 5 passages: Conducting an inventory (1240L); Returning merchandise (1050L); TV jargon (950L); Buyer Beware (960L); Store Closing (1230L); Flexible schedules (1170L). Mean Lexile measure for Level 5: 1100L.
Grade based on Lexile: Readers range 7-12 (best fit 11-12); Classroom texts range 7-12 (best fit 11-12); CCSS texts range 5-12 (best fit 8).

Level 6 passages: Museum insurance (1020L); MRSA in healthcare (1220L); Safety for hearing-impaired (1190L); Grant proposals (1260L); Carpal Tunnel (1230L); Inventories (1060L). Mean Lexile measure for Level 6: 1163L.
Grade based on Lexile: Readers range 7-12 (best fit 11-12); Classroom texts range 9-12 (best fit 11-12); CCSS texts range 7-12 (best fit 9).

Level 7 passages: Workplace safety (PtD) (1430L); Managing workplace stress (1250L); Whistleblower protection (1370L); Computer Use (1450L); Lab safety (1160L). Mean Lexile measure for Level 7: 1332L.
Grade based on Lexile: Readers: post-secondary; Classroom texts: post-secondary; CCSS texts range 8 to post-secondary (best fit post-secondary).
Appendix D
WIN Ready to Work Assessments | Field Test Form Maps
Table D-1: Applied Mathematics | Field Test Form Map
Accession Number | Graphic (G) or Passage | Level (3-7) | Item Sequence | Learning Objective | Forms: A B C a b c
E.M.3.0.1.1 3 1 1 1 E.M.3.0.2.1 3 2 1 1 E.M.3.0.3.1 3 3 1 1 E.M.3.0.4.2 G.E.M.3.0.4.0 3 4 2 1 E.M.3.0.5.2 3 5 2 1 E.M.3.0.6.2 3 6 2 1 E.M.3.0.7.2 3 7 2 1 1 1 E.M.3.0.8.3 3 8 3 1 E.M.3.0.9.3 3 9 3 1 E.M.3.0.10.3 3 10 3 1 E.M.3.0.11.4 3 11 4 1 E.M.3.0.12.4 3 12 4 1 E.M.3.0.13.4 3 13 4 1 E.M.4.0.1.5 4 1 5 1 E.M.4.0.2.5 4 2 5 1 E.M.4.0.3.5 4 3 5 1 E.M.4.0.4.6 4 4 6 1 E.M.4.0.5.6 4 5 6 1 E.M.4.0.6.6 4 6 6 1 E.M.4.0.7.7 G.E.M.4.0.7.0 4 7 7 1 E.M.4.0.8.7 4 8 7 1 E.M.4.0.9.7 4 9 7 1 E.M.4.0.10.8 4 10 8 1 E.M.4.0.11.8 4 11 8 1 E.M.4.0.12.8 4 12 8 1 E.M.4.0.13.9 4 13 9 1 E.M.4.0.14.9 4 14 9 1 E.M.4.0.15.9 4 15 9 1 E.M.4.0.16.10 4 16 10 1 E.M.4.0.17.10 4 17 10 1 E.M.4.0.18.10 4 18 10 1 E.M.4.0.19.11 G.E.M.4.0.19.0 4 19 11 1 E.M.4.0.20.11 G.E.M.4.0.20.0 4 20 11 1 E.M.4.0.21.11 G.E.M.4.0.21.0 4 21 11 1 E.M.5.0.1.12 G.E.M.5.0.1.0 5 1 12 1 E.M.5.0.2.12 G.E.M.5.0.2.0 5 2 12 1 E.M.5.0.3.12 G.E.M.5.0.3.0 5 3 12 1 E.M.5.0.4.13 5 4 13 1 E.M.5.0.5.13 5 5 13 1
E.M.5.0.6.13 5 6 13 1 E.M.5.0.7.14 5 7 14 1 E.M.5.0.8.14 G.E.M.5.0.8.0 5 8 14 1 Forms
Accession Number
Graphic (G) or Passage
Level (3-7)
Item Sequence
Learning Objective A B C a b c
E.M.5.0.9.14 G.E.M.5.0.9.0 5 9 14 1 E.M.5.0.10.15 5 10 15 1 E.M.5.0.11.15 5 11 15 1 E.M.5.0.12.15 5 12 15 1 E.M.5.0.13.16 G.E.M.5.0.13.0 5 13 16 1 E.M.5.0.14.16 G.E.M.5.0.14.0 5 14 16 1 E.M.5.0.15.16 G.E.M.5.0.15.0 5 15 16 1 E.M.5.0.16.17 G.E.M.5.0.16.0 5 16 17 1 E.M.5.0.17.17 5 17 17 1 E.M.5.0.18.17 5 18 17 1 E.M.5.0.19.18 5 19 18 1 E.M.5.0.20.18 5 20 18 1 E.M.5.0.21.18 5 21 18 1 E.M.6.0.1.19 6 1 19 1 E.M.6.0.2.19 G.E.M.6.0.2.0 6 2 19 1 E.M.6.0.3.19 6 3 19 1 E.M.6.0.4.19 6 4 19 1 1 1 E.M.6.0.5.20 6 5 20 1 E.M.6.0.6.20 6 6 20 1 E.M.6.0.7.20 6 7 20 1 E.M.6.0.8.21 6 8 21 1 E.M.6.0.9.21 G.E.M.6.0.9.0 6 9 21 1 E.M.6.0.10.21 G.E.M.6.0.10.0 6 10 21 1 E.M.6.0.11.21 6 11 21 1 E.M.6.0.12.21 6 12 21 1 E.M.6.0.13.22 6 13 22 1 E.M.6.0.14.22 6 14 22 1 E.M.6.0.15.22 6 15 22 1 E.M.6.0.16.22 6 16 22 1 E.M.6.0.17.23 G.E.M.6.0.17.0 6 17 23 1 E.M.6.0.18.23 G.E.M.6.0.18.0 6 18 23 1 E.M.6.0.19.23 G.E.M.6.0.19.0 6 19 23 1 E.M.6.0.20.24 6 20 24 1 E.M.6.0.21.24 6 21 24 1 E.M.6.0.22.24 G.E.M.6.0.22.0 6 22 24 1 E.M.6.0.23.25 G.E.M.6.0.23.0 6 23 25 1 E.M.6.0.24.25 G.E.M.6.0.24.0 6 24 25 1 E.M.6.0.25.25 G.E.M.6.0.25.0 6 25 25 1 E.M.6.0.26.26 6 26 26 1 E.M.6.0.27.26 6 27 26 1 E.M.6.0.28.26 6 28 26 1 E.M.6.0.29.27 6 29 27 1 E.M.6.0.30.27 6 30 27 1 E.M.6.0.31.27 6 31 27 1
E.M.7.0.1.28 7 1 28 1 E.M.7.0.2.28 G.E.M.7.0.2.0 7 2 28 1 E.M.7.0.3.28 7 3 28 1 E.M.7.0.4.29 7 4 29 1 E.M.7.0.5.29 G.E.M.7.0.5.0 7 5 29 1 Forms
Accession Number
Graphic (G) or Passage
Level (3-7)
Item Sequence
Learning Objective A B C a b c
E.M.7.0.6.29 G.E.M.7.0.6.0 7 6 29 1 E.M.7.0.7.30 7 7 30 1 E.M.7.0.8.30 7 8 30 1 E.M.7.0.9.30 7 9 30 1 E.M.7.0.10.30 7 10 30 1 E.M.7.0.11.31 7 11 31 1 E.M.7.0.12.31 G.E.M.7.0.12.0 7 12 31 1 E.M.7.0.13.31 7 13 31 1 E.M.7.0.14.32 7 14 32 1 E.M.7.0.15.32 G.E.M.7.0.15.0 7 15 32 1 1 E.M.7.0.16.33 7 16 33 1 E.M.7.0.17.33 G.E.M.7.0.17.0 7 17 33 1 E.M.7.0.18.33 7 18 33 1 E.M.7.0.19.34 7 19 34 1 E.M.7.0.20.34 7 20 34 1 E.M.7.0.21.34 7 21 34 1 30 30 30 8 7 7
Table D-2: Reading for Information | Field Test Form Map
Accession Number | Graphic (G) or Passage | Level (3-7) | Item Sequence | Learning Objective | Forms: A B C D
E.R.3.1.1.1 E.R.3.1.0.0 3 1 1 1 1 E.R.3.1.2.2 E.R.3.1.0.0 3 2 2 1 1 E.R.3.1.3.3 E.R.3.1.0.0 3 3 3 1 1 E.R.3.1.4.4 E.R.3.1.0.0 3 4 4 1 1 E.R.3.1.5.5 E.R.3.1.0.0 3 5 5 1 1 E.R.3.2.1.1 E.R.3.2.0.0 3 1 1 1 E.R.3.2.2.3 E.R.3.2.0.0 3 2 3 1 E.R.3.2.3.1 E.R.3.2.0.0 3 3 1 1 E.R.3.2.4.4 E.R.3.2.0.0 3 4 4 1 E.R.3.2.5.5 E.R.3.2.0.0 3 5 5 1 E.R.3.2.6.2 E.R.3.2.0.0 3 6 2 1 E.R.3.3.1.4 E.R.3.3.0.0 3 1 4 1 E.R.3.3.2.3 E.R.3.3.0.0 3 2 3 1 E.R.3.3.3.1 E.R.3.3.0.0 3 3 1 1 E.R.3.3.4.5 E.R.3.3.0.0 3 4 5 1 E.R.3.3.5.2 E.R.3.3.0.0 3 5 2 1
E.R.4.1.1.6
E.R.4.1.0.0, G.E.R.4.1.0.0.P.A, G.E.R.4.1.0.0.P.B. 4 1 6 1 1
E.R.4.1.2.7
E.R.4.1.0.0, G.E.R.4.1.0.0.P.A, G.E.R.4.1.0.0.P.B. 4 2 7 1 1
E.R.4.1.3.6
E.R.4.1.0.0, G.E.R.4.1.0.0.P.A, G.E.R.4.1.0.0.P.B. 4 3 6 1 1
E.R.4.1.4.5
E.R.4.1.0.0, G.E.R.4.1.0.0.P.A, G.E.R.4.1.0.0.P.B. 4 4 5 1 1
E.R.4.1.5.8
E.R.4.1.0.0, G.E.R.4.1.0.0.P.A, G.E.R.4.1.0.0.P.B. 4 5 8 1 1
E.R.4.2.1.6 E.R.4.2.0.0 4 1 6 1 E.R.4.2.2.8 E.R.4.2.0.0 4 2 8 1 E.R.4.2.3.7 E.R.4.2.0.0 4 3 7 1 E.R.4.2.4.8 E.R.4.2.0.0 4 4 8 1 E.R.4.2.5.6 E.R.4.2.0.0 4 5 6 1 E.R.4.2.6.5 E.R.4.2.0.0 4 6 5 1 E.R.4.3.1.8 E.R.4.3.0.0 4 1 8 1 E.R.4.3.2.6 E.R.4.3.0.0 4 2 6 1 E.R.4.3.3.6 E.R.4.3.0.0 4 3 6 1 E.R.4.3.4.7 E.R.4.3.0.0 4 4 7 1 E.R.4.3.5.5 E.R.4.3.0.0 4 5 5 1 E.R.5.1.1.9 E.R.5.1.0.0 5 1 9 1 1 E.R.5.1.2.10 E.R.5.1.0.0 5 2 10 1 1 E.R.5.1.3.11 E.R.5.1.0.0 5 3 11 1 1 E.R.5.1.4.10 E.R.5.1.0.0 5 4 10 1 1 E.R.5.1.5.12 E.R.5.1.0.0 5 5 12 1 1 E.R.5.1.6.13 E.R.5.1.0.0 5 6 13 1 1 E.R.5.1.7.10 E.R.5.1.0.0 5 7 10 1 1 E.R.5.1.8.14 E.R.5.1.0.0 5 8 14 1 1 E.R.5.2.1.11 E.R.5.2.0.0 5 1 11 1 E.R.5.2.2.10 E.R.5.2.0.0 5 2 10 1 E.R.5.2.3.9 E.R.5.2.0.0 5 3 9 1 E.R.5.2.4.10 E.R.5.2.0.0 5 4 10 1 E.R.5.2.5.12 E.R.5.2.0.0 5 5 12 1 E.R.5.2.6.14 E.R.5.2.0.0 5 6 14 1 E.R.5.2.7.13 E.R.5.2.0.0 5 7 13 1 E.R.5.3.1.10 E.R.5.3.0.0 5 1 10 1 E.R.5.3.2.10 E.R.5.3.0.0 5 2 10 1 E.R.5.3.3.11 E.R.5.3.0.0 5 3 11 1 E.R.5.3.4.9 E.R.5.3.0.0 5 4 9 1 E.R.5.3.5.13 E.R.5.3.0.0 5 5 13 1 E.R.5.3.6.11 E.R.5.3.0.0 5 6 11 1 E.R.5.3.7.12 E.R.5.3.0.0 5 7 12 1
E.R.5.3.8.9 E.R.5.3.0.0 5 8 9 1 E.R.5.3.9.14 E.R.5.3.0.0 5 9 14 1 E.R.6.1.1.15 E.R.6.1.0.0 6 1 15 1 E.R.6.1.2.16 E.R.6.1.0.0 6 2 16 1 E.R.6.1.3.17 E.R.6.1.0.0 6 3 17 1 E.R.6.1.4.17 E.R.6.1.0.0 6 4 17 1 E.R.6.1.5.19 E.R.6.1.0.0 6 5 19 1 E.R.6.1.6.20 E.R.6.1.0.0 6 6 20 1 E.R.6.1.7.19 E.R.6.1.0.0 6 7 19 1 E.R.6.1.8.21 E.R.6.1.0.0 6 8 21 1 Forms
Accession Number
Graphic (G) or Passage
Level (3-7)
Item Sequence
Learning Objective A B C D
E.R.6.2.1.17 E.R.6.2.0.0 6 1 17 1 1 E.R.6.2.2.15 E.R.6.2.0.0 6 2 15 1 1 E.R.6.2.3.15 E.R.6.2.0.0 6 3 15 1 1 E.R.6.2.4.17 E.R.6.2.0.0 6 4 17 1 1 E.R.6.2.5.19 E.R.6.2.0.0 6 5 19 1 1 E.R.6.2.6.21 E.R.6.2.0.0 6 6 21 1 1 E.R.6.2.7.20 E.R.6.2.0.0 6 7 20 1 1 E.R.6.2.8.16 E.R.6.2.0.0 6 8 16 1 1 E.R.6.2.9.19 E.R.6.2.0.0 6 9 19 1 1 E.R.6.3.1.15 E.R.6.3.0.0 6 1 15 1 E.R.6.3.2.16 E.R.6.3.0.0 6 2 16 1 E.R.6.3.3.19 E.R.6.3.0.0 6 3 19 1 E.R.6.3.4.17 E.R.6.3.0.0 6 4 17 1 E.R.6.3.5.19 E.R.6.3.0.0 6 5 19 1 E.R.6.3.6.18 E.R.6.3.0.0 6 6 18 1 E.R.6.3.7.20 E.R.6.3.0.0 6 7 20 1 E.R.6.3.8.17 E.R.6.3.0.0 6 8 17 1 E.R.6.3.9.21 E.R.6.3.0.0 6 9 21 1 E.R.7.1.1.22 E.R.7.1.0.0 7 1 22 1 1 E.R.7.1.2.23 E.R.7.1.0.0 7 2 23 1 1 E.R.7.1.3.24 E.R.7.1.0.0 7 3 24 1 1 E.R.7.1.4.24 E.R.7.1.0.0 7 4 24 1 1 E.R.7.1.5.22 E.R.7.1.0.0 7 5 22 1 1 E.R.7.1.6.22 E.R.7.1.0.0 7 6 22 1 1 E.R.7.2.1.24 E.R.7.2.0.0 7 1 24 1 E.R.7.2.2.22 E.R.7.2.0.0 7 2 22 1 E.R.7.2.3.24 E.R.7.2.0.0 7 3 24 1 E.R.7.2.4.24 E.R.7.2.0.0 7 4 24 1 E.R.7.2.5.23 E.R.7.2.0.0 7 5 23 1 E.R.7.2.6.23 E.R.7.2.0.0 7 6 23 1 E.R.7.3.1.22 E.R.7.3.0.0 7 1 22 1 E.R.7.3.2.24 E.R.7.3.0.0 7 2 24 1 E.R.7.3.3.22 E.R.7.3.0.0 7 3 22 1 E.R.7.3.4.23 E.R.7.3.0.0 7 4 23 1 E.R.7.3.5.22 E.R.7.3.0.0 7 5 22 1 33 33 33 33
Table D-3: Locating Information | Field Test Form Map
Accession Number | Graphic (G) or Passage | Level (3-7) | Item Sequence | Learning Objective | Forms: A B a b
E.L.3.0.1.1 G.E.L.3.0.1.0 3 1 1 1 E.L.3.0.2.1 G.E.L.3.0.2.0 3 2 1 1 E.L.3.0.3.1 G.E.L.3.0.3.0 3 3 1 1 E.L.3.1.4.2 G.E.L.3.1.4.0 3 4 2 1 E.L.3.1.5.2 G.E.L.3.1.4.0 3 5 2 1 E.L.3.2.6.2 G.E.L.3.2.6.0 3 6 2 1 E.L.3.2.7.2 G.E.L.3.2.6.0 3 7 2 1 Forms
Accession Number
Graphic (G) or Passage
Level (3-7)
Item Sequence
Learning Objective A B a b
E.L.3.2.8.2 G.E.L.3.2.6.0, G.E.L.3.2.8.2.A, G.E.L.3.2.8.2.B, G.E.L.3.2.8.2.C, G.E.L.3.2.8.2.D 3 8 2 1
E.L.3.3.9.2 G.E.L.3.3.9.0 3 9 2 1 E.L.3.3.10.2 G.E.L.3.3.9.0 3 10 2 1 E.L.3.4.11.2 G.E.L.3.4.11.0 3 11 2 1 E.L.3.4.12.2 G.E.L.3.4.11.0 3 12 2 1 E.L.4.0.1.3 G.E.L.4.0.1.0
G.E.L.4.0.1.0-A G.E.L.4.0.1.0-B G.E.L.4.0.1.0-C G.E.L.4.0.1.0-D 4 1 3 1
E.L.4.0.2.3 G.E.L.4.0.2.0 G.E.L.4.0.2.0-A G.E.L.4.0.2.0-B G.E.L.4.0.2.0-C G.E.L.4.0.2.0-D 4 2 3 1
E.L.4.0.3.3 G.E.L.4.0.3.0 G.E.L.4.0.3.0-A G.E.L.4.0.3.0-B G.E.L.4.0.3.0-C G.E.L.4.0.3.0-D 4 3 3 1 1
E.L.4.1.4.4 G.E.L.4.1.4.0 4 4 4 1 E.L.4.1.5.4 G.E.L.4.1.4.0 4 5 4 1 E.L.4.1.6.4 G.E.L.4.1.4.0 4 6 4 1 E.L.4.0.7.5 G.E.L.4.0.7.0 4 7 5 1 E.L.4.0.8.5 G.E.L.4.0.8.0 4 8 5 1 E.L.4.0.9.5 G.E.L.4.0.9.0 4 9 5 1 1 E.L.4.0.10.6 G.E.L.4.0.10.0
G.E.L.4.0.10.0-A G.E.L.4.0.10.0-B G.E.L.4.0.10.0-C G.E.L.4.0.10.0-D 4 10 6 1
E.L.4.0.11.6 G.E.L.4.0.11.0 4 11 6 1 E.L.4.0.12.6 G.E.L.4.0.12.0 4 12 6 1 1 E.L.4.0.13.7 G.E.L.4.0.13.0 4 13 7 1
E.L.4.0.14.7 G.E.L.4.0.14.0-A; G.E.L.4.0.14.0-B 4 14 7 1
E.L.4.0.15.7 G.E.L.4.0.15.0 4 15 7 1 E.L.5.1.1.8 G.E.L.5.1.1.0 5 1 8 1 E.L.5.1.2.8 G.E.L.5.1.1.0 5 2 8 1 E.L.5.0.3.8 G.E.L.5.0.3.0 5 3 8 1 E.L.5.2.4.8 G.E.L.5.2.4.0 5 4 8 1 E.L.5.2.5.10 G.E.L.5.2.4.0 5 5 10 1 E.L.5.3.6.10 G.E.L.5.3.6.0
G.E.L.5.0.6.0 5 6 10 1 E.L.5.3.7.11 G.E.L.5.3.6.0 5 7 11 1 E.L.5.3.8.10 G.E.L.5.3.6.0 5 8 10 1 E.L.5.4.9.11 G.E.L.5.4.9.0 5 9 11 1 E.L.5.4.10.9 G.E.L.5.4.9.0 5 10 9 1 Forms
Accession Number
Graphic (G) or Passage
Level (3-7)
Item Sequence
Learning Objective A B a b
E.L.5.4.11.11 G.E.L.5.4.9.0 5 11 11 1 E.L.5.5.12.11 G.E.L.5.5.12.0 5 12 11 1 E.L.5.5.13.11 G.E.L.5.5.12.0 5 13 11 1 E.L.6.6.14.8 G.E.L.6.6.14.0-M 6 14 8 1 E.L.6.6.15.11 G.E.L.6.6.14.0-G 6 15 11 1 E.L.6.6.16.8 G.E.L.6.6.14.0-G
G.E.L.6.6.14.0-M 6 16 8 1 E.L.6.6.17.8 G.E.L.6.6.14.0-G 6 17 8 1 E.L.6.0.18.10 G.E.L.6.0.18.0 6 18 10 1 E.L.6.7.19.8 G.E.L.6.7.19.0 6 19 8 1 E.L.6.7.20.7 G.E.L.6.7.19.0 6 20 7 1 E.L.6.0.21.10 G.E.L.6.0.21.0 6 21 10 1 E.L.6.0.22.9 G.E.L.6.0.22.0 6 22 9 1 E.L.6.0.23.10 G.E.L.6.0.23.0 6 23 10 1 E.L.6.7.24.11 G.E.L.6.7.24.0 6 24 11 1 E.L.6.7.25.9 G.E.L.6.7.24.0 6 25 9 1 E.L.6.7.26.9 G.E.L.6.7.24.0 6 26 9 1 16 16 12 12
Appendix E
WIN Ready to Work Assessments | Field Test Item Statistics
Table E-1: Applied Mathematics | Item Difficulty and Discrimination for All Items
Item number Discrimination Difficulty M.3.0.1.1 0.25* 0.63 M.3.0.2.1 0.19* 0.85 M.3.0.3.1 0.18* 0.91 M.3.0.4.2 0.26* 0.64 M.3.0.5.2 0.46 0.58 M.3.0.6.2 0.06** 0.74 M.3.0.7.2 0.26* 0.68 M.3.0.8.3 0.31 0.71 M.3.0.9.3 0.23* 0.57 M.3.0.10.3 0.26* 0.59 M.3.0.11.4 0.23* 0.81 M.3.0.12.4 0.25* 0.82 M.3.0.13.4 0.27* 0.84 M.4.0.1.5 0.27* 0.63 M.4.0.2.5 0.31 0.77 M.4.0.3.5 0.05** 0.47 M.4.0.4.6 0.31 0.81 M.4.0.5.6 0.25* 0.76 M.4.0.6.6 0.09** 0.77 M.4.0.7.7 0.28* 0.51 M.4.0.8.7 0.27* 0.72 M.4.0.9.7 0.28* 0.70 M.4.0.10.8 0.36 0.66 M.4.0.11.8 0.18* 0.85 M.4.0.12.8 0.18* 0.77 M.4.0.13.9 0.28* 0.83 M.4.0.14.9 0.32 0.62 M.4.0.15.9 0.19* 0.87 M.4.0.16.10 0.28* 0.61 M.4.0.17.10 0.30 0.47 M.4.0.18.10 0.24* 0.56 M.4.0.19.11 0.14* 0.35 M.4.0.20.11 0.28* 0.40 M.4.0.21.11 0.05** 0.18*** M.5.0.1.12 0.33 0.62 M.5.0.2.12 0.10* 0.31 M.5.0.3.12 0.23* 0.66 M.5.0.4.13 0.13* 0.44 M.5.0.5.13 0.29* 0.70 M.5.0.6.13 0.22* 0.69 M.5.0.7.14 0.24* 0.62 M.5.0.8.14 0.17* 0.48 M.5.0.9.14 0.30 0.50 M.5.0.10.15 0.17* 0.65 M.5.0.11.15 0.23* 0.78 M.5.0.12.15 0.15 0.79 M.5.0.13.16 0.25* 0.37 M.5.0.14.16 0.25* 0.34 M.5.0.15.16 0.15* 0.31
M.5.0.16.17 0.24* 0.53 M.5.0.17.17 0.30 0.62 M.5.0.18.17 0.17* 0.28 M.5.0.19.18 0.19* 0.55 M.5.0.20.18 0.22* 0.23 M.5.0.21.18 0.20* 0.53 M.6.0.1.19 0.16* 0.43 M.6.0.2.19 0.25* 0.59 M.6.0.3.19 0.22* 0.44 M.6.0.4.19 0.24* 0.62 M.6.0.5.20 0.20* 0.43 M.6.0.6.20 0.29* 0.52 M.6.0.7.20 0.12* 0.35 M.6.0.8.21 0.22* 0.28 M.6.0.9.21 0.26* 0.18*** M.6.0.10.21 0.04** 0.31 M.6.0.11.21 0.22* 0.37 M.6.0.12.21 0.29* 0.51 M.6.0.13.22 0.14* 0.24 M.6.0.14.22 0.09** 0.33 M.6.0.15.22 0.28* 0.37 M.6.0.16.22 0.31 0.44 M.6.0.17.23 0.23* 0.52 M.6.0.18.23 0.11* 0.47 M.6.0.19.23 0.19* 0.37 M.6.0.20.24 0.07** 0.42 M.6.0.21.24 0.18* 0.35 M.6.0.22.24 0.19* 0.42 M.6.0.23.25 0.04** 0.08*** M.6.0.24.25 -0.19** 0.20 M.6.0.25.25 0.06** 0.36 M.6.0.26.26 0.20* 0.51 M.6.0.27.26 0.14* 0.50 M.6.0.28.26 0.04** 0.46 M.6.0.29.27 0.21* 0.24 M.6.0.30.27 0.02** 0.33 M.6.0.31.27 0.19* 0.41 M.7.0.1.28 0.18* 0.17*** M.7.0.2.28 0.05** 0.17*** M.7.0.3.28 0.17* 0.49 M.7.0.4.29 0.01** 0.19*** M.7.0.5.29 0.15* 0.60 M.7.0.6.29 0.18* 0.34 M.7.0.7.30 0.05** 0.25 M.7.0.8.30 0.13* 0.54 M.7.0.9.30 0.14* 0.37 M.7.0.10.30 0.26* 0.32 M.7.0.11.31 0.09** 0.30 M.7.0.12.31 -0.03** 0.36 M.7.0.13.31 0.11* 0.26 M.7.0.14.32 0.02** 0.16*** M.7.0.15.32 0.20* 0.36 M.7.0.16.33 0.27* 0.44 M.7.0.17.33 0.20* 0.41 M.7.0.18.33 0.18* 0.35 M.7.0.19.34 0.18* 0.46 M.7.0.20.34 0.01** 0.17*** M.7.0.21.34 0.10* 0.35
Table E-2: Applied Mathematics | Number of Examinees Choosing Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D M.3.0.1.1 51 16 14 142 M.3.0.2.1 176 12 7 8 M.3.0.3.1 177 5 6 5 M.3.0.4.2 144 45 9 15 M.3.0.5.2 44 18 11 119 M.3.0.6.2 142 23 6 8 M.3.0.7.2 112 37 421 16 M.3.0.8.3 12 17 13 142 M.3.0.9.3 45 38 126 6 M.3.0.10.3 22 112 37 11 M.3.0.11.4 13 180 11 6 M.3.0.12.4 11 11 164 4 M.3.0.13.4 15 3 160 4 M.4.0.1.5 13 17 42 137 M.4.0.2.5 10 22 151 7 M.4.0.3.5 20 89 63 8 M.4.0.4.6 11 19 174 5 M.4.0.5.6 149 15 13 8 M.4.0.6.6 32 8 0 145 M.4.0.7.7 45 15 99 25 M.4.0.8.7 6 136 10 29 M.4.0.9.7 13 150 35 4 M.4.0.10.8 33 18 11 141 M.4.0.11.8 156 14 11 1 M.4.0.12.8 24 149 11 2 M.4.0.13.9 9 174 13 6 M.4.0.14.9 8 29 29 117 M.4.0.15.9 11 5 160 0 M.4.0.16.10 17 41 21 128 M.4.0.17.10 19 51 20 88 M.4.0.18.10 20 7 45 103 M.4.0.19.11 41 74 34 52 M.4.0.20.11 75 34 45 27 M.4.0.21.11 102 33 25 12 M.5.0.1.12 9 25 30 116 M.5.0.2.12 57 47 63 30 M.5.0.3.12 25 9 20 119 M.5.0.4.13 90 29 54 21 M.5.0.5.13 129 28 14 4 M.5.0.6.13 18 12 124 20 M.5.0.7.14 8 33 115 17 M.5.0.8.14 46 17 84 18 M.5.0.9.14 15 54 99 22 M.5.0.10.15 4 115 42 6 M.5.0.11.15 5 22 158 6 M.5.0.12.15 141 11 21 0 M.5.0.13.16 51 28 24 65 M.5.0.14.16 68 76 30 21
M.5.0.15.16 16 55 38 62 M.5.0.16.17 9 105 63 15 M.5.0.17.17 29 24 109 5 M.5.0.18.17 76 27 50 16 M.5.0.19.18 36 23 108 20 M.5.0.20.18 29 55 40 39 M.5.0.21.18 50 8 18 93 M.6.0.1.19 41 84 49 14 M.6.0.2.19 14 103 30 19 M.6.0.3.19 46 77 24 9 M.6.0.4.19 41 92 334 34 M.6.0.5.20 48 19 82 34 M.6.0.6.20 20 25 88 25 M.6.0.7.20 16 71 61 13 M.6.0.8.21 9 52 93 27 M.6.0.9.21 30 45 61 15 M.6.0.10.21 42 54 46 18 M.6.0.11.21 23 38 37 65 M.6.0.12.21 83 35 25 7 M.6.0.13.22 33 39 43 41 M.6.0.14.22 34 52 48 16 M.6.0.15.22 23 49 69 39 M.6.0.16.22 80 31 40 19 M.6.0.17.23 12 97 51 19 M.6.0.18.23 19 21 74 36 M.6.0.19.23 11 63 47 32 M.6.0.20.24 71 51 31 5 M.6.0.21.24 30 33 55 21 M.6.0.22.24 47 77 33 17 M.6.0.23.25 38 59 57 14 M.6.0.24.25 27 74 31 11 M.6.0.25.25 30 46 60 12 M.6.0.26.26 33 32 91 9 M.6.0.27.26 21 39 77 10 M.6.0.28.26 21 40 77 13 M.6.0.29.27 43 48 56 15 M.6.0.30.27 50 46 28 13 M.6.0.31.27 68 37 29 21 M.7.0.1.28 36 65 35 30 M.7.0.2.28 40 29 26 45 M.7.0.3.28 26 81 24 19 M.7.0.4.29 34 28 46 31 M.7.0.5.29 9 31 106 19 M.7.0.6.29 16 55 56 17 M.7.0.7.30 36 60 39 9 M.7.0.8.30 80 22 19 17 M.7.0.9.30 27 64 49 16 M.7.0.10.30 49 47 39 6 M.7.0.11.31 26 49 49 18 M.7.0.12.31 28 39 51 12 M.7.0.13.31 55 44 44 13 M.7.0.14.32 33 37 58 26 M.7.0.15.32 58 106 76 33 M.7.0.16.33 71 27 38 16 M.7.0.17.33 39 21 55 11 M.7.0.18.33 38 32 54 13 M.7.0.19.34 33 72 26 16 M.7.0.20.34 28 36 31 22
M.7.0.21.34 44 26 53 12
Table E-3: Applied Mathematics | Percent of Examinees Choosing Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D M.3.0.1.1 0.22 0.07 0.06 0.63 M.3.0.2.1 0.85 0.06 0.03 0.04 M.3.0.3.1 0.91 0.03 0.03 0.03 M.3.0.4.2 0.64 0.20 0.04 0.07 M.3.0.5.2 0.22 0.09 0.05 0.59 M.3.0.6.2 0.74 0.12 0.03 0.04 M.3.0.7.2 0.18 0.06 0.68 0.03 M.3.0.8.3 0.06 0.09 0.07 0.71 M.3.0.9.3 0.20 0.17 0.57 0.03 M.3.0.10.3 0.12 0.59 0.19 0.06 M.3.0.11.4 0.06 0.82 0.05 0.03 M.3.0.12.4 0.06 0.06 0.82 0.02 M.3.0.13.4 0.08 0.02 0.84 0.02 M.4.0.1.5 0.06 0.08 0.19 0.63 M.4.0.2.5 0.05 0.11 0.77 0.04 M.4.0.3.5 0.11 0.47 0.33 0.04 M.4.0.4.6 0.05 0.09 0.81 0.02 M.4.0.5.6 0.76 0.08 0.07 0.04 M.4.0.6.6 0.17 0.04 0.00 0.77 M.4.0.7.7 0.23 0.08 0.51 0.13 M.4.0.8.7 0.03 0.72 0.05 0.15 M.4.0.9.7 0.06 0.70 0.16 0.02 M.4.0.10.8 0.15 0.08 0.05 0.66 M.4.0.11.8 0.85 0.08 0.06 0.01 M.4.0.12.8 0.12 0.77 0.06 0.01 M.4.0.13.9 0.04 0.83 0.06 0.03 M.4.0.14.9 0.04 0.16 0.16 0.63 M.4.0.15.9 0.06 0.03 0.87 0.00 M.4.0.16.10 0.08 0.20 0.10 0.61 M.4.0.17.10 0.10 0.27 0.11 0.47 M.4.0.18.10 0.11 0.04 0.25 0.56 M.4.0.19.11 0.20 0.35 0.16 0.25 M.4.0.20.11 0.40 0.18 0.24 0.14 M.4.0.21.11 0.56 0.18 0.14 0.07 M.5.0.1.12 0.05 0.13 0.16 0.62 M.5.0.2.12 0.28 0.23 0.31 0.15 M.5.0.3.12 0.14 0.05 0.11 0.66 M.5.0.4.13 0.44 0.14 0.26 0.10 M.5.0.5.13 0.70 0.15 0.08 0.02 M.5.0.6.13 0.10 0.07 0.69 0.11 M.5.0.7.14 0.04 0.18 0.62 0.09 M.5.0.8.14 0.26 0.10 0.48 0.10 M.5.0.9.14 0.08 0.27 0.50 0.11 M.5.0.10.15 0.02 0.65 0.24 0.03 M.5.0.11.15 0.02 0.11 0.78 0.03 M.5.0.12.15 0.79 0.06 0.12 0.00 M.5.0.13.16 0.29 0.16 0.14 0.37
M.5.0.14.16 0.34 0.38 0.15 0.11 M.5.0.15.16 0.09 0.31 0.21 0.35 M.5.0.16.17 0.05 0.53 0.32 0.08 M.5.0.17.17 0.17 0.14 0.62 0.03 M.5.0.18.17 0.43 0.15 0.28 0.09 M.5.0.19.18 0.18 0.12 0.55 0.10 M.5.0.20.18 0.17 0.32 0.23 0.22 M.5.0.21.18 0.28 0.05 0.10 0.53 M.6.0.1.19 0.21 0.43 0.25 0.07 M.6.0.2.19 0.08 0.59 0.17 0.11 M.6.0.3.19 0.26 0.44 0.14 0.05 M.6.0.4.19 0.08 0.17 0.62 0.06 M.6.0.5.20 0.25 0.10 0.43 0.18 M.6.0.6.20 0.12 0.15 0.52 0.15 M.6.0.7.20 0.09 0.41 0.35 0.08 M.6.0.8.21 0.05 0.28 0.49 0.14 M.6.0.9.21 0.18 0.27 0.37 0.09 M.6.0.10.21 0.24 0.31 0.27 0.10 M.6.0.11.21 0.13 0.22 0.21 0.37 M.6.0.12.21 0.51 0.22 0.15 0.04 M.6.0.13.22 0.19 0.23 0.25 0.24 M.6.0.14.22 0.21 0.33 0.30 0.10 M.6.0.15.22 0.12 0.26 0.37 0.21 M.6.0.16.22 0.44 0.17 0.22 0.10 M.6.0.17.23 0.06 0.52 0.28 0.10 M.6.0.18.23 0.12 0.13 0.47 0.23 M.6.0.19.23 0.06 0.37 0.28 0.19 M.6.0.20.24 0.42 0.30 0.18 0.03 M.6.0.21.24 0.19 0.21 0.35 0.13 M.6.0.22.24 0.26 0.42 0.18 0.09 M.6.0.23.25 0.21 0.32 0.31 0.08 M.6.0.24.25 0.17 0.47 0.20 0.07 M.6.0.25.25 0.18 0.28 0.36 0.07 M.6.0.26.26 0.18 0.18 0.51 0.05 M.6.0.27.26 0.14 0.25 0.50 0.06 M.6.0.28.26 0.13 0.24 0.46 0.08 M.6.0.29.27 0.24 0.27 0.31 0.08 M.6.0.30.27 0.33 0.30 0.18 0.08 M.6.0.31.27 0.41 0.22 0.17 0.13 M.7.0.1.28 0.20 0.37 0.20 0.17 M.7.0.2.28 0.26 0.19 0.17 0.30 M.7.0.3.28 0.16 0.49 0.14 0.11 M.7.0.4.29 0.23 0.19 0.31 0.21 M.7.0.5.29 0.05 0.18 0.60 0.11 M.7.0.6.29 0.10 0.34 0.34 0.10 M.7.0.7.30 0.23 0.38 0.25 0.06 M.7.0.8.30 0.54 0.15 0.13 0.11 M.7.0.9.30 0.16 0.37 0.28 0.09 M.7.0.10.30 0.32 0.31 0.25 0.04 M.7.0.11.31 0.16 0.30 0.30 0.11 M.7.0.12.31 0.20 0.28 0.36 0.09 M.7.0.13.31 0.33 0.26 0.26 0.08 M.7.0.14.32 0.20 0.22 0.35 0.16 M.7.0.15.32 0.20 0.36 0.26 0.11 M.7.0.16.33 0.44 0.17 0.23 0.10 M.7.0.17.33 0.29 0.16 0.41 0.08 M.7.0.18.33 0.25 0.21 0.35 0.08 M.7.0.19.34 0.21 0.46 0.17 0.10
M.7.0.20.34 0.22 0.28 0.24 0.17 M.7.0.21.34 0.29 0.17 0.35 0.08
Table E-4: Applied Mathematics | Point Biserials for Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D M.3.0.1.1 -0.07 -0.11 -0.21 0.25 M.3.0.2.1 0.19 -0.15 -0.10 -0.10 M.3.0.3.1 0.18 -0.05 -0.14 -0.11 M.3.0.4.2 0.26 -0.13 -0.23 0.02 M.3.0.5.2 -0.22 -0.23 -0.24 0.47 M.3.0.6.2 0.06 0.04 -0.03 -0.01 M.3.0.7.2 -0.05 -0.22 0.26 -0.15 M.3.0.8.3 -0.17 -0.14 -0.02 0.31 M.3.0.9.3 -0.14 -0.08 0.23 -0.05 M.3.0.10.3 0.00 0.26 -0.22 -0.08 M.3.0.11.4 -0.05 0.23 -0.17 -0.03 M.3.0.12.4 -0.12 -0.19 0.25 -0.06 M.3.0.13.4 -0.14 -0.06 0.27 -0.18 M.4.0.1.5 -0.05 -0.08 -0.18 0.27 M.4.0.2.5 -0.18 -0.24 0.31 0.07 M.4.0.3.5 -0.10 0.05 0.09 -0.05 M.4.0.4.6 -0.13 -0.19 0.31 -0.03 M.4.0.5.6 0.25 -0.22 -0.11 0.00 M.4.0.6.6 0.00 -0.07 0.00 0.09 M.4.0.7.7 -0.08 -0.10 0.28 -0.14 M.4.0.8.7 -0.08 0.27 -0.15 -0.11 M.4.0.9.7 -0.09 0.28 -0.13 -0.02 M.4.0.10.8 -0.21 -0.15 -0.08 0.36 M.4.0.11.8 0.18 -0.14 -0.02 -0.10 M.4.0.12.8 -0.07 0.18 -0.17 -0.08 M.4.0.13.9 -0.13 0.28 -0.05 -0.14 M.4.0.14.9 -0.10 -0.08 -0.27 0.32 M.4.0.15.9 -0.17 0.00 0.19 0.00 M.4.0.16.10 -0.14 -0.11 -0.10 0.28 M.4.0.17.10 -0.19 -0.11 -0.05 0.30 M.4.0.18.10 -0.14 -0.06 -0.08 0.24 M.4.0.19.11 -0.07 0.14 -0.15 0.14 M.4.0.20.11 0.28 -0.08 -0.12 -0.13 M.4.0.21.11 -0.02 0.05 -0.04 0.05 M.5.0.1.12 -0.12 -0.19 -0.13 0.33 M.5.0.2.12 0.04 -0.06 0.10 -0.05 M.5.0.3.12 -0.20 -0.16 0.04 0.23 M.5.0.4.13 0.13 -0.06 0.12 -0.09 M.5.0.5.13 0.29 -0.13 -0.23 -0.08 M.5.0.6.13 -0.05 -0.07 0.22 -0.19 M.5.0.7.14 -0.04 -0.18 0.24 -0.03 M.5.0.8.14 -0.01 -0.20 0.17 -0.12 M.5.0.9.14 0.00 -0.22 0.30 -0.05 M.5.0.10.15 -0.12 0.17 0.06 -0.19 M.5.0.11.15 -0.10 -0.09 0.23 -0.05
M.5.0.12.15 0.15 -0.10 -0.03 0.00 M.5.0.13.16 -0.14 -0.06 0.02 0.25 M.5.0.14.16 0.25 -0.18 -0.01 -0.04 M.5.0.15.16 0.00 0.15 -0.02 -0.03 M.5.0.16.17 -0.16 0.24 -0.03 -0.19 M.5.0.17.17 -0.09 -0.17 0.30 -0.07 M.5.0.18.17 -0.03 -0.18 0.17 0.03 M.5.0.19.18 -0.19 -0.01 0.19 0.07 M.5.0.20.18 -0.09 -0.06 0.22 0.06 M.5.0.21.18 -0.11 -0.15 0.01 0.20 M.6.0.1.19 -0.11 0.16 0.07 -0.12 M.6.0.2.19 -0.11 0.25 -0.12 -0.13 M.6.0.3.19 -0.01 0.22 -0.05 -0.06 M.6.0.4.19 -0.06 -0.14 0.24 -0.05 M.6.0.5.20 -0.03 0.01 0.20 -0.12 M.6.0.6.20 -0.12 -0.18 0.29 -0.07 M.6.0.7.20 -0.05 0.01 0.12 -0.08 M.6.0.8.21 0.08 0.22 -0.02 -0.17 M.6.0.9.21 0.26 -0.01 -0.11 -0.03 M.6.0.10.21 0.14 0.04 -0.01 -0.02 M.6.0.11.21 -0.16 -0.03 -0.02 0.22 M.6.0.12.21 0.29 -0.05 -0.20 -0.12 M.6.0.13.22 -0.10 0.10 -0.03 0.14 M.6.0.14.22 -0.08 0.09 0.05 -0.07 M.6.0.15.22 -0.19 -0.05 0.28 -0.08 M.6.0.16.22 0.31 -0.09 -0.06 -0.28 M.6.0.17.23 -0.06 0.23 0.03 -0.25 M.6.0.18.23 0.07 -0.03 0.11 -0.13 M.6.0.19.23 0.04 0.19 0.04 -0.06 M.6.0.20.24 0.07 0.00 0.03 -0.10 M.6.0.21.24 0.04 -0.10 0.18 -0.06 M.6.0.22.24 0.10 0.19 -0.26 -0.02 M.6.0.23.25 0.06 0.03 0.01 0.04 M.6.0.24.25 0.05 0.28 -0.19 -0.21 M.6.0.25.25 0.03 0.01 0.06 -0.02 M.6.0.26.26 -0.07 0.04 0.20 -0.11 M.6.0.27.26 0.08 -0.15 0.14 0.01 M.6.0.28.26 0.02 0.00 0.04 -0.04 M.6.0.29.27 0.21 0.00 -0.04 -0.13 M.6.0.30.27 0.02 0.15 -0.03 -0.08 M.6.0.31.27 0.19 0.01 -0.07 -0.11 M.7.0.1.28 -0.04 -0.05 0.03 0.18 M.7.0.2.28 0.00 -0.15 0.05 0.15 M.7.0.3.28 0.01 0.17 -0.15 0.02 M.7.0.4.29 0.10 0.01 -0.19 0.19 M.7.0.5.29 -0.01 0.04 0.15 -0.10 M.7.0.6.29 -0.09 0.18 0.06 -0.10 M.7.0.7.30 -0.06 0.11 0.05 0.01 M.7.0.8.30 0.13 -0.20 -0.06 0.18 M.7.0.9.30 -0.01 0.14 -0.10 0.00 M.7.0.10.30 0.26 -0.06 -0.16 -0.01 M.7.0.11.31 0.01 0.08 0.09 -0.06 M.7.0.12.31 0.10 0.10 -0.03 -0.01 M.7.0.13.31 0.17 -0.17 0.11 -0.07 M.7.0.14.32 0.05 -0.01 0.01 0.02 M.7.0.15.32 -0.01 0.20 -0.11 -0.03 M.7.0.16.33 0.27 -0.14 -0.01 -0.06 M.7.0.17.33 -0.06 -0.19 0.20 0.07
M.7.0.18.33 -0.08 0.08 0.18 -0.12 M.7.0.19.34 -0.11 0.18 0.11 -0.19 M.7.0.20.34 0.02 0.21 -0.12 0.01 M.7.0.21.34 -0.06 0.02 0.10 0.10
Table E-5: Reading for Information | Item Difficulty and Discrimination for All Items
ITEM NUMBER Discrimination Difficulty R.3.1.1.1 0.32 0.87 R.3.1.2.2 0.52 0.71 R.3.1.3.3 0.46 0.78 R.3.1.4.4 0.39 0.81 R.3.1.5.5 0.37 0.88 R.3.2.1.1 0.10* 0.52 R.3.2.2.3 0.35 0.73 R.3.2.3.1 0.43 0.72 R.3.2.4.4 0.31 0.43 R.3.2.5.5 0.34 0.71 R.3.2.6.2 0.43 0.79 R.3.3.1.4 0.37 0.66 R.3.3.2.3 0.39 0.81 R.3.3.3.1 0.49 0.61 R.3.3.4.5 0.54 0.57 R.3.3.5.2 0.38 0.68 R.4.1.1.6 0.37 0.63 R.4.1.2.7 0.47 0.53 R.4.1.3.6 0.56 0.75 R.4.1.4.5 0.20* 0.37 R.4.1.5.8 0.39 0.63 R.4.2.1.6 0.26* 0.38 R.4.2.2.8 0.21* 0.41 R.4.2.3.7 0.41 0.39 R.4.2.4.8 0.46 0.80 R.4.2.5.6 0.38 0.57 R.4.2.6.5 0.07** 0.32 R.4.3.1.8 0.43 0.84 R.4.3.2.6 0.52 0.71 R.4.3.3.6 0.56 0.68 R.4.3.4.7 0.40 0.68 R.4.3.5.5 0.34 0.42 R.5.1.1.9 0.51 0.53 R.5.1.2.10 0.54 0.82 R.5.1.3.11 0.60 0.78 R.5.1.4.10 0.45 0.68 R.5.1.5.12 0.54 0.64 R.5.1.6.13 0.27* 0.41 R.5.1.7.10 0.43 0.63 R.5.1.8.14 0.59 0.64 R.5.2.1.11 0.17* 0.51 R.5.2.2.10 0.53 0.75 R.5.2.3.9 0.44 0.74 R.5.2.4.10 0.57 0.59
R.5.2.5.12 0.60 0.47 R.5.2.6.14 0.47 0.61 R.5.2.7.13 0.58 0.62 R.5.3.1.10 0.47 0.71 R.5.3.2.10 0.38 0.82 R.5.3.3.11 0.45 0.85 R.5.3.4.9 0.50 0.59 R.5.3.5.13 0.53 0.75 R.5.3.6.11 0.47 0.81 R.5.3.7.12 0.23* 0.46 R.5.3.8.9 0.35 0.38 R.5.3.9.14 0.50 0.54 R.6.1.1.15 0.03** 0.36 R.6.1.2.16 0.37 0.45 R.6.1.3.17 0.59 0.64 R.6.1.4.17 0.45 0.51 R.6.1.5.19 0.41 0.54 R.6.1.6.20 0.49 0.67 R.6.1.7.19 0.29* 0.26 R.6.1.8.21 0.40 0.38 R.6.2.1.17 0.59 0.63 R.6.2.2.15 0.21* 0.37 R.6.2.3.15 0.53 0.58 R.6.2.4.17 0.58 0.39 R.6.2.5.19 0.51 0.74 R.6.2.6.21 0.39 0.66 R.6.2.7.20 0.32 0.37 R.6.2.8.16 0.52 0.66 R.6.2.9.19 0.07** 0.23 R.6.3.1.15 0.35 0.44 R.6.3.2.16 0.26* 0.52 R.6.3.3.19 0.47 0.42 R.6.3.4.17 0.41 0.68 R.6.3.5.19 0.57 0.84 R.6.3.6.18 0.42 0.51 R.6.3.7.20 0.31 0.56 R.6.3.8.17 0.62 0.62 R.6.3.9.21 0.43 0.34 R.7.1.1.22 0.31 0.32 R.7.1.2.23 0.41 0.68 R.7.1.3.24 0.42 0.60 R.7.1.4.24 0.16* 0.36 R.7.1.5.22 0.40 0.46 R.7.1.5.25 -- -- R.7.1.6.22 0.48 0.55 R.7.2.1.24 0.05** 0.20 R.7.2.2.22 0.33 0.41 R.7.2.3.24 0.48 0.51 R.7.2.4.24 0.31 0.34 R.7.2.5.23 0.27* 0.74 R.7.2.6.23 0.20* 0.41 R.7.3.1.22 0.29* 0.25 R.7.3.2.24 0.45 0.55 R.7.3.3.22 0.42 0.46 R.7.3.4.23 0.31 0.33 R.7.3.5.22 0.26* 0.31
Table E-6: Reading for Information | Number of Examinees Choosing Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D
R.3.1.1.1 263 9 11 14 R.3.1.2.2 7 53 18 215 R.3.1.3.3 13 41 234 2 R.3.1.4.4 11 6 31 240 R.3.1.5.5 18 257 9 0 R.3.2.1.1 79 20 29 22 R.3.2.2.3 17 13 112 6 R.3.2.3.1 26 3 7 109 R.3.2.4.4 19 65 14 50 R.3.2.5.5 8 18 104 13 R.3.2.6.2 116 4 19 6 R.3.3.1.4 109 13 11 31 R.3.3.2.3 7 133 13 5 R.3.3.3.1 9 27 99 19 R.3.3.4.5 91 16 38 8 R.3.3.5.2 19 12 15 107 R.4.1.1.6 49 28 180 21 R.4.1.2.7 152 28 40 52 R.4.1.3.6 23 27 17 215 R.4.1.4.5 60 86 106 23 R.4.1.5.8 23 177 39 27 R.4.2.1.6 75 3 57 9 R.4.2.2.8 42 22 16 60 R.4.2.3.7 31 57 16 35 R.4.2.4.8 118 4 16 5 R.4.2.5.6 25 29 3 82 R.4.2.6.5 46 41 46 4 R.4.3.1.8 9 129 4 6 R.4.3.2.6 19 108 12 6 R.4.3.3.6 104 5 24 10 R.4.3.4.7 23 16 102 2 R.4.3.5.5 62 5 13 63 R.5.1.1.9 26 25 150 71 R.5.1.2.10 230 15 24 7 R.5.1.3.11 219 17 25 9 R.5.1.4.10 56 189 15 7 R.5.1.5.12 51 177 20 18 R.5.1.6.13 53 53 42 113 R.5.1.7.10 9 170 25 56 R.5.1.8.14 31 27 24 172 R.5.2.1.11 6 72 55 4 R.5.2.2.10 106 11 11 8 R.5.2.3.9 8 12 104 12 R.5.2.4.10 21 83 22 12 R.5.2.5.12 65 33 31 3 R.5.2.6.14 10 18 83 22 R.5.2.7.13 27 11 11 85 R.5.3.1.10 18 103 11 8 R.5.3.2.10 119 9 9 4 R.5.3.3.11 4 9 4 123 R.5.3.4.9 86 9 40 4 R.5.3.5.13 13 109 15 3
R.5.3.6.11 14 5 3 117 R.5.3.7.12 60 12 66 4 R.5.3.8.9 36 54 3 44 R.5.3.9.14 77 12 21 29 R.6.1.1.15 62 15 50 5 R.6.1.2.16 63 8 16 51 R.6.1.3.17 11 89 24 11 R.6.1.4.17 40 70 15 7 R.6.1.5.19 38 14 73 7 R.6.1.6.20 90 24 5 13 R.6.1.7.19 8 35 14 72 R.6.1.8.21 29 8 43 51 R.6.2.1.17 17 165 59 13 R.6.2.2.15 0 109 97 42 R.6.2.3.15 20 150 58 24 R.6.2.4.17 100 32 56 66 R.6.2.5.19 12 21 25 192 R.6.2.6.21 39 35 7 167 R.6.2.7.20 17 94 91 43 R.6.2.8.16 15 164 22 43 R.6.2.9.19 57 12 25 153 R.6.3.1.15 31 62 24 19 R.6.3.2.16 72 20 16 23 R.6.3.3.19 53 13 58 6 R.6.3.4.17 11 94 19 5 R.6.3.5.19 5 5 7 116 R.6.3.6.18 70 23 30 11 R.6.3.7.20 77 28 15 11 R.6.3.8.17 28 11 85 5 R.6.3.9.21 25 7 51 46 R.7.1.1.22 89 44 36 81 R.7.1.2.23 28 169 29 20 R.7.1.3.24 41 34 150 16 R.7.1.4.24 76 88 38 30 R.7.1.5.22 112 69 15 41 R.7.1.5.25 0 0 0 0 R.7.1.6.22 33 40 26 129 R.7.2.1.24 25 18 53 30 R.7.2.2.22 12 52 30 31 R.7.2.3.24 64 17 23 18 R.7.2.4.24 29 26 19 43 R.7.2.5.23 5 18 93 6 R.7.2.6.23 6 48 9 52 R.7.3.1.22 35 17 34 45 R.7.3.2.24 74 17 19 20 R.7.3.3.22 35 61 16 18 R.7.3.4.23 16 42 29 43 R.7.3.5.22 39 24 27 34
Table E-7: Reading for Information | Percent of Examinees Choosing Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D
R.3.1.1.1 0.87 0.03 0.04 0.05
R.3.1.2.2 0.02 0.17 0.06 0.71 R.3.1.3.3 0.04 0.14 0.78 0.01 R.3.1.4.4 0.04 0.02 0.10 0.81 R.3.1.5.5 0.06 0.88 0.03 0.00 R.3.2.1.1 0.52 0.13 0.19 0.14 R.3.2.2.3 0.11 0.08 0.73 0.04 R.3.2.3.1 0.17 0.02 0.05 0.72 R.3.2.4.4 0.13 0.43 0.09 0.33 R.3.2.5.5 0.05 0.12 0.71 0.09 R.3.2.6.2 0.79 0.03 0.13 0.04 R.3.3.1.4 0.66 0.08 0.07 0.19 R.3.3.2.3 0.04 0.81 0.08 0.03 R.3.3.3.1 0.06 0.17 0.61 0.12 R.3.3.4.5 0.57 0.10 0.24 0.05 R.3.3.5.2 0.12 0.08 0.09 0.68 R.4.1.1.6 0.17 0.10 0.63 0.07 R.4.1.2.7 0.53 0.10 0.14 0.18 R.4.1.3.6 0.08 0.09 0.06 0.75 R.4.1.4.5 0.21 0.30 0.37 0.08 R.4.1.5.8 0.08 0.63 0.14 0.10 R.4.2.1.6 0.49 0.02 0.38 0.06 R.4.2.2.8 0.28 0.15 0.11 0.41 R.4.2.3.7 0.21 0.39 0.11 0.24 R.4.2.4.8 0.80 0.03 0.11 0.03 R.4.2.5.6 0.17 0.20 0.02 0.57 R.4.2.6.5 0.32 0.29 0.32 0.03 R.4.3.1.8 0.06 0.84 0.03 0.04 R.4.3.2.6 0.12 0.71 0.08 0.04 R.4.3.3.6 0.68 0.03 0.16 0.07 R.4.3.4.7 0.15 0.11 0.68 0.01 R.4.3.5.5 0.41 0.03 0.09 0.42 R.5.1.1.9 0.09 0.09 0.53 0.25 R.5.1.2.10 0.82 0.05 0.09 0.02 R.5.1.3.11 0.78 0.06 0.09 0.03 R.5.1.4.10 0.20 0.68 0.05 0.03 R.5.1.5.12 0.18 0.64 0.07 0.07 R.5.1.6.13 0.19 0.19 0.15 0.41 R.5.1.7.10 0.03 0.63 0.09 0.21 R.5.1.8.14 0.12 0.10 0.09 0.64 R.5.2.1.11 0.04 0.51 0.39 0.03 R.5.2.2.10 0.75 0.08 0.08 0.06 R.5.2.3.9 0.06 0.09 0.74 0.09 R.5.2.4.10 0.15 0.59 0.16 0.09 R.5.2.5.12 0.47 0.24 0.22 0.02 R.5.2.6.14 0.07 0.13 0.61 0.16 R.5.2.7.13 0.20 0.08 0.08 0.62 R.5.3.1.10 0.12 0.71 0.08 0.05 R.5.3.2.10 0.82 0.06 0.06 0.03 R.5.3.3.11 0.03 0.06 0.03 0.85 R.5.3.4.9 0.59 0.06 0.28 0.03 R.5.3.5.13 0.09 0.75 0.10 0.02 R.5.3.6.11 0.10 0.03 0.02 0.81 R.5.3.7.12 0.42 0.08 0.46 0.03 R.5.3.8.9 0.25 0.38 0.02 0.31 R.5.3.9.14 0.54 0.08 0.15 0.20 R.6.1.1.15 0.44 0.11 0.36 0.04 R.6.1.2.16 0.45 0.06 0.11 0.36 R.6.1.3.17 0.08 0.64 0.17 0.08
R.6.1.4.17 0.29 0.51 0.11 0.05 R.6.1.5.19 0.28 0.10 0.54 0.05 R.6.1.6.20 0.67 0.18 0.04 0.10 R.6.1.7.19 0.06 0.26 0.10 0.53 R.6.1.8.21 0.21 0.06 0.32 0.38 R.6.2.1.17 0.07 0.63 0.23 0.05 R.6.2.2.15 0.00 0.42 0.37 0.16 R.6.2.3.15 0.08 0.58 0.22 0.09 R.6.2.4.17 0.39 0.12 0.22 0.26 R.6.2.5.19 0.05 0.08 0.10 0.75 R.6.2.6.21 0.15 0.14 0.03 0.66 R.6.2.7.20 0.07 0.37 0.36 0.17 R.6.2.8.16 0.06 0.66 0.09 0.17 R.6.2.9.19 0.23 0.05 0.10 0.61 R.6.3.1.15 0.22 0.44 0.17 0.13 R.6.3.2.16 0.52 0.14 0.12 0.17 R.6.3.3.19 0.38 0.09 0.42 0.04 R.6.3.4.17 0.08 0.68 0.14 0.04 R.6.3.5.19 0.04 0.04 0.05 0.84 R.6.3.6.18 0.51 0.17 0.22 0.08 R.6.3.7.20 0.56 0.20 0.11 0.08 R.6.3.8.17 0.20 0.08 0.62 0.04 R.6.3.9.21 0.19 0.05 0.38 0.34 R.7.1.1.22 0.35 0.17 0.14 0.32 R.7.1.2.23 0.11 0.68 0.12 0.08 R.7.1.3.24 0.16 0.14 0.60 0.06 R.7.1.4.24 0.31 0.36 0.16 0.12 R.7.1.5.22 0.46 0.29 0.06 0.17 R.7.1.5.25 R.7.1.6.22 0.14 0.17 0.11 0.55 R.7.2.1.24 0.20 0.14 0.41 0.23 R.7.2.2.22 0.09 0.41 0.23 0.24 R.7.2.3.24 0.51 0.13 0.18 0.14 R.7.2.4.24 0.23 0.21 0.15 0.34 R.7.2.5.23 0.04 0.14 0.74 0.05 R.7.2.6.23 0.05 0.41 0.08 0.44 R.7.3.1.22 0.26 0.13 0.25 0.33 R.7.3.2.24 0.55 0.13 0.14 0.15 R.7.3.3.22 0.26 0.46 0.12 0.14 R.7.3.4.23 0.12 0.32 0.22 0.33 R.7.3.5.22 0.31 0.19 0.21 0.27
Table E-8: Reading for Information | Point Biserials for Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D
R.3.1.1.1 0.32 -0.16 -0.17 -0.10 R.3.1.2.2 -0.15 -0.31 -0.27 0.52 R.3.1.3.3 -0.16 -0.32 0.46 -0.10 R.3.1.4.4 -0.18 -0.15 -0.24 0.39 R.3.1.5.5 -0.24 0.37 -0.16 0.00 R.3.2.1.1 0.10 0.01 -0.13 0.08 R.3.2.2.3 -0.22 -0.09 0.35 -0.20 R.3.2.3.1 -0.19 -0.18 -0.25 0.43 R.3.2.4.4 -0.24 0.31 -0.12 -0.01 R.3.2.5.5 -0.03 -0.31 0.34 0.00
R.3.2.6.2 0.43 -0.23 -0.33 -0.17 R.3.3.1.4 0.37 -0.29 -0.20 -0.13 R.3.3.2.3 -0.26 0.39 -0.19 -0.08 R.3.3.3.1 -0.05 -0.21 0.49 -0.22 R.3.3.4.5 0.54 -0.36 -0.08 -0.24 R.3.3.5.2 -0.09 -0.29 -0.25 0.38 R.4.1.1.6 -0.25 -0.19 0.37 -0.05 R.4.1.2.7 0.47 -0.33 -0.14 -0.12 R.4.1.3.6 -0.36 -0.18 -0.27 0.56 R.4.1.4.5 -0.11 0.03 0.20 -0.18 R.4.1.5.8 -0.19 0.39 -0.14 -0.09 R.4.2.1.6 -0.09 -0.09 0.26 -0.04 R.4.2.2.8 0.02 -0.04 -0.11 0.21 R.4.2.3.7 -0.03 0.41 -0.26 -0.10 R.4.2.4.8 0.46 -0.26 -0.25 -0.11 R.4.2.5.6 -0.09 -0.26 -0.10 0.38 R.4.2.6.5 0.07 -0.13 0.22 -0.22 R.4.3.1.8 -0.20 0.43 -0.18 -0.14 R.4.3.2.6 -0.17 0.52 -0.34 -0.13 R.4.3.3.6 0.56 0.01 -0.30 -0.27 R.4.3.4.7 -0.08 -0.34 0.40 -0.13 R.4.3.5.5 -0.06 -0.16 -0.25 0.34 R.5.1.1.9 -0.16 -0.23 0.51 -0.21 R.5.1.2.10 0.54 -0.28 -0.26 -0.23 R.5.1.3.11 0.60 -0.30 -0.32 -0.26 R.5.1.4.10 -0.23 0.45 -0.19 -0.22 R.5.1.5.12 -0.21 0.54 -0.32 -0.23 R.5.1.6.13 -0.03 -0.09 -0.14 0.27 R.5.1.7.10 -0.22 0.43 -0.27 -0.16 R.5.1.8.14 -0.10 -0.23 -0.38 0.59 R.5.2.1.11 -0.20 0.17 -0.03 -0.08 R.5.2.2.10 0.53 -0.30 -0.29 -0.20 R.5.2.3.9 -0.38 -0.06 0.44 -0.21 R.5.2.4.10 -0.31 0.57 -0.25 -0.24 R.5.2.5.12 0.59 -0.32 -0.30 -0.09 R.5.2.6.14 -0.28 -0.23 0.47 -0.19 R.5.2.7.13 -0.35 -0.14 -0.28 0.58 R.5.3.1.10 -0.10 0.47 -0.38 -0.13 R.5.3.2.10 0.38 -0.32 -0.16 -0.10 R.5.3.3.11 -0.03 -0.32 -0.23 0.45 R.5.3.4.9 0.50 -0.30 -0.22 0.00 R.5.3.5.13 -0.33 0.53 -0.20 -0.17 R.5.3.6.11 -0.16 -0.33 -0.23 0.47 R.5.3.7.12 0.05 -0.36 0.23 -0.13 R.5.3.8.9 0.01 0.35 -0.24 -0.16 R.5.3.9.14 0.50 -0.36 -0.20 -0.12 R.6.1.1.15 0.25 -0.11 0.03 -0.10 R.6.1.2.16 0.37 -0.17 -0.26 -0.05 R.6.1.3.17 -0.39 0.59 -0.16 -0.29 R.6.1.4.17 -0.39 0.45 -0.17 0.00 R.6.1.5.19 -0.03 -0.45 0.41 -0.06 R.6.1.6.20 0.49 -0.17 -0.18 -0.26 R.6.1.7.19 -0.15 0.29 -0.16 0.02 R.6.1.8.21 -0.16 -0.30 -0.03 0.40 R.6.2.1.17 -0.22 0.59 -0.41 -0.15 R.6.2.2.15 0.00 -0.03 0.21 -0.19 R.6.2.3.15 -0.25 0.53 -0.19 -0.31 R.6.2.4.17 0.58 -0.25 -0.34 -0.10
R.6.2.5.19 -0.10 -0.33 -0.28 0.51 R.6.2.6.21 -0.04 -0.27 -0.27 0.39 R.6.2.7.20 -0.25 0.32 -0.09 -0.10 R.6.2.8.16 -0.23 0.52 -0.33 -0.20 R.6.2.9.19 0.07 -0.25 -0.29 0.25 R.6.3.1.15 0.12 0.35 -0.31 -0.14 R.6.3.2.16 0.26 -0.11 -0.19 0.14 R.6.3.3.19 -0.25 -0.16 0.47 -0.07 R.6.3.4.17 -0.12 0.41 -0.19 -0.10 R.6.3.5.19 -0.30 -0.30 -0.15 0.57 R.6.3.6.18 0.42 -0.21 -0.23 -0.02 R.6.3.7.20 0.31 -0.14 -0.02 -0.09 R.6.3.8.17 -0.33 -0.20 0.62 -0.25 R.6.3.9.21 -0.16 -0.24 -0.05 0.43 R.7.1.1.22 0.04 -0.13 -0.27 0.31 R.7.1.2.23 -0.01 0.41 -0.34 -0.26 R.7.1.3.24 -0.14 -0.13 0.42 -0.25 R.7.1.4.24 0.14 0.16 -0.21 -0.07 R.7.1.5.22 0.40 -0.17 -0.10 -0.21 R.7.1.5.25 R.7.1.6.22 -0.18 -0.18 -0.19 0.48 R.7.2.1.24 0.05 -0.26 0.28 -0.09 R.7.2.2.22 -0.16 0.33 -0.29 0.08 R.7.2.3.24 0.48 -0.14 -0.37 -0.07 R.7.2.4.24 0.04 -0.27 -0.11 0.31 R.7.2.5.23 -0.05 -0.19 0.27 -0.14 R.7.2.6.23 -0.28 0.20 -0.29 0.07 R.7.3.1.22 0.09 -0.27 0.29 -0.10 R.7.3.2.24 0.45 -0.12 -0.31 -0.13 R.7.3.3.22 -0.06 0.42 -0.18 -0.25 R.7.3.4.23 -0.15 0.07 -0.24 0.31 R.7.3.5.22 0.26 -0.07 -0.40 0.26
Table E-9: Locating Information | Item Difficulty and Discrimination for All Items
ITEM NUMBER Discrimination Difficulty L.3.0.1.1 0.39 0.63 L.3.0.2.1 0.36 0.48 L.3.0.3.1 0.41 0.42 L.3.1.4.2 0.41 0.85 L.3.1.5.2 0.37 0.71 L.3.2.6.2 0.54 0.74 L.3.2.7.2 0.36 0.46 L.3.2.8.2 0.42 0.73 L.3.3.9.2 0.47 0.76 L.3.3.10.2 0.42 0.80 L.3.4.11.2 0.46 0.84 L.3.4.12.2 0.42 0.74 L.4.0.1.3 0.46 0.70 L.4.0.2.3 0.48 0.76 L.4.0.3.3 0.49 0.68 L.4.1.4.4 0.47 0.60 L.4.1.5.4 0.37 0.42 L.4.1.6.4 0.48 0.52 L.4.0.7.5 0.56 0.47 L.4.0.8.5 0.54 0.49
L.4.0.9.5 0.57 0.82 L.4.0.10.6 0.54 0.73 L.4.0.11.6 0.45 0.75 L.4.0.12.6 0.56 0.72 L.4.0.13.7 -0.22** 0.20 L.4.0.14.7 0.59 0.54 L.4.0.15.7 0.60 0.66 L.5.1.1.8 0.28* 0.42 L.5.1.2.8 0.43 0.56 L.5.0.3.8 0.31 0.26 L.5.2.4.8 0.48 0.46 L.5.2.5.10 0.42 0.53 L.5.3.6.10 0.35 0.53 L.5.3.7.11 0.52 0.46 L.5.3.8.10 0.30 0.61 L.5.4.9.11 0.46 0.54 L.5.4.10.9 0.53 0.45 L.5.4.11.11 0.49 0.48 L.5.5.12.11 0.25* 0.28 L.5.5.13.11 0.37 0.27 L.6.6.14.8 0.46 0.74 L.6.6.15.11 0.59 0.73 L.6.6.16.8 0.32 0.48 L.6.6.17.8 0.31 0.39 L.6.0.18.10 0.41 0.50 L.6.7.19.8 0.14* 0.36 L.6.7.20.7 0.55 0.53 L.6.0.21.10 0.22* 0.39 L.6.0.22.9 0.22* 0.42 L.6.0.23.10 0.27* 0.29 L.6.7.24.11 -0.03** 0.03*** L.6.7.25.9 0.38 0.42 L.6.7.26.9 0.42 0.64
Table E-10: Locating Information | Number of Examinees Choosing Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D L.3.0.1.1 69 202 37 6 L.3.0.2.1 14 81 144 54 L.3.0.3.1 33 50 87 128 L.3.1.4.2 4 17 265 19 L.3.1.5.2 35 212 26 14 L.3.2.6.2 14 26 33 228 L.3.2.7.2 49 94 13 142 L.3.2.8.2 18 17 221 40 L.3.3.9.2 11 225 37 17 L.3.3.10.2 19 10 234 22 L.3.4.11.2 13 245 15 10 L.3.4.12.2 222 51 13 10 L.4.0.1.3 213 45 24 10 L.4.0.2.3 26 216 16 16 L.4.0.3.3 70 58 393 36 L.4.1.4.4 20 19 179 74 L.4.1.5.4 23 109 17 118
L.4.1.6.4 84 18 23 146 L.4.0.7.5 56 75 137 17 L.4.0.8.5 27 39 66 135 L.4.0.9.5 460 35 32 18 L.4.0.10.6 34 24 206 9 L.4.0.11.6 13 17 201 30 L.4.0.12.6 396 63 51 23 L.4.0.13.7 43 55 33 138 L.4.0.14.7 30 52 33 142 L.4.0.15.7 24 29 29 180 L.5.1.1.8 36 48 116 71 L.5.1.2.8 146 51 35 22 L.5.0.3.8 59 62 68 69 L.5.2.4.8 125 31 65 39 L.5.2.5.10 47 146 49 24 L.5.3.6.10 23 134 64 19 L.5.3.7.11 33 82 24 126 L.5.3.8.10 41 29 155 21 L.5.4.9.11 17 34 58 132 L.5.4.10.9 59 37 33 110 L.5.4.11.11 124 61 33 26 L.5.5.12.11 144 28 76 12 L.5.5.13.11 64 126 25 15 L.6.6.14.8 196 18 27 17 L.6.6.15.11 29 14 21 178 L.6.6.16.8 33 116 41 46 L.6.6.17.8 39 62 98 45 L.6.0.18.10 45 121 39 27 L.6.7.19.8 38 59 83 51 L.6.7.20.7 132 41 36 29 L.6.0.21.10 26 84 95 33 L.6.0.22.9 47 52 103 35 L.6.0.23.10 43 79 33 68 L.6.7.24.11 96 32 91 8 L.6.7.25.9 95 39 49 31 L.6.7.26.9 29 143 38 7
Table E-11: Locating Information | Percent of Examinees Choosing Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D L.3.0.1.1 0.22 0.63 0.12 0.02 L.3.0.2.1 0.05 0.27 0.48 0.18 L.3.0.3.1 0.11 0.16 0.28 0.42 L.3.1.4.2 0.01 0.05 0.85 0.06 L.3.1.5.2 0.12 0.71 0.09 0.05 L.3.2.6.2 0.05 0.08 0.11 0.74 L.3.2.7.2 0.16 0.31 0.04 0.46 L.3.2.8.2 0.06 0.06 0.73 0.13 L.3.3.9.2 0.04 0.76 0.12 0.06 L.3.3.10.2 0.06 0.03 0.80 0.08 L.3.4.11.2 0.04 0.84 0.05 0.03 L.3.4.12.2 0.74 0.17 0.04 0.03 L.4.0.1.3 0.70 0.15 0.08 0.03 L.4.0.2.3 0.09 0.76 0.06 0.06
L.4.0.3.3 0.12 0.10 0.68 0.06 L.4.1.4.4 0.07 0.06 0.60 0.25 L.4.1.5.4 0.08 0.39 0.06 0.42 L.4.1.6.4 0.30 0.06 0.08 0.52 L.4.0.7.5 0.19 0.26 0.47 0.06 L.4.0.8.5 0.10 0.14 0.24 0.49 L.4.0.9.5 0.82 0.06 0.06 0.03 L.4.0.10.6 0.12 0.08 0.73 0.03 L.4.0.11.6 0.05 0.06 0.75 0.11 L.4.0.12.6 0.72 0.11 0.09 0.04 L.4.0.13.7 0.15 0.20 0.12 0.49 L.4.0.14.7 0.11 0.20 0.13 0.54 L.4.0.15.7 0.09 0.11 0.11 0.66 L.5.1.1.8 0.13 0.17 0.42 0.26 L.5.1.2.8 0.56 0.20 0.13 0.08 L.5.0.3.8 0.22 0.23 0.26 0.26 L.5.2.4.8 0.46 0.12 0.24 0.14 L.5.2.5.10 0.17 0.53 0.18 0.09 L.5.3.6.10 0.09 0.54 0.26 0.08 L.5.3.7.11 0.12 0.30 0.09 0.46 L.5.3.8.10 0.16 0.11 0.61 0.08 L.5.4.9.11 0.07 0.14 0.24 0.54 L.5.4.10.9 0.24 0.15 0.14 0.45 L.5.4.11.11 0.48 0.24 0.13 0.10 L.5.5.12.11 0.54 0.10 0.28 0.04 L.5.5.13.11 0.27 0.53 0.10 0.06 L.6.6.14.8 0.74 0.07 0.10 0.06 L.6.6.15.11 0.12 0.06 0.09 0.73 L.6.6.16.8 0.14 0.48 0.17 0.19 L.6.6.17.8 0.15 0.25 0.39 0.18 L.6.0.18.10 0.19 0.50 0.16 0.11 L.6.7.19.8 0.16 0.25 0.36 0.22 L.6.7.20.7 0.53 0.17 0.15 0.12 L.6.0.21.10 0.11 0.35 0.39 0.14 L.6.0.22.9 0.19 0.21 0.42 0.14 L.6.0.23.10 0.19 0.34 0.14 0.29 L.6.7.24.11 0.41 0.14 0.39 0.03 L.6.7.25.9 0.42 0.17 0.22 0.14 L.6.7.26.9 0.13 0.64 0.17 0.03
Table E-12: Locating Information | Point Biserials for Each Answer Option for Each Item
Answer Option ITEM NUMBER A B C D L.3.0.1.1 -0.21 0.39 -0.22 -0.09 L.3.0.2.1 -0.22 -0.17 0.36 -0.10 L.3.0.3.1 -0.23 -0.11 -0.12 0.41 L.3.1.4.2 -0.16 -0.27 0.41 -0.19 L.3.1.5.2 -0.12 0.37 -0.24 -0.08 L.3.2.6.2 -0.21 -0.24 -0.30 0.54 L.3.2.7.2 -0.23 -0.04 -0.17 0.36 L.3.2.8.2 -0.19 -0.18 0.42 -0.16 L.3.3.9.2 -0.15 0.47 -0.30 -0.16 L.3.3.10.2 -0.22 -0.22 0.42 -0.20 L.3.4.11.2 -0.28 0.46 -0.26 -0.13 L.3.4.12.2 0.42 -0.25 -0.18 -0.18
L.4.0.1.3 0.46 -0.20 -0.33 -0.13 L.4.0.2.3 -0.26 0.48 -0.19 -0.19 L.4.0.3.3 -0.15 -0.33 0.49 -0.15 L.4.1.4.4 -0.32 -0.12 0.47 -0.23 L.4.1.5.4 -0.12 -0.09 -0.27 0.37 L.4.1.6.4 -0.11 -0.31 -0.28 0.48 L.4.0.7.5 -0.21 -0.24 0.56 -0.24 L.4.0.8.5 -0.20 -0.28 -0.22 0.54 L.4.0.9.5 0.57 -0.33 -0.31 -0.23 L.4.0.10.6 -0.27 -0.34 0.54 -0.14 L.4.0.11.6 -0.19 -0.32 0.45 -0.14 L.4.0.12.6 0.56 -0.22 -0.33 -0.28 L.4.0.13.7 -0.11 -0.22 -0.28 0.49 L.4.0.14.7 -0.25 -0.36 -0.16 0.59 L.4.0.15.7 -0.32 -0.33 -0.15 0.60 L.5.1.1.8 -0.33 -0.11 0.28 0.06 L.5.1.2.8 0.43 -0.19 -0.18 -0.22 L.5.0.3.8 0.08 -0.17 -0.17 0.31 L.5.2.4.8 0.48 -0.24 -0.19 -0.10 L.5.2.5.10 -0.01 0.42 -0.33 -0.14 L.5.3.6.10 -0.18 0.35 -0.05 -0.25 L.5.3.7.11 -0.08 -0.30 -0.23 0.52 L.5.3.8.10 -0.12 -0.11 0.30 -0.18 L.5.4.9.11 -0.29 -0.21 -0.15 0.46 L.5.4.10.9 -0.15 -0.28 -0.22 0.53 L.5.4.11.11 0.49 -0.16 -0.28 -0.09 L.5.5.12.11 0.04 -0.11 0.25 -0.23 L.5.5.13.11 0.37 -0.03 -0.21 -0.20 L.6.6.14.8 0.46 -0.29 -0.25 -0.14 L.6.6.15.11 -0.29 -0.31 -0.34 0.59 L.6.6.16.8 -0.21 0.32 -0.23 0.04 L.6.6.17.8 0.03 -0.32 0.31 0.00 L.6.0.18.10 -0.21 0.41 -0.16 -0.13 L.6.7.19.8 -0.12 -0.05 0.14 0.01 L.6.7.20.7 0.55 -0.36 -0.26 -0.01 L.6.0.21.10 -0.25 0.01 0.22 -0.07 L.6.0.22.9 -0.02 -0.16 0.22 -0.04 L.6.0.23.10 -0.13 -0.06 -0.10 0.27 L.6.7.24.11 0.34 -0.26 -0.12 -0.03 L.6.7.25.9 0.38 -0.23 -0.14 -0.04 L.6.7.26.9 -0.29 0.42 -0.15 -0.14
Appendix F
WIN Ready to Work Assessments | Test Form Composition
Included here are the complete details of form construction for each content area. The tables below identify the specific items selected for each form, their item statistics (discrimination and difficulty), their content specification (learning objective), and whether each item is associated with a passage or graphic. Two tables are provided for each content area, one for each of the two forms constructed.
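For reference, the difficulty values reported in this appendix are classical p-values (the proportion of field test examinees answering the item correctly), and the discrimination values correspond to the point-biserial of the keyed answer option reported in the Appendix E tables. The short Python sketch below is illustrative only: it is not WIN's production analysis code, the function name item_statistics is hypothetical, and the use of a corrected (item-removed) total score is an assumption about the exact point-biserial variant applied.

# Illustrative sketch of the classical item statistics reported in this appendix.
# Assumes a 0/1-scored examinee-by-item response matrix; not WIN's actual code.
import numpy as np

def item_statistics(responses: np.ndarray) -> list[dict]:
    """responses: n_examinees x n_items array of 0/1 item scores."""
    total = responses.sum(axis=1)
    stats = []
    for j in range(responses.shape[1]):
        item = responses[:, j]
        rest = total - item                      # total score with item j removed
        difficulty = float(item.mean())          # classical p-value (proportion correct)
        discrimination = float(np.corrcoef(item, rest)[0, 1])  # point-biserial vs. rest score
        stats.append({"item": j + 1,
                      "difficulty": round(difficulty, 2),
                      "discrimination": round(discrimination, 2)})
    return stats

# Tiny demo with made-up responses (5 examinees x 3 items)
demo = np.array([[1, 0, 1],
                 [1, 1, 1],
                 [0, 0, 1],
                 [1, 1, 0],
                 [0, 0, 0]])
print(item_statistics(demo))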
Table F-1: Applied Mathematics | Items Selected for Form 1
Level | Accession Number | Graphic (G) or Passage | Discrimination | Difficulty | Learning Objective
3 | E.M.3.0.1.1 | - | 0.25 | 0.63 | 1
3 | E.M.3.0.7.2 | - | 0.26 | 0.68 | 2
3 | E.M.3.0.11.4 | - | 0.23 | 0.81 | 4
3 | E.M.3.0.10.3 | - | 0.26 | 0.59 | 3
4 | E.M.4.0.1.5 | - | 0.27 | 0.63 | 5
4 | E.M.4.0.4.6 | - | 0.31 | 0.81 | 6
4 | E.M.4.0.7.7 | G.E.M.4.0.7.0 | 0.28 | 0.51 | 7
4 | E.M.4.0.10.8 | - | 0.36 | 0.66 | 8
4 | E.M.4.0.13.9 | - | 0.28 | 0.83 | 9
4 | E.M.4.0.16.10 | - | 0.28 | 0.61 | 10
4 | E.M.4.0.19.11 | G.E.M.4.0.19.0 | 0.14 | 0.35 | 11
5 | E.M.5.0.3.12 | G.E.M.5.0.3.0 | 0.23 | 0.66 | 12
5 | E.M.5.0.6.13 | - | 0.22 | 0.69 | 13
5 | E.M.5.0.8.14 | G.E.M.5.0.8.0 | 0.17 | 0.48 | 14
5 | E.M.5.0.11.15 | - | 0.23 | 0.78 | 15
5 | E.M.5.0.14.16 | G.E.M.5.0.14.0 | 0.25 | 0.34 | 16
5 | E.M.5.0.16.17 | G.E.M.5.0.16.0 | 0.24 | 0.53 | 17
5 | E.M.5.0.19.18 | - | 0.19 | 0.55 | 18
6 | E.M.6.0.1.19 | - | 0.16 | 0.43 | 19
6 | E.M.6.0.7.20 | - | 0.12 | 0.35 | 20
6 | E.M.6.0.8.21 | - | 0.22 | 0.28 | 21
6 | E.M.6.0.13.22 | - | 0.14 | 0.24 | 22
6 | E.M.6.0.19.23 | G.E.M.6.0.19.0 | 0.19 | 0.37 | 23
6 | E.M.6.0.21.24 | - | 0.18 | 0.35 | 24
6 | E.M.6.0.25.25 | G.E.M.6.0.25.0 | 0.06 | 0.36 | 25
6 | E.M.6.0.26.26 | - | 0.20 | 0.51 | 26
6 | E.M.6.0.29.27 | - | 0.21 | 0.24 | 27
7 | E.M.7.0.3.28 | - | 0.17 | 0.49 | 28
7 | E.M.7.0.5.29 | G.E.M.7.0.5.0 | 0.15 | 0.60 | 29
7 | E.M.7.0.8.30 | - | 0.13 | 0.54 | 30
7 | E.M.7.0.13.31 | - | 0.11 | 0.26 | 31
7 | E.M.7.0.15.32 | G.E.M.7.0.15.0 | 0.20 | 0.36 | 32
7 | E.M.7.0.17.33 | G.E.M.7.0.17.0 | 0.20 | 0.41 | 33
7 | E.M.7.0.19.34 | - | 0.18 | 0.46 | 34
Table F-2: Applied Mathematics | Items Selected for Form 2
Level | Accession Number | Graphic (G) or Passage | Discrimination | Difficulty | Learning Objective
3 | E.M.3.0.2.1 | - | 0.19 | 0.85 | 1
3 | E.M.3.0.5.2 | - | 0.46 | 0.58 | 2
3 | E.M.3.0.8.3 | - | 0.31 | 0.71 | 3
3 | E.M.3.0.12.4 | - | 0.25 | 0.82 | 4
4 | E.M.4.0.2.5 | - | 0.31 | 0.77 | 5
4 | E.M.4.0.5.6 | - | 0.25 | 0.76 | 6
4 | E.M.4.0.9.7 | - | 0.28 | 0.70 | 7
4 | E.M.4.0.12.8 | - | 0.18 | 0.77 | 8
4 | E.M.4.0.14.9 | - | 0.32 | 0.62 | 9
4 | E.M.4.0.17.10 | - | 0.30 | 0.47 | 10
4 | E.M.4.0.20.11 | G.E.M.4.0.20.0 | 0.28 | 0.40 | 11
5 | E.M.5.0.1.12 | G.E.M.5.0.1.0 | 0.33 | 0.62 | 12
5 | E.M.5.0.5.13 | - | 0.29 | 0.70 | 13
5 | E.M.5.0.9.14 | G.E.M.5.0.9.0 | 0.30 | 0.50 | 14
5 | E.M.5.0.10.15 | - | 0.17 | 0.65 | 15
5 | E.M.5.0.15.16 | G.E.M.5.0.15.0 | 0.15 | 0.31 | 16
5 | E.M.5.0.17.17 | - | 0.30 | 0.62 | 17
5 | E.M.5.0.21.18 | - | 0.20 | 0.53 | 18
6 | E.M.6.0.3.19 | - | 0.22 | 0.44 | 19
6 | E.M.6.0.7.20 | - | 0.12 | 0.35 | 20
6 | E.M.6.0.11.21 | - | 0.22 | 0.37 | 21
6 | E.M.6.0.15.22 | - | 0.28 | 0.37 | 22
6 | E.M.6.0.18.23 | G.E.M.6.0.18.0 | 0.11 | 0.47 | 23
6 | E.M.6.0.22.24 | G.E.M.6.0.22.0 | 0.19 | 0.42 | 24
6 | E.M.6.0.25.25 | G.E.M.6.0.25.0 | 0.06 | 0.36 | 25
6 | E.M.6.0.27.26 | - | 0.14 | 0.50 | 26
6 | E.M.6.0.29.27 | - | 0.21 | 0.24 | 27
7 | E.M.7.0.3.28 | - | 0.17 | 0.49 | 28
7 | E.M.7.0.5.29 | G.E.M.7.0.5.0 | 0.15 | 0.60 | 29
7 | E.M.7.0.9.30 | - | 0.14 | 0.37 | 30
7 | E.M.7.0.13.31 | - | 0.11 | 0.26 | 31
7 | E.M.7.0.15.32 | G.E.M.7.0.15.0 | 0.20 | 0.36 | 32
7 | E.M.7.0.17.33 | G.E.M.7.0.17.0 | 0.20 | 0.41 | 33
7 | E.M.7.0.19.34 | - | 0.18 | 0.46 | 34
Table F-3: Reading for Information | Items Selected for Form 1
Level | Accession Number | Graphic (G) or Passage | Discrimination | Difficulty | Learning Objective
3 | E.R.3.1.1.1 | E.R.3.1.0.0 | 0.32 | 0.87 | 1
3 | E.R.3.1.2.2 | E.R.3.1.0.0 | 0.52 | 0.71 | 2
3 | E.R.3.1.3.3 | E.R.3.1.0.0 | 0.46 | 0.78 | 3
3 | E.R.3.1.4.4 | E.R.3.1.0.0 | 0.39 | 0.81 | 4
3 | E.R.3.3.2.3 | E.R.3.3.0.0 | 0.39 | 0.81 | 3
4 | E.R.4.2.3.7 | E.R.4.2.0.0 | 0.41 | 0.39 | 7
4 | E.R.4.2.4.8 | E.R.4.2.0.0 | 0.46 | 0.80 | 8
4 | E.R.4.2.5.6 | E.R.4.2.0.0 | 0.38 | 0.57 | 6
4 | E.R.4.3.1.8 | E.R.4.3.0.0 | 0.43 | 0.84 | 8
4 | E.R.4.3.3.6 | E.R.4.3.0.0 | 0.56 | 0.68 | 6
4 | E.R.4.3.5.5 | E.R.4.3.0.0 | 0.34 | 0.42 | 5
5 | E.R.5.1.1.9 | E.R.5.1.0.0 | 0.51 | 0.53 | 9
5 | E.R.5.1.3.11 | E.R.5.1.0.0 | 0.60 | 0.78 | 11
5 | E.R.5.1.8.14 | E.R.5.1.0.0 | 0.59 | 0.64 | 14
5 | E.R.5.2.2.10 | E.R.5.2.0.0 | 0.53 | 0.75 | 10
5 | E.R.5.2.3.9 | E.R.5.2.0.0 | 0.44 | 0.74 | 9
5 | E.R.5.2.4.10 | E.R.5.2.0.0 | 0.57 | 0.59 | 10
5 | E.R.5.2.5.12 | E.R.5.2.0.0 | 0.60 | 0.47 | 12
5 | E.R.5.2.6.14 | E.R.5.2.0.0 | 0.47 | 0.61 | 14
5 | E.R.5.2.7.13 | E.R.5.2.0.0 | 0.58 | 0.62 | 13
6 | E.R.6.2.1.17 | E.R.6.2.0.0 | 0.59 | 0.63 | 17
6 | E.R.6.2.3.15 | E.R.6.2.0.0 | 0.53 | 0.58 | 15
6 | E.R.6.2.5.19 | E.R.6.2.0.0 | 0.51 | 0.74 | 19
6 | E.R.6.2.6.21 | E.R.6.2.0.0 | 0.39 | 0.66 | 21
6 | E.R.6.2.8.16 | E.R.6.2.0.0 | 0.52 | 0.66 | 16
6 | E.R.6.3.4.17 | E.R.6.3.0.0 | 0.41 | 0.68 | 17
6 | E.R.6.3.5.19 | E.R.6.3.0.0 | 0.57 | 0.84 | 19
6 | E.R.6.3.6.18 | E.R.6.3.0.0 | 0.42 | 0.51 | 18
6 | E.R.6.3.7.20 | E.R.6.3.0.0 | 0.31 | 0.56 | 20
7 | E.R.7.2.2.22 | E.R.7.2.0.0 | 0.33 | 0.41 | 22
7 | E.R.7.2.3.24 | E.R.7.2.0.0 | 0.48 | 0.51 | 24
7 | E.R.7.2.4.24 | E.R.7.2.0.0 | 0.31 | 0.34 | 24
7 | E.R.7.2.5.23 | E.R.7.2.0.0 | 0.27 | 0.74 | 23
Table F-4: Reading for Information | Items Selected for Form 2
Level | Accession Number | Graphic (G) or Passage | Discrimination | Difficulty | Learning Objective
3 | E.R.3.2.2.3 | E.R.3.2.0.0 | 0.35 | 0.73 | 3
3 | E.R.3.2.3.1 | E.R.3.2.0.0 | 0.43 | 0.72 | 1
3 | E.R.3.2.5.5 | E.R.3.2.0.0 | 0.34 | 0.71 | 5
3 | E.R.3.2.6.2 | E.R.3.2.0.0 | 0.43 | 0.79 | 2
3 | E.R.3.3.1.4 | E.R.3.3.0.0 | 0.37 | 0.66 | 4
4 | E.R.4.1.1.6 | E.R.4.1.0.0, G.E.R.4.1.0.0.P.A, G.E.R.4.1.0.0.P.B | 0.37 | 0.63 | 6
4 | E.R.4.1.2.7 | E.R.4.1.0.0, G.E.R.4.1.0.0.P.A, G.E.R.4.1.0.0.P.B | 0.47 | 0.53 | 7
4 | E.R.4.1.5.8 | E.R.4.1.0.0, G.E.R.4.1.0.0.P.A, G.E.R.4.1.0.0.P.B | 0.39 | 0.63 | 8
4 | E.R.4.3.2.6 | E.R.4.3.0.0 | 0.52 | 0.71 | 6
4 | E.R.4.3.4.7 | E.R.4.3.0.0 | 0.40 | 0.68 | 7
4 | E.R.4.3.5.5 | E.R.4.3.0.0 | 0.34 | 0.42 | 5
5 | E.R.5.1.4.10 | E.R.5.1.0.0 | 0.45 | 0.68 | 10
5 | E.R.5.1.5.12 | E.R.5.1.0.0 | 0.54 | 0.64 | 12
5 | E.R.5.1.8.14 | E.R.5.1.0.0 | 0.59 | 0.64 | 14
5 | E.R.5.3.2.10 | E.R.5.3.0.0 | 0.38 | 0.82 | 10
5 | E.R.5.3.3.11 | E.R.5.3.0.0 | 0.45 | 0.85 | 11
5 | E.R.5.3.4.9 | E.R.5.3.0.0 | 0.50 | 0.59 | 9
5 | E.R.5.3.5.13 | E.R.5.3.0.0 | 0.53 | 0.75 | 13
5 | E.R.5.3.6.11 | E.R.5.3.0.0 | 0.47 | 0.81 | 11
5 | E.R.5.3.9.14 | E.R.5.3.0.0 | 0.50 | 0.54 | 14
6 | E.R.6.1.2.16 | E.R.6.1.0.0 | 0.37 | 0.45 | 16
6 | E.R.6.1.4.17 | E.R.6.1.0.0 | 0.45 | 0.51 | 17
6 | E.R.6.1.5.19 | E.R.6.1.0.0 | 0.41 | 0.54 | 19
6 | E.R.6.1.6.20 | E.R.6.1.0.0 | 0.49 | 0.67 | 20
6 | E.R.6.1.8.21 | E.R.6.1.0.0 | 0.40 | 0.38 | 21
6 | E.R.6.3.1.15 | E.R.6.3.0.0 | 0.35 | 0.44 | 15
6 | E.R.6.3.5.19 | E.R.6.3.0.0 | 0.57 | 0.84 | 19
6 | E.R.6.3.6.18 | E.R.6.3.0.0 | 0.42 | 0.51 | 18
6 | E.R.6.3.8.17 | E.R.6.3.0.0 | 0.62 | 0.62 | 17
7 | E.R.7.1.2.23 | E.R.7.1.0.0 | 0.41 | 0.68 | 23
7 | E.R.7.1.3.24 | E.R.7.1.0.0 | 0.42 | 0.60 | 24
7 | E.R.7.1.5.22 | E.R.7.1.0.0 | 0.40 | 0.46 | 22
7 | E.R.7.1.6.22 | E.R.7.1.0.0 | 0.48 | 0.55 | 22
Table F-5: Locating Information | Items Selected for Form 1
Level | Accession Number | Graphic (G) or Passage | Discrimination | Difficulty | Learning Objective
3 | E.L.3.0.1.1 | G.E.L.3.0.1.0 | 0.39 | 0.63 | 1
3 | E.L.3.0.3.1 | G.E.L.3.0.3.0 | 0.41 | 0.42 | 1
3 | E.L.3.1.4.2 | G.E.L.3.1.4.0 | 0.41 | 0.85 | 2
3 | E.L.3.2.8.2 | G.E.L.3.2.6.0, G.E.L.3.2.8.2.A, G.E.L.3.2.8.2.B, G.E.L.3.2.8.2.C, G.E.L.3.2.8.2.D | 0.42 | 0.73 | 2
3 | E.L.3.4.11.2 | G.E.L.3.4.11.0 | 0.46 | 0.84 | 2
4 | E.L.4.0.1.3 | G.E.L.4.0.1.0, G.E.L.4.0.1.0-A, G.E.L.4.0.1.0-B, G.E.L.4.0.1.0-C, G.E.L.4.0.1.0-D | 0.46 | 0.70 | 3
4 | E.L.4.0.3.3 | G.E.L.4.0.3.0, G.E.L.4.0.3.0-A, G.E.L.4.0.3.0-B, G.E.L.4.0.3.0-C, G.E.L.4.0.3.0-D | 0.49 | 0.68 | 3
4 | E.L.4.1.4.4 | G.E.L.4.1.4.0 | 0.47 | 0.60 | 4
4 | E.L.4.0.7.5 | G.E.L.4.0.7.0 | 0.56 | 0.47 | 5
4 | E.L.4.0.9.5 | G.E.L.4.0.9.0 | 0.57 | 0.82 | 5
4 | E.L.4.0.10.6 | G.E.L.4.0.10.0, G.E.L.4.0.10.0-A, G.E.L.4.0.10.0-B, G.E.L.4.0.10.0-C, G.E.L.4.0.10.0-D | 0.54 | 0.73 | 6
4 | E.L.4.0.12.6 | G.E.L.4.0.12.0 | 0.56 | 0.72 | 6
4 | E.L.4.0.14.7 | G.E.L.4.0.14.0-A, G.E.L.4.0.14.0-B | 0.59 | 0.54 | 7
4 | E.L.4.0.15.7 | G.E.L.4.0.15.0 | 0.60 | 0.66 | 7
5 | E.L.5.2.4.8 | G.E.L.5.2.4.0 | 0.48 | 0.46 | 8
5 | E.L.5.2.5.10 | G.E.L.5.2.4.0 | 0.42 | 0.53 | 10
5 | E.L.5.3.7.11 | G.E.L.5.3.6.0 | 0.52 | 0.46 | 11
5 | E.L.5.4.11.11 | G.E.L.5.4.9.0 | 0.49 | 0.48 | 11
6 | E.L.6.7.20.7 | G.E.L.6.7.19.0 | 0.55 | 0.53 | 7
6 | E.L.6.0.23.10 | G.E.L.6.0.23.0 | 0.27 | 0.29 | 10
6 | E.L.6.7.25.9 | G.E.L.6.7.24.0 | 0.38 | 0.42 | 9
Table F-6: Locating Information | Items Selected for Form 2
Level | Accession Number | Graphic (G) or Passage | Discrimination | Difficulty | Learning Objective
3 | E.L.3.0.2.1 | G.E.L.3.0.2.0 | 0.36 | 0.48 | 1
3 | E.L.3.0.3.1 | G.E.L.3.0.3.0 | 0.41 | 0.42 | 1
3 | E.L.3.1.5.2 | G.E.L.3.1.4.0 | 0.37 | 0.71 | 2
3 | E.L.3.2.8.2 | G.E.L.3.2.6.0, G.E.L.3.2.8.2.A, G.E.L.3.2.8.2.B, G.E.L.3.2.8.2.C, G.E.L.3.2.8.2.D | 0.42 | 0.73 | 2
3 | E.L.3.4.12.2 | G.E.L.3.4.11.0 | 0.42 | 0.74 | 2
4 | E.L.4.0.2.3 | G.E.L.4.0.2.0, G.E.L.4.0.2.0-A, G.E.L.4.0.2.0-B, G.E.L.4.0.2.0-C, G.E.L.4.0.2.0-D | 0.48 | 0.76 | 3
4 | E.L.4.0.3.3 | G.E.L.4.0.3.0, G.E.L.4.0.3.0-A, G.E.L.4.0.3.0-B, G.E.L.4.0.3.0-C, G.E.L.4.0.3.0-D | 0.49 | 0.68 | 3
4 | E.L.4.1.6.4 | G.E.L.4.1.4.0 | 0.48 | 0.52 | 4
4 | E.L.4.0.8.5 | G.E.L.4.0.8.0 | 0.54 | 0.49 | 5
4 | E.L.4.0.9.5 | G.E.L.4.0.9.0 | 0.57 | 0.82 | 5
4 | E.L.4.0.11.6 | G.E.L.4.0.11.0 | 0.45 | 0.75 | 6
4 | E.L.4.0.12.6 | G.E.L.4.0.12.0 | 0.56 | 0.72 | 6
4 | E.L.4.0.14.7 | G.E.L.4.0.14.0-A, G.E.L.4.0.14.0-B | 0.59 | 0.54 | 7
4 | E.L.4.0.15.7 | G.E.L.4.0.15.0 | 0.60 | 0.66 | 7
5 | E.L.5.1.2.8 | G.E.L.5.1.1.0 | 0.43 | 0.56 | 8
5 | E.L.5.3.6.10 | G.E.L.5.3.6.0, G.E.L.5.0.6.0 | 0.35 | 0.53 | 10
5 | E.L.5.4.9.11 | G.E.L.5.4.9.0 | 0.46 | 0.54 | 11
5 | E.L.5.5.13.11 | G.E.L.5.5.12.0 | 0.37 | 0.27 | 11
6 | E.L.6.0.18.10 | G.E.L.6.0.18.0 | 0.41 | 0.50 | 10
6 | E.L.6.7.20.7 | G.E.L.6.7.19.0 | 0.55 | 0.53 | 7
6 | E.L.6.7.26.9 | G.E.L.6.7.24.0 | 0.42 | 0.64 | 9
Appendix G
WIN Essential Soft Skills Assessment | Test Form Composition
The following table lists each field test item, the action taken (keep for the final form or drop), and the corresponding rationale.
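The keep/drop actions in Table G-1 are driven by the field test difficulty and discrimination statistics. The manual does not state numeric cut-offs in this appendix, so the thresholds in the Python sketch below are hypothetical placeholders; it simply illustrates how actions and rationale strings of the kind shown in the table could be generated.

# Hypothetical illustration of a keep/drop rule for field test items.
# The thresholds and the example statistics are placeholders, not WIN's documented criteria.
from typing import NamedTuple

class ItemStats(NamedTuple):
    item: int
    difficulty: float       # proportion of examinees answering correctly
    discrimination: float   # item-total point-biserial

def review_item(s: ItemStats,
                min_difficulty: float = 0.20,   # assumed floor below which an item is "difficult"
                poor_disc: float = 0.15,        # assumed ceiling for "poor discrimination"
                good_disc: float = 0.25         # assumed floor for "good discrimination"
                ) -> tuple[str, str]:
    if s.discrimination < poor_disc and s.difficulty < min_difficulty:
        return "Drop", "Difficult; poor discrimination"
    if s.discrimination < poor_disc:
        return "Drop", "Poor discrimination"
    if s.discrimination < good_disc:
        return "Drop", "Moderate discrimination"
    return "Keep", "Good difficulty; good discrimination"

# Made-up example values showing the rule in action
for s in [ItemStats(101, 0.55, 0.42), ItemStats(102, 0.12, 0.08), ItemStats(103, 0.40, 0.20)]:
    action, rationale = review_item(s)
    print(s.item, action, rationale)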
Table G-1: WIN Essential Soft Skills Assessment | Items Selected for Form
Item # | Action | Rationale
1 | Drop | Moderate discrimination
2 | Keep | Good difficulty; good discrimination
3 | Drop | Difficult; poor discrimination
4 | Keep | Good difficulty; good discrimination
5 | Drop | Poor discrimination
6 | Keep | Good difficulty; good discrimination
7 | Keep | Good difficulty; good discrimination
8 | Keep | Good difficulty; good discrimination
9 | Keep | Good difficulty; good discrimination
10 | Keep | Good difficulty; good discrimination
11 | Keep | Good difficulty; good discrimination
12 | Keep | Good difficulty; good discrimination
13 | Keep | Good difficulty; good discrimination
14 | Drop | Poor discrimination
15 | Keep | Good difficulty; good discrimination
16 | Keep | Good difficulty; good discrimination
17 | Keep | Good difficulty; good discrimination
18 | Keep | Good difficulty; good discrimination
19 | Keep | Good difficulty; good discrimination
20 | Keep | Good difficulty; good discrimination
21 | Keep | Good difficulty; good discrimination
22 | Keep | Good difficulty; good discrimination
23 | Keep | Good difficulty; good discrimination
24 | Keep | Good difficulty; good discrimination
25 | Keep | Good difficulty; good discrimination
26 | Drop | Difficult; poor discrimination
27 | Keep | Good difficulty; good discrimination
28 | Keep | Good difficulty; good discrimination
29 | Keep | Good difficulty; good discrimination
30 | Drop | Moderate discrimination
31 | Drop | Difficult; poor discrimination
32 | Drop | Poor discrimination
33 | Keep | Good difficulty; good discrimination
34 | Keep | Good difficulty; good discrimination
35 | Drop | Moderate discrimination
36 | Keep | Good difficulty; good discrimination
37 | Keep | Good difficulty; good discrimination
38 | Keep | Good difficulty; good discrimination
39 | Keep | Good difficulty; good discrimination
40 | Keep | Good difficulty; good discrimination
41 | Drop | Poor discrimination
42 | Drop | Difficult; poor discrimination
43 | Keep | Good difficulty; good discrimination
44 | Keep | Good difficulty; good discrimination
45 | Keep | Good difficulty; good discrimination
46 | Keep | Good difficulty; good discrimination
47 | Drop | Moderate discrimination
48 | Keep | Good difficulty; good discrimination
49 | Drop | Moderate discrimination
50 | Keep | Good difficulty; good discrimination
51 | Drop | Poor discrimination
52 | Keep | Good difficulty; good discrimination
53 | Keep | Good difficulty; good discrimination
54 | Drop | Poor discrimination
55 | Keep | Good difficulty; good discrimination