Design Reviews

Formative evaluation can be conducted on the output of each stage of design in order to make revisions before any actual development of materials takes place. Smith and Ragan (1999) include the following as part of the design review phase of formative evaluation.

Goal Review
Conduct a formal needs assessment and have the client review the learning goal(s) once they are stated in formal performance terms.

Review of Environment and Learner Analysis
After data has been collected from an environmental and learner analysis, the instructional designer should review its adequacy. This may include collecting additional information to either confirm or extend the initial analysis.

Review of Task Analysis
Confirm the prerequisite relationships among skills by testing two separate groups of learners: one group who have the targeted skills and another group who do not. Learners who can achieve the terminal objective should also be tested to see whether they can perform the enabling objectives. It is also a good idea to have other instructional designers review the task analysis for accuracy and completeness.

Review of Assessment Specifications and Blueprints
Have content and testing experts review assessment items and blueprints for congruence between the objectives and the test item specifications, and verify that the types of items outlined in the specifications sufficiently describe, and are representative of, the domain. Skilled learners can also be administered the test items before the materials are developed to determine the reliability of the items.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Expert review(s)/ Internal review

According to Tessmer (1993), an expert review occurs when experts examine the instruction, with or without the evaluator present. The expert review, or internal review as Seels and Glasgow (1990) call the same phase of formative evaluation, can be completed by one person or a team. Reviewers may include content experts, instructional design experts, content-specific education specialists, or experts on the learners, such as teachers (Smith & Ragan, 1999). Content experts, also called subject matter experts (SMEs), review the content for accuracy and completeness. A content-specific educator, such as a science curriculum specialist, is even more useful because he or she can check the congruence of the content with current educational theory in the specific subject area.

Smith and Ragan (1999) suggest dividing "content experts' comments into three categories: revisions that should be made immediately, questions for which data should be collected during subsequent phases, and suggestions that should be ignored" (p. 340). Seels and Glasgow (1990) advise starting internal reviews at the problem definition and task analysis phases of the instructional design process and continuing until the responsible organization receives the final product.

Seels, B. and Glasgow, Z. (1990). Exercises in instructional design. Columbus, Ohio: Merrill Publishing Company.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Tessmer, M. (1993). Planning and conducting formative evaluation. London: Kogan Page Limited.

Learner validation

During this stage of formative evaluation, the instructional designer finds out whether learners can learn from the instruction by trying it out with representative learners. Smith and Ragan (1999) include one-to-one evaluation, small group evaluation, and field trials as part of the learner validation phase of formative evaluation.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Ongoing evaluation

Ongoing evaluation involves the continuation of data collection for the purpose of revision even after the instruction is implemented (Smith & Ragan, 1999). Instructional materials that are intended for use over an extended period of time may have to be revised several times during that period. Factors that often prompt revisions include a change in the entry-level skills of the learners, changes in content, and changes in the facilities, equipment, or social mores of the learning context. Much of the information gathered in ongoing evaluation dovetails with summative evaluation.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Clinical evaluation (One-to-one evaluation/ One-on-one evaluation)

A clinical evaluation, also known as a one-to-one or one-on-one evaluation, is when the designer works with individual learners to obtain data for revising the materials (Dick & Carey, 1996). During one-to-one evaluation, one learner at a time reviews the instruction with the evaluator and comments on it (Tessmer, 1993). The purpose is to identify gross problems in the instruction, such as typographical errors, unclear sentences, and poor or missing directions.

According to Smith and Ragan (1999), both the instructional designer and the learner are involved in one-to-one evaluation. The designer should emphasize that the material is being evaluated and NOT the learner. Learners selected should represent a variety of abilities.

For an example of a one-on-one evaluation, visit http://www.byu.edu/ipt/projects/student/jones/formeval_1-1.html.

Dick, W. and Carey, L. (1996). The systematic design of instruction (4th ed.). New York: HarperCollins.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Tessmer, M. (1993). Planning and conducting formative evaluation. London: Kogan Page Limited.

Small group evaluation/ Tutorial and small-group tryouts

Small group evaluation occurs when a group of 8-20 learners, representative of the target population, study the instructional materials independently and are tested to collect the required evaluation data (Dick & Carey, 1996). The purpose of small group evaluation is "to check the efficacy of the revisions based on one-to-one data, to ascertain how well the instruction works with more varied learners, and to see how well the instruction teaches without the designer's intervention" (Smith & Ragan, 1999, p. 342).

During small group evaluation, the evaluator tries out the instruction with the group of learners and records their performances and comments (Tessmer, 1993). The instructional designer should check whether problems found in the one-to-one evaluation have been rectified and collect data on attitude and time (Smith & Ragan, 1999). The output of a small group evaluation is a revised instructional lesson based on time, performance, and attitude data.
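As an illustration only, the time, performance, and attitude data from a small group tryout might be summarized as follows. The figures and the 80% mastery criterion are invented, not taken from the sources cited.

```python
# Hypothetical sketch: summarizing small group tryout data on the three
# measures Smith and Ragan mention (performance, time, attitude).
# All learner data below is invented for illustration.

tryout = [
    # (posttest % correct, minutes to finish, attitude rating 1-5)
    (85, 42, 4),
    (70, 55, 3),
    (90, 38, 5),
    (60, 61, 2),
    (75, 47, 4),
]

n = len(tryout)
mean_score = sum(t[0] for t in tryout) / n
mean_time = sum(t[1] for t in tryout) / n
mean_attitude = sum(t[2] for t in tryout) / n
# assumed mastery criterion of 80% correct, for illustration only
mastery = sum(1 for t in tryout if t[0] >= 80) / n

print(f"Mean score: {mean_score:.1f}%  Mean time: {mean_time:.1f} min")
print(f"Mean attitude: {mean_attitude:.1f}/5  Mastery rate: {mastery:.0%}")
```

A summary like this gives the designer the time, performance, and attitude evidence on which the revised lesson is based.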

Seels and Glasgow (1990) use a similar phase called tutorial and small-group tryout. In a tutorial tryout, students work through the instructional materials individually. The instruction is then revised, and an additional group of two to five students works through the revised materials. The designer continues this cycle of tryouts and revisions until the standard specified in the objectives is met. Small-group tryouts consist of 8-10 students and are used to get feedback on how well the course accomplishes the learning objectives and how long the instruction takes.

For an example of a small group evaluation, visit http://www.byu.edu/ipt/projects/student/Jones/formeval_sg.html.

Dick, W. and Carey, L. (1996). The systematic design of instruction (4th ed.). New York: HarperCollins.

Seels, B. and Glasgow, Z. (1990). Exercises in instructional design. Columbus, Ohio: Merrill Publishing Company.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Tessmer, M. (1993). Planning and conducting formative evaluation. London: Kogan Page Limited.

Field trial/ Field test/ Operational tryout

During a field test, also referred to as an operational tryout (Seels & Glasgow, 1990), "instruction is evaluated in the same environments in which it will be used when it is finished" (Tessmer, 1993, p. 137). Field trials, or field tests, are similar to beta tests in that the instruction at this phase of evaluation is in its most polished state, yet is still open to revisions. A field test can be used to verify that problems identified during previous phases of formative evaluation have been corrected, to generate final suggestions for revision, and to observe the effectiveness of the instruction.

Smith and Ragan (1999) suggest conducting field trials at several different sites with at least 30 students. Training should be provided to the field trial teachers/instructors prior to the evaluation. Usually the instructional designer is not present during the field trial; it is nonetheless the designer's responsibility to ensure that the instruction has not been altered from its original design.

During this stage of evaluation, the designer should collect information about performance, time, and attitude, as well as information from the teachers/trainers regarding the administration of the instruction, also known as process evaluation. According to Dick and Carey (1996), "The goal of the field trial is effective instruction that yields desired levels of learner achievement and attitudes and that functions as intended in the learning setting" (p. 267).

Dick, W. and Carey, L. (1996). The systematic design of instruction (4th ed.). New York: HarperCollins.

Seels, B. and Glasgow, Z. (1990). Exercises in instructional design. Columbus, Ohio: Merrill Publishing Company.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Tessmer, M. (1993). Planning and conducting formative evaluation. London: Kogan Page Limited.

Needs assessment

Needs assessment is used to establish the rationale for the instructional program, its content, and the feasibility of the delivery system. The gathering of data involves reviews of existing studies, tests, and curricula; expert reviews; and measurement of target audience characteristics (Zulkardi, n.d.). Flagg (1990) considers needs assessment to be the first phase of formative evaluation.

Flagg, B. N. (1990). Formative evaluation for educational technologies. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Zulkardi. (n.d.). Formative evaluation: What, why, when, and how. Retrieved October 2, 2002, from http://www.geocities.com/zulkardi/books

Pre-production formative evaluation

According to Flagg (1990), pre-production is the second phase of formative evaluation. The planning, or pre-production, of the program is guided by preliminary scripts or the writers' notebook (Zulkardi, n.d.). During this phase, the target audience and teachers are involved in making design decisions about content, objectives, and production formats. Expert reviews of content and design are used to guide the creativity of the designers and reduce the uncertainty of critical decisions.

Flagg, B. N. (1990). Formative evaluation for educational technologies. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Zulkardi. (n.d.). Formative evaluation: What, why, when, and how. Retrieved October 2, 2002, from http://www.geocities.com/zulkardi/books

Production formative evaluation

Flagg (1990) calls the third phase of formative evaluation production. During this phase, the instructional program is revised after considering the feedback from tryouts of early program versions with the target group. "Information of user-friendliness, comprehensibility, appeal, and persuasiveness can give the production team confidence of success in their revisions and decisions" (Zulkardi, n.d., Plan, para. 5). Subject matter specialists, designers, and other experts work together to improve versions of the instructional program.

Flagg, B. N. (1990). Formative evaluation for educational technologies. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Zulkardi. (n.d.). Formative evaluation: What, why, when, and how. Retrieved October 2, 2002, from http://www.geocities.com/zulkardi/books

Implementation formative evaluation

The final phase of Flagg's (1990) model of formative evaluation is implementation. Implementation is concerned with how well the instructional program operates with target learners in the environment for which it was designed (Zulkardi, n.d.). During the implementation phase, field-testing is conducted to help designers identify how program managers will actually use their final products with target learners. Feedback from field-testing assists with the development of support materials and future programs. This phase differs from summative evaluation, which measures learners who have not yet been exposed to the program.

Flagg, B. N. (1990). Formative evaluation for educational technologies. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Zulkardi. (n.d.). Formative evaluation: What, why, when, and how. Retrieved October 2, 2002, from http://www.geocities.com/zulkardi/books

Rapid Prototype

A rapid prototype is a simplified version of a computer-based instructional program "with just enough functionality that it can be assessed for effectiveness before finishing development" (Driscoll, 1998, p. 273). Prototypes are used by developers to identify blatant errors in the instruction and to get a feel for how learners will react to the program before the entire course is built, saving time and money. Using a simplified version of the material allows the developer to "create and try out routines that can be overwhelming in their complexity if developed in full detail first" (Smith & Ragan, 1999).

Driscoll, M. (1998). Web-based training: Using technology to design adult learning experiences. San Francisco, CA: Jossey-Bass/Pfeiffer.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Alpha testing

Alpha testing is a type of formative evaluation primarily used to review computer-based instruction (CBI). Often there are errors in software that are only revealed under certain circumstances (e.g. when a specific key is pressed during the execution of a particular part of the program). Therefore, in addition to the other types of formative evaluation techniques, alpha testing, which includes debugging, is used to review and revise CBI (Smith & Ragan, 1999).

During alpha testing, programmers or developers review and test the computer software. Sometimes alpha testing also involves an alpha class, composed of learners representative of the target audience. An alpha class is used to evaluate the effectiveness of changes made following a rapid prototype evaluation and to decide whether the materials can successfully be used as planned (Driscoll, 1998). Fully developed materials are used in the alpha class, and learners supply feedback to the developers on technical and instructional issues.

Driscoll, M. (1998). Web-based training: Using technology to design adult learning experiences. San Francisco, CA: Jossey-Bass/Pfeiffer.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.

Beta testing

Beta testing follows alpha testing in the review of computer-based instruction (CBI). Beta testing occurs when the software is released in real settings to users who are expected to report back any problems they find with the program (Smith & Ragan, 1999). A beta class, made up of learners who mirror the target audience in skill level and number, is used to evaluate the changes made as a result of alpha testing (Driscoll, 1998). If the CBI is to use an instructor, this is the first phase of evaluation in which an instructor who was not part of the development team is involved. Beta testing is a final formative assessment used to "assess the effectiveness of the complete course and clarity and usefulness of directions for the instructor" (p. 220).

Driscoll, M. (1998). Web-based training: Using technology to design adult learning experiences. San Francisco, CA: Jossey-Bass/Pfeiffer.

Smith, P. and Ragan, T. (1999). Instructional design (2nd ed.). New York: John Wiley & Sons, Inc.