Wednesday, February 27, 2008

Evaluation

Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object.
The Goals of Evaluation
The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences. Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one -- studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically-driven feedback.

Types of Evaluation
There are many different types of evaluations depending on the object being evaluated and the purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation. Formative evaluations strengthen or improve the object being evaluated -- they help form it by examining the delivery of the program, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects or outcomes of some object -- they summarize it by describing what happens subsequent to delivery of the program; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and, estimating the relative costs associated with the object.
Formative evaluation includes several evaluation types:

• needs assessment determines who needs the program, how great the need is, and what might work to meet the need
• evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
• structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes
• implementation evaluation monitors the fidelity of the program or technology delivery
• process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures
Summative evaluation can also be subdivided:
• outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes
• impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole
• cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
• secondary analysis reexamines existing data to address new questions or use methods not previously employed
• meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question (a small worked sketch follows this list)
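
The pooling step in a simple fixed-effect meta-analysis is just an inverse-variance weighted average of the study effect estimates. A minimal sketch in Python, with effect sizes and standard errors invented purely for illustration:

```python
# Fixed-effect meta-analysis sketch: pool per-study effect estimates
# using inverse-variance weights. All numbers are invented for illustration.

studies = [
    # (effect size, standard error) from three hypothetical outcome evaluations
    (0.30, 0.10),
    (0.45, 0.15),
    (0.20, 0.08),
]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5       # standard error of the pooled estimate

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```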

Evaluation Questions and Methods
Evaluators ask many different kinds of questions and use a variety of methods to address them. These are considered within the framework of formative and summative evaluation as presented above.

In formative research the major questions and methodologies are:

What is the definition and scope of the problem or issue, or what's the question?

Formulating and conceptualizing methods might be used, including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, lateral thinking, input-output analysis, and concept mapping.

Where is the problem and how big or serious is it?

The most common method used here is "needs assessment," which can include analysis of existing data sources and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.

How should the program be delivered to address the problem?

Some of the methods already listed apply here, as do detailing methodologies like simulation techniques; multivariate methods like multi-attribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.
How well is the program or technology delivered?

Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.


The questions and methods addressed under summative evaluation include:
What type of evaluation is feasible?

Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.
What was the effectiveness of the program?

One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.
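
As a minimal illustration of the experimental end of that spectrum, one might compare post-program scores for randomly assigned treatment and control groups; the scores and group sizes below are invented:

```python
# Sketch of a simple summative outcome comparison: a two-sample t-test on
# invented post-program scores for treatment and control groups.
from scipy import stats

treatment = [78, 85, 90, 72, 88, 81, 79, 92]  # hypothetical program participants
control = [70, 74, 68, 77, 73, 71, 75, 69]    # hypothetical comparison group

mean_diff = sum(treatment) / len(treatment) - sum(control) / len(control)
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Mean difference: {mean_diff:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")
```

With random assignment, such a difference can reasonably be attributed to the program; with nonrandom groups, a quasi-experimental design would be needed to rule out rival explanations.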

What is the net impact of the program?

Econometric methods for assessing cost-effectiveness and cost-benefit would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.
Clearly, this introduction is not meant to be exhaustive. Each of these methods, and the many not mentioned, is supported by an extensive methodological research literature. This is a formidable set of tools. But the need to improve, update and adapt these methods to changing circumstances means that methodological research and development needs to have a major place in evaluation work.

Confirmative evaluation goes beyond formative and summative evaluation; it moves traditional evaluation a step closer to full-scope evaluation. During confirmative evaluation, the evaluation, training, or HPT practitioner collects, analyzes, and interprets data related to behavior, accomplishment, and results in order to determine “the continuing competence of learners or the continuing effectiveness of instructional materials” (Hellebrandt and Russell, 1993, p. 22) and to verify the continuous quality improvement of education and training programs (Mark and Pines, 1995). The concept of going beyond formative and summative evaluation is not new. The first reference to confirmative evaluation came in the late 1970s: “The formative-summative description set ought to be expanded to include a third element, confirmative evaluation” (Misanchuk, 1978, p. 16). Eight years later, Beer and Bloomer (1986) from Xerox suggested a limited strategy for going beyond the formative and summative distinctions in evaluation by focusing on three levels for each type of evaluation:

1. Level one: evaluate programs while they are still in draft form, focusing on the needs of the learners and the developers.

2. Level two: continue to monitor programs after they are fully implemented, focusing on the needs of the learners and the program objectives.

3. Level three: assess the transfer of learning to the real world.

Geis and Smith (1992, p. 133) report: “The current emphasis is on evaluation as a means of finding out what is working well, why it is working well, and what can be done to improve things.” However, when the quality movement gained prominence and business thinking raised the bar, educators and trainers began to agree, at least in principle, that “quality control requires continuous evaluation including extending the cycle beyond summative evaluation” (Seels and Richey, 1994, p. 59). Summative evaluation has immediate usefulness, but it does not help planners make decisions for the future. Confirmative evaluation, on the other hand, is future-oriented; it focuses on enduring, long-term effects or results over the life cycle of an instructional or noninstructional performance intervention: “Enduring or long-term effects refer to those changes that can be identified after the passage of time and are directly linked to participation in [education or training]” (Hanson and Siegel, 1995, pp. 27–28).

Tuesday, February 26, 2008

Keller Plan

In the 1960s, Fred S. Keller, J. Gilmour Sherman, and others developed a synthesis of educational methods and practices that has often been called the Keller Plan or the Personalized System of Instruction (PSI). Key aspects of this teaching method include [1]:

• go-at-your-own-pace, so students can proceed according to their abilities, interests, and personal schedules;
• a unit-perfection requirement, which means students must demonstrate mastery of a unit before proceeding to other units (a small sketch of this rule follows the list);
• lectures and demonstrations for motivation instead of for communication of critical information;
• stress on the written word for teacher-student communication, which helps develop comprehension and expression skills; and
• tutoring/proctoring, which allows repeats on exams, enhanced personal-social interaction, and personalized instruction.
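
The unit-perfection requirement is essentially a gate on progression. As a minimal sketch, assuming a 90% mastery threshold and hypothetical function names (neither comes from Keller's paper):

```python
# Minimal sketch of the PSI unit-perfection rule: a learner moves to the
# next unit only after reaching the mastery threshold on a unit quiz;
# failed attempts lead to restudy and a retake, with no penalty.
# The 90% threshold and the names here are assumptions for illustration.

MASTERY_THRESHOLD = 0.90  # assumed pass mark; real PSI courses set their own

def next_unit(current_unit: int, quiz_score: float) -> int:
    """Return the unit a learner should work on after a quiz attempt."""
    if quiz_score >= MASTERY_THRESHOLD:
        return current_unit + 1  # mastery demonstrated: proceed
    return current_unit          # restudy and retake at the learner's own pace

# Example: fail the unit 3 quiz once, then pass on the retake.
unit = 3
unit = next_unit(unit, 0.75)  # stays on unit 3
unit = next_unit(unit, 0.95)  # advances to unit 4
print(unit)  # -> 4
```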

Research studies have shown PSI to have a number of advantages over conventional educational methods, and few disadvantages. Students, especially those who would normally perform at the lower or middle levels, learn significantly more, as measured by final examinations and by tests of long-term retention (given years later). They like the classes and tutoring, and develop good habits that carry over to other courses and learning activities. The disadvantages mostly concern the extra effort required of the instructor, a higher drop rate in some courses (especially among students who cannot break their habits of procrastination), and extra room requirements.


References


1. Fred S. Keller. Good-bye, teacher ... Journal of Applied Behavior Analysis, 1(1):79-89, Spring 1968.

2. J. Gilmour Sherman and Robert S. Ruskin. The Personalized System of Instruction. Educational Technology Publications, Englewood Cliffs, NJ, 1978. Vol. 13 in The Instructional Design Library, series ed. Danny G. Langdon.

3. J. Gilmour Sherman, Robert S. Ruskin, and George B. Semb, editors. The Personalized System of Instruction: 48 seminal papers. TRI Publications, Lawrence, Kansas, 1982.

Developing Instructional Materials

Designers must make conscious efforts to present lessons that learners can fully grasp, making learning effective. Self-paced learning as an instructional format is a means by which learners learn or master certain skills at their individual pace. As learners go through the learning process, they become responsible, and learning becomes successful based on the learning objectives and a variety of activities. Learners must be catered for individually, with different objectives as well as learning activities, taking cognisance of each learner's characteristics, preparation, needs, and interests.

Individual differences must be catered for if effective learning is to be achieved, and this can be made possible by using a variety of materials that serve the objectives, with more than one instructional sequence. Some learners learn fast while others are slow. Some do well with printed materials whereas others perform better with hands-on experience. This therefore means that varied activities reflecting the objectives should be prepared, making room for individual learners to make preferred choices.

For example, if an objective states that by the end of the lesson or instruction learners will be able to, say, design a fabric using the marbling technique, the programme may include printed steps to follow, still photographs, a film or videotape, and the tools and materials, all focussed on producing a marbled fabric. Some learners may decide to watch the video and move straight to the real work of designing the fabric. Others might prefer to read the steps and study the still photographs before going on to the actual work. Another group might read the steps, study the photographs, and watch the video before proceeding to design the fabric, and still other learners will go straight to designing the fabric using trial and error.
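
One way to picture this is as a set of optional routes that all converge on the same hands-on objective. A minimal sketch, with hypothetical learner names and activity labels:

```python
# Sketch of alternative learning paths converging on one objective.
# Learner names and activity labels are hypothetical.

ACTIVITIES = {"read_steps", "study_photos", "watch_video", "design_fabric"}

learner_paths = {
    "Ama": ["watch_video", "design_fabric"],
    "Kofi": ["read_steps", "study_photos", "design_fabric"],
    "Esi": ["read_steps", "study_photos", "watch_video", "design_fabric"],
    "Yaw": ["design_fabric"],  # trial and error: straight to the task
}

for learner, path in learner_paths.items():
    # Every path must use known activities and end at the target objective.
    assert set(path) <= ACTIVITIES and path[-1] == "design_fabric"
    print(f"{learner}: {' -> '.join(path)}")
```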

There is a way instructors can check bad habits such as lack of self-discipline and procrastination in self-paced instruction, and that can be achieved by setting deadlines within which learners can adjust their own study pace so that learning remains beneficial for them.
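
A minimal sketch of that idea: divide the term into equal spans and publish one deadline per unit, so self-pacing still has checkpoints. The dates and unit count are invented:

```python
# Sketch: evenly spaced unit deadlines as a check on procrastination in a
# self-paced course. Dates and unit count are invented for illustration.
from datetime import date

def unit_deadlines(start: date, end: date, units: int) -> list[date]:
    """Divide the term into equal spans and return one deadline per unit."""
    span = (end - start) / units  # length of each unit's window
    return [start + span * (i + 1) for i in range(units)]

for i, d in enumerate(unit_deadlines(date(2008, 2, 4), date(2008, 5, 12), 7), 1):
    print(f"Unit {i} due by {d}")
```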

Wednesday, February 20, 2008

Views on ID

The course has really been very interesting and very helpful. A lot has been learnt so far from colleagues, the text, case studies, etc. Whenever faced with an issue, designers must first of all determine whether there is an instructional problem, because certain issues do not need instruction as the measure to address them. If, for instance, there is low output of work due to the breakdown of certain machines or equipment, then the need is to repair those machines or equipment, not to design instruction for this purpose.

If there is a Subject Matter Expert (SME) available, the designer needs to do a lot of collaborative work with him or her so that the designer can come up with good material to work with in order to achieve the best results. In the absence of an SME, designers can also read or research areas they are not conversant with so as to gain the knowledge needed for designing effective instruction.

Designing the Message

Designers must make a conscious effort to enhance learners' understanding. Designing an effective message deals with the presentation of the information, that is, how the content or topic is presented. This includes using suitable PRE-INSTRUCTIONAL STRATEGIES to help learners focus on the instruction, using WORDS AND TYPOGRAPHY to signal different aspects of the instruction, and using PICTURES to enhance learners' understanding.

Thursday, February 7, 2008

Strategies

Effective instructional strategies enable learners to relate their existing knowledge of the content of an instruction to the new knowledge. Designers can only help learners achieve this if they include these strategies in their lessons. Each objective should have a strategy for treating it. The instructional strategy is therefore a "blueprint" for developing the lesson, and as such, flexibility in creatively presenting the lesson, so that learners are motivated to learn and understand, should be employed as much as possible. Designers should be able to classify each objective as a concept, rule, procedure, or application, and then select the instructional strategy that is appropriate for addressing it.

The strategy is like the methodology you would adopt to carry out the instructional task, and as usual the LEARNER, who is the focus, should be taken into consideration.

Monday, February 4, 2008

Sequencing

The choice of a sequencing scheme depends on the characteristics of the learners and the nature of the content. This makes it necessary to determine which sequencing scheme is most appropriate for presenting the information in an instruction. Since learners are the focus of both the training and the learning process, every effort should be made to arouse and maintain their interest. It is not always necessary to follow objectives in strictly logical order when sequencing: objective six (6) may, for example, come third in the sequence to ensure learner motivation, as the sketch below illustrates.
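
For instance, the presentation order can be a simple permutation of the objectives, chosen for motivation rather than numeric order. A minimal sketch with hypothetical objective labels:

```python
# Sketch: presentation order as a permutation of the objectives.
# The objective labels are hypothetical.

objectives = {
    1: "Identify marbling tools and materials",
    2: "Describe the marbling process",
    3: "Prepare the size bath",
    4: "Apply and manipulate the colours",
    5: "Transfer the pattern to fabric",
    6: "Critique a finished marbled fabric",  # a motivating 'big picture' objective
}

presentation_order = [1, 2, 6, 3, 4, 5]  # objective six comes third, for motivation

for slot, num in enumerate(presentation_order, 1):
    print(f"Step {slot}: Objective {num} - {objectives[num]}")
```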

Saturday, February 2, 2008

  • After you have developed your objectives as a designer, there is the need to order your content in an appropriate way so as to help learners, who are your focus for designing the instruction, achieve those objectives.