Various Steps involved in evaluating a Development Support Communication Campaign

Define evaluation and explain various steps involved in evaluating a DSC Campaign.

Evaluation

Words themselves do not carry the same meanings through time and space. A word which conveyed a specific meaning a hundred years ago may not convey the same meaning today, for scores of reasons. Similarly, the meanings of the same words change across distance. A verbal message which once stood for certain meanings may not always stand for the same meanings, because meanings do not remain static in any language.

Various steps involved in evaluating a DSC Campaign

The term can be described as development planning and implementation in which adequate account is taken of human behavioural factors in the design of development projects and their objectives. It addresses development planning and the plan of operation for implementation. DSC stands for linking all agencies involved in planned development work, such as political executives, political planners, development administrators, subject specialists, field workers, opinion leaders, media representatives, researchers, and the beneficiaries who constitute the final delivery points and the consumers of the information. The route of communication envisaged is not only vertical, flowing from the top level to the bottom, but also from the bottom upwards.

The practice of Development Support Communication, DSC, is a multi-sectoral process of information sharing about development agendas and planned actions. It links planners, beneficiaries and implementers of development action, including the donor community. It obligates planners and implementers to provide clear, explicit and intelligible data and information about their goals and roles in development, and explicitly provides opportunities for beneficiaries to participate in shaping development outcomes. It ensures that the donor community is kept constantly aware of the achievements and constraints of development efforts in the field.

In short, DSC is a legitimate function of development planning and implementation. DSC therefore needs to be examined as a valuable “technology” for using the social communication process to foster and strengthen sustainable development at local and national levels. It should be taken more seriously in programs of social change, and should be reflected explicitly in development policy and strategy. One way of doing so is through the enunciation of a national information and communication policy, which can be explicitly integrated into national development thinking and practice.

Ex-post evaluation would resume this effort several years after completion of the investment, to review comprehensively the experience and impact of a project as a basis for future policy formulation and project design.

Indicators used for evaluations include yields of major crops and changes in cropping intensity and patterns.

STEPS IN EVALUATION

The following steps are usually involved in the evaluation process.

Evaluation Plan : A detailed plan of activities to be undertaken in the process of evaluation is prepared before embarking on the exercise. The plan identifies what is to be done, why, and how. It makes clear how to conduct the evaluation within the stipulated budget, helps to secure the input of everyone in the evaluation team, and helps to focus the evaluation on the questions of the target audiences.

Reasons for Evaluation : These reasons have already been mentioned in the foregoing pages. However, the evaluator should determine which reasons are most important and focus the evaluation accordingly.

Audiences of Evaluation : The audiences for evaluation may be the change agents, advisory councils, programme sponsors, programme participants and the general public. These groups of audiences are so varied because different audiences have different concerns about the programme.

The Criteria for Evaluating the Programme : Criteria are the yardsticks used to measure the merit or worth of a programme. For example, a criterion for an extension programme may be the number of women farmers who adopt a particular practice. If an evaluation indicates that the specified number did adopt the practice, the programme can be considered a success as far as this criterion is concerned. Unintended outcomes should also be examined. For example, where programme emphasis is on increasing the output of cash crops, an unintended outcome may be that land formerly used to grow food crops changes to cash cropping, which particularly affects women farmers, who frequently grow food crops. Unintended outcomes such as these should be a part of the evaluation. The main source of criteria should be the basic intent and objectives of a programme. If a programme was developed in response to a particular need, a major concern of the evaluation should be whether, or to what extent, the programme is meeting that need.
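To make the idea of a criterion concrete, the adoption example above can be expressed as a simple check of an observed count against a planned target. The sketch below is illustrative only; the record structure, names and target figure are hypothetical:

```python
# A minimal sketch of checking one evaluation criterion: whether the
# number of women farmers adopting a practice met the planned number.
# All records and the target figure here are hypothetical.

def criterion_met(adopters: int, target: int) -> bool:
    """True if the observed number of adopters reaches the planned target."""
    return adopters >= target

# Hypothetical programme records: one entry per farmer contacted.
records = [
    {"farmer": "A", "adopted_practice": True},
    {"farmer": "B", "adopted_practice": False},
    {"farmer": "C", "adopted_practice": True},
]

adopters = sum(1 for r in records if r["adopted_practice"])
print(f"Adopters: {adopters}; criterion met: {criterion_met(adopters, target=2)}")
```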

The Evidence that will be available for Evaluation : Evidence consists of information related to a particular criterion. While deciding on the type of evidence to be used, adjustments will almost always have to be made between what is the best or ideal type and what it is possible to obtain.

There are various ways of classifying evidence that can be used in DSC evaluations. Sabrosky (1967, p.26) distinguished between two major types: evidence in terms of changes in the behaviour of people, and evidence in terms of opportunity. In the former case the major consideration is whether audiences have changed their attitudes or practices as a result of the DSC method or activity. In the latter case, Sabrosky pointed out, “When it is difficult or impossible to measure progress in terms of original status or change in the people themselves, it is desirable to measure work in terms of the learning situation we have set up. If no written materials go out, no talks are given, no demonstrations are put on, and no visits are made, we cannot expect the people to learn anything as a result of extension work.”

What can be considered an expanded version of Sabrosky’s classification of types of evidence has been given by Bennett (1977). He proposes seven levels of evidence for programme evaluations that can be arranged in a hierarchy. The levels of evidence, and examples of evidence at each level, are shown in Table I. At each level, what was planned or anticipated can be compared to what was actually achieved. For example, at the “inputs” level, the actual time spent by extension staff on a programme, or on an aspect of a programme, can be compared with the amount of time such staff had planned to spend. In many extension programmes, however, such sophistication in planning may be rare.

Bennett made several points about this hierarchy:

1. “Evidence of programme impact becomes stronger as the hierarchy is ascended” (Bennett, 1977, p.9). Levels 1 to 3 provide ways of measuring possible opportunities for education to occur. He also pointed out that “ascending to the fourth level, reactions, can provide somewhat better confirmation of whether given activities are helpful as intended. But such evidence indicates less satisfactorily than evidence of KASA (Knowledge, Attitudes, Skills, Aspirations) changes the extent of progress towards ultimate programme objectives.” The ideal assessment of impact would be obtained at the highest level in the hierarchy, in terms of whether the desired end results have been achieved, together with an assessment of any significant side effects.

2. “The difficulty and cost of obtaining evidence on programme accomplishments generally increases as the hierarchy is ascended” (Bennett, 1977, p.9). Although evidence at the lower levels does not provide as strong an indication of impact as evidence at the higher levels, it is relatively more difficult and costly to obtain evidence at the higher levels.

3. “Evaluations are strengthened by assessing extension programmes at several levels of the hierarchy, including the inputs level” (Bennett, 1977, p.9).

4. “Evaluation is strengthened to the extent the specific criteria for evaluation are defined prior to conduct of the extension programme” (Bennett, 1977, p.11). The basic point here is that early clarification of programme objectives will assist in the subsequent conduct of evaluations. Evidence obtained prior to programme execution (e.g. the level of knowledge, attitudes and skills of programme participants) provides a benchmark against which progress as a result of participating in a programme can be judged.

5. “The harder the evidence for evaluation, the more an evaluation may be relied upon in programme decision making” (Bennett, 1977, p.12).
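Bennett’s hierarchy lends itself to a simple representation in which planned and achieved values are compared level by level, as described above. The following sketch assumes the commonly cited ordering of the seven levels; the planned and actual figures are hypothetical:

```python
# Bennett's seven levels of evidence, ordered from lowest to highest.
# Comparing planned against actual values at each level follows the
# "planned versus achieved" comparison described in the text.
BENNETT_LEVELS = [
    "Inputs",
    "Activities",
    "People involvement",
    "Reactions",
    "KASA changes",      # Knowledge, Attitudes, Skills, Aspirations
    "Practice change",
    "End results",
]

# Hypothetical figures: staff-days planned vs actually spent ("Inputs").
planned = {"Inputs": 120}
actual = {"Inputs": 95}

for level in BENNETT_LEVELS:
    if level in planned and level in actual:
        print(f"{level}: planned {planned[level]}, actual {actual[level]}")
```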

Designs for Evaluation Studies : A variety of designs can be used in collecting evidence for evaluation studies. Bennett (1977) provides a list of these in order of their potential ability to provide strong scientific evidence of the degree to which observed change is produced through extension programmes. A modified list of these designs is as follows:

1. The Field Experiment

2. Matched Set Design

3. “Before-After” Study

4. The Survey

5. The Case Study

The field experiment provides the strongest scientific evidence and the case study the weakest for the purposes of evaluation. Some evaluation studies may incorporate elements of several of the designs listed above. Generally, the first two designs are rarely used in the regular conduct of evaluations, because they are expensive and difficult to handle. The last three designs are described briefly below:

“Before-after” study. In this type of study, observations are made before and after participation in an extension programme. Changes in the status of participants can be attributed to the programme after other competing explanations (for example, unusual weather affecting crop yields, or other programmes) have been logically ruled out.
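A minimal sketch of the arithmetic behind such a study, using hypothetical yield figures for the same farmers before and after a programme:

```python
# "Before-after" comparison: mean change in crop yield for the same
# participants measured before and after the programme. All figures
# are hypothetical; attributing the change to the programme still
# requires ruling out competing explanations such as unusual weather.
before = [2.1, 1.8, 2.4, 2.0, 1.9]  # yields before the programme (t/ha)
after = [2.6, 2.2, 2.9, 2.4, 2.3]   # yields of the same farmers afterwards

changes = [a - b for a, b in zip(after, before)]
mean_change = sum(changes) / len(changes)
print(f"Mean change in yield: {mean_change:+.2f} t/ha")
```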

The survey. This design is perhaps the one most often used in conducting extension evaluations. It does not require observations before a programme is implemented, and is generally easier to carry out and less expensive than the “before-after” design. However, according to Bennett (1977, p.19) it “generally provides rather weak conclusions about the extent to which extension, rather than other forces, produces any observed differences between extension clientele and non-clientele.”

Surveys can be used to collect data on people’s perceptions and opinions about programme activities, and the results of programmes. Surveys can also seek information on the status of participants prior to their participation in a programme.

The survey design usually requires the use of questionnaires sent through the mail or administered through personal interviews. Sampling techniques are generally used to select the target population.
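As an illustration of the sampling step, the sketch below draws a simple random sample from a register of programme participants; the register, names and sample size are hypothetical:

```python
# Simple random sampling from a register of participants. The register
# of 200 names and the sample size of 30 are hypothetical.
import random

participants = [f"participant_{i:03d}" for i in range(1, 201)]  # 200 names

random.seed(42)  # fixed seed so the draw can be reproduced
sample = random.sample(participants, k=30)
print(sample[:5])
```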

The case study. According to Bennett (1977, p.20), “Case studies observe intensively one or only a few selected individuals, groups, or communities. Observation may involve examination of existing records, interviewing, or participant observation.” Although the evidence provided by this design is not as strong as that from other designs, case studies can reveal information about a programme which is not accessible by other means. A case study is usually most effectively used as a supplement to other evaluation designs.

Conduct the Evaluation :

I. Analyse the Data : Different types of data analysis techniques can be used. Results may be presented through the method of question and answer, or provided in report form. Good data analysis emphasises those aspects that are related to the particular issues addressed by the evaluation (a small tabulation sketch, using hypothetical data, follows this list).

II. Report the Findings : The findings of the evaluation should be reported, on completion, to the audience being addressed.

III. Application of the Findings : The evaluation does not end until the findings are properly reported and applied, either to improve the ongoing project or in the planning of future projects.
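As referred to under the data analysis step above, the following sketch tallies responses to one survey question before reporting; the question wording and responses are hypothetical:

```python
# Tallying answers to one survey question as a first analysis step.
# The question wording and the response data are hypothetical.
from collections import Counter

responses = ["yes", "no", "yes", "yes", "unsure", "no", "yes"]
tally = Counter(responses)

print("Did you adopt the recommended practice?")
for answer, count in tally.most_common():
    print(f"  {answer}: {count} ({count / len(responses):.0%})")
```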
