
Chapter 3
Issues in the decision making process

Selecting tasks for courses

Much of the literature on curriculum in TAFE deals with instructional techniques rather than with curriculum issues proper. The distinction is not a clear one, and curriculum developers find themselves asking "how" as often as "what". The next four sections discuss issues of the former kind, but their inclusion is justified because curriculum developers in some states have considered them very important. The differences in interpretation and emphasis are the real curriculum issue here.

An occupational analysis establishes what is done in the work place, but not necessarily the details of what is required for a course. Having identified the components of the occupation, it is then necessary to decide which parts of the job need training and which parts of this training should be undertaken by TAFE colleges. Courses can be prepared only "after tasks which do not require training have been eliminated and tasks which cannot be taught entirely in the training environment have been modified" (TAFE Vic, 1980a, p.38). Selection of tasks also

enables managers of training to determine what training is appropriate and when it should occur; [in] a basic course; advanced course; special course; or on-the-job training (Ellis, 1982, p.2).

It is commonly assumed that, in courses with an on the job or work experience component, the employer can cater for some parts of the training better than TAFE colleges can. This might happen when a task or group of tasks is performed frequently enough in the work place for there to be no need to include it in a course. It could also occur when expensive in-plant equipment is put to productive use in industry. Students would learn to use the equipment and gain the necessary practice in their work place, and colleges would avoid having to purchase, maintain and upgrade expensive, non-productive equipment.

Another factor in selecting task elements for college courses is the need, in trade courses specifically, to include "general education; theoretical base; planned practical component (to relate the theory and trade practice); skills development; and general industrial experience" (Parkinson, 1986, p.146). Availability of time is also an important factor, as Sandery (1984) points out.

It is unlikely that you will have enough course time available to include all the possible content items from your listing. Choices will have to be made based on some criterion (p.27).

Ellis (1982) discusses the criteria he used in rating task statements. He identified "time spent" and "involvement" scales for use in the occupational questionnaire and, from the responses, developed a coding system based on these categories.

Time Spent Scale

1. very little
2. below average
3. average
4. above average
5. very much

Involvement Scale

a. assist while receiving instruction on-the-job
b. partly assist and partly do
c. do the task
d. partly supervise and partly do
e. supervise while giving on-the-job training (p.4)
From the codes (2c, 1d, 5c, for example) given to each task he determined "the proportion of the occupation performing the task", "the frequency of task performance", and added "the specialist's assessment of ... difficulty" to decide "what training is appropriate and when it should occur" (p.4). In a later work, Ellis (1986) developed an algorithm based on importance, difficulty and frequency to determine whether a task should be included in the college based course or learned on the job (p.2). He states that a task can be selected for a college course when it satisfies at least one of the following considerations.
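As an illustration, the following is a minimal sketch, in Python, of how ratings of this kind might be stored and used to decide where a task is trained. The field names, the selection rule and the threshold values are hypothetical; Ellis's algorithm is not reproduced here in detail, so the sketch shows only the general shape of such a decision.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TaskRating:
        """One questionnaire response for a task, using the two scales above."""
        time_spent: int    # 1 (very little) to 5 (very much)
        involvement: str   # 'a' (assist) to 'e' (supervise)

    @dataclass
    class Task:
        name: str
        difficulty: int                       # specialist's assessment, e.g. 1 to 5
        ratings: List[TaskRating] = field(default_factory=list)

    def proportion_performing(task: Task) -> float:
        """Proportion of respondents who actually do the task (codes c, d or e)."""
        if not task.ratings:
            return 0.0
        doing = [r for r in task.ratings if r.involvement in ("c", "d", "e")]
        return len(doing) / len(task.ratings)

    def average_frequency(task: Task) -> float:
        """Mean of the time spent scale, as a rough measure of frequency."""
        if not task.ratings:
            return 0.0
        return sum(r.time_spent for r in task.ratings) / len(task.ratings)

    def select_for_college_course(task: Task,
                                  min_proportion: float = 0.3,
                                  min_frequency: float = 2.5,
                                  min_difficulty: int = 3) -> bool:
        """Hypothetical rule: include the task in the college based course when
        it is widely performed, performed often, or judged difficult; otherwise
        leave it to on-the-job training."""
        return (proportion_performing(task) >= min_proportion
                or average_frequency(task) >= min_frequency
                or task.difficulty >= min_difficulty)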

Sandery (1984) suggests some other possible ranking systems based on the importance of the tasks, but cautions that selecting tasks for inclusion in a course must also depend on the number of course hours available and the resources required (p.27). She warns that

the main thing to remember is that you want to use criteria which will generate usable information. There is no point in designing an elaborate information gathering chart or other instrument if you have no valid use for the data (p.29).

Sequencing and structuring

The issues involved in sequencing learning units in vocational education are different from those we expect to find in school based curriculum development. Mager and Beach (1967) claim that the most important parameter for making decisions about sequencing is that of "meaning for the student". Here are their six guides to effective sequencing.

This gives curriculum developers a comprehensive set of choices when deciding how to sequence a syllabus or course. Alternative methods of sequencing look more closely at the nature of the performance objectives. Ammerman & Essex (1977) write

Completed statements of ... objectives can be ordered sequentially within the duty categories in which their tasks originally were listed. This grouping retains the structure of the validated tasks (p.42).

They also suggest further possible groupings which include:

Sandery (1984) suggests sequencing of skills which are common to several trades before those which are specialised. She also recommends going back to the data on the importance of skills in the workplace.

Another way to go about sequencing content is to refer to your chart used to rank content items. These can be grouped together in terms of their relationship to each other. Where the syllabus development task is really large, these groupings may represent a structure of courses. In this case, you want to sequence units of instruction within a course (p.34).

This is an important distinction. Courses and learning units within courses contain their own structure and the sequencing process applies to both.
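A minimal sketch, in Python, of the kind of ordering Sandery suggests: skills common to several trades are sequenced before specialised ones, and workplace importance orders the skills within each group. The field names and the example data are hypothetical.

    skills = [
        {"name": "Install basin", "trades_using": 1, "importance": 4},
        {"name": "Read working drawings", "trades_using": 5, "importance": 5},
        {"name": "Measure and mark out", "trades_using": 6, "importance": 3},
    ]

    # Common skills first (used by more trades), then by workplace importance.
    sequence = sorted(skills, key=lambda s: (-s["trades_using"], -s["importance"]))

    for unit in sequence:
        print(unit["name"])   # Measure and mark out, Read working drawings, Install basin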

Training courses which are more obviously discipline based are very often structured on the basis of traditional subject areas, but decisions on internal sequencing are still required. Knowles (1978) points to some interesting contradictions between traditional subject based sequencing and the needs of adult learners.

According to this line of reasoning, it would make sense for first-year social work students to acquire basic foundational knowledge about the field (history, philosophy, public policy, institutional structure, etc.); for second-year students to focus on the theory and principles of social work practice; and for third-year students to concentrate on skill development and field experience.

But this approach doesn't make any sense at all when working with mature people who are problem centred in their orientation to learning. At best they would see the first two years to be drudgery that has to be endured in order to get to the "real thing" in the third year. They would see as much more relevant a curriculum that is organised around the problem areas with which social work deals (p.58).

Modules or subjects

The glossary of terms used in TAFE (Parkinson, 1986) defines a learning module as

... discrete and integrated packages of knowledge and skill complete within themselves, dealing with one aspect or a number of aspects of vocational education at a given level of understanding or skill performance in accordance with stated aims and objectives. While the modules stand on their own, the learning of the modules must be assessable and the programmes capable of being linked to other modules either in the same or a related area. They tend to be task rather than discipline oriented and of variable length depending on the time taken to achieve objectives (p.77).

A NSW Department of TAFE study (1984) states that modular training, as described in the literature, involves a particular approach to organising course structure and instruction.

A course is divided into units of instruction i.e., modules, based on topics or capabilities whereby theory and practical aspects are integrated. The modules are self-contained and can be completed in various sequences with variable student progression. Instructional features normally incorporated in modular training systems are mastery, student pacing, performance objectives, and continuous assessment. Learning activities are typically presented to the student in a package of written and/or audio visual material (p.2).

In Victoria modular training refers strictly to "training based on the concept of building up the skills and knowledge required for job tasks, in independent units called learning units" (TAFE Vic, 1980, p.95). There is also the rider that "modules can usually be taken in a number of possible orders".

The relationship between modules and subjects is demonstrated by the following example from a modular plumbing course.

Module 4.3: Installing a basin

Theory: Types of basins; Fixing methods; Waste requirements; Water supply; Connection methods

Maths: Waste pipe grades; Distance/Grade = Fall

Drawing: Sketch of method of connection

Practice: Fix basin and connect waste pipe and water

(McDonald, 1982, p.9).

Instead of studying Theory, Maths, Drawing and Practice as separate subjects, the student performs the single task of installing a basin, integrating the basic skills and knowledge from a number of subjects simultaneously. McDonald (1982) states that the major difference between modular and subject based courses is that

The latter allow the student to build up skills in various areas and knowledge in other areas over a long period of time. Only in the final stage of the course - or perhaps never in the training situation - is the student allowed to work on a "project" which requires the integrated use of all skills and knowledge acquired. Modular courses, on the other hand, build up the ability to perform real-life tasks as a first aim and the acquisition of skills and knowledge takes second place (p.10).

Clover and Goode (1982), reporting on the Perth conference on occupational analysis, put forward a practical reason for curriculum developers to use a modular design in a period of rapid technological and economic change.

... the restructuring of courses into modules ... has, as one of its advantages, the ability to revise one or more modules (based on a skill or group of skills) without having to revise the whole course. This means that a course can be constantly up-dated by adding on modules or changing existing ones (p.17).
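A minimal sketch of the practical advantage Clover and Goode describe: when a course is held as independent modules, one module can be revised or replaced without touching the rest. The module names and content are hypothetical, apart from the plumbing module quoted earlier.

    course = {
        "Module 4.2": "Installing taps",
        "Module 4.3": "Installing a basin",
        "Module 4.4": "Installing a water heater",
    }

    # A change in technology affects only one task, so only one module is revised;
    # the remainder of the course is left untouched.
    course["Module 4.4"] = "Installing a solar water heater"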

O'Donnell (1978) studied the issue in NSW with a view to deciding "whether modules or subjects would be a better design on which to base new or revised courses". Her paper identified the essential features of a module as being "integrated, autonomous and performance orientated". Subjects may have these characteristics also, but she found "in practice they generally do not". As a guide to curriculum developers, O'Donnell recommended choosing a modular design when

Subject design, O'Donnell (1978) stated, should be followed when

In general, O'Donnell felt that vocational courses tended to have the sort of content which lent itself to modular design, but warned that the inclusion of uniform minimum standards required that assessment be planned at the same time as the material is written (p.2). She also stated that teacher support should be sought by curriculum developers when the introduction of module design required a major change in approach or style of teaching (p.2).

A later NSW study (1984) pointed out that very few of the courses with a modular structure actually included all the components of a complete modular system, and so were not strictly modular courses at all. This is still true of many modular courses in most Australian states and territories, although it is not widely regarded as important. As the NSW study recommended,

special effort should be directed towards those features of course design that allow students/employers some choice in attendance patterns, module sequence and course specialisation (p.5).

Marks and mastery

Each teacher begins ... with the expectation that about a third of his students will adequately learn what he has to teach. He expects about one third of his students to fail or to just "get by". Finally he expects another third to learn a good deal of what he has to teach, but not enough to be regarded as "good students". This set of expectations, supported by ... policies and practices in grading, becomes transmitted to the students through the grading procedures ... [and] is the most wasteful and destructive aspect of the present educational system (Bloom, 1968, p.1).

Thus Bloom sparked off the marks versus mastery assessment debate over two decades ago. The issues are still complex and vigorously discussed in TAFE circles.

The case for mastery learning, achievement and testing is well documented, although there is confusion in the terminology used in different states. It is based on the interaction of student aptitude, the quality of instruction and the amount of time available for learning, and rests on the premise that when the right balance between these variables is reached, "the relationship between aptitude and achievement should approach zero" (Bloom, 1968, p.3) and that "virtually all students can and will learn well most of what they are taught" (McDonald, 1982, p.36).

It is argued that mastery learning, in the sense of criterion referenced testing, is particularly appropriate for TAFE, because vocational training is based on the philosophy that "analysis of the job ... [determines] what is to be learned, and it is from analysis that the objectives are derived" (TAFE Vic, 1980, p.35). There is no alternative to mastering the skills required on the job if the training is to be relevant. Broderick (1981) defines mastery learning as follows:

The vocational learning task is divided into its elements and the student is progressively and objectively assessed by his performance in meeting the behavioural objectives of each element or sub-set of the task. By using a simple "Go - No Go" criterion as the measure, the student is objectively assessed in a sequential manner; moreover, the student, by his achievement assesses his own performance. By achieving all the elemental task objectives, the student has achieved mastery of the whole learning task, which of itself is a part of a syllabus/subject/unit/ module (pp.xix and xx).

In practice, mastery learning implies that performance objectives are clearly defined, as McDonald (1982) points out.

... the course content must be spelt out, usually by listing students' performance objectives. These objectives should:

i) state precisely what the student should be able to do;
ii) describe the conditions under which the student must show his competence; and
iii) state the standards of performance expected of the student (p.36).
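A minimal sketch, in Python, of how objectives in this form might be recorded, together with the "Go - No Go" mastery check Broderick describes. The objective wording and the field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class PerformanceObjective:
        """One elemental objective, stated in the three parts McDonald lists."""
        action: str      # what the student should be able to do
        conditions: str  # conditions under which competence must be shown
        standard: str    # standard of performance expected
        achieved: bool = False   # the "Go - No Go" criterion

    objectives = [
        PerformanceObjective("Fix a basin to a wall",
                             "given standard brackets and hand tools",
                             "level, secure and at the specified height"),
        PerformanceObjective("Connect the waste pipe",
                             "using the materials specified in the drawing",
                             "watertight and at the correct grade"),
    ]

    def mastery_achieved(objs) -> bool:
        """The task is mastered only when every elemental objective is achieved."""
        return all(o.achieved for o in objs)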

The Victorian TAFE (1980) document on the instructional systems model summarises the arguments against the use of defined performance objectives as follows:

(a) they can be ambiguous and fail to communicate what they intend unless particular care is taken in their framing;
(b) a list of behaviours does not always represent adequately the structure of knowledge that is being presented (this is less so in vocational educational programs where behaviours are critical to effective job performance);
(c) the formal language of objectives can mitigate against their effective use (p.36).

The arguments in favour are given far more attention in the same document.

In considering the value of objectives, the following summaries of research findings may prove useful. Research in the areas, although scant, has shown:

(a) Objectives can be helpful for learning.
(b) Objectives have never been shown to inhibit learning.

As guides to teaching and learning:

(a) Objectives are useful if developed initially as starting points in formulating details of curriculum.
(b) Objectives, whether written in general or specific terms, do have certain things in common: · they contain an action verb; · they tell learners what they will be required to do.
(c) There is no significant evidence to support the contention that writing "Mager" type objectives (highly specific objectives, written in the traditional behavioural form) is any better than writing performance statements. What is of importance is the teacher's commitment to realising them. There is a need therefore to focus upon inducting teachers into the use of syllabi framed in an objective format.
(d) Children taught by teachers using objectives appear to benefit significantly more than children taught by teachers not given them. This advantage would appear to hold regardless of whether the teachers were trained in their use as guides to teaching, although training significantly increased learning. Such clearcut findings are unusual in educational research.
(e) Objectives provide students with clear goal statements for learning.
(f) One study (M.O. Schneiderwent) presents evidence that males appear to benefit more from objectives than females.
(g) Studies tend to show that giving learners the objectives prior to starting a program significantly increases learning in the traditional teaching situation. As well, providing learners with objectives reduces initial anxiety.
(h) Middle level ability learners benefit most from objectives.
(i) Research tends to show that tests based upon objectives are more effective than tests based upon content matter (pp.35-36).

The issue of norm referenced and criterion referenced testing is a further aspect of the same debate. McDonald (1982) summarises the arguments briefly in the following definitions.

A norm referenced test measures each student's achievement by identifying his performance in relation to the performance of others on the same test. The main use of a norm referenced test is to rank students, and to do this, test items are usually written to emphasize variances in student performance, i.e., to spread out the student marks. Norm referenced tests can validly be used in situations where a degree of selectivity is required, e.g., when deciding which students should be advised to pursue further studies, or when trying to identify the "best" student in the class.

A criterion referenced test identifies an individual's status with respect to an established standard of performance. It is used to determine whether individuals possess a particular competence. Criterion referenced tests are especially useful for monitoring student progress in an individualized instruction system, and for diagnosing specific areas of weakness. More importantly, criterion referenced testing provides a way of building content meaning into test scores - it permits us to know what the student can do (p.38).
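A minimal sketch, in Python, of the distinction McDonald draws. The scores, the pass standard and the student names are hypothetical.

    scores = {"Lee": 82, "Kim": 67, "Sam": 74}

    # Norm referenced: interpret each result by ranking it against the others.
    ranking = sorted(scores, key=scores.get, reverse=True)   # ['Lee', 'Sam', 'Kim']

    # Criterion referenced: interpret each result against a fixed standard.
    STANDARD = 70
    competent = {name: score >= STANDARD for name, score in scores.items()}
    # {'Lee': True, 'Kim': False, 'Sam': True}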

Arguments in favour of graded marks, as distinct from "achieved/not yet achieved" categories, are not widespread in the literature, even though the principle survives strongly in the practice of thousands of TAFE instructors. An underlying belief in instructors' professional integrity appears to allow the practice to persist widely without being vigorously defended in the literature. In fact, the tendency in some states has been for the trade areas to move towards mastery, behavioural objectives and criterion referenced testing, while courses based on cognitive and affective skills have retained a marking or grading system based on norm referenced tests. However, these tendencies differ widely between TAFE Authorities, and there is no national pattern. McDonald wrote in 1982

Mastery learning ... will require ... changes and will incur ... costs (not merely financial). The question which must be fully investigated and answered is, "Are the benefits worth the costs?" (p.38)

Information needed from the data

I will leave the literature at this stage for the sake of brevity, and attempt to sum up the approach a curriculum developer will take when designing a vocational training course. This information can be found in the literature, but it is not easily extracted and is not necessarily specific to the adult, vocational situation.

One of the problems of the curriculum process is the fuzzy area between occupational data collection and the design of the training programme. The existence of occupational data does not mean that the data have been analysed, or even interpreted. Often the curriculum developer has to do quite a lot of work on the data before important curriculum questions can be answered.

At best, the data consist of a definitive list of job derived performance objectives indicating the importance and frequency of their occurrence in a specified job area. Sometimes the data consist of a computer printout of the significance and relevance of hundreds of tasks and skills. Sometimes they are more like duty statements or job descriptions including tasks but not skills. Sometimes the information has been derived from the widest range of staff possible in the occupational area, from the cleaner to the managing director; at other times it is no more than a report on several unstructured interviews with employers. Sometimes it is an interpretative course evaluation, or an audit of training needs based on teachers' comments. There are instances where there is no written report at all.

Occupational data certainly do not always reach curriculum developers as a neatly analysed and annotated package providing all the answers they need to begin writing course objectives. Developers are looking for quite a lot of information and, more often than not, they don't find it.

Before any curriculum decisions can be made, developers will have to look carefully and critically at the available data.

Sometimes the survey will include information about course level and student market. If it is not available, curriculum developers will need to collect further information to discover the answers to the following questions.

Before proceeding with curriculum development, it is necessary to ascertain whether the course is feasible and whether it will be used. The data should include information from unions, professional bodies and government agencies. It is certainly necessary to know whether the course is acceptable to the TAFE Authority, as funding body and training institution.

Many of these questions are referred to in the literature on occupational data collection and analysis. Developers should have information on all these issues before the design stage can begin. However, many projects begin with far less than this and developers must often rely on little more than professional judgement to make correct decisions.

Translating data into curriculum

When the information outlined above is to hand, what are the curriculum questions to be addressed? How can the curriculum developer translate it into a course which will be valid to employers and unions, educationally justifiable to instructors and institutions, and realistic to the aspirations and expectations of the students?

From the data available, developers must determine the course content and structure, its length, level and format. They must adapt it to fit in with existing financial, staffing and accreditation constraints. In the process they must retain the highest possible degree of educational integrity, while catering realistically to the harsher demands of commerce and industry. Furthermore, the present and future needs of the students must remain paramount.

Curriculum developers must have some sort of framework within which to develop the course. Ascertaining the course structure is not necessarily the first step; content and structure can be decided in either order, or even simultaneously. However, when a number of people are working together on a course team, it is probably more important that they agree early on the structure of the course. The questions which follow are those which need to be answered to decide on course structure.

The objectives produced by the occupational survey, if indeed they have been produced at all, are job or terminal performance objectives. The developer selects those which will be included in the course on the basis of the criteria identified in the last section. It is necessary, however, to reformulate these as student performance objectives, because it is on these that the course will take shape. In formulating these objectives, there are further questions to be answered.

Only when these questions have been answered and the objectives and testing framework written, will the developer be ready to face the issues of writing the syllabus document. With the completion of the syllabus document, the design and development stage is half complete. The learning media and materials have yet to be chosen, designed and produced, but the following chapters confine their interest to the curriculum decisions leading up to this point and how and why they were made.

Research into curriculum decision making

Some educational issues involved in TAFE curriculum development have been identified in chapters 2 and 3 and their importance justified by reference to the current literature. It is significant, however, that the literature includes little discussion of these issues from the point of view of curriculum decision making, nor, to any great extent, of the factors which will influence the way the issues are adapted to and incorporated in the structure and content of new curriculum projects.

The suggestion has been made that decisions on curriculum appear to be based on intuition or professional judgement. If this is so, how can we ensure that this intuition is trustworthy? How can professional judgement be developed? What are the ingredients? How can new curriculum developers begin before they have developed the intuition and professional judgement that, it is claimed, come from experience? In an area where procedures in most TAFE Authorities are clearly and tightly defined, and the educational issues are at least partly documented, is it possible that a whole series of questions has not yet been dealt with? Is it possible that some very important questions have not yet been asked, let alone answered, in the area of TAFE curriculum development? The object of these case studies was to find out.

It was necessary first to establish whether the key educational questions, or similar ones, were being satisfactorily addressed and answered. Can it be assumed that curriculum developers know what options are open to them? Are they familiar with current thought and development in the field of TAFE curriculum in Australia? Are they educated in TAFE curriculum issues?

Second, the research needed to discover what factors consciously and subconsciously influence decision making. This was to prove more difficult. People can answer questions about conscious influences, but the subconscious ones can only be estimated or partly guessed at. One of the reasons why TAFE curriculum literature hasn't dealt adequately with the problem of decision making is, without doubt, the number of important influences lying beneath the surface which can't be readily observed.

Third, the research set out to discover whether, at a time of increasing economic accountability, informed intuition and professional judgement could remain the basis of decision making, or whether guidelines or considerations could be established to make the process more efficient and effective. This was to be attempted on the basis of the findings of the two previous questions.

Experienced curriculum developers were asked what issues they considered, on what data they based their decisions, and how they justified these decisions. They were also asked about the decisions themselves, in particular which were the most difficult, the most significant, the most successful and least successful and whether they would do things differently if they were to tackle the problem again.

It was hoped to extrapolate from this information guidelines or principles to assist curriculum developers in their decision making function. It was not envisaged that a procedural model or a check list of procedures would be developed, as procedure is already well catered for in the literature available to TAFE Authorities. Rather, it was hoped that a set of principles could be identified which might free curriculum developers from the restriction of narrow procedural paths and set answers, and give them a broad and informed context in which they could make decisions realistically and with confidence. The research method and methodology are explained in Appendix A at the end of this book, while the interview schedule used for eliciting the data is included as Appendix B.

