
14 March, 2009

Week 4 - Evaluation paradigms and models

Initially when reading Tom Reeves’ Educational Paradigms I was drawn to Paradigm # 4: eclectic-mixed methods – pragmatic paradigm – using qualitative and quantitative (multiple) evaluation methods; preferring to deal with the practical problems that confront educators and trainers and recognising that a tool is only meaningful within the context in which it is to be used.

Then, in Bronwyn’s comparison of the two models identified in Reeves’ article – experimental and multiple methods – I discovered that Paradigm 4’s purpose is to provide information that can potentially improve teaching and learning. In the evaluation activities, I found this multiple methods model supported a more extensive evaluation, with suggestions for triangulation combinations. I agree with Sam’s comments (and Bronwyn’s feedback), which indicated the importance of providing opportunities from which to learn, rather than slipping into information providing.

Bronwyn’s 8 types of evaluation models provided further interpretations, and from my current role as an eLearning Advisor & Developer with the School of Business, I especially related to the following:

  1. Multiple methods evaluation model – using a variety of methods to collect data. This is currently carried out in F2F classes, but not formally in our online programmes.
  2. Fourth generation evaluation model (constructivist model) – involving discovery and assimilation. Could this identify the ‘blanks’ we can face, where we “don’t even know what we don’t know” which evaluation questions to ask?;
  3. Stake’s responsive evaluation model – which identifies the needs and views of our (external) stakeholders, employing qualitative and quantitative methods; and
  4. Tylerian objectives-based evaluation model – designed to determine whether the original objectives have been met (a necessity for internal moderation (Curriculum & Academic Services) and external moderation (NZQA)).

In the past few weeks and previous study during this programme, we have referred to:

  1. OTARA instructional design model – Objectives, Themes, Activities (what the learners need to do to bridge the gap between Objectives and Assessment), Resources, Assessment;
  2. ADDIE instructional design model – Analysis, Design, Development, Implementation, Evaluation;
  3. multiple intelligences and learning styles;
  4. Accelerated learning & System 2000 learning cycle; and
  5. Salmon's 5-stage eLearning model.

As I consider the next few weeks of this paper and what I wish to achieve, I now seek to understand the following:

So ... in an eLearning environment, would a learning cycle that promotes student-centred collaborative learning – for example, System 2000 – still fit?

I think it does – in the A (Activities) of OTARA and in the D (Design) of ADDIE.

Your thoughts are welcome :-)!

Week 3 - eLearning guidelines for quality

This week’s task is to identify two eLearning guidelines relevant to my area of practice (with the School of Business staff teaching fully online and/or in a blended delivery environment).

eLearning guidelines

In the past year as the eLearning Advisor & Developer, I have identified the following "eLearning Guidelines for New Zealand" that I specifically wish to address relating to quality in online teaching and learning.

  1. 1.1 Teaching staff: learning design: learner-centred

    TD5: has a representative sample of students tested the elearning materials and any necessary modifications been made?

  2. 1.2 Teaching staff: teaching relationships: collaboration

    TT13: does the teacher evaluate the elearning during their course to identify its effectiveness and how to improve it?

Quality eLearning issues

In 2008 we introduced our third LMS, Moodle. Teaching staff had been using the Colts LMS and Blackboard for just over five years. During this time, an ongoing issue from an eLearning Advisor and Developer’s perspective has been the lack of internal quality control procedures. There are no formal moderation processes for online resources centred around best-practice models, and we have yet to develop a relevant checklist for the design and delivery of an online paper at UCOL.


How the guidelines may address the issues

I would hope that the introduction of student evaluations by online tutors, with the support and guidance of the eLearning Advisor and Developer, would assist in the ongoing improvement and effectiveness of online courses.

The criteria for evaluating the quality of online courses would be developed based on research and collaboration.



Milne, J., & Dimock, E. (2006). eLearning guidelines: Guidelines for the support of elearning in New Zealand tertiary institutions (version 0.8). Massey University.

Wright, C. R. (n.d.). Criteria for evaluating the quality of online courses. Alberta.