10 July, 2009

Weeks 12 & 13 (cont.) - Summary of results

Data analyses of Student Survey (for usability)

Table 4 below indicates that overall, 72% of students responded in a positive manner to the 20 selected standards from Items 1, 2 and 4. However, 18% disagreed or strongly disagreed; these responses relate to Standards 1.8, 2.1, 2.6, 2.9, 2.10, 2.13, 4.1, 3.2 and 4.5. Finally, 9% of responses indicated N/A, referring to Standards 2.9, 2.10 and 2.14.
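As an aside on method: percentage frequencies like those summarised in Tables 4 and 11-13 can be tallied from raw Likert-scale responses with a short script. This is only an illustrative sketch; the category labels and the sample data are assumptions, not the actual survey responses.

```python
from collections import Counter

CATEGORIES = ["Strongly agree", "Agree", "Disagree", "Strongly disagree", "N/A"]

def percent_frequencies(responses):
    """Tally Likert responses and return the % frequency per category (whole numbers)."""
    counts = Counter(responses)
    total = len(responses)
    return {cat: round(100 * counts.get(cat, 0) / total) for cat in CATEGORIES}

# Hypothetical sample of 20 responses to one standard (not the actual survey data)
sample = (["Strongly agree"] * 4 + ["Agree"] * 10 +
          ["Disagree"] * 3 + ["Strongly disagree"] * 1 + ["N/A"] * 2)

freqs = percent_frequencies(sample)
positive = freqs["Strongly agree"] + freqs["Agree"]        # 20 + 50 = 70
negative = freqs["Disagree"] + freqs["Strongly disagree"]  # 15 + 5 = 20
```

Grouping "strongly agree" with "agree" (and likewise for the negative categories) is how the overall positive/negative percentages in the tables would be derived.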


As well as responses to Standards 1, 2 and 4, there was an area at the bottom of the online questionnaire for respondents to write general comments. All respondents left comments, and the feedback was positive.

Data analyses of Staff survey (for quality)

Table 11 below indicates that overall, 88% of all staff responded in a positive manner to the 20 selected standards from Items 1, 2 and 4. However, 11% indicated an N/A response. Appendix 10 identifies that one role made these responses; they relate to Standards 1.6, 1.7 and 1.8, and to all of Items 2 and 4.


Table 12 below indicates that overall, 100% of online staff responded in a positive manner to the 20 selected standards in Items 1, 2 & 3 (see also Appendix 9).

Table 12 – Summary of Online Staff-only results, shown as % frequencies (n=5) [graph not reproduced]





Table 13 below indicates that, overall, the Curriculum & Academic Services staff member rated 95% of the selected standards in Items 1, 2 & 4 as N/A (not applicable) (see also Appendices 10 and 11).

As well as responses to structured questions, there was an area for each Item where respondents were able to write comments.
The full version of the results is available - click here to access.

14 March, 2009

Weeks 12 & 13 - Analysis of data and results

My updated Assignment 2 - Evaluation Plan is now available here on Google docs.

I trialled the initial Best Practice Checklist set of questions with a small group from the Whanganui campus. I soon realised there were too many questions; some were repeated and some needed clarification. Also, most responses I received from staff tended to reflect an evaluation of a Moodle site, rather than rating each standard according to its relevance to their role when assessing a potential Moodle site. So it all ground to a halt! After navel-gazing - actually in-between selling and buying a house, selling all my old furniture, buying new and shifting ... then recovering (aka sleeping) ... I renamed the checklist the "Quality Matters Checklist" and revamped it to cover 6 Items:
  1. General Overview & Introduction
  2. Learner Support & Technology Support
  3. Resources & Materials
  4. Design & Navigation
  5. Engagement, Interaction & Reflection
  6. Assessment
Within each Item, there are a number of standards.

After a discussion with Bronwyn I also separated the two sets of questions -
  • Yesterday I sent version 2 of the (Quality Matters) Checklist to NZDB staff in Whanganui to complete (hopefully by Monday).

    TT13 - what criteria can be used during the design and development of an eLearning course (paper) to guide best practice (relevant UCOL staff and stakeholder/s to complete);

  • I then selected 20 questions from the same Checklist and created an online questionnaire in the NZDB (WG) Administration site in Moodle. I also sent an email to all participants (current students) in the site explaining the purpose of the questionnaire and seeking feedback, also by early next week.

    TD5 - has a representative sample of students tested the eLearning materials and any necessary modifications been made? (students to complete).

I would hope that the collation of qualitative data from meetings, interviews, discussions and quantitative data from the questionnaire will enable me to analyse and prepare an evaluation report.

Weeks 10 & 11 - Conducting the evaluation

Once I received feedback from Bronwyn (shared here on Google docs) I edited my initial draft to reflect these changes. Like Rachel, I am having fun trying to get used to Google docs and use the collaborative tools effectively ... it's a work in progress!

On Wednesday 13 May I met with five of the eight staff from the Business School team (the Focus Group) in Whanganui to provide a background to the Evaluation questionnaires. We identified a number of appropriate stakeholders to invite to a luncheon the following week (20 May) and to participate in this evaluation process:
  1. Questionnaire - student evaluation. Current and former students will be evaluating an NZIM CBS 808 Moodle site, currently in its initial stages of development and now available for critique. They will also be asked to comment on whether the Questionnaire Items and questions within each Item are relevant and where necessary, provide feedback.

    (TD5: usability - has a representative sample of students tested the eLearning materials and any necessary modifications been made?).

  2. Best Practice Checklist (or Guidelines). Stakeholders from within and outside of UCOL have been identified and are now being approached to attend the luncheon also and to participate in the completion of the Best Practice Checklist. They too will be asked to comment on whether the Checklist Items and questions within each Item are relevant and where necessary, provide feedback.

    (TT13: quality - what criteria can be used during the design and development of an eLearning course to guide best practice).

I also intend to collect data and information by interview (face-to-face, telephone, Skype, email, Discussion Forum) and where appropriate, by observation.

I also need to meet with UCOL Ethics Committee before I get to this stage! And ... to provide worthwhile feedback to my peers studying alongside me in this course!

Week 9 - prepare and present an evaluation plan

After preparing my first draft plan, I have incorporated Bronwyn's feedback and now submit my Evaluation Plan in Google docs - to access, click here.

I now await feedback comments in this blog posting from (at least) two of my fellow classmates. Once this is done, I will acknowledge the feedback and report back.

I look forward to your feedback - thank you!

Weeks 7 & 8 - Negotiate and write an evaluation plan

I continue to re-read, re-write and review my draft Evaluation Plan and blog postings and responses of others, and alongside Bronwyn's support and advice, I am in the process of defining more clearly my two eLearning guidelines:

  1. TD5 (currently): Has a representative sample of students tested the eLearning materials and any necessary modifications been made?

    Add the following to TD5:

    - how easy do students find the online materials to use?
    - how effective is the design of the online course (paper) for learning?

  2. TT13 (currently): Does the teacher evaluate the eLearning during their course to identify its effectiveness and how to improve it?

    Bronwyn suggests rewriting TT13 to:

    "What criteria can be used during the (design and) development of an eLearning course to guide best practice?

    - do different personnel involved in the development phase have different perspectives?" Evaluators may include: SME, eLearning Advisor, Instructional Designer, Technical Expert, Curriculum & Academic Services (CAS).

A couple of thoughts where I would appreciate some feedback:

  1. What are the minimum acceptable standards for best practice! Who sets them? How?
  2. Would authority need to be given before the release of an eLearning course at our institution?

My draft Evaluation Plan will be available to view on Google documents once I learn how to do it!

Weeks 5 & 6 - Evaluation methods

The purpose of these two weeks has been to explore and eventually choose an appropriate evaluation method that would best suit my proposed project.

In Week 3, from the eLearning Guidelines for New Zealand I identified the following, relevant to my area of practice:

  1. TD5: has a representative sample of students tested the eLearning materials and any necessary modifications been made?

    (It is intended to measure the usability and effectiveness of the eLearning materials after implementation by students and respond by continual improvement to ensure viability over time).

  2. TT13: does the teacher evaluate the eLearning during their course to identify its effectiveness and how to improve it?

    (It is intended to measure the usability and effectiveness of the eLearning materials after implementation by teaching staff and respond by continual improvement to ensure viability over time).

Appropriate method for evaluation project

In Week 4 we considered evaluation paradigms and models. I related to Tom Reeves’ Educational Paradigm #4: eclectic-mixed methods (using multiple evaluation methods) and, like Adrienne, agree where she states:

This will allow me to pick and choose a variety of research methods based on what will fit in with what I want to study.

Why? As Rachel suggests:

This triangulation should give a reasonable indication and support for the outcomes and success of adopting flexible options to a F2F course.

Background

I am currently working with teaching and administration staff at UCOL Whanganui who teach F2F (or administer) the Certificate of Business Studies (CBS) programme. This programme also incorporates the NZIM Certificate in Management qualification. Teaching staff are expected to have an eLearning presence (blended delivery) using Moodle in all the CBS (F2F) papers. Some staff had been using Blackboard, which is still available but is being phased out. Lecturing staff are expected to develop a Moodle site for each of the 12 papers to support the students in a mainly F2F environment. They have limited support in the design plan of their Moodle site and may lack time and eLearning expertise. There are also no quality control measures in place. My role is that of eLearning advisor only.

What is the purpose of the evaluation?

From the Six Facets of Instructional Product Evaluation identified by Professor Tom Reeves, the purpose of my evaluation is, after the eLearning materials are implemented, to enable students and UCOL staff to measure their Moodle site for:

  1. usability; and
  2. effectiveness

From the WikiEd eLearning Guidebook’s Analysis of evaluation data, I plan to use the following methods:

  1. summative evaluation; and possibly
  2. monitoring evaluation

I plan to follow the University of Tasmania Project evaluation framework to guide me in the development, support and application of a best (or good) practice checklist (or guidelines) to ensure best practice methods and quality control measures are recognised and supported at our institution.

Week 4 - Evaluation paradigms and models

Initially when reading Tom Reeves’ Educational Paradigms I was drawn to Paradigm # 4: eclectic-mixed methods – pragmatic paradigm – using qualitative and quantitative (multiple) evaluation methods; preferring to deal with the practical problems that confront educators and trainers and recognising that a tool is only meaningful within the context in which it is to be used.

Then, in Bronwyn’s comparison of two models (experimental and multiple methods) identified in Reeves’ article, I discovered that Paradigm 4’s purpose is to provide information to potentially “improve” teaching and learning. In the evaluation activities, I found this multiple-methods model identified a more extensive evaluation, with suggestions of triangulation combinations. I agree with Sam's comments (and Bronwyn's feedback), which indicated the importance of providing opportunities from which to learn (rather than slipping into information providing).

Bronwyn’s 8 types of evaluation models provided further interpretations, and in my current role as an eLearning Advisor & Developer with the School of Business, I especially related to the:

  1. Multiple methods evaluation model – using a variety of methods to collect data. This is currently carried out in F2F classes, but not formally in our online programmes.
  2. Fourth generation evaluation model (Constructivist model) – involving discovery and assimilation - could this identify the ‘blanks’ we can face, where we "don’t even know what we don't know", when framing evaluation questions?;
  3. Stake’s responsive evaluation model – that identifies the needs and views of our (external) stakeholders, employing qualitative and quantitative methods; and
  4. Tylerian objectives-based evaluation model – designed to determine whether the original objectives have been met (a necessity to meet internal moderation (Curriculum & Academic Services) and external moderation (NZQA)).

In the past few weeks and previous study during this programme, we have referred to:

  1. OTARA instructional design model – Objectives, Themes, Activities (what the learners need to do to bridge the gap between Objectives and Assessment), Resources, Assessment;
  2. ADDIE instructional design model – Analysis, Design, Development, Implementation, Evaluation;
  3. multiple intelligences and learning styles;
  4. Accelerated learning & System 2000 learning cycle; and
  5. Salmon's 5-stage eLearning model.

As I consider the next few weeks of this paper and what I wish to achieve, I now seek to understand the following:

So ... in an elearning environment would a learning cycle, for example - System 2000, that promotes student-centred collaborative learning, still fit?

I think it does – in the A (Activities) of OTARA and in the D (Design) of ADDIE.

Your thoughts are welcome :-)!

Week 3 - eLearning guidelines for quality

This week’s task is to identify 2 eLearning guidelines relevant to my area of practice (with the School of Business staff teaching fully online and/or in a blended delivery environment).

eLearning guidelines

In the past year as the eLearning Advisor & Developer I have identified the following "eLearning Guidelines for New Zealand" that I wish specifically to address relating to quality in online teaching and learning.

  1. 1.1 Teaching staff: learning design: learner/centred

    TD5: has a representative sample of students tested the elearning materials and any necessary modifications been made?

  2. 1.2 Teaching staff: teaching relationships: collaboration

    TT13: does the teacher evaluate the elearning during their course to identify its effectiveness and how to improve it?

Quality eLearning issues

In 2008 we introduced our third LMS – Moodle. Teaching staff had been using the Colts LMS and Blackboard for just over five years. During this time, an ongoing issue from an eLearning Advisor and Developer’s perspective has been the lack of internal quality control procedures. There are no formal moderation processes for online resources centred around best practice models. We have yet to develop a relevant checklist for the design and delivery of an online paper at UCOL.


How the guidelines may address the issues

I would hope that the introduction of a student evaluation by Online Tutors, with the support and guidance of the eLearning Advisor and Developer, would assist in the ongoing improvement and effectiveness of online courses.

The criteria for evaluating the quality of online courses would be developed based on research and collaboration.



Milne, J. & Dimock, E. (2006). eLearning guidelines - guidelines for the support of elearning in New Zealand tertiary institutions. (version 0.8). Massey University.

Wright, C. R. (n.d.). Criteria for evaluating the quality of online courses. Alberta.

Week 2 - Quality & Evaluation

  1. Why is evaluation important to you and how do you define it?

    Evaluation is important as it can provide a range of different and useful information to assist in future course design, planning and implementation.

    Evaluation is the process through which we examine the learning opportunities and experiences we offer our students and make judgements about the effectiveness and educational value of techniques and resources, as well as the costs. As Heather stated in her Week 2 posting, "evaluation is a way of determining what is needed". How? By looking at a broad range of evidence in order to gauge the effectiveness of the elearning – for example, including a course assessment as a method of evaluation data collection as a way of keeping quality issues clearly in focus.

    "Evaluation is a continuous ongoing process (Gunn, 1999) that is 'fundamentally about asking questions, and then designing ways to try to find useful answers' (Manwaring and Calverley, 1998). It is an expensive and time-consuming process and it is essential that it is worthwhile:

    'If the answer to the question "why evaluate?" is that the results will lead to action to improve the teaching and learning within the course or institution, then all the effort will be worthwhile.' (Shaw, 1998)" (Higgison, 2001)

  2. What sort of evaluations mentioned on the presentation are familiar to you already and why?

    Observation – in some assessments, observation is required to identify whether the student is able to demonstrate competency in performing a task – this can be observed by audio, video recording or F2F.

    Questionnaires – I have used online surveys, hard-copy evaluation handouts and verbal questions to evaluate teaching and learning.

    Focus groups – regular meetings with our stakeholders – relevant Industry Advisory Board representatives.

    Expert review – internal (pre- and post-) within a School by peers and/or Senior Lecturers, by Subject Matter Experts, by Curriculum & Academic Services at UCOL and across campuses, and external moderation with NZQA.

    Checklists – depending on the paper or unit standard being assessed – often combined with Observation.

    Feedback in a discussion forum – by quality participation, text chat feedback and the use of voting buttons (in Elluminate).

  3. Why is quality important in eLearning?

    Learning and teaching that incorporates the use of learning technologies is complex, and evaluation is seen as the key to developing an understanding of the factors that influence its success. That success will depend on the quality of its instructional design and the academic and technical support provided to learners and online tutors.

    I agree with Joy's comment: "evaluation is based on asking questions, collecting data or information that answer the questions, through analysis then make the decision and take action to ensure the whole project / system fulfil planned outcome. It can be varied depends on what questions we are asking and what result we would like to get."

    Quality assurance standards are the responsibility of the institution, and therefore it is essential that it evaluates its practices. Evaluation plays an important role in satisfying the demands of external (and internal) scrutiny. This ensures that innovations are subject to the same institutional scrutiny and evaluation processes as traditional F2F delivery.

    However, as Bronwyn has suggested in her slide presentation – “we have got away with it so far, and have not really bothered to do much evaluation apart from checking about what students think of the course at the end of the course …..” – therefore examples of sound pedagogy and integrity of best practice in elearning are at risk. As Michelle states, "it is imperative that the course material and activities are of a high quality."

    Specific indicators for measuring quality include:

    • assessment of student learning,
    • feedback from students, peers and external reviewers, and
    • institutional accreditation procedures.



Gunn, C. (1999). They love it but do they learn from it? Evaluating the educational impact of innovations. Higher Education Research and Development 18 (2): 185-199.

Higgison, C. (2001). Online tutoring e-book. Chapter 5 Evaluation. Institute for computer-based learning. Edinburgh.

07 March, 2009

Week 1 - Introductions

Hi everyone

This is my fifth paper towards the Grad Cert in eLearning, so the end is in sight! It's great to see some familiar names again - Elaine and Joy have studied alongside me during 2008 and have been a great support.

I am the eLearning Advisor and Developer for the Faculty of Humanities & Business at UCOL in Palmerston North. Currently I work alongside the New Zealand Diploma in Business and NZIM Certificate in Management teaching and administration staff (at Wanganui and PN campuses) to support them in developing fully online and blended delivery of resources using Moodle. Moodle was introduced to UCOL a year ago. Staff are also using Blackboard, currently being phased out.

I am also a Senior Lecturer with the School of Business and a certified Buzan Licensed Instructor in Mind Maps. When I itch to return to the F2F classroom I facilitate TLC (Think, Learn & Create) workshops for UCOL staff:
  • TLC using Mind Maps,
  • TLC for a Balanced Life,
  • TLC for Notemaking & Notetaking and
  • TLC for creative lesson planning.

I look forward to this paper to gain further knowledge and understanding in elearning evaluation.

I wish to create a relevant, valid and reliable evaluation that can be used for our fully online Moodle programmes for NZDB and NZIM, something I have not been able to develop yet at UCOL.