10 July, 2009

Weeks 12 & 13 (cont.) - Summary of results

Data analysis of Student Survey (for usability)

Table 4 below indicates that overall, 72% of students responded positively to the 20 selected standards from Items 1, 2 and 4. However, 18% disagreed or strongly disagreed; these responses relate to Standards 1.8, 2.1, 2.6, 2.9, 2.10, 2.13, 3.2, 4.1 and 4.5. Finally, 9% of responses indicated N/A, referring to Standards 2.9, 2.10 and 2.14.
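
As a side note, the tallying behind these percentage figures is straightforward; below is a minimal sketch (in Python) of how raw Likert responses can be rolled up into the positive / negative / N/A frequencies reported in Tables 4, 11 and 12. The option labels and the (standard, answer) pair format are my own assumptions for illustration, not the survey's actual export format.

    from collections import Counter

    # Likert options grouped the same way as in the summary tables
    # (labels are assumed for illustration, not the survey's exact wording).
    POSITIVE = {"Strongly Agree", "Agree"}
    NEGATIVE = {"Disagree", "Strongly Disagree"}

    def summarise(responses):
        """Tally (standard, answer) pairs into rounded % frequencies."""
        counts = Counter()
        for standard, answer in responses:
            if answer in POSITIVE:
                counts["positive"] += 1
            elif answer in NEGATIVE:
                counts["negative"] += 1
            else:
                counts["N/A"] += 1
        total = sum(counts.values()) or 1  # guard against an empty survey
        return {key: round(100 * n / total) for key, n in counts.items()}

    # e.g. summarise(student_responses) -> {"positive": 72, "negative": 18, "N/A": 9};
    # because of rounding, the three figures need not sum to exactly 100.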


As well as responses to Items 1, 2 and 4, there was an area at the bottom of the online questionnaire for respondents to write general comments. All respondents left comments, and the feedback was positive.

Data analysis of Staff Survey (for quality)

Table 11 below indicates that overall, 88% of all staff responded positively to the 20 selected standards from Items 1, 2 and 4. However, 11% indicated an N/A response. Appendix 10 identifies that these responses came from one role; they relate to Standards 1.6, 1.7 and 1.8, and to all of Items 2 and 4.


Table 12 below indicates that overall, 100% of online staff responded positively to the 20 selected standards in Items 1, 2 & 4 (see also Appendix 9).

Table 12 – Summary of Online Staff only Results, shown as % frequencies (n=5) [graph]

Table 13 below indicates that, for the Curriculum & Academic Services staff member, 95% of the selected standards in Items 1, 2 & 4 were N/A (not applicable) (see also Appendices 10 and 11).

As well as responses to the structured questions, there was an area within each Item where respondents were able to write comments.
The full version of the results is available - click here to access it.

14 March, 2009

Weeks 12 & 13 - Analysis of data and results

My updated Assignment 2 - Evaluation Plan is now available here on Google Docs.

I trialled the initial Best Practice Checklist set of questions with a small group from the Whanganui campus. I soon realised there were too many questions; some were repeated and some needed clarification. Also, most responses I received from staff tended to reflect an evaluation of a particular Moodle site, rather than rating each standard according to how relevant they thought it would be to their role when assessing a potential Moodle site. So it all ground to a halt! After navel-gazing - actually in between selling and buying a house, selling all my old furniture, buying new and shifting ... then recovering (aka sleeping) ... I renamed the checklist the "Quality Matters Checklist" and revamped it to cover 6 Items:
  1. General Overview & Introduction
  2. Learner Support & Technology Support
  3. Resources & Materials
  4. Design & Navigation
  5. Engagement, Interaction & Reflection
  6. Assessment
Within each Item, there are a number of standards.
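
As a rough illustration, here is how I picture the revamped Checklist's structure (sketched in Python); the Item names are from the list above, while the standard codes and wording shown are placeholders only, not the actual checklist content.

    # A hypothetical representation of the Quality Matters Checklist:
    # each Item holds its own numbered standards (wording is placeholder only).
    checklist = {
        "1. General Overview & Introduction": {
            "1.1": "placeholder standard wording",
            "1.2": "placeholder standard wording",
        },
        "2. Learner Support & Technology Support": {
            "2.1": "placeholder standard wording",
        },
        # ... Items 3 to 6 follow the same pattern ...
    }

    # Standard codes such as "2.9" then point directly into this structure,
    # which also makes it easy to pull out a subset of standards (for example,
    # the 20 questions later selected for the student questionnaire).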

After a discussion with Bronwyn, I also separated the two sets of questions:
  • Yesterday I sent version 2 of the (Quality Matters) Checklist to NZDB staff in Whanganui to complete (hopefully by Monday).

    TT13 - what criteria can be used during the design and development of an eLearning course (paper) to guide best practice (relevant UCOL staff and stakeholder/s to complete);

  • I then selected 20 questions from the same Checklist and created an online questionnaire in the NZDB (WG) Administration site in Moodle. I also emailed all participants (current students) in the site, explaining the purpose of the questionnaire and asking for their feedback by early next week as well.

    TD5 - has a representative sample of students tested the eLearning materials and any necessary modifications been made? (students to complete).

I hope that collating the qualitative data from meetings, interviews and discussions, together with the quantitative data from the questionnaire, will enable me to analyse the results and prepare an evaluation report.

Weeks 10 & 11 - Conducting the evaluation

Once I received feedback from Bronwyn (shared here on Google Docs), I edited my initial draft to reflect these changes. Like Rachel, I am having fun trying to get used to Google Docs and to use the collaborative tools effectively ... it's a work in progress!

On Wednesday 13 May I met with five of the eight staff from the Business School team (the Focus Group) in Whanganui to provide background on the evaluation questionnaires. We identified a number of appropriate stakeholders to invite to a luncheon the following week (20 May) and to participate in this evaluation process:
  1. Questionnaire - student evaluation. Current and former students will be evaluating an NZIM CBS 808 Moodle site, currently in its initial stages of development and now available for critique. They will also be asked to comment on whether the Questionnaire Items and the questions within each Item are relevant and, where necessary, to provide feedback.

    (TD5: usability - has a representative sample of students tested the eLearning materials and any necessary modifications been made?).

  2. Best Practice Checklist (or Guidelines). Stakeholders from within and outside UCOL have been identified and are now being approached, both to attend the luncheon and to participate in completing the Best Practice Checklist. They too will be asked to comment on whether the Checklist Items and the questions within each Item are relevant and, where necessary, to provide feedback.

    (TT13: quality - what criteria can be used during the design and development of an eLearning course to guide best practice).

I also intend to collect data and information by interview (face-to-face, telephone, Skype, email, Discussion Forum) and where appropriate, by observation.

I also need to meet with the UCOL Ethics Committee before I get to this stage! And ... to provide worthwhile feedback to my peers studying alongside me in this course!

Week 9 - prepare and present an evaluation plan

After preparing my first draft plan, I have incorporated Bronwyn's feedback and now submit my Evaluation Plan on Google Docs - to access it, click here.

I now await feedback comments on this blog posting from at least two of my fellow classmates. Once this is done, I will acknowledge the feedback and report back.

I look forward to your feedback - thank you!

Weeks 7 & 8 - Negotiate and write an evaluation plan

I continue to re-read, re-write and review my draft Evaluation Plan, along with the blog postings and responses of others. With Bronwyn's support and advice, I am in the process of defining my two eLearning guidelines more clearly:

  1. TD5 (currently): Has a representative sample of students tested the eLearning materials and any necessary modifications been made?

    Add the following to TD5:

    - how easy to use do students find the online materials?
    - how effective is the design of the online course (paper) for learning?

  2. TT13 (currently): Does the teacher evaluate the eLearning during their course to identify its effectiveness and how to improve it?

    Bronwyn suggests rewriting TT13 to:

    "What criteria can be used during the (design and) development of an eLearning course to guide best practice?

    - do different personnel involved in the development phase have different perspectives?" Evaluators may include: SME, eLearning Advisor, Instructional Designer, Technical Expert, Curriculum & Academic Services (CAS).

A couple of thoughts on which I would appreciate some feedback:

  1. What are the minimum acceptable standards for best practice? Who sets them? How?
  2. Would authority need to be given before the release of an eLearning course at our institution?

My draft Evaluation Plan will be available to view on Google Docs once I learn how to do it!

Weeks 5 & 6 - Evaluation methods

The purpose of these two weeks has been to explore and eventually choose an appropriate evaluation method that would best suit my proposed project.

In Week 3, I identified the following from the eLearning Guidelines for New Zealand as relevant to my area of practice:

  1. TD5: has a representative sample of students tested the eLearning materials and any necessary modifications been made?

    (The intention is to measure the usability and effectiveness of the eLearning materials after implementation, from the students' perspective, and to respond with continual improvement to ensure viability over time).

  2. TT13: does the teacher evaluate the eLearning during their course to identify its effectiveness and how to improve it?

    (The intention is to measure the usability and effectiveness of the eLearning materials after implementation, from the teaching staff's perspective, and to respond with continual improvement to ensure viability over time).

Appropriate method for evaluation project

In Week 4, we considered evaluation paradigms and models. I related to Tom Reeves’ Educational Paradigm #4: eclectic-mixed methods (using multiple evaluation methods), and, like Adrienne, I agree with her when she states:

This will allow me to pick and choose a variety of research methods based on what will fit in with what I want to study.

Why? As Rachel suggests:

This triangulation should give a reasonable indication and support for the outcomes and success of adopting flexible options to a F2F course.

Background

I am currently working with teaching and administration staff at UCOL Whanganui who teach F2F (or administer) the Certificate of Business Studies (CBS) programme. This programme also incorporates the NZIM Certificate in Management qualification. Teaching staff are expected to have an eLearning presence (blended delivery) using Moodle in all the CBS (F2F) papers. Some staff had been using Blackboard, which is still available but is being phased out. Lecturing staff are expected to develop a Moodle site for each of the 12 papers to support students in a mainly F2F environment. They have limited support in planning the design of their Moodle sites and may lack time and eLearning expertise. There are also no quality control measures in place. My role is that of an eLearning advisor only.

What is the purpose of the evaluation?

Drawing on the Six Facets of Instructional Product Evaluation identified by Professor Tom Reeves, the purpose of my evaluation is to enable students and UCOL staff, after the eLearning materials are implemented, to measure their Moodle site for:

  1. usability; and
  2. effectiveness.

From the WikiEd eLearning Guidebook's Analysis of evaluation data, I plan to use the following methods:

  1. summative evaluation; and possibly
  2. monitoring evaluation.

I plan to follow the University of Tasmania Project evaluation framework to guide me in the development, support and application of a best (or good) practice checklist (or guidelines) to ensure best practice methods and quality control measures are recognised and supported at our institution.

Week 4 - Evaluation paradigms and models

Initially, when reading Tom Reeves’ Educational Paradigms, I was drawn to Paradigm #4: eclectic-mixed methods – the pragmatic paradigm – using qualitative and quantitative (multiple) evaluation methods, preferring to deal with the practical problems that confront educators and trainers, and recognising that a tool is only meaningful within the context in which it is to be used.

Then, in Bronwyn’s comparison of the two models identified in Reeves’ article (experimental and multiple methods), I discovered that Paradigm 4’s purpose is to provide information to potentially “improve” teaching and learning. In the evaluation activities, I found this multiple methods model described a more extensive evaluation, with suggested triangulation combinations. I agree with Sam's comments (and Bronwyn's feedback), which indicated the importance of providing opportunities from which to learn, rather than slipping into information providing.

Bronwyn’s 8 types of evaluation models provided further interpretations, and from my current role as an eLearning Advisor & Developer with the School of Business, I especially related to the:

  1. Multiple methods evaluation model – using a variety of methods to collect data. This is currently carried out in F2F classes, but not formally in our online programmes.
  2. Fourth generation evaluation model (constructivist model) – involving discovery and assimilation. Could this identify the 'blanks' we can face, where we "don't even know what we don't know" when framing evaluation questions?;
  3. Stake's responsive evaluation model – which identifies the needs and views of our (external) stakeholders, employing qualitative and quantitative methods; and
  4. Tylerian objectives-based evaluation model – designed to determine whether the original objectives have been met (a necessity for internal moderation (Curriculum & Academic Services) and external moderation (NZQA)).

In the past few weeks, and in previous study during this programme, we have referred to:

  1. OTARA instructional design model – Objectives, Themes, Activities (what the learners need to do to bridge the gap between Objectives and Assessment), Resources, Assessment;
  2. ADDIE instructional design model – Analysis, Design, Development, Implementation, Evaluation;
  3. multiple intelligences and learning styles;
  4. Accelerated learning & System 2000 learning cycle; and
  5. Salmon's 5-stage eLearning model.

As I consider the next few weeks of this paper and what I wish to achieve, I now seek to understand the following:

So ... in an eLearning environment, would a learning cycle that promotes student-centred collaborative learning - for example, System 2000 - still fit?

I think it does – in the A (Activities) of OTARA and in the D (Design) of ADDIE.

Your thoughts are welcome :-)!