The really really rough guide to e-Learning benchmarking in Higher Education (part 1)

Here’s a different type of Auricle post from my normal offering. As well as being a repository of information I’ve started to gather, the post is also a kind of thought lab I’m using to articulate and test out ideas, issues, and concerns related to benchmarking and e-learning. The post is a work-in-progress and so is pretty unrefined, but it may still be of some use to others contemplating work in this area.

Any views expressed in this Auricle posting are mine and should not be construed as necessarily representing the views of any other individual or organisation.

At ALT-C 2005 in Manchester, UK (6-8 September 2005) the Higher Education Academy and JISC launched the Higher Education E-Learning Benchmarking and Pathfinder initiative. On 5 September 2005, the Higher Education Academy and JISC published an Expressions of Interest call for the Benchmarking/Pathfinder initiative. The call is unlike any e-learning call that has gone before. Why? Because it signals a new focus, i.e. the initial Benchmarking Exercise will be less about technical innovation per se and more about establishing indicators and descriptions of how we are currently using (or not using) learning technologies, systems and tools within Higher Education. The Pathfinder Programme which follows moves on to sharing designs, plans and implementation stories about achieving major educational change and benefit from the use of information and communication technologies.

Def – Benchmarking
A term originally borrowed from industry. A process through which practices are analysed to provide a standard measurement (‘benchmark’) of effective performance within an organisation (e.g. a university). (Source: Glossary of Terms in Learning and Teaching in Higher Education, Higher Education Academy).

Or

The concept of benchmarking originates from the surveyor’s practice of using a permanent reference point against which levels can be compared and measured. (David Buss’ 2001 review of Benchmarking for Higher Education by Norman Jackson and Helen Lund).

Or

Benchmarking is a method of identifying what must be improved in an organisation, finding ways of making these improvements, and then implementing the improvements (Owen J, 2002, Benchmarking for the Learning and Skills Sector (PDF), Learning and Skills Development Agency).

Or

“Google speed, power and search criteria.” (Stephen Downes, The Buntine Oration, the Australian College of Educators (ACE) and the Australian Council of Educational Leaders (ACEL) conference in Perth, Australia, October 9, 2004.)

Ok I cheated a bit with the last definition, which was actually Downes’ benchmark for open learning 🙂 But, for anyone familiar with his arguments, there’s a lot of relevance to e-learning benchmarking packaged up in that “Google speed, power and search criteria”.

So we are going to enable UK HE institutions to benchmark their e-learning? From one perspective this poses an interesting challenge, i.e. can ‘e-learning’ be isolated as discrete processes, activities, events, systems, artefacts, and tools from other ‘learning’ that’s taking place?

But, since at least the early 1980s, we have all been participating in this act of collective faith in the transformational potential of technology for teaching and learning purposes, and so it’s perhaps still reasonable to try and take stock of the contribution information and communication technologies are actually making to our learning and teaching environments. I say ‘still reasonable’ because, arguably, one indicator of ‘good practice’ could be that the technology has now become so embedded in everyday practice that it has been normalised, blended, integrated, or whatever, and it is no longer possible to isolate and, therefore, benchmark e-learning as a discrete entity. In some ways that’s similar to Stephen Ehrmann’s contention that it makes little sense to evaluate the educational efficacy of a discrete ICT resource/treatment instead of the overall strategy which underpins its use (Source: Asking the Right Questions: What Does Research Tell Us About Technology and Higher Learning?, originally published in Change: The Magazine of Higher Learning, 1995, XXVII:2 (March/April), pp. 20-27).

But, there are few among us who would now claim this transformation within UK Higher Education has yet occurred, so we can perhaps safely assume that the purpose of the benchmarking exercise is to help us begin to plan our journey to the state described above.

Before we can go on to consider how we should go about benchmarking, however, we must first of all be clear about what we are attempting to benchmark.

Why?

Well, technology and learning are a potent mix, and it would be pretty easy to end up benchmarking the technology rather than what’s being done with it or the outcomes from using it.

For example, let’s say institution X has 30,000 registered users of the institutional VLE, say 95% of all staff and students. What a fantastic benchmark for other institutions to aim for!

Or is it?

All this tells us is that 95% of staff and students are registered on the institutional VLE. We don’t know if VLE accounts are automatically granted to all staff and students (whether they intend to use them or not). We know nothing about how many ‘dead’ or empty courses there are. We don’t know what the VLE registrations are being used for. We don’t know who is using them, e.g. course administrators, students, academic staff, or research groups. Without collecting the underlying data we don’t get a chance to construct the really useful information.

A similar benchmark of limited use by itself would be for an institution to assert it had, say, 500 or even 5,000 online modules. Is the online module the primary means of delivery of content, interaction and support? If not, what proportion does the online module represent? Or is the online module simply a mechanism for delivering a small, medium or large fraction of the module’s content? Again, without the meaning behind the first metric, there is perhaps little to be gained from establishing it as a benchmark and having institutions strive to place x% of their modules online just so they can claim equivalency with other institutions.
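To make this concrete, here is a minimal, purely illustrative sketch in Python of the kind of breakdown that turns a raw registration count into something more informative. The account records, field names and 90-day threshold are all invented for the sake of the example; real figures would have to come from the VLE’s own logs or database.

```python
from collections import Counter
from datetime import date, timedelta

# Invented example records: (role, last_login, courses_with_content).
# A real exercise would pull these from the VLE's logs or database.
vle_accounts = [
    ("student", date(2005, 9, 1), 3),
    ("student", None, 0),               # auto-created account, never used
    ("academic", date(2005, 6, 15), 1),
    ("administrator", date(2005, 9, 2), 0),
    ("academic", None, 0),
]

# Treat anything not used in the last 90 days as dormant (an arbitrary cutoff).
cutoff = date(2005, 9, 8) - timedelta(days=90)

total = len(vle_accounts)
active = [a for a in vle_accounts if a[1] is not None and a[1] >= cutoff]
with_content = [a for a in vle_accounts if a[2] > 0]

print(f"Registered accounts: {total}")  # the headline figure on its own
print(f"Active in the last 90 days: {len(active)} ({len(active) / total:.0%})")
print(f"Attached to courses with content: {len(with_content)} ({len(with_content) / total:.0%})")
print("Active accounts by role:", Counter(role for role, _, _ in active))
```

The code itself is trivial; the point is the questions it forces us to ask of the raw number: which accounts are actually used, by whom, and for what.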

If only e-learning benchmarking could be like benchmarking Web sites, where we could turn our tools loose on the system and have some measures come out the other end. If our benchmarking shows something is broken we can quickly go in and fix it.

But learning isn’t a Web site where most (not all) variables can be fixed and where there is at least some semblance of agreement about what is good and less good.

What we are benchmarking isn’t a piece of technology or a physical artefact we are going to be able to compare to some agreed specification or standard. Instead, it’s about how technologies play their part in teaching and learning in one of the most diverse sectors in the world; a sector composed of a multitude of communities-of-practice whose interactions are not necessarily restricted to one organisation; a sector where institutions have vastly different characteristics, ranging from the uber collegiate to the uber corporate. And yet we are about to embark on benchmarking institutions and enabling them to draw comparisons with their institutional peers.

Or are we?

Getting to that state makes certain assumptions about the current state of our self-knowledge; or at least the knowledge our institutions have of themselves.

First up we need to decide what the purpose of the benchmarking activity is. Is it one or more of the following:

  1. internal (benchmark within the organisation, for example between academic departments)?
  2. competitive (benchmark performance or processes with competitor institutions or consortia thereof)?
  3. functional (benchmark similar processes within the sector or, say, across subjects)?
  4. generic (comparing e-learning operations between unrelated organisations, e.g. industry, or different parts of the educational sector)?

I suspect the intention is to enable institutional comparisons (quasi competitive) but, in order to do so, the institutions will have to look within themselves (internal and perhaps functional).

So what models can we draw upon to guide our deliberations?

First, are we looking at performance-oriented (comparison) benchmarking or process-oriented (improvement) benchmarking? Second, is our benchmarking to be focused at the strategic or the operational level?

Are we going to adopt a Kaizen or Business Process Re-engineering (BPR) oriented approach?

Kaizen is a Japanese term combining kai (change) and zen (to become good). Key ingredients are quality, effort, involvement of all personnel at all levels, a willingness to change, and above all good communication. Kaizen is people-oriented and relatively easy to implement, but requires long-term discipline. It may sound soft and woolly, but implicit in Kaizen are the concepts which underpin Total Quality Management, as promulgated so successfully by W. E. Deming as the driving ethos behind the post-war redevelopment of Japan.

At the other end of the spectrum from Kaizen, and for those impatient for results, is Business Process Re-engineering (BPR), which is harder to achieve. BPR is traditionally IT- and ‘customer’-oriented and focused on radical change, but requires considerable and constant input from personnel with major change management skills.

The fundamental reconsideration and radical redesign of organizational processes, in order to achieve drastic improvement of current performance in cost, service and speed. (Hammer, M and Champy, J, 2001, Reengineering the Corporation: A Manifesto for Business Revolution).

Radical redesign. Send in the technology. Drastic improvements. Costs down. Service up. Go faster. Managerially seductive stuff, and perhaps suitable in manufacturing or retail contexts, but is this how and why we want to benchmark education just because technology is involved? Perhaps not?

At this point, it’s perhaps worth reflecting on Mantz Yorke’s warning about the adverse effects of benchmarking when it is driven from a regulatory and conformance perspective rather than a developmental one, i.e. the provision of information to enable change, volunteerism, mutual trust, and a commitment to self-improvement (in Benchmarking for Higher Education, Norman Jackson and Helen Lund, Eds, 2000).

So it’s perhaps best we assume we are benchmarking as much for developmental purposes as for performance comparison, and this implies considerable ‘buy-in’ from staff at all levels of the institution. Sure, we need some ‘hard’ metrics to guide us and provide some comparisons between institutions, but it’s also important that we gather the ‘softer’ data, because it is that which will provide the context, the meanings, the perspectives, i.e. the institutional ‘stories’. But it is acquiring and sharing these stories which will undoubtedly present the greatest challenge.

Why?

A metric like ‘x% of staff use the institutional VLE’ may not be particularly informative and, by itself, it certainly should not be considered an indicator of good practice or extraordinary performance. It is, however, unlikely to compromise the image the institution has of itself and would like to project to the public, its competitors, its prospective students, etc. Basically, the inclination will always be to tell the best possible story, and yet we know the real lessons lie in the ‘bumps in the road’, the unexpected incidents, the things we would all do differently if we knew then what we know now. Furthermore, an institution willing to share ‘warts and all’ stories will need a high degree of confidence to project this as a ‘good thing’ and as indicative of a healthy organisation at work, one which values the health and development of the wider community as much as its own.

Why?

If viewed from a competitive perspective, such sharing of stories could be construed either as helping potential competitors avoid similar mistakes or as a possible route to weakening the institution’s own standing in the community. It’s for all the reasons above (and others) that, perhaps understandably, some institutions may decide not to gather such data at all.

Initiatives like the recent first UK National Student Satisfaction Survey (available from the Teaching Quality Information (TQI) website) illustrate the dilemma for institutions. Participation risked analysis and press/public commentary over which the institutions had no control but, at the same time, non-participation also risked attention and commentary, in the latter case in the absence of any data.

What the above illustrates is that the starting point for the institution needs to be some sense of ownership of the benchmarking exercise, i.e. it’s not something that’s done to them from the outside. At the same time, benchmark metrics and descriptions are only really beneficial when they’re shared. The question then becomes: shared with whom? The public? Only within the institution? A trusted consortium of other institutions (known in the benchmarking world as ‘clubs’)? Either way, institutions need to build their confidence and feel comfortable with the benchmarking process, and so self-organising groups of institutions prepared to share open and honest ‘stories’ with each other may well provide one way forward.

So we’re back to those ‘stories’ with all the terrors that can hold for institutional image management.

At this point I’m going to take a slight ‘techie’ divergence.

[Start of techie divergence - but do read this, it’s important to my argument]

For the moment let’s put aside institutional concerns about the dissemination of its stories beyond borders it can control. Let’s assume that I am seeking solutions which can support institutional story telling and sharing.

An interesting story has a chronology, a sense of progression, a past leading into the present and onwards into the future. A good story isn’t static, it’s dynamic so that what was perceived to be true yesterday isn’t true today either due to events or changes in thinking.

Hang on! A good story is dynamic and the content changes?

Sure. If we leave the paper-based book model behind and think digital then the story can change and the medium can both reflect and support this. But how will we know what’s the truth then? Simple; we don’t erase the past version of the truth, we just look at and compare different versions of the truth. The ability to compare itself becomes a valuable part of the story.

Now I know the uber geeks will already be ahead of me. What if our institutional story telling machine was based on a weblog or wiki? The former is strong on chronology and commentary and the latter on collaboration and comparison. From a basic functionality perspective I think we are already there but let’s flip back to the ownership, control and confidence issues I raised before our little technical divergence.

The purpose of the techie divergence was to highlight that although there are easily-available ICT solutions to support the creation of institutional stories, support for the sharing of such stories is perhaps another matter, i.e. sharing between different parts of the institution and sharing between institutions whether as a member of a ‘club’ or not. We need to think from an institutional perspective. Institution A may decide they are prepared to share part, or all, of their story with Institution B but not with Institution Z. But I could be wrong about the paucity of such tools and so I would be interested to hear from anyone with a working solution based on weblogs, wikis or whatever, that would support inter-institutional sharing of stories. The key needs to be simplicity of use.
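Before closing the divergence, here is a minimal sketch, in Python, of what such a versioned and selectively shared institutional story might look like as a data structure. Every class name, field and institution below is invented purely for illustration; a real implementation would more likely sit on top of an existing weblog or wiki engine rather than bespoke code.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StoryVersion:
    text: str
    recorded: datetime

@dataclass
class InstitutionalStory:
    title: str
    shared_with: set                          # institutions allowed to read this story
    versions: list = field(default_factory=list)

    def revise(self, text: str) -> None:
        """Record a new version rather than erasing the previous 'truth'."""
        self.versions.append(StoryVersion(text, datetime.now()))

    def visible_to(self, institution: str) -> bool:
        return institution in self.shared_with

story = InstitutionalStory("VLE roll-out, warts and all", shared_with={"Institution B"})
story.revise("2004: we assumed staff would self-train. They did not.")
story.revise("2005: a departmental mentoring scheme worked far better.")

# Institution A shares with B but not with Z, and readers can compare versions.
assert story.visible_to("Institution B") and not story.visible_to("Institution Z")
for version in story.versions:
    print(version.recorded.date(), "-", version.text)
```

A weblog gives the chronology almost for free and a wiki gives the comparison of versions; the part that still needs thought is the sharing rule, i.e. who outside the institution gets to read which story.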

[end of techie divergence]

Learning from Others?
So what’s out there that can help us plan and implement our e-learning benchmarking? It’s not that there’s a lack of general advice on benchmarking, but there’s a relative paucity of resources when we focus down on e-learning, particularly as it relates to Higher Education. Nevertheless, there has been some benchmarking activity with an e-learning oriented focus from which we can learn.

The Association of Commonwealth Universities (ACU) launched a University Management Benchmarking Club in 1996. One member of this club is the University of Otago in New Zealand. The benchmarking methodology adopted by the ACU (PDF) appears to focus as much on processes as on metrics. The Observatory on Borderless Higher Education also has a number of surveys and reports of relevance to benchmarking and online learning.

The Australian Flexible Learning Framework offers some useful reference points in their 2005 E-learning Benchmarking Project which has identified 12 e-learning indicators, survey results and a number of templates and tools. This is Vocational Education and Training (VET) oriented but is, nevertheless, certainly worth a look.

At ALT-C 2005 in Manchester, UK (6-8 September 2005) Paul Bacsich, now Visiting Professor at Middlesex University, led a workshop, Benchmarks for e-learning in UK HE – adaptation of existing good practice. Paul has also presented at the Towards a Learning Society conference in Brussels, May 2005 (PowerPoint), and has published the Theory of Benchmarking for E-Learning: a top-level literature review.

We should certainly revisit the deliverables from the ELTI project, i.e. Embedding Learning Technologies Institutionally. Although ELTI finished in 2003, variant parts of the toolkit have surfaced again in the JISC’s Effective Practice with E-Learning and Innovative Practice with e-Learning.

The JISC’s MLEs for Lifelong Learning strand of the E-Learning Programme offers us one example of a report on a benchmarking exercise called Implementing the e-College Wales MLE. The exercise was led by the University of Glamorgan and also involved three of its e-College Wales partners. The report provides an example of a research-question approach to benchmarking.

Of course not everything of potential relevance that may inform our thinking in this arena is conveniently labelled ‘benchmarking’. For example, there are the UCISA/JISC VLE Surveys, and the JISC Study of MLE Activity. But there’s also potentially relevant work that has been taking place in, for example, the University of Birmingham’s JISC funded Study on how innovative technologies are influencing the design of physical learning spaces in the post-16 sector. Would we not benefit from a benchmark which addressed the design of learning spaces?

On the European front, we have the European Institute for E-Learning (EIfEL), whose SEEL Regional Benchmarking System Starters Pack offers some useful processes and types of indicators. EIfEL’s SEEL (Supporting Excellence in E-Learning) project also has other deliverables of potential interest, e.g. quality guidelines for e-learning.

The European Commission is also interested in benchmarking e-learning. First there was Benchmarking eEurope and its indicators. However, as the 2005 e-learning indicators (PDF) perhaps illustrate, there is little of substance for UK HEIs: indicators such as “The number of pupils per computer per internet connection” or “Percentage of teachers using the Internet for non-computing teaching on a regular basis” lack the fidelity required for meaningful institution-level comparison of learning. But I could be wrong 🙂

On the Antipodean front we have Quality Improvement, Quality Assurance, and Benchmarking: Comparing Two Frameworks for Managing Quality Processes in Open and Distance Learning, Alistair Inglis, Centre for Staff Learning and Educational Development, Victoria University, Melbourne, Australia, International Review of Research in Open and Distance Learning (March 2005).

The VLE vendors are also present in benchmarking-related waters. For example, the Blackboard Educational Technology framework posits five phases to benchmark against: Phase 1 – Exploratory; Phase 2 – Supported; Phase 3 – Strategic; Phase 4 – Mission Critical; Phase 5 – Transformational. It’s perhaps no surprise, given the provenance of the model, that the transformational phase is linked to the VLE becoming “a key component for educational delivery” and “actual curriculum changes being dependent on the academic technologies”. The author appears to view this as a good thing and not a problem. No concerns about technological determinism here then? Mmmm …

Our colleagues in other parts of the education system have also been doing some work on e-learning benchmarking. For instance, over at FERL we find the rather ominous-sounding The Matrix.

… an online tool, developed by the National Centre for School Leadership (NCSL) and Becta for schools, which has been further developed for use by organisations in the learning and skills sector … individuals or groups within a college can use the Matrix to review their current position against a set of statements typical of different stages of development. There are practical examples available that place the statements into context, and links to online resources … As you complete the Matrix, an action plan, based on your results, is produced. This action plan contains helpful support and guidance, which you may wish to consider along with links to resources.

And in the primary and secondary schools sector we also have NAACEmark.

The Naacemark has been developed by NAACE, in association with Becta. It is an award which recognises a school’s success in developing and implementing a strategic approach to ICT. It provides a framework for using ICT to enhance teaching and learning and provides opportunities for the school community to develop ICT capability. Working towards and gaining the Naacemark enables schools to move forward with the knowledge that they are implementing recognised good practice.

On the general benchmarking resource front there’s quite a bit of material, some of it Higher Education oriented.

The Higher Education Academy site offers the following generic benchmarking resources: Benchmarking for Self Improvement and Quality Assurance and Benchmarking.

The Academy can also offer Professor Norman Jackson’s Powerpoint Overview of Benchmarking and its Implications. And of course there is always Benchmarking for Higher Education (Norman Jackson and Helen Lund – Eds, 2000).

The European Centre for Strategic Management of Universities (ESMU) also offers a perspective on benchmarking, but it is not e-learning focused. And therein lies the difficulty: we are not just benchmarking the effectiveness of quality assurance policies and systems or the quality of the management … we are in effect attempting to benchmark technology-supported learning processes … and that’s a lot harder.

The Public Sector Benchmarking Service (PSBS) highlights the following models:

  • W. E. Deming’s approach to quality control, which consists of four stages – ‘plan, measure, analyse, and implement’; and
  • Robert Camp’s (Manager of Benchmarking Competency Quality and Customer Satisfaction at Xerox) model, which has five phases divided into 12 steps.

Has the UK Quality Assurance Agency got anything to offer? A search for ‘benchmarking’ on the QAA site uncovers lots about subject benchmarking, but I don’t know whether this is of relevance to what we want. Take, for example, the QAA subject benchmark statements: at the subject-specific level there are benchmark statements which translate the qualifications frameworks into the knowledge, skills, and attributes (‘learning outcomes’) expected of students. This raises a question in my mind: should the e-learning benchmarks attempt to map to QAA subject benchmarks? … Or will this merely result in a high bureaucratic loading on HEIs?

Let’s pop over to QAA (Scotland), where we find ELIR, the Enhancement Led Institutional Review process. OK, it’s intended for a different context, but there could be something of interest in there, particularly with statements like:

We believe that this collaborative approach to quality is unique in many respects – in its balance between quality assurance and enhancement; in the emphasis which it places on the student experience; in its focus on learning and not solely on teaching; and (perhaps most importantly) in the spirit of cooperation and partnership which has underpinned all these developments.

Has the UK Office of Government Commerce got anything to offer? Perhaps. The OGC Successful Delivery Toolkit could at the very least be a source of ideas.

And what of the Public Sector Benchmarking Service (PSBS)? The PSBS appears to give some very pragmatic advice about the benchmarking process and the theoretical foundations of benchmarking.

Benchmarking Steps?
According to the Office of Government Commerce they are:

  1. Planning – identifying the subject area to be reviewed, defining the objectives for the study and the criteria that will be used to assess success, selecting the approach and type of benchmarking, identifying potential partners etc.
  2. Collecting data and information – developing with partners a mutual understanding and benchmarking protocol, agreeing terminology and performance measures to be used, undertaking information and data collection, collation of findings.
  3. Analysing the findings – review of findings, gap analysis, seeking explanation for the gaps in performance, ensuring comparisons are meaningful and credible, communicating the findings, identifying realistic opportunities for improvement.
  4. Implementing recommendations – examining the feasibility of making improvements with respect to organisational constraints and preconditions, obtaining the support of key stakeholders for making the changes needed, implementing action plans, monitoring performance, keeping stakeholders informed of progress.
  5. Monitoring and reviewing – evaluating the benchmarking process and the results of improvement initiatives against business objectives, documenting the lessons learnt, periodically reconsidering the benchmarks in the light of changes.

Potential Benchmarking Indicators
In its purest form, benchmarking is meant to be about taking a reference point (an indicator) and comparing where we are against that reference. The following are not meant to be a reference model for good practice but represent the type of indicators that are ‘out there’. I leave the reader to decide which are most meaningful to their Higher Education context.

  • the existence of an ‘online learning strategy’ or equivalent
  • implementation of a learning platform
  • development of a formal policy on intellectual property associated with online learning materials
  • proportion of programmes offered online at a distance
  • students studying online at a distance as a proportion of all FTE (full-time equivalent) students.
  • the level of online learning/ ICT use in teaching and learning at the university; including identifying areas of innovation;
  • the nature of use of online learning/ ICT in teaching and learning at the university, including information on particular packages/ platforms in use;
  • how, if at all, the use of online learning/ ICT is evaluated; and how evaluation is used to enhance teaching and learning;
  • evidence/ perceptions of student demand, value-added to the teaching and learning environment and impact on student progress;
  • staff development arrangements, needs and barriers;
  • the emphasis on online learning/ ICT in teaching and learning in key corporate strategies and policies;
  • institutional barriers to innovation;
  • relationships between use of online learning/ ICT in teaching and learning and related central services and structures (eg central computing, human resources, library).
(Source: OBHE)

To this we could add other indicators, e.g.:

  • alignment of e-learning activities with organisational goals and business processes.

But the above indicators are just one approach. As we have seen with the University of Glamorgan example, it’s also possible to approach benchmarking from a research question perspective. What’s perhaps more important is the provision of an acceptable framework offering a consistent and coherent structure, and tools to facilitate the gathering of data and the generation of information. That’s not to say such a framework shouldn’t be able to accommodate diversity and creativity in the benchmarking process, allowing for, say, the ‘story telling’ tools I proposed earlier in my ‘techie’ divergence.
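As a sketch of what such a framework might ask institutions to record, here is one possible shape for a single benchmark entry, again in Python. The record structure, indicator name and figures are all invented for illustration; the only point being made is that the hard metric and its softer context travel together.

```python
# An invented example of one benchmark entry combining a 'hard' metric
# with the 'softer' context and evidence that give it meaning.
benchmark_record = {
    "institution": "Institution A",
    "indicator": "Proportion of modules with an online presence",
    "metric": {"value": 0.62, "as_of": "2005-09"},
    "context": (
        "Most of these modules use the VLE only to distribute lecture notes; "
        "around a tenth use it for assessed online discussion."
    ),
    "evidence": ["VLE course audit, July 2005", "Staff survey, n=214"],
}

def summarise(record: dict) -> str:
    """Report the headline figure together with its context, not instead of it."""
    metric = record["metric"]
    return (
        f"{record['indicator']}: {metric['value']:.0%} "
        f"({metric['as_of']}) - {record['context']}"
    )

print(summarise(benchmark_record))
```

Whether the container is a spreadsheet, a wiki page or a database is a secondary matter; what matters is that the structure is shared across institutions so that the comparisons remain meaningful.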

But … Here comes the inevitable Auricle but! 🙂

Because benchmarking can mean different things to different people, there will be different expectations of it. What’s important is that benchmarking doesn’t become ideologically driven, with advocates and acolytes of one model or another becoming entrenched in the belief that there is only one ‘true path’ to enlightenment, and those who stray from it being considered near heretics or, to put it with rather less hyperbole, as not undertaking ‘real’ benchmarking.

On the one hand, we could have those who take a fundamentalist operational research, management or accountancy stance, who revel in handling only solid metrics, Data Envelopment Analysis models, or whatever, and ignore everything else. On the other hand, we have those who believe that context is everything and that it is impossible to define good practice from metrics.

As with most human constructs, the middle road is probably the most profitable in this particular arena, i.e. there is scope for metrics and there is scope for stories. As indicated earlier, the challenge will be in ensuring we get both. We’ve got a diverse Higher Education sector and so we need to explore and find benchmarking approaches which provide something meaningful to this diversity.

I’ll sign off for now but, as I said in opening this post, this is a work-in-progress which may eventually end up as a more refined resource, but, even in this rather raw state, I hope others find it of some benefit. I would also welcome ideas, links and comments from others.
