Auricle Podcasts

So having nuzzled around the podcasting theme in the last few articles we've finally 'bitten the bullet', 'got our hands dirty', 'put our money where our mouth is' and other multi-mixed metaphors. So here we offer you our first podcast. It's an 18-minute interview with Brian Kelly who works with UKOLN and is the holder of the WebFocus post for UK Further and Higher Education. Not content with this as a sufficient challenge, we also recorded the interview via Skype, the VoIP solution so loved by some, and so hated by others. Let's start with a reminder of the audit trail of recent Auricle articles on this theme. First up was Probing podcasting from the professionals, which now includes some additional links to MP3 resources. Next was a series of 3 articles called Recording online audio interactions - the easy way?. The latter had a high 'geek' index, but provided a useful record of our trials and tribulations. In the end it all proved to be pretty easy so I'll keep the techy bits of what we did until the end of this posting.

For those of you who haven't yet absorbed the term 'podcast', here's an ultra-sparse description. Technically, a podcast is just some RSS metadata containing some extra markup with one or more links to a media file (usually, but not always, an MP3). Not really earth shattering when put this way, is it? There are some, however, who contend that this small addition to the Really Simple Syndication (aka Rich Site Summary, or RDF Site Summary) format is now a major catalyst for change in the way we expect to receive our media and information.
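
For the structurally curious, here's a minimal sketch of what that 'extra markup' looks like and how little is involved in reading it. The feed content, URL and file details below are invented purely for illustration, and the snippet uses nothing beyond Python's standard library.

```python
# A minimal sketch of the 'extra markup' that turns an RSS item into a podcast
# entry: the <enclosure> element. The channel, URL, length and title below are
# invented purely for illustration.
import xml.etree.ElementTree as ET

rss_item = """
<rss version="2.0">
  <channel>
    <title>Auricle Podcasts</title>
    <item>
      <title>Interview with Brian Kelly</title>
      <enclosure url="http://example.org/podcasts/interview.mp3"
                 length="8000000"
                 type="audio/mpeg"/>
    </item>
  </channel>
</rss>
"""

root = ET.fromstring(rss_item)
for enclosure in root.iter("enclosure"):
    # A podcast client ('podcatcher') simply reads these attributes and
    # downloads the referenced media file.
    print(enclosure.get("url"), enclosure.get("type"), enclosure.get("length"))
```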

Why?

Well, lots of people have iPods and iPod-like devices which can play back and store audio data and some supporting textual information on a miniature hard disk or, alternatively, in a solid-state memory device which has no moving parts (making it 'jog' proof). The latter, in particular, are very small, light devices holding up to 2GB of data. The former are a little bigger, but can store even more data, e.g. up to 6GB (and rising). Such devices can be used to store and listen to music, or speech, or be used as a computer's removable data store.

But what makes podcasting work is the growing number of applications that now exist which, when provided with an appropriate URI, will connect to a website and download any new media files it finds there, automatically. In some cases, the podcast application will also automatically update the person's iPod-like device with this latest data.
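
As a rough illustration of what these applications do under the hood, here's a minimal 'podcatcher' sketch. It assumes the third-party Python feedparser package is installed; the feed address is the Auricle one given later in this post, and the download folder name is arbitrary. Run something like this periodically and any newly published enclosures simply appear in the folder.

```python
# A minimal 'podcatcher' sketch: poll an RSS feed and download any enclosures
# not already fetched. Assumes the third-party 'feedparser' package is
# installed (pip install feedparser); everything else is standard library.
import os
import urllib.request

import feedparser

FEED_URL = "http://www.bath.ac.uk/e-learning/Download/podcasts/auriclepodcasts.xml"
DOWNLOAD_DIR = "podcasts"

os.makedirs(DOWNLOAD_DIR, exist_ok=True)

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    for enclosure in entry.get("enclosures", []):
        filename = os.path.join(DOWNLOAD_DIR, os.path.basename(enclosure.href))
        if os.path.exists(filename):
            continue  # already fetched on a previous poll
        print("Downloading", enclosure.href)
        urllib.request.urlretrieve(enclosure.href, filename)
```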

So what! you say?

This is a bit of a problem for organisations that still think they are leading edge because they have built a business or distribution model based on 'streaming' media, where the consumer is expected to 'consume' the media during the period of 'transmission' of the stream over the wires or ether. Even worse for those organisations whose business or distribution model is based on broadcasting programmes over the ether at scheduled times, forcing users to use recording devices so they can time-shift their consumption to when they, not the broadcaster, are ready. The advertisers certainly won't be happy with this audience drain.

No! What the podcast can do is change behaviour. The consumer decides what they are interested in and their device 'goes and gets' and keeps everything up-to-date based on the metadata contained in that RSS wrapper (as well as metadata in the media file itself). So we now have consumers who are listening to (and soon watching) what is in effect their own private playlists, which must be making those who are used to controlling what, when, how and why very nervous. The destabilizing of this status quo is, however, opening up opportunities elsewhere. There's a new type of media entity already 'out there' which is going to become part of the mainstream fairly soon. For example, it's no longer necessary to own a radio station to broadcast. There are already individuals outside the broadcast mainstream who have many thousands of listeners who download their 'programmes' via podcasting applications, e.g. iPodder. The best known of these podcasters is, arguably, Adam Curry, whose programme The Daily Source Code contains the unforgettable line in its signature “we don't need no stinking transmitters” (warning: Curry is very politically incorrect so don't go there if you are of a sensitive disposition).

What Curry demonstrates, however, is an incredible mastery of a medium and distribution method which requires relatively modest resources to produce, is not tied to a studio, and, furthermore, benefits from contributions from his listeners, some of which have very high production values. The result is that Curry can produce his Daily Source Code from hotel rooms anywhere in the world by dynamically linking content and commentary via his Google GMail account or Skype interviews and then uploading it as an MP3 file to his server for his listeners (or their devices) to download.

Anyway that's enough of 'the world as we know it is coming to an end' rant. If you're interested in the results of our efforts then you can listen to the Brian Kelly interview either by downloading the MP3 directly or if you want to try this podcasting thingy for yourself why not download iPodder or any of the other similar applications and add the following URI to its 'add a podcast' field or equivalent:

http://www.bath.ac.uk/e-learning/Download/podcasts/auriclepodcasts.xml

We will do other occasional podcasts of this nature. So if you get a call from us in the near future we will offer no road to wealth, only even more recognition of your importance!

And finally, a little bit for the technically or production minded. We used Audacity for recording and editing at 44,100 Hz, 16-bit PCM WAV (~166 MB) and exported as a 64 kbps MP3 (~8 MB). Skype was used to talk to each other in different parts of the University. We used two cheap Plantronics headsets, put up 'do not disturb' notices on the office doors, turned off email and instant messaging sounds and redirected the phones. My Plantronics headset was the cheaper of the two, but IMHO my microphone was less noisy than Brian's … or perhaps it was just his penchant for heavy breathing :)
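
For those who like to check the numbers, the quoted file sizes follow more or less directly from the sample rate and bit rate. Here's a quick back-of-the-envelope sketch, assuming a stereo recording and using the 18-minute published length; the raw, unedited recording will have had a slightly different duration, hence the ~166 MB figure above.

```python
# Back-of-the-envelope check on the file sizes quoted above. The 18-minute
# duration is the published interview; the raw Audacity recording was
# presumably a little longer or shorter, so the exact WAV figure will differ.
duration_s = 18 * 60          # 18 minutes
sample_rate = 44_100          # Hz
bytes_per_sample = 2          # 16-bit PCM
channels = 2                  # assume a stereo recording

wav_bytes = duration_s * sample_rate * bytes_per_sample * channels
mp3_bytes = duration_s * 64_000 / 8   # 64 kbps MP3

print(f"PCM WAV : ~{wav_bytes / 1_000_000:.0f} MB")   # ~190 MB for 18 min stereo
print(f"64k MP3 : ~{mp3_bytes / 1_000_000:.1f} MB")   # ~8.6 MB
```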

Blogs and podcasts: Educause shows the way?

There's a new comment in Recording online audio interactions - the easy way? (Part 1) from Matt Pasiewicz at Educause inviting me to visit their new trial podcast service. Matt's comment also drew my attention to just how developed the Educause blog is becoming. The Educause blog is well populated and Matt indicates that although the new podcast feed (podcast URI) hasn't quite developed into a series yet, Educause is beginning to develop their ideas from initial experiments and they now want to explore ways to take it to the next level. If Auricle readers have any ideas let Matt know at matt@educause.edu.

What's really impressive is how Educause is acting as an example, or perhaps even as an exemplar, of how technologies can be used to support learning. I think the UK's ALT site could certainly benefit from adopting (or adapting) the Educause example instead of relying on the limited JISCMail service to facilitate, and sometimes hide, member interaction.

“Email is where knowledge goes to die” (Bill French, Myst-Technology, April 2003), so it's really good to see Educause providing the tools for their members to share, not hide, knowledge and points of view.

P.S. I've also made several additions to the list of podcasts at the end of my recent Probing podcasting from the professionals article.

ePortfolios, but not as we know them …

A key purpose of ePortfolios is surely to enable students to take a reflective approach to learning? Yet many of the existing systems appear to facilitate little more than form filling and CV creation. Still, at least one recent development provides some grounds for optimism. Some months ago, Derek Morrison highlighted a number of ePortfolio resources and initiatives that, in his opinion, were heading up this transformation. Derek's article suggested a number of synergies between ePortfolios and Weblogs, and referenced in particular the ERADC site which had:

“…been set up to provide a reference point for interested parties to contribute and learn more about e-portfolios and developments that may impact on the e-portfolio”.


Since Derek's initial article, the ERADC site has evolved to include references to the ELGG Personal Learning Landscape. Created by David Tosh and Ben Werdmuller, ELGG purports to be a fully customisable mix of Weblogs, ePortfolios and social networking.

ELGG is going through some testing right now, so I've not been able to get my hands on it; however, it promises to:

“…provide learners with the ability to control their own learning through the creation of their own online learning communities”.


According to the website, a basic version of ELGG will be available for free, whilst the code will be available as open source for those who wish to host or customise it further.

A concern, however, is that, in common with enterprise-class VLEs, in the rush to implement something which enables them to declare they are 'ePortfolio oriented', institutions may already have locked themselves into agreements and less satisfactory solutions which prevent them migrating to something which, on the surface at least, looks considerably richer and more powerful than most putative ePortfolio solutions.

Nevertheless, Tosh and Werdmuller's work looks very interesting and deserves to be more widely recognized and utilized.

Recording online audio interactions - the easy way? (Part 3)

Some people with early laptops may have found the Replay Telecorder option I proposed in Part 1 an unsuitable choice. So I've been doing a bit more experimenting. I've found it possible to record Skype conversations using my old Dell Inspiron 8200 which offers fairly unsophisticated audio controls. Some of what follows may be useful for those of you in a similar situation. The key concept is to set the Stereo Mixer as the recording device and not the microphone. You then select playback devices to feed the Stereo Mixer, one of which needs to be a microphone. And here's the gotcha! … early laptops may not provide an option to select a microphone as a playback device. Here's the workaround which worked for me.

  1. Double click the volume control (the speaker icon) in the Task Bar to open the Windows sound mixer.
  2. In Options>Properties select the Recording option.
  3. Tick both Stereo Mixer and Microphone in 'Show the following volume controls'.
  4. Select Stereo Mix and set the volume slider halfway up.
  5. In Options>Properties select the Playback option.
  6. Select Master Volume, Wave, and Line In.

Some early audio drivers didn't provide a Rec control in the playback mixer, which is necessary when Stereo Mix is selected as the recording source, i.e. playback sources are fed to the mixer. But … we can use Line In instead of Rec in this configuration.

I found I got good results recording Skype dialog using this technique.

Caveats

  • A powered microphone input is necessary when using Line In as a recording source; a standard headset microphone won't do. So if you can find a powered microphone or headset then that should do the business. I would borrow one first just to make sure it works for you.
  • Don't use the standard Windows Recorder. I found an early version of CoolEdit worked fine. WinAmp, or similar, may also do the business, but I haven't tried that.
  • Adjust the Wave setting on the playback side of your audio mixer if you find Skype sound levels are too high and leading to distortion.

I hope this works for those of you with limited sound controls on your Windows systems.

But there are apparently even more alternatives. For example, users of the open source Audacity, or those who have access to two computers, may find this article in the Things that make you go hmm blog interesting.

Recording online audio interactions - the easy way? (Part 2)

Here's an update to last Thursday's Auricle post Recording online audio interactions - the easy way?. Some other interesting candidate tools are appearing, but so have a few problems. Over at Webfeed Central I came across MixCast Live, a work-in-progress by James Prudente of TinyScience which aims to make creating a podcast as easy as possible. My interest is in its promised Skype recording potential. There's a 'pre-release' version available (which isn't free). MixCast Live requires the .NET 1.1 framework, so it's for Windows users only.

I couldn't get MixCast Live to work at first, but with the help of several emails and a Skype conversation with James Prudente we identified the problem as a conflict with my Replay Radio installation, which installs supplementary sound drivers. I temporarily removed Replay Radio, which enabled me to put MixCast Live through its paces. Basically MixCast Live provides an easy interface for the simple recording and mixing of different audio sources and the production of either an MP3 or .wav file. I found some bugs in MixCast, e.g. it appeared to create two MP3 files for a single recording session (only one of which worked), and when a .wav file was created and played back in Windows Media Player it would get two thirds of the way through and then declare it was an 'unrecognized format' (playing back the same .wav file in RealPlayer presented no problem).

The pre-release version I evaluated doesn't yet include the facility to record from Skype but the developer tells me that's on its way in the very near future so I'll report on that when I can get my hands on it.

James Prudente, the developer of MixCast Live, proved to be extremely responsive and has taken the clash with Replay Radio very seriously because there are an awful lot of users of the latter out there. So MixCast Live is certainly on my watch list, but most users would probably be wise to hold back until some of the Skype functionality has been added and the clash issues sorted.

In my previous article I also introduced Replay Telecorder as a candidate Skype recording solution. Telecorder certainly seems to do the job, but I've found it difficult to maintain microphone levels during Skype recordings. Some test interviewees have complained about an annoying echo at their end although everything sounds fine at mine. I suspect this may be more of a Skype problem than a Telecorder one. Despite being configured not to, this VoIP application seems able to adjust the microphone levels on my system's audio mixer, so I can start off with good sound levels but then have Skype force them down, leaving my voice as the junior partner (some may see this as an excellent feature :)

I have also found Skype to be extremely fussy about working with other applications. A fresh install of Skype may work perfectly, but start Replay Radio, or Replay Telecorder, and Skype may or may not decide to crash; it will only work again after a reinstall. Yet, on another system the same combination can work perfectly. When Skype works well it's very good, but it needs to be able to do this consistently and I just don't like it taking control of my recording settings. If I can't get the robustness we require for online interviews then Robin Good's recommendations regarding iVocalize (see previous Auricle article) could look a better bet, but I'll keep Auricle readers posted.

Recording online audio interactions - the easy way? (Part 1)

Topics: conferencing recorder, Skype, iVocalize, online interviews, skypecasting

Earlier this week I focused on podcasts and raw MP3s with high production values, some of which originated from modest production facilities. Some of the more interesting memes in these podcasts arose from recordings of online or telephone audio interviews and commentary. But recording online audio interactions can be a bit of a complex black art and so, in this article, I introduce a promising new easy-to-use solution. In my Auricle article Probing podcasting from the professionals I highlighted how fairly high production values were to be found in podcasts and raw MP3 downloads originating from individuals and groups with relatively modest technical facilities.

Even my tentative investigations into this area suggest that the quality of the recording can be better than that achievable from recordings made from plain old telephones. I can now see why the current telecoms incumbents are really worried about the transmission of voice over the internet.

Being able to interview a subject, or subjects, online has incredible potential for the gathering of information to be used for evaluation, research, and journalistic purposes. And just think of those case studies!

However, let's get the ethical, legal, and human-relations bit out of the way first. Get the approval of whoever you're recording before you do it. If you don't do that then your future is probably more with covert operations than with what you are currently pursuing … investigative journalists excluded.

So how do you go about creating your own quality audio resources for dissemination online?

After reading Robin Good's article from last December, The Online Audio Interview Recorder: Skype Recorder vs. iVocalize, I tried out the iVocalize service using that company's free 2-hour trial account. Impressive results were easily attainable from iVocalize. Users unhappy with the iVocalize recording's proprietary Microsoft .wma format will need to invest in a format converter, but this is an easy and relatively cheap process. iVocalize is an online service which attracts a cancellable monthly fee; the cheapest option is $10 per month for a 3 person account, which will suffice for many doing one-to-one interviews.
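
By way of illustration, the conversion step can be a one-liner if you have a general-purpose transcoder to hand. The sketch below assumes the free ffmpeg tool is installed and on the PATH (my assumption; it's not something iVocalize provides), and the filenames are invented. Any converter that reads .wma and writes MP3 would do equally well.

```python
# A minimal sketch of the format-conversion step, assuming the free ffmpeg
# transcoder is installed and on the PATH. The filenames are purely
# illustrative.
import subprocess

source = "interview.wma"   # the iVocalize recording
target = "interview.mp3"

# -b:a 64k matches the 64 kbps bit rate used for the Auricle podcast above.
subprocess.run(["ffmpeg", "-i", source, "-b:a", "64k", target], check=True)
```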

I then downloaded Skype, which is a desktop client, and had the application up and running within 5 minutes. When this application works, its fidelity is pretty impressive. I say 'when it works' because Skype can be fussy about what it works with. If it takes exception to other audio applications on your system it will throw errors. Unfortunately, when you are trying to record a Skype session there will invariably be another audio application active.

Recording a Skype session was apparently not for the faint-hearted. However, a couple of gurus, Stuart Henshall and Bill Campbell, have worked out a solution they called Skypecasting, i.e. Skype plus Podcast Recorder. But Skypecasting required two instances of Skype and the purchase of some software called Virtual Cables. Now, whilst I'm prepared to get stuck in, I couldn't help thinking that there had to be an easier way, preferably a single recording utility that was Skype friendly.

I located a potential candidate solution in the form of Applian Technologies' Replay Telecorder, which can also act as a Skype answering machine. Replay Telecorder is free, currently, but it's still in the early beta stage of development and so there may be a few gotchas hidden in there, although I couldn't find any in the time I had available to test it. Once I had made a few adjustments to my laptop's audio settings it made a reasonable to good job of recording two-way conversations on Skype. The key Skype settings for me were Options>Hand/Headsets>untick 'Enable automating sound-device settings' and, in Replay Telecorder, Settings>Record from Stereo Mixer and tick 'Also record from microphone'. I also needed to tick the boost setting on my microphone in my laptop's audio control, but I suspect this will vary on different systems. Here is a sample MP3 recording I made using Replay Telecorder and Skype. The sample was recorded using the Skype testing service which has the Skype user name 'echo123'. It's kind of hard to control the levels of the different sources, but I think it's about usable.

It's also important that all participants use headsets, otherwise ambient and computer noise at both ends may detract from the message. A bit of 'atmos' is fine, but Skype can find sounds you never knew existed in a room.

I view Skype, and its ilk, as potentially powerful research and information gathering and resource production tools. There are undoubtedly others, however, who will see such technologies as the spawn of the devil. What do you mean you can bypass the switchboard? Why are you taking up such bandwidth? What are the security risks? Where's the revenue coming from? But one thing is certain, Skype et al are the thin edge of a fast approaching wedge. These are, in another form, some of the 'etools' which could make effective distributed learning and production a reality much quicker than we ever envisaged. But heh! … let's see what emerges to spoil that vision.

Probing podcasting from the professionals

In previous Auricle articles I've alluded to the Internet and intranets as e-learning filling stations and, so, I thought it was time to engage with podcasting as one way of 'filling up'. For this article I was particularly interested in tracking down podcasts and raw MP3 files with high production values. I know there are some real gems on amateur sites with access to limited production facilities or expertise, or in conference/presentation recordings, but some podcasts engage and involve because they keep their listeners in mind and so 'speak' to their particular audience. Let's start with an operational definition:

“A podcast is a talk or music radio show that's sent directly to an iPod or other digital music player through your computer. It's a new take on the growing technology called RSS that pushes text-based Web content to computers. But with podcasting, a listener subscribes to audio feeds.” Source: Jon Gordon, Future Tense.

When I was a long distance commuter by train, small light media player devices were certainly objects of desire, but unobtainable. Now I usually walk to work (~10 miles a day) and the small light media player device is ubiquitous; I use one daily to feed my brain with items of interest whilst my legs do the walking, e.g. science, technology, food policy, environment, health, social policy. Ever hungry for new sources of quality information I've now included podcasts in my diet. Below I summarize my tentative findings and reflections on finding material with the potential to permanently change how, and when, people listen and … sometimes … learn.

I found a good starting point for my explorations of podcasting was Wikipedia closely followed by the Digital Podcast Directory and the categories section of iPodder.org.

Then over to JD Lasica's Darknet site which I found a good way to get steeped in some of the 'darker' issues related to podcasting and the struggle to control what we see and hear and what we see/hear it on.

The focus of this article is tracking down sources of podcasts and raw MP3 files with high production values, so with this in mind I found the Radio category of iPodder.org particularly helpful.

What was also surprising from researching this article was that, far from leading the curve, I now think that the BBC really needs to get their skates on and provide rather richer fare than just In Our Time (podcast URI).

Interestingly, within the BBC local network there seems to be some activity that perhaps merits more attention from BBC HQ.

For example, let's consider The Naked Scientist archive. The Naked Scientist site describes itself as:

“… a media-savvy group of physicians and researchers from Cambridge University who use radio, live lectures, and the Internet to strip science down to its bare essentials, and promote it to the general public. Their award winning BBC weekly radio program, The Naked Scientists, reaches a potential audience of 6 million listeners across the east of England, and also has an international following on the web.”

Now what's intriguing is that The Naked Scientists BBC radio show is being syndicated across local radio stations and so covers only some of England (not the UK); as a result it's been below the radar for most of us. But their archive is a potential goldmine of MP3 resources for those interested in science. But what have we here? No RSS … no podcasting. Come on folks, you should be shouting this one from the rooftops!

The Naked Scientist example is perhaps the tip of an iceberg. How much other MP3 stuff with educational potential is hidden just because the developers don't use RSS or any other syndication solution, and don't seem to have grasped the potential of wrapping up their MP3s in the podcasting format?

Digital rights anxieties on the part of the broadcasters may be part of the problem, and so they either do little to promote MP3 downloads or opt solely for a streaming solution. But here's the news, folks … anything that you stream or broadcast on the airwaves can already be captured, converted and downloaded, so all you're therefore doing is inconveniencing your audience, who have already bought into 'listen again' but don't want to be told by you when they can 'listen again'. Nevertheless, archiving MP3s costs, and so some will undoubtedly argue that it's reasonable to place a charge on back copies.

So what's happening on the North American broadcast front? I was pleased to find that CBC Radio One offers their science and technology broadcast Quirks and Quarks in both MP3 and ogg vorbis downloads (are you listening BBC?).

Next up was TWIS (This Week in Science), which is a 1-hour weekly science/technology radio news show broadcast on KDVS 90.3 FM on Tuesdays 8:30-9:30AM PST. TWIS appears to stream their MP3 via m3u playlist files, but downloads are also possible (just look at the URI in the m3u file).
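
If you're happy with a little scripting, pulling the direct MP3 URIs out of such a playlist is trivial. Here's a minimal sketch; the playlist address is invented for illustration and only the Python standard library is used.

```python
# A sketch of pulling the direct MP3 URIs out of an .m3u playlist so the files
# can be downloaded rather than streamed. The playlist URL is invented purely
# for illustration.
import urllib.request

PLAYLIST_URL = "http://example.org/twis/latest.m3u"

with urllib.request.urlopen(PLAYLIST_URL) as response:
    playlist = response.read().decode("utf-8", errors="replace")

for line in playlist.splitlines():
    line = line.strip()
    # .m3u playlists are plain text: comment/metadata lines start with '#',
    # everything else is a media URI you can download directly.
    if line and not line.startswith("#"):
        print(line)
```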

Then over to Air America's Ecotalk (podcast URI) where one item put a different spin on why the Indian Ocean tsunami was so devastating in some parts of the region. It seems that removing mangrove swamps and replacing them with shrimp farms was perhaps a bad idea.

Science Friday's site is interesting, but here we have the classic example of the free stream … but pay for the MP3 download.

NASA has also entered the podcasting arena with their Science@NASA feature story podcasts (podcast URI), with the goal of helping “… the public understand how exciting NASA research is and to help NASA scientists fulfill their outreach responsibilities”. The audio files can also be streamed or downloaded manually via the home page of the Science@NASA web site.

So is any of this podcasting stuff being taken seriously enough in education?

Way back in 2004 Duke University certainly thought so. If you missed this or want a reminder then last July the NPR site offered us the audio article New Freshman Requirement: The iPod.

MP3 searches also reveal that the UC Davis School of Medicine is offering pathophysiology lectures on MP3, but since their resources are password protected you'll have to take my word for it.

The business opportunity of distributing via MP3 has not been missed by some. For example, the American Academy of Family Physicians would like interested parties to part with ~ USD 800-950 for the MP3 version of the Family Medicine Board Review … but there again it is worth 62.5 prescribed credits. It's pretty cool that you actually gain merit from just listening … with only a post-test to make sure you were really listening … or at least that short-term memory is functioning … but that's OK, they're professionals :)

Continuing the commercial theme, North American medicine appears to be a really rich source of MP3s. Here's CMEonly.com who offer “Continuing Education solutions for the time starved professional.”

Over at the Audio Digest Foundation Mammon is less obvious; they offer a free MP3 of their lecture of the month.

Perhaps the most intriguing, but at the same time almost anarchistic, podcast is Adam Curry's Daily Source Code (podcast URI). The Daily Source Code website is here. Adam Curry was an MTV video jock some 10 years ago but has now become one of the key catalysts and exponents of the new podcast art. Although American, he produces his daily podcast from his home in Guildford, Surrey, UK and claims some 50,000 listeners. Daily Source Code is unpredictable and sometimes almost content free (and not for the sensitive), but then again some gem of a listener contribution sometimes pops up. For example, Daily Source Code introduced me to Soundseeing Tours via podcast. Soundseeing Tours surely have some broader potential beyond being a form of travel guide, particularly if they could be synchronized with, say, an image blog in an effective way. What Adam Curry demonstrates, however, is that production values matter. Daily Source Code may be low on content at times but Adam is always having a conversation with, and always engages with, his audience.

Another area of interest for me is audiobooks. For example Apple's iTunes download service claims to have ~9000 audiobook titles and rising. Alternatively, Audible.com or Audible.co.uk specialize in this area.

To me, the trend seems pretty clear. Personal media recorders and players are succeeding because end-users like downloads: they can time-shift and organize their listening or viewing to suit their leisure or work schedule. They purchase devices like personal media recorders and players because it puts them back in the driving seat. Meanwhile the broadcasters and distributors either try to limit the opportunity for download by investing heavily in streaming solutions or attempt to wrap up their artefacts/resources in management systems that lock them to devices/specific users. As a consequence, the increasingly knowledgeable and sophisticated end-users will seek sources or resources that offer a less restrictive path.

Many of the technical and technological limitations of small format devices are being conquered, and, so, it's perfectly feasible that within the next 5 years there will be converged multifunction devices which offer a satisfactory, if not excellent, viewing/listening experience for ebooks, web sites, music, speech and communication via phone, instant messaging and email, plus acting as a source for digital projection. Such devices will be as valuable as standalone artefacts as they will be for their connectivity. Proprietary formats and digital rights management systems will serve only to reduce the likelihood of those particular multifunction devices succeeding in the marketplace.

Before I sign off, it's perhaps noticeable that in this little sojourn I found it more profitable to think MP3 rather than podcast; this perhaps shows that there's more missionary work to do 🙂

To finish off, here are some more podcast or MP3 download URIs I've found interesting and informative (sometimes) and, yes, some of them are even conference proceedings or presentations :)

UKeU inquiry draws to a close? - some reflections and a challenge

What may be the last of the UK House of Commons Education and Skills Committee sessions investigating the demise of UKeU was held on 12 January 2005. First up in the hot seats this time were Sun Microsystems' Leslie Stretch, Vice President, Sun Microsystems UK Ltd, and David Beagle, Account Manager of the UKeU project, Sun Microsystems Ltd. Following up were Sir Brian Fender, former Chairman, and Dr Adrian Lepper, Secretary to the Board, e-Learning Holding Company. Sir Brian is also former Chief Executive of the HEFCE. The key highlights for me of this part of the investigation are:

  • Sun Microsystems claim that when Sir Anthony Cleaver and John Beaumont took over UKeU there was a move from a strategic partnership to a simple supplier relationship; there was a contract change to a fixed price contract. They claim to have lost +++ financially because they received only GBP 7.1 million, losses which were unprecedented in the 20 year history of Sun Microsystems UK Ltd. As the supplier they declare they were not responsible for the design of the system, the specification of the system, or the requirements definition of the system. Sun would now have wished for a stronger role in the governance of the initiative.
  • Sun still believes it's a billion dollar e-learning marketplace
  • Sun believes the whole venture was not given enough time and enough money to succeed (GBP 60 million). They contend that there were no reference architectures for what the UKeU was trying to achieve and so the initiative was breaking new ground.
  • Sun declare they had a working, scalable e-learning platform which exceeded the initial design specification when the plug was pulled.
  • Sun declare they are in discussion with HEFCE about how to resurrect the UKeU platform (which has cost GBP 14 million). Sun contends that the platform still has substantial value and they would like to exploit it commercially. They claim the majority of the IPR on the platform.
  • Sun contends a new platform was required because UK HE is not based on a simple content/information delivery model (Eds Note: Those of us working in e-learning are very gratified to hear this although we daily see much evidence of just such a simple content/information delivery model).
  • Sun intended to profit from their relationship with UKeU by reimplementing the platform in new contexts in the global marketplace, i.e. they would sell or license the system.
  • Although there were plans to launch UKeU on the stock market Sun declare they did not perceive this as a major potential source of return on their investment. The Chairman and Chief Executive of UKeU, however, would have benefitted personally.
  • Sir Brian Fender thinks a fair amount of responsibility for UKeU falls on his shoulders and appears to be prepared to accept the blame if any is apportioned.
  • The initial focus on the overseas market was a strategy intended to overcome the difficulties of expecting UK universities to collaborate with a UK e-University whilst also competing within the local UK market.
  • UKeU arose from a successful bid to the 2000 Spending Round. There was no direct communication by Sir Brian Fender, then Chief Executive of the Higher Education Funding Council for England, with the then Secretary of State for Education, David Blunkett, or the Chancellor of the Exchequer. Communication was between HEFCE and Departmental civil servants.
  • Sir Brian Fender accepts that there was not quite enough challenge about the basic assumptions, i.e. about the way of taking UKeU forward and the probability of success.
  • Because Funding Councils do not normally manage initiatives directly the unusual UKeU management structure was established as an organizational buffer for HEFCE. Despite this buffer Sir Brian Fender became Chair of the UKeU holding company for a period whilst still being Chief Executive of HEFCE … on reflection he accepts this 2-3 month overlap of roles reduced the buffer between HEFCE and UKeU.
  • Sir Brian contends UKeU faced four challenges; apparently, all were underestimated.
    • First, existing platforms were not considered up to the job. Existing platforms like Blackboard and WebCT were perceived as primarily campus-oriented products and … I applaud this … one goal was the “… incorporation of material outside that which is available to the normal teaching and learning coordinates”.
    • Second, moving the culture of learning and teaching to a point where it could exploit new opportunities offered by digital technologies (Eds. note: Didn't Sun assert that UK HE has already made this leap?).
    • Third, understanding overseas markets and establishing relationships with other higher education bodies with different cultural traditions.
    • Fourth, attracting the private sector.
  • Sir Brian contends that significant progress was made with the first three challenges.
  • As to the fourth, there was an inherent conflict in the concept of attracting venture capitalists or other investors, who want a high degree of control and, if necessary, fragmentation of 'products', into a venture where UK universities would certainly not relinquish control or desire fragmentation. Put simply, it is not easy to come to contractual agreements with universities who are used to being in charge of their own space. Sir Brian Fender admitted he had doubts about the wisdom of private sector involvement (which was very much government policy at that time). The UKeU structure was established on the assumption that there would be a 50:50 split in public/private investment. In reality, UKeU became a publicly funded body and, had issues relating to the viability of the business plan not taken precedence, a review of UKeU structures would have had to take place anyway.
  • The UKeU concept arose from within the HEFCE and only later became a political initiative.
  • UKeU began operating as if they were a private company in order to attract private investment (which never came). Meanwhile HEFCE was expecting information as if they were a body spending public money.
  • Dr Adrian Lepper, Secretary to the Board of the e-Learning Holding Company, asserted that what Sun said in their presentation was not entirely accurate and he seems to challenge the concept of partnership, e.g.

    “(Sun) made available to UKeU certain services and licences to be provided in the future, and for that shares were vested in a nominee company under an escrow arrangement where they could be released when these services were actually provided.”

So what are we to make of all of this? Here's my sixpence/six cents worth based on my perspective from being seconded into UKeU for a year. I worked with the Learning Systems team.

I'm not going to bore people with my puny reflections on the big politics apart from to say I applaud Sir Brian Fender's vision and his apparent willingness to fall on his sword, which I hope he doesn't have to do … whatever that means at his level. For lesser mortals this usually means you disappear down the plughole never to be heard from again:)

The vision thing was fine and we should expect HEFCE to put forward such visions. The problems started when the vision thing was absorbed into the big politics and Spending Review machinery at a pace which didn't allow for enough thought experiments and exploration of issues, likely reactions, and hard market information instead of conjecture.

To this add a touch of ideology to create ripe conditions for culture clash, e.g. private-public partnership (which apparently wasn't).

To this add the elevation of the UKeU platform's importance to a point where, for many observers and participants, UKeU became 'the platform'. An unfortunate perception not helped by UKeU's earlier promotional material declaring it was developing a 'world-class' solution.

To this add attempts at platform imperialism where university partners were expected to adopt the UKeU platform, no matter their existing local infrastructures and investments. Making perfect business sense of course, but what was in operation here was a classic 'lock in' strategy … something I've waxed lyrical about in many Auricle articles.

On the platform front, it would have been much, much better if UKeU had put itself in a position where it quietly worked on a new platform under a research and development umbrella instead of finding itself having to deliver a new concept under ruthless production deadlines and out-of-control expectations. Allowing the platform to apparently float to the top of UKeU activities was, to understate it, 'not wise'. Declarations about a 'world class' platform served only to raise expectations to an unrealistic level and place an incredible millstone around the neck of everyone involved.

What I, and others, found particularly frustrating was that UKeU started to build a significant team with knowledge of learning systems in Higher Education only after key decisions had already been made. As a result, any proposed changes would then have to work their way through formal processes and procedures with interfaces between at least two organizations, and sometimes others. Tearing up and starting again was just not an option although, IMHO, and in the opinion of some others, it sometimes felt that it might have been better to do so. From a platform development perspective, what was required was a highly responsive commando team of developers and a small cadre of analysts whose first allegiance was to the UK HE community mediated through UKeU or whatever … note, not an army … a commando team. I was privileged to work with some very bright, motivated people in UKeU but, for some of us, it became apparent that the 'die was already cast' and far too much had come to depend on a platform that would have gained from a much lower public profile, along with an extensive root and branch review of where we were going with it. But, by this time, the platform had developed an unstoppable political imperative which in turn fed the IT press stories of delays, problems, and escalating costs. It would have been so much nicer to have been able to offer the community something far more modest which could evolve, instead of finding ourselves in yet another 'big bang' IT project.

Which brings me to my final points. What keeps being asserted and hinted at by those who have given evidence to the Select Committee is the incredible unrealized value of the UKeU platform, i.e. “If only there had been a bit more time and money”. Now I know we are all looking for something to salvage from the UKeU experience, but let me now sound a note of extreme caution … Eeyore's at it again 🙂

Conceptually, the UKeU platform had some sensible and, at the time, novel concepts with, arguably, the strongest being that it attempted to deliver functionality and content (a learning object by UKeU definition) in the context of the part of the programme/module the user was in at the time. For example, students didn't have to go off to another part of the learning environment just to access an online discussion specific to the part of the course they were at. Moodle, and probably other environments, of course, now do this.

I believe that there are still a few concepts within the UKeU platform which are probably worth abstracting and using elsewhere. I also believe that the platform's proprietary dependencies, e.g. on the Vignette Content Management System, are significant obstacles to uncoupling, although Sun Microsystems may have quietly gone on to do this. But do I believe yet more public money or effort should go on seeking salvation through a child of the UKeU platform? Absolutely not.

Note I say through public money or effort. Sun Microsystems apparently claim the majority of the intellectual property over the platform and are in 'discussions' with the HEFCE on how to take this forward. Even if Sun was to offer a Java-like 'free' license, or even a full open source license, I now think that all a child-of-UKeU platform would become is an unwelcome distraction which would consume evaluation and development effort now better spent elsewhere. But reputations are at stake here and it may be kind of difficult for some of the stakeholders to quietly fade into the background, so here's a challenge:

If the UKeU platform is so good, although no private company has chosen to buy it, then, before HEFCE comes to any agreement with Sun Microsystems et al, why not offer the platform to the UK HE community for independent review, with review members selected not by Sun or HEFCE but by the community? Also, publish all internal and external reports on the UKeU platform, including those from UK Universities who have actually used it as well as internal reports from within UKeU. A glowing independent report is all that is necessary; reputations would be saved and the community would feel less hard done by. After all that has happened, private arrangements between Sun and HEFCE are certainly the last thing we want.

It would be the ultimate irony in an already sorry saga if attempts to resurrect the GBP 14 million UKeU platform, even under a banner of offering it 'free' to the community, end up being the final folly.

Finally, lest it be forgotten, many fine people joined UKeU from relatively secure posts elsewhere or were appointed to UKeU project posts within UK Universities. They bought into the vision, they gave it their best shot and they certainly didn't get any bonuses when it all turned to dust. Instead, they joined the ranks of the unemployed or found themselves in different jobs with far less security and remuneration. They, I am sure, would have added another dimension to the deliberations of the Select Committee, but from them we will hear nothing.

Case study and policy wonks may find the Uncorrected Evidence and Sun Microsystems memorandum of interest.

The computing press are, to say the least, hostile, but if you'd like more polemic you can visit some of the links below:

http://www.pcw.co.uk/news/1159479
http://www.whatpc.co.uk/analysis/1160641
http://www.whatpc.co.uk/news/1160644

DSpace filling the vacuum?

This week I've seen several references to DSpace pop up in my RSS reader - several of which were related to the University of Arizona's DLearn project. DLearn is essentially a Learning Object Repository that looks (and sounds!) very much as though it's based on DSpace. Although it's also registered on the DSpaceInstances Wiki the DLearn site doesn't declare the underlying technology in an 'About DLearn' section, which is a pity. It was a little disappointing to see so little content in the DLearn repository, but in fairness I am well aware that faculty buy-in does not happen overnight.

I subsequently came across another, rather relevant article (entitled 'Understanding Faculty to Improve Content Recruitment for Institutional Repositories'), which documented the University of Rochester's own implementation of DSpace (as an Institutional Repository rather than as a LOR), and described their efforts to secure faculty buy-in. The article starts off by highlighting some interesting figures:

“An April 2004 survey of 45 IRs found the average number of documents to be only 1,250 per repository, with a median of 290. This is a small number when considering the hundreds of thousands of dollars and staff hours that go into establishing and maintaining an IR. For example, MIT Libraries estimate that their IR will cost $285,000 annually in staffing, operating expenses and equipment escrow. With approximately 4,000 items currently in their IR, that is over $71 spent per item, per year“.

Ouch!

The report goes on to outline the process and results of a year-long research project by the University of Rochester, which aimed to more clearly understand the needs of faculty and to identify ways in which their IR could complement the existing working practices of staff in a research intensive institution.

The researchers established that the benefits of the IR were largely misunderstood by faculty, an issue which was not helped by the use of terminology which, it was wrongly assumed, was understood. Through interviews with staff they drew up a wish-list of priorities, many of which were centred upon the authoring, archiving and sharing of information.

With faculty priorities clarified, they set about enhancing the IR to meet the needs of their intended users. It is worth pointing out, however, that several of these priorities could not be met initially. DSpace has not been designed to handle workflow processes such as versioning and co-authorship, for example - comments that Arizona's DLearn initiative has also received. Nevertheless, the University of Rochester has begun to make some interesting enhancements to their implementation of DSpace.

Staff in a research-led institution naturally want to improve the processes associated with their research. They want to make their own work easily accessible and searchable on the web, to be able to control who sees it, and they want to protect it from accidental loss. They don't, however, want to have to do anything complicated in order to achieve this. In order to make the IR more relevant to the needs of these individuals, Rochester has added another level to the DSpace structure. The addition (known as a Researcher Page) is essentially a personal web page which acts as a directory of expertise for each member of staff. On their personalised Researcher Page, staff can showcase their research and can link to other publications held in subject repositories and electronic journals.

Behind the Researcher Pages is a simple interface through which faculty can upload and manage their documents (called the Research Tools page). There are no steep learning curves to deal with, and because staff can appreciate that there are personal benefits to using the system (as well as benefits to the institution), Rochester have effectively incentivised greater use of the IR.

As the report goes on to say:

“We believe that if we support the research process as a whole, and if faculty members find that the product meets their needs and fits their way of work, they will use it, and “naturally” put more of their work into the IR”.

It will be interesting to see how this project pans out, and to see whether Rochester's considerable efforts in this area do indeed increase uptake of the IR. They are planning to further develop their implementation of DSpace in line with the priorities identified by their staff, and are intending to make the Researcher Page and its associated tools available as open source, as and when they are satisfied with them.

BBC indicates academic use of blogs increasing

Those of you who have been following our Open source enterprise weblogging series of postings may be aware that, in contrast to the “let's dominate the VLE market” ethos which has been supported, by default, by a significant part of the HE and tertiary educational sectors, some of us have been putting the case for providing access to simpler but, nevertheless, educationally powerful tools like blogs and wikis for some time. It's good to see that the BBC has picked up on this alternative approach to supporting knowledge acquisition and learning. They offer the following on their news site: Academics give lessons on blogs (23 January 2005).

There are some good, positive statements and examples in the Beeb's article, e.g.:

“Blogging lecturers say the technology provides them with easy online web access to students and improves communication outside of the classroom.”

“The weblog meant a place to store ideas, links and references.”

“… open new opportunities for students and staff.”

“… gained knowledge from strangers.”

“… develop things in a fairly cohesive fashion.”

Ah, but! … there's also the view being put by the University of Birmingham who, whilst accepting the benefits of the blog as:

“… a strong tool for rapid knowledge development”

… express concern about the problems for an institution's reputation and legal liability which arise from the blog's 'openness' and 'ease of instigation'.

Birmingham, some readers may remember, is/was embroiled in a dispute with some of its academics who were publishing views, on their personal web pages hosted by the University, that some pro-Israeli groups found unacceptable.

As I indicated in my recent contribution, Weblogs Niche or Nucleus, to last November's UCISA workshop, Beyond Email: Strategies for Collaborative Working and Learning in the 21st Century, there are good examples of Acceptable Use Policies around, e.g. Harvard Law, which should help prevent such disputes arising. An AUP should not, however, translate into “you can't publish unpopular views or minority viewpoints” that may upset the status quo, or vested interests who are not stable or confident enough in their own product or viewpoint and therefore feel a need to try and undermine or eliminate the challenge.

It would be a sad day indeed if every story, document or article produced by an institution which may be viewed by the public had to pass through a Department of Censorship for approval just to make sure it was 'on message' or did not open the institution to brand or legal risk. The reality is that, unless we are to return to a totally verbal culture, everything written down is potentially viewable by the public (by intent or accident) and electronic communications, in all forms, now make it impossible to stop what may be perceived as off-message 'leakage'. Are we, for example, going to moderate and record all mobile phone communications, text messages, instant messages, file attachments, podcasts, etc., etc.? I daresay there will be those institutions that try to do so, but it's a lot cheaper and more efficient to have staff and students 'buy in' to a reasonable AUP and to provide them with some great tools and services whilst educating them about their use, abuse and possibilities.

It would need a bureaucracy bigger than the productive base of an institution to approve everything, and of course the introduction of such a bureaucracy would eventually erode the productive base on which it depends for survival. So what are we left with? Well, we could try concepts like trust, loyalty, discussion, and agreement instead of fear, uncertainty and doubt about loss of control resulting in policies which try to control technologies either by banning them or constraining their use to a point where any user benefits are lost.

Yes, tools like blogs and wikis place authors and contributors in charge of their content in a way that those with strong centralist tendencies may find uncomfortable, but their job is to provide the pipes and wires that allow the water and electricity to flow. The use or abuse of that water and electricity needs the equivalent of an AUP. Such an AUP should never provide for the blanket removal of the underlying service just because of one abuse or dispute over use. I daresay that Salam Pax is grateful that the draconian AUPs of the previous Iraqi regime were not enacted just because he was 'off message'.
