Open source enterprise weblogging (continued)

No sooner had I raised the question about the eventual need for management tools for the multi-user version of the open source weblog engine WordPressMU than James Farmer contacted me with a link to a plugin which provides at least rudimentary management facilities. If you pop over to Derek Ditch's site you'll find a basic management plugin for WordPressMU. Make sure you download the updated version and place it in the plugins directory under wp-inst/wp-content. It's now possible to create and delete blogs, and reset passwords, from a single interface. There's obviously a lot more that could be done with this, e.g. blog backup and grouping, but even at this basic level Derek's work is to be applauded.

So I then moved on to tackle how the owner of a WordPressMU blog could select their own appearance and, what do you know … with a little more work this is possible too. The key issue here is that WordPressMU is based on the latest beta of WordPress (1.5) and as such adopts its new theme model, which is a bit more complicated than that of the current release version of WordPress (1.2). But it should be possible.

I'm still left wondering why such an important area of development for WordPress doesn't give the impression of being part of the mainstream work, e.g. Donncha and Derek's inputs were sourced from their own blogs. It would be good to hear some perspectives from people who know.

Open source enterprise weblogging (continued)

In yesterday's posting I applauded James Farmer's success in getting the multi-user version of the open source WordPress (WordPressMU) weblog engine working. I bemoaned the apparent lack of mainstream support for WordPressMU from the WordPress community, and my own lack of success in installing and testing it locally. There's news, at least on the latter front. I've now managed to install WordPressMU and so, with the inclusion of a sample (alpha level) registration script provided by Donncha Ó Caoimh (the lead developer of WordPressMU), users can create their own blogs within 30 seconds or so. In the latter part of this article I'll describe my installation gotchas!, which may help others avoid them.

I think what Donncha has done is really fantastic and he is to be congratulated.

I have, however, now moved on to a couple of key questions.

Although it's now possible for us to enable everyone to have an 'insta blog', a la Blogger.com, what happens when people leave the organization or finish their course? How do I manage/delete the blogs so created? At the moment each new blog is allocated its own table prefix in the designated WordPressMU database, so I guess, pro tem and with small numbers of users, it would be fairly easy to amend and delete blogs. But it would be dead easy for an institution to find itself with 5, 10, 15, 20 thousand users or more; at that level of uptake and turnover the management interface and processes become absolutely critical. The development of such is going to require the participation of pretty motivated and competent open source developers; but, as I asked yesterday, is WordPress.org buying in to the importance of this variant of their open source 'brand'?
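
To illustrate what 'fairly easy' means in practice, here's a minimal housekeeping sketch. It assumes the PHP 4-era mysql extension and a hypothetical per-blog prefix such as wp_42_; the connection details are placeholders, and dropping tables is, of course, irreversible:

    <?php
    // Remove all tables belonging to one blog, identified by its prefix.
    // Hypothetical values throughout - adjust for your own installation.
    $prefix = 'wp_42_'; // table prefix allocated to the blog being deleted
    $link = mysql_connect('localhost', 'dbuser', 'dbpassword');
    mysql_select_db('wpmu', $link);
    $result = mysql_query("SHOW TABLES LIKE '{$prefix}%'", $link);
    while ($row = mysql_fetch_row($result)) {
        mysql_query("DROP TABLE `{$row[0]}`", $link); // irreversible!
    }
    mysql_close($link);
    ?>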

Also, if each new blog creates, say, 12 tables, then multiply that by the number of users: 20,000 blogs would mean 240,000 tables in a single database. Are there performance implications in a single MySQL database handling such large numbers of tables, or do we just throw hardware at it? I don't know, but perhaps some MySQL gurus out there may know the answer.

Now to these installation gotchas! (N.B. this section will give technophobes indigestion, so they should jump to the last paragraph in this article 🙂)

It really seems to matter that all the admin/config balls are in line before starting the install. Donncha's install is really efficient when everything's set up … but if the balls aren't in line … 🙁

The first gotcha! arose because the .htaccess file contains the mod_rewrite rules which drive the generation of the virtual URLs. Our computing admin had turned off .htaccess support, so once I had it permitted for my WPMU directory things began to move forward.
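
For anyone facing the same hurdle, permitting .htaccess is typically a small change in the Apache server configuration. A sketch, assuming a standard Apache setup (the directory path is a placeholder):

    # httpd.conf - permit .htaccess overrides for the WPMU directory
    <Directory /path/to/wpmu>
        AllowOverride All
    </Directory>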

The second gotcha! was that I had prepared a wp-config.php for my MySQL configuration (as the install page indicated I should), but the next install page then stated that such a file already existed and stopped the install until I removed it, so that the installer could create the file itself.

The third gotcha! was that the install gathered the database information and then produced a wp-config.php file with a mangled database name. Once I realised this was what it was doing, I just edited the generated wp-config.php manually and the install then proceeded smoothly.
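
The generated file boils down to a handful of settings, so it's worth a quick sanity check before proceeding. A sketch of the relevant lines (the values shown are placeholders, not WPMU defaults):

    <?php
    // wp-config.php - check these after the installer generates the file
    define('DB_NAME', 'wpmu');           // third gotcha!: watch for mangling here
    define('DB_USER', 'dbuser');
    define('DB_PASSWORD', 'dbpassword');
    define('DB_HOST', 'localhost');
    ?>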

The fourth gotcha! came when I edited the .htaccess file to add the path to wp-inst in Donncha's new RewriteRule. I inserted the full filesystem path, but this generated multiple errors. I found that only the directory portion leading to my WPMU root was necessary; in my case /subdirectory/wpmu/ was needed instead of /fullfilesystempath/subdirectory/wpmu.
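
I won't reproduce Donncha's actual rule here, but the shape of the mistake, and the fix, looks something like this (a hypothetical fragment for illustration only, reusing the placeholder paths above):

    # Wrong: a filesystem path - Apache rewrites in URL space
    RewriteRule ^(.*)$ /fullfilesystempath/subdirectory/wpmu/index.php [L]
    # Right: only the directory portion leading to the WPMU root
    RewriteRule ^(.*)$ /subdirectory/wpmu/index.php [L]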

As a quick, and probably very dirty, install guide I offer the following (at your own risk and with no guarantees it's relevant to your situation):

  1. Download the latest WordPressMU snapshot from http://mu.wordpress.org/download/
  2. Unzip the file and change the name of the folder to wpmu.
  3. Create (or get your system admins to create) a MySQL database.
  4. Make sure you know the database name, user, password, database host.
  5. Transfer wpmu to your webserver.
  6. Make sure your system admin has enabled .htaccess for wpmu; note there is one .htaccess file in wpmu and another in its subfolder wp-inst.
  7. Open that location in your browser through your webserver … the install should start
  8. Follow the instructions but don't create a wp-config.php manually; let the install ask you for the information and create the file (but check the content of the created file before proceeding with the rest of the install). Read/write permissions are necessary on the following directories: wpmu, wp-inst and wp-content/smarty-templates, as well as on the .htaccess files in wpmu and wp-inst. These permissions can be amended after the install.
  9. Download the alpha registration script from Donncha's site and follow the instructions there (note my fourth gotcha!).

Having remedied these gotchas! and worked out the installation sequence, the installation of the base system took only a few minutes and, as stated earlier, users can have their own customizable blog in 30 seconds or so. Note that I mean customizable in a technical sense: it needs a competent/confident person to make the necessary adjustments to move beyond the default appearance of a blog's home page. However, such a person could set up a new default style fairly easily, e.g. for a faculty. Posting articles and comments is, of course, a 'no brainer', even in the default state.

Open source enterprise weblogging

Last October, in BlogBuilder Highlights, I was singing the praises of the University of Warwick's enterprise weblog system whilst making the case for JISC involvement in facilitating the availability of such facilities for other institutions. There have been some interesting developments on the open-source front. Over on James Farmer's blog we find Open source enterprise blogging through WordPress, in which he outlines his success with the multi-user version of the very popular open source WordPress blog engine, i.e. WordPressMU.

James has had more success than me in installing WPMU. But his success is a very positive development because, if an educational institution or organization can install their own hosted multi-user blogging solution, then the scope for the development of interesting educational applications/tools based on such an engine is considerable, e.g. ePortfolios.

I have a few concerns, however.

I like WordPress a lot and, as soon as I have some time, Auricle will be ported to that engine. I'm not sure, however, despite the presence of a WordPressMU home page, that the main body of WordPress developers buy into the WordPressMU vision. For example, a question about WordPressMU in the main support forums stimulated the following response from a WordPress developer:

“… WordPress MU is a distinct project and it is not part of WordPress proper. You will have to contact the developer directly or find WP MU support forums somewhere.” (19 Jan 2005)

The key figure in WordPressMU development is Donncha Ó Caoimh.

Donncha's WordPress category within his blog Holy Shmoly! appears to be the nearest thing to support that exists and he's incredibly helpful. Donncha appears to be doing a sterling job of keeping this very important show on the road. Nevertheless, the paucity of documentation is 'challenging' although, as James Farmer shows us, it's possible to succeed.

I think it's essential that WordPress.org recognizes that the multi-user hosted variant of WordPress is a key part of its development, not some minority fork. As a start they should aim to integrate support into the main site as soon as possible.

VLE of the future is a minimalist aggregator

“… the VLE of the future is going to be less like an information portal, and more like an aggregator. it's going to be more like an editing and publishing tool and less like a browser. It's going to break out of the browser window and sit on the desktop … It will be slick and minimal, and will actually be fun to use”. Scott Wilson of CETIS is off to a good start with his inaugural article the VLE of the Future from his new blog, or, as he calls it, his workblog. I agree with Scott wholeheartedly. My recent Auricle article Aggregator Inhibitions provided a brief overview of some of the new breed of aggregator services and related issues.

I suspect that the mainstream proprietary VLE vendors recognize this as well, e.g. witness the Blackboard and WebCT agreements with MERLOT and their use of RSS.

Of course, being ever the Eeyore:(, if I was a well financed corporate, or even just desperate to grow market share, I would be looking to buy up such aggregator services … either to kill or control them.

Anyway, let's welcome the Scott Wilson workblog; add its RSS/Atom feed to your aggregator … or is that your minimalist VLE?

Wikis for knowledge nugget integration?

On David Wiley's blog we find the following intriguing statement: “Imagine the value OLS (Open Learning Support) could add to MIT/OCW if OCW were a wiki?”. David's imaginings were stimulated by Chris Wagner's posting entitled the Impending Demise of Slashdot. In contrast to the article plus threaded comment model commonly found in blogs, Chris Wagner makes a powerful case for adopting a wiki model.

“The Wikipedia model (the 'wiki way') is superior for knowledge management based on large-scale, persistent conversation.”

“With a large number of contributions, however, value might be better applied through content aggregation, integration, and editing. In that way, comments would add more to quality than to quantity.”

“… the focus on the new (articles) and the simple threaded appending of comments to articles leads to a long, but not well integrated knowledge asset, whose individual knowledge nuggets (story + comments) are poorly maintained.”

“… asset value for readers is almost exclusively determined by its new content, not the (quickly dated) archives … lives on the value of its present content, deriving too little value from its archived content.”

Chris goes on to contrast this with the Wiki model using Wikipedia as an example, e.g.

“… articles had more writing activity (by number of edits) in later life, accounting for 65% of content. Thus, quantity was more important earlier, replaced by a focus on quality later.”

Chris has a point. His advocacy of the archive as the primary asset, replete with embedded knowledge nuggets, is important. To me he paints a picture where, in the story + comment model, these nascent knowledge nuggets travel along a conveyor belt; unless you're paying attention at the time, they fall off the end of the belt and get lost, to be replaced with new candidates which then vie for our attention.

The high level of contribution on Slashdot is fairly unusual. For most of us, the flow of knowledge nuggets in the form of article + comments is more modest. But I agree with Chris: we can easily focus only on the new article and commentary when, as those of us working in higher education know, the real gems lie in the archive. It's for this reason that, wherever possible, when we write an Auricle article we at least try to reference related Auricle articles. Having said this, I suppose a wikified Auricle could be composed of a never finished and constantly updated series of inter-related online documents.

I have a major concern, however. As someone who is plagued by the comment spammer pestilence in Auricle, I shudder to think what this plague could do to the quality of knowledge assets if the spammers found, or were given, access to a wiki.

Nevertheless, Chris Wagner's article and David Wiley's comment make for a good read and offer plenty of food for thought.

Moodling around in anger - some initial reflections

In a series of previous articles we described our initial 'look see' at the open-source Moodle virtual learning environment. In the silent interval since those articles we've been busy working with colleagues in one of our departments to design, develop and implement a distance learning course in which Moodle has started to play a significant part. So this article offers our first impressions of using Moodle in anger. Note the caveat 'first impressions', as we fully expect 'worms to come out of the woodwork', as they do with any system. As we have expounded within Auricle and elsewhere, we've used Blackboard and we've used WebCT, but we've never committed to an enterprise class system. Why not? Mainly because the University, wisely, thought more time was necessary for the technology and the e-learning knowledge base to mature. Whilst the functionality of proprietary VLE/LMSs may have been enriched over the years, we still feel these are inherently first-generation products which major in content delivery, compounded by the fact that increasingly expensive recurrent licences are necessary to support that delivery. In a previous Auricle article, Clark Kent solutions have superpowers - well sort of!, we've also challenged the community to look objectively at so-called e-learning courses using these proprietary systems, many of which are, in reality, no more than logistical conveniences for the delivery of content, perhaps with a smidgen of noticeboard.

We've been tracking the development of Moodle particularly because of its assertions that its fundamental design was at least informed by a socio-constructivist pedagogy.

A few months ago the e-learning team at the university was approached by a school within the university which wished to redesign a Master's distance learning course to adopt a more student-centred approach. Some members of the school team had experience of delivering courses online using Blackboard. However, because of Blackboard's inherent content delivery model, they felt it would not meet their requirements for the redesign of the course.

As this article is about Moodle I won't go into the options that were considered, but 'cut to the chase' and reflect on the process that has taken place.

The whole course comprises eight modules, with an overarching module providing the 'glue' for the complete course. The course design was to follow a blended approach, using some traditional distance learning material but adopting a student-centred model for the activities/resources, with assessment tied to the activities. The redesign was in response to an in-depth review of the existing course, which highlighted a number of key requirements, including helping distance learners to feel less isolated, taking advantage of the professional experiences of the learners (this course is for medical doctors), and making the materials as relevant and up-to-date as possible. The design also needed to allow cohorts of learners to join at three-monthly intervals.

It's probably important to note that this project had to be off the ground within approximately 3 months when the first living, breathing (and paying) learners were to start their studies.

The project team, which included ourselves, met very frequently, a couple of times per week, to produce a model of what the course would look like and to set a schedule for key points to have been reached in order to meet the deadline. Once we had the specification we then started to create the course in Moodle.

We installed a course development version of Moodle 1.4.1 on the main university Web server which was later ported to a dedicated server for delivery of the live course.

One of the first tasks was to give the school a clear identity by creating their own theme. This was fairly easy to do since it mainly required taking an existing theme from Moodle and adapting it to meet the needs of the school. A new language file was also created to accommodate changes to some of the module titles, e.g. courses became units. As with the theme, the new language file was based on an existing file and simple editing was all that was required. Once the look-and-feel was established the course could then be created in Moodle.
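
To give a flavour of how simple that editing is: a Moodle language file is just a PHP script populating a $string array, so relabelling 'course' as 'unit' is a matter of changing a few entries. A minimal sketch (the keys shown are from the standard English language pack; treat the details as illustrative for Moodle 1.4):

    <?php
    // Customised language file: relabel Moodle's 'course' as 'unit'
    $string['course']  = 'Unit';
    $string['courses'] = 'Units';
    ?>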

The development stages were, as mentioned above, subject to meeting a number of important dates. These were identified as the dates by which the key players in the project should have been introduced to the new environment; for all of them this would be a first-time experience. This group included the senior management within the school, the personal tutors for the course, the course administrators, the directors of studies and the course tutors. The 'buy in' of key stakeholders like these is essential, because without their support the project would have great difficulty in embedding. Presentations were made to obtain as much feedback as possible so that changes could be incorporated into the design.

Exposure of the course to a representative group of students was also deemed vital before the first cohort started in January 2005. A rapid prototype of the course was developed that had many of the features of the main course. Towards the end of November 2004 a group of volunteers from the existing course were invited to an event at the university. They were at various stages in their studies, ranging from just beginning through to starting the dissertation. They were shown the new environment and, through a series of 'etivities', explored the course features. Observers were on hand to monitor how they interacted with the VLE, noting any difficulties they experienced, e.g. navigation problems. At the end of the event a debrief was used to elicit verbal feedback. This event provided valuable information that was incorporated into the user instructions.

The volunteers then had two weeks to carry out some other online activities before attending another feedback session at the end of that period. This session included some online discussion and face-to-face questions and answers. One of the volunteers, who had not been able to attend the first event, joined the second event 'virtually'. We made use of the chat facility in Moodle and, with an observer who had a great typing speed acting as scribe, this student was able to read the comments and questions from the rest of the group. An interesting experiment that provided a useful record of the event.

The last but one group of key stakeholders, apart from the students themselves, were the course academics. They are all discipline specialists who have authored the text-based materials for the existing distance learning course, and they too needed to be on board. A number of them attended a briefing session to be introduced to the new course. It was an interesting meeting because, at the start, the academics polarised into two groups: those who were prepared to change and those who considered the existing approach sufficient. By the end of the session, however, all were supportive of the new course to a greater or lesser degree. A comment from one of the early doubters was very encouraging:

“I must be the most IT incompetent person in this room but even I believe that I can use this facility.”

One up for Moodle usability then?

This course also has international students and so students from Hong Kong became the final group to test the new environment.

By early December 2004 more and more material was being added to the course, including support material developed out of the feedback from the various events. The course team kept in contact by posting any new information in a discussion forum; the email alerts became an essential part of making sure the project stayed on time. The first units are now ready for the students.

The course went live on 10 January 2005 and so it is very early days. However, there are already a number of comments showing that first impressions are very positive, with statements like 'very impressive site - I am going to enjoy working this way!' being typical.

Auricle readers will undoubtedly have noticed that, although Moodle is in this article's title, it has hardly been mentioned. One of Moodle's greatest strengths is that it appears not to get in the way: a course can be designed to suit what its developers want, rather than being shoehorned into an environment that substantially changes how the course is delivered. I'm not saying Moodle does not have its limitations, but it largely does what it says on the box (if it had one:) If you wish to deliver courses that support a student-centred approach, then this is what it does. The software installation does not raise any problems and customisation, as mentioned earlier, is straightforward, even for people with limited programming experience. The support from the wider Moodle community is quite exceptional, with answers to most questions to be found in the extensive forums at the Moodle Web site. Questions posted to the forums often produce responses within hours.

So what are the problems that we have so far experienced? Not too many, thankfully, but note that our use of Moodle is for a single course for one school in the university.

One key constraint is that there is not a lot of up-to-date documentation for Moodle in any one place. Nevertheless, information can usually be found via the support community. This means that finding out how a part of the underlying code works, e.g. developing a new language file, can be a little slow.

Also, system technical support will need experience of the open-source PHP scripting language if course customisation is required. Nevertheless, Moodle's modular architecture means that we have already managed to enhance some aspects of basic functionality via its plugins (which it calls blocks).
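
For the curious, a Moodle block is a small PHP class. The sketch below shows the general shape of the Moodle 1.4-era block API; the block name and content here are hypothetical, not one of the blocks we actually use:

    <?php
    // A minimal Moodle block: init() names it, get_content() fills it.
    class block_school_notices extends block_base {
        function init() {
            $this->title   = 'School Notices';
            $this->version = 2005011900; // YYYYMMDDXX convention
        }
        function get_content() {
            if ($this->content !== NULL) {
                return $this->content; // already built for this page view
            }
            $this->content = new stdClass;
            $this->content->text   = 'Notices for the school go here.';
            $this->content->footer = '';
            return $this->content;
        }
    }
    ?>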

We initially installed Moodle on the university Web server for test purposes, but we found this affected system response times significantly, even with a relatively low number of users. To guarantee good performance it is necessary to run Moodle from a dedicated server, and so, with the co-operation of our University computing services team, we've now taken this step.

Although this is a brief and initial account of a work in progress, we believe the participants have created an online course to the design that we wanted, and not one forced on us by the VLE. We have been able to concentrate on providing a rich and dynamic environment for the students while Moodle sits in the background doing its job.

So what's the judgement so far?

Shows some promise. We have experienced very helpful and rapid responses from the support community. We now need to consider how the system would cope with big numbers of registrations and enrolments. Document management appears weak, but perhaps no weaker than in other such systems. We need to explore further how systems such as Moodle could interact with document management systems/learning object repositories (centralized or distributed).

Textbooks or iPods?

In several of my recent postings to Auricle I've proposed the emergence of a 'filling station' model where networks are used to refresh highly portable devices. It seems that at least the College of Business Administration at the University of Texas is already well down this road. The University of Texas' course Digital Media for Management and Marketing states:

“The text is audio. It contains a series of mp3 files of panel discussions from conferences sponsored by ITConversations and other audio files. The topics of this course change so rapidly that no traditional textbook adequately fills the need.”

I admire the confidence, and can certainly see the efficiency gain for course developers when conference panels and speakers are the primary source of the audio. It's one thing to capture the outputs of interactions that are taking place anyway, and it might even be useful to capture streams of consciousness/brain dumps, but I'm less convinced about the efficiency gains from producing the equivalent of an audio textbook, or even chapters thereof, or other polished audio presentations.

Why?

For most mortals, polished usually means scripted, and scripted means that text has to exist and be sequenced. If so, why not just publish the script? Unless, of course, the audio brings an extra dimension not possible via textual narrative alone.

Like what?

Well, it may be good to hear the actual voice of a 'thought leader' (don't you just love that term:) We may want to benefit from non-verbal cues, or specific sounds may be part of the presentation, e.g. medical broadcasts (heart and lung sounds, types of baby cries).

But … and it's a big but … let's not forget that there's a significant overhead for the listener/viewer. They have to extract information from the audio/video whilst it marches ever onwards; even with pause/rewind, this is a much slower process than is possible with text.

So as we get caught up in our podcasting enthusiasm it's perhaps worthwhile asking if our podcast is actually going to make life better/easier for the user. If yes then go ahead but don't forget that an awful lot of information can be carried in humble text.

Southampton offers free online access to all research

Via JISC News we learn that the University of Southampton is to provide core funding for its institutional repository, which, as the item suggests, marks “a new era for Open Access to academic research in the UK.” Southampton are to be applauded for their leadership in this regard and it will be interesting to see how quickly others can follow.

Press commentary on Google's Digital Library Initiative

Following on from Lisa Williams' Google Does it Again last week, there has been some useful analysis and commentary published in the UK press. Read the articles quickly before they move into their pay-per-view archives; read the rest of this article and you'll see what I mean. First up is Ben Macintyre's Paradise is paper, vellum and dust (The Times, 18 Dec 2004), from which the following extract is well worth repeating:

“Indeed, so far from destroying libraries, the internet has protected the written word as never before, and rendered knowledge genuinely democratic. Fanatics always attack the libraries first, dictators seek to control the literature, elites hoard the knowledge that is power … With the online library the books are finally safe, and the biblioclasts have been beaten, for ever.”

But that was on Saturday. The following day the Sunday Times article All the world's best books at a click (19 Dec 2004), by John Sutherland, Professor of Modern English at University College London, raised the commercial spectre:

“By the act of converting printed books to digital form Google will be creating a new copyright … Works in the public domain will effectively be privatised. Whether or not Google chooses to exercise its rights, it and its library partners will be owners of the newly processed property. So the vast reservoir of material in the out-of-copyright public domain will become 'proprietary', or pay-per-view. If we get access, it will be because we are 'allowed', not because we have the right. Great Books will go the way of Test cricket. You don’t pay, you don’t see. Google hasn't said it will do this; but, as far as I can make out, nor has it definitely said it won't.”

John Sutherland then goes on to suggest even the BBC's much trumpeted digital archive could be subject to such commercial pressures.

Hope given with one hand. Withdrawn with the other:(

Google Does it Again

Just in case you missed it elsewhere, Google has announced its plans to digitise the content of five of the world's leading academic libraries, and to make it accessible via Google Print. The ambitious initiative will include the full libraries of the University of Michigan and Stanford, together with selected archives from Harvard, Oxford and the New York Public Library. According to Susan Wojcicki, director of product management at Google, the project aims to “…unlock the wealth of information that is offline and bring it online”. Whilst the scanning of books to facilitate their online presentation is nothing new (both Google, via its Google Print initiative, and Amazon.com have been doing this for some time), the breadth of material being gathered by Google is likely to have a huge impact - the Michigan libraries alone hold seven million volumes (which, it is estimated, will take six years to digitise).

According to a recent article on the BBC website, where the books are subject to copyright, users will only be presented with access to bibliographies and extracts. It is worth noting, however, that the New York Public Library is allowing Google to include a small portion of books no longer covered by copyright, and that according to Google Print, this material will be available to read online in its entirety.

Further information about this initiative can be found online at SearchEngineWatch.
