We've seen it happen many times: an organization purchases a learning management system based on a sales representative's claims about particular features or functionalities, only to discover that the delivered product does not work quite the way expected.

To a certain extent, this is an issue that can be addressed by asking for details on a per-feature basis during a request for information (RFI) or request for proposal (RFP) process. For example, in our process we ask vendors to assign one of the following capability ratings to each feature we ask about:

  • S - Standard: used to describe features and functionalities that are part of the LMS and (to the extent required) can be set up or configured by [your organization]
  • SS - Semi-standard: used to describe features and functionalities that are not automatically part of the LMS and require work by [your organization] to set up or configure, but that have been implemented for other clients
  • TP - Third-Party: used to describe features and functionalities of the LMS that are available - and have already been implemented and tested in other situations - via products or tools offered by other companies in partnership with the LMS company. (Third-party solutions that have not yet been implemented and tested should be marked as Completely Custom.)
  • CC - Completely Custom: used to describe features or functionalities that could be added to the LMS for a particular client via custom programming
  • U - Unavailable: used to describe features and functionalities not available in the LMS

These ratings certainly help, but they still only get you a fraction of the way to knowing whether you will end up with a deployed system that truly meets your business needs. Even with these categorizations, there is still a great deal of interpretation involved in how a feature works.

A more critical step is the development of detailed LMS use cases or user scenarios that accurately describe the most critical functions you expect an online learning system to support. The basic idea behind use cases is that choices about technology should be driven by the desired user behavior and outcomes. Ideally, they should be developed before any focus on specific features and should then drive your review of features during vendor demonstrations.

Developing LMS Use Cases

Now, I am not an IT guy by training, so I will not pretend to know the best approaches and practices for the development of formal software development use cases. Having been on both the buy and the sell side of learning technologies many times, however, I know the practical steps any organization can take to help ensure a good business choice through use case development.

To start with, be absolutely certain that you understand your overall strategy and business goals for implementing a learning platform in the first place. A key part of your strategy and goal formulation should be developing a thorough understanding of who your end users are and which learning products and services will be of most value to them. Use cases are only valid to the extent that they map back to actual user needs that correspond with your organization's business goals.

With these factors in mind, the next step is to invoke and document your imagination. Imagine yourself in the place of your most typical end user or administrative user, actually sitting at the keyboard and on the verge of interacting with the learning management system environment.

Now, in your mind, start walking through the specific actions you would expect to take and what you would expect the result of each action to be. A simple three-column table like the following can serve to capture the actions you take and the corresponding results.

Use Case A

User Action:
  1. An end user navigates to her personal menu in the learning management system.

Desired Outcome: By default, all of the courses that the user is currently enrolled in or has ever been enrolled in are listed in ascending alphabetical order by title. For each title, an icon indicates the status of the course: not started, in progress, or complete. For each title there is also an option to view additional details about the course.

Use Case B

User Actions:
  1. An end user follows use case A.
  2. The user clicks a link or icon to view additional details about a course.

Desired Outcome: The view for the course expands to show additional information associated with the course, including

  • course description information
  • scores for any assessments associated with the course
  • a link to print a certificate for the course (visible only when the course has been completed successfully)

Use Case C

User Actions:
  1. An end user follows use case B.
  2. The end user clicks "print certificate."

Desired Outcome: A PDF document launches that displays the following:

(Detail certificate contents)

As you can probably already sense, the above can become a very detailed and ultimately very tedious exercise pretty quickly. The goal as a business decision-maker, however, is not to think through every possible variation that can occur. Rather, working with the dozen or so "must-have" scenarios that you can imagine, make sure you have walked through the broad set of actions that comprise each scenario.

Even doing this at a relatively broad level requires discipline and time, but the process inevitably clarifies your vision and identifies challenges and opportunities of which you were previously unaware.

Ideally, once you have gone through the process, you should also walk through the results with a handful of target users. A set of use cases that has been reviewed and verified by target end users is one of the most valuable tools you can have in determining the features your learning management system must have and can serve as the roadmap for vendor demonstrations. Having prospective vendors speak to and demonstrate against your specific use cases in real time will be more valuable than any information you are likely to gather through an RFP.

In fact, with solid use cases, you may be able to avoid an RFP altogether.

Or, at least, you can avoid the type of RFP that is driven by a huge laundry list of features representing the wish lists of everyone involved in your selection process. Driven by the fundamental question “What should a user be able to do when using the technology and why?” use cases have a way of clarifying and narrowing the range of features that are truly critical.

Consider, for example, this communication I received from Amanda B., the head of learning at a large trade association:

As you advised, my team and I spent several weeks documenting and categorizing our use cases. The results were so robust, we did not need an RFP. Armed with the use cases, we met with several CMS/LCMS/LMS vendors that met our prequalification criteria. These vendors were invited to demonstrate their product against our use cases. Long story short, we selected ________________. We are in the midst of configuration workshops and integration.

That's right: no RFP.

And I haven't the slightest doubt that Amanda got much better internal alignment by going through this process and was able to make a much higher quality choice in the end. Note, too, that Amanda's needs were very complex; most organizations will not need to spend several weeks going through the process.

Now, you may still feel more comfortable sticking with an RFP process, but whether or not you go that route, take the time to think in detail about the user behavior and outcomes you are trying to achieve, write these down, and use them as a primary driver for your technology selection process.

And as a bonus, the use cases you develop during your selection process also serve as the basis for test cases that can be used to verify that a vendor ultimately delivers what you pay for. (More on that later.)
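If your team tracks use cases in a structured format (a spreadsheet or simple script), the same records can double as an acceptance checklist at delivery time. The sketch below is purely illustrative - the `UseCase` structure and `acceptance_checklist` function are hypothetical names, not part of any LMS product - but it shows the basic idea of carrying each documented use case forward as a pass/fail verification item.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    """One documented LMS use case: the actions a user takes and the outcome expected."""
    case_id: str
    actions: List[str]
    desired_outcome: str
    verified: bool = False  # flipped to True once the vendor demonstrates the behavior

def acceptance_checklist(cases: List[UseCase]) -> str:
    """Render each use case as a pass/fail line for vendor acceptance testing."""
    lines = []
    for c in cases:
        status = "PASS" if c.verified else "OPEN"
        lines.append(f"[{status}] Use case {c.case_id}: {c.desired_outcome}")
    return "\n".join(lines)

# Illustrative entries based on the sample use cases above
cases = [
    UseCase("A", ["Navigate to the personal menu"],
            "All enrolled courses are listed alphabetically with status icons"),
    UseCase("B", ["Follow use case A", "Click to view additional course details"],
            "The course view expands to show description, scores, and certificate link"),
]
cases[0].verified = True
print(acceptance_checklist(cases))
```

The point is not the tooling; it is that a use case written precisely enough to demo against is, almost by definition, precise enough to test against.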


A version of this post was originally published on the Tagoras Web site on July 20, 2010. It has been significantly updated before re-publishing here.
