Tuesday, February 10, 2009

Why do we sacrifice?


I am still at a loss to understand why so many learning and development functions sacrifice the opportunity to ensure the LMS becomes an integral (i.e. mission-critical) system within their organisation.



I hear so much talk about integrating with SharePoint, social media, content management systems and the like that I think learning functions have lost sight of how to demonstrate value to their organisation.

Sure, you have the typical ROI measures that accompany successful system usage, but how often does management hear about the value of a blended program being completed and applied, or of individual development plans being seen through to the end? How often does the evidence on academic transcripts reflect the amount of development time people have invested to fulfil their development plan? How often has the direct reporting manager used the system to engage and identify with an individual's goals and aspirations?
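
By "typical ROI measures" I simply mean the standard calculation along these lines - the generic training ROI formula, nothing specific to any particular LMS or methodology:

\[
\text{ROI (\%)} = \frac{\text{programme benefits} - \text{programme costs}}{\text{programme costs}} \times 100
\]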

This knowledge capture is vital if the learning function is going to demonstrate real value to the leadership of an organisation. It allows the function to differentiate its services based on a (favourably) weighted argument that demonstrates the base benefits of the LMS and its contribution to organisational systems and, in turn, to the learning function.

There will be time for integration and for using the benefits of emerging technologies, but for now, and especially now, let's not sacrifice the opportunity to demonstrate real value...

5 comments:

Anonymous said...

Hi Wilko-
I think demonstrating value should be the default, and if it's not, well, that's scary. I don't think this is a linear process, i.e. "I've shown the value of the LMS, now I'll go integrate social media." That seems backwards... shouldn't it be more about supporting learning with the right tool for the job? The development that is tracked throughout the year should also be reflected throughout the year via two-way performance reviews.

Anonymous said...

Hi Janet and Wilko,

What is it that we end up capturing and tracking - amount of time spent, attendance, scores, satisfaction ratings? What is the basis of this tracking - activity logs (which have never really revealed much qualitatively), or multiple-choice assessments and Likert-scale surveys (which no one really spends the time and effort validating, because it would take too long and cost too much, and which mostly test recall anyway)? It seems to me that the entire exercise is futile if the atomic units of tracking information are not themselves valid. In this scenario, what real value does RoI hold? And what could LMSs and related systems possibly contribute?

Viplav

Anonymous said...

Hi Viplav-
The value is in the eye of the organization. Are time spent, attendance, scores, and satisfaction ratings your only experience with LMS tracking features? Do custom e-learning courses always capture qualitative data? Should there be no multiple choice questions, assessments, pre-tests, or Likert-scale surveys EVER? It's bold to assume that NO ONE in the learning industry spends time or effort validating data from an LMS. Information tracked at a highly regulated organization is valuable for keeping the company free of fines and out of litigation. Tracking certification requirements means technicians have updated skills. Tracking development vs. a learning plan prepares people to advance. So I guess that's my very short list of what LMSs can contribute.

Anonymous said...

Hi Janet,

Apologies if my comment sounded more like a rant against LMSs. To clarify, my argument is:

a. Organizations use LMS metrics to measure employees' learning and development and to derive RoI from training initiatives. Obviously, tracking and automated, flexible reporting of any sort is valuable to any organization in any function - provided it is accurate to start with. And obviously, a lot of time and effort in organizations is spent validating data from an LMS, which in turn provides a source of constant improvement, just as with other systems for other functions. These systems provide base data upon which further analyses can be conducted.
b. At the very atomic level, tracking data is captured for an individual course. This tracking data is then used as the input for other data capture around compliance, development plans and certifications. The fundamental question asked is "did employees learn?" or, in predictive terms, "can employees perform?" - whether the aim is to demonstrate compliance with legal requirements, to track whether an individual is progressing as per the development plan, or to certify them for skills. That is, at the atomic level, the data captured for a course is directly tied to asking "did employees learn?" or "can they perform?".
c. This atomic tracking data in an LMS is time spent, attendance, scores, and satisfaction ratings (whether cursory or detailed, plus additional parameters such as those you suggested). Performance management systems could include mechanisms to track or correlate from other perspectives as part of appraisal processes, perhaps thereby adding to the accuracy of the analytics. I am not bold enough to assume a position of superiority and state that "NO ONE in the learning industry spends time or effort validating data from an LMS".
d. This data is tracked by means of assessment instruments, such as summative assessments, that use items of multiple types - multiple choice, Likert scale, etc. These instruments and their utility must be separated from their typical use and effectiveness. So it would be wrong to infer "no multiple choice questions, assessments, pre-tests, or Likert-scale surveys EVER". Rather, their typical use and effectiveness in determining whether "an employee has learnt" or "an employee can perform" is what matters, and it is a key aspect of determining RoI.
e. These instruments are very powerful if they (and their constituent items) meet the basic requirements of educational testing - reliability (whether the assessment consistently achieves the same result) and validity (whether it really measures what it is intended to measure). A common formula for the reliability side is sketched after this list.
f. Establishing this for every course requires special expertise and time (not just mapping to an established taxonomy). The LMS has nothing to do with this process. This is evidenced in high-stakes assessments like the SAT or GRE, which have a long and statistically backed development process.
g. For routine courses, perhaps not many organizations or their development vendors would either know how or spend the time and effort to create statistically valid tests. One would expect, though, that at least certification testing would follow a much more rigorous test-creation process because of the stakes involved. However, "Tracking certification requirements means technicians have updated skills" is correct in the sense that you can document who passed the test; to say that passing the test necessarily indicates updated skills implies that the measurement has little or no error.
h. Also, some instruments may be better for testing certain types of knowledge or ability than others. For example, multiple-choice questions don't necessarily lend themselves to much more than recall of facts and routine procedures. There is a choice involved here that, on a larger scale, impacts the metrics the LMS collects.
i. Let us look at time spent. Typically, the LMS records the session start time and the end/suspend time and adds the difference to the time already accumulated, giving an overall duration the learner has spent on the course (a rough sketch of this calculation follows this list). What can we derive from this measure? Some learners may learn faster, some slower. Some may be distracted by a phone call; others may simply not have enough time to go through it all in one attempt and therefore take longer to complete. What can we glean from this? Similarly with attendance: what can we say for that, especially in larger or virtual classes where it is easy not to be noticed, even though you could still be "there"? I am interpreting both of these in the sense of "did employees learn?" or "can they perform?"
j. Again, "Tracking development vs. a learning plan prepares people to advance" is accepted traditionally as perhaps the best way to proceed. However, there are newer perspectives - such as those brought about by connective, networked learning, communities of practice and informal learning - that merit some thought and attention, at least in terms of the impact they could have on how we learn and on how we have traditionally managed these challenges.
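
To make the reliability point in (e) a little more concrete, one common internal-consistency measure (my own illustration - not something an LMS reports out of the box) is Cronbach's alpha, where $k$ is the number of items, $\sigma^{2}_{Y_i}$ the variance of scores on item $i$, and $\sigma^{2}_{X}$ the variance of total test scores:

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
\]

Getting a value close to 1, and checking it course by course, is exactly the kind of effort point (f) refers to.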
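
And since point (i) describes a calculation, here is a rough sketch of the arithmetic an LMS typically performs to arrive at "time spent". The record format and values are purely illustrative, not taken from any real LMS:

from datetime import datetime

# Hypothetical session records for one learner on one course:
# (session start, session end/suspend) timestamps.
FMT = "%Y-%m-%d %H:%M"
sessions = [
    ("2009-02-10 09:00", "2009-02-10 09:25"),
    ("2009-02-11 14:10", "2009-02-11 14:40"),
]

# Sum the elapsed time of each session - this is essentially all the LMS does.
total_minutes = sum(
    (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60
    for start, end in sessions
)

print("Total time on course: %d minutes" % total_minutes)
# Prints 55 minutes here - a figure that says nothing about interruptions,
# reading speed or whether any learning actually happened.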

Thanks for your response! And of course I would love to learn from you about other perspectives that could inform and correct my thinking.

Viplav

Anonymous said...

Hi guys,

Sorry for the late reply but I moved organisations on the 9th and have been inundated since then.

Viplav, I do not disagree with any of your assertions, but I have to say that my increased exposure to management, particularly in these hard times, tells me that some of the more atomic-level data makes sense to management, and it is this language that needs to be spoken in order to grab share of mind and heart.

Once this has been established, then we can argue the pros and cons of the depth of evaluation, criterion-referenced versus pure behavioural assessment, response data versus objective data, and so on.

Janet, I agree with you, but I lament the lack of depth here in Australia in the way the L&D community structures its justification and presentation and how it argues business case and value. At best it is raw and unstructured, and it leads to a lack of confidence.

Thanks for your commentary guys!