I have been reviewing some elearning materials for a friend and his not-for-profit, and I was astounded at how little effort had been put into the learning objectives and into assessing against those objectives. Needless to say, the product did not impress me, but it did take me back to some of the rigor I used to employ in my instructional design and authoring days.
Back in those days, as part of our evaluation period, we would revisit the elearning we had built. We purposefully built data collection into the product at both the response level and the objective level to support our assessment. This let us see which assessment questions and embedded questions were working well, and which questions might need to be revisited based on the weight of the data. For example, a strong correlation consistently appeared between poorly constructed assessment items and learners getting those questions wrong. Not earth-shattering, but it did let us cut through the crap consistently.
My rumination today asks whether we still apply this same rigor as elearning specialists, or whether we rely too much on the automation of authoring systems and templates. Do we assume that metadata and LMSs have this covered? Have we become lazy from a design sense?