We live in a world of big data. As the cliché has it, the Internet changes everything, and it has certainly transformed our access to reams of data and valuable analysis of it.
I spent the recent US Presidential election campaign following the blogging exploits of Nate Silver, erstwhile poker player and baseball analyst. Silver made sense of the extensive polling and voter behaviour data to correctly predict not just the overall result, but the actual outcome in every single state in the Union. His successful predictions were in marked contrast to those who "felt" Mitt Romney would win, or had a "hunch" he would beat President Obama.
In a feature in yesterday's Observer, Silver said "Numbers aren't perfect, but for me, it's numbers with all their imperfections versus bullshit. You had people saying 'you can't quantify people's feelings through numbers!' But what's the alternative?"
This resonates for me with the evaluation debate in learning and development. We have people saying you can't measure 'soft' skills and that you can't adequately measure learning overall. My old friend Martyn Sloman is far from alone when he opines "The first thing to be said is that much learning cannot be measured, and this may be the most valuable kind. The second is that even if it could be measured, such activities may not be worthwhile." (Training in the Age of the Learner)
I would reverse Sloman's formulation. Quantification of some learning may not be worthwhile, but with so much data and so much analytical power available, we should be trying to measure as much as we can. Evidence based on data is far more meaningful than any personal feelings about the value of learning. We need a much clearer idea of what learning accomplishes, and much better evidence to inform essential ongoing improvements in what learning and development contributes. However distasteful some find the numerical analysis of "people's feelings", it is a challenge we must rise to.