Tuesday 31 January 2012

Asimov's Laws of Robotics Prove Prophetic when Applied to Learning in the 21st Century

As 2012 opens for business, and we continue to search for lessons to learn from 2011, it's time for a fresh perspective. The new year brings with it new pressures and paradoxes: it is important to identify initiatives to prioritise, and targets against which we should test and measure our performance; but we must be mindful that when these pressures are applied together, we may end up running around in circles, fighting against ourselves and unclear about which priority supersedes another.

This is something which I think Isaac Asimov would have found amusing: 70 years ago, he was already forecasting the confusion of latter-day logic, as humans sought to improve their lives with improved technology and automated collaboration.  Asimov is, perhaps, best known for penning the short-story collection "I, Robot", which later became a cinematic success starring Will Smith, as he battled a futuristic reliance upon a potentially fallible robot-assisted human culture.

Recently, I came across an analogy for Business Process Management, as I followed a link to the Wikipedia entry for another of Isaac Asimov's science fiction stories: "Runaround" (en.m.wikipedia.org/wiki/Runaround).  The article explains how the robotic support for Asimov's hero got confused as his primary laws conflicted with one another: laws 2 and 3 came into conflict, and the deadlock was eventually broken by invoking the overriding importance of law 1...
  1. "a robot may not...allow a human being to come to harm"
  2. "a robot must obey orders"
  3. "a robot must protect its own existence"

I was struck by the similarity between this neat little tale and many of the late-night, transatlantic conference calls I've been attending over the past few weeks...

Last year saw many organisations begin to turn the corner, as they re-discovered the importance of funding their respective learning programmes.  My organisation, like others with which I am connected, is optimistic about significant increases to learning budgets in 2012, as a result of implementing new methods of demonstrating the tangible and positive impact which learning can have on other business performance measures.  Of course, as more funds are granted, we are asked to make them go further.  This presents the CLO with a dilemma: how do I deliver extra, thus justifying the budget, without forgetting existing commitments?  How do we accept extra funding, yet withhold it from those who had to cancel events at late notice?  How do we reach short-term business goals, whilst trying to invest in a long-term infrastructure?

The answer is to rely upon the very information which has brought us this extra funding: to ensure that we continue to demonstrate a strong ROI on any investment, and to find a solid business case for those resources, technologies and content which will help to deliver the business outcome.  This is the guiding foundation for Learning Intelligence, and even in times like these - when it's easy to assume that he who shouts loudest will get the most - sound judgement and rational decision-making are at their most important.

As a Humanist, I wonder whether Asimov would have any thoughts on our decision-making processes: as we continually, and alternately, invoke the importance of logic or emotion in our prioritisation of business imperatives - lurching from side to side like Speedy the robot in 'Runaround' - Learning Intelligence is the overriding law we need to pull us out of this continual loop.