Friday, 10 January 2014

What Can We Learn From IBM Watson's Clinical Judgement?

On the day that IBM launched its new business unit, IBM Watson Group, I was inspired, for the second time, by the possibilities and opportunities for applying this technology to the tangible problems facing us today - making our planet a "Smarter Planet", if you will.

The first time was as I followed Watson's progress on Jeopardy!, when I was stunned that it was not only fast and accurate, but also gave a degree of confidence in its answers.  As someone working on a predictive analytics project for a learning organisation, I could see how useful this would be when presenting anticipated insight to a business decision-maker wanting to estimate the results of a training course.  Since its gameshow success, things went a little quiet as IBM built upon that success, developing Watson's commercial viability in a protective and secretive research environment in Austin, Texas.  Quiet, that is, until this morning - the second time I was inspired - when I watched this 112-minute video: 2014 IBM Watson Special Event.

Ethics

Ginni Rometty, President and CEO of IBM, said that when Watson won Jeopardy!, "I'm not sure many people really understood what was happening behind the scenes"; Mike Rhodin, leader of the new IBM Business Unit, spent 10 minutes outlining the approach Watson takes to learning: "It learns like our children learn...  It reads.  It asks questions.  If it encounters [conflict], it has to sort that out."

As various luminaries from the world of healthcare explained how they saw Watson's technology being applied to the industry, an interesting question sparked in me: if Watson is going to deal with conflict in the world of medicine - a world fraught with ethical dilemmas, evolving standards and emotional agendas - how will we know that Watson is making the correct judgements?  In other words, how can we be sure that Watson will learn correctly?


Learning

Mr Rhodin suggested a methodology in his earlier overview: "When our children come home from school, how do we know our children are learning?  We ask them questions and we check their answers.  If the answer's not right, we help them to discover the right answer."  Sounds good - especially to a parent with two young children approaching school age!  Watson builds algorithms on top of algorithms and, in this way, continues to learn.
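The "ask a question, check the answer, correct the mistake" loop Mr Rhodin describes can be sketched in a few lines.  This is purely an illustration of the idea - the function and data structure here are hypothetical and bear no relation to IBM Watson's actual internals:

```python
# A minimal sketch of the "check the answer, correct the learner" loop.
# The names and the dict-based "model" are invented for illustration only.

def train_by_correction(model, question, expected):
    """Ask the model a question; if the answer is wrong,
    supply the correct one - 'help them discover the right answer'."""
    answer = model.get(question)
    if answer != expected:
        model[question] = expected  # parental correction
    return model

model = {}
model = train_by_correction(model, "capital of France", "Paris")
print(model["capital of France"])  # the corrected answer is now learned
```

The interesting part, of course, is not the loop itself but who supplies `expected` - which is exactly the "authority" question raised below.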

However, in an industry with truly exploding data, an industry where Watson will know more than any human (i.e. be able to assimilate new research and evidence faster, and more efficiently, than any doctor or academic), who will be the "authority" to keep Watson in check?  Jay Katzen, President of Elsevier Clinical Solutions, told us "I'd have to read 174 articles, every single day, just to keep up!"

Despite what I tell my daughter, no parents are all-knowing (except, perhaps, my wife!) and most of us can probably think of a scenario where we have heard a parent giving a child advice with which we wouldn't agree.  Whether it's right or wrong makes no difference, of course, because often there is no right or wrong - just alternative approaches.  How will Watson make those kinds of judgements?  For example, will it be able to apply a weighting to its billions of datapoints to promote or restrict sentimental, financial or political considerations?  (And would it be correct to do so?)
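To make the weighting question concrete, here is a hedged sketch of how weights over different considerations might promote or restrict them when ranking candidate answers.  The dimensions, candidates and numbers are entirely invented for illustration; nothing here reflects how Watson actually scores evidence:

```python
# Illustrative only: weighting evidence dimensions when ranking candidates.
# A weight of 0.0 effectively restricts that consideration altogether.

def score(evidence, weights):
    """Weighted sum of a candidate's evidence scores."""
    return sum(weights.get(dim, 0.0) * value for dim, value in evidence.items())

candidates = {
    "treatment_a": {"clinical": 0.9, "financial": 0.2, "sentimental": 0.1},
    "treatment_b": {"clinical": 0.6, "financial": 0.9, "sentimental": 0.8},
}

# Promote clinical evidence; restrict financial and sentimental considerations.
weights = {"clinical": 1.0, "financial": 0.0, "sentimental": 0.0}
best = max(candidates, key=lambda c: score(candidates[c], weights))
print(best)  # treatment_a wins once non-clinical factors are zeroed out
```

Choosing those weights is a human value judgement, not a computation - which is rather the point.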

Machine Learning is not enough

Daniel Hillis put it best, half-way through the film, when he espoused a vision of machines and humans interacting to mutual benefit and advancement.  If cognitive science is about understanding how learning takes place, and cognitive computing about machines learning to learn, then what we should be aiming at is "cognitive collaboration".

Greg Satell (among others) beat me to coining this term, as evidenced by his Digital Tonto blog entry 'The New Era of Cognitive Collaboration', where he discussed - what else? - IBM Watson!  (It seems Watson really is breaking new boundaries, as all roads seem to lead back to it.)

What does this mean for learning, performance management and strategy?

If learning can be automated, is there still a place for Learning Professionals in the corporate world?  Absolutely!  Expertise, judgement and timing come together to create a critical mix of skills that make learning professionals the authority in this area.  In my organisation, we have seen a growth in learning datapoints of c.100% over 3 years, as we assimilate more informal learning and reconcile it with a greater appreciation of the value of learning in the 21st Century business.  Super-charged abilities to tackle big questions, like those presented by Big Data, mean that Watson will certainly not be under-utilised as data grows, whilst structure and formality subside.

Technology is getting very smart indeed, and may become better able to assist us in our strategic direction.  Data Scientists and Business Analysts will ensure that the reports generated are as accurate and insightful as possible (one form of authority, perhaps), meaning that the learning professional may finally be able to rid themselves, once and for all, of the transactional and administrative tasks of keeping the engine turning over and instead focus on the business of empowering the organisation with the correct skills, as required.  The crucial point here is that the learning organisation needs to communicate closely with the data community, to ensure the implementation of sound judgement and close the authority loop.

It seems, then, that the authority defining the direction of an organisation may remain the domain of the human for some considerable time to come.