The AI field is quick to tout its benefits and victories. Where AI differs from other fields, though, is that you very rarely hear practitioners discuss its inherent business risks.
I get why we’re a little sure of ourselves: AI has seen marked success and widespread adoption during its short tenure.[1] It has yet to experience the kind of painful, widespread shock that has convinced other fields to take risk planning seriously.[2] Both of these reasons are compounded by Western business culture’s obsession with the positive: people get scolded for bringing up potential problems (or other deviations from leadership’s plans), so they learn to keep quiet. Which makes it easier to pretend that there are no problems.
Those are all understandable reasons not to want to talk about risks… but they’re still terrible reasons to avoid the subject. Consider:
Eventually, the business of AI will hit a sea change. The risks are already there. Ignoring them won’t make them go away, any more than feigning surprise will make the inevitable problems easier to handle.
Given that, I ask: When will we, as a field, openly talk about risk?
My answer is: Hopefully, now. And I’ll start.
Over the coming posts, I’ll explore the idea of risk related to AI in business. I’ll go into more detail on why recent events will force us to think more about risk, show you how to spot risks in your company’s data efforts, and walk through some risks the AI field has collectively ignored.
My hope is that these posts will inspire discussion among executives, practitioners, and anyone else involved in the field. We want to get ahead of the risks before they blossom into problems we can’t handle.
[1] It may feel like AI has been around for a long time, but the early monikers of “Big Data” and “predictive analytics” only go back about a decade. ↩
[2] A lot of people expected that CCPA and GDPR would be the kind of shock that would get AI to mature. Thus far, these new regulations have caused a lot of discussion, but not a lot of meaningful change. ↩