Our Maturing Expectations of AI

Posted by Q McCallum on 2021-08-02

AI enters adulthood.

Whereas farm equipment brought about mechanization of agriculture, and factory robots brought us mechanization of assembly, AI models have given us mechanization of thought. AI provides lightning-fast decision-making, at scale. When you consider the dramatic decrease in costs for data storage and computation, it’s no surprise that AI is now everywhere we look.

As we continue to work with AI and deploy it to new situations, and we see the long-term effects of past efforts, we’ll learn more about where it works and where it doesn’t. If we are especially unlucky, we’ll learn the hard way. AI equivalents of the Knight Capital unraveling, LTCM’s rapid unwinding, the 2008 financial crisis, or the Three Mile Island incident are likely lurking as we speak.

This (possibly rude) awakening will shift our relationship with this technology. We’ll worry less about how AI has changed us, and we’ll be far more concerned with how we’ll change our use of AI.

From where I sit, that means:

1 - We’ll see less of AI (in name) because it will be absorbed into various business departments.

Companies have traditionally treated AI (and its predecessors, “ML,” “data science,” and “Big Data”) as special, separate entities within the business. They walled off their AI teams in a way that discouraged meaningful interaction with business stakeholders and limited uptake of domain knowledge.

We’re now seeing a slow move to embedding data scientists in product teams, and otherwise immersing them in the business model. Over time, this level of domain knowledge should lead to more domain-specific data scientists.

As I’ve said elsewhere: if a “quant” is someone who builds models on Wall Street, and if an “actuary” builds models in the insurance field, then “data scientist” is an umbrella term for a person who builds models in any other field. I expect that the data scientist name will disappear as practitioners adopt more industry-specific titles.

Expect people to start specializing in AI work for a given industry vertical. That may pave the way for industry-specific certifications or professional licensing as a way for people to demonstrate their domain knowledge.

2 - We’ll see the true costs of AI (and be more selective on where we deploy it).

Some days it feels like companies are throwing AI at every problem (and even some non-problems) just to see what sticks.

On the one hand, I half-jokingly refer to this as a public good: the firms that are Trying AI, Anywhere And Everywhere serve as a form of research lab to show us all where AI can actually provide value. On the other hand, this scattershot experimentation comes at a private cost: every AI effort carries a price tag, but only so many of them will yield fruit.

AI’s lumpy, unevenly distributed success stories can make it an expensive lottery ticket. Companies will improve their planning and execution when adopting AI-based solutions, starting with developing strategic road maps and evaluating the total cost of developing a custom model. They will therefore be more inclined to choose cheaper and/or less-risky non-AI approaches where appropriate.

3 - The AI we deploy will get more boundaries.

I often refer to ML/AI models as factory equipment that churns out decisions. Those decisions aren’t always correct, and incorrect decisions impact business outcomes and human lives. As we improve our knowledge of what AI really is, we will have an increased appreciation of the risk/reward tradeoffs of using it.

We’ll mostly express that in terms of reduced trust in AI-based systems. The models won’t have nearly as much free rein as they do today. We’ll build more “padding” around those models in the form of monitoring, automated disconnects when results stray out of bounds, and instant-off switches that let a human shut down a model gone awry.
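To make that “padding” concrete, here’s a minimal sketch of what such a guardrail might look like. All of the names here (GuardedModel, the bounds, the violation counter) are hypothetical, and the out-of-bounds check stands in for whatever monitoring a real deployment would need:

```python
# A hypothetical sketch of "padding" around a deployed model:
# monitoring, an automated disconnect, and a human kill switch.

class GuardedModel:
    def __init__(self, model, lower_bound, upper_bound, max_violations=3):
        self.model = model                  # any object with a predict() method
        self.lower_bound = lower_bound      # acceptable output range
        self.upper_bound = upper_bound
        self.max_violations = max_violations
        self.violations = 0
        self.enabled = True                 # the instant-off switch

    def kill(self):
        """A human operator takes the model offline immediately."""
        self.enabled = False

    def predict(self, features):
        if not self.enabled:
            raise RuntimeError("Model is offline; route to human review.")
        result = self.model.predict(features)
        # Monitoring: record every decision for later audit.
        print(f"model decision: {result!r} for input {features!r}")
        # Automated disconnect: repeated out-of-bounds results trip the breaker.
        if not (self.lower_bound <= result <= self.upper_bound):
            self.violations += 1
            if self.violations >= self.max_violations:
                self.enabled = False
            raise ValueError(f"out-of-bounds prediction: {result}")
        return result
```

Whatever form it takes, the point is that a model’s output passes through checks the business controls, rather than flowing straight into production decisions.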

4 - Our focus on AI will move from the technology to policy concerns. As such, the humans behind the AI will become more accountable.

Having spent the past several years building new AI tools and improving our techniques, we’re long overdue in considering their social ramifications.

We already see the cracks in the facade. People are becoming more aware of how so-called “AI-driven” products are (mis)used for everything from predictive policing to algorithmically assigning students’ exam grades. We’re starting to see an ML/AI model as an agent and extension of the company that deploys it.

Our desire to hold these companies responsible for their products will lead to more rules and regulations around how AI can be used, and who is liable when it misbehaves. (We already see kernels of this today. The decades-old Fair Credit Reporting Act (FCRA) requires lenders to disclose why a person was denied credit.) The days of “because the model said so” are numbered.

Expect some back-and-forth, as incumbent companies lobby to stave off regulation while more AI practitioners take up careers in law, policy, and insurance. Government calls to “break up big tech” through antitrust laws are just the beginning.

Time will tell

We’re more than a decade into AI – starting from its humble beginnings as “predictive analytics” and “Big Data” in the 2008-2009 time frame – and its polish is starting to wear off.

With our increased understanding of this technology, we will expect more maturity in how it is developed and used. These views will shape how we employ it going forward.