Weekly recap: 2023-04-30

Posted by Q McCallum on 2023-04-30

What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/04/24: Code won’t fix everything

We’ve seen this story play out before, haven’t we?

“How Life Insurance Agents Beat Back a Tech Onslaught” (WSJ)

This article’s subtitle is: “A decade ago tech startups thought they could eliminate life-insurance agents. The agents won the battle, and now the startups are courting them.”

But it could just as easily have been: “Tech firms learn, once again, that code doesn’t solve every problem.”

Code solves a lot of problems, sure! My rule of thumb is to ask whether a problem is all of “dull, repetitive, and predictable.” If so, then tech is probably a good fit. If not – if the situation calls for nuance, or requires a human touch – tech will likely fall short.

There’s a wider lesson here, too. It’s that tech will augment more roles than it replaces. Especially when it’s the “AI” flavor of tech. There’s so much money to be made building tools to help professionals do their jobs, rather than trying to replace them outright…

2023/04/25: Executive data literacy as AI risk management

(Photo by Clay Banks on Unsplash)

Executive data literacy is a key element of a company’s AI risk management practice. Probably the most important element.

If the leadership team, product owners, and stakeholders all have a realistic picture of what AI really is and what it can do, then your company is better prepared to:

  • identify opportunities to use AI to the company’s benefit (and avoid costly, aimless wandering)
  • account for the harsh reality that some AI projects will not pan out (because these are not deterministic efforts)
  • evaluate candidates for key data-related roles (especially those critical first hires)
  • see when the data team is actually aligned to the company mission (and if not, know how to course-correct)
  • go toe-to-toe with pushy vendors of AI solutions (and, even, some pushy colleagues)
  • determine whether the company is even ready for AI right now (because, if not, you get to save money by not spending on AI you don’t need)

Data literacy lights the way.

2023/04/26: New blog post: “Three questions to improve your data hiring”

(Photo by Tim Mossholder on Unsplash)

Are you having trouble hiring data scientists, machine learning engineers, and data engineers?

Consider what they’ll work on, what skills they’ll need, and whether you’ve created barriers.

My latest blog post goes into detail: “Three questions to improve your data hiring”

2023/04/27: You actually have to heed the warnings

This excerpt highlights important risk management lessons from the SVB failure:

  1. Warnings can come from any source, not just your risk department.
  2. Warnings are only worthwhile if you actually heed them.

More than a year before the bank failed, outside watchdogs and some of the bank’s own advisers had identified the dangers lurking in the bank’s balance sheet. Yet none of them — not the rating agencies, nor the examiners from the US Federal Reserve, nor the outside consultants that SVB hired from BlackRock — was able to coax the bank’s management on to a safer path.

From: “Silicon Valley Bank: the multiple warnings that were missed” (FT)

2023/04/28: Yet another new chatbot

The folks at HuggingFace have released their own chatbot, “HuggingChat,” an open-source alternative to ChatGPT:

“Hugging Face : quand trois Français lancent leur alternative à ChatGPT” (Les Echos) — roughly, “Hugging Face: when three French founders launch their alternative to ChatGPT”

2023/04/29: The impact of generative AI in the workplace

Here are two articles on the impact of generative AI in the workplace:

“Tech giants aren’t just cutting thousands of jobs — they’re making them extinct” (Insider)

“How will AI affect work? Americans think it will, just not theirs.” (Vox)

The common theme is the surprise – outright shock, even – that generative AI (LLMs like ChatGPT) would be able to take someone’s job.

On the one hand: I get it. Everything we’ve heard about LLMs boils down to: “It’s OK, but you still need someone to tweak the outputs. It’s not perfect. So it’s not so much replacing jobs as making them easier.”

On the other hand: I think the shock comes from the (mistaken) belief that white-collar office jobs were always safe from automation.

I’ve noted before that AI is a form of automation, and that automation eats work. While I doubt that AI-based automation is ready to take on a person’s entire role, I do think that it can certainly tackle certain tasks. And the list of tasks is growing.

There’s another angle to all of this, as well: unlike the introduction of industrial/agricultural automation, this time around the workers have access to the tools. Hence the recent rash of “ChatGPT does most of my job, here’s how” articles.

It’ll be interesting to see how this plays out over the long run.