Weekly recap: 2023-03-12

Posted by Q McCallum on 2023-03-12

What you see here is the past week’s worth of links and quips I shared on LinkedIn, Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/03/06: Due diligence, redux

Last week I mentioned several questions you should ask if you plan to acquire, invest in, or use the services of a company that claims to use AI.

I forgot to mention: if you are being recruited to join a company that claims to “do AI,” then you, too, need to ask these questions.

Why so? Remember: committing to a full-time role is a form of investment. VCs provide cash; you provide your time and energy. You deserve to know that you are making that investment on solid ground.

(Especially since, unlike a VC, an individual can’t really diversify across multiple full-time jobs at the same time. You’re putting a lot of eggs into that one company’s basket. Make them count.)

This is when some people will challenge me with: “What if I spot a company with problematic AI, and then join it anyway, so I can turn it around?” Be my guest. Just make sure your compensation reflects that turnaround. Do you really want to provide PE-level value at FTE-level returns?

2023/03/06: Digital growth

Great thoughts on #digital #growth strategies by Barrie Markowitz and her colleagues.

My favorite is item 4: focus on first-party data.

Not only is first-party data better from a privacy perspective, but you also get stronger assurance of data quality. You can have more faith in data you’ve collected yourself, and data that people have volunteered to you.

https://www.insightpartners.com/ideas/digital-growth-strategies-startups/

2023/03/07: Human/AI interaction in medicine

Applying #AI in a medical setting:

“Using A.I. to Detect Breast Cancer That Doctors Miss” (NYT)

Not only is this a very practical, actionable use of AI … but also:

Kheiron said the technology worked best alongside doctors, not in lieu of them. […]

“An A.I.-plus-doctor should replace doctor alone, but an A.I. should not replace the doctor,” Mr. Kecskemethy said.

This is the key. This is how you win with AI.

Instead of trying to outright replace skilled human practitioners, build an assistant for them. Give them extra eyes and ears.

(Think “lane drift detection” rather than “self-driving car.”)

People and machines are both wrong from time to time. They tend to be wrong in different ways, though. So the human-plus-machine combination will catch more problems than either one operating alone.
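A quick numeric sketch of that point, with made-up miss rates, assuming the human’s and the machine’s errors are independent (real-world errors are messier, but independence is exactly what “wrong in different ways” buys you):

```python
# Back-of-the-envelope: if a human and a machine miss problems
# independently, the pair misses far fewer than either alone.
# (Hypothetical rates, for illustration only.)

human_miss = 0.10   # assume: human misses 10% of cases
model_miss = 0.10   # assume: model misses 10% of cases

# With independent errors, a problem slips through only when
# BOTH the human and the model miss it.
combined_miss = human_miss * model_miss

print(f"human alone misses: {human_miss:.0%}")     # 10%
print(f"model alone misses: {model_miss:.0%}")     # 10%
print(f"human + model miss: {combined_miss:.1%}")  # 1.0%
```

The more correlated the two sets of errors become, the less the combination buys you.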

(I explored this idea a couple of years ago in a piece I call “Human/AI Interaction: Exoskeletons, Sidekicks, and Blinking Lights.”)

2023/03/08: Misuse of models: government services edition

New data science job-interview practice question! It’s a case study. Please identify what went wrong in the development and use of this AI model:

“Inside the Suspicion Machine” (Wired)

(Note: while “everything” is technically the correct answer, you must provide detailed line items to get full credit.)

In all seriousness: as a long-time data professional, I am appalled at the misuse of AI here.

Appalled, but not surprised.

To the companies that develop AI solutions, and to the organizations that purchase and use those solutions: do better. It’s long overdue.

2023/03/09: Voice assistants

A little something on voice assistants:

“Amazon’s big dreams for Alexa fall short” (FT)

There’s a lot to say here. So I’ll keep it to just a couple of points:

1/ I’ve noted before that context is key when it comes to business models and new technologies. Voice assistants started out with so much promise, and then they fizzled. So clearly this is not their time.

But, under what circumstances would it be their time?

And would providers of voice assistants need to start smaller? I do feel they tried to “boil the ocean,” as they say, in creating general-purpose devices. Maybe they’ll start with a narrower focus next time around? Because that would play into the next point:

2/ Satya Nadella is refreshingly blunt here:

At Microsoft, whose chief executive Satya Nadella declared in 2016 that “bots are the new apps”, it is now acknowledged that voice assistants, including its own Cortana, did not live up to the hype.

“They were all dumb as a rock,” Nadella told the Financial Times last month. “Whether it’s Cortana or Alexa or Google Assistant or Siri, all these just don’t work. We had a product that was supposed to be the new front-end to a lot of [information] that didn’t work.”

While he’s talking about voice assistants, it’s hard not to also see this as a thinly veiled comment on AI models in general (since AI powers these devices and is very much their “interface”).

It’s common knowledge that AI models aren’t always correct. And it’s much more difficult to tune a general-purpose model. What if Cortana/Alexa/Google Assistant/Siri had all started in one very specific domain, and then built out from there?

(We’ve seen that approach work before. Note that Amazon started off selling books. They were able to test their website, supply chain, and delivery channels on items that didn’t expire and didn’t require special handling for transit.)