Weekly recap: 2023-06-11

Posted by Q McCallum on 2023-06-11

What you see here is the past week’s worth of links and quips I’ve shared on LinkedIn, Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/06/05: Labels for AI-generated content

A proposed bill would require labels on AI-generated content:

“Scoop: House Democrat’s bill would mandate AI disclosure” (Axios)

Driving the news: The bill, a copy of which was first obtained by Axios, would require output created by generative AI, such as ChatGPT, to include: “Disclaimer: this output has been generated by artificial intelligence.”

On the one hand: while hardly an airtight solution, this might be a good start.

On the other hand: I question just how much it would help. People see what they want to see, and that sometimes means seeing right past posted rules or disclaimers.

Remember in March, when someone generated the fake “Trump arrest” image? That person clearly labeled it as such when he shared it on Twitter. That didn’t stop it from going viral and being taken as truth.

Besides, let’s not forget the power of AI marketing. If this bill passes, we’ll likely see companies using the disclaimer for bragging rights. “See how good our AI is? Contact us for details.”
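
For what it’s worth, the mechanical side of compliance would be trivial. Here’s a minimal sketch of what the mandated label might look like in practice; the wrapper function is my own invention, not anything specified in the bill:

```python
# Hypothetical sketch: attaching the disclaimer the bill would mandate.
# The wrapper is illustrative; the bill specifies only the disclaimer text.

DISCLAIMER = "Disclaimer: this output has been generated by artificial intelligence."

def label_ai_output(generated_text: str) -> str:
    """Append the mandated disclaimer to generative-AI output."""
    return f"{generated_text}\n\n{DISCLAIMER}"

print(label_ai_output("Here are three taglines for your coffee shop ..."))
```

The hard part, as noted above, isn’t adding the label. It’s getting anyone to heed it.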

2023/06/05: The idea of “AI safety” is really about human decisions

People will claim that AI is the problem here. There’s certainly a problem, but it has nothing to do with the technology.

“Eating-disorder group’s AI chatbot gave weight loss tips, activist says” (WaPo)

The National Eating Disorders Association (NEDA), which recently shut down a human-staffed helpline, is suspending its use of a chatbot that activists say shared weight loss advice with people who used the service.

Since the AI clearly did the wrong thing, why do I say that it isn’t the problem? That’s because, from what I can see, the AI was not suited for the task at hand. And it was deployed with insufficient oversight.

That responsibility falls to the company that chose to use an AI-based solution. Not the AI itself.

Do you want to avoid this kind of trouble in your company’s AI efforts? A key step in AI risk management is to decide which tasks are appropriate for a model to take on. Here’s my rough guide (with a quick sketch in code after the list):

  • low-stakes? Great fit.

  • high-stakes? It’ll need supervision.

  • extremely sensitive / mission-critical? Hmmm… maybe sit this one out.
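
If you wanted to encode that triage as a first-pass gate in your deployment process, it might look something like this minimal sketch. The stakes categories and function names are mine, purely illustrative, not a formal framework:

```python
# Hypothetical sketch of the triage above as a deployment gate.
# The categories and decisions are illustrative, not a formal framework.

from enum import Enum

class Stakes(Enum):
    LOW = "low"                            # e.g., drafting internal copy
    HIGH = "high"                          # e.g., customer-facing recommendations
    MISSION_CRITICAL = "mission-critical"  # e.g., health or safety guidance

def deployment_decision(stakes: Stakes) -> str:
    """First-pass answer to: should a model take on this task?"""
    if stakes is Stakes.LOW:
        return "Great fit: let the model run."
    if stakes is Stakes.HIGH:
        return "Proceed, but only with human supervision and monitoring."
    return "Maybe sit this one out: keep humans in the loop end to end."

print(deployment_decision(Stakes.HIGH))
```

The NEDA chatbot story above is what happens when a mission-critical task gets treated like a low-stakes one.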

2023/06/06: Series of blog posts on data hiring

I’m running a short series of blog posts on hiring in data science / ML / AI. These are loosely based on questions I’ve received from various stakeholders, conversations with peers, and situations I’ve observed.

I’ve already covered:

I have a few more lined up before I close out (a couple of you already know what these are, so, shhh) but figured I’d ask here:

What other topics would you like me to cover?

Maybe you have a burning question about building a data team. Or you’d like something you can forward to your executive team with a “hint-hint please read” up top. Whatever.

Feel free to leave a note in the comments, or DM me here, or contact me through my website.

Thanks!

2023/06/06: Data-driven decisions

This article has a lot of interesting details. One point really stands out for me:

You’ll notice that the CEO not only says that he is “data-driven,” but also provides specific examples of what he means.

“This 33-year-old Wall Street wunderkind is the CEO of P.F. Chang’s, and he’s giving the chain a complete makeover: ‘We’ve remodeled about 80% of the fleet’” (Fortune)

I’m very data-driven, so I wanted to build a heat map of sorts to see what’s happening within restaurants, so it’s not all anecdotes. The heat map for each restaurant measures culinary performance, the time to get an order to the table, audit scores, hospitality performance, and wait times. Customer scores and turnover rate are in there, too.

(Sharp-eyed readers will note that he comes from a finance background, so this is to be expected. But still …)

For all the data practitioners out there: how many times has an executive/stakeholder proclaimed that the company would be data-driven and then … just left it there? Or maybe assigned you the task of figuring out what “data-driven” would mean?
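
If that last question hits home, the P.F. Chang’s quote is actually a decent template: “data-driven” starts as a concrete list of metrics per location. Here’s a toy sketch along those lines; the metrics come from the quote above, while the structure and every number are made up for illustration:

```python
# Hypothetical per-restaurant scorecard built from the metrics the CEO lists.
# All values and thresholds are made up for illustration.

scorecard = {
    "restaurant_042": {
        "culinary_performance": 4.3,     # internal rating, 1-5
        "order_to_table_minutes": 14.2,
        "audit_score": 92,               # out of 100
        "hospitality_performance": 4.1,
        "wait_time_minutes": 18.5,
        "customer_score": 4.4,
        "staff_turnover_rate": 0.31,     # annualized
    },
}

# A "heat map" starts as nothing fancier than flagging outliers.
for name, metrics in scorecard.items():
    flagged = metrics["order_to_table_minutes"] > 15 or metrics["audit_score"] < 90
    print(f"{name}: {'needs attention' if flagged else 'on track'}")
```

Nothing exotic, but it turns “we’re data-driven” into something you can actually inspect.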

2023/06/07: Evaluating investments

(Photo of a small plant growing out of a glass full of coins. Photo by micheile henderson on Unsplash)

How do you evaluate a financial investment? Easy: see how much money you put in, and compare that to how much money you get back. If things go well, the latter number is the larger one.

Now … how do you evaluate your company’s use of AI?

Same thing: see how much money you put in, and compare that to how much money you get back.

AI can be fun and exciting, sure. In a business scenario, it is an investment. That means it should improve your business, by some mix of:

1/ making you more money (growing your revenue from existing opportunities)

2/ saving you money (you’re spending less as a result of using AI)

3/ finding new money (you uncover new revenue streams, which leads right back to item 1)

4/ protecting money (you’re reducing risk, which indirectly impacts item 2 because you’re not shelling out cash to clean up a preventable mess in the future)

If your company’s use of AI doesn’t address one of those four points … if it doesn’t somehow lead to better/faster/smoother operations … then why are you using it?
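
To put some (entirely made-up) numbers behind that, here’s a back-of-the-envelope sketch of the same comparison; every figure is hypothetical:

```python
# Back-of-the-envelope AI ROI check. Every figure here is hypothetical.

ai_spend = 250_000  # what you put in: build + run costs for the year

returns = {
    "more_money":      120_000,  # 1/ more revenue from existing opportunities
    "saved_money":      90_000,  # 2/ spending less as a result of using AI
    "new_money":        60_000,  # 3/ newly uncovered revenue streams
    "protected_money":  40_000,  # 4/ losses avoided through reduced risk
}

total_return = sum(returns.values())
print(f"Put in: ${ai_spend:,}   Got back: ${total_return:,}")
print("Worth it." if total_return > ai_spend else "So why are you using it?")
```

If you can’t fill in even rough values for those four buckets, that’s your answer to the question above.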

2023/06/07: New shovels on old ground

Hmmm. It’s almost like “throw a bunch of AI at an age-old, highly regulated industry” isn’t necessarily a recipe for success. Who knew?

“Upstart insurtechs had been hoping to shake up insurance, one of the stodgiest of industries. So far, they haven’t succeeded.” (WSJ)

When insurtech was hot, startups were pitching all types of new data, including voice analysis or weather predictions five years into the future, as ways of transforming risk analysis, according to Michel Leonard, chief economist and data scientist at the Insurance Information Institute. But those lofty ideas were dashed on the realities of regulation, he said.

2023/06/08: ChatGPT use cases

On the subject of ChatGPT use cases:

I think that ChatGPT and its cousins will ultimately raise public awareness of LLMs’ potential. The specific use cases are where this technology will really shine.

This article excerpt dovetails well with my take:

“America Forgot About IBM Watson. Is ChatGPT Next?” (The Atlantic)

Key point:

Watson was a demo model capable of drumming up enormous popular interest, but its potential sputtered as soon as the C-suite attempted to turn on the money spigot. The same thing seems to be true of the new crop of AI tools. High schoolers can generate A Separate Peace essays in the voice of Mitch Hedberg, sure, but that’s not where the money is.

OK, and where will the long-term value come from? From domain-specific, task-specific implementations:

Instead, ChatGPT is quickly being sanded down into a million product-market fits. The banal consumer and enterprise software that results—features to help you find photos of your dog or sell you a slightly better kibble—could become as invisible to us as all the other data we passively consume.