Weekly recap: 2023-07-23

Posted by Q McCallum on 2023-07-23

What you see here is the past week’s worth of links and quips I shared on LinkedIn, Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/07/17: Let the machines do the dirty work

This is your periodic reminder that automation is ideally suited for jobs that humans don’t want to do.

I use the phrase “dull, repetitive, and predictable” to describe this kind of work. I’ve also heard some folks in the military say “dull, dirty, dangerous.”

Whatever your criteria, find the jobs that are a good fit for automation – through machinery, through software, through AI models – and you’ll run a more efficient business with a happier team.

“Chipotle is hiring a robot to do a task employees hate” (Insider)

2023/07/18: Speak up!

(Silhouette of a woman shouting into a megaphone. Photo by Patrick Fore on Unsplash)

This is a reminder for all of the data scientists, data engineers, AI specialists, and people in similar roles:

Be sure to talk about your work!

If your stakeholders and product owners don’t know what you’re working on, it’s easy for them to assume that you’re not working at all. They’ll rightfully question why they’re shelling out all that money for a data team.

How can you make sure you’re visible? Here are three steps:

1/ Keep track of the work you do.

2/ Make sure you understand how that work ties into business goals and revenue streams. (If you don’t know, ask!)

3/ Then, TELL people. (Provide periodic updates to internal customers and stakeholders. Speak up in project meetings and wrap-ups. Make sure your team gets a slide or two in that monthly town hall where the CEO runs through recent accomplishments.)

2023/07/19: What the machines cannot do

Meta is officially releasing Llama 2:

“Meta to release commercial AI model in effort to catch rivals” (FT)

While Meta’s technology is open source and currently free, two people familiar with the matter said the company had been exploring charging enterprise customers for the ability to fine-tune the model to their needs by using their own proprietary data.

My take:

It’s a sharp move to go the open-source route. I’ve mentioned before that a good portion of AI technology is already open source and otherwise within easy reach. “The code we use to build models,” then, is rarely a moat in this game, because most companies are using the same toolkits!
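To make that concrete: with today’s toolkits, downloading and running a pretrained open-source model takes only a few lines. Here’s a minimal sketch using Hugging Face’s transformers library (my choice of example, not anything from the article; the model name and prompt are placeholders):

    # A minimal sketch of how accessible open-source AI tooling is.
    # Model name and prompt are illustrative placeholders.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Automation is best suited for", max_new_tokens=25)
    print(result[0]["generated_text"])

If your competitors can do this too (and they can), the moat has to come from somewhere else.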

So where do companies still need help? (That is, the kind they can’t get by downloading an open-source toolkit or system?)

  • Building/training models – both in terms of consulting and infrastructure

  • Deploying models to production (model hosting) – this is akin to Google open-sourcing TensorFlow and then creating GCP Vertex AI

  • Surfacing reasons to use models (finding use cases) and identifying proprietary, game-changing data in your company – for this, you need to mix knowledge of your business model/product with a deep understanding of what AI can do. No outside tool can really do that for you. If you move quickly here, you might just outpace your competition.

2023/07/20: Careful with those budget cuts

“Silicon Valley start-ups explore sales as funding runs dry” (FT)

“Startups are shutting down left and right, and you need to grow or cut your way to profitability now [because] you’re not raising funds anytime soon,” said Adam Jackson, a serial technology entrepreneur and investor based in California.

I get it. As your runway gets shorter, it’s tempting to cut costs. So if you need to cut something that’s not bringing business value, sure, cut it. But be careful not to cut something that’s actually useful.

I’m thinking specifically of your company’s data science/AI department.

I have a longer write-up in the works, but the short version is that this is a good time to ask yourself:

“Is AI already bringing value to this company that I just don’t know about?” (Maybe the data scientists are too quiet…)

“Are we using AI to the fullest?” (Maybe we’re stuck on basic reports and dashboards, when we could be doing so much more…)

“If not, how else can we use AI to improve our products?” (It’s time to sort out our use cases…)

The answers might very well be “no,” “yes,” and “there’s nothing else.” But that’s unlikely. Do yourself a favor and double-check before you reduce your AI team’s headcount or close the department altogether. You might have a lot of unrecognized value in there.

2023/07/21: Double-checking the machine’s work

This article describes how SAP is using generative AI for its marketing efforts. It’s a useful read overall and there’s also an important, yet subtle, lesson.

“SAP’s marketing team is running multiple generative AI experiments. Its top marketer tells Insider what she’s learned so far.” (Insider)

The lesson? Notice how many times the author mentions people following up on the various models’ outputs.

The reality of generative AI models is that, like any other AI model, they can produce incorrect results. When a generative AI model does this, we call it a “hallucination,” but let’s be honest: it’s an error.

That takes some of the sheen off generative AI. It’s suitable for portions of certain tasks, sure. But it doesn’t completely eliminate human involvement in a workflow.
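In workflow terms, that means keeping an explicit human checkpoint between the model and anything customer-facing. A rough sketch in Python, with hypothetical function names throughout (nothing here comes from the SAP article):

    # Sketch of a human-in-the-loop gate for generative output.
    # generate_draft() stands in for whatever model call you actually use.
    def generate_draft(prompt: str) -> str:
        # Call your generative model here; hardcoded for illustration.
        return f"DRAFT marketing copy for: {prompt}"

    def publish(text: str) -> None:
        print("Published:", text)

    draft = generate_draft("fall product launch")
    print("--- Draft for review ---")
    print(draft)

    # The gate: a person must approve before anything ships.
    if input("Approve? [y/N] ").strip().lower() == "y":
        publish(draft)
    else:
        print("Rejected; routed back to a human writer.")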

Generative AI is a powerful tool. It can bring new efficiencies. But it’s no free lunch.