Weekly recap: 2024-01-28

Posted by Q McCallum on 2024-01-28

What you see here is the past week’s worth of links and quips I shared on LinkedIn, Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2024/01/22: Alternatives to an AI chatbot

I started to write a post here about another AI chatbot gone astray:

“DPD AI chatbot swears, calls itself ‘useless’ and criticises delivery firm” (The Guardian)

But as that exceeded LinkedIn’s post character limit, I’ve put my thoughts on my blog instead:

Three alternatives to developing a public-facing AI chatbot

Long story short: AI chatbots are fun, AI chatbots are exciting … and sometimes your company needs a little less “fun” and “exciting.”

(Rogue chatbots are hardly the only AI risk lurking in your company. Looking for a review of your strategy, products, practices, and team? Reach out for an AI assessment: https://riskmanagementforai.com/services/)

2024/01/23: Don’t let that chatbot veer off the road

By limiting an LLM’s source material, you can reduce the chances that it goes awry.

In fact, the more you do this, the closer you get to the pure-search system I mentioned yesterday:

“BMW showed off hallucination-free AI at CES 2024” (Ars Technica)

For its implementation, BMW had a compelling solution to the problem: Take the power of a large language model, like Amazon’s Alexa LLM, but only allow it to cite information from internal BMW documentation about the car.

[…]

Should you manage to stump this new voice assistant, instead of just saying, “I don’t understand,” it’ll ask questions to get to the root of your request.
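To make that “restrict the source material” idea concrete, here’s a minimal Python sketch. Everything in it (the document store, the retrieve() helper, the ask_llm() stand-in) is hypothetical; a real build would swap in an actual retrieval system and your LLM provider’s API.

    # Sketch: ground the model by only letting it cite approved documents.
    # APPROVED_DOCS, retrieve(), and ask_llm() are illustrative stand-ins.

    APPROVED_DOCS = {
        "owners-manual": "To adjust the headlights, open Settings > Vehicle > Lighting.",
        "warranty-faq": "The battery warranty covers eight years or 100,000 miles.",
    }

    def ask_llm(prompt: str) -> str:
        """Stand-in for a call to whatever LLM your stack actually uses."""
        return "[model response, grounded in the supplied passages]"

    def retrieve(question: str) -> list[str]:
        """Naive keyword match; a real system would use a proper retriever."""
        terms = set(question.lower().split())
        return [text for text in APPROVED_DOCS.values()
                if terms & set(text.lower().split())]

    def answer(question: str) -> str:
        passages = retrieve(question)
        if not passages:
            # Don't guess. Ask a clarifying question instead (the BMW approach).
            return "I couldn't find that in the documentation. What are you trying to do?"
        prompt = ("Answer ONLY from these passages. If they don't cover it, say so.\n\n"
                  + "\n\n".join(passages) + "\n\nQuestion: " + question)
        return ask_llm(prompt)

The point isn’t the keyword matching; it’s the shape of answer(): the model never sees anything outside the vetted documents, and a miss turns into a clarifying question rather than a confident guess.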

2024/01/24: Gray Rhino Newsletter

For the #risk enthusiasts out there:

Michele Wucker, author of The Gray Rhino and You Are What You Risk, has just launched The Gray Rhino Wrangler newsletter. (Check out her introductory post, below.)

Last week’s issue covered AI risks, a topic that is top of mind for me. Can’t wait to see what’s next.

The newsletter’s posts will also appear on LinkedIn… but you know how things can get lost here. You’re much better off subscribing, so each issue lands in your inbox.

2024/01/25: Data supply chains

This is an article about web scrapers and politics. It’s also a lesson in data supply chains.

On the one hand: (accidentally) training a model on a vastly imbalanced dataset is a bad idea. Yes. That is a well-known, though under-appreciated, risk in building a dataset.

On the other hand: you can mitigate this risk by being picky about your source data. If your data collection method is to simply “grab any data we can” – if you don’t proactively vet and choose your sources, then review the material you get from them – then, yes, you’re asking for trouble.

I can already hear the grumbling: “But managing the input data is no fun! My company’s data scientists don’t find the work important. My fellow stakeholders don’t see any value there, either, because there’s no direct connection to product features or revenue!”

Fair enough. You can always remind them: “If we don’t keep track of the data on the way in, every report and AI model built on that data might very well be rubbish. Is that what we want?”

“Most Top News Sites Block AI Bots. Right-Wing Media Welcomes Them” (WIRED)
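As a rough illustration of what “being picky about your source data” can look like in code, here’s a small Python sketch. The allowlist, record layout, and 60% threshold are all made-up examples, not recommended values.

    # Sketch: vet sources with an allowlist, then sanity-check the balance.
    # The domains, labels, and threshold below are illustrative only.
    from collections import Counter
    from urllib.parse import urlparse

    APPROVED_SOURCES = {"example-news.com", "example-journal.org"}  # human-reviewed

    def vet(records: list[dict]) -> list[dict]:
        """Keep only records whose source domain someone has actually reviewed."""
        return [r for r in records
                if urlparse(r["url"]).netloc in APPROVED_SOURCES]

    def check_balance(records: list[dict], max_share: float = 0.6) -> None:
        """Fail loudly if any one label dominates the dataset."""
        counts = Counter(r["label"] for r in records)
        total = sum(counts.values())
        for label, n in counts.items():
            if n / total > max_share:
                raise ValueError(f"skewed dataset: {label!r} is {n / total:.0%} of records")

    scraped = [
        {"url": "https://example-news.com/a", "label": "left"},
        {"url": "https://unvetted.example/b", "label": "right"},
        {"url": "https://example-journal.org/c", "label": "right"},
    ]
    clean = vet(scraped)   # drops the unvetted source
    check_balance(clean)   # raises if the remainder is lopsided

It’s the pipeline equivalent of “keeping track of the data on the way in”: reject what you haven’t vetted, and refuse to build on a dataset that’s visibly lopsided.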

2024/01/25: Ethics in data

Here Solomon Kahn raises some good points about the ethics (or, sometimes, the lack thereof) in our industry when it comes to disclosures.

So for anyone who’s wondering, I’ll put this out there right now:

For all my years of data strategy, advisory, and assessment (due diligence) work … every time I’ve recommended a product or service, I’ve done so because I thought it was the best approach for your situation. Not because some undisclosed third party had secretly paid me to do it.

I’ve never taken any kickback for any of that guidance.

2024/01/26: What train heists can tell us about AI

This is an article about modern-day train heists. It also holds lessons about how businesses use automation, such as software or AI.

First, the article:

“The Great Freight-Train Heists of the 21st Century” (NY Times)

This excerpt here stood out:

Over the past decade, in a push for greater efficiency, and amid record-breaking profits, the country’s largest railroads have been stringing together longer trains. Some now stretch two or even three miles in length. At the same time, these companies cut the number of employees by nearly 30 percent, so fewer people now manage these longer trains.

Do you think this is just an issue for railroads? Think again. Consider the ways your company is applying software- or AI-based automation. You’re either trying to reduce headcount or to do more with existing headcount. Either way, the goal is to operate with as few people as possible.

  • On the one hand, it’s extremely efficient to automate tasks that are some mix of dull, repetitive, and predictable. That frees people up for other, hopefully more interesting work.
  • On the other hand, introducing new automation creates new system interactions, which exposes you to new risks (unexpected failures, criminal behavior, and so on). And reduced headcount means that you have fewer eyes on that system. Combined, that makes it easier for new problems to develop and grow.

Am I suggesting that we do away with all automation? Far from it! Automation, under the right circumstances, is a game-changer. Emphasis on “under the right circumstances.”

You’ll want to review your automation needs on a case-by-case basis. When you look at your company’s operations, ask yourself how close you’re getting to the equivalent of that three-mile train with just a handful of people on board. Then sort out how to address the new problems that this situation creates.