Weekly recap: 2024-03-17

Posted by Q McCallum on 2024-03-17

What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2024/03/11: Trying to steer clear of the bad parts

I admit, I was skeptical when I saw the headline:

“With ‘Charlie,’ Pfizer is building a new generative AI platform for pharma marketing” (Digiday)

Generative AI? In pharma marketing? Given all of the LLM mishaps? No thanks.

The article goes into more detail, though. It sounds like the teams involved understand the dangers and they’ve taken active measures to address them – including data governance and tailored data access.

In other words, they’re closing off their exposure to downside risk while leaving themselves open to the upside gain of using this technology.

And that is how you handle risk management in AI.

Because health care data is so sensitive, Pfizer is making sure Charlie’s collection and use of data meets various internal and external privacy requirements. Based on who is using Charlie, the platform can tailor its features to an employee’s role, how they use it and the types of data users engage with. That all makes data governance especially key, both in terms of accuracy and privacy.

“You’ve got to be very careful,” [Arpit Jain, president of Publicis Groupe’s Marcel AI platform] said. “The last thing you want is to have some crazy data that you did not clean up right. And especially in healthcare, it could be a matter of life and death. If you recommend the wrong iPhone, it’s okay — it’s not the end of the world. But if you recommend the wrong medicine, God forbid something like that happens.”

2024/03/12: A warning from another field

This is an article about America’s health care tech infrastructure. It is also a warning about AI companies and risk.

“The U.S. Health System’s Single Point of Failure” (The Atlantic)

As we become increasingly reliant on a handful of companies for our genAI needs – what I call the AI as a Service (AIaaS) providers – what rules will we need to make sure they establish the proper controls and safety procedures?

This goes above and beyond the rules of how AIaaS providers can source their training data. I’m also interested in how they protect that data supply chain and any AI artifacts generated from it.

As troublesome as genAI can be now, with hallucinations and other weirdness … imagine what happens when models misbehave because they have been manipulated: everything from poisoned training data to the installation of a rogue model. A small problem inside OpenAI, Google, or any other AIaaS company can very quickly spread to their collective millions (if not billions) of API users, which will in turn impact those users’ own end-users, and so on.

2024/03/13: Smart homes

Interesting article on when your “smart” home includes the previous owner’s devices:

“Living with the ghost of a smart home’s past” (The Verge)

For a deeper look into the complications of shared smart-home devices – especially the kind that are wired into the house itself, and what happens when they start talking to each other – check out Chris Butler’s work on this topic.

I have a hunch these articles will be relevant for years to come…

2024/03/14: Borrowing a page from the pharma playbook

I talk a lot about the outward-facing risks in using genAI. Mostly of the “model is wrong” or “genAI bot says dumb things” variety. And those are plentiful.

There are also inward-facing risks to consider. Such as: “what if it doesn’t work?”

Building an AI system is ultimately an R&D effort. You don’t know for sure that it will work until you actually get it working. And there’s no clear end date. That can make it difficult to allocate time, effort, and capital.

To borrow some ideas from the pharma industry, check out this Odd Lots episode with Moderna CFO Jamey Mock. The interview doesn’t mention AI directly, but many of the lessons about allocating funds to an R&D effort – especially one of the long-term variety – apply to AI.

“Moderna’s CFO on How to Allocate Capital in Big Pharma” (Odd Lots)

(For more details on this idea, check out my old blog posts: “Treating Your ML/AI Projects Like A Stock Portfolio” and “Reducing Risk in Building ML/AI Models”.)

2024/03/15: The more subtle fight over training data

In the fight over AI training data, people mostly think about LLM companies scraping websites. Let’s not forget about the quiet, everyday companies that surreptitiously collect/resell personal data. Like, say, your car’s manufacturer:

“Florida Man Sues G.M. and LexisNexis Over Sale of His Cadillac Data” (New York Times)

(Or your food delivery app, or your grocery store, or …)

Bonus: This excerpt offers an important lesson in data collection policies.

If you can’t (or won’t) tell someone how you got their data, and specifically when/how they gave consent, you’re probably up to something shady.

“What no one can tell me is how I enrolled in it,” Mr. Chicco told The Times in an interview this month. “You can tell me how many times I hard-accelerated on Jan. 30 between 6 a.m. and 8 a.m., but you can’t tell me how I enrolled in this?”

2024/03/16: Ads, ads everywhere

Do you feel your apps are showing you more ads? That’s probably because they are:

“Uber and Instacart Are Showing More Ads in Their Apps. Not All Customers Like It.” (WSJ)

Pay close attention to the “Ads that didn’t work” section. I’m glad the team admits that this didn’t work … but it’s not clear why they tried it in the first place.