Weekly recap: 2023-11-05

Posted by Q McCallum on 2023-11-05

What you see here is the last week’s worth of links and quips I have shared on LinkedIn, from Monday through Sunday.

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/10/30: Policies around model updates

If your company does enough AI, you’ve probably ironed out the mechanics of migrating a new model to production: “move binaries here, stop server, restart server,” that kind of thing. But what policies exist around that migration?

  • How many people need to sign off on it? And from which departments (legal, PR, other)?
  • What performance metrics do you need to consider when updating?
  • What other documentation must accompany a model change?

Updating an AI model can lead to changes in any system downstream of that model’s outputs. This can impact everything from automated purchasing decisions, to fraud detection, to inventory management. Pushing new models should not be taken lightly.
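To make that concrete, here's a minimal sketch of what a policy gate for model promotion might look like in code. Everything in it (the required approvers, the metric floors, the class and function names) is my own illustrative assumption, not a reference to any particular MLOps tool:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a promotion gate that refuses to push a model
# to production until the policy questions above have answers.
# Approver lists and metric floors are illustrative assumptions.

@dataclass
class ModelRelease:
    model_id: str
    version: str
    approvals: set[str] = field(default_factory=set)      # e.g. {"engineering", "legal"}
    metrics: dict[str, float] = field(default_factory=dict)
    changelog: str = ""

REQUIRED_APPROVERS = {"engineering", "legal", "risk"}
METRIC_FLOORS = {"precision": 0.90, "recall": 0.85}

def ready_for_production(release: ModelRelease) -> list[str]:
    """Return a list of policy violations; an empty list means good to go."""
    problems = []
    missing = REQUIRED_APPROVERS - release.approvals
    if missing:
        problems.append(f"missing sign-off from: {', '.join(sorted(missing))}")
    for metric, floor in METRIC_FLOORS.items():
        if release.metrics.get(metric, 0.0) < floor:
            problems.append(f"{metric} below required floor of {floor}")
    if not release.changelog.strip():
        problems.append("no documentation accompanying the change")
    return problems
```

The point of the sketch is that the gate is code, not a wiki page: a release that lacks sign-offs, metrics, or documentation simply doesn't ship.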

Consider hedge fund Two Sigma, which recently encountered an issue with unauthorized model updates. (These are algo trading models, not necessarily AI models, but the core lesson still stands.)

“Hedge Fund Two Sigma Is Hit by Trading Scandal” (WSJ)

Key point:

Big firms such as Two Sigma usually closely monitor and are fully aware of all important changes to their trading models. “In well-run firms, all changes—calibrations or model changes—are governed by procedures so that they must be disclosed and approved by the proper people,” said Aaron Brown, a veteran quant who wasn’t aware of Two Sigma’s situation.

You may point out that Two Sigma experienced this issue precisely because of unauthorized model changes. This is true. But you’d then have to ask yourself:

  • How did Two Sigma uncover this?
  • Could your company do the same?
  • How would you define, and then detect, a rogue model change?
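One way to make “detect” concrete: fingerprint every approved model artifact, then periodically compare what’s actually running in production against that registry. Here’s a minimal sketch; the file paths, naming, and registry layout are my own assumptions, not any production system’s:

```python
import hashlib
from pathlib import Path

# Sketch: flag unapproved model changes by comparing the checksum of each
# deployed artifact against a registry of approved checksums.
# Directory layout and file extension are hypothetical.

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_rogue_models(deployed_dir: Path, approved: dict[str, str]) -> list[str]:
    """Return deployed artifacts whose checksum isn't in the approved registry."""
    rogue = []
    for artifact in deployed_dir.glob("*.bin"):
        checksum = sha256_of(artifact)
        if approved.get(artifact.name) != checksum:
            rogue.append(artifact.name)
    return rogue
```

Run something like this on a schedule and a rogue change stops being a question of whether anyone happened to notice.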

2023/10/31: Facebook subscriptions

This has been in the works for a bit, but it’s now real: Facebook will offer paid, ad-free subscriptions to users in the European Union (EU):

“Facebook and Instagram users in Europe can pay for ad-free versions” (The Guardian)

Facebook is supposedly doing this to satisfy EU privacy regulations. There’s no doubt some truth to that, but I sense there’s more to the story: it could be a slick way for the company to move away from the ads business (or, at least, to diversify) while claiming it was someone else’s idea.

Why would they want to diversify? Simply put, the entire targeted ads business – not just Facebook – faces a triple threat:

  • increased consumer privacy concerns
  • new data/privacy regulations
  • third-party changes (Apple introducing stronger privacy controls into iOS, Google looking to limit third-party cookies in Chrome)

Besides these external matters, they’re also staring at the internal matter of targeted ad performance:

While the first two points may be subjective and/or based on small sample sizes, well … that third one comes straight from the source.

Given all of that, it makes sense for ad-driven platforms to look for new revenue streams.

(I’ve written about this before, in the context of streaming video services mixing paid and ad-supported plans.)

Facebook may use the EU as a testing ground before bringing paid plans to other countries. Hell, Twitter’s already doing it (Bloomberg, 2023), so why not?

2023/11/01: Model languages and controls

The crew at Le Monde tested Microsoft Image Creator (Créateur d’image de Microsoft, which runs OpenAI’s DALL-E 3 behind the scenes):

“On a testé… le créateur d’images par intelligence artificielle gratuit proposé par Microsoft” (“We tested… the free AI image generator offered by Microsoft”) (Le Monde)

Of note:

1/ Language: Image Creator can handle more than English. (The article includes the prompts the team used, if you’d like to follow along in French.) That’s huge. Support for languages beyond English puts these tools in the hands of a much wider audience.

2/ Controls: I’ve talked a lot about risk management for AI chatbots (such as in this piece for O’Reilly Radar) and controls are a key part of a risk management strategy. From what I’ve read here, Image Creator has three key controls in place:

  • It implements rate limits, so you can only generate so many images in a given time span. That should slow down anyone exploring prompt injection techniques.
  • There are prompt filters in place, so Image Creator will reject prompts it deems inappropriate.
  • People who submit too many inappropriate prompts risk account suspension.

While those controls won’t stop every troublesome prompt, taken together they should slow down attackers. (That, and I’m sure they’ve implemented other controls.)
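For the curious, here’s roughly how those three layers might stack in code. Everything below (the limits, the blocklist, the suspension threshold) is a guess at the general shape of such a system, not Microsoft’s actual implementation:

```python
import time
from collections import defaultdict, deque

# Hypothetical sketch of layered controls: rate limiting, prompt
# filtering, and strike-based account suspension. All thresholds
# and the filter itself are illustrative assumptions.

RATE_LIMIT = 15          # images allowed per window
WINDOW_SECONDS = 3600
STRIKE_LIMIT = 5         # rejected prompts before suspension

request_log: dict[str, deque] = defaultdict(deque)
strikes: dict[str, int] = defaultdict(int)
suspended: set[str] = set()

def blocked_prompt(prompt: str) -> bool:
    """Stand-in for a real content classifier."""
    return any(term in prompt.lower() for term in ("violence", "gore"))

def handle_request(user: str, prompt: str) -> str:
    if user in suspended:
        return "account suspended"
    now = time.monotonic()
    log = request_log[user]
    # Drop requests that have aged out of the rate-limit window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    if len(log) >= RATE_LIMIT:
        return "rate limited; try again later"
    if blocked_prompt(prompt):
        strikes[user] += 1
        if strikes[user] >= STRIKE_LIMIT:
            suspended.add(user)
        return "prompt rejected"
    log.append(now)
    return "image generated"
```

Note how the layers reinforce each other: the rate limit caps how fast anyone can probe the filter, and the strike counter raises the cost of each rejected probe.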

3/ User accounts: Some of the aforementioned controls are possible because Image Creator requires a Microsoft account. On the one hand, this is an easy way for Microsoft to boost its user base. On the other hand, it may also lead to stolen and/or faked Microsoft accounts becoming a hot item for people who want to misuse the system.

4/ Terms of Use: Image Creator is limited to non-commercial use. Per the Terms of Service:

7. Use of Creations. Subject to your compliance with this Agreement, the Microsoft Services Agreement, and our Content Policy, you may use Creations outside of the Online Services for any legal personal, non-commercial purpose.