Weekly recap: 2023-02-12

Posted by Q McCallum on 2023-02-12

I’ve been posting short notes on LinkedIn as of late, and I figured that I should periodically collect them and post them here.

What you see here is the last week’s worth of links and quips I have shared on LinkedIn (from Monday through Sunday).

For now I’ll post the notes as they appeared on LinkedIn, including hashtags and sentence fragments. Over time I might expand on these thoughts as they land here on my blog.

2023/02/06: Seinfeld meets ChatGPT

Why watch Seinfeld reruns when you can … generate whole new episodes? Or better yet, one long episode that runs 24x7?

“Seinfeld: Künstliche Intelligenz parodiert 24 Stunden pro Tag die Show über Nichts” (Der Spiegel) – in English: “Seinfeld: AI parodies the show about nothing 24 hours a day.”

(Coverage in English, courtesy of Vice.)

How soon till studios load up on GPUs? They could then churn out infinite content – that is: “space on which to sell ads” – with minimal human effort.

(Notice, I didn’t say it’d be good content. But, perhaps, still better than reality TV?)

#ai #gpu #seinfeld

2023/02/07: Judge uses ChatGPT

There’s a lot to be said about this, so I’ll focus on one thing:

This article demonstrates why Google has rushed to build a competitor to ChatGPT.

“An ML model” and “a search engine” may seem like very different animals, but they are two sides of the same coin:

  • Both are based on expressing real-world concepts as sets of numbers (“vectors”), which a computer then compares for the purposes of grouping, sorting, pattern-matching, and spotting outliers.
  • Both are built on feeding a computer lots of data, so it can develop those comparisons. (ML model -> “large historical training dataset”; search engine -> “web pages from the internet”)
  • A prompt entered into ChatGPT is akin to a query entered into a search bar. “Please find me stuff related to this.” A new record passed to an ML model is asking, “what is this most similar to?”
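The vector comparison in the first bullet can be sketched in a few lines of Python. (The three-number “vectors” below are made up purely for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.)

```python
import math

def cosine_similarity(a, b):
    # Scores how closely two concept vectors point in the same direction:
    # 1.0 means identical direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors for three concepts (hypothetical values, for illustration only):
king = [0.9, 0.8, 0.1]
queen = [0.85, 0.82, 0.15]
banana = [0.1, 0.05, 0.9]

# The two royalty vectors score far closer to each other than either does
# to "banana" -- this is the grouping/pattern-matching step in miniature.
print(cosine_similarity(king, queen) > cosine_similarity(king, banana))
```

Whether you call this step “retrieval” (search engine) or “inference” (ML model), the underlying move is the same: turn things into numbers, then compare the numbers.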

Why does this matter?

Like a traditional search engine, ChatGPT was built on a lot of data from the public internet.

Unlike a search engine, ChatGPT summarizes those results. It doesn’t hand you a list of links that you need to click through yourself. So for (certain types of) research, it’s a step ahead of a plain search engine.

No wonder Google has been in a rush to compete with ChatGPT.

“Colombian judge says he used ChatGPT in ruling” (The Guardian)

2023/02/07: AI-washing

Odd – I thought the phase of AI-washing had ended. Maybe companies are laying claim to AI again now that crypto has lost some of its sheen?

“From Shoes to Insurance: Startups Latch Onto the AI Hype Cycle” (Bloomberg)

(What will be the next fad? One thought: maybe we’ll rename “AI” again and reset the hype cycle around “collect and analyze data for fun and profit.”)

#ai #data

2023/02/08: NIST’s framework for AI-driven risk management

NIST defines a framework for #AI-driven #risk management

2023/02/09: definitions matter

This is a large, yet often overlooked, #risk when it comes to technology:

Product/app teams that, lacking domain or social knowledge, insist on fitting messy concepts into neat little boxes, and then treat any oversight as a corner case.

“This Is What Netflix Thinks Your Family Is” (The Atlantic)

This concern goes well beyond what Doctorow explains here about defining “family” for Netflix password sharing. Consider the YouTube and Facebook copyright infringement detection systems that didn’t properly account for … classical music.

Food for thought as you build your next tech- or #AI-based system. Have you really thought through the use cases, and tested your assumptions/definitions?

2023/02/10: Evaluating the machines

If machines are going to do the work, it’s important to know how to evaluate the machines:

“At This School, Computer Science Class Now Includes Critiquing Chatbots” (New York Times)

#AI #ChatGPT #GenerativeAI