From Ben Thompson's Stratechery (hat tip to Dave Verwer at iOS Dev Weekly):

It happened to be Wednesday night when my daughter, in the midst of preparing for "The Trial of Napoleon" for her European history class, asked for help in her role as Thomas Hobbes, witness for the defense. I put the question to ChatGPT, which had just been announced by OpenAI a few hours earlier:

[The chatbot's answer to "Did Thomas Hobbes believe in separation of powers" removed]...

This is a confident answer, complete with supporting evidence and a citation to Hobbes' work, and it is completely wrong. Hobbes was a proponent of absolutism, the belief that the only workable alternative to anarchy – the natural state of human affairs – was to vest absolute power in a monarch; checks and balances was the argument put forth by Hobbes' younger contemporary John Locke, who believed that power should be split between an executive and legislative branch. James Madison, while writing the U.S. Constitution, adopted an evolved proposal from Charles Montesquieu that added a judicial branch as a check on the other two.

Good timing, Dave and Ben. I'd just seen Stack Overflow's header chyron say they don't allow ChatGPT answers, which, if you understand how society uses language, means...

We almost accept ChatGPT-style answers, but not yet, dang it! But you know we're close, because it no longer goes without saying.

It's the door handle that says "Push" because it looks like a pull. That is, the Stack Overflow warning tells us that AI answers are so convincing at this point that you'd naturally assume those answers are okay to post [for internet points! Points!]. You have to be told to push the door to open it. It looks like a pull.

[Image: a door with pull-style handles facing you, labeled "Push"]

And that, of course, got me googling, and I saw that AI is finally getting close to what we thought the ELIZA chatbot was nearly doing all those years ago.

As Dave mentions:

I remember chatting with Kim Silverman in the WWDC labs in 2008 about speech synthesis and recognition. He talked about the early days of speech synthesis in the late 1970s and how developers quickly progressed to 90% of the way there and then spent the next 30 years getting to 95%.

We really have been at 90% for a while. But now AI is at 90% for news stories, and apparently close enough that we're starting to wonder when it'll come for us coders, too.

I'm far enough into my career that if it were at 100% tomorrow, I would take the pink slip and relax at home, watching some chatbot do my job for me. Realizing I'm still biased, however, I was suspicious of my first take:

There's no way the code is going to be good at first. It'll need PRs like crazy for years.

My job has, for the last several years, become increasingly concentrated on PRs. I think it's mostly about tooling and scrum culture. Tooling makes PRing not just possible but easy now. We can all drop comments on specific parts of code, create PRs to PRs to show how we might do something small differently, and swap branches without having to reset our setup to run them.

Scrum culture says we need self-contained but non-specialist teams (give or take), which means we're all expected to have an opinion on someone else's code. It's a full-stack developer's world today in a way it wasn't so much in 2000. I could digress for a while on bikeshedding, but, with a high-functioning team, that's mostly good.

But when the AIs start crapping out code like a new dev, good heavens, can you imagine? We'll need to train these things for a while. Maybe they'll get good enough that they take to the training really quickly, but -- and not just because of anecdotes like Thompson's post, above -- I think we've got a few more years of heavy PR work before that happens.
