It was one thing, then the other…
My dog needed to pee…
My dog needed to pee. That’s my excuse and I am sticking with it. It’s factual. He needed to pee and got me up too early. Just a bit too early, like 45 minutes, not really enough to go back to bed, but too early to be quite ‘with it’ right off. So the first cuppa coffee was kind of lost in the haze…
Elon Musk’s problem may have been that I started doing shit in that haze, and then, kept doing it while I worked on the second & third cups. Am an old guy & it takes a lot of coffee sometimes…
...anyway, I set out to use ChatGPT to demonstrate that EV batteries were a net energy loss.
That’s when things went spectacularly wrong.
Yes, the batteries are a net loss over a normal lifespan: they take 212.7 barrels of oil to make but only replace 150 barrels, and that's IF they meet the usual specs, which so few actually do. So my thing number 1 did happen, but not as I anticipated.
Anyway, we had bogus numbers spewing out like an oil gusher just brought in. 3 million plus barrels of oil per battery. WTF?
But it’s AI, so you just got to trust it, yes?
Then, of course, you look, and get on the AI’s case about the goofy math. That takes some work, but whittles it down to 215 thousand barrels of oil.
That’s better, but eh? Still, it don’t sit well even with 3 cups in you.
So you go back & get that 4th cup, and start a process-by-process hand audit of its math. That yields realistic numbers: 212.7 barrels of oil to produce each battery, while it only returns the equivalent of 150 barrels of oil in electricity over its lifespan.
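The hand audit boils down to simple subtraction. Here's a minimal sketch of that bottom line, using the article's own figures (the variable names are mine, for illustration only):

```python
# The article's hand-audited estimates, in barrels of oil (equivalent energy).
# These figures are the author's claims, not measured values.
BARRELS_TO_PRODUCE = 212.7   # oil-equivalent energy to manufacture one EV battery
BARRELS_DISPLACED = 150.0    # oil-equivalent energy delivered as electricity over its lifespan

# Net balance: positive means the battery consumed more energy than it returned.
net_loss = BARRELS_TO_PRODUCE - BARRELS_DISPLACED
print(f"Net loss per battery: {net_loss:.1f} barrels of oil")  # 62.7
```

On those numbers, each battery comes out 62.7 barrels in the hole, which is the whole point of the hand audit versus ChatGPT's 3-million-barrel and 215-thousand-barrel answers.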
Thus the first thing, demonstrating that the EV battery idea is no good & a net loser, did get produced, but along the way you get diverted into demonstrating that AI can’t be trusted. That ChatGPT is prone to really wonky behavior, especially when more than one unit of measure is involved, such as barrels of oil, gallons of diesel fuel, and kilowatt-hours of electricity.
ChatGPT getting ‘confused’ was my second thing of the day. The original article that I had put on Substack was to the point of that first thing, but really, it also brought up the other thing.
Hopefully neither of my things were part of the things in his day.
Sigh...note that his tweet demonstrates the multiple tweet bug where it chops off the text if there are graphics involved.
Well, better luck to all us guys for tomorrow. Will get my dog to pee before we go to bed tonight.
I think ChatGPT is going to have its hands full with Uncle Clif
I think what you really proved here is that AI does not really 'understand' anything. Further, it is not self-aware enough to be capable of a reality check. All it can do is try to match the situations it is presented with against scenarios in its (limited) memory.
When a genius like Clif hits it with things that exceed its situational database, it should act like whatever that 1960s anthropomorphized robot was that said 'It does not compute!!' But instead it makes the fundamental error of extrapolating from the assumptions and conditions it is given, without the apparent ability to reality-check those either, and it provides nonsensical conclusions.
Maybe OK for a discussion of the weather, but not ready for prime time.