ChatGPT and Calculators

What do we really value?

This is one of my infrequent “philosophical” type posts. An earlier version of this appeared on LinkedIn.

There was a LinkedIn post along the lines of “are we treating ChatGPT today like we used to treat calculators in the past?”

In my mind the question is “which skill that we believe is valuable will ChatGPT replace?”

The parallels between how we treated calculators in a school setting (“no, you can’t use them for homework”) and how we’re treating ChatGPT (“no, you can’t use it for homework”) need a deeper dive.

Calculators

40 years ago we believed mental arithmetic was important; calculators, and now computers, have shown that’s not true. Well, to an extent.

I grew up during that period; we were taught how to use calculators, but we weren’t allowed to take them into maths exams. We were expected to know the basic rules and be able to derive solutions; we even had a book of common formulas, log tables, and so on.

So it might be fair to ask “Do I need to know the derivative of sin(x^2)?”. 40 years ago I could work that out using the Chain Rule. 30 years ago I had a calculator that could do it. Today? I’ve no idea. But I can google it… (oh, right: d/dx f(g(x)) = f'(g(x)) · g'(x), so the answer is cos(x^2)·2x). I’ve never needed to know it since leaving education.
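Spelled out (in LaTeX notation, just to make the chain-rule step explicit):

```latex
\frac{d}{dx}\sin(x^2) = \cos(x^2) \cdot \frac{d}{dx}x^2 = 2x\cos(x^2)
```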

However, it might also be fair to ask “Do I need to know what 7*12 is?”. I could use a computer for this too, but I do know the answer; it’s useful in a “7 ft 5 inches is 89 inches” kind of way. I do simple arithmetic all the time.

It also acts as a sanity check: if the computer does a bunch of calculations, you may be able to do some rough approximations to determine if the answer is sane. Computers make mistakes. Just ask the British sub-postmasters; people went to prison because of the belief that the computer was right.
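As a toy illustration of the kind of cross-check I mean (the prices and the choice of Python are mine, purely for illustration):

```python
# Sanity-check a till total by rounding each price to the nearest pound;
# the prices here are made up for illustration.
prices = [2.49, 0.99, 3.15, 4.89, 1.20]

exact_total = sum(prices)                    # what the till computes
rough_total = sum(round(p) for p in prices)  # what I can do in my head

print(f"till says £{exact_total:.2f}, my head says roughly £{rough_total}")

# If the two differ wildly, something is wrong: the till, my estimate,
# or the data going in. Trust neither blindly.
if abs(exact_total - rough_total) > 0.5 * len(prices):
    print("that doesn't look right; check the receipt")
```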

So we may have been right, 40 years ago, to have concerns around the use of calculators; or, perhaps more accurately, around the blind trust that the machine always gives the right result.

ChatGPT

What skill will ChatGPT replace? Is it a skill we consider important, or will its use just end up reinforcing the real skill (critical thinking) over the mundane one (data collation)?

Now there’s also a qualitative difference between the two: 2+3=5 whether you do it by hand or by computer, and the result is trivially checkable. Can we evaluate ChatGPT output the same way?

Given that we know current-generation AI solutions give false answers very easily (see Google Bard’s demo-day issues), there’s a definite need for critical thinking.

But without the knowledge base, how can you be critical? I can approximate a shopping bill in my head and get something close to the total: “OK, the till receipt is probably accurate.” But how can I evaluate what an AI tells me about a topic I know nothing about?

Modern AI systems are something of a black box: we train them on a bunch of data with known results, and the system learns to adjust its internal variables so that it evaluates that data better and better. Eventually the goal is that it can be fed new (known) data and come up with the right results; only then do we let it loose on unknown data.
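A minimal sketch of that train-then-validate loop, using scikit-learn and its bundled digits dataset (both my choices here, purely for illustration; the post isn’t describing any specific system):

```python
# Train on data with known results, then check the model against known
# data it has never seen, before trusting it on unknown data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Hold back a quarter of the known, labelled data.
X_train, X_held_out, y_train, y_held_out = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "The system learns to adjust variables": fit() tunes the model's
# internal weights against the training data.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Feed it new (known) data and see if it comes up with the right results.
print(f"accuracy on held-out known data: {model.score(X_held_out, y_held_out):.1%}")

# Only after this does it get used on genuinely unknown data:
#   predictions = model.predict(new_unlabelled_images)
```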

The problem is that we don’t know what happens inside the box. A trial last decade on detecting skin cancer from photographs got good results on the training and validation data. But it turned out the system was using the presence of a ruler to decide whether a mark was cancerous. This is a great example of bias in the training data (photos of cancerous marks were more likely to have been taken by a doctor, with a ruler alongside for sizing) leading the black box to key on totally the wrong feature; of course it did well against the verification data, which carried the same bias!

Summary

Too many people already believe “‘cos the computer said so” (see the Post Office scandal mentioned earlier). Add in biases that may be encoded into the AI by its training data. Add in attackers deliberately tainting training data…

And I’m not even going to get into melt-downs, like Microsoft’s Tay chatbot going full racist within a day of interacting on Twitter. That’s a different problem entirely.

In my mind the challenge with a technology like ChatGPT is: how can we verify the output? We’ve developed tools against “deep fake” videos because the fakes contain detectable discontinuities. But when the output can only be evaluated by a human? Hmm…

I foresee a variation of the email spam wars: spambots get better at bypassing filters; filters get better at blocking the bots… rinse, repeat.

Who needs Tucker Carlson when the computer can make up lies customised to your desires, and you have no way of knowing if they’re true or false?

This is the challenge we have to face. Critical thinking is the skill we need to retain.