AI: Could it turn us all into paperclips?

A couple of months ago, I was at a dinner party with a professor of one of the hard sciences at Cambridge. The talk turned to AI, and he told us the following story.

He had asked ChatGPT to write one of the essays that he sets his students.

The result, he said, was a solid 2:1 piece of work.

One of his colleagues believed that the students were making use of this but, rather than fight it, he simply passed that week’s batch of marking to the AI and let it, literally, mark its own homework.

It is difficult to grasp the speed at which uses for AI technology are appearing.

So, for a flavour, here are a few things from the latest issue of a weekly newsletter aimed at software developers.

The big news is the release of Facebook’s “large language model”: the same set of programs that Facebook itself uses is available to anyone who fills out a form and downloads it.

Then there is an assistant that will write quite complicated programs for you if you tell it in some detail what to do — the example given is a game of “Snake”, which appears, in the demo, in about five minutes from nothing at all.

Another app produces short videos from a text description.

Meanwhile, in Ireland, a chain of provincial papers has experimented by publishing a clickbait opinion piece (“Should refugees be sent home?”) that was also generated by AI.

These are just the novelties that have appeared in one week.

All these programs are variations on a couple of techniques that broke into the public arena last year, and which are together known as “large language models”, even when the language in question is pictorial.

The models consume pretty much everything that anyone has ever published on the internet, and are then trained, using incomprehensibly complex statistical techniques, to recognise patterns within this universe of data.

In particular, they learn which words are most likely to follow other words in any particular context.
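To make that concrete, here is a deliberately toy sketch, in Python, of the “which word follows which” idea. It is nothing like the neural networks inside a real large language model (the corpus, the counting, and the function are all invented for illustration), but the training signal is the same: predict the next word from what has come before.

```python
from collections import Counter, defaultdict

# A toy "language model": tally which word follows which in a tiny corpus.
# Real large language models use neural networks, billions of parameters,
# and far longer contexts, but the core task is this same prediction.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word seen most often after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("the"))  # -> "cat" (follows "the" twice; "mat" only once)
```

String together enough of these predictions, over a context of thousands of words rather than one, and the output starts to read like an answer.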

This is a skill that humans interpret as answering a question. But, when machines do it, they don’t understand the question in the way that a human does.

They don’t, in fact, understand it at all, though they have been trained to respond as if they did. Sometimes, this is convenient even for a writer.

I can pass the text of this paragraph to one and tell it to change “model” from singular to plural, and it will fix all the affected verbs as well. In a similar way, it is easy to change the tense in which a passage has been written.
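For the technically curious, a request of that kind looks roughly like this in code. This is a minimal sketch only, assuming OpenAI’s Python SDK, an API key already set in the environment, and a placeholder model name; the paragraph and the instruction are just examples.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paragraph = (
    "The model consumes pretty much everything published on the internet, "
    "and it is then trained to recognise patterns within that data."
)

# Ask the model to perform a purely mechanical edit on the text.
prompt = (
    'Change "model" from singular to plural in the following paragraph, '
    "fixing all the affected verbs as well:\n\n" + paragraph
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```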

The models can also learn which patterns of pixels have been associated with particular words — a skill that humans interpret as being able to understand what a picture shows. Here again, there is no understanding in the machine.

The apparent understanding — the descriptions of these pixels — was supplied by human labour.

You’ve done it yourself every time you have been asked to click on all the traffic lights in a CAPTCHA, or, indeed, every time you have labelled a friend in a photograph on Facebook.

All the machine is doing is sifting out which patterns are most often associated with which descriptions.
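Again, purely as a caricature: a real system learns its own features from raw pixels, but the bookkeeping, tallying which human-supplied labels go with which patterns, can be sketched in a few lines. Every feature, label, and name below is invented for illustration.

```python
from collections import Counter, defaultdict

# Caricature of image labelling: tally which human-supplied labels have
# co-occurred with which crude "features". Real systems extract features
# from raw pixels with neural networks; the labels still come from people,
# e.g. from CAPTCHA clicks and photo tags.
labelled_examples = [
    ({"pole", "three_lamps", "red_glow"}, "traffic light"),
    ({"pole", "three_lamps", "green_glow"}, "traffic light"),
    ({"pole", "flat_panel", "arrow"}, "street sign"),
    ({"whiskers", "fur", "pointed_ears"}, "cat"),
]

label_counts = defaultdict(Counter)
for features, label in labelled_examples:
    for feature in features:
        label_counts[feature][label] += 1

def best_label(features):
    """Pick the label most often associated with the given features."""
    votes = Counter()
    for feature in features:
        votes.update(label_counts[feature])
    return votes.most_common(1)[0][0]

print(best_label({"pole", "three_lamps"}))  # -> "traffic light"
```

The only real difference is scale: billions of labelled examples rather than four.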

But the internet is now so vast, and these machines and techniques so powerful, that the models can appear to hold a reasonably accurate representation of all the things that internet users care about in the world, and this representation can be manipulated in all sorts of ways.

Many of these are obviously harmful.

If you want to download Facebook’s model, you must agree to a long list of things that you will never use it to do.

Among them are the obvious categories — human trafficking, child sexual abuse, terrorism, inciting or promoting violence or discrimination, copyright infringement, impersonating particular people, or indeed pretending that you are a human... The list goes on.

These would not be banned if the technology could not be used for them, and the agreement is obviously impossible to police. It can all feel completely overwhelming and very dangerous.

There are questions that help us to put the current debates about AI into perspective.

We can ask first whether the Vatican is alive. We do talk of the Vatican as if it had agency, and intellectual activity.

To say that “The Vatican thinks in centuries” is a perfectly sane thing to do, and helps us to understand what it does. To say that it has learned from experience is also quite obviously true.

Canon law on its own is simply a written set of decision-making procedures, what computer scientists call algorithms.

These can also be seen as the products of a technology — writing — which made possible the storage and manipulation of information and the creation of knowledge in ways that are not available to purely oral societies.

As one astute critic pointed out, this transformation has costs: “The letter killeth, but the spirit giveth life.”

None the less, the benefits are greater, especially in competition with societies that lack the skills of reading and writing.

