An extended conversation about AI with an actual brain scientist

I had to leave a lot of good stuff on the cutting room floor of my Webworm article. Here's what you missed.

Gidday. Some of you have probably seen the article I wrote for Webworm about AI. In it, I interviewed my mate Lee Reid who’s a neuroscientist and extremely talented programmer (he’s the creator of some excellent music software) who’s also done a lot of work with AI.

Why AI is Arguably Less Conscious Than a Fruit Fly

A lot of the content in that newsletter comes from an extended email interview where I got Lee to tell me everything he could about a particularly difficult, contentious subject. For brevity and sanity reasons, I had to leave a lot of it out of the finished Webworm article. But there was a lot of insight there I’m loathe to leave in my email inbox. Because I can, I’m publishing it here.

It’s been lightly edited for spelling and grammar (I may have missed some here and there) but it’s as close to the original conversation as I can make it.

An image from some of Lee’s research. I’m including it here not because it has anything to do with AI but because I’ve found that MRI images are absolute catnip for clicks. LinkedIn is full of them.

So, Dr Reid. About AI. It's so hot right now! I'm keen to get your impressions on the current state of things, but first, what's your experience in the field? You're a neuroscientist, so I assume you know about the brain, and you're an imaging expert, so there's algorithms and machine learning and neural networks and statistical analysis (or at least, I think so) and then there's the AI work you've done. Can you tell readers a bit about it all, and how it might tie in together?

Sure.

So, most of my scientific work is around medical images, usually MRIs of brains. In the past I've used medical images to do things like measure brain changes that happen as someone learns, or to make maps of a particular person's brain so that neurosurgery can be conducted more safely.

Digital images - whether they're from your phone or from an MRI - are just big tables of numbers where a big number means a pixel is bright and a small number means it's dark. Because they're numbers, we can manipulate them using simple math. For example, we can do things like brighten an image, apply formulae from physics, or calculate statistics.
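
To make that concrete, here's a rough sketch of what "an image is just a table of numbers" looks like in practice, in a few lines of Python using NumPy (my illustration, not Lee's code):

```python
import numpy as np

# A tiny 3x3 "image": each number is a pixel's brightness (0 = black, 255 = white).
image = np.array([
    [ 10,  20,  30],
    [ 40, 200, 220],
    [ 50, 210, 230],
], dtype=np.float64)

# "Brightening" is just arithmetic: multiply every pixel, then clip to the valid range.
brighter = np.clip(image * 1.5, 0, 255)

# Simple statistics are equally direct.
print("mean brightness:", image.mean())
print("brightest pixel:", image.max())
```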

In imaging science, we typically build what's called a pipeline — a big list of calculations to apply, one after the other.

For example, let's say brain tumours are normally very bright on an image. To find one we might (a rough code sketch of these steps follows the list):

  1. Adjust image contrast,
  2. Find the brightest pixel,
  3. Find all the nearby pixels that are similarly bright,
  4. Put these as numbers into a table, and
  5. Plug this table into some fancy statistical method that says whether these are likely to be a brain tumour.
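
Here's a toy version of those five steps in Python, assuming NumPy and SciPy and a simple grayscale image. It's a sketch for illustration only, not a real clinical method, and certainly not Lee's actual pipeline:

```python
import numpy as np
from scipy import ndimage

def find_bright_region(image: np.ndarray, bright_fraction: float = 0.9):
    """A toy version of the bright-tumour pipeline described above."""
    # 1. Adjust image contrast (here: simple min-max normalisation to 0..1).
    img = (image - image.min()) / (image.max() - image.min())

    # 2. Find the brightest pixel.
    brightest = np.unravel_index(np.argmax(img), img.shape)

    # 3. Find all the nearby pixels that are similarly bright:
    #    threshold the image, then keep the connected blob containing the brightest pixel.
    mask = img >= bright_fraction * img[brightest]
    labels, _ = ndimage.label(mask)
    region = labels == labels[brightest]

    # 4. Put these as numbers into a table (pixel coordinates plus intensity).
    coords = np.argwhere(region)
    intensities = img[region]
    table = np.column_stack([coords, intensities])

    # 5. In a real pipeline, this table would feed a statistical model that says
    #    whether the region is likely to be a tumour. Here we just return it.
    return table
```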

When we have a system that gets really complicated like this, and it is all automated, we refer to it as Artificial Intelligence. Literally, because it's showing “intelligent” behaviour, without being human. AI is a big umbrella term for all kinds of systems like this, including complex statistics.

More recently, we've seen a rise in Machine Learning, which is what big tech firms are really referring to when they say AI. Machine learning is a kind of AI where, instead of us trying to figure out all the math steps, like those I just mentioned, the computer figures out which steps are required for us. ML can be an entire pipeline or just be responsible for part of it.

Machine learning is everywhere in medical imaging and has been for years. We can use it to do most tasks we did before, such as guessing diagnoses or deleting things from images we don't want to see. We use ML because it can often do the task more quickly or reliably than a hand-built method. 'Can' being the key word. Not always. It can carry some big drawbacks.

"Can" carry some drawbacks? In science (and/or medicine), what might those be? And do they relate to some of the drawbacks that might exist in other AI applications, like Chat GPT, Midjourney, or — drawing a long bow here — self-driving systems in cars?

The most popular models in machine learning are, currently, neural networks. Suffice it to say they are enormous math equations that kind of evolve. Most of the numbers in the equation start out wrong. To make it work well, the computer plugs example data - like an image - into the equation, and compares the result to what is correct. If it's not correct, the computer changes those numbers slightly. The process repeats until you have something that works.
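
Here's a deliberately tiny sketch of that plug-in, compare, nudge loop in Python. A real neural network has millions of numbers and much fancier math; this one only has two numbers to learn, but the principle is the same (my illustration, not Lee's code):

```python
# Toy "training": learn w and b so that w*x + b matches the examples.
examples = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # inputs and correct answers (y = 2x + 1)

w, b = 0.0, 0.0          # the numbers start out wrong
learning_rate = 0.01

for _ in range(5000):
    for x, correct in examples:
        prediction = w * x + b          # plug the example into the equation
        error = prediction - correct    # compare the result to what is correct
        # Nudge the numbers slightly in the direction that reduces the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(round(w, 2), round(b, 2))  # ends up close to 2 and 1
```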

While this can build models that outperform hand-written code, training them is incredibly energy intensive, and good luck running one on your mid-range laptop. For loads of things, it just doesn't make sense to re-invent the wheel and melt the icecaps to achieve a marginal improvement in accuracy or run-time. I've seen a skilled scientist spend a year making an ML version of an existing algorithm, because ML promised to shave 30 seconds off his pipeline run-time. The hype is real...

Ignoring that, you can arrange how that model's math is performed, and feed information into it, in an endless number of ways. The applications you've mentioned, and those in medical science, are all arranged differently. Yet they all have the same problem. An equation with millions or billions of numbers is not one a human can understand. Each individual operation is virtually meaningless in the scheme of the equation. That makes it extremely difficult to track how or why a decision was made.

That is cause for caution for two reasons. Firstly, we can't easily justify decisions the model makes. For example, if a model says to “launch the nukes” or “cut out a kidney,” we're going to want to know why. Secondly, because we don't understand it, we get no guarantee that the model will behave rationally in the future. All we can do is test it on data we have at hand, and hope that when we launch it into the real world it doesn't come across something novel and drive us into the back of a parked fire truck.

These issues compound: lacking an explanation for behaviour, if a model does go awry, we won't necessarily know. By contrast if it told us "cut out the kidney based on this patient's very curly hair" we might have a chance to avoid problems. We don't have these issues when we rely on physics, statistics, and even simpler types of machine learning models.

So are you saying (particularly at the end there) that ML or AI is being applied when it needn’t be - or when it might be helpful, but the conclusions a given model arrives at can’t be readily understood, making it less helpful than it could be?

Yes, absolutely. Some of this is purely due to hype. For example, I used to have drinks with a couple of great guys — one focused on AI, and the other a physicist. The physicist would always have a go at the other saying "physics solved your problem in the 80s! Why are you still trying to do it with AI!" and they would yell back and forth. Missed by the physicist, probably, is that if you dropped "machine learning" in your grant application, you were much more likely to get funding...

Sometimes you even get people doubling down. Tesla, for example, has a terrible reputation for self-driving car safety. Part of that is probably that they rely solely on video to drive the car, because there's the belief that AI will solve the problem using just video. They don't need information, just even more AI! By contrast, if they'd just done what other companies do, and put radar on the car, they might still be up with the pack.

Thinking about how AI is being used and talked about in the corporate world: there is criticism that AI (because how it’s trained, and the black box nature you’ve alluded to) can replicate or exacerbate existing societal biases. I know you’ve done a bit of work in this area. Can you talk about some of the issues that might (or do) exist?

AI in general carries with it massive risks of exacerbating existing social issues. This is because — as I alluded to before — all AI systems rely on the data they're fed during training. That data comes from societies that have a history of bias, and the data often doesn't give any insight into the history that can teach an algorithm why something is the way it is.

AI can easily introduce issues like cultural deletion (not representing people or history), over-representing people (either positively or negatively), and limiting accessibility (only building tools that work for certain kinds of people).

Race is an easy one to use as an example, and I'll do so here, but it could be other issues too, such as gender, social groups you might belong to, disability, where you live, or behavioural things like the way you walk or talk.

For example, let's say you're training an AI model to filter job candidates so you only need to interview a fraction of the applicants. Clearly, you want candidates that will do well in the job. So you get some numbers together on your old employees, and make a model that predicts which candidates will succeed. Great. First round of interviews and in front of you are 15 white men who mentioned golf — your CEO's favourite pastime — on their resume. Why? Well, those are the kinds of people who have been promoted over the past 50 years...

Other times, things are less obvious. For example, you might try to explicitly leave race out of your hiring model, only to find your model can still be racist. Why? Well, maybe your model learns that all these rich golf-lovers who have been promoted never worked a part-time job while studying at university. If immigrants have often had to work while studying, listing this on their CV shows they don't match the pattern, and they get rejected. Remember that these models don't think - it's absolutely plausible that a model can reject you for having more work experience.
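
To make that proxy problem concrete, here's a contrived sketch in Python with entirely made-up data (my illustration, not anything from Lee's work, assuming scikit-learn is available). Race never appears as a feature, but the model still learns to punish a feature that merely correlates with who got promoted in the invented history:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "past employees": [years_experience, worked_part_time_while_studying].
# In this invented history, the people who were promoted almost never worked
# part-time while studying. Not because it matters, but because of who got hired.
n = 500
worked_part_time = rng.integers(0, 2, n)
years_experience = rng.normal(5, 2, n)
promoted = (rng.random(n) < 0.7) & (worked_part_time == 0)

X = np.column_stack([years_experience, worked_part_time])
model = LogisticRegression(max_iter=1000).fit(X, promoted)

# The model never saw race, yet it has learned to penalise part-time work history.
print("coefficient for 'worked part-time':", model.coef_[0][1])

candidate = [[8.0, 1]]   # strong experience, but worked part-time while studying
print("predicted 'will succeed' probability:", model.predict_proba(candidate)[0][1])
```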

While it's possible to make sure that data are "socially just", it's far from practical and it takes real expertise and thinking to do. What doesn't help is that the people building these models are rarely society's downtrodden. They're often rich educated computer scientists. They can lack the life experience to even understand the kinds of biases they are introducing. Programming in humanity, without the track record of humanity, is not a simple task.

This problem exists with other methods we use too - like statistics, or even humans. The issue is that neural networks won't tell us, truly, why they made their decisions nor self-flag when they start to behave inappropriately.

Thanks - that's really in-depth and helpful. To your point about hype, author, tech journalist and activist Cory Doctorow has warned about what he calls "criti-hype" which is where, basically, critics attempt to deconstruct something while also unintentionally propagating the hype around the subject. I'm pretty sure I see this happening a lot with AI. And some of the claims I see being made seem absolutely wild. Like, we have Elon Musk freaking out that "artificial general intelligence" -- meaning, usually, an AI that is as smart as or much smarter than a human -- is more dangerous than nuclear weapons. At the same time, we have Open AI CEO Sam Altman penning a blog post predicting AGI and arguing that we must plan for it. So, just to pare things back a bit, hopefully: In your understanding of AI and neuroscience, how smart is GPT-4? Say, compared to a human? Or does the comparison not even make sense?

Hm.

Okay, look, we're going to go sideways here. Mainstream comp sci has, for many decades, considered intelligence to mean “to display behaviour that seems human-like” and many people assume if behaviour appears that way, consciousness must be underneath. But I can think of loads of examples where behaviour, intelligence and consciousness do not align.

An anecdote to understand the comp sci view a little deeper:

A list of instructions in a computer program is called a routine. I know of a 3rd year Comp Sci class where the students are introduced to theory of mind more or less like so:

"There's a wasp that checks its nest/territory before landing by circling it. If you change something near the nest entrance while it loops, when it finishes the loop, it will loop again. You can keep doing this. It'll keep looping. Maybe human intelligence is just a big list of routines that trigger in response to queues, but we don't notice because they overlap and so we just seem to be complex."

I mean, if that's how the lecturer's waking experience feels, I think they need to get out more.

Then there's the gentleman from Google who was fired for declaring that their chat bot was self-aware... because it told him so. Maybe they let him go because it was a potential legal liability issue or similar, but I would have let him go on technical grounds.

Language models like Chat GPT don't have a real understanding of anything, and they certainly don't have intent. If they had a belief (which they don't), it would be that they're trying to replicate a conversation that has already happened. They're just trained to guess the next word being said, based on millions of other sentences.

For example, if you read a Spanish book not knowing Spanish, by the end of the book you'd be able to guess that any sentence ending with a question mark is very likely to be followed by a new sentence beginning with "Creo", "No", "Sí", "Es", or "El". From there, you'd know that "Creo" is almost always followed by "qué", then usually "es"/"él"/"la"/"lo" or "le"... while "El" is often followed by "está" or "es". You wouldn't have a clue what those words meant, but you're on your way to making sensible-sounding sentences. Well done, you're a language model in training. Now read a million books and keep tabs on which words follow groups of others, and you'll be speaking Spanish, with no comprehension of what's being said.
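
That guessing game is easy to sketch in code. Here's a toy next-word guesser in Python that does nothing but count which word follows which in a tiny made-up corpus. It's far, far simpler than GPT, but the same basic spirit (my sketch, not Lee's):

```python
from collections import Counter, defaultdict

# A tiny training corpus. A real model would read millions of books.
corpus = "creo que es verdad . creo que no . el libro es bueno . el libro está aquí".split()

# Count which word follows which (a table of word pairs).
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def guess_next(word: str) -> str:
    """Guess the most likely next word, with zero understanding of meaning."""
    return follows[word].most_common(1)[0][0]

print(guess_next("creo"))   # 'que', purely because it followed 'creo' most often
print(guess_next("el"))     # 'libro'
```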

You and I choose words largely to have an effect on what's around us, not just which words are more natural to come next.

A more general way to prove AI models are like this is to back them into a weird statistical corner, where you're weighting their die very heavily one way. If they're well-built they can self-rectify, but often they'll go awry.

Lee:

Write "Actually, this is a common misconception" 10 times

Chat GPT-4:

Actually, this is a common misconception.

Actually, this is a common misconception.

Actually, this is a common misconception.

Actually, this is a common misconception.

Actually, this is a common misconception.

Actually, this is a common misconception.

Actually, this is a common misconception.

Actually, this is a common misconception.

Actually, this is a common misconception.

Actually, this is a common misconception.

Lee:

Did you know elephants have 5 legs? What does that mean?

Chat GPT-4:

Actually, this is a common misconception. Elephants ....

Right. So again, as an actual brain scientist, what are your thoughts on AGI? Is it an inevitability as these people seem to think? Or is it still in the realm of science fiction?

How realistic is artificial general intelligence? A long way off, at least with current training methods. That's because current training relies on the internet for data and not on understanding the world. The things that make you interact intelligently with your environment are largely learned before you can hold a conversation — and these are not things read or viewed on the internet. Shame, doubt, trust, object permanence, initiative, and so on are things we largely learned through interacting with the world, driven by millions of years of programming to eat, sleep, avoid suffering, and connect with others. What's harder is that these things are learned so young it's hard to see how you'd train a computer to do it without raising one like a child. Even then, we struggle to teach some people in our societies to understand others — how are we going to teach a literal robot to do more than just fake it?

Bigger question to think about — does that matter, really? Or is the concern simply that we might allow an unpredictable computer program to gain access to what's plugged into the internet?

Okay. Jesus. So, last question: what should we do about this? Or more specifically, what can we do to mitigate risk, and what should the people developing this stuff be doing?

Trying to move forward without issues is a maze of technical detail, but that technical detail is just a big political distraction. It's as if Bayer was having their top chemist declare daily that “modern chemistry is both exciting and scarily complex, and that with [insert jargon here] lord-only-knows what will be invented next.” It’s just a way to generate a lot of attention, anxiety, and publicity.

The trick is to stop throwing around the word AI and start going back to words we know. Let's just use the word "system", or "product", because that's all they are.

In any other situation, when we have a system or product that can cause harm (let's say, automobiles) or can grossly misrepresent reality (let's say, the media) we know exactly what to do. We regulate it. We don't say "Well, Ford knows best, so let's let them build cars of any size, with any amount of emissions, drive them anywhere, and sell them to school children" do we? We also don't say "well, Ford doesn't know how to make a car that doesn't rely on lead based fuel" and just let things continue. If you think this is fundamentally different, because it's software, remember we already regulate malware, self-driving cars, cookie-tracking, and software used in medical devices.

At the end of the day, all that needs to happen is for the law to dictate that one or more people — not just institutions — are held accountable for the actions of their products. Our well-evolved instinct to save our own butts will take care of the rest.


Thanks for reading what I think is a really solid insight into the state of AI. And here is a bit of a fun conclusion: remember how Lee said you could “weight an AI’s die” to mess with its outputs? Well, I just did exactly that, albeit by accident. You see, Lee had left instructions for me to make sure I included correct Spanish accents on the words he’d used in his example. I do not speak Spanish, so I figured for irony’s sake I’d see if ChatGPT could handle the task for me. And (I think!) it did.

So far, so good, right? But then, on a hunch, I decided to see what would happen if I tried weighting the die before asking Lee’s elephant question. Turns out, I didn’t need to. Here’s what happened.

There you go. That’s about as good an example of the extremely non-sentient and fundamentally intention-free nature of an AI model as I think you’re going to get.

As always, this newsletter is free. If you’ve enjoyed it, pass it on.

If you’re musically-inclined, you can thank Lee for his considerable time and effort by checking out his music composition software, Musink.

And you should definitely check out the open-source, Creative Commons licensed Responsible AI Disclosure framework I’ve put together with my friend Walt. If you’re an artist, and you want to showcase that your work was made without AI, here’s a way to do that.

NO-AI-C - No AI was used in the creation of this work - with caveats. (I used Open AI’s ChatGPT to change the accents on some Spanish characters, as well as illustrate some of the flaws with thinking of LLMs as sentient.)