When I was researching topics to talk about for the podcast, the concept of neural networks seemed like a totally reasonable one to discuss since it’s one specific method used within machine learning. But then I started reading about it and thought “Nope, this is a terrible idea Chris, you should not do this one, because you’ll just confuse people more than they were before and they will never listen to you again.” I gotta admit, I make a pretty good argument. But, I’m also a notoriously terrible listener. Then it dawned on me that this whole internal monologue was a great example of me using my built-in neural network. Mind blown!
So let’s explore how neural networks work, where they show up in our daily lives, and why they might be one of the most fascinating — and mysterious — technologies of our time.
Back to Basics
In the first episode, I talked about machine learning in general terms, as a way for machines to learn through pattern recognition, and about the various ways that learning can be structured.
A neural network, on the other hand, is a machine learning model inspired by the human brain, one that tries to recognize patterns the way our brains do. But it’s not trying to think like a brain. Instead, it borrows the structure of one. Me notwithstanding, the human brain is arguably the most powerful learning system we know. It can recognize faces, understand language, solve problems, and adapt, all from experience, without being explicitly programmed.
Neural networks don’t replicate the brain, but they borrow the idea of learning through connections. Just like our brains use networks of neurons that fire together and strengthen over time, artificial neural networks adjust the strength of their internal connections to improve performance as they see more data.
By mimicking this structure (lots of simple units working together and adjusting based on experience), you get a system that can:
- Handle complexity
- Learn from examples
- And generalize from what it’s seen to things it hasn’t
In other words, modeling after the brain gives us a path to build machines that can learn, not just compute.
How It Works
So how does this actually work?
Let’s say you take a picture of your cat. Because, of course, cats. Your phone breaks it down into tiny pixels — just numbers, really. Maybe 1s and 0s, or color values like red 128, green 64, blue 200. Those numbers are fed into the first layer of the neural network: the input layer.
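To make the “just numbers” idea concrete, here’s a tiny Python sketch. The “photo” is made up (a 2x2 grid instead of millions of pixels), but the flattening into an input layer works the same way:

```python
import numpy as np

# A made-up 2x2 "photo": each pixel is (red, green, blue) values from 0-255.
image = np.array([
    [[128,  64, 200], [255, 255, 255]],
    [[  0,   0,   0], [128,  64, 200]],
])

# The input layer just sees a flat list of numbers,
# usually scaled down to the 0-1 range.
input_vector = image.reshape(-1) / 255.0
print(input_vector.shape)  # (12,) -- 4 pixels x 3 color values each
```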
From there, things get… a little complicated. But I’ll try to simplify as best as my neural network will allow.
Think of the network like a series of filters. The first layer might detect basic patterns like edges, colors, and shapes. The next layer might combine those into eyes, ears, fur. And eventually, the final layer says, “Oh, that’s totally a cat.”
The system learns by comparing its guess to the real answer. If it’s wrong — let’s say it thought your cat was a dog — it goes back and tweaks the math. This process is called backpropagation — basically, learning from mistakes.
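Here’s a minimal sketch of that guess-compare-tweak loop in Python, shrunk down to a single artificial neuron and some made-up numbers. A real network does the same thing, just across millions of weights and many layers:

```python
import numpy as np

# Made-up training data: two measurements per example,
# and the "right answers" (1 = cat, 0 = not cat).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array([1.0, 1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
weights = rng.normal(size=2)  # the connection strengths
bias = 0.0
learning_rate = 0.5           # how big each "tweak" is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    guess = sigmoid(X @ weights + bias)                # make a guess
    error = guess - y                                  # compare to the real answer
    weights -= learning_rate * (X.T @ error) / len(y)  # tweak the math
    bias -= learning_rate * error.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # close to [1, 1, 0, 0]
```

The last two lines of the loop are the “learning from mistakes” part: every wrong guess nudges the connection strengths a little, and thousands of nudges add up.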
We can think of this like teaching a child to recognize animals.
Let’s say you show them hundreds of pictures, and each time you say, “This is a dog,” or “This is not a dog.”
At first, they might just guess — maybe they think anything with four legs is a dog. But over time, as they see more examples and get corrected, they start to notice the important details: the shape of the ears, the size of the nose, the texture of the fur.
They’re not memorizing the pictures — they’re learning the patterns.
Neural networks learn the same way. You feed them tons of labeled examples, and the network adjusts its internal settings each time it makes a mistake. Eventually, it gets really good at spotting the difference. When a network has many layers like this, each one learning a different level of pattern, this type of machine learning is also referred to as deep learning.
It doesn’t know what a dog is in any human sense — it just becomes very, very good at identifying the visual clues that tend to show up in dog pictures, based on patterns it’s seen before.
This process — showing labeled examples and correcting mistakes — is what we call supervised learning. I talked about this in the last episode. That’s the most common way that neural networks are trained today. The learning is supervised by giving the model the right answers up front, and it learns from the difference between its guess and the truth.
There are other kinds of learning, too, which I also mentioned: unsupervised learning, where the model has to find patterns without labels, and reinforcement learning, where it learns by trial and error. But for most tasks, like image recognition, speech-to-text, even detecting cancer (and of course, cats), supervised learning is the go-to approach.
Why It’s Useful
So, we’re now at the part of the podcast where you might be asking yourself, “Great, who cares? How does this really affect my daily life?”
Well, here’s an example that I think is pretty amazing.
In 2017, a team of researchers at Stanford University used a special kind of neural network called a convolutional neural network to recognize melanoma. This type of neural network is really good at handling visual data — like photos, videos, or anything with pixels.
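Just to give a feel for what that “series of filters” looks like in code, here’s a toy convolutional network in Python using PyTorch. To be clear, this is not the Stanford model (the real one was far larger and pre-trained on millions of general images first); it’s only a sketch of the general shape:

```python
import torch
import torch.nn as nn

# A toy convolutional network: filters, then more filters, then a verdict.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edges, colors
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: textures, shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),  # final layer: "benign" vs "melanoma" scores
)

# One fake 224x224 color image, just to show the pieces fit together.
fake_image = torch.randn(1, 3, 224, 224)
print(model(fake_image).shape)  # torch.Size([1, 2]) -- two scores
```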
They fed it nearly 130,000 images of skin conditions — everything from harmless moles to potentially fatal melanomas. No rules. No if-then statements. Just thousands and thousands of labeled examples.
Then came the real test. They asked the AI to analyze new images it had never seen before — and pitted its answers against 21 board-certified dermatologists.
The result?
The AI matched the experts, diagnosis for diagnosis. In some cases, it even caught signs of cancer that the doctors missed. Oh, man.
This was a neural network doing what it does best: learning from data, finding patterns, and making decisions — all without ever being explicitly told what a melanoma is supposed to look like.
An algorithm, trained not in medical school but on a mountain of images, was able to spot cancer on par with trained professionals. Not because it understood what cancer is, but because it had seen enough examples to know what it usually looks like.
That’s the power of neural networks. They don’t memorize — they generalize. And in this case, that power could literally save lives. So, that’s pretty cool. Better than taking my job at least.
Limitations
Now, for all their power, neural networks come with some serious limitations — and one of the biggest is what’s often called the “black box” problem.
Here’s what that means: When a neural network makes a decision — say, labeling a photo as a dog (naturally), or predicting someone’s creditworthiness — we don’t always know why it made that decision. Sure, we can trace the math. We can see which layers were activated. But we can’t always interpret what that actually means in human terms.
It’s not like a checklist you can review: “It has four legs, floppy ears, and a wagging tail, so it must be a dog.” Instead, the network’s internal reasoning is buried in millions of tiny weights and calculations that don’t easily translate into something we can read or explain. That deep stack of layers, the “deep” in deep learning, is exactly what makes the reasoning so hard to trace.
So even if the model gives the right answer, we can’t always say why — and when it gives the wrong one, that gets even trickier.
Here’s why it matters. Imagine an AI system is used to screen job applications, or approve home loans, or flag suspicious behavior at an airport. If it rejects someone unfairly — or worse, systematically favors one group over another — how do we catch that? Who’s responsible? And how do you appeal a decision if no one knows how the decision was made?
This is more than a technical issue — it’s a trust issue. And an ethical one.
And it’s part of what makes neural networks both powerful and potentially dangerous. They can help detect cancer, sure. But they can also reinforce human bias, quietly and at scale, unless we’re keeping an eye on it. This is why researchers are working on something called explainable AI: tools and techniques that help us peek inside the black box. Because it’s not enough for a system to be smart; we need it to be understandable and trustworthy.
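To make “peeking inside the black box” a little more concrete: one of the simplest explainable-AI tricks is occlusion. You cover up one patch of the image at a time and watch how the model’s confidence changes. Here’s a rough Python sketch; the model and image are placeholders (say, the toy network above), not a real diagnostic system:

```python
import torch

def occlusion_map(model, image, patch=32):
    """Cover one square of the image at a time and record how much
    the model's top score drops. Big drops mark the regions the
    model was relying on -- a crude peek inside the black box."""
    model.eval()
    _, height, width = image.shape  # image: (channels, height, width)
    heat = torch.zeros(height // patch, width // patch)
    with torch.no_grad():
        base_score = model(image.unsqueeze(0)).max().item()
        for i in range(0, height - patch + 1, patch):
            for j in range(0, width - patch + 1, patch):
                blocked = image.clone()
                blocked[:, i:i+patch, j:j+patch] = 0.5  # gray square
                score = model(blocked.unsqueeze(0)).max().item()
                heat[i // patch, j // patch] = base_score - score
    return heat
```

A map like this doesn’t fully explain the model, but it at least shows where it was looking, which is a start.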
Conclusion
Just like all machine learning models, neural networks don’t dream, or feel, or understand the world the way we do. They sometimes get things wrong, they can be biased in their decisions, and they don’t actually “understand” anything the way a person does; they just find patterns. This is important to keep in mind so we don’t totally freak out about AI taking over and turning us into batteries.
Still, neural networks have opened the door to huge leaps in AI. And they’re only getting more powerful. From recognizing your voice (and cats of course) to enabling self-driving cars, neural networks form the foundation of deep learning and make modern AI possible.