You use one every day; probably dozens of times before breakfast. It’s how you check out at the grocery store, turn up the volume in your car, or doom-scroll instead of doing literally anything else that would seem productive by comparison.

The touchscreen has quietly become one of the most familiar interfaces in modern life. So familiar, in fact, that most of us never stop to think about how it works. How can it tell the difference between a swipe and a tap? Why does it freak out when your hands are wet, or ignore you when they’re cold?

In this episode, we’re going beneath the surface—literally. We’ll look at where touchscreens came from, how they work, why they sometimes don’t, and how they’ve changed the way we design the digital world around us.

What is it?

While touchscreens feel like a relatively modern invention, their roots go back much further than you might expect. The first touchscreen is credited to Eric Arthur Johnson, who—as an engineer working at the fancy-pants-named Royal Radar Establishment in the UK—wrote an article in 1965 describing a capacitive touch interface. By the early ’70s, that interface was being used in air traffic control systems. These early screens allowed operators to interact directly with radar displays; a super novel idea at the time.

Well, not to be outdone, American inventor Dr. Samuel Hurst developed, and later patented, a resistive touchscreen in the late 1970s while working on an instrument to read x-ray data. Resistive touchscreens became popular because they were relatively inexpensive and could be operated with a stylus or, in a medical setting, a gloved hand.

In 1983, HP released the HP-150, one of the first touchscreen computers sold to consumers. But the HP-150 didn’t use capacitive or resistive touch—it used infrared light. A grid of tiny infrared beams ran across the front of the screen, and when a finger interrupted one, the system cut that finger in half… ha, I’m just kidding. It registered the position. It was clever, but it was prone to interference from dust and debris, was perhaps a bit ahead of its time, and never really caught on.

By the late ’80s and early ’90s, touchscreens were becoming more common in commercial applications like ATMs, industrial control panels, and early point-of-sale terminals. These were mostly resistive screens—durable and simple, even if they weren’t especially responsive.

Then came consumer devices. Apple’s Newton in 1993, Palm Pilots in the late ‘90s, and later, the Nintendo DS—all relied on resistive touch. These screens were designed for use with a stylus and could only register one point of contact at a time.

Multitouch, meanwhile, had been brewing in research labs for years. In 1982, the University of Toronto demonstrated a human-input multitouch system, and by 2005, researchers at NYU had developed more refined multitouch interfaces.

But the real tipping point was 2007, with the launch of a super obscure and now largely forgotten device called the iPhone. The equally obscure company, Apple, didn’t invent capacitive touchscreens or multitouch, but they were the first to package them in a way that felt seamless and truly consumer-ready. The iPhone used a projected capacitive touchscreen, which allowed for highly accurate and responsive multitouch gestures like swiping and pinching. Too bad it was such a flop.

From there, touchscreens became standard—not just in phones, but in tablets, appliances, cars, and kiosks. The technology continues to evolve, but the core interaction—reaching out and directly manipulating digital content with your fingers—has remained central.

How does it work?

As I’ve already mentioned, there are two main kinds of touchscreens: resistive and capacitive.

Resistive is the older type. Think stylus-based screens, some ATMs, and those touchscreen soda fountains with that little lag, so you end up spilling your Mr. Pibb all over your hands and looking totally lame in front of everyone at the 7-Eleven… as an example.

Here’s how they work: they have two thin, flexible layers with a tiny gap between them. When you press down, the layers touch, and the screen reads the voltage at the contact point to figure out where you pressed… albeit sometimes way too slowly, am I right?

Anyway, it’s super basic. But… no multitouch. It’s not very precise and it’s kinda slow.
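
If you’re curious what that looks like in practice, here’s a tiny Python sketch of the voltage-divider trick a classic 4-wire resistive controller uses. To be clear, the pin names and the drive_pin()/read_adc() helpers are hypothetical stand-ins for a real microcontroller’s GPIO and ADC interface, not an actual driver API.

```python
# A minimal sketch of how a 4-wire resistive controller turns a press into
# coordinates. drive_pin() and read_adc() are hypothetical stand-ins, NOT a
# real driver API: swap in your microcontroller's own GPIO/ADC calls.

ADC_MAX = 4095                  # full-scale reading of a 12-bit ADC
SCREEN_W, SCREEN_H = 320, 240   # display resolution in pixels

def read_touch(drive_pin, read_adc):
    # Put a voltage gradient across the X layer (X+ high, X- grounded),
    # then probe with the Y layer: at the contact point, the Y layer picks
    # up the local voltage -- a simple voltage divider.
    drive_pin("X+", high=True)
    drive_pin("X-", high=False)
    raw_x = read_adc("Y+")

    # Swap roles: gradient across the Y layer, probe with the X layer.
    drive_pin("Y+", high=True)
    drive_pin("Y-", high=False)
    raw_y = read_adc("X+")

    # Scale raw ADC counts into pixel coordinates.
    return raw_x * SCREEN_W // ADC_MAX, raw_y * SCREEN_H // ADC_MAX

# Fake hardware for demonstration: pretend both readings are mid-scale.
x, y = read_touch(lambda pin, high: None, lambda pin: 2048)
print(x, y)  # 160 120 -- roughly the center of a 320x240 screen
```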

Capacitive screens, on the other hand, which are used in most modern smartphones and tablets, rely on the electrical properties of your skin. Your body naturally carries a small electric charge, and when you touch the screen, it disturbs the screen’s own electrostatic field. Sensors embedded in the glass detect that change in capacitance and calculate the exact location of your touch based on where the disruption occurs.

This method doesn’t require pressure, just contact with something conductive. That’s why you can tap lightly and still get a response. But it’s also why things like gloves or plastic styluses don’t work unless they’re specifically designed to conduct electricity.

Modern capacitive touchscreens rely on a grid of tiny, transparent electrodes embedded just beneath the glass. These are arranged in rows and columns to form a sensing matrix, and the way that matrix detects your touch depends on two different techniques: self-capacitive sensing and mutual-capacitive sensing.

With self-capacitive sensing, each electrode in the grid is measured independently. With some nifty electrical engineering that is way too nerdy to get into, it can figure out which row or column is being touched. The upside of this is that it’s very sensitive, and it works well even if your finger is hovering just above the screen. The downside? With two or more fingers down, it only knows which rows and which columns are active, not how they pair up, so it can “see” phantom touch points that aren’t really there. This makes self-capacitive screens less ideal for true multitouch.
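
Here’s a toy illustration of that ghosting problem (my own example, not any vendor’s firmware). With two fingers down, the row and column readings alone can’t tell you which pairings are real:

```python
# A toy illustration of self-capacitance "ghosting". The controller only
# sees per-row and per-column activity, so two simultaneous touches yield
# four candidate intersections.

active_rows = [2, 7]   # rows where a capacitance change was sensed
active_cols = [1, 5]   # columns where a capacitance change was sensed

# Say the real touches are (row 2, col 1) and (row 7, col 5). From the
# row/column readings alone, every pairing is equally plausible:
candidates = [(r, c) for r in active_rows for c in active_cols]
print(candidates)
# [(2, 1), (2, 5), (7, 1), (7, 5)] -- two of these are "ghosts"
```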

Mutual-capacitive sensing is where things get a bit more clever. Instead of looking at each electrode on its own, mutual-capacitive sensing measures the relationship between the horizontal and vertical electrodes; think of it as checking how much charge is shared between two intersecting lines.

When your finger touches the screen at a specific point in that grid, it disrupts the flow of charge between those crossing electrodes. The system can pinpoint the exact x and y coordinates by analyzing this interference. 

The upside with this method is that it’s great for multitouch. Because it measures intersections, the system can detect multiple fingers on the screen at once and track them accurately. The downside, however, is that it’s slightly more complex to design and compute in real time, especially on larger displays.
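
To make that concrete, here’s a little sketch of a mutual-capacitance scan over a fake 5x5 grid; the readings and the threshold are invented for illustration. Because every intersection is measured on its own, each finger shows up as its own unambiguous peak:

```python
# A sketch of a mutual-capacitance scan over a fake 5x5 electrode grid.
# Each cell holds the drop in coupled charge at one row/column
# intersection; a finger appears as a local peak.

THRESHOLD = 30  # arbitrary "a finger is here" cutoff for this toy example

deltas = [
    [0,  1,  0,  2, 0],
    [1, 45, 12,  0, 1],   # strong peak around intersection (1, 1)
    [0, 10,  3,  1, 0],
    [2,  0,  1, 38, 9],   # a second, independent peak around (3, 3)
    [0,  1,  0,  8, 2],
]

touches = [(r, c, v)
           for r, row in enumerate(deltas)
           for c, v in enumerate(row)
           if v > THRESHOLD]
print(touches)  # [(1, 1, 45), (3, 3, 38)] -- two fingers, zero ambiguity
```

Real controllers go a step further, interpolating between neighboring intersections (a centroid, roughly) to land on coordinates finer than the electrode spacing itself.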

In most modern devices, both methods are used together. Self-capacitive sensing helps with things like detecting when your finger is hovering, or enhancing palm rejection. Mutual-capacitive sensing handles the precise tracking of swipes, taps, and multi-finger gestures.

Why does it matter?

Typically, touchscreens are very reliable, until they’re not. Take liquids, for example. A few raindrops on your screen, or let’s say some spilled Mr. Pibb, can cause some issues. That’s because water is conductive, just like your finger. A capacitive screen can’t always tell the difference between intentional touches and stray droplets, so it starts registering ghost taps: phantom inputs from nowhere. Spooky.
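
So how might a screen tell a droplet from a digit? Here’s one toy heuristic, purely my own illustration: a fingertip tends to produce one tight, strong peak, while a droplet smears a weaker change across more of the grid.

```python
# A toy heuristic for telling a droplet from a finger -- purely
# illustrative, not any real controller's firmware. A fingertip makes one
# tight, strong peak; a droplet spreads a weaker change over a wider area.

def looks_like_finger(blob, min_peak=40, max_area=6):
    """blob: baseline-subtracted readings for one connected contact patch."""
    return max(blob) >= min_peak and len(blob) <= max_area

print(looks_like_finger([12, 45, 18]))              # True: tight, strong peak
print(looks_like_finger([8, 11, 9, 10, 7, 9, 8]))   # False: weak and smeared
```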

Cracks introduce a different kind of confusion. If the glass is broken but the underlying sensor layer is intact, the display might still show a normal image. But the touchscreen might behave unpredictably: registering false touches, missing real ones, or reacting slowly. That’s because the electrodes under the glass rely on consistent contact and spacing, and a cracked surface disrupts that balance.

Then there’s the strange fact that sometimes your finger doesn’t seem to work at all; usually when it’s cold. Cold fingers have lower blood flow, and that reduces the conductivity of your skin. To the screen, you’re barely registering at all. Add gloves into the mix (unless they’re specially designed with conductive thread) and you’re essentially invisible.

To help compensate for all this, modern screens often incorporate self-capacitive sensing, which can detect when something conductive, like a finger, is nearby, even if it’s not pressing down firmly. This helps with things like palm rejection, a feature that’s crucial for drawing tablets and stylus-heavy devices. When you rest your hand on the screen while sketching, the system has to figure out which contact points to ignore. It does this by comparing touch pressure, timing, location, and motion. If it senses a stationary, flat contact that’s not moving independently from your stylus, it assumes: “probably just your palm.”
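
Sketched out in code, that palm-rejection logic might look something like this. The cues (contact size, speed, whether a stylus is active) come straight from the description above, but the thresholds are invented; real devices tune these empirically and use far more signals.

```python
# A simplified sketch of palm rejection using the cues described above:
# contact size, movement, and whether a stylus is active. The numbers are
# invented for illustration.

def classify_contact(area_mm2, speed_mm_s, stylus_active):
    # Big, mostly stationary blobs that appear while a stylus is in use
    # are almost certainly a resting palm, not an intentional touch.
    if area_mm2 > 200 and speed_mm_s < 5 and stylus_active:
        return "palm (ignore)"
    # Small contacts look like deliberate finger input.
    if area_mm2 < 100:
        return "finger (accept)"
    return "ambiguous (wait for more frames)"

print(classify_contact(area_mm2=350, speed_mm_s=2, stylus_active=True))
# palm (ignore)
print(classify_contact(area_mm2=60, speed_mm_s=40, stylus_active=False))
# finger (accept)
```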

Meanwhile, to make up for the lack of physical buttons, many devices use haptics: tiny vibrations or pulses triggered when you tap the screen. Under the hood, this usually involves a small vibration motor or a linear actuator. It’s a bit of sensory trickery that makes a flat surface feel interactive and responsive; essentially faking tactile feedback so your brain doesn’t miss physical keys too much.
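
The wiring-up is conceptually simple: watch for a touch-down event and fire a short pulse. Here’s a bare-bones sketch, with pulse_actuator() as a hypothetical stand-in for whatever a real haptics driver actually exposes.

```python
# A bare-bones sketch of haptic wiring: fire a short actuator pulse the
# moment a touch-down event lands. pulse_actuator() is a hypothetical
# stand-in, not a real driver call.

def on_touch_event(event, pulse_actuator):
    if event["type"] == "down":
        # A very short, sharp pulse reads as a "click" under your finger;
        # longer, softer pulses read more like a buzz or a thud.
        pulse_actuator(duration_ms=10, strength=0.8)

on_touch_event({"type": "down"},
               lambda duration_ms, strength:
                   print(f"buzz: {duration_ms} ms at {strength:.0%}"))
# buzz: 10 ms at 80%
```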

When all of this works together—gesture recognition, palm rejection, haptics, and smart sensing—you get the illusion of a screen that just knows what you want. 

Design considerations

The rise of touchscreens didn’t just change what our devices looked like; it fundamentally changed how we interact with them. And in the process, it forced designers to rethink decades of interface conventions.

In the early days of computing, interfaces were built around the keyboard and mouse. You had precise control, tiny targets, and interactions that depended on hovering, right-clicking, or dragging. But the moment we replaced that precision with the blunt instrument of a finger, everything had to scale up. Literally.

Buttons had to get bigger. Menus had to collapse into icons. Interfaces had to become flatter, simpler, and more forgiving. Designers started talking about the fat finger problem: the idea that people weren’t just pointing anymore, they were poking. And that meant you couldn’t afford tiny tap targets. Mis-taps were frustrating, especially when there was no “hover” state to preview what would happen.
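
One common fix is “hit slop”: quietly accepting taps that land a little outside a control’s visible bounds. Here’s a small sketch; the slop value is arbitrary, though for sizing, the usual platform guidance is targets of roughly 7–10 mm (44 points on iOS, 48 dp on Android).

```python
# A small sketch of "hit slop": accepting taps that land slightly outside
# a control's visible bounds. The 8-pixel slop is arbitrary.

def hit_test(x, y, rect, slop=8):
    # rect is (left, top, width, height) in pixels.
    left, top, width, height = rect
    return (left - slop <= x <= left + width + slop and
            top - slop <= y <= top + height + slop)

save_button = (100, 100, 60, 24)       # small visual footprint...
print(hit_test(96, 128, save_button))  # True: ...but a forgiving tap target
```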

It wasn’t just about design; it was about behavior. Gestures like swiping, pinching, and dragging became new forms of language. You weren’t just clicking; now you were flipping pages, tossing away emails, stretching photos.

Touchscreens brought interface metaphors to life in a more literal way. But this new flexibility came with trade-offs. Nowhere is that clearer than in cars.

As touchscreens started showing up in dashboards, designers embraced the clean look; big glossy panels replacing clusters of knobs and dials. So, sexy. But in the real world, those knobs and dials had a purpose: you could operate them without looking. Adjust the temperature, skip a track, turn up the volume—all by feel. A touchscreen demands your visual attention. Even finding the right spot to tap can take your eyes off the road for longer than you think.

Some automakers are now backing away from all-screen designs, reintroducing physical controls where they matter most. It’s a reminder that just because something can be a touchscreen doesn’t mean it always should be.

Accessibility added another layer of complexity—and opportunity. For people with vision impairments, touchscreens can be either a barrier or a breakthrough. Screen readers and voice assistants have made smartphones more accessible than ever, but only when designers follow accessibility guidelines. Features like haptic cues, voice navigation, adjustable contrast, and gesture-based shortcuts have opened up entirely new possibilities.

Touchscreens have made technology more direct, more personal, and more portable. But designing for them isn’t about cramming everything onto glass—it’s about making that glass feel intuitive, responsive, and even invisible.

Conclusion 

Touchscreens are one of those technologies that feel simple on the surface. You tap, it responds. But behind that glass is a carefully orchestrated mix of hardware, software, and design. Tiny electrical fields. Sensor grids. Algorithms trained to tell the difference between a fingertip and a water droplet… or a palm and a pen.

Over the decades, we’ve gone from lab prototypes and military radars to devices that live in our pockets, on our wrists, and in our dashboards. And along the way, the touchscreen reshaped how we build—and think about—interfaces. It challenged designers to reimagine not just how things look, but how they feel to use.

Of course, it’s not all perfect. Wet fingers, cracked glass, mystery ghost taps—these can cause some weirdness. But most of the time, we barely notice. And that’s kind of the point. The best tech often fades into the background until it’s broken or missing.

So the next time your phone ignores your cold thumb—or your car’s touchscreen buries the control for the seat heater three menus deep—you’ll at least know what’s going on under the hood.