"I Agree... But Do I Really Agree?" by Javier Vergara
Let’s talk about terms and conditions and privacy notices. We’ve all ticked that little box saying "I’ve read and understood everything and hereby sell you my soul"—but has anyone actually read them? I know I haven’t. (Okay, I’ve read a few, but only as part of my job.) And I’m a lawyer specialising in IT. Imagine that!
If even I don’t read the legal terms of the services I use, do you think most people do? Probably not. And even if someone does, do they really understand them? Doubtful. Most terms are written in a dialect called Legalese, somewhere between Tolkien and Kafka. Take this real example:
"The Provider may terminate this Agreement ipso facto and without prejudice if the User commits a material breach, inter alia, failure to pay or comply with usage terms. Relevant provisions shall survive mutatis mutandis."
Got that? Yeah, me neither—and I do this for a living.
The truth is, most online terms and conditions score horribly on readability tests. That same clause? It’s written for someone with a college degree. And even then, good luck. So why read something time-consuming and complex if (1) you can’t negotiate it, and (2) you have better things to do?
In fact, research suggests up to 98% of people accept online terms without reading a single word. (Don’t quote me—just Google it. It’s wild.) As for the other 2%? Probably just an error margin... or a handful of Tolkien scholars and Kafka enthusiasts treating terms and conditions like bedtime stories.
What we have is rational ignorance in action. Nobody reads, so companies have zero incentive to make their terms accessible. In fact, the incentive is the opposite: people don’t read anyway, so better to load the terms with legal protections for the company. Yes, legislation sets limits on how far they can go, but a clever legal team can still draft terms that stretch those limits without breaking them.
The end result? Users are at the mercy of whatever the platform decides. Their only real option is to stop using the service—which isn’t always realistic, especially when there’s no good alternative.
So yes, the system is broken. But does it need to be fixed? You might think "not really"—if you’ve never had issues, maybe it feels like no big deal. But it is a big deal. Ever heard of "deplatforming"? That’s what happened to Trump in 2021, and it’s what happened to me—I got booted from WhatsApp for "spamming" after sending a Christmas greeting to 50 friends and family. All legit contacts. Still, I violated Meta’s policies, and poof—I was gone. Forced to move permanently to Signal.
And no, I hadn’t read Meta’s terms either. Like the 98%, I clicked "I agree" without a second thought.
Since then, I’ve been thinking a lot about this. How do we fix a broken system? More regulation? Meh—I’m really not sold. Every time someone says "we just need better laws," I roll my eyes a little. Regulation sounds good in theory, but in practice? It’s a mess. It bloats bureaucracy, drives up compliance costs, and throws a wet blanket over innovation. It’s like trying to fix a leaky faucet by flooding the entire house. Don’t believe me? Just Google "AI memes EU vs US vs China." You'll see exactly what I mean. And worst of all—it doesn’t even tackle the actual problem. People still don’t read or understand what they’re agreeing to. No law is going to magically make 20-page privacy policies readable or make users suddenly care. So what's the point?
I’ll admit something: even after getting kicked off WhatsApp, I still don’t read the full terms of every new service I use. I just don’t have the time or patience. But I do try to understand them now. Here’s how: I copy-paste the terms into ChatGPT and ask for a 200-word summary of the main risks and takeaways—in plain English. I’ve been doing this for about six months now, and I save the summaries in a spreadsheet.
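For the curious, the workflow above can be sketched in a few lines of Python. This is only an illustration, not my actual setup: it assumes the OpenAI Python SDK with an API key in the `OPENAI_API_KEY` environment variable, the model name is a placeholder, and the "spreadsheet" here is just a CSV file.

```python
# Sketch of the workflow: ask an LLM for a plain-English risk summary of a
# service's terms, then log the result to a CSV "spreadsheet".
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable; model name is illustrative.
import csv
from datetime import date

PROMPT_TEMPLATE = (
    "Summarise the following terms and conditions in at most 200 words "
    "of plain English. Focus on the main risks for the user: account "
    "termination, data sharing, and liability.\n\n{terms}"
)

def build_prompt(terms: str) -> str:
    """Fill the summary prompt with the pasted terms."""
    return PROMPT_TEMPLATE.format(terms=terms)

def summarise_terms(terms: str) -> str:
    """Send the prompt to the model and return its plain-English summary."""
    from openai import OpenAI  # imported lazily; the logging part works offline
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": build_prompt(terms)}],
    )
    return response.choices[0].message.content

def log_summary(service: str, summary: str, path: str = "terms_log.csv") -> None:
    """Append the dated summary to a simple CSV log, one row per service."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), service, summary])
```

In practice you would paste the terms into `summarise_terms` and pass the result to `log_summary`; the point is just that the whole habit is a dozen lines of glue, not a product.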
Sure, AI isn’t perfect—it sometimes misinterprets clauses. But this approach has massively improved my awareness of what I’m agreeing to, where my data is going, and what I should (and shouldn’t) do. I now make more informed choices when choosing services. That’s a win.
Still, I know most people won’t go down this path. They’ll keep hitting "I agree" without a second glance—and most of the time, that won’t cause any real harm. Until it does.
Like getting permanently banned from a platform you use to talk to your family. Or losing access to your cloud storage with years of photos and documents. Or watching your location data quietly fuel some targeted ad campaign that knows a bit too much about your weekend habits. Or having your account shadowbanned, throttled, or locked because of a vague "policy violation" you didn’t know existed.
It’s all just harmless boilerplate—until the consequences suddenly aren’t so theoretical.
So what’s the solution? Do we just tell people to start pasting legal docs into their favourite LLM? It’s worked for me, but maybe it’s not scalable. Still, it shows there’s room for market-driven solutions—tools that help users understand what they’re signing up for, without burying them in legal jargon.
The system might be broken. But it’s also an opportunity. Maybe it’s time for UX to meet the law. And if the market can deliver something smart, simple, and actually helpful—then maybe, just maybe, we’ll stop clicking "I agree" like it’s nothing.
