Transcript of Episode #1043

Memory Integrity Enforcement

Description: Are Bitcoin ATMs anything more than scamming terminals? Ransomware hits the Uvalde school district and Jaguar. Did "Scattered LapSus Hunters" just throw in the towel? Germany, for one, to vote "no" on Chat Control. Russia's new MAX messenger has startup troubles. Samsung follows Apple's WhatsApp patch chain. Shocker: UK school hacks are mostly by students. HackerOne was hacked. Connected washing machines in Amsterdam hacked. DDoS breaks another record. Bluesky to implement conditional age verification. Enforcement actions for Global Privacy Control. Might Apple have finally beaten vulnerabilities?

High quality (64 kbps) mp3 audio file URL: http://media.GRC.com/sn/SN-1043.mp3

Quarter size (16 kbps) mp3 audio file URL: http://media.GRC.com/sn/sn-1043-lq.mp3

SHOW TEASE: It's time for Security Now!. Steve Gibson is here. Who would have thought it, Russia's new enforced messenger has had startup problems. What a shock. Steve's going to tell the story of how he hacked the dorm washing machines. And then we're going to talk about an amazing improvement Apple has made to its own chips that may eliminate 90% of security problems. Wow. All that coming up next on Security Now!.

Leo Laporte: This is Security Now! with Steve Gibson, Episode 1043, recorded Tuesday, September 16th, 2025: Memory Integrity Enforcement.

It's time for Security Now! - yay, it's Tuesday! - the show where we explain and help you understand everything that's going on.

Steve Gibson: Oh, do we have one today, my friend.

Leo: Uh-oh. This is Steve Gibson, ladies and gentlemen. Get your - is this a propeller hat episode?

Steve: Well, it's titled "Memory Integrity Enforcement."

Leo: Okay.

Steve: Which is the technology in the A19 chips that Apple announced a week ago.

Leo: Oh, yes. Is this like ASLR? Is it like...

Steve: This is - no, this is: if the only security problems that existed were use-after-free and buffer overruns, or use of memory you don't own...

Leo: Yeah?

Steve: They would all be gone.

Leo: Well, that's good.

Steve: It's huge. It is...

Leo: Because that's where most of the security exploits start; right?

Steve: Way most. And in fact I was, before I remembered that it was possible to have other types of bugs, I was dancing around thinking, well, it's over. They won.

Leo: They fixed it.

Steve: But, oh, there are - it is possible to have a different kind of problem. But, oh, far and away, mostly, like that dumb Adobe DNG image problem that embarrassed Apple a couple weeks ago that coupled with the WhatsApp exploit created - it allowed targeted attacks on WhatsApp users. That would have never happened if MIE was in place.

Leo: Interesting.

Steve: I mean, they spent five years, although of course they had to blow it up to half a decade, it's like, okay, yes, also known as five years, to get this done. But anyway, our listeners always say they like our deep propeller-head episodes. Well, get out your galoshes because this one's going to be deep. But Leo, I interrupted your...

Leo: Well, I was just going to say, here's Steve Gibson, so that's good enough.

Steve: So we've got Security Now! Episode 1043 for the 16th. This time the show notes are properly dated at the top. We're going to look at whether Bitcoin ATMs are ever anything more than just scamming terminals. Two instances of ransomware I wanted to talk about, one that hit the unfortunately well-known Uvalde school district. And also Jaguar, which had some surprising downstream consequences. We're going to ask the question did the self-named Scattered LapSus Hunters hybrid group just throw in the towel? Germany has said they're going to vote no on chat control. Russia's newly released Max Messenger is having some startup troubles. I know, who would be surprised? Samsung is following Apple's change in the WhatsApp patch chain. And a shocker, Leo, UK school hacks turn out to be mostly carried out by students.

Leo: They thought they could rein them in, but no.

Steve: We have some numbers. Also, unfortunately, HackerOne was hacked.

Leo: Oh.

Steve: Which is not good. But again, it's that centralized hack that just keeps on giving. We've also got connected washing machines in Amsterdam having been hacked. The university is going to take measures. DDoS has broken another record. Bluesky has announced they're going to be implementing some conditional age verification in other states.

Leo: Oh, boy.

Steve: We're going to look at enforcement actions coming for Global Privacy Control. That's that GPC notice that sort of replaced DNT, the Do Not Track, which never got off the ground. And we're going to ask the question, might Apple have finally beaten vulnerabilities? Actually, most vulnerabilities, but it's a huge win.

Leo: That's great. That's amazing.

Steve: So we're going to do a deep dive into what it is that they did, the history of this campaign and, you know, what this is. This is new hardware introduced last week in the A19 chips. And as Apple put it, they - I don't remember now their exact word. It wasn't astonishing, it wasn't - it was some - they said they dedicated a huge - they had a different word, I'll end up sharing it when we get there - a huge percentage of silicon to this. I mean, they are serious about keeping those targeted attacks from happening. Remember, no normal users are ever hit by this anymore. You know, we covered the news of that hobbyist who'd given up hacking Apple long time ago because it was no fun anymore, if it got to the point where all the low-hanging fruit was, like, up so high that you just couldn't reach.

Leo: Right. There is no low-hanging fruit, yeah.

Steve: No, no.

Leo: Well, that fruit is now way up there.

Steve: That's right. And costs millions of dollars to pluck. So, and of course we've got a great Picture of the Week, which we'll get to after our first announcement.

Leo: Now.

Steve: I guarantee you, Leo.

Leo: Yeah, yeah. I think we have a pretty good idea.

Steve: Okay. So this picture raised some questions. I gave it the headline "What exactly is the plan here?"

Leo: All right. I'm going to scroll up. I have not seen it. Okay. There's a tree and a fire hydrant. There's a very important gate around three quarters of the fire hydrant.

Steve: And the business end of the fire hydrant, where the hose connects, is blocked by the gate.

Leo: Oh, okay.

Steve: So, okay, now, because this email went out yesterday afternoon, I've had some feedback from our listeners, with their conjectures, answering my question, implicit, "What exactly is the plan here?" So first of all, for those who aren't seeing the video, there is a beautifully painted fire hydrant. The fire hydrant is red. It's wearing a yellow-painted cap on top. I mean, it's lovely. And so, okay, the problem is that, you know, a fire hydrant is all about access. You need, the fire department needs to hook their hose up if they need water badly. And this fire hydrant, beautifully painted though it is, has been surrounded, as you said, Leo, on three sides, the front business side included, by this weird sort of, I mean, it's got to be a custom gate.

Leo: It's a beautiful gate. It's got a little star in it, yeah.

Steve: It is gorgeous. It's got a - yeah. But, I mean, you just can't go - what, do you put in Amazon "I would like a fire hydrant fence?" I mean, it looks like it was made to order for this fire hydrant. So, you know, what? Anyway, the best feedback that I've seen from one of our listeners was, you know, after the guy painted his fire hydrant, he probably was upset at the idea of dogs peeing on it.

Leo: Yeah.

Steve: So, and given the fact that the ground, the grass below it and in front of it is brown, there may have been some urination occurring in the past.

Leo: Yeah, yeah. May have some of that, yeah, yeah, something going on there.

Steve: So, yeah, that was - I think that's the best idea. I mean, obviously if the fire department actually needed it, one of the burly firemen would just grab it, toss it up in the air, and get it out of the way. Presumably it's not cemented into the grass. And, I mean, I zoomed in and looked to see whether it could open. Does the gate hinge? Doesn't look to me like it does. It's just some weird, like, okay.

Leo: Oh, that's interesting.

Steve: Like it's sort of like "In case of fire, break glass." Right? So presumably, in case of fire, throw fence, and then get access to the hydrant.

Leo: It's quite cute, though. It's quite - it's a nice little looking setup.

Steve: It's a statement, yes.

Leo: It's a statement, yes.

Steve: Yeah. The only question is, what is it saying?

Leo: What is it saying is the question. Exactly.

Steve: You know, just keep your dog walking. Don't stop here. So, okay. So the District of Columbia's Office of the Attorney General has filed - when you hear the facts, this is a well-deserved lawsuit against the largest crypto ATM operator in the U.S. That's a company known as Athena Bitcoin. And we've talked about the problems, sort of endemic problems with crypto ATMs. Excuse me, I have the hiccups.

This lawsuit alleges with, again, ample evidence, as we'll see, that the company knew, Athena Bitcoin knew its Bitcoin ATMs were being used to collect funds from victims of illegal scam operations. But rather than stopping the transfers, it instead charged large hidden fees, then refused to provide victims with refunds when they were due.

So overall, the concept, you know, theoretical, the idea of a Bitcoin ATM, of having one, I think is cool; right? It serves as a real-world interface to a purely ephemeral digital currency. But we've learned that the number one enabling factor for ransomware was the emergence of cryptocurrency. One of the principal lessons to be learned broadly from the Internet is that, sadly, anytime there's the freedom of anonymity, there will be abuse. So it should come as no surprise that scammers were quick to jump onto Bitcoin ATMs as the means for suckering the uninformed into all manner of online scams. We've previously touched on the problem of ATM abuse, as I said. And now this lawsuit gives us a window into how bad, exactly, it is.

What's somewhat surprising is that these Bitcoin ATMs see such low levels of NON-fraudulent, which is to say, legitimate use. Believe it or not, only 7% of Athena's Bitcoin ATM transactions were legitimate. Officials say that 93% of all deposits made across the seven Bitcoin ATMs which Athena operates in Washington, D.C., were the result of scams. 93% is crap, is like, you know, someone sending you email saying that your webcam was on, and they saw you doing something that you don't want the world to know about; and unless you pay them, you know, go to your local Bitcoin ATM and send some money, they're going to release this to the world, that kind of nonsense. So scammers would trick victims into going to an ATM to transfer funds into the scammer's bitcoin account.

So, okay, that's bad enough. But the D.C. Attorney General alleges first that Athena knew that allowing users to deposit funds into accounts they don't own would be abused for scams. They did nothing to stop the scams beyond displaying what was obviously an ineffective warning on the ATM screens, because nobody took the warning to heart. The Attorney General, whose name is Brian L. Schwalb, claims that Athena instead applied large fees. Instead of adequately warning people and making it clear that there was a high likelihood of them being scammed, they charged horrendous fees.

The fees, which were not visible to the customers, thus hidden, reached up to 26% of the transaction amount, which is almost 100 times the fees charged by Athena's competitors, which range from around 0.24% to as high as 3% - nowhere even approaching 26%. As a consequence, scammed individuals were victimized essentially twice: first by the scammers themselves, and then by Athena, which was riding along with a 26% surcharge for the privilege of being scammed in the first place.

So the median loss per victim - meaning the amount where as many people paid more than it as paid less - was $8,000. Meaning that half of the people scammed paid more than $8,000, and the other half paid less. I don't know what the average amount was. The victims' median age was 71, so half of the people who were being scammed were older than 71. And the scammers were deliberately and specifically targeting the less technical elderly population in Washington, D.C.

The Attorney General brought the lawsuit as a means of forcing Athena into compliance with anti-fraud measures and to secure financial restitution for its victims, as well as to pay financial penalties to the District of Columbia. He said: "Athena knows that its machines are being used primarily by scammers, yet chooses to look the other way so it can continue to pocket sizable hidden transaction fees. Today, we're suing to get District residents their hard-earned money back and put a stop to this illegal, predatory conduct before it harms anyone else."

Leo: What did they - what was - so do you think these elderly - by the way, we're both under 71, so we're okay. But do you think that these elderly people - what did they think? They were going to put cash in this machine and get solid gold bitcoin? What did they...

Steve: I think they believed that they were going to get something, obviously, in return for giving more than $8,000. Like, you know, we know email comes in, and you read it, and it motivates you to take some action.

Leo: Yeah. So they were being - so maybe they were thinking they were going to maybe pay a ransomware or something like that; right?

Steve: Could have been paying a ransom. Maybe they believed that their bank was actually going to foreclose on their home? I mean, just anything.

Leo: Yeah. And it wasn't Athena doing this, but they knew there was a reason why people were spending all this money on bitcoin.

Steve: Well, and when the AG is able to look at the transaction history and follow the money trail, which Athena could just as easily do since they're the people running the ATM, and conclude that only 7%, like, you know, like for example, what would 7% be? Etsy allows you to pay with bitcoin to get the sofa that you want or something. I mean, so...

Leo: It was still a bad deal because of the fees.

Steve: Right, on top of - exactly. So the person gives them their bank transaction data, and these people take an additional 26% just for essentially a zero cost to them transaction.

Leo: Wow.

Steve: Their competitors are charging from a quarter of a percent up to 3%. These guys are charging 26%. And they're the leading ATM operator in the country, which makes you wonder what they're doing everywhere else.

Leo: What?

Steve: This is just the Washington, D.C. AG that is going after them.

Leo: Ay ay ay.

Steve: Yeah. So again, we have great technology, and it's good, you know, the bad guys, the scammers...

Leo: They love it. They love it.

Steve: They will find a way to abuse it. And in this case half of the people that were victimized were older, were 71 years or older. So Leo, not long from now get ready to be in the...

Leo: Next year, Steve.

Steve: That's right, for me, yeah. Right. Okay. And speaking of ransomware, the Uvalde school district is shut down all this week following a ransomware attack. If that name sounds familiar to our listeners, that's because three years ago, in 2022, an 18-year-old former student fatally shot 19 students and two teachers, injuring 17 others. But I doubt that this ransomware attack on the district had anything to do with that. As we know, such attacks are almost always the result of just targets of opportunity. Uvalde's cybersecurity was likely wanting, and it was not adequately protected from someone clicking on a link that they shouldn't have. The incident impacted the district's phone system, their security cameras, their visitor management, and the thermostatic controls for the schools in the district. Consequently, classes will be closed all this week while the district gets back on its feet.

And I deliberately wrote in the notes: "Uvalde's cybersecurity was likely wanting, and was not adequately protected from someone clicking on a link they shouldn't have." I don't know that's the case. But that's almost always now the way we're seeing these things happen. And I've mentioned this thought before, and it's going to be something people are going to be hearing from me going forward. The evidence clearly shows, and I firmly believe, that the new goal for any enterprise's internal security must be to harden itself against random people inside the organization clicking on links they should not.

Leo: Yeah. The threat's coming from inside the organization, really, yeah.

Steve: Yes. That is exactly right. You know, today's podcast topic is about the tremendous lengths Apple has been forced to go to, to harden their system against the inevitability of bugs in software. For a long time the focus was on eliminating those bugs. But we've learned that's apparently never going to happen. So Apple has committed massive resources to being able to immediately terminate any process where misbehavior is detected to protect the phone's owner.

Similarly, we've talked many times about the need to train employees not to click on that link in the email that appears to be from their mom, or on that link that says they only have two days remaining before their bank account will be closed unless they respond.

Leo: Go down to the convenience store and find that bitcoin machine because that's the solution. Geez.

Steve: Exactly right. Exactly right. And, you know, so telling people, employees not to click on the link is analogous to telling every coder of every piece of software on an iPhone that they may never make another mistake. In other words, you can ask, but you're not going to get it. My point is that regardless of how much training employees receive, you're going to have a new hire, somebody on the loading dock who missed the last training because, you know, they couldn't make it. They are, somebody is going to click on a malicious link. It's inevitable.

So, similar to what Apple has finally been forced to do, the only sane recourse is for enterprises to get very, very serious about hardening their internal security against anyone who might click on anything that they receive over the Internet - whatever it takes. I'm not suggesting it's easy, but that's the bar. That's where it is now. If that means implementing new VLAN network segmentation to give up the massive convenience of having everyone being able to participate as equal peers on the same network, then so be it. That's what's going to be necessary, given all the evidence that we've been seeing for the last year here. All of these recent massive ShinyHunter and Salesforce compromises are showing us, as you said, Leo, that the calls are now coming from inside the house. The bad guys have clearly located our Achilles heel, and it is us.

So my message to our listeners who are in charge of such things is that, if results are what matter, rather than feel-good but ultimately failure-prone measures, it's no longer sufficient to rely upon "adequate training" of every single last employee. There is no such thing as adequate training. And of course you have to include the bosses, too, because they're just probably more prone.

Leo: And they're arrogant. They don't need the training. I'm the boss. I don't need that.

Steve: Exactly. I can click any link I want.

Leo: That's right.

Steve: Anyway, we've tried that; right? We've tried the training. It didn't work. So the only thing that will work is seriously thinking about arranging to make clicking on malicious links safe. That is the next frontier for internal enterprise security. We need to figure out how to do that.

Leo: Do you think that's doable?

Steve: Again, it's - yes. I would say it is. But I'm not a person, you know...

Leo: Yeah, it's challenging.

Steve: ...a CISO inside of an enterprise who needs to figure out how Marge can print.

Leo: Right.

Steve: You know? Marge needs a way to print. But Marge also needs it so that, if her computer becomes malicious through no fault of hers, it can't hurt the enterprise, even though it has some privileges on the network, which Marge needs in order to do her job. So, I know, it's not easy, and it probably requires rethinking the boundaries of trust that exist. The easy way to establish an enterprise is just to hook everybody up. That's what Microsoft did when the Internet happened. They put all the Windows 95 machines on the Internet. How did that work? Yikes. There was no firewall. And I created ShieldsUp!, which greeted people by name when they came to my website, because I was able to get their name and the name of their computer. And it was a wakeup call.

So we know that change is hard. But I think, if CISOs continue to imagine that training is the solution, enterprises will continue to fall to ransomware and to data exfiltration, and all the embarrassment that follows from that. The solution is to recognize that internal networks now need to be hardened against their own employees. Not because they're malicious, but because the links they may click on could be.

Leo: Wow.

Steve: Yeah. I mean, it is a different scale. But that's where we are today. And so I just wanted to clearly throw the gauntlet down. I think any rational examination of the types of exploits and problems we've seen for the last year would cause anyone to reach that conclusion. It's, you know, sorry, but training isn't going to cut it. People are, I mean, just, and again, the problem is it just - the challenge is so difficult because it's the weakest link process in security. Security has to be perfect. So every single person in an organization has to never even once click a bad link. One mistake is all it takes.

And so the only way to protect against one mistake is to figure out how to create an internal organization of privilege such that, when an employee's computer falls to malware, the damage it can do is minimal. If it allows a bad guy to get into it, they're frustrated. They can't do anything. And that is just not the case in today's enterprise.

Leo: Houston, we have a problem.

Steve: And speaking of clicking on a bad link, I wanted to touch on just one more recent ransomware attack because of its consequences, which were somewhat unique and interesting. More than two weeks ago, Jaguar Land Rover's automotive production lines were ground to a halt due to a ransomware attack. And today, all production remains halted.

Leo: What?

Steve: The company has said - yeah. The company has said that it expects that at least three of its production lines may be able to resume operation later this week.

Leo: Holy cow.

Steve: Here's the interesting - yeah. Here's the interesting bit: According to the BBC, several of Jaguar's smaller suppliers are now facing bankruptcy due to the prolonged production shortage by Jaguar. So talk about a supply chain attack. The loss to Jaguar themselves is estimated to end up being between 50 and 100 million pounds since the attack. But the ripple effects of the incident are revealing it to be perhaps one of the most significant - as in the worst - cyberattacks in Britain's history. It's expected to affect Britain's national economic growth stats, it's so bad. So, wow.

Leo: Wow.

Steve: I don't know what the deal is with Jaguar and their cybersecurity, why all of their production lines are down. Obviously they weren't, you know, they weren't set up to be resilient from an attack, and an attack has hit them hard. But interestingly enough, it's also hit their suppliers, who apparently didn't have any margin, any operating margin, to fall back on when Jaguar stopped ordering things from them and stopped paying their bills. I'm sure what's happened is that Jaguar's accounting systems were taken out, too, so they don't have any payables operation in place. They can't pay their suppliers because they don't know who owes them what. I mean, it's a mess.

Leo: That's, yeah. Why would it take three weeks to fix? Oh, my god.

Steve: Again, I have no visibility into their operations, but it doesn't look good. Okay. So it's impossible for us to know what's actually going on here. But that hybrid group that was calling itself, right, self-named the Scattered LapSus Hunters, remember that was composed of individuals from ShinyHunters, Scattered Spider, and LapSus$. Remember that they were the ones who threatened Google, saying that they had to terminate two of their threat intelligence group employees or else. Well, they posted a rambling goodbye note, referring to their attack on Jaguar, by the way, and four moderate intrusions into Google. Now, I would normally share a rambling goodbye note like that with our listeners. But this one was so rambling, it didn't even clear that bar. I'm not going to bother because, I mean, it just was all over the place.

As is so often the case with these sorts of things, we're almost certainly going to never know what really happened here. Why was it that after they threatened Google with, like, dire consequences, they suddenly say, okay, uh, goodbye. Okay. Maybe Google did not take that lying down. And remember last week we were saying we hoped they would not. But we've been covering the consequences of this group's actions, which, while not really qualifying as a reign of terror - Jaguar might disagree - did at least certainly put this group squarely on the map. It might just be that they ran dry of targets of opportunity which they had previously acquired. Remember they were the ones who were leveraging all of these attacks against Salesforce. Or perhaps some counter cyber-intelligence managed to penetrate their ranks to convince them to stand down.

Whatever the case is, I wanted to keep our listeners current with the news that they had formally said goodbye. So we'll see what happens next. I have no idea what's going to happen. Except, Leo, I do know one thing. We're going to take a pause for our next sponsor.

Leo: Back to you, Steve.

Steve: Okay. So many of the governments within the European Union have by no means given up on legislation to obtain some sort of access or control of privately encrypted interpersonal messaging among its member citizens. But there is some disunion, evidenced in news from last Wednesday, posted by the German government, which indicated that they, Germany, will have none of that. Period.

They wrote: "September 10th, 2025 Berlin. From the Digital Affairs & State Modernization Committee." They posted: "The Digital Affairs Committee met Wednesday afternoon to discuss the status of the CSAM" - of course we all know what that is, Child Sexual Abuse Material - "regulation, publicly known under the term 'chat control.' Its purpose is to combat sexual violence against children and adolescents online. For over three years, various proposals have been under discussion at the EU level to require providers of messaging and hosting services to detect material related to online sexual child abuse. An agreement has not yet been reached.

"As a representative of the Federal Interior Ministry reported to the Members of Parliament, the Danish Presidency of the Council, in office since early July, is treating the matter as a high priority." Meaning it hasn't been dropped, by any means. They said: "A unified legal basis across the EU is urgently needed, given that the current situation is worrying. It is clear that private, confidential communication must remain private. At the same time, there is an obligation to take action against child abuse online.

"A representative from the Federal Ministry of Justice pointed out that the matter involves very severe intrusions into privacy, leaving open the question of how deep those intrusions are. He also pointed to the strict limits that have already been made clear in EU Court of Justice case law on data retention, and emphasized that a regulation is needed which will stand legal scrutiny." Okay. Whoops. In other words, the EU already has strong existing law that would make what "Chat Control" wants to accomplish illegal under their own law.

The article finished, writing: "In their questions, MPs asked about the joint position of the federal government, the criticism from civil society about the regulation, and the further process in the negotiations. The representative from the Interior Ministry explained that the Danish position could not be supported 100%. For example, Germany is opposed to breaking encryption. The goal is to produce a unified compromise proposal, also to prevent an interim regulation from lapsing." So Germany has just said no. They're opposed to breaking encryption. Sorry.

So this has all the earmarks of being a very heavy lift. This Chat Control dream of theirs is still facing very stiff headwinds. I don't know what it means for Germany to declare that it's a firm "no" vote, but the EU's existing personal privacy laws would need to be changed for Chat Control to be legal, even in the EU that wants it. So lots has to happen first. It's a mess. And, you know, who knows what the answer's going to end up being. But maybe governments will go round and round, Leo, for a while, and then just end up saying, well, we'll just have to, you know, make better use of the provisions that we have. Which is, you know, what the people who absolutely want no exception to privacy and encryption and messaging say is the right course of action.

Leo: I think it's telling that even within the EU countries can't agree.

Steve: Right.

Leo: Like some want it. Some don't want it. Some say you can't do this. Some say we have to do this. If they can't agree, of course we know that even inside the NSA there's no agreement. So I don't - this is one of those things where the people who say, look, there's no way you can break encryption for some people without breaking it for all people, are not necessarily widely understood. I mean, that seems like a notion that other people don't understand. And maybe we need to work harder to get that through to them.

Steve: Well, and then we also have the issue of communicating with anyone in the EU from outside the EU. That presumably means that your messaging will be decrypted, too.

Leo: Oh, yeah, good point, yeah.

Steve: Much like the UK saying we want to...

Leo: Right, with Apple.

Steve: ...be able to see everybody's.

Leo: You know, one way to - one thing that often brings this home to them is pointing out that, yeah, okay, well, so we're going to break encryption for those people. But it will also break it for you. You won't have private communications anymore, either. And often that stops legislators cold. They go, oh.

Steve: Right. You mean the government is not going to be an exception?

Leo: We don't have privacy? They think they do. That's the problem here. Oh, no, we've got ways.

Steve: They want it forever, you know, they want to be able to check everybody else's messages.

Leo: Privacy for me, not thee, yeah.

Steve: Right. It turns out that even when there are many Western models to follow, launching a new secure messaging service from scratch is not a slam dunk. The news out of Russia is that hackers immediately began selling hacked accounts for Russia's Max messenger for prices of up to $250 USD, or access to accounts can be rented by the hour.

Leo: This is for the encrypted chat that the Russian government is forcing phone manufacturers to put on the phones.

Steve: Right.

Leo: In lieu of everything else.

Steve: Exactly. And blocking the alternatives in order to force their citizenry over, I mean, we heard from some of our Russian listeners, who were saying, yeah, this is so that we're forced to use Max as the reason, you know, Google's group messaging and Google's conferencing is being blocked now. So working to combat this abuse - of course they're not taking it lying down, either - Russian officials say they've already blocked more than 67,000 accounts for suspicious activity, such as spam, sharing malicious files, and, you know, the whole rigmarole. Looks like the Kremlin and our favorite agency Roskomnadzor...

Leo: I'll get the echo ready for next time.

Steve: ...are going to have their hands, yes, are going to have their hands full dealing with the consequences of their own messaging service. Which they said they wanted, so it couldn't happen to a nicer bunch.

Leo: It's no surprise.

Steve: As I said, even though they've got Western models to follow, still not an easy thing to do.

Leo: Yeah.

Steve: Samsung recently patched a zero-day, their own zero-day, CVE-2025-21043, which they rated as Critical in the Android OS version that ships with Samsung devices. The vulnerability was discovered in Android's libimagecodec.quram.so file. Now, I didn't dig in to see whether it may have been similar to what Apple recently patched, that is, whether it also had to do with decoding the Adobe DNG file format. But like the recently patched Apple vulnerability, this one also formed part of an exploit chain that targeted WhatsApp users.

So whether WhatsApp was running on Apple, where, as we know, it was using that Adobe DNG image decompression flaw, or on a Samsung phone running Android OS, there was some flaw in the image codec which was chained with the WhatsApp flaw that allowed spyware to be installed onto the devices of WhatsApp users on Samsung, and presumably more broadly on Android OS. So at least on the Apple side, we will see by the end of this podcast why that would not have worked if what they have now released with this new hardware had already been in place.

While I was assembling today's show notes, I was reminded that there's all the difference in the world between a casual mistake made by an employee who clicks on a malicious link they receive, and an employee on the inside who wishes to maliciously attack their own employer. You know, that's a higher bar than an "oops, I clicked the wrong link." An article from the UK's privacy watchdog is what reminded me of this difference. They found and reported that UK students are increasingly behind the hacks of their own schools. Okay. Insider hacks, right, because the student is on the school's network and is able to sneak around. The UK Information Commissioner's Office (ICO) says it studied 215 insider-caused breaches within the UK educational sector between 2022 and the middle of last year, 2024, and found that students, to no one's surprise, were behind 57% - so by no means all, it wasn't 97%, but more than half - of all intrusions.

So certainly there are still external actors trying to get in. And where a stolen password was used to breach a school system, students were involved in almost all cases (97%). So virtually all stolen passwords were student-based. The underlying motives were cited as dares, notoriety, a little bit of financial gain, revenge, and rivalries. In other words, basically "because it's possible to do it" sorts of hijinks. Breaches were blamed on staff leaving devices unattended, students being allowed to use staff devices.

Leo: Hijinks.

Steve: Yeah, hijinks, yes. They're up to some hijinks.

Leo: Oh, you kids.

Steve: You rascals.

Leo: You little rascals, you.

Steve: That's right. Incorrect permissions on school resources, and, in a rare 5% of the cases, on students using sophisticated techniques to bypass security and network controls. So maybe we have some listeners among the students in the UK who are a little more sophisticated.

After researching those 215 insider student-caused breaches, the Information Commissioner's Office reached two conclusions. The first one was that an early familiarization with hacking might lead kids down the wrong path and serve as a gateway to a life of cybercrime. Okay. Hold on. I remember being that age, and I was notorious for all manner of hijinks, of course the Adventure of the Portable Dog Killer, to name one. But I think it would be a stretch to imagine that some high-schooler's success in guessing a teacher's password - or perhaps looking underneath the keyboard for it, written down on a Post-it note - would lead to a life of cybercrime. After all, everyone is an insider within their own family's home, where there are plenty of tantalizing hacking opportunities. So one's school, I would say, is just another of many.

The second conclusion the ICO reached was that the responsibility for much of their students' hacking successes lay at the feet of the schools' administrators, who repeatedly failed to properly and adequately secure their own networks. And of course writing one's password on a Post-it note under the keyboard is never a good idea. In conclusion, the ICO urged schools to "remove the temptation from their students" by taking steps to improve their own cybersecurity and data protection practices. So, yes. You are trying to herd a wild bunch of cyber-enabled kids. You know, do yourself a favor by locking the gate, if that's what you're trying to do, and not allowing them to see what's on the other side because, oh, that might lead them to a life that they regret. Okay, I don't think so. I think they're just having some fun, you know, accepting a dare and so forth.

It's never a good sign when a security-aware bug bounty company such as HackerOne - one of the leading bug bounty companies; we've talked about them often - itself gets hacked. But this really wasn't on them. The blast radius of the recent Salesloft Drift supply chain attack has been wide and deep, and HackerOne was another entity that got caught up in it. They first posted about this shortly after it happened, back at the end of August, August 28th. So like three weeks ago.

They wrote: "Recently, hundreds" - and that's true - "of companies have been responding to an attack that resulted in unauthorized access to Salesforce records connected to the Drift (from Salesloft) application" - I'll talk about what that is in a second - "a situation detailed in reports from Mandiant and others. As part of our commitment," writes HackerOne, "to transparency, trust, and our company's value of 'Default to Disclosure,' we're writing to confirm that HackerOne is among the companies impacted by this incident."

So, okay. They're trying to obscure themselves a little bit by being among the herd, and it's like, well, we're just one of hundreds. Okay. Anyway, they said: "Our security team received notice of the potential compromise from Salesforce on Friday, August 22nd, and this was confirmed by Salesloft on August 23rd. HackerOne's security team immediately initiated incident response procedures, working in partnership with Salesforce and Salesloft, to assess the scope and impact of this incident.

"HackerOne's investigation is ongoing, but we can confirm that a subset of our records in our Salesforce instance was accessed via a compromise of the Drift application. Due to HackerOne's strict policies and controls governing data segmentation, we have no reason to suspect that the incident impacted or exposed any customer vulnerability data. We're continuing to conduct forensics on the records that were accessed and will communicate directly with any impacted customers, as appropriate."

Okay. So that's everything we would want and hope to see in a breach disclosure, a straightforward reporting of the event with a promise to follow up when anything more is learned. And that follow-up was posted last Thursday, which is why it came back to my attention. Last Thursday, they wrote: "HackerOne continues to investigate the recent Salesloft Drift incident, and we are posting here to update you on the status of our investigation as well as provide additional information we are able to share at this time. Based on the information we have to date, a subset of HackerOne's Salesforce data was accessed via the Drift application on August 13th and August 18th. Both the dates and the indicators of compromise are consistent with what Salesloft has reported, which can be found at trust.salesloft.com." And don't bother going looking because it's just marketing spiel.

They said: "We can confirm that all Salesforce Drift connectors are currently offline; and, as a precaution, we have rotated all relevant API and service credentials." And I'm going to explain what this terminology here means in a second. "Due to HackerOne's strict policies and controls governing data segmentation, we have no reason to suspect that the incident impacted or exposed any customer vulnerability data. Nor have we found any indication of lateral movement." That's all good.

"We understand that you may still have questions about this incident, and we appreciate your patience as we continue our investigation. HackerOne has engaged a third-party forensics firm to ascertain what records were accessed, and we will communicate directly with impacted customers, as appropriate."

So basically they're saying, yes, we were caught up in this. We've verified that our network was penetrated. But we have an architecture. Now, this is similar to what I was suggesting ought to be the standard going forward, where network segments - I was trying to find another word, but there it is - are isolated from one another by purpose so that, unless it's actually necessary for some API or individual to have access to some specific set of data, there is no physical access. That's what prevents any damaging lateral movement. We're always now talking about lateral movement: how you get in somewhere, then you move laterally in a network to some other location, and from there you're able to get access you didn't have from where you began. That's what needs to be contained.

So I usually try to find some lesson for us to take away from incidents like these that we cover. The problem is that today's modern model of outsourcing services and interconnecting separate enterprises' automated systems with persistent authentication - which is what happened here - inherently brings a risk, which we are and have been seeing play out. One of the recent trends I'm sure everyone listening to this podcast has encountered is the increasing - and, at least for me, annoying - use of automated conversational AI chat windows that appear, typically in the lower right corner of a website. I have yet to find engaging with one of those annoyances to be fruitful. You know, if you've encountered one of those, it may have been courtesy of Salesloft Drift, since that's what their technology does. That's been the root cause of all of this pain.

Salesloft Drift describes themselves as: "A conversational AI/chat/lead qualification component of the Salesloft platform. It's built on or integrates the Drift chat/AI agent that engages website visitors in real time, qualifies leads, routes them to the sales team via workflows like Rhythm, and helps convert them into pipeline."

Okay. I don't want to be converted "into pipeline," whatever the heck that means. All I want to know is whatever happened to that end table that we ordered? But that information is not available through the chatty chatbot.

In order to integrate with its client enterprise customers, this Salesloft Drift AI chat thing needs to have access into its customers' networks. Consequently, when Salesloft Drift is hacked, all of its many customers' networks then suffer their own respective breaches as the hackers of the company to which they have outsourced this service obtain the credentials that allow access into every one of those enterprises' internal networks.

It's an inherently unstable solution with an astonishing blast radius. But, you know, you get to annoy every one of your visitors by asking them, unprompted, what they need and whether there's anything they want to ask, while not ever being able to provide any answers. This today is what we call progress, Leo.

Leo: It's customer service, baby.

Steve: Have you seen those things, those annoying little chatty windows in the lower right?

Leo: I always close it. Always close it.

Steve: Oh. And I finally in frustration once, I asked one of them, I said, well, here's what I want to know. And I presumably get some LLM AI thing. And I got nowhere with it. Finally I got pissed off, and I said, I want to talk to a supervisor. And then it gave me a phone number to call. So it's like, okay.

Leo: Oh, that's ridiculous. Oh, my god.

Steve: For future reference, just be upset with it and tell it you want to talk to a supervisor.

Leo: Give me the number. Just stop it.

Steve: Okay. So it was a little over a year ago in Episode 975, it was May of 2024, that we last talked about students hacking their university-provided washing machines - you'll remember that, Leo - to obtain free laundry services. Now, today, a university campus in Amsterdam has shut down its laundry room after its five smart washing machines were hacked in July. Surprise, surprise. Again, that's what you would call an insider attack. Students were able to wash their clothes for free for months, but that will be ending. That will be ending shortly.

Leo: Oh. Aw.

Steve: I know. Those five Internet-connected smart machines are being replaced with "dumb" washing machines that accept old-fashioned coins. Who even has coins anymore? Seems like the students are going to get what they deserve here, needing to somehow now go find coins to put in the slots. Imagine that the university must have been confounded. Why has everyone stopped using our washing machines? When we go to empty the coin boxes, they're empty. Imagine that.

Now, I'll confess, as I mentioned when we talked about this before, UC Berkeley also provided coin-op washing machines in pre-Internet 1973, when I happened to be there. And, really, what did they expect? The machines had been placed in Ehrman Hall, which was the engineering dorm, where I was. It turned out that the coin-op box, which had been added as an afterthought to the machine, had a sheet metal screw in the back, the removal of which created a hole through which a properly shaped length of coat-hanger wire could be threaded.

Leo: Not that you would do anything like this.

Steve: Not that I would have ever had anything to do with that. But with a little bit of fishing around, it turned out, the lever that was normally actuated by the insertion of a quarter into the front could be tricked into believing that that had just happened. So let's just say that I never needed to bring laundry home on the weekends for my mom to wash.

Leo: And that, my friend, that's what leads kids to hacking.

Steve: That's down the dark path. That's right.

Leo: It's the gateway drug to future hacking exploits. Wow.

Steve: Indeed.

Leo: But that's just - that's what hacking is, right, is getting around restrictions.

Steve: Yeah. I mean, it's like Wozniak and phone phreaking...

Leo: Oh, yeah, the blue boxes, yeah.

Steve: ...with the blue box that generated a 2600 Hz tone that disconnected the local line and dropped you into the long-haul network. Not that I knew anything about that.

Leo: No, of course not.

Steve: No, no, no, no, no.

Leo: Not a thing.

Steve: Just things that fascinated kids. Okay. I'm just going to start this next piece by reading what was posted. Then I'm going to share my sadness.

Leo: Oh.

Steve: Uh-huh. UK, London, Tuesday, last Tuesday, September 9th: "FastNetMon," they wrote, "today announced that it detected a record-scale distributed denial of service attack, you know, DDoS, targeting the website of a leading DDoS scrubbing vendor in Western Europe. The attack reached 1.5 billion packets per second." Not bits. These are 1.5 billion packets per second, one of the largest packet-rate floods publicly disclosed.

Now, I'll just pause to say that, remember, we talked about the challenges that flooding attacks present. One is bandwidth: the wires are simply unable to carry the amount of traffic being generated, so packets overflow the incoming buffers of the routers and are dropped. As a consequence, the valid packets have a very low probability of making it through the buffer into the router, and the valid service is denied.

The other problem is that every packet that does get into a router needs to be examined for its destination, the routing table then used to look up which interface that packet should be sent out of. In other words, there is a per-packet routing overhead separate from just the raw bandwidth overhead. So when you're generating 1.5 billion packets per second, and they are all focused down onto some poor little IP address somewhere, what happens is all the routers everywhere on the globe are dealing with all of those packets. And as they are routed closer and closer to their destination, through multiple router hops, the overall rate of packets skyrockets to the point where, even if the bandwidth weren't being flooded, the number of packets that needed to be examined per second, no router could possibly handle.

So this attack, 1.5 billion packets per second. As they wrote, "One of the largest packet rate floods publicly disclosed. The malicious traffic," they said, "was primarily a UDP flood launched from compromised customer-premise equipment (CPE), IoT devices, and routers, across" - get this - "more than 11,000 unique networks" - not devices, 11,000 networks - "worldwide. The disclosure," they said, "comes only days after Cloudflare reported mitigating an 11.5 Tbps DDoS attack." 11.5 terabits, trillion bits per second. "Showing," they said, "how attackers are pushing both packet and bandwidth volumes to unprecedented levels." I mean, really, it's just crazy.

"Pavel Odintsov, Founder of FastNetMon, said: 'This event is part of a dangerous trend. When tens of thousands of customer-premise equipment devices can be hijacked and used in coordinated packet floods of this magnitude, the risks for network operators grows exponentially. The industry must act to implement detection logic at the ISP level to stop outgoing attacks before they scale.'"

Okay. So there what he's talking about is, as I said, attacks originate from 11,000 networks; right? And it's the concentration, the aggregation of all of that bandwidth as it narrows down on the Internet to a single target that causes the buffers to overrun and the routers to fail to be able to route that many packets per second. But if it were possible for all 11,000 of those source networks to never transmit the outgoing packets, then there wouldn't be the ability for the traffic to aggregate.

Anyway, this quote finishes, saying that: "FastNetMon Advanced platform is designed to handle attacks of this size. Using highly optimized C++ algorithms for real-time network visibility, FastNetMon enabled its customer to automatically detect the flood within seconds, preventing disruption to the target service."

Okay, I'm not sure what "highly optimized C++ algorithms" have to do with anything. And unfortunately, this Pavel guy is dreaming. We've been talking about the problem of DDoS flooding throughout the entire 20 years of this podcast. And during that time, while attacks have grown astronomically in scale, they have also become less possible to prevent. Back in the early days, spoofing source IP addresses was the order of the day. We argued at the time, correctly, that no ISP should emit any packets from their networks that contained a fraudulent source IP. So-called "egress filtering" could have been employed back then to nip those attacks in the bud before the traffic was given the chance to aggregate into an overwhelming flood. That was all true then.

But the only reason devices back then were spoofing their source IP addresses was to hide their true IP from their victims. Once you have tens of thousands of individually compromised home routers and IoT devices, hiding is no longer necessary. Who cares if the identity of some of these devices, or all of them for that matter, is known? They're scattered across the globe in faraway countries behind ISPs that will never pick up the phone. As a consequence, source IP spoofing as a requirement for packet and bandwidth flooding is far less important today than it once was. There's no way for an ISP now to know that any given outbound traffic is fraudulent because it carries valid source IP addresses.

The other factor is that it is trivial for a CDN like Cloudflare to drop all incoming, readily spoofable UDP traffic. Cloudflare doesn't need UDP traffic. It's fronting web traffic, so what it needs is TCP traffic over ports 80 and 443. And as we noted recently, even port 80, you know, old HTTP, unencrypted instead of HTTPS, is now falling by the wayside, too. So now the name of the game is connection flooding, and connection flooding needs the TCP protocol with roundtrip packets. And roundtrip packets prohibit the use of any spoofing. And of course now, who cares, when today's massive bot networks have tens of thousands of individually throwaway agents? We don't care what their IP addresses are. Nobody will ever contact the people who are in control of them, or their ISPs, or their ISPs' ISPs.
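The egress filtering Steve describes - standardized as BCP 38 - amounts to one check: an ISP forwards an outbound packet only if its source address falls within prefixes the ISP actually assigned. A minimal sketch (the prefixes here are documentation addresses, and real routers do this in hardware ACLs, not Python):

```python
# Sketch of BCP 38-style egress filtering: forward outbound packets only
# when their source address belongs to a prefix this ISP assigned.
# Illustrative only; prefixes below are RFC 5737 documentation ranges.
import ipaddress

ASSIGNED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def permit_egress(src_ip: str) -> bool:
    """True if the packet's source address was assigned by this ISP."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in ASSIGNED_PREFIXES)

print(permit_egress("203.0.113.7"))   # True  - legitimate customer address
print(permit_egress("8.8.8.8"))       # False - spoofed source, dropped
```

As the discussion above notes, this only defeats *spoofed* sources; a compromised router flooding from its own real, assigned address sails straight through such a filter.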

One of the earliest things we talked about on this podcast during our "How the Internet Works" series was the brilliant genius invention of the idea of opportunistic packet routing. By completely dropping the idea, just forgetting about it, that every communication packet needed to get through the network with 100% reliability, the brilliant designers of the Internet invented an incredibly elegant solution for the ages.

There's just one problem with it. To this day, and probably forevermore, that incredibly elegant system is utterly and completely vulnerable to packet generation abuse, and there is no way to fix it. None. This astonishing global network which we have is there, it's in place, so that anyone anywhere can send a packet to anyone else anywhere else. Unfortunately, there is nothing to prevent bad guys with thousands of remotely scattered devices under their control, all sending as much packet traffic as they can to anyone they choose.

The result of this is that frequently targeted companies are choosing to hide behind the growing number of companies who are able to provide comprehensive DDoS protection thanks to having many points of Internet presence themselves, their own massive network bandwidth which is able to absorb these attacks, and the automation in place to block incoming attack traffic once it's been identified. It's not an ideal solution, but I suppose it's the price we pay for a system that otherwise works so incredibly well.

And Leo, you know the other system that works incredibly well?

Leo: You mean the system where we do ads to pay for all of this, and you drink more coffee?

Steve: And coffee. And coffee.

Leo: That system? I like that system.

Steve: That's the one.

Leo: We're going to take a little break. We'll have more Security Now! in just a moment. All right. On we go.

Steve: Okay. So Bluesky is going to implement "conditional" age verification for South Dakota and Wyoming. As age verification requirements continue to evolve, we got an update last Wednesday from Bluesky. Recall that the last time we talked about them they were going, and did go, completely dark in Mississippi due to Mississippi's "all or nothing" age verification law.

After the first two paragraphs of Bluesky's posting, which didn't really say anything - it was just, you know, marketing spiel - Bluesky wrote: "In the UK, we complied with a new law that requires platforms to restrict children from accessing adult content. In Mississippi, the law requires us to restrict access to the site for every unverified user." That's the difference. They said: "To implement this change, we would have had to invest substantial resources in a solution that we believe limits free speech and disproportionately harms smaller platforms. We chose not to offer our service there at this time while legal challenges continue." Like, why invest in this if the law is going to get changed or overturned?

They said: "South Dakota and Wyoming have also passed online safety laws that impose requirements on services like ours. These are very similar to the requirements of the UK Online Safety Act. So as we did in the UK, we'll enable Kids Web Services' (KWS) age verification solution for users in these states. Through KWS, Bluesky users in South Dakota and Wyoming can choose from multiple methods to verify their age." But the important part is you don't have to unless you're trying to access adult content. So all users can still remain anonymous unless they are trying to access age-restricted content. That's what Mississippi did not do.

They said: "We believe this approach currently strikes the right balance. Bluesky will remain available to users in these states, and we will not need to restrict the app for everyone. We're committed to keeping our community informed as we navigate these new regulations. As more states and countries adopt similar requirements, we'll update this blog post accordingly."

So again, just to be clear, the difference between Mississippi and South Dakota or Wyoming is that the saner laws passed in those latter two states only require age verification before their citizens are allowed to access adult content, as opposed to all social media content. That's what's similar to what the UK has done.

Following that tragic Mississippi suicide of the young man who was catfished on Instagram, the state of Mississippi has effectively declared war on all social media, regardless of its content. While First Amendment lawsuits are flying, Bluesky decided to just back out of Mississippi until the dust settles. What would be good is if Mississippi were to align themselves with South Dakota and Wyoming and just say, okay, it's just the adult content. But, you know...

Leo: It depends on what you define as adult content, though. That's the problem.

Steve: That's true.

Leo: And that's where these legislators are much broader than you and I might expect when they call stuff "adult content."

Steve: And unfortunately, as we know, our U.S. Supreme Court did not make this fight any easier because they said we don't think it is a First Amendment compromise to require people to provide proof of their age.

Leo: Right.

Steve: Well, I mean, that's a huge privacy compromise. Right now we have no system that allows you to do that without divulging who you are.

Leo: Guess who's the latest? ChatGPT says it's going to attempt to guess your age.

Steve: Oh, my.

Leo: And if it can't guess that you're over 18, it's going to ask for verification.

Steve: Wow.

Leo: This comes in the wake of lawsuits blaming ChatGPT after teen self-harm stories. They're going to create a ChatGPT for kids. So if it thinks you're under 18 it's going to shift you over to that. And if it's not sure, it's going to say, okay, you need to give me some ID. And that's, again, hugely problematic. I asked ChatGPT. It says, well, I know you're 68. You told me. But it believed it. And that's the point: it assigned me an age based on what I had told it in a prompt. So this seems like this might be a pretty good...

Steve: Well, and I'm sure it knows who I am. It knows me, my email address. It knows my account. It can go check, and I'm all over the Internet. So it knows what my birthday was. It doesn't have to guess that.

Leo: Right. Right.

Steve: The big problem, I mean, I don't, for example, I'm a big ChatGPT user. I don't have a problem, you know, disclosing who I am to ChatGPT. But, you know, the dicey things are, for example, porn sites, where people are going to be very self-conscious about, you know, de-anonymizing themselves there, and that's what the, well, in fact we're about to talk about that because the UK is really going overboard here.

This next story I have, speaking of the UK, they're on the warpath following their July 25th passage of the new age-check requirements, and that's what we were talking about, the Online Safety Act, which talks specifically about adult content. Only a week after its passage, they announced that they had launched investigations into the compliance of four companies - which collectively run 34 pornography websites - to verify that they were now using "highly effective age assurance" to prevent children from accessing that content. At the time they said that these 34 new cases added to Ofcom's - that's the office in the UK that does this - to Ofcom's 11 investigations that were already in progress into 4Chan, an online suicide forum, seven file-sharing services, and a pair of other porn publishers.

They concluded by saying that they expected to be making further enforcement announcements in the coming weeks and months, which just happened last Thursday with their apparently proud announcement that another 22 porn sites were now being investigated to verify the effectiveness of their age verification measures.

So as I started to say, it's one thing to need to show your ID in order to pick up a medication prescription, or before purchasing alcohol. But it's obviously a far more sensitive matter, a personally sensitive matter, to need to produce an ID in order to obtain access to online content that is, to say the least, controversial and probably extremely embarrassing. So it's hardly any surprise to learn that the traffic of the websites requiring such proof of age has dropped precipitously.

And Leo, somewhere I saw, and when I went back to look for it I couldn't find it, but they were actually targeting sites whose traffic had increased since their legislation, because we knew that people were being driven to the sites that did not require age verification and away from the sites that did. This is just a mess. You know, I'm glad Stina is on this because, I mean, she's a bulldozer. If she's working with the World Wide Web Consortium and has a nonprofit set up, and they are 100%, dare I say, "laser focused" or "laser aimed" at this problem, you know, we need a solution, and we need it yesterday.

Leo: Stina Ehrensvard, who is the CEO of Yubico, and a friend of the show. And of course the YubiKey is the number one solution for hardware authentication. So she's working on some sort of ID, privacy-forward ID solution.

Steve: Yes. She has established a nonprofit. She just won a big award as like Sweden's number one entrepreneur innovator award deal.

Leo: Nice.

Steve: I mean, so she's really - and since I knew her, I mean, she used to come down because - what's the big gaming company down here?

Leo: Zynga?

Steve: World of Warcraft?

Leo: Oh, Blizzard, yeah.

Steve: Blizzard is down here. And she was providing their identity solutions. And so we would meet at Starbucks and spend a morning, you know, talking about all this stuff.

Leo: Let me correct, by the way, I gave the wrong - I called her Stina Svalbard. She's Stina Ehrensvard.

Steve: Ehrensvard.

Leo: Correct that, yes, Svalbard is the city closest to the Arctic Circle.

Steve: Oh. Yeah, anyway, so this has been a thing for her. And a few months ago I sent a note just saying, "Stina, I hope somebody is looking at age verification because we need a privacy-forward age verification system where all it does is it challenges you for an 'are you at least this old,' and you just get a go/no-go reply from a system that cannot be spoofed. It is biometrically locked, you know, that provides the things we need so that" - and anyway. So she says yes. I have a nonprofit that's doing that right now.

Leo: Good. Good. That's exciting.

Steve: Yeah, it is.

Leo: Right, we'll be [indiscernible] with interest. We'll talk to her when it comes out.

Steve: Okay, we've talked about GPC, the Global Privacy Control, which as we know is just - talk about go/no-go. It's a signal reminiscent of its predecessor, DNT, Do Not Track. And of course much as I was for DNT, it never got off the ground since without enforcement it means absolutely nothing. You know, you've got to sue some people in order to get the industry's attention and for them to go, oh, maybe we should take this seriously. But on the enforcement front, GPC may have a brighter future. The news is that state attorneys general from California, Colorado, and Connecticut - three C's, we've seen these three get together before. Colorado, California, and Connecticut, they've announced a joint investigation into companies refusing to comply with Global Privacy Control, which is now a law.

Data trackers that refuse to honor the GPC signal are in violation of recently passed state privacy laws. Seven other U.S. states also require companies to honor GPC, but they've not joined the enforcement action. They may not need to; or maybe they'll join and make it 10 states. Anyway, this is great news since, as I noted, without any enforcement the law means nothing and will likely suffer the same fate as befell DNT. There's hope here because, you know, certainly California is serious about its privacy laws. And if it's got, what was it, 499 registered data trackers, if California investigates and finds they're not honoring it, they're going to get kicked out of California. So yay for enforcement.

Listener feedback. Micheal Buck wrote: "Hi, Steve. In Episode 1040 you talked about your disappointment with what you called 'Synology's built-in NAS synchronizer.'" He said: "I'm not sure you gave your listeners a fair review of Synology's solutions." He says: "I'm a Synology user and have used Synology Drive, which works like Syncthing, Box, and other synchronizing tools. Like you, I have several machines that I use, and like to keep files synchronized between these machines. Synology Drive was easy to set up, and I've been using it for years without any problems. It keeps my files synchronized between multiple Mac and Linux machines. I also use the tool that Leo mentioned, Hyper Backup.

"Most Synology NAS machines have an external USB port. My son also has a Synology, and we each purchased a large USB drive and plugged them into each others' NAS USB ports. Then we each use Hyper Backup to back up our NAS machines to our own USB drives at each other's location. The data is encrypted, and we don't eat up the disk space on each other's NAS. Thanks for all you and Leo do to provide a great podcast. Cheers. Mike, SpinRite owner and podcast listener since Episode 1 in Payson, Utah."

Leo: That's clever. That's very clever.

Steve: Yeah, that is. Okay. So in case anyone else may have been confused by my disappointment with Synology's built-in inter-NAS synchronization, I wanted to take another moment to clarify. There was nothing whatsoever wrong with it. I agree with Mike that it was quick and easy to set up. And I have a strong bias toward what we would refer to "living off the land" solutions, meaning that, if Synology provides a means of keeping two of their NASes synchronized, I would be strongly inclined to assume that they know best how to do it.

And, again, it worked. I would never have been unhappy with it, or aware that the system, at least for me, was operating in what appeared to be a far from optimal way, unless I had been watching Synology Drive's massive apparent full resynchronizations using SoftPerfect's wonderful free NetWorx utility, which I've spoken of before. I have that utility, NetWorx, configured to continually display the SNMP counters on my router's interface, so it is showing me not my own machine's bandwidth, but the instantaneous bandwidth usage of my entire LAN, which includes the Synology.

What I witnessed, to my extreme chagrin, on many occasions was my network's bandwidth being pinned for a very long period of time after only updating a few files on my NAS. And when I checked the NAS's drive lights, they were all flashing away like mad. So it appeared that updating a small collection of files was basically triggering some sort of wholesale resynchronization of the entire NAS whenever that happened. Again, everything worked, but it was certainly not a situation that I wanted to live with. The only change I then made was to shut down Synology's native synchronizer and run Syncthing natively on both NASes, with them synchronizing everything on each end.

Now, using Syncthing, when I update a few files on my local NAS, for example, after rebuilding a new instance of the DNS Benchmark, after a short delay I'll notice a brief, few seconds long "blip" of outgoing bandwidth as my local Syncthing instance sends those, and only those, updated files over to the other NAS.

So, yes, Syncthing's native synchronization works. No question about it. I meant to say Synology's native synchronization works. It's easy to set up and configure. But it might be worth monitoring its bandwidth usage; or, if that's not easy for you to do, just watch its drive activity lights after you've updated a bunch of files all at once, and see if they just go, you know, blip for a few seconds, or if it generates, you know, 45 minutes to an hour of frantic drive-light activity, because that's what I saw.

Greg Williams wrote: "Hi, Steve. Just a few notes. Cloudflare already has Certificate Transparency Monitoring," he says, "although it's in preview," and gave me a link. He said: "No idea why they didn't use it themselves." And he said: "You also mentioned the 1.1.1.1 domain. That's not a domain, it's an IP address that's not owned directly by Cloudflare, but APNIC." He said: "See the Wikipedia article," and he gave me a pound-sign tail on the URL, which as we know jumps you to a section on a page. That section is titled "Prior_usage_of_the_IP_address," and it covers other references to the default use of 1.1.1.1, he says, "as laziness by other vendors, including Cisco." Signed "Cheers, Greg Williams, Brisbane, Australia."

Leo: Interesting.

Steve: So, okay. Yes. So of course, first of all, Greg is 100% correct about 1.1.1.1 not being a domain. I know better. The numeral "1" is not a TLD; right? It's a numeral "1," which could never be a TLD since the RFCs require top-level domain labels to be alphabetic. You cannot have an all-numeric top-level domain. So, Greg, thank you for the correction. I also got a kick out of Greg's reference to that Wikipedia page, which suggests that it wasn't just this random CA that was using 1.1.1.1 out of laziness. Apparently Cisco and others have been found to be using it, too, for very much the same reason. So thank you for that, Greg.

Buzz said: "I've listened to the last show, and as a UK Citizen I can confirm that Apple's ADP is still active for those users who opted in at the start."

Leo: Ah, good.

Steve: Yeah. "It is unavailable to any new users. Best regards, Buzz."

And Dan Bright said: "Hi, Steve. Regarding last week's talk about the availability of Apple's ADP in the UK," he said, "I have it turned on myself and can confirm that Apple has not yet removed it from my account. Kind regards, Dan in Scotland."

So anyway, Buzz's and Dan's notes were echoed by other listeners, who all confirmed that, while it's no longer possible to enable "fresh" ADP - you're no longer able to newly turn on Advanced Data Protection - it has not yet been forcibly removed from any UK-based Apple user who has reported in to us. So if the still-inferred and presumed UK "notice," which was presumably sent to Apple, stands, then the presumption is that Apple will eventually be required to ask all UK users to please flip the switch off. Or perhaps Apple will themselves preemptively disable the feature with some future update and just inform their users that "the devil made them do it." So we don't know what's going to happen. But it is at least a little bit of a canary for us, to get some sense for what's going on because, you know, no one's talking, annoyingly.

John David Hickin wrote: "I'm following the proposals to solve the problem of asserting that age of X is >= Y," the way he phrased it. He said: "Zero knowledge proofs may come in handy here, but it seems to me that there is" - and he gets kind of clever here. He says: "There is a potential use case that deserves thinking about. If different sites start to impose differing age requirements, while attracting the same visitors, then web tracking across those sites may be able to refine upwards the lower limit on a person's 'guessed' age." And it's true. He said: "I'm not sure if it's a real issue, but somebody will surely try to monetize it."

So anyway, John's thinking is correct and clever. That is, using that equation, age of X >= Y: if the Y changes as you move from state to state, and you continue making that assertion, then anyone who could follow that person as they roamed from state to state, watching whether the assertion held or failed, could keep raising the known lower bound on their age, potentially all the way up to equality, where X equals Y. So again, as I said, clever.
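John's inference attack can be sketched as a toy Python model. This is purely an illustration for this transcript, not anything from the episode or a real tracker: a cross-site tracker records each "age >= Y" challenge and whether the visitor passed, and narrows the interval known to contain the visitor's true age.

```python
# Toy sketch of cross-site age inference from repeated "age >= Y" assertions.
# Hypothetical illustration only; names and logic are this example's own.

class AgeProfile:
    """Narrows the interval [low, high] known to contain a visitor's age."""
    def __init__(self):
        self.low, self.high = 0, 150  # no knowledge yet

    def observe(self, threshold, passed):
        # A site required "age >= threshold"; we saw whether the visitor got in.
        if passed:
            self.low = max(self.low, threshold)        # age >= threshold
        else:
            self.high = min(self.high, threshold - 1)  # age < threshold

profile = AgeProfile()
profile.observe(18, True)   # passed an 18+ gate in one state
profile.observe(21, False)  # blocked by a 21+ gate in another state
# The tracker now knows the visitor's age is 18, 19, or 20.
```

With enough differing thresholds across jurisdictions, the interval collapses, which is exactly the monetizable refinement John is worried about.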

The handwriting is certainly on the wall that this previous era that we have all been enjoying of free and full unfettered access to the Internet's content is rapidly drawing to a close, thanks to recent legislation in the UK, soon coming to the EU, and already within many state jurisdictions within the United States. Internet websites, which inherently have global reach, are being required to comply with the laws which govern their visitors which often requires that those visitors sacrifice the fully anonymous access that we've been enjoying up to this point to the requirement of an acceptable form of age verification.

I haven't noted this before, but we may see safe havens for anonymous Internet access spring up in the wake of these new legal restrictions. Websites that are compelled to obey the law might geolocate their visitors and limit their age restriction enforcement to only those countries that impose these requirements, much as Bluesky is doing on a state granularity here in the U.S., and also for the UK. Given that doing so is entirely feasible, that is, geolocating your visitor, it would seem to follow logically from country-specific legal requirements. So, for example, anyone coming from the UK, the EU or the U.S. would be required to provide proof of their age. But, for example, Icelandic visitors, who are outside the EU and live within a society with very liberal Internet regulations, might not be required to give up any identifying information.

And if that were the case, it would not be a stretch to imagine commercial VPN providers deliberately establishing points of presence in Iceland and offering customers anywhere, including the UK, EU, and U.S., the option of having their VPN traffic routed out through Icelandic locations. Again, all just technology.

Leo: This is the problem with a global Internet.

Steve: Yup.

Leo: How do you solve these problems?

Steve: Yup.

Leo: There's no national jurisdiction that applies.

Steve: And you're enforcing the laws that your visitors are under.

Leo: Right.

Steve: Which varies from country to country, state to state.

Leo: Ultimately, though, the lowest common denominator ends up winning; right? If people get more and more afraid of getting sued or shut down, they just kind of revert to zero free speech, I guess.

Steve: As I think you correctly generalized, there is a coalition that just wants to see all pornography outlawed on the Internet.

Leo: Yeah.

Steve: And so, you know, I mean, it's like there's that, too; you know?

Leo: That's what some of this is, I think.

Steve: They said, okay, we're just going to make it so painful that it will stop being a profitable business.

Leo: Yeah. And I think it's important the distinction between pornography and adult content. I think there is also a fairly large constituency on the Internet that wants to control what you see, period, and is willing to call it adult content in a variety of things that others might not consider adult content, stuff that's not pornography.

Steve: Yes. A week or two ago I read a really well-written lament from someone who was just - he or she, I don't remember now, wrote adult non-pornographic, like I don't know if it was poetry or...

Leo: Oh, yeah, I read that, yeah. It was erotica, yeah.

Steve: I mean, and it was - yeah, exactly. Exactly. And it was like, I'm subject to these laws now.

Leo: Right. And, yeah, I think it's really a desire, a strong desire to control what you and I and everybody else can see to control the flow of information. And I think that's anti-democratic in the long run. But they always use children, you know, let's protect the children as the excuse.

Steve: Right. And it's not that they're wrong. I mean, the children...

Leo: I want to protect children, yeah.

Steve: Absolutely. Absolutely. Let's take a break, and then we're going to start in on Memory Integrity Enforcement. And I'll find a point at about two hours, in another half hour, to take our final break because we're going to spend now until the end with, as I said, get your waders on.

Leo: I'm looking for my propeller hat, yeah, my...

Steve: I don't think that's going to do it. I think you need waders. You need to be able...

Leo: Uh-oh.

Steve: We're going to get into some deep stuff here.

Leo: Oh, I love it. It's always - everybody loves it when you go that way. Let's go. We're getting in deep, kids. Hang on. All right. I'm going to massage my temples while you describe Memory Integrity Enforcement.

Steve: Just, yes, close your eyes, sit back, let it just flow over you. Apple's big September 2025 product update announcement last Tuesday included a technical capability advance which garnered much less attention. But it was nevertheless perhaps somewhat more important in the long run for Apple's users than their decision, you know, to create, Leo, your new Cosmic Orange color for the iPhone 17.

Leo: I'm ready for Cosmic Orange. I can't wait. I'm so excited.

Steve: Under the covers of any iPhone 17 and its A19 chips lies an advance in hardware technology that goes further than anything Apple has previously, or any company has previously implemented to prevent coding mistakes from being leveraged into exploitable vulnerabilities that can be used against iPhone users.

It's worth remembering that, if today's incredibly complex code did not contain subtle mistakes, none of these extra fancy prophylactic measures would be required for security. Two weeks ago everyone needed to update and reboot their iOS and iPadOS devices, and their Macs, for that matter, after Apple discovered that a subtle flaw in the decompression code for Adobe's DNG lossless image compression format, coupled with a registration bypass flaw in WhatsApp, was being leveraged in the wild, almost certainly by the customers of commercial spyware vendors, those customers largely being governments, to install spyware into the iDevices of highly targeted Apple users. Does this affect you and me? No. But Apple is serious about nipping all of this stuff in the bud. And, you know, in being able to claim that they have an utterly bulletproof platform.

So were it not for the apparent impossibility of catching all mistakes before they ship, there would be no need to go to these seemingly endless lengths to protect the users of these devices from their abuse. But one of the painful lessons the industry has reluctantly acknowledged, you know, as our understanding of the nature of security has matured, is that mistakes are not disappearing. And they may never because we're always pushing the boundaries of what's possible for us to build. This created the concept of "Layered Security" described as "Defense in Depth." The idea is to, wherever possible, establish multiple, often redundant, layers of protection so that the failure of any one or more layers would still leave a system's effective security intact.

Furthering this apparently endless effort, last Tuesday Apple's SEAR (S-E-A-R) group, where SEAR stands for "Security Engineering and Architecture," informed the world of their latest and greatest hardware-assisted technology that has been incorporated into the A19 processor chips being used by their iPhone 17 and other just-announced devices. Their blog posting was titled: "Memory Integrity Enforcement: A complete vision for memory safety in Apple devices."

Okay. Now, I'm going to start by sharing just the first two sentences of their posting, after which we'll need to pause to catch our breath. Apple's team wrote: "Memory Integrity Enforcement (MIE) is the culmination of an unprecedented design and engineering effort, spanning half a decade" - as I noted earlier, also commonly known as five years - "that combines the unique strengths" - half a decade.

Leo: Half a decade.

Steve: That's right, "that combines the unique strengths of Apple silicon hardware with our advanced operating system security to provide industry-first, always-on" - that's one of the keys - "memory safety protection across our devices without compromising our best-in-class device performance. We believe Memory Integrity Enforcement represents the most significant upgrade to memory safety in the history of consumer operating systems." Okay.

Leo: Long time. At least half a decade.

Steve: That certainly sets the bar high, yeah. So the reason we're here today with this podcast is to gain an understanding of what Apple has done to justify this claim. Their posting then continues to remind us of the nature of the threats they face and some details of their journey up to this point. I'm going to share that, interrupting to comment and elaborate where needed. They write: "There has never been a successful, widespread malware attack against iPhone."

Okay, now, that's true, and it's worth remembering. Microsoft might argue that Windows, being a far more open platform compared to Apple's, which is a much more controlled environment, faces a much more daunting security challenge, that is, that Windows faces a much more daunting security challenge. But all of Microsoft's biggest problems were of their own making with their own code. All of those early Internet worms leveraged fundamental flaws in Microsoft's IIS web server, and the many continuing problems with Microsoft's NT LAN Manager and their Remote Desktop protocol. Those were, in every case, enabled by Microsoft's poor coding and insecure protocol designs. Apple has objectively done a far better job, and their devices are every bit as well-connected as Microsoft's.

So Apple continues: "The only system-level iOS attacks we observe in the wild come from mercenary spyware, which is vastly more complex than regular cybercriminal activity and consumer malware. Mercenary spyware is historically associated with state actors and uses exploit chains that cost millions of dollars to target a very small number of specific individuals and their devices." And I'll just note that what Apple is saying is we don't care. We're going to stop that, even though, you know, they've never really had a big problem.

They wrote: "Although the vast majority of users will never be targeted in this way, these exploit chains demonstrate some of the most expensive, complex, and advanced attacker capabilities at any given time and are uniquely deserving of study as we work to protect iPhone against even the most sophisticated threats. Known mercenary spyware chains used against iOS share a common denominator with those targeting Windows and Android: they exploit memory safety vulnerabilities, which are interchangeable, powerful, and exist throughout the industry."

Okay. That's all true. And I'll just say, I could not care less how thin Apple is able to make an iPhone. But the same dogged, crazy, over-the-top passion that they show for making their phones ever thinner, a whole different group at Apple is showing the same sort of focus on, darn it, we're not going to let anything attack our devices, period, no matter how much it costs whoever it is that wants to do it; we're just saying unh-unh, not here. So as I noted earlier, despite all the lessons we've learned, even recently authored code, such as that Adobe DNG file decompressor, continues to exhibit exploitable vulnerabilities.

So Apple writes: "For Apple, improving memory safety is a broad effort that includes developing with safe languages and deploying mitigations at scale. We created Swift, an easy-to-use, memory-safe language, which we employ for new code and targeted component rewrites. In iOS 15, we introduced kalloc_type, a secure memory allocator for the kernel, followed in iOS 17 by its user-level counterpart, xzone malloc. These secure allocators take advantage of knowing the type or purpose of allocations so that memory can be organized in a way that makes exploiting most memory corruption vulnerabilities inherently more difficult.

"In 2018, we were the first in the industry to deploy Pointer Authentication Codes (PAC) in the A12 Bionic chip, to protect code flow integrity in the presence of memory corruption. The strong success of this defensive mechanism in increasing exploitation complexity left no doubt that the deep integration of software and hardware security would be key to addressing some of our greatest security challenges." In other words, they're saying they learned something from that A12 Bionic chip experience. They continued: "With PAC behind us, we immediately began design and evaluation work to find the most effective way to build sophisticated memory safety capabilities right into Apple silicon."

Okay. So to put this into perspective, the earliest efforts at building barriers around memory to protect against its misuse were implemented in software. They were useful and effective, but they turned out to fall short of being "absolute." As a consequence, while the bar was meaningfully raised, this just meant that the bad guys needed to work a lot harder. We talked about address space layout randomization, for example; and, in turn, with the bad guys needing to work harder, governments needed to pay more as exploits became significantly more rarified. Unfortunately for journalists, political activists, and other targeted individuals, governments have no shortage of funds, nor willingness to pay a competitive price.

You know, after adding things like address space layout randomization, kernel address space layout randomization, stack cookies, reference counting, and other software-based mitigations - all of which, I'll note, we've covered in previous years of this podcast - they were all eventually worked around by highly motivated attackers. So the ante had been upped, and it was time to start adding explicit anti-exploitation features to the underlying hardware.

Apple wrote: "Arm published the Memory Tagging Extension (MTE) specification in 2019" - okay, so that was six years ago - "as a tool for hardware to help find memory corruption bugs. MTE is, at its core, a memory tagging and tag-checking system, where every memory allocation is tagged with a secret. It's a 4-bit secret. The hardware guarantees that later requests to access memory are granted only if the request contains the correct secret. If the secrets don't match, the app crashes, and the event is logged. This allows developers - again, developers - to identify memory corruption bugs immediately as they occur."

Okay. So again, I'm going to pause to highlight this distinction because it's important. Arm's MTE was introduced, as I said, six years ago in 2019 with the Armv8.5-A architecture. Its intention, design, and focus was to assist developers - both the software, like debuggers, and the people - during code development, when they were debugging. Running code under a debugger that would attempt to verify and validate every memory access would introduce prohibitive overhead. We'll be talking a lot about overhead in a bit. You know, everything is about overhead.

So Arm's MTE was added to the ARM architecture to allow the hardware, while running at full speed, to detect instances of "use after free" and "out of bounds" accesses. And we'll explain how in a minute. It's not possible to do this at speed without hardware assistance because you'd have to check every reference to memory. And you just can't. This has to be done in the hardware.

By tagging memory allocations with what were known as "colors" consisting of 4-bit tags, so different allocations receive different coloring, and then checking against those pointer tags at runtime, MTE was able to provide a low-overhead, always-available, bug-trapping mechanism in hardware.

Since we're going to be talking about "tagging" a lot, let me clarify what's going on here. When an application running on behalf of its user, or some process in the kernel, needs the use of a block of memory - for example, it needs some buffer space to store some incoming communications data - the app or kernel process makes a request of the operating system's memory management system. For decades, the way this worked is that a memory manager would locate some free memory, increment that memory's usage count to show that it's now in use, and return a pointer to the requested memory to its requestor. From that point on, that memory would be considered to be "owned" by the requesting application, and it would be free to do anything with it that it wished.

Unfortunately, the flexibility of access that was needed meant that the memory's ownership was not enforced. Any other process that knew where the memory was located could also access it. This is what the introduction of MTE changed. Under Arm's Memory Tagging Extension, the requestor would receive not only a pointer to a block of memory that satisfied its request, but also that short "tag," that color, a 4-bit secret key that would need to be present any time that memory was accessed. The theory was that while bad guys might be able to arrange to determine where some memory was that had recently been freed or might still be in use, requiring them to also determine that memory's access tag significantly raised the bar for memory access abuse.
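The allocate-with-tag, check-on-access scheme being described can be sketched as a toy Python model. To be clear, this is an illustration invented for this transcript, not Apple's or Arm's implementation; in real MTE the tag rides in the unused upper bits of the pointer and the comparison happens in silicon on every load and store.

```python
# Toy model of MTE-style memory tagging: every allocation gets a 4-bit tag
# ("color"), the returned pointer carries that tag, and every access is
# checked against the memory's current tag. Hypothetical illustration only.

import random

class TaggedMemory:
    def __init__(self):
        self.tags = {}   # address -> current 4-bit tag
        self.data = {}   # address -> stored value

    def allocate(self, addr):
        tag = random.randrange(16)      # one of 16 possible "colors"
        self.tags[addr] = tag
        return (addr, tag)              # the "tagged pointer"

    def load(self, pointer):
        addr, tag = pointer
        if self.tags.get(addr) != tag:  # synchronous check on every access
            raise MemoryError("tag mismatch: process terminated")
        return self.data.get(addr)

    def store(self, pointer, value):
        addr, tag = pointer
        if self.tags.get(addr) != tag:
            raise MemoryError("tag mismatch: process terminated")
        self.data[addr] = value

mem = TaggedMemory()
p = mem.allocate(0x1000)
mem.store(p, "hello")
assert mem.load(p) == "hello"           # matching tag: access granted

forged = (0x1000, (p[1] + 1) % 16)      # attacker knows the address,
try:                                    # but guesses the wrong tag
    mem.load(forged)
except MemoryError:
    pass                                # access blocked immediately
```

The point of the sketch: knowing an address is no longer enough; an attacker must also guess the 4-bit secret, and a wrong guess is fatal to the attacking process.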

Okay. But MTE alone proved to be insufficient for Apple's needs. They wrote: "We conducted a deep evaluation and research process to determine whether MTE, as designed, would meet our goals for hardware-assisted memory safety. Our analysis found that, when employed as a real-time defensive measure, the original Arm MTE release exhibited weaknesses that were unacceptable to us, and we worked with Arm to address these shortcomings in the new Enhanced Memory Tagging Extension (EMTE) specification, released in 2022." So three years after the 2019 release of MTE, working with Apple, Arm released a new specification, the Enhanced Memory Tagging Extension, EMTE, in 2022. They said: "More importantly, our analysis showed that while EMTE had great potential as specified, a rigorous implementation with deep hardware and operating system support could be a breakthrough that provides an extraordinary new security mechanism."

They said: "Consider that MTE can be configured to report memory corruption either synchronously or asynchronously. In the latter mode, memory corruption does not immediately raise an exception, leaving a race window open for attackers. We would not implement such a mechanism. We believe memory safety protections need to be strictly synchronous, on by default, and working continuously. But supporting always-on, synchronous MTE across key attack surfaces while preserving a great, high-performance user experience is extremely demanding for hardware to support.

"In addition, for MTE to provide memory safety in an adversarial context, we would need to finely tune the operating system to defend the new semantics and the confidentiality of memory tags on which MTE relies." Okay. Again, I'll just pause to say that MTE, remember, was designed to help developers and debuggers. It was not meant as a proactive security measure. So the deep analysis Apple described was asking: can we use Arm's MTE, released in Armv8.5-A, as a security measure? And they concluded, unfortunately, no. It comes up short.

They said: "Ultimately, we determined that to deliver truly best-in-class memory safety we would carry out a massive engineering effort spanning all of Apple, including updates to Apple silicon, our operating systems, and our software frameworks. This effort, together with our highly successful secure memory allocator work, would transform MTE from a helpful debugging tool into a groundbreaking new security feature.

"Today we're introducing the culmination of this effort: Memory Integrity Enforcement (MIE), our comprehensive memory safety defense for Apple platforms. Memory Integrity Enforcement is built on the robust foundation provided by our secure memory allocators, coupled with Enhanced Memory Tagging Extension - that's the EMTE from 2022 - in synchronous mode, and supported by extensive Tag Confidentiality Enforcement policies, again for use against malware. MIE is built right into Apple hardware and software in all models of iPhone 17 and iPhone Air and offers unparalleled, always-on memory safety protection for our key attack surfaces including the kernel, while maintaining the power and performance that users expect. In addition, we're making EMTE available to all Apple developers in Xcode as part of the new Enhanced Security feature that we released earlier this year during the Worldwide Developer Conference.

"The rest of this post," they wrote, "dives into the intensive engineering effort required to design and validate Memory Integrity Enforcement."

Okay. So let's get all these abbreviations straight. Originally, to aid in debugging, Arm designed and introduced MTE (Memory Tagging Extension) in 2019. But MTE was never designed to be used in an adversarial environment. It was designed to be a debugging aid. So, for example, it was acceptable if it operated asynchronously from the code, notifying a developer of a violation sometime after the fact. That was okay because they could go back and see what had caused it - acceptable for a debugger. But in an adversarial setting the damage might already have been done by the time an exception was raised. Thus Apple's need for synchronous checking, that is, the instant you try to access memory, if you shouldn't be doing it, your butt is terminated.

So what they found was that, after experiencing MTE's limitations for themselves, three years later, in 2022, they worked closely with Arm on the development and implementation of an extension to it: EMTE, the Enhanced Memory Tagging Extension.

Original MTE also allowed non-tagged memory regions, that is, like, okay, if you're not going to tag this, that's fine. For example, global or static allocations or untagged regions could be accessed without any tag checks, meaning that attackers could exploit out-of-bounds writes into such regions. EMTE addressed this by requiring access from a tagged memory region into non-tagged memory to respect the tag knowledge. This prevented the use of untagged memory from being used as a tag bypass. Again, Apple just looked at every single aspect of this and just said, you know, no no no no no, we need to fix these things. I mean, to me this represents them really, really getting serious about, you know, nipping this stuff once and for all.

EMTE also brings more comprehensive enforcement of tag mismatches, especially in synchronous mode, so that buffer overflows and use-after-free bugs are blocked immediately, not just signaled later or more coarsely. So much more granular control and, as I said, synchronous, meaning the instant something tries to make a fetch, if it should not be doing so, the process is terminated and an exception is logged. So there's a lot more to the improvements that EMTE brought over its predecessor MTE. But with their A19 ARM chips, Apple has already moved on to their next generation of even more rigorous protections.
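To see why synchronous tag checking shuts down the two big bug classes just mentioned, here's a minimal sketch, again an invented illustration rather than real EMTE behavior: the allocator assigns a fresh tag whenever memory is freed or reallocated, so a stale pointer (use-after-free) or a pointer straying into a neighboring, differently colored allocation (out-of-bounds) mismatches on its very next access.

```python
# Sketch of why retagging defeats use-after-free and why distinct "colors"
# on adjacent allocations defeat overflows. Hypothetical illustration only.

def check(mem_tag, ptr_tag):
    """The hardware's synchronous tag check, reduced to one comparison."""
    if mem_tag != ptr_tag:
        raise MemoryError("tag mismatch: blocked immediately")

# --- use-after-free ---
memory_tag = 0x3          # tag assigned at allocation
stale_ptr_tag = 0x3       # a pointer squirreled away by buggy code
check(memory_tag, stale_ptr_tag)       # still allocated: access succeeds

memory_tag = 0x9          # free() retags the underlying memory
try:
    check(memory_tag, stale_ptr_tag)   # stale access: caught at once
    uaf_caught = False
except MemoryError:
    uaf_caught = True

# --- out-of-bounds ---
buffer_a_tag, buffer_b_tag = 0x3, 0xC  # neighbors get different colors
try:
    # An overflow off the end of A lands in B's memory, but the pointer
    # still carries A's tag, so the write mismatches B's tag.
    check(buffer_b_tag, buffer_a_tag)
    oob_caught = False
except MemoryError:
    oob_caught = True
```

In synchronous mode there's no race window: the faulting instruction itself raises the exception, which is the property Apple insisted on.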

So Leo, let's take our final break.

Leo: Okay.

Steve: And we're going to continue looking at what Apple has done here.

Leo: Really interesting stuff.

Steve: Yeah. This is a take-no-prisoners. We're through fooling around here. We have our own silicon. We are comfortable with how Arm technology works. We're going to extend this and make what they called a "significant commitment" in silicon in order to just end this whole class of problems.

Leo: Darren Oakey asked this question, maybe it's a dumb question, he says, "Why don't you just wipe the memory after it's free, zero it all out each time?" But I guess this is not just what you're working with. It's overflows, too; right?

Steve: So, yes. And so it's overflows. And OSes do get around to zeroing memory after it's been freed.

Leo: Right, just do it right away, right.

Steve: Exactly. And doing it immediately would introduce a huge amount of overhead when releasing a large buffer: everything would have to stop while you overwrote it with zeroes. So what happens is buffers that are released are put on a dirty chain. Then free time that the operating system has is used to zero them and move them over to the ready-to-allocate chain. And then all of those freed regions are aggregated and consolidated. So there's a whole bunch of stuff going on behind the scenes.
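The deferred-zeroing scheme Steve describes can be sketched in Python. The names and structure here are illustrative, not any real OS allocator: free() stays cheap by just queuing the buffer, and scrubbing happens during idle time.

```python
# Toy sketch of deferred zeroing: freed buffers go on a "dirty" chain, and
# idle time scrubs them onto a "ready" chain for reallocation. Illustrative
# names; real kernels also coalesce adjacent free regions at this stage.
from collections import deque

class Allocator:
    def __init__(self):
        self.dirty = deque()   # freed, contents not yet scrubbed
        self.ready = deque()   # zeroed, safe to hand out again

    def free(self, buf):
        self.dirty.append(buf)          # O(1): no zeroing on the hot path

    def idle_work(self):
        while self.dirty:               # runs when the system is otherwise idle
            buf = self.dirty.popleft()
            for i in range(len(buf)):
                buf[i] = 0              # scrub stale contents
            self.ready.append(buf)

    def alloc(self):
        return self.ready.popleft() if self.ready else None

a = Allocator()
a.free([7, 7, 7])      # release a buffer still holding old data
a.idle_work()          # background scrub
buf = a.alloc()        # reallocated buffer comes back zeroed
```

The design choice is the classic one: pay the zeroing cost when nobody is waiting, instead of on the free() call itself.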

Leo: That's actually like at our house because Lisa says I should wash dishes while I'm cooking. But I say I'm going to cook, and then I'm going to wash the dishes afterwards. I think that's more efficient, personally. But, you know.

Steve: Yeah. I tend to go for the same [crosstalk].

Leo: This episode of Security Now!, we'll get back to this, is really interesting and very impressive, really, that Apple would say, you know, we're going to tackle this.

Steve: It is a huge investment.

Leo: Yeah. It's exciting. We'll find out what Apple did do to enhance MTE in just a moment. Okay. You got to cool off a little bit, have a little tea. I'm not talking [crosstalk] our audience.

Steve: You're going to love the way these 4-bit tags work, Leo.

Leo: All right.

Steve: So Apple's MIE can best be seen as an evolution of EMTE, the Enhanced MTE, where MIE adds various final touches to EMTE's already very useful protections. So at first glance, for example, these 4-bit tags might not appear to be very useful because 4 bits, having just 16 possible states, cannot contain much security entropy. But the way they're employed is very clever. Memory is tagged in small fixed-size granules, 16 bytes each, with every granule of an allocation carrying the allocation's 4-bit tag.

One of the guarantees made by the system's memory allocator, now under MIE, is that adjacent allocations of memory will always have differing tags. This cleverly nips buffer overflows in the bud. If some adversary were able to arrange to compromise an application to obtain access to both its memory and its associated memory access tag, it would be unable to read or write outside of the application's allocated memory region because those adjacent "buffer overflow" regions would be guaranteed to be using a differing tag, with neither the benign application nor its malicious compromiser having any way of knowing or predicting any adjoining allocation tag's differing 4-bit value. Thus the infamous buffer overwrites are stopped cold.
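That adjacent-tag guarantee can be captured in a toy Python allocator (an assumed simplification for illustration, not Apple's code): neighbors never share a tag, so a linear overflow out of one buffer lands in memory whose tag no longer matches the pointer's.

```python
# Toy model: the allocator guarantees adjacent allocations get differing
# 4-bit tags, so walking one byte past the end of a buffer faults even if
# the attacker holds a valid pointer-and-tag pair for that buffer.
import random

class TagFault(Exception):
    pass

class TaggingAllocator:
    def __init__(self):
        self.regions = []          # list of (start, end, tag), laid out contiguously

    def alloc(self, size):
        start = self.regions[-1][1] if self.regions else 0
        prev_tag = self.regions[-1][2] if self.regions else None
        # Random tag, but never the same as the immediately preceding neighbor.
        tag = random.choice([t for t in range(16) if t != prev_tag])
        self.regions.append((start, start + size, tag))
        return start, tag

    def access(self, addr, pointer_tag):
        for start, end, tag in self.regions:
            if start <= addr < end:
                if tag != pointer_tag:
                    raise TagFault(addr)   # wrong tag for this region
                return True
        raise TagFault(addr)               # unmapped address

heap = TaggingAllocator()
base_a, tag_a = heap.alloc(16)
heap.alloc(16)                     # the neighbor: guaranteed a different tag
heap.access(base_a + 8, tag_a)     # in-bounds access: fine
try:
    heap.access(base_a + 16, tag_a)  # one past the end: neighbor's tag differs
    overflow_caught = False
except TagFault:
    overflow_caught = True
```

Even though 4 bits of entropy are trivially guessable in isolation, the *guarantee* that neighbors differ makes a contiguous overflow fail deterministically.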

The equally pernicious and ubiquitous use-after-free vulnerabilities are similarly prevented - and this actually addresses the question that the listener had a second ago, Leo. Use-after-free vulnerabilities are prevented by having the updated EMTE memory allocator, now Apple's MIE memory allocator, change a region's access tag whenever its memory is freed. Thus, in the same way, even if an application had been compromised so that malware obtained both the memory pointer and the tag of its memory, once that memory has been released back to the system any subsequent attempt by the malware to use it will be trapped and blocked immediately. No more use of memory after it's been freed.
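Retag-on-free can be shown with an even smaller toy model (again, illustrative names and behavior only): freeing a block bumps its tag, so a stale pointer that kept the old tag faults on its next use.

```python
# Toy model of retag-on-free: the allocator changes a block's tag when the
# block is freed, invalidating every pointer that still carries the old tag.

class TagFault(Exception):
    pass

class Heap:
    def __init__(self):
        self.block_tag = 0x3

    def alloc(self):
        return self.block_tag            # the pointer "carries" the live tag

    def free(self):
        self.block_tag = (self.block_tag + 1) % 16   # retag on free

    def load(self, pointer_tag):
        if pointer_tag != self.block_tag:
            raise TagFault()             # stale tag: trapped immediately
        return "data"

h = Heap()
ptr_tag = h.alloc()
live_read = h.load(ptr_tag)    # valid while the block is live
h.free()
try:
    h.load(ptr_tag)            # dangling pointer, old tag: blocked
    uaf_caught = False
except TagFault:
    uaf_caught = True
```

Note that the dangling pointer itself is untouched; it is the memory's tag that moved out from under it, which is why even a fully leaked pointer-plus-tag pair goes stale at free time.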

So if you pardon the pun, "ARMed" with this bit of background, Apple's further explanations will make some more sense. Apple wrote: "A key weakness of the original MTE specification is that access to non-tagged memory, such as global variables, is not checked by the hardware. This means attackers don't have to face as many defensive constraints when attempting to control core application configuration and state. With Enhanced MTE, we instead specify that accessing non-tagged memory, like these global variables, from a tagged memory region, meaning one whose accesses are tag-checked, requires knowing that region's tag, making it significantly harder for attackers to turn out-of-bounds bugs in dynamic tagged memory into a way to sidestep EMTE by directly modifying non-tagged allocations."

And they said: "Finally, we developed Tag Confidentiality Enforcement to protect the implementation of our secure allocators from technical threats and to guard the confidentiality of EMTE tags, including against side-channel and speculative-execution attacks.

"Our typed allocators and EMTE both rely on confidentiality of kernel data structures from user applications, and of the tags chosen by the allocator. Attackers might attempt to defeat EMTE, and in turn Memory Integrity Enforcement" - Apple's newest technology - "by revealing these secrets. To protect the kernel allocator backing store and tag storage, we use the Secure Page Table Monitor, which provides strong guarantees even in the presence of a kernel compromise. We also ensure that when the kernel accesses memory on behalf of an application, it's subject to the same tag-checking rules as user space."

So Arm began with MTE, which Apple utilized once it was available. Its limitations caused Apple to work with Arm to create EMTE. Apple then gained sufficient real-world experience with EMTE, examining the many ways it could be and still was being bypassed in the field, that they further enhanced that already enhanced memory tagging extension to create MIE. I guess they didn't want to go with EEMTE, Enhanced Enhanced MTE.

So anyway, Apple has clearly taken the second generation of MTE, known as EMTE, and made it always on, synchronous, and as strong as possible. If we were to summarize, in bullet-pointed fashion, the things they did: They made EMTE synchronous, so that tag verification occurs immediately before memory accesses, and any tag mismatch crashes the process to prevent its exploitation. This eliminates opportunities where malicious behavior might slip by due to delayed or asynchronous checking, which, due to the overhead, was the way MTE would typically be used. They also enforce always-on, system-wide deployment: MIE is enabled by default across Apple's entire kernel and for more than 70 userland processes. Previous and other systems were forced to rely on optional or per-app memory tagging because of its significant performance cost.

They have secure typed allocators: Apple's memory allocators have been updated to use type information to isolate objects by type, reducing type-confusion-style overlaps and guiding the placement of allocations in memory so that different types get different tags and are harder to misuse. They also handle retagging and memory reuse safely. As I noted, when memory is freed and reused, Apple's system ensures that the freed memory's tag is changed, so that stale pointers with old tags will no longer match. They also protect against overflow across adjacent allocations by assuring that adjoining allocations have differing tags. And they no longer allow unchecked access to non-tagged memory from tagged memory; code running in a tagged region must respect its own region's tag even when accessing non-tagged memory. So they foreclosed that, too.

And their hardware enforces the confidentiality of these tags, which was never done before because MTE was not really focused on protecting against malicious abuse; it was always focused on helping debuggers catch bugs. All of this is now done down in the hardware and silicon. Because doing any of this in software would carry prohibitive performance overhead, for MIE they moved everything that was necessary down into the hardware of the A19 and A19 Pro chips. So I'm just very, very impressed with the scale of Apple's commitment.

It is not difficult to imagine what the team behind MIE, who had just spent the last five years of their lives perfecting all of this new super-hardening technology, were probably feeling when, just two weeks ago, another successful exploit was made against hardware they had already moved well past. They were poised to replace it, as they did last week, with an entirely new system that would almost certainly no longer fall victim to exactly that exploit, and probably to nearly any other attack. As I said, it is the case that not every type of security problem is a use-after-free or a buffer overflow or some sort of memory exploit. But I don't know what the percentage is; 95% of them probably are.

No one is ever going to suggest that there will never be another successful system-level exploit against Apple's latest or future iOS and iPadOS platforms. But there is a distinct possibility that that could be the case. You know, as I mentioned a while ago, we heard from an early Apple hobbyist, an exploit developer, who was lamenting that he had long ago hung up his spurs and was no longer attempting to find iPhone exploits because they have become insanely difficult to locate and engineer.

There will come a time, and we might be there today, when the cost to develop any new exploit, if it's even possible, has become so high that even the highest-end, most capable exploit developers will, you know, join that earlier hacker in giving up on Apple and switching to more attackable platforms. Because Apple has just gone all the way and said no. Even though only a tiny percentage of our users are ever targeted, that's not okay.

Leo: Of course that means the people who will attack Apple are the ones most strongly motivated, actors from nation states who are going after...

Steve: But I'm saying, even at this point, I mean, that's the only people who have...

Leo: It's so hard.

Steve: That's who - those are the only people who have been attacking Apple.

Leo: Right.

Steve: And this raises the bar.

Leo: Is this enough to deter them, you think?

Steve: Yes.

Leo: Yeah. Interesting.

Steve: I think what it means is we're going to be rebooting our phones for software security updates much less often.

Leo: Great. Boy, that would be great.

Steve: Because Apple won't be in a panic, needing to protect this against the latest zero-day. We're just going to have many, many fewer zero-days.

Leo: Now, as you know, Apple has locked things down so much it's hard for security researchers to actually work on iPhones.

Steve: Yeah.

Leo: But they have opened up a program, in fact they just opened up applications for the new phones, for security researchers to get specially modified iPhones that are less protected so that they can at least work on these things. So I really admire the way Apple has gone into this.

Steve: I am so impressed. I mean, this is a - no other company has made this sort of commitment.

Leo: Yeah. Fantastic. Well, that's what happens when you make your own silicon. You can do more. And thank goodness that their decision has been to do more and not save more and charge more.

Steve: Yeah. They called it an "unprecedented percentage" of their silicon real estate is now devoted just to this. Not to making it faster, not to more cores and more, you know, neural nonsense, it's no. We're saying this is how we're tagging the memory, and we're going to stop you cold if you don't have the magic token for doing so. And bad guys can't get that.

Leo: One thing I did notice that worried me was that they have enhanced the branch prediction capabilities. They are not abandoning branch prediction, which as we know is one of the sources for these timing attacks, like Spectre.

Steve: Yeah. Yup.

Leo: Would this help in that kind of event? No. This is a different kind of problem.

Steve: I think we're going to have to see whether those - so those are side channel, and they are saying that this is also proof against side-channel attacks. They have hardened this against that.

Leo: So the memory leaks, that's what's happening is they leak in these branches.

Steve: Yes. It's the side-channel attack that gets the malware the pointer that it can then abuse.

Leo: So if it can't abuse it. Ah, brilliant.

Steve: It doesn't matter if the bad guys get the pointer.

Leo: Ah. Wow. Thank you for explaining this. I'm venturing that there are very few places you can get this kind of information. You could read the whitepaper for yourself, but it's going to take somebody like Steve to explain its implications. Somebody who's been doing this for a long time and knows exactly where the bodies are buried. Good on Apple. Good on Apple. And thank you for explaining this.

Steve: Yeah, I'm very impressed.

Leo: What I love is you don't shy away from the really technical stuff. And you know what, I think our audience appreciates that. They [crosstalk]. Yeah, yeah. Fantastic.


Copyright (c) 2014 by Steve Gibson and Leo Laporte. SOME RIGHTS RESERVED

This work is licensed for the good of the Internet Community under the
Creative Commons License v2.5. See the following Web page for details:
http://creativecommons.org/licenses/by-nc-sa/2.5/



Gibson Research Corporation is owned and operated by Steve Gibson.  The contents
of this page are Copyright (c) 2026 Gibson Research Corporation. SpinRite, ShieldsUP,
NanoProbe, and any other indicated trademarks are registered trademarks of Gibson
Research Corporation, Laguna Hills, CA, USA. GRC's web and customer privacy policy.