Transcript of Episode #951

Revisiting Browser Trust

Description: How can masked domain owners be unmasked? What new and very useful feature has WhatsApp just added? How did Iranian hackers compromise multiple U.S. water facilities across multiple states? Did Montana successfully ban all use of TikTok statewide, and is that even possible? How many Android devices are RCS-equipped? What's the EU's Cyber Resilience Act, and is it good or bad? Is ransomware finally beginning to lose steam? What's the deal with all of these new top level DNS domains? Do they make any sense? Has CISA been listening to this podcast, or have they just been paying attention to the same things we have? What's up with France's ban on all "foreign" messaging apps, and did the Prime Minister's nephew come up with an alternative? And I want to share two final insights from independent industry veterans regarding the EU's proposal to forcibly require our browsers and operating systems to trust any certificates signed by their member countries.

High quality  (64 kbps) mp3 audio file URL: http://media.GRC.com/sn/SN-951.mp3

Quarter size (16 kbps) mp3 audio file URL: http://media.GRC.com/sn/sn-951-lq.mp3

SHOW TEASE: It's time for Security Now!. Steve Gibson is here. What happened when Montana tried to ban TikTok? What's EU's Cyber Resilience Act? And how good, how lucky are we to have CISA? And then the way the EU is about to break browser security for everyone worldwide. Steve explains it all next on Security Now!.

Leo Laporte: This is Security Now! with Steve Gibson, Episode 951, recorded Tuesday, December 5th, 2023: Revisiting Browser Trust.

It's time for Security Now!, the show where we cover the latest news about security, what's going on in the Internet with this guy right here, who knows all, sees all, tells all: Mr. Live Long and Prosper, Steve Gibson. Hi, Steve.

Steve Gibson: Yo, Leo. Welcome back from your weeklong retreat. We missed you last week. Ant held down the fort.

Leo: Thank you for the tribute you did. That was very sweet.

Steve: Well, it's funny because no major topic occurred. And when I looked at the previous week's episode, that was Ethernet Turned 50. And I had received a notice from my phone saying, hey, you know, Leo's about to have an event.

Leo: Yes, my birthday was the 29th, so yeah.

Steve: Yeah. And so what was funny, too, was that my calendar didn't have your year of birth, so I didn't know which number this was. So I went to Wikipedia. And of course it knew. And it said 66.

Leo: It's wrong.

Steve: So I put all the - I know. Well, no. It said 66 on Monday or Tuesday.

Leo: The day before, yes, right.

Steve: And so I forgot to add one. So I produced the show notes, and the title was Leo Turns 66. Woohoo!

Leo: I wish.

Steve: And sent email out to everyone. And I quickly got a note from John saying, "Steve, Leo's been 66 all year."

Leo: A whole year.

Steve: I said, "Ooh, crap, that's right, I need to add one." Anyway.

Leo: I really appreciated [crosstalk].

Steve: So I quickly fixed everything. And anyway, it was - we had a great show. We had one piece that Ant was great for: one of our listeners asked for my technical suggestions for how to block the third, and most technically savvy, of his three kids from any access to the Internet. And my answer was...

Leo: Fuggedaboutit.

Steve: Don't try.

Leo: Yeah.

Steve: You know, this is a problem that requires parenting, not firewalls. And anyway, so we had a good time talking about that. I'm going to drag our listeners one final time, I hope it's final, through the issue of what the EU is planning to do because two notable industry people weighed in last week. And actually the second of these two had shared some statistics about the distribution of certificate authorities which are sort of astonishing. So today's title is "Revisiting Browser Trust." I think everyone's going to find it interesting. If not, by the time you get there, you'll be exhausted anyway. So you could just say, okay, you know, I don't want to hear Gibson talk about this anymore. But there are a couple of important new pieces of information that I think are going to be useful.

But before we get to that, we're going to answer some questions. How can masked domain owners be unmasked? What new and very useful feature has WhatsApp just added? How did Iranian hackers compromise multiple U.S. water facilities across multiple states recently? How did Montana successfully - oh, I'm sorry, did Montana successfully ban all use of TikTok statewide? And is that even possible? How many Android devices are RCS-equipped now? What's the EU's Cyber Resilience Act all about, and is it good or bad? Is ransomware finally beginning to lose steam? What's the deal with all these new top-level DNS domains? Do they make any sense? Has CISA been listening to this podcast, or have they just been paying attention to the same things we have? What's up with France's ban on all "foreign," literally, in quotes, "foreign" messaging apps, and did the Prime Minister's nephew come up with an alternative?

And as I said, I want to share two final insights from independent industry veterans regarding the EU's, like, I mean, this is like all but happened at this point, signed behind closed doors, new legislation to forcibly require our browsers and operating systems to trust any certificates signed by their member countries. So, and of course - I know. It's just, I have a lot of commentary at the end about how I just, well, we'll get there. So, and of course we have a pretty funny picture of the week. So I think another great podcast for our listeners.

Leo: I would expect nothing less. Wow. You know, I think I can see the next free GRC program as a program that rips those certificates out at the roots from your browser.

Steve: Yeah.

Leo: Because, I mean, seriously, we're going to need that, Steve. Boy, just unbelievable. Let's get jiggy with our Picture of the Week. What are you laughing at? Can I show it?

Steve: I'm looking at the picture.

Leo: Oh, you're waiting - okay. Let me put the camera back on me, and I am going to show the Picture of the Week. Should I show the world at the same time?

Steve: Oh, sure. It's just so good.

Leo: Okay, here we go [laughing]. Okay, that is funny. Will you describe this, Steve?

Steve: So what Leo is laughing at, and I have to say when I looked at it again this morning when I was selecting from among the archive, I did, I burst out laughing because it's just so perfect. It is a sign that says - and it's a real authentic sign that in a yellow rectangle says "Warning Low Flying Aircraft." And it's got like then a diamond above it showing a picture, you know, sort of an iconic picture of an aircraft. What makes it so funny...

Leo: Yeah, this is obviously near an airport. I mean, this, you know, you see this.

Steve: Yeah, and I was thinking that. When, where would you encounter a sign that's warning you of low-flying aircraft?

Leo: A small airport, small plane airport, you'd definitely see it.

Steve: Yeah. Anyway, what's so funny about this is the sign has been knocked over. So you can see it's broken off at the base, and it's lying on the grass because the presumption is...

Leo: Low-flying aircraft.

Steve: ...the low-flying aircraft got it.

Leo: Anyway, that's as low as you can go. It's on the grass, yeah.

Steve: It is just - it's just so good.

Leo: Wow. That's hysterical.

Steve: So I just, you know, it makes me love humanity even more. Last week we had one where there was a sign in front of an escalator that had some yellow, you know, warning tape across it. And it said: "This escalator is refusing to escalate."

Leo: I love people with a sense of humor. That's awesome. That's great.

Steve: And actually someone tweeted me, and he said, okay, that's dumb. An escalator is the one piece of equipment which is still useful even when it's broken.

Leo: Well, that's a good point.

Steve: As a set of stairs.

Leo: Yeah. Not quite as good, but it still works.

Steve: And of course, you know, liability and all that. You couldn't do that. Anyway, okay, so one of the things that was always chafing while I was with Network Solutions, which, you know, I'm very loyal, I started with them in the beginning.

Leo: They were your registrar.

Steve: Yes, GRC.com, on day one, my domain was at Network Solutions because they were the guys. They were the Big Kahuna. They may have been the only Kahuna back in those days. But what really chafed was the idea of my - and this of course came along later - the idea of my needing to pay them additional money, annually even, to redact the domain registration listings they themselves had created for ICANN's Internet WHOIS database queries. The original idea behind domain registration was for it to be public. Right? I mean, this whole thing was we're all one big happy globe, worldwide network, and this is all going to be wonderful. And so the people who register domain names should be public.

But it wasn't long before spammers and scammers were scraping the public domain registration WHOIS database for information and abusing it in every way imaginable. So it soon became prudent to have that data masked, and masking services appeared. Then the domain registrars themselves began offering this extra service, with many seeing the provision of this masking as of course another revenue opportunity. So just one of the many reasons I'm so glad that I left Network Solutions and moved over to Hover, who has been, if they're not still, a sponsor of the TWiT Network.

Leo: They're not, but they're good, and they still do free WHOIS privacy; right?

Steve: Exactly. I went back to make sure that that was the case. So, you know, that's the way it should be.

Okay. Now, in the EU with GDPR, things are somewhat different. Now, as we know, the GDPR has had its pluses and minuses. One of the minuses we all now experience every day is the pervasive annoyance of every website being forced to wave its cookie policies in our faces and obtain our acknowledgement and consent. On the flipside, one of the pluses is that the GDPR includes a stringent data protection law that has forced domain registrars to redact information on owners from their publicly available WHOIS databases. This information is still present in the private databases of domain registrars, which, you know, they have to have that in order to maintain the domain. And it has historically been made available to some organizations, but usually only in a very limited fashion, you know, like under court order, or responding to subpoenas, or following intelligence-sharing arrangements and agreements of some sort.

Okay. I'm bringing this all up today because last Tuesday ICANN announced a new facility to improve the current situation for those, such as in law enforcement, who have a legitimate need to obtain access to otherwise redacted domain ownership information. So with a bit of editing, here's what ICANN said. They wrote: "The Internet Corporation for Assigned Names and Numbers" - that's what ICANN stands for, I-C-A-N-N - has launched the Registration Data Request Service (RDRS). The RDRS is a new service that introduces a more consistent and standardized format to handle requests for access to nonpublic registration data related to generic top-level domains (gTLDs).

"Personal data protection laws now require many ICANN-accredited registrars to redact the personal data from public records that was previously available in their WHOIS databases. With no one way to request or access such data, it can be difficult for interested parties to get the information they need. The RDRS helps by providing a simple and standardized process to make these types of requests. The RDRS can be an important resource for ICANN-accredited registrars and those who have a legitimate interest in nonpublic data, like law enforcement, intellectual property professionals, consumer protection advocates, cybersecurity professionals, and government officials.

"The RDRS is a free, global, one-stop-shop ticketing system that handles nonpublic TLD registration data requests. The RDRS connects requestors of nonpublic data with the relevant ICANN-accredited registrars for TLD domain names that are participating in the service. The system will streamline and standardize the process for submitting and receiving requests through a single platform. The service does not guarantee access to requested registration data. All communication and data disclosure between the registrars and requestors takes place outside of the system.

"By utilizing a single platform and request form, RDRS provides a consistent and standardized format for handling nonpublic TLD registration data requests. This simplifies the process for requestors by automatically identifying the correct registrar for a domain name and preventing the need to complete multiple forms with varying sets of required information managed by different registrars. The service also provides a centralized platform where requestors can conveniently access pending and past requests. They also have the ability to create new requests, develop request templates, and cancel requests when needed."

Finally, "Registrars can benefit from using the service as it provides a mechanism to manage and track all nonpublic data requests in a single location. Registrars can receive automated alerts anytime a request is submitted to them. The use of a standardized request form also makes it easier for the correct information and supporting documents to be provided to evaluate a request."

So to me, this seems like it's been a long time coming, and it makes so much sense. You know, there are, today, there are so many shenanigans going on with Internet domain names that abusers of the system need to know that their ability to hide is being reduced. And legitimate domain owners should have a reasonable expectation of privacy. So the idea of having, you know, the WHOIS databases not all public, yet still creating a uniform, less hassle-filled means of obtaining that nonpublic data across registrars who all have their own ways of doing things - to me, the idea of standardizing this process for obtaining the information makes a lot of sense and seems like a long-missing piece that's finally being provided. So, you know, props to ICANN for this, you know, yay. I think that just - that works.

Due to the strength of Facebook, Meta's WhatsApp, as we know, is the world's number one most used, most popular messaging app. And last Thursday WhatsApp announced a significant new feature which was missing when they announced something known as "Chat Lock" last May. So, okay, first, here's what they announced on May 15th under the headline "Chat Lock: Making your most intimate conversations even more private." They said: "Our passion is to find new ways to help keep your messages private and secure. Today we're excited to bring to you a new feature we're calling Chat Lock, which lets you protect your most intimate conversations behind one more layer of security." Now, right off, I think that sounds like a great idea. And we'll look at why they think so.

They said: "Locking a chat takes that thread out of the inbox and puts it behind its own folder that can only be accessed with your device password or biometric, like a fingerprint. It also automatically hides the contents of that chat in notifications, too. We think this feature will be great for people who have reason to share their phones from time to time with a family member, or those moments where someone else is holding your phone at the exact moment an extra special chat arrives. You can lock a chat by tapping the name of a one-to-one or group and selecting the lock option. To reveal these chats, slowly pull down on your inbox and enter your phone password or biometric. Over the next few months we're going to be adding more options for Chat Lock, including locking for companion devices and creating a custom password for your chats so that you can use a unique password different from the one you use for your phone."

And it is that last feature that I had on my mind the whole time I was reading the foregoing. It was like, well, that's nice that you're going to move this out of the inbox, and you're going to give it its own place to live. But then you're going to allow the same phone password or biometric to unlock it. That's not optimal to me, or I should say maximal. And for this, the whole point is to obtain something maximal.

So what they announced last week was, they said: "Earlier this year we rolled out Chat Lock to help people protect their more sensitive conversations. Today we're launching Secret Code, an additional way to protect those chats and make them harder to find if someone has access to your phone or you share your phone with someone else. With Secret Code you'll now be able to set a unique password different from what you use to unlock your phone to give your locked chats an extra layer of privacy. You'll have the option to hide the Locked Chats folder from your chat list so that they can only be discovered by typing your secret code in the search bar." They've just done this whole thing exactly right.

"If that doesn't suit your needs, you can still choose to have them appear in your chat list. Whenever there's a new chat which you want to lock, you can now long press to lock it rather than visiting the chat's settings. We're so happy our community has been loving Chat Lock, and hope that Secret Code makes it even more useful to them. Secret Code starts rolling out today, and in the coming months will be available globally. We're excited to keep bringing more function to Chat Lock to help people protect their privacy. Let us know what you think."

Anyway, as I've been saying, I think it makes total sense, and I predict that it will become a heavily used feature. From a privacy and security standpoint, it makes sense for our devices to have multiple layers and levels of protection. We need to have more than just a device being locked or unlocked. That's no longer sufficient, or at least certainly not for all possible use cases, and that's what this is allowing WhatsApp to be extended to. And I don't think that using the same password or biometric, as I said, makes sense for an "inner" level of protection. Locking enhanced layers of privacy behind "something you know" makes I think the most sense. So bravo to Meta for doing this.

Okay. I said recently that one of the broad changes to the way we've always done things must somehow be the elimination of any initial default password from our devices. My first thought was to require the user to set a password themselves, while preventing them from setting it to "password" or "Monkey123," by also embedding some minimal complexity requirements. But I don't think that's the right solution. I think the right answer is to have the device randomly assign a strong password when it's initially set up, and that's it. The user needs to write it down. Period. We've been talking for years about the need to be using strong passwords that we cannot recall. That needs to apply to equipment as well as websites.
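
To make that concrete, here is a minimal sketch of what first-boot provisioning along those lines could look like. This is purely illustrative Python with assumed names and an assumed 20-character length, not any vendor's actual firmware:

    # Illustrative sketch only: assign a random, strong, per-device admin password
    # at first setup instead of shipping every unit with a shared factory default.
    import hashlib
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits   # simple enough to write down

    def generate_initial_password(length: int = 20) -> str:
        """Return a cryptographically random password unique to this one device."""
        return ''.join(secrets.choice(ALPHABET) for _ in range(length))

    def provision_device() -> dict:
        """First-boot step: create the password, show it once, keep only a salted hash."""
        password = generate_initial_password()
        salt = secrets.token_bytes(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        print("Write this password down now; it will not be shown again:")
        print(password)
        return {"salt": salt, "hash": digest}   # what the device would actually persist

    if __name__ == "__main__":
        provision_device()

The point is simply that every unit finishes setup with a unique, strong credential the user writes down, and no shared default like "1111" ever exists.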

So here's the news that brought me back to this train of thought. Get a load of this: The U.S. government has confirmed that an Iranian hacking group named Cyber Av3ngers - that's A-V-3, where the "e" of Avengers is actually a numeral 3. So, you know...

Leo: Okay. We know they're 12-year-old boys now. Go ahead, yeah.

Steve: Yes, Cyber Av3ngers; right.

Leo: Okay.

Steve: And actually, Leo, you're right because a 12-year-old could do this. They gained access to the equipment at water facilities across multiple U.S. states. CISA, the FBI, the NSA, and other agencies say the attacks began around November 22nd.

Leo: Jesus.

Steve: I know, and exploited PLCs, you know, Programmable Logic Controllers that we've spoken of many times in the past, manufactured by the Israeli company Unitronics. The group targeted Unitronics PLCs that were still using the default password "1111."

Leo: Oh. My. Good. Ness. That's absurd. And that's a water supply.

Steve: I know.

Leo: Oh, god.

Steve: It's un-frigging-believable. So last week CISA asked U.S. organizations to please change the default password, enable multifactor authentication, and remove the devices from the Internet. Gee, what a concept. U.S. officials say the Cyber Av3ngers group is affiliated with the IRGC, an Iranian military and intelligence organization. Maybe they're their kids. According to the Shadowserver Foundation, between 500 and 800 Unitronics PLCs are currently exposed to the Internet. And let me just say that you almost certainly have no actual business purpose for connecting a PLC to the Internet. You know, it runs equipment in factories. Well, and obviously water systems. But who needs to hook it to the Internet?

Okay. 66 were identified in Australia, 52 in Singapore, 42 in Switzerland. Those are the top three: Australia, Singapore, Switzerland. 37 are known to be in the United States, and apparently they've all been hacked, and they're all controlling our water supply. Then following up is Estonia and Spain, both with 31, and then it continues to dwindle down the list. Like pretty much everybody has one, you know, every country.

Unlike web servers, PLC systems, as I said, typically have no need to be exposed to the Internet. Doing so, if you actually had to do it, should require jumping through some real hoops. Under no circumstances should a device be produced where it both has a well-known default password, all set to 1111 in the factory, and is also exposing any interface protected by that default password to the Internet. In today's world, designing and selling such systems is really nothing short of irresponsible.

We've talked in the past about countries becoming proactive in scanning their own Internet address space with an eye toward getting ahead of attackers and cleaning up some of these issues. This is the sort of thing that CISA in the U.S. ought to be considering because in CISA, I'm increasingly impressed by them, we finally have a proactively useful cybersecurity entity.

Leo: Yeah, yeah. Chris Krebs was great when he ran it. He's a smart guy.

Steve: Yeah. They're on the ball.

Leo: Yeah. I'm not surprised. So I think Retcon5 in our chatroom said it's simple, you just change it to 2222. And next year, 3333.

Steve: That's good. I mean, at least it would - at least the kiddies...

Leo: Rotating.

Steve: The script kiddies would not be able to get to you. My god. Unbelievable.

Leo: Oh, my god.

Steve: Okay. So a while back we covered the news that a bunch of states were enacting legislation to block the use of TikTok on government devices within their jurisdictions. Doing that was likely within their power. But the state of Montana wanted to go further and - get this, Leo - outright ban all use of the TikTok service statewide. Okay, now, from a purely technical standpoint this would be somewhat tricky, since network boundaries and state borders are not currently aligned - there's never been any need to align them. But now it appears that might not matter, after a recent federal ruling which occurred just last Thursday. NPR's coverage of this also provides some interesting background. So I want to share it.

They wrote: "A federal judge has blocked a law in Montana that sought to ban TikTok across the state, delivering a blow to an unprecedented attempt to completely restrict a single app within a state's borders. The ruling, which came on Thursday, means that Montana's TikTok ban, which was set to go into effect on January 1st, has now been temporarily halted. U.S. District Judge Donald Molloy said Montana's TikTok ban 'oversteps state power' and 'likely violates the First Amendment.'

"Molloy wrote that though officials in Montana have defended the law as an attempt to protect consumers in the state, there is 'little doubt that Montana's legislature and Attorney General were more interested in targeting China's ostensible role in TikTok than with protecting Montana consumers.' Montana as a state does not have authority over foreign affairs, Molloy said, but even still, he found the national security case presented against TikTok unconvincing, writing that, if anything, the Montana law had a 'pervasive undertone of anti-Chinese sentiment.'

"The ruling is preliminary with a final determination to be made following a trial expected sometime next year. TikTok, which has more than 150 million American users, has for years been under intense scrutiny over fears that its Beijing-based parent company, ByteDance, would hand over sensitive user data to Chinese authorities, or that Beijing would use the app as a propaganda tool, even though there is no public proof that either has ever happened.

"Although several states and federal government have prohibited the app from being downloaded on government devices, Montana was the first state to pass an outright ban of the app. Some critics have accused it of government overreach. In May, TikTok sued the state over the law, arguing that it amounts to an illegal suppression of free speech. Lawyers for TikTok argued that the national security threat raised by officials in Montana was never supported by any evidence. Molloy, the judge overseeing the case, was skeptical of the ban in an October hearing on the lawsuit. He pointed out that TikTok users voluntarily provide their personal data, despite state officials suggesting the app was stealing the data of users. He said state officials justified the Montana ban under a 'paternalistic argument.'

"As Washington continues to debate TikTok's future, states have been acting faster, and the law in Montana was considered an important test case of whether a state-level ban of an app would survive court challenges. Backing the Montana law were 18 primarily Republican-led states that were eyeing similar bans of TikTok. Aside from legal hurdles to implementing such laws, cybersecurity experts have raised questions of how, from a technical standpoint, such a ban would even be possible." Right. Count me in that group. Those pesky technical details which keep tripping up the legislators who believe that they can simply have any magical technology that they demand.

And then, anyway, NPR concludes: "President Trump clamped down on TikTok and attempted to outlaw the app, but his efforts were twice struck down in the courts. National security experts say TikTok is caught in the middle of escalating geopolitical tensions between the U.S. and China, as Washington grows ever more concerned about the advancement of Chinese tech, like semiconductors, and the country's investments in artificial intelligence.

"Supporters of restricting or banning TikTok in the U.S. point to Chinese national security laws that compel private companies to turn information over to Beijing authorities. They also point to ByteDance, TikTok's corporate owner. It admitted in December that it had fired four employees, two of whom worked in China, who had improperly accessed data on two journalists in an attempt to identify a company employee who leaked a damaging internal report."

Now, I'll just say that by no means am I defending TikTok. But let's not forget that many domestic companies, as well as many of our own U.S. law enforcement agents - we've covered these issues in the past - have also been caught with their hands in the cookie jar. It appears that access to personal and private data is quite tempting. So it's not just Chinese misbehavior.

Oh, and finally, this is significant, too. TikTok says China-based employees no longer have access to U.S. user data under a new firewall it has put in place with the help of Oracle. With this change, dubbed "Project Texas" after Oracle moved its corporate headquarters to Austin, all Americans' data will be stored on servers owned and maintained by Oracle, with additional oversight from independent auditors. So it seems clear that TikTok is obviously, well, we know that TikTok is obviously an extremely successful and valuable service. It seems to me that they're making every effort to legitimately assuage concerns of secret Chinese influence. And of course today's social media is all about influence. But such influence is as pervasive over at Facebook and X as it is anywhere else.

Leo: That's the problem. It's very selective enforcement. I mean, it's all crap. How do you pick one out? You know, it's all propaganda. It's all lies. And as you and I know very well, if the Chinese government wants information about U.S. citizens, they just go to a data broker. It's cheap. So...

Steve: Right, right. And in fact there are some, oh, it's Senator Ron Wyden is in the...

Leo: Wyden's smart. He's good.

Steve: Yes. He's smart. He's threatened to stop the appointment of somebody, I don't remember whom, until the NSA answers questions...

Leo: Oh, yeah.

Steve: ...about whether it's been purchasing this private data about U.S. citizens.

Leo: They've been really hedging their responses to that question, which tells me of course they are.

Steve: Right. Otherwise they'd just say no.

Leo: And by the way, that's why Congress will never pass a law against data brokers. Because law enforcement and our three-letter agencies are saying you can't do that, we need that information.

Steve: Yeah, yeah.

Leo: Sad.

Steve: Okay. So just a quick note. RCS is now enabled on more than one billion Android devices. We recently noted Apple's announcement that they would be upgrading their non-iMessage messaging, which currently uses SMS and MMS, to RCS. So it was noteworthy that last Thursday Google announced that its RCS messaging system is now enabled on more than one billion Android devices. So it appears that Android users will be ready once Apple joins them with RCS next year. And as I said, I will be quite happy to have a better messaging experience, since right now any members of my little otherwise-iOS group who have an Android phone force the whole group down to SMS. You know, it'd be really nice to have the RCS features, which look to be pretty much at parity, largely, with iMessage.

Leo: Well, and this will cross your desk later today, and I'm sure you'll want to talk about it next week, but Beeper has just announced this week, maybe you heard us talking about it on MacBreak Weekly, this new program Beeper Mini, where basically they jailbroke an iPhone and reverse engineered the protocols used to log into Apple's iMessage servers. So you are now able on an Android device to use Messages legally and be a blue bubble and all of that.

Steve: Now, what do you mean "legally"?

Leo: There is a carve-out in the DMCA apparently, I didn't know this, Jason Snell explained it, that allows you to reverse engineer this particular kind of thing. So it wasn't illegal to reverse engineer it. Furthermore, and this is what I wonder, and I would love to hear your thoughts on, it is speculated that Apple can't stop this without breaking their own authentication servers. So they're kind of over a barrel because if they attempt somehow to prevent this login - now, I don't know that - I wouldn't be surprised if Apple had some secret way of doing this.

Steve: Yeah, I would think you'd have to have like a - you'd have to have a pseudo-iPhone in order to connect to their servers; right? I mean, like, yeah.

Leo: I don't know. It works.

Steve: You're right.

Leo: It's two bucks a month.

Steve: I'm curious.

Leo: What we said is I wouldn't subscribe for a year.

Steve: And did it just happen?

Leo: This morning. Beeper Mini.

Steve: Wow.

Leo: It goes on Androids, gives you full parity. It's basically using iMessage on an Android. Logging in with an Apple account. And it's open source, and they, well, at least the Python part is.

Steve: Oh, logging in with an Apple account.

Leo: Yeah, yeah, yeah. Yeah, you're using your Apple account and going through the servers. The question is, I've got to think Apple has some sort of fingerprint.

Steve: How can they not know it's not an iPhone?

Leo: Right.

Steve: Yeah.

Leo: But I don't know.

Steve: Maybe they just never needed to worry about it because they figured no one can break in. It's proprietary protocol.

Leo: Right. And then furthermore, there's no man in the middle. It is effectively you are using this software like you would use Messages on an iPhone. It's then encrypted, direct to, you know, I mean, it's very interesting. And I, you know, Apple may or may not have a technical ability to defeat it. But if they do, then there's the secondary question of would they, given that it would certainly raise the ire of regulators all over the place because that is certainly an anti-competitive move, to say no, no, you can't...

Steve: Especially if there's a carve-out that says it's possible to reverse engineer the protocol.

Leo: Exactly. So if it's legal, even if Apple could block it, would they is the question. It's a fascinating subject. You'll be talking about this, I'm sure.

Steve: Yeah. So I guess their concern would be that having a non-iPhone endpoint running iMessage protocol is inherently insecure.

Leo: That's right. So that will be, if they do break it, they will say, no, no, we're protecting your security because Android devices are inherently insecure and should not be allowed on our network in this way. That's probably not the case, but that would be their - I guarantee you that'll be the verbiage. Oh, no, we're just protecting the network. It's the same verbiage AT&T used in Carterfone. They said you can't put any non-AT&T devices on our phone network. That would break it.

Steve: Boy, I remember those days. Wow.

Leo: Yeah. You had to rent a phone from Ma Bell. And of course the Carterfone decision, the Supreme Court decision overturned that and changed the world.

Steve: Yeah.

Leo: As always in this business we're in interesting times.

Steve: It also did lower the quality of telephones.

Leo: Well, that's true. It did, didn't it.

Steve: Remember those old AT&T sets, you could run over them with a truck.

Leo: The Western Electric, yeah, they were made like of hard rubber. They were tough.

Steve: Yeah. I think it was Bakelite.

Leo: I think it might have been Bakelite in the early days, for sure.

Steve: Yeah, and then a steel base plate, I mean, they were really built. The 500, I think, was the model number of that classic phone.

Leo: Right. And the mic pickups were carbon.

Steve: Yes, yes.

Leo: And sometimes they get clumpy, so you bang it.

Steve: Yup.

Leo: We sound like two old men.

Steve: Yes, children, back in the day.

Leo: You would bang, if it started to sound bad, you'd bang your phone, and it would fix it. Dad, you're making that up. No, it's true.

Steve: Break loose the carbon granules.

Leo: Right.

Steve: Oh, god. Okay. So much as I'm becoming, as we're going to hear later, increasingly annoyed with the EU over their move to commandeer our web browsers' well-established system of trust, it appears that the EU's European Council and Parliament have reached a useful agreement known as the Cyber Resilience Act. This is a piece of legislation designed to improve the security of smart devices sold within the European Union. The new regulation will take three years to come into effect; but god bless 'em, this is a good thing. This new regulation applies to products ranging from baby monitors and smartwatches to firewalls and routers.

Under the new rules, vendors must establish processes to receive reports about vulnerabilities, and must support products for at least five years. Moreover, products will be required to come with free and automatic security updates as the default option. They must ensure confidentiality using encryption, and vendors must inform authorities of any attacks.

This won't be happening immediately, as I said. The requirements set by the new rules will come into effect three years after the Cyber Resilience Act is formally voted on on the EU Parliament floor. Given the requirements, which will likely require some redesign and new infrastructure to support them, that seems reasonable to me. At least in this regard, the EU is finally leading in the direction we need to be heading, and not backwards, as they are unfortunately with browser security. But so this is really cool. This says that connected consumer devices, three years from the time they sign this into law, which is imminent, will be required to auto-update by default. Which means all of the consumers who have heretofore not been protected and are running routers and firewalls and everything else, baby monitors, with extremely obsolete firmware, will get five years of support, including automatic updates. So, yay.

The forensics industry is getting better at tracking cryptocurrency flows, and cyber insurance firms are being more forthcoming about what they're seeing. So we know more now than we have previously. For example, one of the newer upper echelon ransomware groups we've referred to before is known as Black Basta. This gang is believed to have netted more than $107 million in ransom payments since it first appeared and began operations early last year. Those who watch this space believe that it emerged from the ashes of Conti after Conti shut down.

Okay. Since we're closing out 2023, and it emerged for the first time early last year, that's $107 million in less than two years' time. That $107 million represents payments made by more than 90 - nine zero - victims out of the, get this, 329 organizations known to have been hit by the gang. Okay. So there are 365 and a quarter days in the year. Yet in less than two years, 329 individual organizations were breached by this group. So on average that's about one breach every other day. The largest payment observed was $9 million, while the average ransom payment works out to about $1.2 million - that's the $107 million spread across those 90-plus paying victims. So this is according to joint research published by the blockchain tracking company Elliptic and the cyber insurance provider Corvus Insurance.

Now, unfortunately, what this shows is that there is a great deal of money to be made through cyber extortion, which is really what this all boils down to. And the hostile governments - or in this case government, since this group is known to be operating out of Russia - harboring these criminals are more than happy to turn a blind eye. This means that a great deal of pressure will continue to be placed on the security of our networks and systems.

And unfortunately, as the last few months of many very serious large weaknesses and compromises have continued to show, our networks and systems are not up to the challenge. Years of laxity in the design, operation, configuration, and administration of these systems is catching up with us. We know that thanks to the inherent inertia which works against change, we're not going to fix these endemic problems all at once. But they're never going to get fixed at all if we don't apply constant effort in that direction. And I have some good news to that effect here in a minute.

I did want to mention that Google is offering a new ".meme" top-level domain for anyone who wants to play with meme-related Internet properties, and to observe that it's difficult to keep up with all of the new TLDs which are appearing. And it does feel as though this aspect of the Internet's original design, which is to say the original concept of a hierarchy of DNS domains, anchored by just a few major classifications, is not evolving that well. There are companies that attempt to snatch up their existing dotcom second level domain names in every one of the other top level domains, presumably to preserve their brand and their trademark. But that's certainly not in keeping with the spirit of creating additional DNS hierarchies for future growth. You know, I have no interest in grc.meme, and grc.zip would have caused all kinds of confusion. You know, what is that? GRC's entire website in a ZIP archive? Who knows?

Leo: I think grc.meme might be kind of fun.

Steve: Well, while you were telling us about canary.tools I thought, well, there's a perfect example...

Leo: Yeah, there's a good use, yeah.

Steve: ...of a good use of one of the newer TLDs. I'm sure that canary.com was taken, you know, decades ago, so that wasn't available.

Leo: I use a dot email domain, TLD, for my email.

Steve: Yeah.

Leo: I have a variety of domains that I use for email that are nontraditional TLDs.

Steve: Right.

Leo: You can't get a good dotcom anymore. They're gone.

Steve: It's true.

Leo: Yeah.

Steve: It's true. Okay. Speaking of CISA, last Wednesday they introduced a new series of publications called - oh, be still my heart - "Secure by Design" with its first alert titled "How Software Manufacturers Can Shield Web Management Interfaces From Malicious Cyber Activity." And if I didn't know - as I do - that anyone who's focused on security would naturally come up with the same thoughts, I would think that they'd been listening to this podcast. Get a load of what's in this first document. And it's short.

So CISA writes: "Malicious cyber actors continue to find and exploit vulnerabilities in web management interfaces." Newsflash, right. "In response, software manufacturers continue to ask why customers did not harden their products to avoid such incidents." Like, what do you mean, you left the default set to 1111? That's crazy. Uh-huh.

Leo: Mm-hmm. Who would do that?

Steve: CISA says: "'Secure by design' means that software manufacturers build their products in a way that reasonably protects against malicious cyber actors successfully exploiting vulnerabilities in their products. Baking in this risk mitigation, in turn, reduces the burden of cybersecurity on customers. Exploitation of vulnerabilities in web management interfaces continues to cause significant harm to organizations around the world, but can be avoided at scale. CISA urges software manufacturers to learn from ongoing malicious cyber activity against web management interfaces by reviewing the principles below." And again, they're quoting this podcast recently.

"Principle 1: Take Ownership of Customer Security Outcomes." And actually there's only one principle, that one. "Take Ownership of Customer Security Outcomes." They said: "This principle focuses on key areas where software manufacturers should invest in security: application hardening, application features, and default settings. When designing these areas, software manufacturers should examine the default settings of their products. For instance, if it is a known best practice to shield a system from the public Internet, do not rely on customers to do so." Again, oh, thank you, thank you.

Leo: Yeah, because we know customers aren't going to do it.

Steve: They're not. Apparently they've plugged it in and left it set to 1111.

Leo: Exactly.

Steve: God. They said: "Rather, have the product itself enforce security best practices."

Leo: Yes, yes.

Steve: Examples include, and we have three bullet points: "Disabling the product's web interface by default" - oh, yes, thank god this is in print - "and including a 'loosening guide' that lists the risks, in both technical and non-technical language" - right, make it very simple for Johnny - "that come with making changes to the default configurations. Two, configuring the product so that it does not operate while in a" - oh, look - "does not operate while in a vulnerable state, such as when the product is directly exposed to the Internet. Third, warning the administrator that changing the default behavior may introduce significant risk to the organization." Okay, now, look. Think about that. This is a complete reconception of the way everything is done today. You know, yay for CISA.
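
To make those bullet points concrete, here is a minimal, purely hypothetical Python sketch of the kind of startup logic CISA is describing. The setting names and checks are assumptions for illustration, not anyone's actual product code:

    # Hypothetical sketch of CISA's "secure by default" guidance: the web management
    # interface ships disabled, refuses to run while directly exposed to the Internet,
    # and warns loudly when an administrator deliberately loosens the defaults.
    import ipaddress

    DEFAULTS = {
        "web_admin_enabled": False,      # disabled out of the box
        "allow_public_exposure": False,  # must be deliberately loosened
    }

    def is_public(address: str) -> bool:
        """True if the bind address is globally routable, i.e. Internet-facing."""
        return ipaddress.ip_address(address).is_global

    def start_web_admin(settings: dict, bind_address: str) -> bool:
        enabled = settings.get("web_admin_enabled", DEFAULTS["web_admin_enabled"])
        loosened = settings.get("allow_public_exposure", DEFAULTS["allow_public_exposure"])
        if not enabled:
            print("Web management interface is disabled by default; see the loosening guide.")
            return False
        if is_public(bind_address) and not loosened:
            print("Refusing to start: this interface would be directly exposed to the Internet.")
            return False
        if is_public(bind_address):
            print("WARNING: overriding the default may introduce significant risk.")
        return True   # only now would the real server be started

    start_web_admin({}, "192.168.1.1")                           # refused: disabled by default
    start_web_admin({"web_admin_enabled": True}, "8.8.8.8")      # refused: publicly exposed
    start_web_admin({"web_admin_enabled": True}, "192.168.1.1")  # allowed: internal address only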

Leo: I love this principle. Take Ownership of Customer Security Outcomes. That's exactly right. You know, don't let - it's your job. It's not their job.

Steve: Right. Right. Which is a complete turnabout.

Leo: Yes.

Steve: They said: "Additionally, software manufacturers should conduct field tests to understand how their customers deploy products in their unique environments and whether customers are deploying products in unsafe ways. This practice will help bridge the gap between developer expectations and actual customer usage of the product. Field tests will help identify ways to build the product so customers will securely use it." And finally: "Furthermore, software manufacturers should consistently enforce authentication throughout their product, especially on critical interfaces such as administrator portals."

So, wow. Amen to all of that. Now, as I said, none of these concepts will come as news to the listeners of this podcast, but it would be great if those manufacturers to whom CISA is addressing this alert would immediately take heed. We know it's going to take time for any such changes to work their way through the entire supply chain, from drawing board into final deployment. It would have been nice if we could have started that "Secure by Design" process 10 years ago, but we haven't even fully started it today. The fact that this alert has been published, with what it says, is a very good sign. I suspect that this may be the first step toward beginning to hold the designers of these systems accountable for their default security.

Unfortunately, due to the "hold harmless" nature of software and equipment licensing agreements, accountability, as we've discussed before, is difficult to create. I intensely dislike the idea of having government criminalize insecure design. That's a slippery slope that's not far from what the EU is planning to do with their eIDAS 2.0 web certificate overreach. Legislation and technology rarely make great bedfellows. But one of the ways we've seen government influence things for the better is by using its own purchasing power to create voluntary incentives.

Leo: Mm-hmm.

Steve: With CISA, the U.S. government finally has a highly effective and worthwhile cybersecurity agency. Based upon what CISA just published last Wednesday, that thing I just read, it would not be a stretch to imagine adding exactly those default network behavioral requirements to any future software and equipment purchasing made by state and federal government agencies. That would effect voluntary change overnight. Vendors would be required to legally attest that their equipment abides by this new set of requirements; and if it was later found not to be true, then they could be held liable for damages resulting from the functional out-of-spec behavior of their equipment. And just to be clear, not for bugs in their systems, but for the deliberate design of those systems. As I've repeatedly observed, anybody can make a mistake, but vendors can and should be held responsible for their policies. And design is a policy. So yay to CISA.

Okay. Last one before we take our break and then get into our topic. This one is really interesting. And it led me down a path I didn't expect. While we're on the subject of things governments do, also last Wednesday, France's government announced a near immediate ban - as in 10 days from last Wednesday - on the use of what they called "foreign end-to-end encrypted messaging apps."

Leo: That's so French. That's France. That's France for you.

Steve: So they've banned government officials from using foreign encrypted messaging services including specifically Telegram, Signal, and WhatsApp. Uh-huh. The government is notifying its ministers and their cabinet staff...

Leo: What do they have? What have they got?

Steve: ...that they must uninstall any such applications from their devices by this coming Friday, December 8th. French officials have been told to use the French-developed alternative messenger known as Olvid, O-L-V-I-D. Uh-huh.

Leo: Is no good. If it's not French, it's no good. Must be French.

Steve: Officials cited privacy risks and a need to "advance towards greater French technological sovereignty."

Leo: Mais oui.

Steve: Okay. So what the heck is Olvid? Even though we've never talked about it here, I have to say that it looks pretty good.

Leo: Good.

Steve: It's open source, and it's available for Android, iOS, macOS, and Windows.

Leo: Okay.

Steve: And it's living over on GitHub.

Leo: Oh, well, that's fine.

Steve: Here's how it describes itself: "Olvid is a private and secure end-to-end encrypted messenger. Contrary to most other messaging applications, Olvid does not rely on a central directory to connect users. And there is no user directory. Olvid does not require access to your contacts and can function without any personal information. The absence of directory also prevents unsolicited messages and spam. Because of this, from a security standpoint, Olvid is not 'yet another secure messenger.'"

Leo: It's la French Tech. Wow.

Steve: "Olvid guarantees the total and definitive confidentiality of exchanges, relying solely on the mutual trust of interlocutors. This implies that your privacy does not depend on the integrity of some server."

Leo: Okay.

Steve: "This makes Olvid very different from other messengers that typically rely on some trusted third party, like a centralized database of users or a public blockchain. Note that this doesn't mean that Olvid uses no servers. It does. It means that you do not have to trust them. Your privacy is ensured by cryptographic protocols running on the client-side, on your device. And these protocols assume that the servers were compromised from day one. Even then, your privacy is ensured."

Okay. So this is less looney than it might seem at first, though it does have some feeling of nationalism and protectionism with the French government labeling everything else "foreign" and talking about the need to increase France's technological sovereignty. But that said, Olvid is not some random homegrown messaging app designed by the Prime Minister's nephew.

Leo: That's exactly what it sounds like; right? No, this is safe. My nephew said so.

Steve: So I've not had time to look at it closely, but it looks like the real deal. And the more I look at it, the more I like it. Over on Olvid's website, which is olvid.io, they proudly note that: "Olvid does not require any personal data: no phone number, no email, no name, no surname, no address, no date of birth. No nothing."

Leo: Yeah, that's nice. That's one of the things that bugs me about Signal. I don't like that.

Steve: Yes. I completely agree. "Unlike your previous messenger, Olvid will never request access to your address book." Okay, so those are some compelling features. And under the headline "Compatible with what you already have" they say: "Olvid is available for your macOS and Windows computers, as well as your iPhones, iPads, Android smartphones and tablets. No SIM? No problem. No SIM card required. WiFi is all you need. Since Olvid needs no phone number to work, you can use any of your devices. And they'll stay in sync. Olvid even works in an emulator. Geeks will love it."

Okay. So Olvid uses something known as SAS-based authentication. Of course, Leo, you'd expect them to be sassy, them being French.

Leo: They are French.

Steve: SAS stands for "Short Authenticated Strings." The concept of SAS was produced and formalized in a 311-page PhD thesis by, of course, a French cryptographer, Sylvain Pasini, back in 2009. So here's what Pasini explained in the first two paragraphs of his PhD thesis. He said:

"Our main motivation is to design more user-friendly security protocols. Indeed, if the use of the protocol is tedious, most users will not behave correctly; and, consequently, security issues occur. As an example..."

Leo: You are not behaving correctly. You must go back to the beginning.

Steve: "An example is the actual behavior of a user in front of an SSH certificate validation. While this task is of utmost importance, about 99% of SSH users accept the received certificate without checking it. Designing more user-friendly protocols may be difficult since the security should not decrease at the same time. Interestingly, insecure channels coexist with channels ensuring authentication. In practice, these latters may be used for a string comparison or a string copy, for example, by voice-over-IP spelling. The shorter the authenticated string is, the less human interaction the protocol requires, and the more user-friendly the protocol is. This leads to the notion of SAS-based cryptography, where SAS stands for Short Authenticated String."

Finally: "In the first part of this thesis, we analyze and propose optimal SAS-based message authentication protocols. By using these protocols, we show how to construct optimal SAS-based authenticated key agreements. Such a protocol enables any group of users to agree on a shared secret key. SAS-based cryptography requires no pre-shared key, no trusted third party, and no public-key infrastructure. However, it requires the user to exchange a short SAS, for example, just five decimal digits. By using the just agreed secret key, the group can now achieve a secure communication based on symmetric cryptography." And yes, Leo, five digits is all it takes.

"Since 2009 this SAS proposal first outlined by this guy's PhD, has received a great deal of further scrutiny within the security community, and it has held up 100%. So this works by having the users at each end initially discover each other by sharing the short tokens being displayed on each other's devices."

Leo: Ah. That's a little hardship because I'd have to tell you what that pre-shared key is somehow.

Steve: Yes.

Leo: Over a secondary channel; right?

Steve: Yes, yes. I think that's exactly the case. So for that some form of already authenticated out-of-band channel is used, like an audio or a video call, to exchange the information that each user's device presents. And this simple process has been proven, as I said, to be cryptographically sound. But notice also this eliminates spam completely.
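
To make the SAS idea concrete, here is a minimal Python sketch. It is not Olvid's actual protocol - real SAS schemes also use commitments so an attacker can't grind the short string - just the comparison step described above, with illustrative names and a five-digit code:

    # Minimal sketch of the Short Authenticated String idea: both ends hash the
    # key-agreement transcript down to five decimal digits, and the two humans
    # compare those digits over a channel they already trust (voice, video, in person).
    import hashlib

    def derive_sas(transcript: bytes, digits: int = 5) -> str:
        """Reduce the shared protocol transcript to a short, human-comparable string."""
        h = hashlib.sha256(transcript).digest()
        return str(int.from_bytes(h[:8], "big") % (10 ** digits)).zfill(digits)

    # Both devices observed the same exchange, so they display the same SAS...
    alice_sas = derive_sas(b"alice_pub || bob_pub || session_nonce")
    bob_sas = derive_sas(b"alice_pub || bob_pub || session_nonce")
    print(alice_sas, bob_sas, alice_sas == bob_sas)       # identical five digits

    # ...while a man in the middle who substituted his own key produces a mismatch.
    mallory_sas = derive_sas(b"alice_pub || mallory_pub || session_nonce")
    print(mallory_sas == alice_sas)                       # almost certainly False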

Leo: Right. Good.

Steve: It's over.

Leo: Yeah.

Steve: So I also really like its integration with the desktop. That's something I've been missing, you know, as a cross-platform iOS and Windows user.

Leo: Right.

Steve: And, as you said, Leo, Signal is annoying with its required tie to a phone number. Signal claims that's needed to prevent spam, but with Olvid there's no possibility of being spammed.

Leo: Yeah. People know my phone number. That does not prevent spam at all.

Steve: Right, right. And we also, as we covered - I think it was you, maybe it was Ant - when we talked about the breakdown of Signal's cost structure.

Leo: Yeah, yeah, I was here, yeah.

Steve: It was you before you left.

Leo: Yeah.

Steve: That telephone authentication is a huge percentage of Signal's total annual outlay because verifying those phone numbers is very expensive. So a real-world out-of-band interaction is required to establish a channel between two participants or among the participants in a group. After that, the devices remain linked for further communication.

Okay. So what pays for this? The system runs on a "freemium" model. All bidirectional text messaging and incoming audio, encrypted audio, is free. You get unlimited messages, unlimited attachments, secure group discussions, unsend and edit messages, remote deletion, ephemeral messages, multiple profiles, user mention, markdown, Olvid Web, whatever that is, and inbound secure audio calls. The system is financially supported at a 5-euros-per-month level.

Leo: That's steep.

Steve: Yeah, by those who want to be able to initiate secure voice calls, as well as use Olvid on multiple devices that receive all messages and keep themselves cross-synchronized. So, yeah. So it's only free if you limit yourself to text messages. But for that, it really is free. There are also more powerful enterprise plans with many more features. It's interesting that the French government is telling their ministers and cabinet staff that they must switch to Olvid, since only text messaging is completely free. So one wonders who's going to pay for that.

Leo: They bill it to the French public. So congratulations.

Steve: Well, there is an enterprise classification. So presumably the French government would act as an enterprise and then would make all of their individual government employees subscribers underneath that one umbrella policy.

Leo: If I'm Mr. Olvid, I'm going to give it to them free. This is the best publicity you could ask for.

Steve: I know.

Leo: Right? I mean...

Steve: Yes.

Leo: Yeah.

Steve: Yes. Anyway, so I wanted to make sure that all of our listeners were aware that it existed.

Leo: I'm installing it right now.

Steve: It might suit many people's needs, yes. It's cross-platform, with desktop clients for Mac and Windows. I'm sure it runs under WINE, and there's probably a way to get it running under Linux easily.

Leo: Well, if there's a web version, which there is, you just do it in the web. And that's, yeah, that's straightforward.

Steve: Oh, okay. Olvid.io.

Leo: Yeah. Yeah, I'm installing it right now. Looks good. The one negative is that they give you a backup key because it will do encrypted backups. But you can't cut and paste it, so I have to type in this very long, 32 letter and number backup key. But I'm typing it in right now.

Steve: Or just take a picture of it with your other phone.

Leo: Oh. Aren't you smart. You must be Steve Gibson. I know you. No, this is cool. You know what? My name is Leo Laporte. I should be using Olvid. The problem is, as with all these messaging systems, you have to get other people to use it or, I mean, that's the problem.

Steve: Yup. Yup.

Leo: Yeah. And I don't know anybody who uses Olvid, so so much for that. Mr. Gibson?

Steve: Okay. So we've been covering the news of the now-impending EU eIDAS 2.0 legislation...

Leo: Bleah. Bleah.

Steve: I know, mostly from the standpoint of the two open letters that those in the industry and academia have authored and co-signed. And by "those in the industry and academia" I mean a number now totaling more than 500 individuals who are truly concerned about what the EU is about to unilaterally place into law. Four weeks ago this podcast was titled "Article 45," so I understand that we've already talked about this. But I just encountered, as I mentioned earlier, two new pieces of commentary from two well-placed technologists. So I decided to share their appraisals to create some "what it would really mean to the world" perspective. And there are a couple surprises.

The first person's name is Ivan Ristic. I was immediately curious when I saw that Ivan had chosen to weigh in and address this issue because I know his name well. If Ivan's name doesn't immediately jump out and mean anything to you, you may know his well-known website and service, SSL Labs. For as long as I can remember, Ivan's SSL Labs (ssllabs.com) has been the go-to site for checking the security at both the server and browser ends of secured connections.

Ivan is also the author of two books: "Bulletproof TLS and PKI: Understanding and Deploying SSL/TLS and PKI to Secure Servers and Web Applications." Its first edition was published nearly 10 years ago, in 2014, and the book is now in its second print edition with added coverage of TLS 1.3. It's also available as an eBook. Ivan's second book is the "OpenSSL Cookbook: The Definitive Guide to the Most Useful Command-Line Features." And anybody who's ever looked at OpenSSL knows that a command-line reference would be a good thing to have.

Anyway, that one's in its third edition, also available for free. He and his wife Jelena are based in London. Ivan wrote his piece last Thursday, and if all of the preceding didn't give you the idea, this guy understands authentication and certificates and SSL and TLS, and he's been at this for a long time. His piece is titled "European Union Presses Ahead with Article 45."

So Ivan wrote: "The European Union continues on its path to eIDAS 2.0, which includes the controversial Article 45 that basically tells browsers which certificate authorities to trust. eIDAS, which stands for Electronic Identification and Trust Services, is a framework aimed at regulating electronic transactions. As part of this proposal, the EU wants to support embedding identities in website certificates. In essence, the goal is to bring back Extended Validation certificates. Browsers, of course, don't want that.

"But the real problem is the fact that, with the legal text as it is at the moment, in its near-final form" - and I'll just mention that this is what was signed behind closed doors - "the EU gets the final say in which Certificate Authorities are trusted." I mean, that's the crux of this. And we have a lot more to say about that. But, he says: "The global security community has been fighting against Article 45 for more than two years now. We wrote about it on a couple of occasions. As of November 2023, the European Council and Parliament have reached a provisional agreement. The next step is for the law to be put to the vote, which is usually a formality.

"In November, ahead of the crucial vote, the campaign intensified, with browser providers (Google and Mozilla), civil society groups (EFF) and other companies, and more than 500 security experts voicing their concerns. In the end, it did not help. The bureaucrats drafted the text and voted behind closed doors with little acknowledgement of the protests.

"And therein lies the main problem. The EU doesn't understand the global technical community. Internet standards are developed collaboratively and organically, with careful deliberation of the details. The EU, on the other hand, prefers a top-down approach that ignores the details and apparently involves no debate. They expect everyone to trust that the details will turn out all right. The text voted on was published only after the fact.

"The EU might have the right to govern its territory, but when it comes to these global matters, it also has a duty to respect and compromise with the rest of the world. Above all, care must be taken to separate technology and politics as much as possible. After all, it took the world a very long time to achieve reasonable security of global website authentication. A decade ago, we were witnessing hackers breaking into CAs and government agencies issuing certificates for Google properties. Today, we have much stricter issuance and security standards, and we also have certificate transparency, which provides visibility and auditing. No one knows what's going to happen with that, and the EU doesn't engage.

"Where are we now? The EU wants browsers to display legal identities embedded in the qualified certificates, but it also wants to control who issues them. It so happens that the same certificates are used to store the identities and authenticate websites. It's not at all clear if the EU cares about the latter part. In fact, the following statement appears in the recitals in the provisional agreement: 'The obligation of recognition, interoperability, and support of QWACs is not to affect the freedom of web-browser providers to ensure web security, domain authentication, and the encryption of web traffic in the manner with which the technology they consider most appropriate."

So he writes: "Can browsers recognize and show legal identities from the EU-approved CAs, but continue to require full compliance with current technical standards in order to fully trust qualified certificates? Or can browsers require two certificates, one for TLS and the other for identities, like Mozilla proposed a year ago? We'll need to wait and see." So that's what Ivan wrote, who is way, you know, been around the block and paved a bunch of the block.

Leo: Yeah.

Steve: The second piece, which Ryan Hurst wrote a little over two weeks ago, is titled "eIDAS 2.0 Provisional Agreement: Implications for Web Browsers and Digital Certificate Trust." And here's where some really interesting numbers come up. What Ryan wrote goes further than anything I've seen so far to provide an assortment of interesting facts to clarify the way things are today, and to examine what the EU's proposed changes would mean to the industry and to the world. So he leads with a summary, writing: "This document contains my notes on the problematic elements of the provisional agreement on the EU eIDAS 2.0 legislation reached by EU legislators on November 8th."

So, six main points. First, Mandatory Trust in EU-Approved Certificate Authorities: Browsers will be required to trust certificate authorities approved by each EU member state. This could lead to scenarios where a government forces the trust of CAs that put global users at risk. Second, Lower Standards for EU-Approved Certificate Authorities: This establishes a lower standard for European CAs, limiting the browsers' ability to protect users from underperforming EU certificate authorities. Third, EU to Override Browser CA Trust Decisions: In cases where an EU investigation does not lead to the withdrawal of a certificate's qualified status, the EU can request that browsers end their precautionary measures, forcing them to trust the associated CA. And, like, why would that be in there? I mean, that's just, like, asking for a fight.

"Number four establishes global precedent for further undermining encryption on the web. When a liberal democracy establishes this kind of control over technology on the web, despite its consequences, it lays the groundwork for more authoritarian governments to follow suit with impunity. Next, browsers are forced to promote legal identity for authentication of websites. Browsers will be required to have a user interface to support the display of legal identity associated with a website, potentially reversing previous design choices made based on user behavior and research." And I'll come back to this point later, but what gives the EU any authority over the design of third-party browsers over which, you know, they have no say?

And lastly, "The Inconsistencies of Recitals with the Substantive Legal Text. The recitals in the legal text have ambiguities and contradictions which will cause long-term negative consequences for the web." In other words, the recitals were put in in order to try to soften what the legal tech says. But of course the legal text is what's binding.

Okay. So Ryan explains: "The text says browsers must either directly or indirectly take a dependency on the EU Trust List to determine if a CA is trusted for website authentication. This is a list of CAs as determined by each member state to be in conformance with the legal obligations under eIDAS." He says: "To put this into context, based on the currently authorized organizations on this list, we can expect to see 43 new organizations added to both the Mozilla and Chrome Root Stores. This is just a number, though. Let's give it a little color."

And Leo, there's a chart here at the bottom of conveniently numbered page 13. He says: "Today there are seven organizations in the Web PKI that are responsible for 99% of all certificate issuance." That's astonishing. Once again, let me say that: seven organizations, seven certificate authorities, seven certificate signers. Those seven are collectively responsible for 99, actually I think it's 99.36, if I recall, percent of all certificate issuance. So this chart has this big, huge, blue region.

Leo: Yeah. Who's Internet Security Research Group? That's almost half. Who is that?

Steve: Uh-huh. And that's Let's Encrypt.

Leo: Oh, I love you, Let's Encrypt. Good for you. Wow. That's great.

Steve: Isn't that astonishing?

Leo: Oh, my gosh.

Steve: Let's Encrypt has 46.52% of all currently non-expired web certificates in circulation.

Leo: That's really awesome. That's what I use for my website, yeah. Love it.

Steve: Well, I'm still with number two, but number two has about half of that, and that's of course DigiCert.

Leo: Yeah. They're very good, yeah.

Steve: They're my favorite. You know, they are still my CA.

Leo: They're expensive. They're not cheap. They're more expensive than others, yeah.

Steve: That's true. Though what you get in turn is a higher level of assurance.

Leo: Right.

Steve: Inherently, Let's Encrypt is only a domain validator. That's all it's able to do.

Leo: Right.

Steve: Although it is able to do that for free. And, as we can see from this pie chart, that's what half of the Internet is using today.

Leo: Wow.

Steve: 46.52%. So what astonishes me, though, is that - so we have DigiCert at 22.19, Sectigo at about half of that, 11.89, and Google Trust Services at 8.88.

Leo: Hmm. Surprising, actually.

Steve: Followed by GoDaddy...

Leo: Yeah, that's who we use.

Steve: ...at 5.77.

Leo: Yeah.

Steve: Yeah. Microsoft Corp. has 3.45, and then IdenTrust Commercial Root CA is down at 0.63. So if you sum all of those, those are the top seven, and you can almost argue that you don't need that last 0.63. But if you include it, those seven CAs alone give you coverage of 99.32%, which says you're only missing a fraction of one percent across all the others. The hundreds of other CAs, Leo, collectively account for less than one percent. So this should bring everyone up short. This means that the industry, in the guise of the CA/Browser Forum, has been incredibly permissive about extending our global browser trust to organizations that we really have very little actual need to trust. Yet today we're inherently trusting the signatures of certificate authorities whose certificates most of us are never going to see.

And of course the great controversy is that any of them could sign the certificate for any domain they wanted to, and a browser would trust it because we trust anything that any of them sign. So you know, that suggests to me that we're going in the wrong direction here. Even the idea of adding any more, it's like, what? No. We can survive with seven.
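For the record, here is the quick arithmetic on the shares as quoted above; the tiny difference from the chart's stated 99.32% is just rounding in the individual figures.

```python
# Summing the quoted market shares of the top seven CAs.
shares = {
    "ISRG (Let's Encrypt)": 46.52,
    "DigiCert": 22.19,
    "Sectigo": 11.89,
    "Google Trust Services": 8.88,
    "GoDaddy": 5.77,
    "Microsoft Corp.": 3.45,
    "IdenTrust": 0.63,
}
top_seven = sum(shares.values())
print(f"Top seven combined: {top_seven:.2f}%")   # roughly 99.3%
print(f"All other CAs:      {100 - top_seven:.2f}%")
```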

Leo: Yeah. Who's IdenTrust? I don't...

Steve: That's a good question. I don't know whose certificates they're signing.

Leo: They're small.

Steve: Yeah.

Leo: And why isn't VeriSign on this list?

Steve: Yeah.

Leo: That's surprising.

Steve: Good question.

Leo: Yeah. I love it that Let's Encrypt is so dominant. I mean, is that okay? People who come to our site do not look at the certificate and say, oh, it's GoDaddy, not DigiCert. And by the way, that saved us hundreds of dollars. I mean, so, yeah, DigiCert is the gold standard. But I don't think our users really care.

Steve: Yeah. And frankly, if Let's Encrypt had existed 10 years ago, the pie chart would not look like this.

Leo: I agree. It's going to be 99% soon enough.

Steve: And in 10 years it won't look like this, yes.

Leo: Right, right.

Steve: So, okay. Here's what Ryan has to say about this. He writes: "There are between 75 and 85 organizations in the various root programs constituting the entire Web PKI that can issue certificates for the entire web. If we use the higher estimate of 85" - okay, now, that 85 is these seven plus all the rest, so the other 78 have just microscopic shares; right? He says: "If we use the higher estimate of 85" - and the reason it's an estimate, between 75 and 85, is that those last 10 or so are the CAs that, you know, signed monkeymoose and no one noticed, and their certificates expired. You know, I mean, it's just like...

Leo: Monkeymoose. I want that one. Good one.

Steve: So, and by the time this podcast is over somebody will have registered it. So he says: "If we use the higher estimate of 85, the addition of the 43 new organizations from the EU's trust lists represents an increase of over 50% in the number of organizations trusted to issue certificates for the entire web." And they don't have to abide by anybody's rules. The EU says you must trust these.

Ryan says: "Why is all this significant? While it's true that there are numerous CAs in the Web PKI beyond the seven mission-critical ones, each additional CA represents an increased surface area for all users of the web. The 'long-tail' CAs, those lesser-relied-upon entities, are part of the Web PKI because they ostensibly meet the same objective technical and procedural standards as their more prominent counterparts."

Okay. So in other words, we all trust all of those essentially unneeded CAs because the way the system has evolved, it would be considered rude not to give anyone the benefit of the doubt and trust their work signing certificates until and unless they give the world reasonable cause not to. However, it's also likely that with the ISRG's Let's Encrypt having changed the rules, no one in their right mind today would attempt to establish a new commercial Certificate Authority. That would be nuts. Given today's startling distribution of signed web certificates, my feeling is that we ought to be running in the exact opposite direction than what the EU proposes. If IdenTrust at 0.63% was also eliminated, presumably also with minimal impact, we could reduce CA trust to just six well-proven certificate authorities. That sure seems like the future as opposed to adding 43 new and highly political trust roots.
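As a thought experiment, here is what "trust only a short list of roots" looks like for a single TLS client, using nothing but Python's standard library. The file name my_six_roots.pem is hypothetical; it would contain only the handful of root certificates you had chosen to keep. This is an illustration of the idea, not how any browser actually manages its root store.

```python
# Sketch: a TLS client that trusts ONLY the roots in a handpicked bundle,
# ignoring the operating system's much larger root store.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)   # hostname check + cert required by default
ctx.load_verify_locations(cafile="my_six_roots.pem")  # hypothetical six-root bundle

host = "www.example.com"
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # The handshake succeeds only if the server's chain terminates in
        # one of the roots contained in my_six_roots.pem.
        print("Handshake succeeded over", tls.version())
```

Given the numbers above, a bundle holding just the top six or seven roots would let a client like this reach something on the order of 99% of the sites on the web.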

So Ryan continues, writing: "Web browsers set these standards to participate in their programs, striving for objectivity, openness, and consistency. This approach not only keeps the web open and fosters the development of sovereign digital capabilities in various countries, but also involves a balancing act, mitigating the risks associated with the expanded attack surface that each new CA introduces. What the EU proposes tips the scales of this system by lowering the bar for European CAs, allowing them to meet a lower standard while at the same time putting these governments in charge of which CAs meet the bar."

Leo: Ugh.

Steve: I know.

Leo: Such a terrible idea.

Steve: It is. I just can't - we can't allow this to happen. He says: "To put this in context, consider this case in 2013 where a French agency that was allowed into the Web PKI was caught minting SSL certificates that impersonated major sites like Google. Putting 27 governments in a position to add more CAs that are trusted by the world means they can do this at scale if they decide to do so."

Leo: Oh, my god.

Steve: "Or an attacker uses these governments' ability to do so for their own benefit."

Leo: That's a good point. That's a good point. If an attacker gets in and gets the root certificate, all hell breaks loose.

Steve: Yup. Overnight. It's worth noting that the browsers distrusted this CA when it did this. In this new world that won't be possible.

Leo: Wow.

Steve: "Now consider for a moment that there are 195 sovereign nations in this world, for now. The 27 member states of the EU will be the only countries in the world with the ability to force browsers to trust arbitrary CAs like this or to add their pet features. How long," Ryan asks, "do we think that will last if browsers become compliant with this new legislation?"

He linked to an article which appeared in Ars Technica 10 years ago, in 2013, covering the incident I read about above. I was pleased to see that this podcast had covered every one of the incidents it cited. And I'll just skim over it here because, for example, there was Trustwave, which was discussed, and also - remember this - the Netherlands-based DigiNotar.

Leo: Diginotar. I do.

Steve: I know you'll remember Diginotar.

Leo: Yeah, yeah.

Steve: So there have been several other instances, in addition to this French cyber defense agency, of CAs being caught in the past. And the good news is browsers are immediately able to blacklist the hashes of those known fraudulent certs and then yank the trust from the root. This EU legislation prevents that. I mean, it's hard to believe, but it does.

He says, okay, so - and of course now we know that we only really need to trust six or seven CAs to obtain trust coverage of 99.32% of the entire web. So next Ryan makes the point that the EU's legislation requires browsers to have a user interface to support the display of the legal identity associated with a website. He writes: "Extended Validation certificates, once used by about 9 to 10% of websites, now represent only about 3.8% of all certificates on the web." And one wonders, once they expire, whether they'll be renewed as EV, because as we know browsers no longer show anything special. "Web PKI CAs originally marketed these tools for increasing conversion rates, among other supposed benefits," meaning more consumer belief in the value of the site. "But there was never any data supporting these claims. Over time, it became apparent that they provided little to no value, and in some cases even harmed users. A notable example of the confusion arising from this paradigm is a case where a security researcher demonstrated the ability to quickly and inexpensively create a legitimate company with a name conflicting with a well-known organization, without needing to reveal the identity."

Okay. So for those interested, the researcher was a guy named Ian Carroll. Ian filed the necessary paperwork to incorporate a business called Stripe, Inc.

Leo: Oh.

Steve: And he did this in a different state than where the actual Stripe, Inc. was incorporated, which is perfectly legal.

Leo: Is legal, legal, yeah.

Steve: He then used that legal entity to apply for and receive an EV certificate to authenticate the website stripe.ian.sh. Of course Ian was unable to get stripe.com because the real Stripe owned that domain. But creating a "stripe" subdomain under his own ian.sh domain was sufficient, because at the height of EV certificate usage, the domain's EV certificate details would be shown instead of the messy URL itself.

Leo: Oh, my god.

Steve: So what visitors to Ian's demo site saw was simply "Stripe, Inc."

Leo: Oh, boy.

Steve: Yep. And I'll note that this followed three months after a different researcher named James Burton established a valid business entity, "Verified Identity," to demonstrate how the resulting EV certificate might be used to add an air of authenticity to a scam site.

Leo: So you go to the scam site, and it says "Identity Verified."

Steve: Oh, that's right.

Leo: Must be legit.

Steve: That's right.

Leo: That's clever.

Steve: Yes. The bottom line was that since typical users don't actually have any idea what's going on, all of this extra specialness was abandoned. Or, as Ryan puts it: "This incident, along with several others and research based on large-scale analysis of user reliance on browser trust indicators, led to the de-emphasis of all these affordances in the browser UI. The previous UI, which highlighted this information, was redesigned and demoted in the visual hierarchy, setting it on a path for a likely eventual removal as a result."

Which is where we are today. But the legislation that is poised to become law in the EU, after several years of the industry warning against all of this in the strongest possible terms, requires browsers - literally dictates the design of browsers - to bring this back, and allows the EU to add its own identity assertions to the browser's location bar display.

Leo: Ugh. This is horrible.

Steve: Leo, it's unbelievable.

Leo: It's breaking security.

Steve: Yes, it is. Another point of serious concern is that the EU's forthcoming legislation explicitly and deliberately limits the ability of browsers to protect users from poor-performing EU certificate authorities.

Leo: You mean they're forced to accept the certificate, even if they know these guys are bozos.

Steve: Correct. Correct. It is no exaggeration to say that we depend upon our browsers to have our backs in countless ways. Ryan writes this: "Today, CAs are removed as trusted for a vast range of reasons. For example, last year" - and I checked this out, it was in 2022 - "a Turkish Certificate Authority, e-Tugra, demonstrated they lacked the most basic security practices and could not effectively respond to a security incident and were distrusted as a result. Not due to having made any mistake, but because their service was clearly shown to be unconscionably insecure. When I dug into this, Leo, their web portal had never changed the default admin login credentials."

Leo: Oh, admin/admin.

Steve: Of a certificate authority.

Leo: But that's why the system works because the browsers then say, yeah, we're revoking that CA.

Steve: Yes. Yes. Ryan writes: "Under this new legislation, browsers will no longer have the ability to distrust European CAs that are trusted for QWACs except for 'breaches' and 'loss of integrity of an identified certificate,' whatever that means. Each of the CAs trusted within the Web PKI represents a risk to users. This is why it is so important that browsers, acting as the agents of their users, are empowered to establish uniform criteria to ensure all the CAs meet minimum best practices and have the ability to remove them when those minimum best practices are not met. The text reduces the cases substantially in which they may do that." And he says: "Unfortunately, it gets worse."

He then cites some legislation that will take effect. And so the legislation reads: "Web browsers may take precautionary measures related to a certificate or set of certificates in case of substantiated concerns related to breaches of security or loss of integrity of an identified certificate. When such measures are taken, the browsers must notify their concerns in writing without undue delay."

Leo: With a quill pen.

Steve: Oh, my god, yes.

Leo: On a piece of parchment.

Steve: "Along with a description of the measures taken to mitigate those concerns. This notification should be made to the Commission, the competent supervisory authority, the entity to whom the certificate was issued, and the qualified trust service provider that issued the certificate or set of certificates. Upon receipt of such notification, the competent supervisory authority is expected to issue an acknowledgement of receipt to the web browser in question."

Leo: We have received your missive and shall respond...

Steve: Oh, my gosh. Did anyone ever see, what was it, "Brazil"?

Leo: Yes.

Steve: I think it was the movie...

Leo: Yes, it's totally "Brazil," yes.

Steve: Oh, my god.

Leo: The bureaucracy. Submit your form in triplicate. B/7935. Wow.

Steve: So anyway, the legislation contains language such as "shall not be subject to any mandatory requirements other than the requirements laid down earlier and shall not take any measure contrary to their obligations set out in Article 45," referring to the browsers. Again, the EU is flatly asserting absolute authority over the trust that browsers will place in any certificates issued by their member states. And the text also says that even in the cases where there are "breaches of security or loss of integrity of an identified certificate," the EU can override the browsers and force them to trust the associated CA anyway.

"When the outcome of an investigation does not result in the withdrawal of the qualified status of the certificate or certificates, the supervisory authority shall inform the web browser accordingly and request it to put an end to the precautionary measures referred to." In other words, there is no other way to look at this, Leo, other than that they are absolutely getting into business they have no business getting into.

Leo: Shocking.

Steve: The more time I've spent looking into this, the worse it seems. The world has spent a great deal of time slowly and carefully evolving an equitable system of trust. And now, for essentially commercial, like ego reasons, to force the display of website digital identity through the equivalent of their own system of EV certs, this legislation would force all web browsers to accept root certificates from every EU member state, which would then use them to assert the identity of anything they choose. And there's nothing any browser can do about it.

What I'm most wondering now is what gives the EU the right to dictate the operation of our web browsers? To me, this seems like uncharted waters. Users currently have some say over the certificates which populate their root stores. If they wish to remove trust from some certificate authority, nothing prevents them (us) from doing so. But the EU is stating that browsers will be required to honor these new, unproven, and untested certificate authorities and thus any certificates they issue, without exception and without recourse. Does that mean that my instance of Firefox will be legally bound to refuse my attempt to remove those certificates?

If the EU wants to create their own "EU Browser" based upon a fork of Chromium, embellish it with their own certificates and a user interface display of whatever those certificates wish to assert, then require that their own citizens use it, the only people who would have any problem with that would be their own citizens, who could then decide whether they want to keep those legislators in office. To me, that appears to be the only feasible course of action.

What's completely unclear, and what I haven't encountered anywhere, is an explanation of the authority by which the EU imagines it's able to dictate the design of other organizations' software. Because that's what this comes down to. The UK tried to do this, as we know, with end-to-end encryption. Every last publisher of that technology said no, and the UK blinked.

Edge and Chrome on Windows obtain their root stores from Windows. So the EU is telling Microsoft that they must add and unilaterally trust 43 new root certificates to their operating system's root? And what about Linux? Who's going to make Linux do this? Good luck sneaking this past Linus! That's never going to happen.

Leo: Yeah, because the OS stores them; in fact, most of the time your OS is the root store.

Steve: Firefox is the only exception.

Leo: Yeah.
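For anyone curious what "the OS is the root store" looks like in practice, Python's standard library can read the Windows certificate stores directly. This is a Windows-only sketch; ssl.enum_certificates() does not exist on other platforms.

```python
# Windows-only: count the roots in the operating system's "ROOT" store,
# the same store that Edge and Chrome on Windows consult for trust.
import ssl

roots = [cert for cert, encoding, trust in ssl.enum_certificates("ROOT")
         if encoding == "x509_asn"]          # skip PKCS#7 blobs
print(f"Windows is currently trusting {len(roots)} root certificates")
```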

Steve: And in a weird thing that you missed last week, Leo, Firefox 120 just added a new option to include the Windows root certificates into its own root store.

Leo: But that's optional; right?

Steve: And that's optional, yes. It can be turned off. And of course there's the very real specter of what other doors this opens. If the EU shows the rest of the world that it can successfully dictate the terms of trust for the independent web browsers used by its citizens, what other countries will follow with similar laws? Now everyone gets to simply require that their own country's certificates get added? This takes us in exactly the wrong direction. That's crazy. None of this is good.

And it's not even as if there's some actual problem that needs to be solved here, Leo. Like someone just invented this over in the EU. It's just crazy. The more I think about this, the more I like the idea of disabling Firefox's newly added "trust the certs in the underlying operating system's root store" option, then pruning all but six of Firefox's current root certificates. That trusts 99% of the Internet's certificates, and likely 100% of any certificates that I would choose to trust.

Leo: Well, when are we going to get the Steve Gibson Prune Your CAs app? Because is it - how easy would it be to prune out all the CAs?

Steve: Definitely doable.

Leo: You could list them and have checkboxes next to them.

Steve: Yes.

Leo: You could have your recommended six.

Steve: Yup.

Leo: Press a button.

Steve: The good news is this is generic enough that somebody will do it by the next podcast.

Leo: Yeah. If they weren't all doing Advent of Code right now, maybe they would. Interesting.

Steve: Wow. Did you miss - is that the week you missed?

Leo: I missed it.

Steve: Aww.

Leo: I mean, it's not over. It's all month. But if you don't start on day one...

Steve: Yeah, it ramps up.

Leo: Yeah.

Steve: So you're just going to skip it this year.

Leo: Well, I'm not going to do it in real time. I'll probably get around to doing it. I'm just - I'm not, honestly, I'm not really in the mood to sit down and...

Steve: You're altered. Yes, you have an altered state of...

Leo: Altered state, yeah.

Steve: Yeah.

Leo: Basically I want to go out and hug a tree, and that's it. So, wow. This is hair on fire bad. I'm very glad you covered it.

Steve: Yeah, it's astonishing.

Leo: But the funny thing is, it's trivial to prune these out.

Steve: Yes.

Leo: It's not illegal even in the EU; right?

Steve: Probably not. As we know, the problem is all the people who won't prune it. I mean, that's who the browsers are worried about; you know, they're wanting to protect everybody.

Leo: Normal people won't. Only the listeners of this show will know what to do.

Steve: Yeah. I mean, so can the EU outlaw the use of a browser unless it complies? I mean, like, by what authority could they dictate the operation of software they don't control?

Leo: That's interesting. So if Firefox says, and I hope they do, if the Mozilla group says no, we're not going to do it, then what? Can they ban it EU wide? No. Maybe they could. They could fine them, I guess. You know, the EU's done some good things, I have to say.

Steve: Yeah.

Leo: Absent a U.S. government that's willing to protect our privacy, it's good that the EU is.

Steve: Yup. Yup. GDPR does have some privacy-centric things.

Leo: They've done some dopey stuff. This cookie thing just drives me crazy.

Steve: Oh, my god.

Leo: It's so...

Steve: I know. I know.

Leo: It's so meaningless. And how many billions of human hours are lost clicking that thing?

Steve: Yup.

Leo: And now this. I don't know. I mean, what's worse? Somebody who doesn't, who is just like laissez-faire, like the U.S. government is, do whatever you want? Or somebody who does some good things and some bad things? Thank you, Steve. Have a wonderful week. We'll see you all next time.

Steve: Okay, buddy.

Leo: On Security Now!.

Steve: Bye.


Copyright (c) 2014 by Steve Gibson and Leo Laporte. SOME RIGHTS RESERVED

This work is licensed for the good of the Internet Community under the
Creative Commons License v2.5. See the following Web page for details:
http://creativecommons.org/licenses/by-nc-sa/2.5/


