GIBSON RESEARCH CORPORATION https://www.GRC.com/ SERIES: Security Now! EPISODE: #950 DATE: November 28, 2023 TITLE: Leo Turns 67 HOSTS: Steve Gibson & Leo Laporte SOURCE: https://media.grc.com/sn/sn-950.mp3 ARCHIVE: https://www.grc.com/securitynow.htm DESCRIPTION: Since last week's podcast was titled "Ethernet Turned 50," it only seemed right to title this one "Leo Turns 67." I'll have more to say about that at the end. Until then, Ant and I will examine the answers to various interesting questions, including: How many of us still have Adobe Flash Player lurking in our machines? What can you do if you lose your VeraCrypt password? Firefox is now at release 120; what did it add? What just happened to give Do Not Track new hope? Why might you need to rename your "ownCloud" to "PwnCloud"? How might using the CrushFTP enterprise suite crush your spirits? Just how safe is biometric fingerprint authentication? How's that going with Apache's MQ vulnerability, and have you locked your credit bureau access yet? Should Passkeys be stored alongside regular passwords? What's the best way to prevent techie youngsters from accessing the Internet, and is that even possible? What could possibly go wrong with a camera that digitally authenticates and signs its photos? Could we just remove the EU's unwanted country certificates if that happens? What's the best domain registrar, and what was Apple's true motivation for announcing RCS messaging for their iProducts? SHOW TEASE: Hey, it's time for Security Now!. I am Ant Pruitt. I'm not Leo Laporte. He is out and about, having some interesting birthday celebrations. We'll leave it at that. But I'm joined this week by Mr. Security, Steve Gibson himself. And today we're going to talk about what's going on with CrushFTP, the enterprise FTP package. There's also some interesting stuff going on with the fingerprint scanners and biometrics on your laptops and devices that are out there. Are you really secure? Hmm. Some think that they are. 
Maybe they're not. Then also there's some big news happening again with all of these credit agencies out there. TransUnion and Experian hacked yet again. Good grief. All of that and more is coming up here on Security Now!. Y'all stay tuned. ANT PRUITT: This is Security Now!, Episode 950, recorded Tuesday, November 28th, 2023: Leo Turns 67. Hey, what's going on, everybody? I am Ant Pruitt, and this is Security Now! here on TWiT.tv. Yes, I said I am Ant Pruitt. No, I am not Leo Laporte. That man is somewhere way off in the distance with no telephones, no computers, no tech at all. So he's probably pretty frustrated right now. I'm kidding. No, it's all for his own good, having himself a nice little retreat this week. So I am sitting in for him and going to sit down with Mr. Steve Gibson as we get into a great episode of Security Now! and get into all of the ins and outs and questionable things happening in the world of cybersecurity. How you doing, Mr. Gibson? STEVE GIBSON: Hey, Ant. It's great to be with you this week. There's one other thing that Leo may be without, and that would be a birthday party. Since last week's podcast was titled "Ethernet Turned 50," and since no other major topic grabbed the headline for today, I thought it only seemed right to title this one "Leo Turns 67." ANT: I dig it. STEVE: Since that will be his age tomorrow. Now, wherever it is, whatever cave he's in somewhere, maybe they'll have a little birthday party for him there. But apparently, you know, if he really is off sequestered somewhere, it won't be happening with his normal family. They'll have to defer, I presume, until he gets back. ANT: Right. STEVE: But you and I are going to examine the various answers to some interesting questions, including how many of us still have Adobe Flash Player lurking in our machines? What can you do if you lose your VeraCrypt password? Firefox is now at release 120. What did it just add? What just happened to give Do Not Track new hope? 
Why might you need to rename your ownCloud to PwnCloud? How might using the CrushFTP enterprise suite crush your spirits? Just how safe is biometric fingerprint authentication? How is that going with Apache's MQ vulnerability? And have you locked your credit bureau access yet? Should Passkeys be stored alongside regular passwords, or kept somewhere else? What's the best way to prevent techie youngsters from accessing the Internet, and is that even possible? What could possibly go wrong with a camera that digitally authenticates and signs its photos? Could we just remove the EU's unwanted country certificates if that happens? What's the best domain registrar? And what was Apple's true motivation for announcing RCS messaging for their iProducts? We're going to cover a lot of ground today. ANT: Oh, my. I mean, you have a lot of questions there, sir. STEVE: I am so full of questions, I don't know what's going on. ANT: You've got a lot of questions. I hope you have a lot of answers because that's some pretty interesting topics there. And some of it's probably going to provoke some people and anger some people because I'm looking at one of those issues, especially regarding youngsters. STEVE: That I want to talk to you about because you're a dad, yup. ANT: Yeah. I have thoughts. And, yeah. We'll get into that. So Mr. Gibson, I love how you always start the show out here each and every week with some funny little things that we get from our awesome listeners. We have a bunch of them hanging out here in our Club TWiT Discord. Thank you all for being here, you members. But what's the first thing you've got to share with us today, sir? STEVE: Our Pictures of the Week, the original concept was to do something security related, you know, something, you know, techie and security related. What's happened, however, is that we've, I guess mostly because I was so fascinated by the idea of these gates, like a locked gate out in the middle of a field somewhere. It's like, what? 
ANT: Why? STEVE: So, you know, then we sort of wandered far afield. Anyway, this one was a great one, and I gave it the caption "Sometimes ya gotta love humanity." So this is at Euston, E-U-S-T-O-N, Railway Station in London. And we have a yellow Do Not Enter tape stretched across an escalator which is not functioning. What's so beautiful is the sign that they put up in front of this. It reads: "This escalator is refusing to escalate." Okay, that's wonderful. And it says: "This has been escalated to the engineer who is on their way up (or down) to check it out. Please use the lift." So anyway, just props to people. "This escalator is refusing to escalate." Indeed. ANT: It's better than just saying "Out of Order." Got to give them that. STEVE: Yeah, exactly. You know, it's perfect. Okay. So speaking of out of order, or maybe in order, I'm just - this came about because I was still stuck last week on the task of performing unattended server-side Microsoft Authenticode code signing. I am managing to inch forward with that challenge, and I've already made one very useful breakthrough, which was to figure out how to programmatically unlock a PIN-protected hardware token whose key is stored in one of the new style, they call it a KSP, a key storage provider, you know, HSM hardware dongle. And I do look forward, since nobody's been able to do this, I look forward to sharing that with the open source community as soon as I come up for air. Okay. So while working on this, last week I discovered an amazing piece of free technology that I would have gladly paid hundreds of dollars for. It's simply called "API Monitor." I have a link to it here at the top, here in the show notes. It was once a commercial product; but it went free about 10 years ago, likely because, incredible as this thing is, it's only going to appeal to a relatively small audience. Ant, I love you, but I doubt that you need an API monitor to track down the intermodule API linkage calls in Windows apps. 
ANT: You might be correct, sir. STEVE: Okay. However, when you need one, oh, boy. So it's going to have a relatively small audience. They probably didn't sell many copies. But if this thing is what you need, there's nothing else like it. I would send the guy a donation, I've tried to write to him, I don't know what happened to him, you know, he's like nominally around, but he hasn't replied. There's not even a donation button on his site. Due to the incredible lack of documentation on Microsoft's next-generation cryptography APIs - they literally call it CNG for Cryptography Next-Generation - I've been reduced to doing a bit of reverse engineering. This API Monitor facilitates the creation and exploration of a detailed interactive log of all Windows module API calls. Now, being an OCD perfectionist myself, I'll admit it, I'm not really impressed - I'm sorry, I meant to say I'm not readily impressed with other things that I see. But this thing is truly incredible. I won't spend any more of everyone's time raving about it. You know, this is the sort of thing that, if you might find it useful, you already know enough to go grab it. It is one of the most utterly stunning pieces of work I've encountered in years. Okay. Here's why I'm talking about it. ANT: Okay. STEVE: Other than just to give its gifted author more well-deserved public praise. One of its process capture modes allows it to be triggered upon any new Windows process that starts up, like in the background. Or in the foreground for that matter. But in this case it was the background. While I was coming up to speed on it on how this thing works - because it's covered with buttons and options and stuff. It also has tutorials. I had not disabled, because it's enabled by default, the "trace starting processes" option, which I did not need. So I kept seeing pop-ups which I would dismiss. And it's not unusual for Windows, which is very busy in the background, to be just sort of autonomously doing stuff for you. 
But one in particular caught my eye. It kept popping up, Adobe's Flash Player Updater. So while this thing was running, this API monitor, it would pop up a dialog saying, "Adobe's Flash Player Updater," you know, "has launched. Would you like to trace it?" And it's like, well, first of all, no. But, like, why is Adobe's Flash Player Updater trying to launch? So the first few times, because I was busy doing what I was doing, I just dismissed it. I said no, I don't want that. But finally I thought, okay, wait a minute. What is going on? Okay. So in my machine Adobe Flash Player Updater was still alive. The problem with leaving something like this attempting to run in the background is I don't know what URL this thing was constantly querying. Hopefully it was a subdomain of Adobe, and not some separate domain from Flash Player's legacy Macromedia days that might not be renewed. But if whatever domain the updater was querying were ever to become available for any reason, a bazillion PCs around the world, apparently like mine, would be querying it for an update. Now, hopefully, Adobe also did the right thing and had any updates digitally signed with a "pinned" certificate, so that the updater would only accept updated code that had been signed with an absolutely specific Adobe certificate. ANT: Okay. STEVE: That would, on its face, prevent a malicious actor from injecting their code into these bazillion systems around the world that are all still apparently attempting to update their long-since-retired copy of Adobe Flash. Okay. So here's what Adobe had to say about their retirement of Flash Player. They said: "Since Adobe no longer supports Flash Player after December 31st, 2020, and blocked Flash content from running in Flash Player beginning January 12, 2021, Adobe strongly recommends all users immediately uninstall Flash Player to help protect their systems." Okay. 
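The "pinned" certificate idea Steve describes boils down to comparing the update's signing certificate against one hard-coded fingerprint, rather than trusting anything the OS root store would accept. A minimal sketch in Python, assuming the updater has the DER-encoded signing certificate in hand; the fingerprint below is a placeholder (it's the SHA-256 of the string "test"), not any real Adobe pin:

```python
import hashlib

# SHA-256 fingerprint of the ONE certificate the updater will accept.
# Placeholder value (the SHA-256 of b"test"), not a real Adobe pin.
PINNED_FINGERPRINT = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def is_update_trusted(signing_cert_der: bytes) -> bool:
    """Accept an update only if its signing certificate hashes to the pin exactly."""
    fingerprint = hashlib.sha256(signing_cert_der).hexdigest()
    return fingerprint == PINNED_FINGERPRINT
```

With a pin like this in place, even a validly-signed certificate from some other CA — say, one minted after an abandoned update domain was re-registered — would be rejected.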
Well, as we all know, Flash Player was nothing short of a catastrophic security disaster from the moment it appeared. And of course, I mean, like, how many episodes in this podcast's past were we saying, well, once again, exploits of a Flash Player, blah blah blah. So, now, Adobe was never seen to be using language like "Adobe strongly recommends all users immediately uninstall Flash Player to help protect their systems." ANT: To help protect their systems. STEVE: While it was a going concern; right? It was only after they'd decided to finally give up that they said, okay, now get rid of it because it's a disaster. On their Flash Player End-Of-Life FAQ page they ask themselves: "Why should I uninstall Flash Player from my system?" Then they provide the answer: "Flash Player," they said, "may remain on your system unless you uninstall it. Uninstalling Flash Player will help secure your system since Adobe will not issue Flash Player updates or security patches after the End of Life date. Adobe blocked Flash content from running in Flash Player beginning January 12th, 2021, and the major browser vendors have disabled and will continue to disable Flash Player from running after the EOL date." Okay. So first of all, I have no idea why my computer still had it installed. As we know, at one time, millions of websites and many standalone enterprise applications were dependent upon Flash Player for their operation. I had it installed for research purposes, and I had been blocking its operation through browsers since early in this podcast because it was clearly a disaster. But had I received any proactive Adobe reminder or suggestion that Flash Player had gone EOL - I mean, we talked about it at the time on the podcast - I would have clicked their "remove Flash Player" option. I doubt that ever happened. I certainly would have clicked it if it had. This suggests to me that Adobe may not have been as proactive in promoting Flash Player's removal as they might have been. 
And even now, when for the past several years their Flash Player Updater code has been running every hour in my system... ANT: Oh, my. STEVE: ...probing to see if there's been an update, why could they not have provided one final update, at any time, which would have caused Flash Player's Updater to remove itself, either immediately or the next time my system restarted? This could have been done any time in the last three years. ANT: Sounds like the call back to home, if you will, is just another way for Adobe to do some data extraction potentially? Say, hey, who is calling the home? STEVE: It certainly has that ability; right? They have a server somewhere, presumably still answering these calls. Maybe not. Maybe there's nothing there any longer. Or they're answering the calls, and they've got this massive library of all the IPs of all the systems. And who knows what else, what other information it's sending back? I don't want to disparage them anymore, but still. I guess my point is why haven't they shut this down themselves because they could? Okay. So anyway, I first examined my system's registered system services. And sure enough, right up there at the top when sorted in alphabetical order, was A-D-O-B-E; right? Adobe Flash Player Update Service. The service's run state was set to "manual." So next I went over to my system's Task Scheduler app; and, once again, along with scheduled tasks to keep Google Chrome and Microsoft Edge and a few other odds and ends updated, was the task to run Adobe Flash Update Service hourly around the clock. My next stop was Windows Programs and Features, where Adobe Flash Player was, once again, at the head of the class. I highlighted it and clicked Uninstall, and to its credit it did indeed remove every trace of itself from my system. And good riddance. Okay. 
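The manual hunt Steve just walked through — services, Task Scheduler, Programs and Features — can be partly automated. Here's a hypothetical cross-platform sketch of the last step: scan a list of installed-program names for leftover Flash components. On a real Windows machine you'd feed it the names from Programs and Features (for instance, the entries under the HKLM Uninstall registry keys); the pattern list here is illustrative:

```python
# Substrings that suggest a retired Flash component is still installed.
# Illustrative list, not exhaustive.
LEFTOVER_PATTERNS = ("adobe flash", "shockwave flash", "macromedia flash")

def find_flash_leftovers(installed_programs):
    """Return any installed-program names that look like retired Flash components."""
    return [name for name in installed_programs
            if any(pattern in name.lower() for pattern in LEFTOVER_PATTERNS)]
```

For example, `find_flash_leftovers(["Adobe Flash Player 32 NPAPI", "Mozilla Firefox"])` would flag the first entry for removal.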
So this leaves us with two questions: First, how many of this podcast's security-minded listeners might also still have Flash Player and its very persistent updater present in their systems? We don't know. I did. ANT: We don't know, but I think it's, again, being someone that listens to this show, they're probably really security-conscious and, like you, potentially had it on the system just for research purposes. You know? STEVE: Could have been. Or one of those I'll get around to doing it and never did. ANT: Right. STEVE: Anyway, since it doesn't show unless you peek into the proper corners of your system, it might be worth taking a look at your various Windows machines under Programs and Features. Just make sure that it's not still represented there. And while you're at it, why not scan through that list and remove any of the other cruft that most systems tend to accumulate over time? You know, I bet you that there's a bunch of stuff there that you're never going to use again. Okay. The second question is, why hasn't Adobe at least been proactive in shutting down the probable millions of Adobe Flash Player Updater instances that must still be running around the world? The idea of them still having their hooks, literally, into all of these systems is more than a bit disturbing. Nearly three years ago they formally stopped further updating of Flash Player. So if they were unwilling to proactively remove Flash Player from everyone's machines, at least they could use everyone's hourly query to remotely shut down all future queries by removing the Task Scheduler entry and the Update service from everyone's machine. You know, Adobe never did seem to be highly responsible with their shepherding of Flash Player after they acquired it from Macromedia. But, you know, at this point it's everyone's individual responsibility to protect themselves. So I just wanted to give a heads-up. This happened last week. I didn't know, even suspect that it was still there doing this. 
If anybody finds it in their machine, clearly it's long since time to get rid of it. ANT: Yeah, well, like I said, it's probably a data grab. I put nothing behind these companies, these big tech companies. It's all data, and it's all cash, and they're just going to figure out a way to continue to capitalize off of all the data that they can get. Mr. Gibson, one more thing I wanted to point out. One of our members here in the Discord mentioned that the API Monitor URL, we must mention that it is not an HTTPS URL. STEVE: Ah. ANT: So just... STEVE: That's a very good point. ANT: Take it with a grain of salt. And thank you. [Crosstalk]. STEVE: Yeah. And actually that flows from the point I made a while ago which got me in trouble with our listeners when I said, you know, there's a lot of still very useful stuff on the Internet which is not HTTPS. So it may be, I mean, I'm wondering who's paying for this and where this is hosted. It is at his own domain. But that's a very good point. I just scrolled back and looked. It is not HTTPS, just HTTP. So that's a good point. And I don't think I checked the signature. I normally do. I don't think I did. So bad on me for not seeing if it was digitally signed. But still, wow, beautiful, beautiful piece of work. So over the weekend I received a note from a desperate person. Now, I don't know if he's a listener. It's unclear why he wrote to me. Perhaps GRC came up in a Google search for the term VeraCrypt, because it does, because we've talked about VeraCrypt a lot in the past. ANT: Yup. STEVE: But in any event, this is what he wrote. He said: "Hi. VeraCrypt password lost. How can I get into my device? VeraCrypt site says it's impossible." And then he provides a link. He says: "So everything on this device is lost? Please, if you can help, appreciate any/all help." And he just signed off with his initials, CGS. He later sent a second email inquiring whether my consulting services were available for hire. They're not. 
But, you know, I would have told him the same thing. And I did write back to him. So sorry as we might be for this hapless person, the entire reason he, or someone, presumably chose to encrypt his device with VeraCrypt is because, assuming the use of a good password, just as VeraCrypt's FAQ correctly stated, no help is possible or available by deliberate design. ANT: Right. STEVE: And I said "he or someone" just now because we only have his statement to lead us to believe that he has any actual legal or ethical right to the data that has been encrypted on that device. Right? Knowing nothing more, it could just as easily be that a thief has stolen someone else's drive, knowing that it contains the password information for that person's cryptocurrency, which could be worth millions of dollars, where the only thing protecting that crypto from the theft is the device's VeraCrypt encryption, and that right now at this very minute the original true owner of this device is thanking his lucky stars. ANT: Right? STEVE: That first, he chose VeraCrypt; and second, he also locked that drive up with a password that no one will ever be able to brute force. However, in the meantime, out of an abundance of caution, this person whose jewels have been stolen has had plenty of time to relocate his crypto to some other wallet where it will now, again, be safe. In any event, the lesson here is, (a), use VeraCrypt. You know, it remains the go-to solution for open source, audited, whole drive or partition encryption; (b), no matter what, always use a really strong and difficult to brute force password; and (c), be very careful to create ample backups of the password you assigned. Again, there is - I know our listeners know this. But just as a reminder, there is no way to get your VeraCrypt data back on purpose. You don't want anybody else who were to steal your drive to be able to get your data back. So make a copy of your impossible-to-remember password. You have to. 
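The arithmetic behind "no one will ever be able to brute force it" is worth making concrete. A short sketch, assuming an attacker rate of a trillion guesses per second — a generous figure chosen for illustration, and in practice far too high for VeraCrypt, whose deliberately slow key derivation makes each guess expensive:

```python
def brute_force_years(alphabet_size, length, guesses_per_second=1e12):
    """Expected years to exhaust the search space of a random password."""
    combinations = alphabet_size ** length
    seconds = combinations / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)
```

A random 16-character password drawn from the 95 printable ASCII characters yields roughly 95^16, or about 4 x 10^31, combinations — over a trillion years of searching even at that optimistic rate. A 6-character lowercase password, by contrast, falls in well under a second. That asymmetry is the whole point: the same math that protects the rightful owner locks out the forgetful one.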
ANT: You know that phrase right there, make a copy of your password, could sometimes be problematic, depending on the person you're saying that to. What type of suggestions or recommendations do you have for someone that's going to make a copy of their "master password" far as what should it be in, where should it be stored, that kind of thing. Because we don't - we're not necessarily saying write your password down, put it on a sticky note, and stick it up under the seat cushion. STEVE: Right. I would not post it behind you when you're doing web conferences. That would not be good. ANT: Yeah, or that. STEVE: Bruce Schneier, the security guru that we refer to frequently on the podcast, made a comment once that has stuck with me when asked about that. He said: "Write it down." He said: "We are very good at managing little bits of paper." He said: "We are not good at storing things securely." So the point being I would print it out on a piece of paper and put it somewhere. I mean, and consider the risk profile. It is often the case that you're protecting yourself from someone in Russia or China. ANT: Mm-hmm, mm-hmm. STEVE: Some hostile foreign nation. They're not going to come to your home from their country and get into the shoebox you have tucked away under the bed. That's not going to happen. So that would be a safe place to store your password. Now, it would not be safe to store it from your kids because they would get into the shoebox that you have. So you have to keep the threat model in mind. ANT: Right. STEVE: If you keep a safety deposit box, or if you have an attorney you trust. And also remember you don't have to actually store the correct password. You could store the password with one change being made to it, and nobody would ever know what that change was. Or drop off a chunk of text that you know you always add to the end of your passwords. That kind of thing. 
So there are lots of things you can do in order to arrange to have it available for sure when you need it. But it's worth taking those kinds of measures. It's sort of a variation on the theme of what happens if you die. And, like, you know, now what? What about your spouse and people you care about and their need to access your stuff in order to help manage the existence that you had up until then. These sorts of things really do need to be given some consideration. ANT: Good point. Good point. STEVE: So exactly one week ago, last Tuesday, Firefox released to their so-called "release channel" their version 120. And while this is going on all the time with all of our browsers, this one is worth taking a moment to examine. As we noted last week, with the impending end of Chrome's support for Manifest 2.0, which will disrupt the operation of some of Chrome's more popular advertising and tracking controlling extensions, we may soon be seeing a welcome resurgence in Firefox's popularity because presumably they will continue to support uBlock Origin and Privacy Badger and then Adblocker and the other things that the people really like to use. So Firefox release 120, as I said, brings us a few new features that are worth noting. For one thing, its right-click pop-up context menu when you right-click on a link now adds a new feature down toward the bottom named Copy Link Without Site Tracking, which Mozilla says ensures that any copied links will no longer contain tracking information. And I think that's a great new feature. ANT: Yeah. STEVE: Why would you not do that? As we know, one of the ways that we are being tracked is that links are being embellished with unique identifiers. And if you just copy the whole link, you're going to copy that identifier. And so, you know, wherever that link goes, whoever clicks on it in the future, is going to be sending an identifier to the link's target unwittingly. So copy link without site tracking, cool new feature. 
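What "Copy Link Without Site Tracking" does under the hood is strip known tracking parameters from the URL's query string. A minimal sketch of the idea in Python — the parameter list here is a small illustrative subset; Firefox maintains its own, longer internal list, which this does not reproduce:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# A small illustrative subset of well-known tracking parameters.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "utm_term",
                   "utm_content", "fbclid", "gclid", "mc_eid"}

def strip_tracking(url: str) -> str:
    """Return the URL with known tracking query parameters removed."""
    parts = urlsplit(url)
    kept = [(key, value)
            for key, value in parse_qsl(parts.query, keep_blank_values=True)
            if key.lower() not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

So a copied link like `https://example.com/page?id=42&utm_source=news&fbclid=abc` becomes `https://example.com/page?id=42` — the functional parameter survives, the identifiers don't.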
Firefox now also supports the welcome setting, it's under Preferences > Privacy & Security, to enable GPC, which we did a whole podcast on a while back. That's Global Privacy Control, which is a newly defined beacon which is receiving some legal support in order to enforce and encourage its adoption by websites. So although this is opt-in during normal browsing, it is enabled in private browsing mode by default. Also in that same Privacy & Security region you'll find the Do Not Track request which can also and should be enabled. As I mentioned a few weeks ago after rediscovering the EFF's Privacy Badger, Privacy Badger also adds those beacons to every user's web requests. But there's no harm in adding a belt to go with those suspenders. So I also have some very encouraging news about the future of DNT to share in a minute. But a few more Firefox features first. Firefox's private windows and its ETP strict privacy configuration now also enhances its Canvas APIs with Fingerprinting Protection, essentially continuing to protect the users' online privacy. As we've discussed in the past, allowing websites to probe various subtle details of how a given browser renders specific pixel illumination to the user's viewing canvas is just one more trick that the trackers have developed to follow us around the Internet. So Firefox has taken measures to obfuscate that. ANT: Outstanding. Outstanding. STEVE: Yes. And listen to this one. This is a cool one. Firefox is rolling out Cookie Banner Blocking in the browser by default in private windows, but only for users in Germany, during the coming weeks. ANT: Aww. STEVE: Well, but first. Firefox will now auto-refuse cookies and dismiss annoying cookie banners for supported sites. Furthermore, also only for all users in Germany for the time being, Firefox has enabled URL Tracking Protection by default in private windows. 
Firefox will remove non-essential URL query parameters, just like we were talking happens if you right-click on a link and select that option now, which are often used to track users across the web. Again, I'll have more to say about Germany in a minute. Firefox now imports TLS trust anchors, you know, web certificate authority certificates, from the operating system's root store. Now, that's a little controversial, for reasons we'll explain. This will be enabled by default on Windows, macOS, and Android; and, if needed, can be turned off in Settings under Preferences > Privacy & Security > Certificates. On my Firefox 120 this checkbox is labeled; and, yes, it was enabled by default. It says: Allow Firefox to automatically trust third-party root certificates you install. Okay, now, my problem with this wording is that it's misleading. It sounds as though users, you know, "you" is the word he uses, as though users would be the ones to install those third-party certificates. But that's almost never the case. Presumably, Mozilla is attempting to be more compatible with the third-party TLS proxying middle boxes increasingly employed by enterprises to filter their network traffic. The use of any of those requires that the browser trusts the certificates that those boxes mint on the fly. Those third-party root certs are typically installed directly into the operating system over the network through active directory and group policies. Firefox has been unique in that it has always used its own root store which it has brought along and has not been dependent upon the hosting operating system's root store. So it must be that this recent move just now, with Firefox 120, is intended to make the use of Firefox easier in such enterprise settings. Now, that's great for the enterprise. We'd like to see more use of Firefox there. 
The worry is that, if the EU gets its way with this misdirected eIDAS 2.0 QWACs certificates mess, and is able thereby to force browsers and operating systems to install their member countries' web certificates into their root stores, then this mechanism, which has just now been added to Firefox 120, would automatically place it into compliance with that EU effort. Okay, now, given Mozilla's clearly and quite strongly publicly stated position on the EU's eIDAS 2.0 QWACs certificates, pre-compliance with something with which Microsoft, Mozilla, and others so strongly disagree seems quite unlikely to me. Remember that rather than signing onto that large open letter that most others co-signed, Mozilla chose to write one of their own, which the likes of Cloudflare, Fastly, the ISRG, and the Linux Foundation and others, all co-signed. So my guess is that it's that smoother functioning within the enterprise which was the sole motivation. And note that we do have a simple checkbox that any of us can uncheck if we do not want to have Firefox's root store augmented or polluted by its use of the underlying host OS's root store. Okay. So what about Firefox and this Germany business? ANT: Yes, please. STEVE: There were several interesting changes in Firefox, which I just mentioned, which only benefit German users. What's up with that? Turns out the German courts have been weighing several issues, and their decisions have come down on the side of user privacy and choice. TechRadar pulled together a nice piece, providing this recent news and also some back story about the DNT and GPC beacons. So with a bit of editing by me, here's what TechRadar explained. They said: "Germany is perhaps the most proactive country when it comes to protecting its citizens' privacy, something that privacy advocates and enthusiasts have been aware of for a while now, and the country recently reiterated its stance against Microsoft-owned LinkedIn. 
A Berlin Court found in favor of the Federation of German Consumer Organizations, which filed a lawsuit against LinkedIn for ignoring users who turned on the Do Not Track function in their browsers. According to the German judge, companies must respect these settings under the GDPR." So that's big news. It's a small victory for privacy. But this Do Not Track ruling might end up reshaping how websites and other online platforms have to handle our data more broadly. Adoption and support of DNT has been in sharp decline since its initial introduction back in 2009. Now, when asked by TechRadar, adblocker and VPN service provider AdGuard said it believes this is a potentially game-changing court decision which could, as they put it, "exhume the once-abandoned - and apparently buried - privacy initiative for good." So a little bit more back story here. Do Not Track headers, as we know, are beacons sent by browsers to proactively inform a website not to collect or track that visitor's browsing. DNT was first proposed by security researchers Christopher Soghoian, Sid Stamm, and Dan Kaminsky well back in 2009 to limit web tracking. And I just jumped on this thing full on because it was such a clean, beautiful, simple idea. Just simply turn on a header that says thank you very much, I'm a person who does not want to be tracked. A year later, in 2010, the U.S. FTC, our Federal Trade Commission, gave its approval and called for the creation of a universal mechanism to give users more agency over their data. The first web browser to support the new initiative was Mozilla Firefox, which added the feature in March of 2011. Other services followed suit, including Microsoft's Internet Explorer. And back then, IE was the big browser. So them adding it was a big deal. Apple's Safari and Opera were included. Google Chrome embraced the industry trend back in 2012. 
Now, as our longtime listeners may remember, the problem was IE from Microsoft turned this on by default. The fact that it was on by default allowed those who were fighting against it to take the position that, well, wait a minute, if it's on by default, then it doesn't necessarily reflect what the user wants. ANT: Choice, no choice there. STEVE: So because it's always on, we're going to ignore it, thank you very much. Anyway, so AdGuard said: "The early 2010s was perhaps the time when the enthusiasm for the DNT and its potential to improve privacy was at its peak." Okay. So unfortunately, after initial success with browsers, the DNT wave seemed destined to dwindle. The problems started with the lack of similar support among websites and advertisers. You know, they didn't want to do it. I mean, they didn't want to obey it. And nobody was making them obey it. So why would they? That's where GPC stands to benefit, and why this recent decision in Germany is significant. The German court is saying, you know, you have to pay attention to this. And, wow, even Google, which implemented the DNT feature in its browser, refused to "change its behavior" on its own websites and web services. In other words, naturally, they declined to honor DNT requests themselves, even from users on their own browser who had turned it on. ANT: Shocking, yeah. STEVE: The final nail in the coffin came in 2019, when the group working on standardizing DNT was finally dismantled just due to lack of anything happening. And we talked about this at the time on the podcast. So privacy advocacy groups did not want to give up on providing users a better way to help protect their personal data and browsing activities. They wanted to hold on. AdGuard explained, they said: "While DNT failed to gain much support, the need for a mechanism that would allow people to opt out of having their personal information shared or sold was still strong." 
Privacy-focused experts believe organizations should allow their customers to decide whether to have their information shared or sold in the first place. It was from this need for an alternative that the Global Privacy Control (GPC) was born a couple years ago in 2020. Like DNT, the GPC is a signal sent with every web request over HTTP and HTTPS to opt out of having browsing data collected or sold. Supporters of this new initiative include many privacy-first browsers and search engines like DuckDuckGo, Brave, and Firefox; and browser extensions such as Abine's Blur, Disconnect, OptMeowt, and EFF's Privacy Badger, as I mentioned before. GPC seems to have gained more traction than DNT was ever capable of, until now at least, where DNT may be catching up. And I think it's just that back in 2009 DNT was still ahead of its time. The idea was correct. But, you know, pressure hadn't built up enough against all of this going on as it has now. I mean, we even have Google designing their own non-tracking system in order to perform some user profiling in order to deliver more relevant ads. So, you know, clearly tracking is in trouble at this point in our entire industry. So in August of last year, GPC won its first legal battle in California against the commercial retail brand Sephora. And now, on October 30th, 2023, that date may be remembered as a milestone for the DNT initiative because that was the date that the Berlin Regional Court ruled that LinkedIn can no longer ignore its users' DNT requests. Now, LinkedIn was not silent on this. We'll get to that in a second. But Rosemarie Rodden, a legal officer with the German consumer rights group that brought the lawsuit, said: "When consumers activate the 'do-not-track' function of their browser, it sends a clear message: They do not want their surfing behavior to be spied on for advertising and other purposes. Now, website operators must respect this signal." 
It turned out that the judge agreed with Rosemarie, ruling that LinkedIn is no longer allowed to warn its users that it will be ignoring their DNT signals. That's because, under the GDPR, the right to opt out of web tracking and data collection can also be exercised using automated procedures, period. In other words, the court found that a DNT signal is legally binding. This sets a precedent and revives the all-but-abandoned idea of Do Not Track. Now, as I said, not everyone is happy with this decision. The LinkedIn spokesperson told CyberNews: "We disagree with the court's decision which relates to an outdated version of our platform, and intend to appeal the ruling." Now, outdated? What's outdated, the LinkedIn platform? If the ruling only applies to an outdated version of the LinkedIn platform, then why appeal it? And surely you can see with the growing support for the closely related GPC signal, and with Google, as I said, developing their own non-tracking means to obtain interest categories for web users, this is a tide that is finally beginning to shift. So anyway, more enforcement is likely coming on the horizon. One decision in one country doesn't change the world overnight. But it is sure a step in the right direction. And there are times when having some ambulance-chasing attorneys around can come in handy. Let's let them, you know, sic them on some other large websites that are not abiding by this court order, and we'll begin to get things turned around. And Ant, I think we should take our second break so I can catch my breath. And then we will proceed to talk about PwnCloud. ANT: I wanted to ask you something in following up on this. STEVE: Yeah, yeah. ANT: With the DNT. Because I think about all of the websites that I've gone through, you know, throughout the years, that are just random websites, whether it be a photography site or some online shopping or what have you. Some of them they put the banner up about tracking cookies. 
And it gives you the option to reject, so on and so forth. And then there's others that say, hey, our site uses tracking cookies. Period. You can close this window and carry on, or you can get the heck off our website. Period. So this is only going to apply for German sites? Because I guess we're still going to see a bit of the wild, wild west everywhere else, and still have those banners pop up and say, you know what, you don't have a choice. We're going to track you anyway. Is that what you're saying? STEVE: Right. So the popup banners and the asking the user for permission or acknowledging that, that was something that the GDPR instigated and required that sites do, that they proactively get users' permission. The DNT and the GPC - and, you know, it's kind of dumb to have two, be nice if we just amalgamated them into a single beacon. But those are settings that the user can set in their browser so that the browser is itself saying, "I don't want to be tracked. I don't want to have my information aggregated and sold or shared with others." So I think where we are at the moment is in this confusing set of changing times. So, you know, the GDPR said to websites, if you're going to be using tracking, you need to inform the user and obtain their permission. That's why we got all these ridiculous banners that you have to click in order to make go away. Now the court is saying, you know, even that is obsolete. If the user has DNT and GPC beacons on, period, you are never allowed to track them, no reason to put up a cookie banner because you can't be using cookies that are going to be tracking them. ANT: Right. And that's if they use an automated beacon, is what you're saying. STEVE: Right. And all the browsers, I mean, that's the thing that Firefox just added and other browsers are beginning to return to. And so I think this is what's going to happen. Basically, browsers are simply going to be saying this, and websites are going to start having to conform. 
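For the technically curious, both beacons are literally one-line request headers: DNT: 1 and, for Global Privacy Control, Sec-GPC: 1, each sent as the string "1" when the user switches them on. Here's a minimal sketch of how a privacy-respecting server might honor them; the function names are my own illustration, not any site's actual code:

```python
def user_opted_out(headers):
    """Return True when the browser sent either opt-out beacon.

    Both DNT and Sec-GPC are transmitted as the literal value "1"
    when the user has enabled them in the browser's settings.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("dnt") == "1" or normalized.get("sec-gpc") == "1"

def should_set_tracking_cookies(headers):
    # Under the Berlin court's reading of the GDPR, an automated
    # signal like this is binding: no beacon honored, no tracking.
    return not user_opted_out(headers)
```

Note that the server-side logic is trivial; the hard part, as the episode makes clear, has always been getting sites to run anything like it.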
They're going to, you know, they're going to be kicking and screaming. On the other hand, Google has now formally rolled out their interest category system. That's what everyone's going to have to switch to. And remember, it's way weaker than tracking. I mean, tracking allows the aggregation of all kinds of crap, instead of just like what are the interests that your recent web behavior has suggested that you might have, which is the Google replacement system. So I have a feeling tracking's not long for the world. As we know, things don't change soon, but they do eventually change. ANT: Right. This is a start. This is a start. So ownCloud, PwnCloud, tomato, tomahto, is it one of those things? What's the deal here? STEVE: Well, it appears that anyone running any instance of the very popular open source "ownCloud" file sharing system needs to take immediate action - as in, stop listening to this podcast right now, and immediately unplug anything running ownCloud to get it off the Internet. Unfortunately, due to today's ultra-swift nature of the exploitation of any publicly announced new vulnerabilities - in this case it's a remotely exploitable CVSS 10 out of 10, which is difficult to get - it may already be too late. But even if so, at least closing the unlocked front door and working to clean up any damage still needs to be done. Okay. So here's the story. GreyNoise reported the following in their coverage of CVE-2023-49103. Yesterday they wrote, under the title "ownCloud Critical Vulnerability Quickly Exploited in the Wild," they said: "On November 21st, 2023" - so that's exactly one week ago today - "ownCloud publicly disclosed a critical vulnerability with a CVSS severity of 10 out of 10. This vulnerability, tracked as CVE-2023-49103, affects the 'graphapi' app used in ownCloud. ownCloud," they wrote, "is a file server and collaboration platform that enables secure storage, sharing, and synchronization of commonly sensitive files. 
The vulnerability allows attackers to access admin passwords, mail server credentials, and license keys. GreyNoise has observed mass exploitation of this vulnerability in the wild as early as November 25th, 2023." Okay? So it took four days from the announcement for mass exploitation to take off. They said: "The vulnerability arises from a flaw in the 'graphapi' app, present in ownCloud versions 0.2.0 to 0.3.0. This app utilizes a third-party library that will reveal sensitive PHP environment configurations, including passwords and keys. Disabling the app does not entirely resolve the issue, and even non-containerized ownCloud instances remain at risk. Docker containers before February 2023 are not affected. Mitigation information listed in the vendor's disclosure includes manual efforts such as deleting a directory and changing any secrets that may have been accessed. In addition to this vulnerability 49103, ownCloud has also disclosed other critical vulnerabilities, including an authentication bypass flaw (49105) and a critical flaw related to the OAuth2 app (49104). Organizations using ownCloud should address these vulnerabilities immediately." Okay. ANT: Boy. STEVE: That's bad. So for ownCloud users, we have a potential four-alarm fire situation. There are three newly disclosed CVEs with ratings of the difficult-to-obtain 10.0, a highly critical 9.8, and a still very bad 9.0. That 49103 CVE with a CVSS of 10.0 allows for disclosure of sensitive credentials and configuration in both containerized and non-containerized deployments. The 49105 CVE is the second worst, with a CVSS of 9.8. It's a WebDAV API authentication bypass using pre-signed URLs which impacts core versions from 10.6.0 to 10.13.0. And the third 49104 CVE with the CVSS of 9.0 is a subdomain validation bypass impacting OAuth2 prior to version 0.6.1. So in the case of this first worst mistake - and really a mistake is what it is. It's not some fancy Log4j tricky-to-exploit vulnerability. 
Anyone who has any experience with PHP knows that you never want to expose PHP's phpinfo function to the public Internet, yet that's exactly what this 'graphapi' has done. Located down the path "owncloud/apps/graphapi/vendor/microsoft/microsoft-graph/tests/" is a "GetPhpInfo" PHP file - and it can be accessed remotely to disgorge all of the system's internal sensitive data, including all the environment variables of the web server. And in containerized deployments this includes the ownCloud admin password, mail server credentials, and license keys. ownCloud recommends deleting that file and administratively disabling the very dangerous phpinfo function. This can be done by simply adding "phpinfo" to the "disable_functions" list in the system's php.ini file. And sadly, that list is empty by default, meaning PHP ships with phpinfo enabled. Not on my PHP instances, but by default. After doing this, however, do not make the mistake of not also immediately rotating all of the system's credentials - the admin password, the mail server and database credentials, as well as Object-Store S3 access keys if the ownCloud instance was hosted by an S3 cloud provider. Again, there were 12 unique IPs identified scanning the Internet, looking for instances of ownCloud - probably malicious, though we don't know, maybe some were security firms. So you have to presume your instance was infected by something if it was exposed to the Internet, since essentially they all were scanned. The second problem, the other CVE at 9.8, makes it possible to access, modify, or delete any file without any authentication if only the username of the target is known, and if they have no signing key configured on their account - which is the default, that is, not to have one. 
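To make concrete why this ownCloud exposure was so trivially scannable: phpinfo() emits a recognizable HTML dump, so a scanner only has to fetch the known tests path and look for its telltale markers. A rough sketch of such a check, for auditing systems you own; the function names and the marker heuristic are mine, and real scanners reportedly also try path variations:

```python
from urllib.request import urlopen
from urllib.error import URLError

# The exposed file sits under the graphapi app's bundled test directory.
VULN_PATH = "/apps/graphapi/vendor/microsoft/microsoft-graph/tests/GetPhpInfo.php"

def looks_like_phpinfo(body):
    """Heuristic: phpinfo() output reliably contains these strings."""
    return "phpinfo()" in body or ("PHP Version" in body and "Configuration" in body)

def check_owncloud(base_url, timeout=5.0):
    """Return True if the instance at base_url appears to expose phpinfo.

    Unreachable hosts and HTTP errors are treated as "not exposed".
    """
    try:
        with urlopen(base_url.rstrip("/") + VULN_PATH, timeout=timeout) as resp:
            return looks_like_phpinfo(resp.read().decode("utf-8", "replace"))
    except (URLError, OSError):
        return False
```

The actual mitigation, as Steve describes, is deleting the file, adding phpinfo to php.ini's disable_functions, and rotating every credential the dump could have revealed.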
That's also obviously quite a potentially serious vulnerability, to be able to access, modify, or delete any file without any authentication, knowing only the username of the target. And the third 9.0 flaw relates to a case of improper access control that allows an attacker to pass a specially crafted redirect-url which bypasses the validation code and thus allows the attacker to redirect callbacks to a TLD controlled by the attacker, a top-level domain controlled by the attacker. Anyway, bottom line, everyone using ownCloud should update to the latest builds and make sure that everything else is still okay because I did say 12 unique IPs were found to be scanning the Internet looking for instances of ownCloud and were carrying through with exploits. So update. Change all of your sensitive credentials. It's a must-do, unfortunately. While we're on the topic of critical vulnerabilities that will wreck your day or your week or maybe even your month, anyone using the sadly named "CrushFTP enterprise suite," and there are currently somewhere around 10,000 publicly exposed instances of it on the Internet, must immediately update to v10.5.2 or later. Back in August, the security firm Converge Technology Solutions responsibly disclosed a critical unauthenticated zero-day vulnerability, meaning you don't need authentication to use it, to exploit it, which affects the CrushFTP enterprise suite. Having 10,000 of these instances publicly exposed is bad enough, but a great many more are known to be residing behind corporate firewalls which malware might manage to crawl behind. The exploit permits an unauthenticated attacker to access all CrushFTP sites, run arbitrary programs on the host server, and acquire plaintext passwords. The vulnerability was fixed in CrushFTP version, as I said, 10.5.2, and it affects software in the default configuration on all operating systems. 
What's more, Converge's threat intelligence has discovered that the security patch which resolved this problem has been reverse engineered, and adversaries have developed proofs of concept. So forthcoming exploitation can be presumed. Update immediately. The attack chain hinges upon an unauthenticated query when CrushFTP parses request headers for a data transfer protocol called AS2. By exploiting the AS2 header parsing logic, which obviously has a flaw, the attacker gains partial control over the user information Java Properties. This Properties object can then be leveraged to establish an arbitrary file read-and-delete primitive on the host system. Using that capability, the attacker can escalate to full system compromise, including root-level remote code execution. So it's really bad. 10,000 instances are public. The patch has been reverse engineered. Those 10,000 servers are going to get attacked. Update to 10.5.2 ASAP if your enterprise is using CrushFTP because, as I noted at the top, you don't want to have your spirits crushed. ANT: Wow. With something like this, because I just looked up this CrushFTP, are you saying this is 10.5 or greater where we need to step up to? They're already up to, like, 10.9 with this application. Is there any reason why someone would use this? Because there are plenty of other FTP options out there, including some open source options that are out there. Why would an enterprise not even look at the open source side of things? STEVE: Well, I did not look to see what CrushFTP enterprise suite means. And, for example, this AS2 protocol is something I've never encountered with FTP. So it may be that it's got a bunch of extra special features that specifically target it to the enterprise. And who knows, a suite implies there are other components to it. So it may be a big package that has a whole bunch of other stuff that is targeted at the enterprise, and this CrushFTP server is just one of the modules. 
Unfortunately, it only takes one bad module to make the whole thing problematic. ANT: True, true. STEVE: So a very interesting set of flaws has been found in the fingerprint sensors manufactured by Goodix (G-O-O-D-I-X); Synaptics, they're a very popular supplier, I ran across Synaptics, they make the touch pads in most laptops; and ELAN (E-L-A-N). The vulnerable machines from the OEMs who purchase and integrate those sensors include the Dell Inspiron 15, the Lenovo ThinkPad T14, and the Microsoft Surface Pro X, just to name a few which are known to contain these popular sensors. So this is what happens when a hardware-savvy security firm takes a close look at what's going on inside of fingerprint sensors. And as is often the case, the result is frightening. All three of the fingerprint sensors are the "good kind," which perform something known as MoC verification, which stands for "match on chip." That's what you want, since it integrates the matching and other biometric management functions directly onto the sensor's integrated circuit. But the researchers said: "While MoC prevents replaying stored fingerprint data to the host for matching, it does not, in itself, prevent a malicious sensor from spoofing a legitimate sensor's communication with the host and thus falsely claiming that an authorized user has successfully authenticated." Okay. So to thwart this problem in general, Microsoft created something known as the Secure Device Connection Protocol (SDCP). It's designed to eliminate this problem by establishing an end-to-end secure channel between the sensor and the machine's motherboard. And we know how this can be done in practice. You know, TLS, the protocol that HTTPS uses to establish a secure connection between endpoints in full public view - it works. So SDCP can theoretically work. 
You're able to share secret keys, establish a symmetric key, and as long as you have authentication of the endpoints, that can potentially be secure. But the researchers designed a novel technique that can successfully circumvent these SDCP protections to create adversary-in-the-middle attacks, as they call them. ANT: Of course they did. STEVE: Now, that's - uh-huh. That's if this is turned on. And get a load of this. The ELAN sensor, which interfaces over USB, doesn't even offer SDCP. So it's easily spoofed simply by sending cleartext security identifiers. ANT: Oh, boy. STEVE: This allows any device, any USB device, to masquerade as a fingerprint sensor and claim that an authorized user is logging in. In the case of the Synaptics fingerprint sensors, not only was SDCP found to be turned off by default, the implementation used a known-flawed custom TLS stack to secure its USB communications between the host driver and the sensor. So once again it was possible to defeat the biometric authentication. The exploitation of the Goodix sensors leverages a fundamental difference in enrollment operations carried out on a machine that's using Windows versus Linux, or a machine that transiently boots a copy of Linux. This takes advantage of the fact that Linux does not support SDCP. So we have what is essentially a protocol downgrade attack. This truly lovely hack is done as follows: Boot Linux. Enumerate valid IDs. Enroll the attacker's fingerprint using the same ID as a legitimate Windows user, and again you can do this because SDCP is not supported by Linux. Intercept the connection between the host and the sensor by leveraging the cleartext USB communication. Then boot to Windows. Intercept and rewrite the configuration packet to point to the Linux database. And finally, log in as the legitimate Windows user with the attacker's fingerprint. Essentially, this allows for the installation of an attacker's fingerprint and its association with the legitimate user's identity. 
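To see why the unauthenticated database-selection packet is the linchpin of that downgrade, here is a deliberately toy model of the flaw - not any vendor's actual protocol, just a sketch of what happens when a match-on-chip sensor lets the host pick the template database without authenticating that request:

```python
class ToyMocSensor:
    """Toy match-on-chip sensor with separate Windows and Linux
    template databases, mimicking the Goodix design described above."""

    def __init__(self):
        self.databases = {"windows": {}, "linux": {}}
        self.active_db = "windows"

    def configure(self, packet):
        # FLAW: no signature or authentication check on the config
        # packet, so anyone on the USB path can select either database.
        self.active_db = packet["database"]

    def enroll(self, user_id, template):
        self.databases[self.active_db][user_id] = template

    def match(self, user_id, template):
        return self.databases[self.active_db].get(user_id) == template

# The attack chain, step by step:
sensor = ToyMocSensor()
sensor.configure({"database": "windows"})
sensor.enroll("alice", "alice-finger")       # legitimate Windows enrollment

sensor.configure({"database": "linux"})      # boot Linux: no SDCP there
sensor.enroll("alice", "attacker-finger")    # enroll attacker under Alice's ID

sensor.configure({"database": "linux"})      # back under Windows, rewrite the
                                             # unauthenticated config packet
assert sensor.match("alice", "attacker-finger")  # Windows accepts "Alice"
```

The separate databases accomplish nothing, because the selector between them is itself unprotected - which is exactly the point the researchers made.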
It's also worth noting that, although the Goodix sensor design anticipated this bait-and-switch weakness and therefore uses separate fingerprint template databases for Windows and for non-Windows systems, the attack is still possible thanks to the fact that the host driver sends an unauthenticated configuration packet to the sensor to specify what database to use during sensor initialization. So you simply change that, point it to the Linux database, and now you log in with a fingerprint that you set up when the system had been booted under Linux. To mitigate such attacks, the researchers have recommended that OEMs enable SDCP and ensure that the fingerprint sensor implementation is audited by independent qualified experts. You know? These guys did that. So we insert our standard refrain there: Have the security audited by somebody who wants to find flaws, not by your own people who just finished writing it themselves and assume it works. Just for the record, this is not the first time Windows Hello biometrics-based authentication has been successfully defeated. In July of 2021, Microsoft issued patches for a medium-severity security flaw, CVE-2021-34466, which had a CVSS of only 6.1. But it could still permit an adversary to spoof a target's face and get around the login screen. The researchers said that: "Microsoft did a good job designing SDCP to provide a secure channel between the host and biometric devices, but unfortunately device manufacturers seem to misunderstand some of the objectives." Like having it turned off by default. I would call that a misunderstanding. "Additionally, SDCP only covers a very narrow scope of a typical device's operation, while most devices have a sizable attack surface exposed that is not covered by SDCP at all." Okay. So I think our takeaway from this should be to not over-rely upon the convenience offered by biometric authentication. 
Yes, it's convenient to be able to hold your phone up and have it look at you and say, oh, that's Steve, or that's Ant, or whomever. But this is why Apple, whose biometric authentication has been very tightly designed by security-crazed engineers, will require the "something you know" to be provided initially when you're unlocking your device following any restart. You know, if I had any device whose security was truly critical to me, I'd encrypt its drive, and I would supply the key with an outboard USB dongle, not something biometric. In fact, that's what I did when I was in Europe traveling with a laptop during the SQRL tour years ago. It was deeply encrypted, and I used a physical dongle in order to supply the unlock key. ANT: And Mr. Gibson, looking at this story, I think about there's an old adage of, you know, if there's a problem, typically you can throw some money at it to fix it. And I know that may not apply for everything, especially when it comes to hardware, computer hardware and whatnot. But I'm looking at these manufacturers, these OEMs that provide this. What was that one touchpad, hold on, I've got to scroll up, Synaptics. STEVE: Yeah. ANT: They're in everything. STEVE: Yup. ANT: That touchpad is in everything. And typically it's in the less expensive laptops that are out there. STEVE: Yup. ANT: Microsoft. Is this something, is this on Microsoft? Or is this on Synaptics? Because Microsoft clearly got a deal for that licensing, to put it in all of these devices. But yet Synaptics dropped the ball by cutting that off, by cutting off SDCP by default. STEVE: So it's probably the case that Synaptics sensors offer SDCP, but I would imagine - so we don't know which of these sensors goes with which of the OEMs that I mentioned. ANT: Okay. STEVE: I would imagine that Microsoft would require SDCP be enabled. 
The only one of those three of ELAN, Synaptics, and Goodix, where the researchers found SDCP actually enabled and running, was on the Microsoft Surface tablets. ANT: Oh, okay. All right. STEVE: So that would be my guess. It would be surprising if Microsoft supported this protocol and Synaptics was allowed to leave it off. It would probably have to be turned on. ANT: Yeah. I was thinking about it, and I was like, and then you brought up Apple. And I was like, yup, that's exactly my thought because Apple spends a lot of money on this stuff from a security standpoint, and those relationships, and making sure things are done in a particular way. Sounds like is this just where people are cheaping out on things and should just spend a little bit more money with these relationships and these licenses? STEVE: Well, and that's the problem is that, as we know, Apple did a really strong job of implementing a fingerprint sensor. You know, they really, we covered it in detail when it happened on the podcast. They nailed this technology. The problem is other people come along with a fingerprint sensor, and the consumer thinks, oh, you know, it's a fingerprint. Mine's unique, and it's not going to be like anybody else's. They're only looking at, literally, at the surface of their skin. That says nothing about the technology that implements what happens when that skin hits the road and actually has the fingerprint read. So just the fact that it requires some biometrics, it implies nothing about the security behind that. And that ELAN sensor, it has no security at all. It's just wide open. Crazy. ANT: Yikes, yikes. Unbelievable. So what's going on with Apache ActiveMQ, sir? STEVE: Well, so we talked about this a few months ago. They had a horrible problem with something known as Apache ActiveMQ, a message queuing vulnerability. It was another of those 10.0s. I just wanted to mention that it remains under very active exploitation. 
A proof of concept exploit was initially posted on GitHub. It was later updated to add an English language translation, and then two weeks ago it was further improved to change and basically bolster its TLS support. So by now, pretty much any Apache ActiveMQ server that has been left unattended will be spinning its fans as fast as possible because cryptominers have been observed being installed into any still-vulnerable servers. They've just been turned into cryptominers for as long as they can run, you know, generating cryptocurrency for the bad guys. Okay. So a public service announcement by way of our major credit reporting bureaus. Two of them, TransUnion and Experian, were both just hacked, with their super-sensitive consumer data exfiltrated. The hacking group named "N4ughtySecTU," so that's Naughty Sectu, is asking - I know - is asking for $30 million from each firm, threatening to release its customers' data online. And this is the second time the N4ughtySecTU group has hacked TransUnion, having previously done so back in March of last year, 2022. So let me just take this opportunity to once again remind everyone that all four of the major credit reporting bureaus support credit locking, and that everyone - everyone - should be taking advantage of this feature. Given today's cybercrime environment, and the fact that those who are holding and aggregating our private information without our permission - no one ever gave these credit reporting agencies the right to collect all this stuff on us. They just do it, and they resell it for money, and they track us, so without our permission. They have been proven unable to keep it private. So we need to minimize the chance that our private information, if it escapes, will then be leveraged against us for identity theft. Identity theft is one of the most debilitating, and difficult to recover from, things that can happen to an individual. 
A few years ago I decided that since I was such a large customer of Amazon, I would start routing my purchases through their card to obtain an additional several percent of savings. To apply for their card, I needed to briefly drop my credit reporting agency shields to allow Amazon's credit folks to verify my credit worthiness. What I learned at the time was that it is now possible to ask the bureaus to temporarily drop our shields for a specified duration, you know, like seven days, after which time they will automatically snap back up. So that really removes the last barrier of inconvenience from having one's credit reporting blocked by default. Everyone listening and everyone you care about should be running with their shields up full-time. There's just no reason not to. ANT: Yeah, I agree 100%. STEVE: And Ant, as we start taking some listener questions, why don't we take our last break, and then we will see what Christian has asked. ANT: Oh, sure, yeah. I'm looking forward to some of the feedback from our fine listeners. Thank you again, Mr. Gibson. Another great episode so far here on Security Now!, and thank you to everybody hanging out watching this live, whether it be in IRC or here in our Club TWiT Discord. Really do appreciate y'all being here. Mr. Gibson, about time to close the loop and check out what our awesome listeners have sent in; right? STEVE: Yup. So Christian, I think his name, I would pronounce it Rutrecht, R-U-T-R-E-C-H-T. He said: "Hi, Steve." ANT: I'm not going to try to pronounce it. STEVE: "Not sure" - yeah - "if you have managed to catch up to Passkey support in Bitwarden. I have not heard it mentioned lately in Security Now!. I've just started testing it on selective services, and it works flawlessly across my various devices. I am very impressed, I must say." And we need to mention that Bitwarden is a sponsor of the TWiT network. 
He said: "What I would like to know is the view you have on adding all your Passkeys to a combined password vault? I know that you have a standpoint that MFA verification apps or physical token devices should be separated. But what about Passkeys? Personally, I prefer combining everything for the sake of convenience. I have family members and colleagues that I try to nudge towards using a password manager. For them to be able to use it, it must be easy going. Even the concept of having 'some thing' remembering their password for them is complicated to comprehend for some. "In my research I've found that the best way of keeping my password security posture up to date and readily available for access is to have a single vault/app for everything. I chose Bitwarden for that purpose at the conclusion of my research two years ago, as it was the best hardened platform available, including the support for authentication tokens, and now Passkeys. Keep up the good work." So Christian, I would classify Passkeys exactly as I would passwords. Passkeys are just superior passwords because, by using public key asymmetric crypto instead of secret key symmetric crypto, Passkeys are inherently immune to a great many of the attacks and failure modes that have always beset passwords. In other words, I think it's entirely acceptable to keep Passkeys in the same vault, managed right alongside your traditional passwords. And as you noted, I do feel strongly that the entire point of multifactor authentication is to create a clean and clear security boundary for use when remotely authenticating to a higher than usual security facility. For that reason, the idea of having a password manager also able to fill in the time-varying six-digit MFA token makes me shake my head. Why bother at all, if that's what you're going to do? It's true that some benefit will be derived from the inherent time-varying nature of the token. So simple replay attacks will be thwarted. 
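The asymmetric property that makes Passkeys "superior passwords" can be shown with a tiny challenge-response sketch. This uses textbook RSA with toy numbers purely to illustrate the idea - real Passkeys use elliptic-curve keys under the WebAuthn protocol, with origin binding and more - but the core point survives: the website stores only the public half, so a server breach leaks nothing a phisher can log in with, and a captured signature is useless against the next fresh challenge.

```python
# Textbook-RSA toy (tiny, insecure numbers -- illustration only).
p, q = 61, 53
n = p * q              # 3233: public modulus, stored by the website
e = 17                 # public exponent, also stored by the website
d = 2753               # private exponent: never leaves the user's vault

def sign(challenge):
    # Only the holder of the private key can produce this.
    return pow(challenge, d, n)

def verify(challenge, sig):
    # Anyone can check a signature using only the public key.
    return pow(sig, e, n) == challenge

challenge = 1234                   # fresh nonce sent by the site at login
sig = sign(challenge)
assert verify(challenge, sig)      # legitimate login succeeds
assert not verify(999, sig)        # a replayed signature fails a new challenge
```

Contrast this with a shared password: the site necessarily holds (a derivative of) the same secret the user presents, which is exactly the symmetric failure mode Steve is describing.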
But if you're going to go to the trouble of using some form of multifactor authentication, why not get as much benefit from it as you can? And that means keeping it separate from your web browser that is able to fill in all the rest of your information. Don't have it also fill in your multifactor authentication. ANT: But sir, we want convenience. Users want convenience and stuff to be done like right now. STEVE: In fact, if you want convenience, I would say don't bother adding multifactor authentication to any accounts, if you're going to have your browser automatically fill them in. That's nutty. Okay. So here's the one that's I think interesting, Ant, that I asked you if you had kids about earlier. I'll read his question, I'll share my reply to him, and then let's talk about it. So a listener named Victor wrote. He said: "Howdy, Steve. Longtime IT guy here, but recent, about a year, listener as I didn't really do podcasts until Security Now!. "I have a question for you that may be something of a rabbit hole, but I am seeking opinions on parental controls. I consider myself a well enough accomplished IT guy, but I'm facing a problem that one of my kids is about to step into the realm of getting a phone. We, my wife and I, have held off until now, citing COPPA laws, but we are out of excuses at this point. The issue is that, for all of my IT experience, this kid, my third, is exceedingly more tech savvy than any of my other kids..." ANT: Imagine that. STEVE: Uh-huh, "...having already proven their ability to circumvent restrictions on school-operated technology and continuing to do so without any repercussion as the school can't seem to collect any evidence of their wrongdoing. "I've done my best to protect the home front - Pi-hole, pfSense router with static routes to nowhere for undesired sites, et cetera. But once on the phone, the kid will be able to connect when/wherever they please.
And I have yet to find a truly secure parental control app which will do all the standard watchdog things AND self-protect from deletion on Apple and/or Android. Any advice is welcome. Here's to 999 and beyond. Signed, Victor." Okay. So I wrote to him. Though I never had my own kids, during my late 20s, 30s, and 40s I participated in raising several long-term girlfriends' kids from pre-teen through their teens. And although I was never more than "Mom's boyfriend," I was around during some important years, so we bonded. And I've remained in touch with several of them who are now married and with their own kids. So I'm not a total newbie on this front. Victor didn't share the age of his youngest and most technically savvy of the three, so this might not apply if this individual is too young. But if this person is this tech savvy, then perhaps they're not that young. What occurs to me is to wonder whether this particular problem has a technical solution. I think that perhaps the solution lies in parenting rather than in technology. I am horrified by what is now available - I was distracted, I should explain to our listeners, by Ant, who just did a big "yay, congratulations" by my saying that the solutions lie in parenting rather than technology. ANT: My bad. My bad. STEVE: No, that's okay. I'm horrified, thank you for that, I'm horrified by what is now available on the Internet. And I completely get it that age appropriateness is a real thing. There are many salacious adult depravities that young minds should not be exposed to until they have obtained sufficient context and maturity to understand them for what they are. But at the same time, "blocking" feels like a losing uphill battle. The Internet is truly pervasive. If this youngster wants access to the Internet, he or she is going to obtain that access. 
If not at home where IT security is strict, then over at a friend's home whose parents never considered this to be a problem, or by breaking through the school's security. And erecting technical blockades might just present a challenge to make what's hiding behind them seem all the more intriguing. Given what's out there, I understand the dilemma that today's parents face. And I would not want to be in that position today. But I also believe that there's a very real limit to a parent's ability to control what a free-ranging young person is exposed to. I think that if I were in this place, in Victor's place, I would sit down with all of my kids as a group and talk to them honestly and openly about what's on the Internet, and why. About how a great deal of what's there does not represent what most people think and feel. About how it's often deliberately extreme. About how behind a lot of it is a profit motive, trying to separate people from their money one way or the other. And I would also take some time to explain about predation on the Internet. About how there are truly dangerous people hiding behind fake names, photos, and identities. That these people are often not who they claim to be. They may well be in a far off country and not be at all nice people. And that the only people you ever really know are the ones you've met in the real physical world. I would not pull any punches. I'd tell them that I'm terrified by the idea of them being exposed to what's out there on the Internet. And that the only thing that will keep them safe is their own common sense and keeping lines of communication open with their true friends in the real world, and with their parents. ANT: And look here. First off, let me just give you an applause because everything you just said, we're pretty much in agreement here. I echo you. This stuff with the Internet, and I think back to my time with my kids - granted, they're older now. STEVE: Right.
ANT: I have one in college, he's a junior in college. I have a high school senior. And my oldest boy, my stepson, he is like, I think he's 25 now. You know, so they're out and about. But when phones began to be a big thing in their lives, I was tough. I was hard on them. I was like, no, you're not getting a phone. Period. You're not getting a phone. And it sucked. I hate to say it that way. It was hard to do that. But I knew that having a phone wouldn't necessarily help them, far as getting their work done and being good students and whatnot. But I also thought that it was going to lead to some social issues, with kids being kids, because kids will make fun of you if you don't wear the right type of T-shirt for whatever reason. STEVE: Yup. And if you don't have a phone, oh, your mommy won't let you have a phone. ANT: Exactly. So I knew I was going to be in it for that. But I thought in the long run they were going to be better off. But just as you mentioned, I explained to them why. I explained to them the realities of the Internet. There's a lot of things out there that are just, whoo, wow. But then there's also some good stuff on the Internet, great information. And I had to teach them, you know, there's going to need to be a balance. And right now at this particular age you're not ready for a phone. I did the same thing with particular video games. Grand Theft Auto wasn't allowed in my house for a little while. You know. Now, when they got older, sure. Now you can play it because you have a little bit more common sense and know that's not the reality. But it was mentioned here in our Discord, I believe by Berserk, that 100% this kid would see the parents as a "challenge." I agree. That right there.
I did not want to use tech to get in the way and put up firewalls and things like that because all they were going to do is just try to figure out a way to get around it and put a lot of effort in that that they didn't need to when they should have been putting that same amount of effort into stuff that matters, like homework. You know? So I didn't want to just throw up a brick wall for everything. I just tried to be upfront and say no when I needed to say no. And when it was time for them to be able to get phones and access to the Internet and stuff like that, I gave it to them with a couple caveats and just sort of eased them into it. STEVE: Well, and if nothing else, your being that strong, even if they did go to a friend's house in order to have their curiosity satisfied, the fact that you made such an issue of it that, I mean, someone who their dad was saying, this is really bad, even that would tend to create a barrier of caution that would serve to help them. ANT: It planted a seed because there were instances where they would come back to the house after visiting, you know, so-and-so relative, you know, this cousin, this friend or what have you, and they would pretty much tell me everything they saw the other kids doing. And they were not comfortable. You know? And I had to tell them, okay, yes, thank you for letting me know. It's all right that they did this and that or what have you. But that's in their own home, under their own parents' jurisdiction, not my jurisdiction. I appreciate you honoring our relationship and our agreement. STEVE: Well, it's very cool, too, that those lines of communication are kept open because that's the crucial thing you want is for them not to be sneaking around and keeping secrets and thinking that they can't share with their parents what's going on. ANT: And don't get me wrong. My kids are known as "hard heads," I call them #hardheads. STEVE: Wait. Your kids? I can't imagine that. ANT: There's a reason for that. 
But at the same time I do appreciate them being able to come to me and their mother when it comes to stuff about technology and just information security and so forth, or social media. And we still have those talks. And like I said, they're a lot older now. And every now and then we still have some conversations about things we see on TikTok or things we see on Instagram or what have you because it's not... STEVE: It's a crazy world out there. ANT: It's not just the fact that something could be pornographic or whatever. Some things are just flat out lies sold as truth. STEVE: Right. ANT: And they need to say, hey, I need to check my sources here because that doesn't look right, you know. But, yeah, kudos to you, and applause to you, sir. STEVE: So Alphageek asked, he said: "I'm interested in your take of this from a security standpoint." And Ant, this is about cameras so you're going to like this, too. From a security standpoint, he said: "Thanks for the years of helping keep my brain sharp. As an EE, your podcast has helped me look smart at important times." So, great. So what Alphageek was curious about, he provided me a link, was an interesting solution to the problem with deep fake photos, talking about things being fake on the Internet just now, Ant. The IEEE Spectrum Magazine carried an interesting story about a new Leica camera that binds authenticating metadata into the photos it takes, then digitally signs them as they are taken. And there's more. So here's what the article explained. Article said: "Is that photo real? There's a new way to answer that question. Leica's M11-P, announced in late October, is the world's first camera with support for content credentials, an encryption technology that protects the authenticity of photos taken by the camera. The metadata system can track a photo from shutter snap to publication, logging every change made along the way. 
"Award-winning photographer David Butow said: 'In the last few years it's become easier to manipulate pictures digitally. Photographers can do it; and when the photos are out on the web, other people can do it. I think that puts in jeopardy the strength of photography, the sense that it's a true representation of what someone saw.' "In November of 2019, Adobe, The New York Times, and Twitter partnered to solve this problem by founding the Content Authority Initiative (CAI). Twitter left the CAI after Elon Musk purchased the company. But CAI now boasts over 200 partners, gave itself the difficult task of finding a 'long-term holistic solution' for verifying the authenticity of photos. In 2021 it joined with another initiative called Project Origin to form the Coalition for Content Provenance and Authenticity (C2PA). "Leica's M11-P is the first hardware embodiment of its solution. The camera has a toggle to flip on content credentials, which is based on the C2PA's open technical standard. The M11-P then embeds identifying metadata such as the camera, lens, date, time, and location in an encrypted C2PA 'manifest.' The M11-P digitally signs the manifest with a secure chipset that has a stored private key. The manifest is attached to the image and can be edited only by C2PA-compatible software which, in turn, leaves its own signature in the manifest. Once published, the image can display a small interactive icon that reveals details about the photo, including the device used to take the photo, the programs used to edit it, and whether the image is wholly or partially AI-generated. "It's still early days for content credentials, however, so support is slim. Adobe's software is the only popular image-editing suite to support the standard so far. The presentation of the data is also an issue. The interactive icon isn't visible unless an application or program is programmed to present it. 
"David Butow said: 'The way this technology is integrated in Photoshop and Lightroom, which is what I use, is still a bit beta-ish.' David used the Leica M11-P for several weeks prior to its release, but he says these early problems are countered by one key win: The standard is easy for photographers to use. 'You shoot normally; right? There's nothing that you see, nothing that you're aware of when you're taking the picture.' "The Leica M11-P's support for content credentials wasn't the only reason it made headlines. It arrived with an intimidating price tag of $9,195." ANT: Oh, that's just typical for Leica. STEVE: The article said: "That's a high price for authenticity, but Leica says" - exactly as you said, Ant - "says the camera's cost has more to do with Leica's heritage. Kiran Karnani, Leica's vice president of marketing, said: 'If you look at the price points for our M series cameras, there's absolutely no added cost to have the content credentials feature in the M11-P.' And the M11-P is just the tip of the iceberg. "Canon and Nikon already have prototype cameras with content credentialing support. Smartphones will also get in on the action. Truepic, a startup that builds 'authenticity infrastructure,' has partnered with Qualcomm to make Qualcomm's Snapdragon 8 Gen 3 chips support content credentials. Those chips will power flagship Android smartphones next year. "No news organization currently requires photographers to use content credentials, but the C2PA standard's influence is beginning to be felt. Karnani points out that The New York Times and BBC are members of the CAI, as are The Wall Street Journal, The Washington Post, the Associated Press, Reuters, and Gannett. Karnani notes that 'Adoption is certainly a goal.'" So back to Alphageek's question. This all sounds great on the surface. A digital camera contains a digital representation of an image which can be digitally signed by the camera itself. 
The way this would be done is that metadata would be added to the image. Then a cryptographic hash would be taken of the combined file. That hash would then be encrypted using the camera's private key. Then at a later time it would be possible to verify that not a single pixel of the original image had been tampered with by rehashing the image, and using Leica's published public key to decrypt and verify that the hash bound to the image matches the one that was just made. But from everything we know of crypto there would appear to be one glaring problem with this entire concept. ANT: Do tell. STEVE: A web server's private key is secure only because no unauthorized people are able to obtain its key. If that key is in an HSM, a dedicated hardware security module, then that key won't even exist in the machine's memory, making it even less accessible. Although asymmetric encryption offers many cool features and powers, it does still rely upon a secret being kept. Its private key must remain private. And that's the Achilles heel that I fear any digitally signing camera will face. A web server's private keys are safe only because no one has unauthorized physical access to its hardware. If you can get to the hardware, all bets are off. Just ask the folks that thought that encrypting DVD discs was a great idea. They thought, hey, no problem. We'll just embed all of the decryption keys into every consumer DVD player so that they'll be able to decrypt the discs. Right. Back in the day, my copy of "DVD Decrypter" was one of my favorite tools. ANT: I have no comment on that. STEVE: It was, oh, it was and still is entirely legal to decrypt one's own DVDs, and I appreciated the freedom that that afforded. In order for this Leica, or any other camera, to digitally sign anything, it must carry a secret. It's the camera's secret that makes its signature mean something. But the camera is obviously not locked up in some data center somewhere. Just like a DVD player, it must be out in the open to do its job.
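[The hash-then-sign flow just described can be sketched roughly as follows. This is a toy illustration, not the actual C2PA manifest format; and because Python's standard library has no asymmetric signing, an HMAC over a stand-in "camera secret" substitutes here for the real private-key signature:]

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the private key stored in the camera's secure chipset.
CAMERA_SECRET = b"stand-in for the camera's private signing key"

def sign_photo(image: bytes, metadata: dict) -> dict:
    """Bind metadata to the exact pixels, then sign the combined manifest."""
    manifest = dict(metadata, image_sha256=hashlib.sha256(image).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    # A real camera signs with its private key; HMAC is a symmetric stand-in.
    manifest["signature"] = hmac.new(CAMERA_SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_photo(image: bytes, manifest: dict) -> bool:
    """Rehash the image and recheck the signature over the manifest."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    if hashlib.sha256(image).hexdigest() != claimed.get("image_sha256"):
        return False  # a single changed pixel changes the hash
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CAMERA_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

[Flipping a single byte of the image, or editing any manifest field, breaks verification - that's what makes the scheme tamper-evident, and it's also why everything hinges on the signing secret never leaking out of the camera.]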
And everything history has taught us is that these secrets cannot be kept, not under these conditions. And if that's true, it creates another new problem that we never had before, digitally verified deep fakes. Once a camera's secret signing key escapes, deep fakes will be signed and digitally authenticated, making the problem worse than it was before. So it'll be interesting to see how this all turns out. Mark me down as skeptical and a bit worried. ANT: Okay. So yes, this story got my attention. And I spoke about this back in October on Tech News Weekly with our hosts Mr. Mikah Sargent and Mr. Jason Howell, October 12th, 2023. This was right after the Adobe Max Creative Conference. And a big discussion was of course AI. But it was also about content authenticity. Adobe, as the emailer mentioned, has been working with the C2PA for quite a while now, a handful of years now. And so we've had some developments on this. And it was all good news. But just as you say, there are a couple caveats. First of all, the presentation. I could shoot something with my camera that is certified and has all of the proper encryption to put that badge on the image. But what if I put that on Instagram? Instagram's not going to show that badge. It's not going to mean a hill of beans. You know. And then I never thought about the aspect that you brought up of that key being out in the public and available to anyone. They could take it and make some totally ridiculous images that are fake. But they're properly signed. STEVE: Right. ANT: So they're still considered official. STEVE: Right. ANT: So, yeah, this is - I'm glad that this is in place. I'm glad that there's some headway being made here with the different partners with the C2PA because it's Microsoft, I'm looking at the page now, Microsoft, BBC, Intel, Sony. So people are talking about this, and they have good intentions, especially right now with this being November here in the U.S. and the official start of the U.S. 
election season. I think this is really important to figure out a way to get our hands on some of this content that's being put out there as mis- and disinformation. But again, this is very early. It's not perfect. But it's a start. STEVE: So Andrew Drapper said: "If the EU demand their certificates are in our root store, could we not just remove them or have a script or extension that does?" Okay. So great question. Many questions remain about this whole unresolved mess. For example, would the EU's certs be countersigning traditional certificate authority certs? Based on the behavior that the EU wants, that could be a requirement. But if not, then removing those trust roots would prevent access to those EU web services that had only been signed by the EU certificates and had not been countersigned. Would these EU certs be trusted all by themselves, if they're standing alone? If not, then we really don't have anything to worry about. So long as a traditional CA also needs to sign a website's certificate, the EU's signing would simply be adding additional information. But if that were the case, no one would be up in arms over this, and everyone is. So it sounds like the EU certs are going to be able to stand alone, and that's a problem. So it appears that the EU wants their certs that way, to be able to stand alone. Would these EU certs carry some distinguishing mark that would allow an automated cert sweeper to uniquely identify and remove them? I suspect that the CA/Browser Forum would require some form of clear designation, and the good news is that certs have all manner of means for carrying such markings. This would make a cert cleaner entirely safe. It would be able to identify those certs which were based on this EU eIDAS law. One potential problem is that users of affected machines, such as in the enterprise, may have limited access to their machine's certificate root stores.
But the biggest problem is that while those listening to this podcast and other in-the-know techies might know enough to clean their root stores, most of the world would not. So, yeah, even if some of us were to keep our machines clean, that doesn't help everyone else in the world. ANT: Right. STEVE: Mike asked: "Hey, Steve. What company do you recommend for a domain registrar? I currently have all my domains with Google Domains, and they are moving to Squarespace. I only need a place to store the domains, as my name servers are with various other providers. Thanks, Mike." So without any hesitation I would and do always recommend Hover, and I cannot imagine why I would ever move. I did move once, and that was away from Network Solutions. They were the original primary registrar of domains for the Internet, but let's just say they did not age well. I became so tired of Network Solutions' constant upselling attempts. When I did anything there, I would be forced to decline one "special limited time offer" after another, endlessly, just to renew a domain. I'm inherently loyal, so I did stick with them as long as I could. But finally it was too much. So I went looking for an alternative. A good ultra-techie friend of mine, Mark Thompson at AnalogX, has all of his domains with GoDaddy. But GoDaddy's style doesn't appeal to me either. They just don't seem serious. And the one thing you want in a domain registrar is seriousness. They've also had security problems in the past with some of their services, though I don't think with their domain registrar business. By comparison, Hover is just a clean and simple domain registrar. They do offer some other services, but they are never pushed. For a long while they were advertisers here on TWiT, but it was one of those situations where I had switched to Hover and was already singing their praises every chance I got long before they began advertising here. And I still am, for the same reason, singing their praises.
So anyone who's looking for a clean and simple, no-frills, no annoying upselling, domain registrar I think will find that in Hover. And I know that Leo feels the same way. ANT: As do I. STEVE: Cool. And our last question from Glenn F. He said: "Hi, Steve. I was just listening to SN-949 and wanted to let you know you may have been a bit overly charitable when describing Apple's motives around RCS. Looks to me like Google got creative and used the EU as a cudgel to 'encourage' Apple to adopt RCS. From what I can tell, Apple's RCS announcement appears to coincide with the deadline for their response to the EU. Love the show and just recently joined Club TWiT due to all the great content. Glenn." ANT: Woohoo. STEVE: Okay. So, yeah. Glenn's tweet linked to an interesting article at The Verge. The Verge's headline reads: "Google turns to regulators to make Apple open up iMessage." And their tagline is: "In addition to shaming Apple for not supporting RCS, the search giant has reportedly co-signed a letter arguing that iMessage should be designated a core platform service under the EU's Digital Markets Act." Okay. So I read the entire piece, and I agree with Glenn's assessment. What Google really appears to want is to force Apple to open iMessage, since today's green bubbles are lame by comparison to the blue ones. I have a text messaging group where one of its five members is an Android user. As a consequence, the entire group is forced out of iMessage into SMS and thus reduced to this lowest common denominator due to the presence of this one individual in our group who is not an iPhone user. So if Apple were to upgrade the rest of us iPhone people to RCS, then the green bubbles would be at parity with iMessage blue bubbles. But as for opening up iMessage? From a technical standpoint I cannot see how that's really possible due to the closed security ecosystem iMessage lives within. I bet that's the last thing that Apple would consider doing.
But the addition of RCS does seem like a clever countermeasure designed to take the pressure off Apple in this regard. And I think that Google should be happy with it. I know I would be. ANT: Again, what more do they need to do? STEVE: Right, right. Basically giving iMessage features to the Android user community and allowing that to cross over into the iOS ecosystem. I think that sounds right. ANT: Yeah. STEVE: And I'll just wrap up by saying, reminding everyone the title of today's podcast is "Leo Turns 67." And as I mentioned, last week's podcast was titled "Ethernet Turned 50." So I decided to go with "Leo Turns 67" since that's happening tomorrow, on November 29th. And even though he's currently sequestered in some far off cave somewhere with no Internet and no other technology, doubtless contemplating the nature of life, the universe, and everything, you might want to send him birthday wishes, which I'm sure he'll discover once he emerges and rejoins the rest of humanity, much as he'll be rejoining us this time next week. And Ant, thank you for being my host this week. Bye. Copyright (c) 2023 by Steve Gibson and Leo Laporte. SOME RIGHTS RESERVED. This work is licensed for the good of the Internet Community under the Creative Commons License v2.5. See the following Web page for details: https://creativecommons.org/licenses/by-nc-sa/2.5/.