GIBSON RESEARCH CORPORATION https://www.GRC.com/ SERIES: Security Now! EPISODE: #1003 DATE: December 3, 2024 TITLE: A Light-Day Away HOSTS: Steve Gibson & Leo Laporte SOURCE: https://media.grc.com/sn/sn-1003.mp3 ARCHIVE: https://www.grc.com/securitynow.htm DESCRIPTION: Microsoft makes very clear what data they are NOT using to train their AI models. What's a "Digital Epileptic Seizure"? What induces them? And why you don't want your self-driving car to have one! A public plea for help in the form of volunteer bridge servers from the Tor Network. If you are one of 140 million Zello users, heed their notice to change your password. The U.S. Federal Trade Commission opens a broad antitrust investigation into whether Microsoft has been naughty or nice. A new form of Android smartphone "scareware" simulates a seriously malfunctioning, cracked, and broken screen. It's almost certainly positively and completely safe to leave WireGuard open and listening for incoming connections. Is "almost certainly positively and completely safe" safe enough? If the Internet fills with AI output, what happens when AI starts training on that? It seems we know. Last week, Australia passed the social media age restriction law. Now what? And finally, not only is Voyager 1 nearly an entire light-day away, it's beginning to have some harder to remotely repair problems. How much longer will we be in touch with it? SHOW TEASE: It's time for Security Now!. Steve Gibson is here. We're going to respond, or at least get Microsoft's response to Steve's episode last week. They say, no, we don't use your data to train AI. What is a digital epileptic seizure? And why does your self-driving vehicle have fits when it approaches an emergency vehicle? Do you use Zello? Time to change the password. And then we're going to talk a little more about our favorite friend, the farthest object humanity has ever put in space, Voyager 1, now nearly a light-day away. It's going to be another great Security Now! 
episode, coming up next. LEO LAPORTE: This is Security Now! with Steve Gibson, Episode 1003, recorded Tuesday, December 3rd, 2024: A Light-Day Away. It's time for Security Now!, the show where we cover your security, your privacy online, how things work, what's a great book to read when you're trying to get some sleep and you don't want to and, I don't know, all sorts of stuff. What's a good show to watch? What's a good vitamin to take? Steve Gibson is a polymath. He knows everything and tells all on the show. Hello, Steve. STEVE GIBSON: Great to be with you again for Episode 1003. LEO: Yikes. STEVE: And still I look at these four digits, and I think, wow, okay. LEO: We're getting used to it now, though. STEVE: It really does feel like somehow a lot more than just three digits, which... LEO: It is a lot, yeah. STEVE: It was a cliffhanger there for a while. But we made it over the cliff, and we're still flapping. We've got a bunch of fun stuff to talk about. Microsoft makes very clear what data they are NOT going to be using to train their AI models, so we're revisiting that topic that we touched on last week. Also, what's a "digital epileptic seizure," what induces them, and why you don't want your self-driving car to have one. LEO: Oh, no. STEVE: Yes. We've got a public plea for help in the form of volunteer bridge servers being asked for by the Tor Network, that we're going to talk on and explain. Also, if you're one of 140 million Zello users, you should heed their notice to change your password. LEO: Zello or Zelle? Zelle? STEVE: That's Zello. I had to double-check that, too. And in fact some of the reporting, I think the reporters were so used to typing Zelle, Z-E-L-L-E, that some of the text was mixed up. So it's Zello, which it's a push-to-talk app for smartphones. LEO: Oh, okay. They have that many users, 140 million users? STEVE: 140 million. LEO: Holy cow. STEVE: Nobody wants to dial a number. 
So yeah, apparently you just press the screen, and you get to talk to your mom, I don't know. Anyway, the U.S. Federal Trade Commission opens a broad antitrust investigation into whether Microsoft has been naughty or nice. A new form of Android smartphone "scareware," which is really sort of interesting at first glance, it simulates a seriously malfunctioning, cracked, and broken screen, and scares people into, like... LEO: Oh, no. STEVE: Yeah, getting tech support. LEO: That's hysterical. STEVE: It really is. And when you see it, I've got a picture of it in the show notes, it's like, whoa, okay, that would freak me out. Anyway, it's almost certainly positively and completely safe to leave WireGuard open and listening for incoming connections. LEO: Almost. STEVE: Is "almost certainly positively and completely safe" safe enough for you? We're going to look at that. If the Internet fills with AI output, what happens when AI starts training on that? It seems that we know, that some experiments have been done, and it's not looking good. LEO: It's not good, yeah. STEVE: We're going to lose some very popular dog breeds, among other things. Last week, Australia passed the social media age restriction law. Now what? And finally, we're going to talk about, once again, one of our sort of favorite side topics, Voyager 1. Not only is it now nearly an entire light-day away - think about that, it takes a day to... LEO: That's amazing. STEVE: Like if that's how far out it is, it is beginning to have some harder to remotely repair problems. There was so much interesting science and engineering shared in the last week that I thought, okay, this is just - it's just cool stuff. I mean, it's like, you know, we're beaming up, and we're doing warp drive and all this crap that we can't - phaser beams, we don't have any of that. What we actually have is a shockingly well-designed piece of hardware from the '70s... LEO: Seventies. STEVE: ...that is still going. 
So, and of course, we do have a great Picture of the Week. I've already had some feedback from people. LEO: I haven't looked. STEVE: And, yeah. And so I think a great show for everybody, probably worth your time while you're mowing your lawn or commuting to work or walking your dog, whatever you're doing. LEO: I always, every time you do a Voyager segment, I always call it Vger. And I should clarify that after the first one, I looked it up, and the Vger from Star Trek is actually supposedly... STEVE: Oh, the worst movie ever made. LEO: Is that the one where Spock dies? I can't remember. STEVE: No, no. That was a good one. LEO: That was the good one. STEVE: No, I think that might have been "The Wrath of Khan." LEO: Vger was the first one, maybe. STEVE: Yeah, oh, it was the first, and they had bad uniforms, and it's like, what happened? You know? LEO: I remember watching, though, and being so thrilled when that elevator opens, and there are Kirk and Spock and McCoy. And it was just like, oh. STEVE: They're back. LEO: They're back. STEVE: Yeah. LEO: Anyway, Vger from that movie is theoretically Voyager 6. There is no Voyager 6. So the Voyagers we're talking about, 1 and 2, are not Vger, just... STEVE: And I didn't say this, and I may forget, so I'll say it now. One does need to wonder, like, why they're expending all this effort. I mean, it's done its job. I mean... LEO: More than. STEVE: It is outside the heliopause. We are getting info, we're getting science data we've never had before. LEO: Yeah. STEVE: But at this point it's clearly just can - let's see how... LEO: It's a flex. STEVE: ...what we can do. LEO: Yeah. STEVE: Exactly. What, you know, can we keep this little sucker aimed at us? LEO: They can. That's what's amazing. STEVE: Yeah. Yeah. Wait till you hear what they're... LEO: That'll be fun. STEVE: Wait till you hear what's happening now. LEO: Oh, I can't wait. All right. I'm ready for the Picture of the Week, Mr. Gibson. 
STEVE: So this one, I gave it the caption - and not for the first time. We've had a few other ironic pictures. But I called this one "Irony Defined." LEO: All right, I'm scrolling up. That's got to be - that can't be - that's hysterical. STEVE: It is just too fun. It is too fun. LEO: And read it for us, for those not watching the video. That's hysterical. STEVE: Right. And so what I clipped out of the photo, one of our listeners sent me what looks like his camera screen. LEO: So this is real. STEVE: I think it's real. LEO: Wow. STEVE: And so, yeah, so what we have, we're looking through a glass door into a region behind which we learn is, because of the headline on the sign that's been posted on this glass door, this is the Mall Maintenance Shop. So it's some sort of like a large mall. And it looks authentic. You can see a very long ladder, an extension ladder, against the far wall. There's some coiled up stuff. In the foreground looks like an industrial, you know, tile cleaner kind of thing. So, I mean, this looks like the real deal. This is clearly a mall, you know, like some large retail mall maintenance shop. And the sign brags about their capabilities, saying "We can repair anything." But then it says, in parentheses below that, "Please knock hard on the door. The bell doesn't work." LEO: Okay. STEVE: So they haven't... LEO: They probably just have a good sense of humor. STEVE: We haven't gotten around to fixing the bell yet. Otherwise, other than our own bell, you know, if you've got something broken, we'll fix it. So, yeah. And it would be really fun, I agree with you, Leo, to learn the actual back story here, you know. It may just be a crusty old guy who's got a great sense of humor, as you say. But I have a feeling that the bell doesn't work. LEO: No, I think it's true in that respect. Maybe there isn't even a bell, you know. STEVE: Okay. 
So Microsoft felt the need to clarify what had become the widespread misapprehension that they would be training their AI models against the private and personal data of their Office product users. And of course we looked at the speculation behind that last week. So the day after we did so, last Wednesday, BleepingComputer did a great job of summing up the situation. So I've edited what they said, but you'll get the gist. They wrote: "Microsoft has denied claims that it uses Microsoft 365 apps - including Word, Excel, and PowerPoint - to collect data to train the company's artificial intelligence (AI) models. This comes after a Tumblr blog post spread on social media, claiming that Redmond used their Connected Experiences feature to scrape customers' Word and Excel data for AI training." And by the way, Paul was correct on Windows Weekly the day after our last podcast, saying that nowhere did any of Microsoft's own documentation ever say that. It didn't use the word "AI training." So that was a presumption. "A Microsoft spokesperson told BleepingComputer: 'Microsoft does not use customer data from Microsoft 365 consumer and commercial applications'" - now, I should just mention I wish that the person hadn't put that caveat in. They should have just said Microsoft does not use customer data from Microsoft 365 applications. Why say "consumer and commercial applications"? You know, it's like a little - are they hedging? I don't know. Anyway, "'to train large language models. Additionally, the Connected Services setting has no connection to how Microsoft trains large language models.'" Okay, so that's good. So the company also told BleepingComputer that this optional setting has been on by default since it was first made available in April 2019. So five years ago, always been on. BleepingComputer was also told: "The Connected Experiences feature enables features like co-authoring, real-time grammar suggestions, and web-based resources." 
And Leo, this is precisely the assumption you were making also last week. They said: "These features are on by default because they're features people naturally expect in a cloud-connected productivity tool. However, customers always have control," they wrote, "and can adjust their Connected Experiences settings at any time. "So as Microsoft explains on its support website, the feature is used to, first, provide design recommendations, editing suggestions, or data insights based on the Office content, through features like PowerPoint Designer or Translator; and it also downloads online content templates, images, 3D models, videos, and reference materials, including but not limited to Office templates or PowerPoint QuickStarter. To toggle this feature off, Microsoft 365 users have to open their Office apps (like Word or Excel) and choose whether to enable or disable experiences that download online content or analyze their content under 'Connected experiences' after going to the File > Account > Account Privacy > Manage Settings menu." So as we said last week. So, quoting them: "The Connected Experiences setting enables cloud-backed features designed to increase your productivity in the Microsoft 365 apps like suggesting relevant information and images from the web, real-time co-authoring and cloud storage, and tools like Editor in Word that provide spelling and grammar suggestions. Microsoft has been using their AI in Microsoft 365 for years" - now, maybe that's where some of this confusion comes in because they're calling Spellcheck "AI." You know, this is them saying Microsoft has been using AI in Microsoft 365 for years to enhance productivity and creativity through features like Designer in PowerPoint, which helps create visually compelling slides, and Editor in Word, which provides grammar and writing suggestions. You know, that's not today's definition of AI. 
But they then said: "These features do not rely on generative AI or large language models, but rather use simpler machine learning algorithms." Microsoft added that the setting has been available since April 2019, with enterprise admins having the option to choose if connected experiences are available to users within their organizations using multiple policy settings designed to manage privacy controls for Microsoft 365 Apps and Office for Mac, iOS, and Android devices. So, okay. We're certainly, all of us, I'm sure, glad for the clarification. Whatever Microsoft is doing exactly, and unless anything has changed recently, it's been doing whatever it is for the past five years. It's always been on by default, you know, like grammar and spelling suggestions, and anyone who isn't comfortable with this is free to turn it off if they wish. If nothing else, it seems very clear that this has nothing whatsoever to do with Copilot+ and any of the recent concerns over Microsoft's AI being used to otherwise enhance their users' experiences. And it's one thing to be mistrustful, and another thing to accuse them wrongly. We can certainly have one without the other. Given what I've witnessed firsthand of what they've done to Windows' Start menu, tray, and Edge - none of which enhances my own use of Windows - I'm obviously not a big fan of the direction they're taking their consumer desktop. Nevertheless, make no mistake, I love Windows. So I got some feedback from people saying, wow, you know, if you're so unhappy with Microsoft and Windows, why are you still using it? I love it. I mean, for my purposes it's far better than any alternative. And I'm hopeful that when I set up my next Windows desktop, my Microsoft Developer access to the Enterprise edition of Windows 10 will provide me with the cleaner experience that I look for in what I consider to be a tool rather than a toy. 
You know, I just don't have any interest in Windows being a toy, offering me Candy Crush Soda Saga and Xbox features on my Start Menu, in addition to everything else they have done. So anyway, you know, Microsoft is obviously very sensitive to all of this after the pushback and concern that the industry showed with their stumbling rollout of what they plan to do with Recall in Copilot+. So, you know, they're going to great pains to calm people. And there's every reason to believe this is just grammar and spelling checking. It is worth noting that in BleepingComputer's coverage they don't talk about the fact that Microsoft does say that, whatever it is they're doing with Connected Experiences, there are some things for which they're collecting data over the lifetime of the user's account. So maybe that's just them learning what spelling mistakes people always make, or they're, like, learning the grammar of the user and getting better at helping them to correct themselves. You know, that's what I presume. So, but we did learn last week, from their own statements, that there is something that continues to exist at their end in the cloud on a per-user account basis, presumably helping it to do a better job with those things that it's been doing for the last five years. And unfortunately they call that "AI," which, you know, nobody else bothers to. Okay. So I was put onto some new research from our friends at Ben-Gurion University of the Negev and Fujitsu - research conducted jointly by both groups - by Ben Nassi, one of the researchers, who is also one of our listeners. The title of their 21-page paper is "Securing the Perception of Advanced Driving Assistance Systems Against Digital Epileptic Seizures Resulting from Emergency Vehicle Lighting." Okay, now, I suppose it's unavoidable to anthropomorphize driving assistance systems. But somehow calling this problem "digital epileptic seizures" rubs me the wrong way. 
You know, the apparent overlap in this behavior is the flashing of lights, which as we know can trigger actual epileptic seizures in humans. So they're saying that auto-driving systems don't like flashing lights either. Anyway, I'm not sure what bothers me about it, but something does. In any event, it turns out that driving assistance systems do have a problem with the flashing lights used by emergency vehicles. WIRED has a nice summary of the very good research this group has just conducted and published. Under WIRED's headline "Emergency Vehicle Lights Can Screw Up a Car's Automated Driving System," with the subhead "Newly published research finds that the flashing lights on police cruisers and ambulances can cause," and here we go, "'digital epileptic seizures' in image-based automated driving systems, potentially risking wrecks." And actually, apparently there have been 16 such instances seen so far. Anyway, we'll get to that. WIRED wrote: "Carmakers say their increasingly sophisticated automated driving systems make driving safer and less stressful by leaving some of the hard work of knowing when a crash is about to happen, and avoiding it, to the machines. But new research suggests some of these systems might do the virtual opposite at the worst possible moment. "A new paper from researchers at Ben-Gurion University of the Negev and the Japanese technology firm Fujitsu demonstrates that when some camera-based automated driving systems are exposed to the flashing lights of emergency vehicles, they can no longer confidently identify objects on the road. The researchers call the phenomenon a 'digital epileptic seizure,' epilepticar for short, where the systems, trained by artificial intelligence to distinguish between images of different road objects, fluctuate in effectiveness in time with the emergency lights' flashes. The effect is especially apparent in darkness, the researchers say." 
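That "fluctuating in effectiveness in time with the flashes" pattern can be pictured with a toy model. To be clear, this is purely illustrative and is not the researchers' code: it just imagines a detector whose confidence depends on frame exposure, fed alternating over- and under-exposed frames from a strobing emergency light.

```python
# Toy illustration (NOT the researchers' code): a hypothetical detector
# whose confidence peaks at nominal exposure (0.5) and falls off as
# frames become over- or under-exposed. A strobing emergency light at
# night alternates the scene between those two extremes, so detection
# confidence oscillates in lockstep with the flashes -- the "digital
# epileptic seizure" pattern the paper describes.

def detection_confidence(exposure):
    """Confidence is highest at nominal exposure 0.5, zero at the extremes."""
    return max(0.0, 1.0 - 2.0 * abs(exposure - 0.5))

steady = [0.5] * 8                       # ordinary night scene, no strobe
strobed = [0.9 if i % 2 == 0 else 0.1    # flasher on / flasher off frames
           for i in range(8)]

steady_conf = [detection_confidence(e) for e in steady]
strobed_conf = [detection_confidence(e) for e in strobed]

print(steady_conf)    # remains high across all frames
print(strobed_conf)   # dips on every frame, in time with the flashes
```

A real image-based detector is of course vastly more complex, but the shape of the failure is the same: the input oscillates, so the output confidence oscillates with it.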
And that kind of makes sense, you know, much greater contrast there. "Emergency lights, in other words," writes WIRED, "could make automated driving systems less sure that the car-shaped thing in front of them is actually a car. The researchers write that the flaw 'poses a significant risk' because it could potentially cause vehicles with automated driving systems enabled to 'crash near emergency vehicles' and 'be exploited by adversaries to cause such accidents.'" LEO: You know, it's interesting because a lot of Teslas have crashed into emergency vehicles. STEVE: Exactly. LEO: And maybe we now know why. STEVE: Exactly. They said: "While the findings are alarming, this new research comes with several caveats. For one thing, the researchers were unable to test their theories on any specific driving systems, such as Tesla's famous Autopilot. Instead, they ran their tests using five off-the-shelf automated driving systems embedded in dash cams purchased off of Amazon." And WIRED said: "(These products are marketed as including some collision detection features, but for this research, they functioned as cameras.) They then ran the images captured on those systems through four open source object detectors, which are trained using images to distinguish between different objects. The researchers are not sure whether any automakers use the object detectors tested in their paper. It could be that most systems are already hardened against flashing light vulnerabilities." Okay, now, to me, while this might appear to render the value of this research more questionable, there was at least some good reason to wonder, and the researchers' findings bore this out. 
WIRED says: "The research was inspired" - to your point, Leo - "by reports that Teslas using the electric carmaker's advanced driver assistance feature, Autopilot, collided with some 16 stationary emergency vehicles between 2018 and 2021, says Ben Nassi, a cybersecurity and machine learning researcher at Ben-Gurion University who worked on the paper. 'It was pretty clear to us from the beginning that the crashes might be related to the lighting of the emergency flashers. Ambulances, police cars, and fire trucks are different shapes and sizes, so it's not the type of vehicle that causes this behavior.'" In other words, these guys started by probably correctly inferring that, okay, what is it that is unique about these emergency vehicles that Teslas keep crashing into. Well, they've got flashing lights. "So a three-year investigation," writes WIRED, "by the U.S. National Highway Traffic Safety Administration into the Tesla-emergency vehicle collisions eventually led to a sweeping recall of Tesla Autopilot software, which is designed to perform some driving tasks like steering, accelerating, braking, and changing lanes on certain kinds of roads without a driver's help. The agency concluded that the system inadequately ensured drivers paid attention and were in control of their vehicles while the system was engaged." They said: "Other automakers' advanced driving assistance packages, including General Motors' Super Cruise and Ford's BlueCruise, also perform some driving tasks, but mandate that drivers pay attention behind the wheel. Unlike Autopilot, these systems work only in areas that have been mapped. "In a written statement sent in response to WIRED's questions, Lucia Sanchez, a spokesperson for the NHTSA, acknowledged that emergency flashing lights may play a role. 
She said: 'We are aware of some advanced driving assistance systems that have not responded appropriately when emergency flashing lights were present in the scene of the driving path under certain circumstances.' "Tesla, which disbanded its public relations team in 2021, did not respond to WIRED's request for comment. The camera systems the researchers used in their tests were manufactured by HP, Pelsee, Azdome, Imagebon, and Rexing; none of those companies responded to WIRED's requests for comment. "Although the NHTSA acknowledges issues in 'some advanced driver assistance systems,' the researchers are clear: They're not sure what this observed emergency light effect has to do with Tesla's Autopilot troubles. Ben Nassi said: 'I do not claim that I know why Teslas crash into emergency vehicles. I do not know even if this is still a vulnerability.' "The researchers' experiments were also concerned solely with image-based object detection. Many automakers use other sensors, including radar and lidar, to help detect obstacles in the road." LEO: Not Elon. STEVE: "A smaller crop of tech developers, Tesla among them, argue that image-based systems augmented with sophisticated artificial intelligence training can enable not only driver assistance systems, but also" - here we go - "completely autonomous vehicles." LEO: Oh, boy. STEVE: Uh-huh. "Last month, Tesla CEO Elon Musk said the automaker's vision-based system would enable self-driving cars next year." LEO: He's been saying that for 10 years. STEVE: 2025, baby, yeah. LEO: It's been next year for at least six years. STEVE: That's right. That's right. LEO: Yeah. STEVE: "Indeed," they wrote, "how a system might react to flashing lights depends on how individual automakers design their automated driving systems. Some may choose to 'tune' their technology to react to things it's not entirely certain are actually obstacles. 
In the extreme, that choice could lead to 'false positives,' where a car might hard brake, for example, in response to a toddler-shaped cardboard box. Others may tune their tech to react only when it's very confident that what it's seeing is an obstacle. On the other side of the extreme, that choice could lead to a car failing to brake to avoid a collision with another vehicle because it misses that this is another vehicle entirely. "The Ben-Gurion University and Fujitsu researchers did come up with a software fix to the emergency flasher issue. It's designed to avoid the 'seizure' issue by being specifically trained to identify vehicles with emergency flashing lights. The researchers say it improves object detectors' accuracy. "Earlence Fernandes, an assistant professor of computer science and engineering at University of California, San Diego, who was not involved in the research, said it appeared 'sound.' He said: 'Just like a human can get temporarily blinded by emergency flashers, a camera operating inside an advanced driver assistance system could get blinded temporarily.' "For researcher Bryan Reimer, who studies vehicle automation and safety at the MIT AgeLab, the paper points to larger questions about the limitations of AI-based driving systems. Automakers need 'repeatable, robust validation' to uncover blind spots" - so to speak - "like susceptibility to emergency lights, he says. He worries some automakers are 'moving technology faster than they can test it.'" Okay. So my own take is that this sort of research conducted by independent researchers is vitally important. It needs to be done. It's obvious that the various car manufacturers are holding their cards - and their cars - very close to their vests. They understandably consider their future auto-driving technology to be ultra proprietary, because they want the best, and no one else's business. 
Yet flesh-and-blood human beings and pets are moving within the same space as these autonomous high-speed rolling robots. It's a recipe for disaster, and this has the feeling of being driven by the same sort of gold rush mentality as the push for Artificial General Intelligence. So the headlines that these researchers have generated will doubtless, if nothing else, induce all of the developers of similar self-driving technology that actually is, you know, being fielded, to consider and test the effects of bright flashing lights on their driving AI. You know, the lives of people and pets have probably been saved. So hats off to these guys. And they have a - I have links to their 21-page paper where they really dig into the technology. They show the operation of the AI learning neural networks and just how badly they are upset by flashing lights. So this has absolutely been useful for the long-term safety of vehicles. And again, I just think that, because the proprietary interest of automakers is to keep their stuff proprietary, not open, this limits what researchers are able to test. But this kind of research is, I think, vitally important. And Leo, I know that you had a Tesla for quite a while. LEO: Well, we got rid of it. STEVE: Right. LEO: Lisa used to call it "Christine" because it would drive her into things. And then do exactly what they were talking about, which was just stop randomly, you know, screech to a halt, as if it had seen something; you know? STEVE: Wow. LEO: And I think that that's the same, you know, the flipside of that coin; right? STEVE: Yeah. I have a - I finally replaced my 21-year-old BMW, and I have a car that's got sensors, too. And when I'm backing up... LEO: Oh, it beeps like crazy, I bet. STEVE: I have garages in both locations where there's not a lot of space. And it's going dinging and donging and buzzing. And it actually creates anxiety in me. LEO: Yes. STEVE: Because I'm thinking it's seeing something I don't know about. LEO: Yes. 
Lisa says she literally - I have a BMW i5, which is a very highly technically advanced machine, an EV. And she won't - she says, "Back it out of the garage before I get in because it makes me crazy, all the beeps and the boops." And I have a heads-up display, you know, from "2001: A Space Odyssey" showing me the different vectors and... STEVE: Synthetic imaging [crosstalk] generation. LEO: Yeah. And it overlays all sorts of stuff on top of it. But I've learned what to pay attention to and what not. And, you know, you can see why, you know, at least for now, AI is not good enough to replace a human. It's a nice pal. STEVE: Yes. LEO: It's useful. STEVE: And the problem is everybody, you know, there is clearly a rush to the promise of this "Your car can drive itself." LEO: Yeah. STEVE: And, you know, it feels like they're always going to be pushing ahead of the envelope that they should stay in. And it's, you know, research like this that is the only place we get an independent reality check. And so even though they weren't able to actually test on in-field self-driving technology, you know, they were able to look at similar systems and say, uh, guys, there seems to be a problem with flashing lights over here. LEO: Well, I hate to say it, but anytime I hear the words "Elon Musk said," I discount most of what follows because he is - he's a marketer. He's a hype monster. STEVE: We, too, have been trained by Elon Musk to discount... LEO: To discount everything he says. STEVE: You know? He does, at the same time, you know, he captures returning rocket boosters with chopsticks, you know, and foldout legs and, you know. And Starlink is providing Internet connectivity to people... LEO: To me. STEVE: ...who would otherwise never have it. LEO: Yeah, I mean, this is our backup when Comcast goes down, which they do, sadly, a little more often than a podcast network would like. Ubiquiti fails over to the satellite dish on the roof right up here. STEVE: Yeah. 
LEO: And it's, by the way, it's very reliable, even in rain. It's really pretty amazing how well that works. So I'm not saying that Elon's companies don't produce good products. I'm just saying he is, like most marketers, prone to overstating things. STEVE: Okay. We're 35 minutes in. Let's take a break, and then we're going to talk about the Tor Network and how they need you. LEO: They need me to operate a Tor node, I'm guessing, but we'll see. All right. Steve? STEVE: Okay. So last Thursday the Tor Network posted their plea for volunteer help. They wrote: "Recent reports from Tor users in Russia indicate an escalation in online censorship with the goal of blocking access to Tor and other circumvention tools. This new wave includes attempts to block Tor bridges and pluggable transports developed by the Tor Project" - which I'll explain in a second - "removal of circumvention apps from stores, and targeting popular hosting providers, shrinking the space for bypassing censorship. Despite these ongoing actions, Tor remains effective. One alarming trend is the targeted blocking of popular hosting providers by [none other than] Roskomnadzor." LEO: I'll put an echo on it for the next time. STEVE: "As many circumvention tools are using them, this action made some Tor bridges inaccessible to many users in Russia. As Roskomnadzor and the Internet service providers in Russia are increasing their blocking efforts, the need for more WebTunnel bridges has become urgent." Okay. So they say: "Why WebTunnel Bridges?" And I'll explain a little bit about what they are in a second. They wrote: "WebTunnel is a new type of bridge that is particularly effective at flying under a censor's radar. Its design blends itself into other web traffic, allowing a user to hide in plain sight. 
And since its launch earlier this year, we've made sure," they wrote, "to prioritize small download sizes for more convenient distribution and simplified the support of uTLS integration, further mimicking the characteristics of more widespread browsers. This makes WebTunnel safe for general users because it helps conceal the fact that a tool like Tor is being used. "We're calling on the Tor community and the Internet freedom community to help us scale up WebTunnel bridges. If you've ever thought about running a Tor bridge, now is the time. Our goal is to deploy 200 new WebTunnel bridges by the end of this December (2024) to open secure access for users in Russia." LEO: So a bridge is not the same as a Tor node. STEVE: Correct. LEO: Okay. STEVE: Correct. It is literally a bridge to a node. So it is not itself a node. It is an endpoint which - and this is what's so cool - which uses technology, they call it "pluggable transport" technology, to hide the fact that the user connecting to the bridge is using Tor. So anyway, their posting goes on to explain how to set up and run a WebTunnel. Among other things, it can be as straightforward as just hosting a Docker image. So I've got a link to this posting in the show notes: blog.torproject.org/call-for-webtunnel-bridges. Since we haven't looked closely at Tor's WebTunnel technology, I wanted to share a bit about it from their description where it was introduced just last March. It was titled "Hiding in Plain Sight: Introducing WebTunnel." So they wrote: "Today, March 12th, on the World Day Against Cyber Censorship, the Tor Project's Anti-Censorship Team is excited to officially announce the release of WebTunnel, a new type of Tor bridge designed to assist users in heavily censored regions to connect to the Tor network. 
Available now in the stable version of Tor Browser, which as we know is based on Firefox, WebTunnel joined our collection of censorship circumvention tech developed and maintained by The Tor Project. "The development of different types of bridges are crucial for making Tor more resilient against censorship and stay ahead of adversaries in the highly dynamic and ever-changing censorship landscape. This is especially true as we're going through the 2024 global election megacycle. The role of censorship circumvention tech becomes crucial in defending Internet Freedom. "If you've ever considered becoming a Tor bridge operator to help others connect to Tor, now is an excellent time to get started." And this was their posting back in March. "You can find the requirements and instructions for running a WebTunnel bridge in the Tor Community portal." "So what's a WebTunnel, and how does it work? WebTunnel is a censorship-resistant pluggable transport designed to mimic encrypted web traffic (HTTPS). It works by wrapping the payload connection into a WebSocket-like HTTPS connection, appearing to network observers as an ordinary HTTPS connection. So for an onlooker without the knowledge of the hidden path, it just looks like a regular HTTP connection to any web server giving the impression that the user is simply browsing the web. "In fact, WebTunnel is so similar to ordinary web traffic that it can coexist with a website on the same network endpoint, meaning the same domain, IP address, and port. This coexistence allows a standard traffic reverse proxy to forward both ordinary web traffic and WebTunnel to their respective application servers. As a result, when someone attempts to visit the website at the shared network address, they will simply perceive the content of that website address and won't notice the existence of a secret bridge, the WebTunnel." And I'll explain a little bit about that in a second. 
They said: "WebTunnel's approach of mimicking known and typical web traffic makes it effective in scenarios where there's a protocol allow list and a deny-by-default network environment." In other words, Russia can put up a firewall that only allows web traffic, not Tor, not anything unknown. That is, it's a deny-by-default. But after all, we need to let people visit websites; right? This is indistinguishable from someone visiting a website. And in fact the censors can go to the site that they observe Russians going to, and they see a website. Whereas the people using this really cool Tor technology see Tor. They said: "Consider a network traffic censorship mechanism as a coin sorting machine, with coins representing the flowing traffic. Traditionally, such a machine checks if the coin fits a known shape and allows it to pass if it does or discards it if it does not. In the case of fully encrypted, unknown traffic, as demonstrated in the published research 'How the Great Firewall of China Detects and Blocks Fully Encrypted Traffic,' which doesn't conform to any specific shape, it would be subject to censorship, meaning, you know, being discarded. In our coin analogy, not only must the coin not fit the shape of any known blocked protocol, it also needs to fit a recognized allowed shape; otherwise, it would be dropped. WebTunnel traffic resembling HTTPS web traffic, a permitted protocol, will be allowed to pass." So this is so cool. Again, what this means is that any regular website, and you don't have to be hosting a website, but you can be, can also be hosting a Tor WebTunnel at the same IP and same port, side by side, and no one would ever be the wiser. Since in this case Russia or any other censoring regime would be unable to detect that someone is not just visiting a website, the traffic would not be blocked. But this also makes it clear that the more pseudo websites are available, the better. 
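The two ideas just described - the deny-by-default "coin sorter" and one endpoint serving both a website and a hidden bridge - can be sketched in a few lines of Python. To be clear, this is not Tor's actual code: the protocol "fingerprints" are invented stand-ins for real deep-packet-inspection signatures, and the secret path name is made up.

```python
# Two toy illustrations of the ideas above -- NOT Tor's real implementation.

# 1) Deny-by-default censorship, the "coin sorter": traffic must positively
#    match an allowed "shape" or it is dropped. Fingerprints are placeholders.
ALLOWED_SHAPES = {"https", "http"}

def classify(packet: bytes) -> str:
    """Guess a protocol 'shape' from the first bytes (toy heuristic)."""
    if packet.startswith(b"\x16\x03"):      # looks like a TLS handshake record
        return "https"
    if packet[:4] in (b"GET ", b"POST"):
        return "http"
    return "unknown"                        # fully encrypted, unrecognized

def allowed_through(packet: bytes) -> bool:
    """True = forwarded, False = dropped by the deny-by-default firewall."""
    return classify(packet) in ALLOWED_SHAPES

# WebTunnel wraps Tor inside what looks like TLS, so it passes;
# raw, shapeless encrypted traffic does not.
print(allowed_through(b"\x16\x03\x01\x00\x2a"))   # True
print(allowed_through(b"\x8f\xa2\xc4\x91junk"))   # False

# 2) Coexistence on one endpoint: the same domain, IP, and port serve both a
#    normal website and, at a secret path, the bridge. The path is invented.
SECRET_PATH = "/some-long-unguessable-path"       # distributed with the bridge

def route(request_path: str) -> str:
    if request_path == SECRET_PATH:
        return "webtunnel-bridge"   # upgrade into the WebSocket-like tunnel
    return "website"                # everyone else sees an ordinary site

print(route("/index.html"))         # website
print(route(SECRET_PATH))           # webtunnel-bridge
```

In a real deployment a standard reverse proxy does that routing, which is exactly why a censor who visits the shared address sees only the innocent website.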
So if any of our listeners is moved to help the Tor project, and specifically Russian citizens who are unable to see out past their country's censorship, and presumably Chinese citizens, as well, which is being enforced, of course, for propaganda purposes, the Tor Project needs you. To make following up on this easier, I created a GRC shortcut link. So it's just grc.sc/tor. LEO: Help them out. STEVE: Grc.sc/tor. And that will take you to the recent posting that has updated resources including just a Docker container that you can download if you're interested in exploring this and getting going. But if you've got a Linux system you can install stuff and so forth. LEO: It's probably not a very heavy process, either; right? I mean, it probably doesn't use a lot of CPUs or... STEVE: Right. LEO: Might use bandwidth. STEVE: Oh, yeah, exactly. Bandwidth only, very little CPU because it's just forwarding traffic through. Very cool. So Zello, Z-E-L-L-O, is a mobile push-to-talk service used by 140 million first responders, hospitality services, transportation, and family and friends to communicate via their mobile phones using a simple push-to-talk app. The news is that over the past two weeks, starting on November 15th, Zello's customers have been receiving legitimate notices from Zello, because of course everything is suspect these days, asking them to change their passwords. The notice reads: "Zello Security Notice. As a precaution, we are asking that you reset your Zello app password for any account created before November 2nd, 2024. We also recommend that you change your passwords for any other online services where you may have used the same password." Well, doesn't take a rocket scientist, nor anyone who's been following this podcast for more than a few months, to know what must have happened over at Zello headquarters. And it's not good news. But Zello is also not saying. BleepingComputer has reached out to Zello and been rebuffed. 
Customers who received that notice told BleepingComputer that they had not received any further information from Zello, and BleepingComputer's repeated attempts to contact the company have gone unanswered. So at this point it's unclear whether Zello may have suffered a data breach or a credential stuffing attack, but the notice certainly does imply that threat actors may have access to the passwords of any users who had accounts before November 2nd. BleepingComputer noted in their reporting of this that Zello had previously suffered a data breach in 2020, which also required users to reset their passwords... LEO: Oh, great. STEVE: Yeah, I know. Whoops. LEO: It's happened before. STEVE: Yeah, after threat actors stole customers' email addresses and hashed passwords. In any event, 140 million users is a substantial user base. As you noted, Leo, it's like a big chunk of the U.S., but of course it's global. LEO: Yeah, I'm surprised. STEVE: If our listeners or anyone they know may be affected by it, it might be a good idea to heed this notice. And just a short note that the U.S. Federal Trade Commission has opened an antitrust Microsoft probe, announcing a broad antitrust investigation into Microsoft's business practices. The investigation will cover the company's software licensing practices, cloud computing, cybersecurity, and AI business units. The FTC allegedly received complaints that Microsoft was locking-in customers - gee, perhaps like the U.S. government? - preventing them from moving to competitors. In September, Google filed an official antitrust complaint against Microsoft's cloud business in the EU. So this will be something to keep an eye on. And we don't know what the fate will be. You know, nothing much will happen, right, this month. And we get a new administration in early January, so we don't know what approach the second Trump administration will take. So we'll see. 
LEO: There's been so much activity from the FTC and the FCC and the CFPB in the last few weeks, and I really feel like they're going, let's get everything done before January 20th. STEVE: But you can't get anything done; right? LEO: Right. STEVE: In three weeks. LEO: And then on January 20th, who knows what's going to happen? I mean, there are plenty of people in the Trump administration who don't like big tech. But there are people like Elon and others who do. And so... STEVE: Who IS big tech. LEO: Who IS big tech. So it's really kind of an interesting - it's really uncertain what's going to happen. Right? I don't know if this Microsoft case will go past January 20th. It might not. STEVE: Right. It just could get dropped, you know, in favor - or put on the back burner in favor of what the new administration perceives as more urgent priority. LEO: Exactly. Yeah. And it's unpredictable. You know, Trump has said I hate Google, the way they're too big, they're big tech. But he's also said, but on the other hand, China's afraid of them, so I love Google. So you just don't - you just don't know. You don't know what the hell's going to happen. It's going to be an interesting few years. That's, I guess, the truth. STEVE: It will indeed. Okay. So check out this screen, Leo. I've got a picture of it in the show notes. LEO: This is unbelievable, yeah. STEVE: Under the headline "You mean this actually convinces someone?" - and that's actually my headline - security researcher Lukas Stefanko has identified a new form of Android scareware that he refers to as "convincing full screen images" that resemble cracked or malfunctioning screens which trick users into calling tech support numbers or downloading malware on their devices. Now, I included a photo of this malware in action in the show notes. 
Now, I can see how a neophyte might be led to believe that something has gone very wrong with their phone because the screen looks like it's no longer even remotely able to display an image. LEO: Except... STEVE: The only problem, exactly, the only problem with this is that it is at the same time having no problem whatsoever, apparently despite the cracked and malfunctioning screen, of displaying the malware's warning pop-up notice claiming that a virus has been detected on the handset. So I suppose we'll give them points for coming up with something new. LEO: It gets your attention. I mean, initially you look at that and go, oh, whoa. STEVE: I mean, and down there in the lower right, I mean it looks like... LEO: It looks real. STEVE: It really does look like, oh, shoot, something bad has happened to my phone. Thank goodness that notice telling me to click here to remove the virus is still visible. LEO: Right. STEVE: Wow. LEO: Now, I'm curious because, if you click "remove this," is that sufficient? I would think they'd put a phone number in there or something. I mean... STEVE: Yeah. LEO: Or maybe it's just a click to - it'll run the virus because you clicked it. STEVE: Right. That's often the case. LEO: That's all it takes. Oy. STEVE: If it said "I'm a virus, click me," you'd be disinclined to do that. LEO: That's a good point. Point well taken, Steve. I'd better not click that. STEVE: Yeah, I don't think so. Okay. So Matt Warner said: "Hi, Steve. Regarding your comment about WireGuard's static ports in Episode 1002," so last week, he said, "I run WireGuard on an OPNsense firewall with Suricata and CrowdSec watching my WAN interface. Neither ShieldsUp! nor any other port scanner could find an open port, even when I specify the port number. I don't have WireGuard mapped to a specific allowable IP because that changes depending on my location. 
I'm happy to leave this as it is for now, but will certainly change my setup if a new vulnerability surfaces in any of the tools I use. Love the podcast. I look forward to it every week." Okay. So there is no reason to believe that it is not completely safe to leave a WireGuard VPN server running on a firewall, such as OPNsense, listening for incoming connections from a WireGuard client. There's no reason to believe that's a problem, until there is. Everything we know tells us that this COULD flip from "absolutely safe" to "Oh my god!" within a single heartbeat of a skilled hacker who, while studying WireGuard's open source code, notices something no one else has. That's one of the ways these things happen. Or perhaps the hacker is throwing nonsense packets at WireGuard's listening service port, and one of them suddenly crashes the WireGuard server. That's another way this could happen. The specific packet that crashed the server is then examined, and the source of the crash is reverse-engineered to create a repeatable working exploit. But it's every bit as true that none of this may ever happen. It's also true that perhaps it can't. The conundrum of security is that "could happen" does not necessarily mean "will happen." Perhaps it really can't. The trouble is, today's systems have become so complex that it's no longer possible for us to be absolutely and mathematically provably certain about the behavior of anything above a distressingly low level of complexity. Today, we just can't know. That's one of the things I'm hoping future AI might be able to help us with. My intuition suggests that this is the sort of thing that ought to be right in AI's backyard. But we don't have that today. What we have today is hope. Hope's better than nothing, but hope is not enough for me. I fully respect Matt's decision and position. It's one that's shared by tens of thousands of others. But my network is not the typical residential network. 
It's both the development and production arms of GRC. So the stakes, for me, are higher. I'm not suggesting that my network is utterly impervious to attack. But it's as utterly impervious as I've been able to make it, without exception. So deliberately exposing a WireGuard process, no matter how safe I hope it is, to the public Internet would be an exception I will not make. Another listener, identifying himself as "An On," reminds us why we trust, and should trust, WireGuard's design. He wrote: "Hi, Steve. Regarding the discussion of WireGuard and port knocking on this week's Security Now! episode, I just wanted to let you know that it's not really necessary. With WireGuard, the server will not respond to client connection requests AT ALL" - he has that in all caps, and he's right - "unless the client provides a public key that the server knows and trusts. This, in addition to the fact that the protocol is UDP-based, means that it's not possible to even know if there is a WireGuard server listening on a specific IP and port unless you already have public key credentials to connect. "While it technically would still be possible to have a bug where this can be bypassed, this is very unlikely because this is the first thing the server checks, so the code surface for bugs is minimal. This technicality would also apply to any port knocking techniques which can have their own bugs in implementation. Regards, Non." Okay. So Non is 100% correct. And this is why WireGuard represents the best of the best today. Is that good enough? Almost certainly. And his point about the possibility that adding port knocking to introduce an additional layer of pre-WireGuard security might itself introduce a new vulnerability is also a keen observation. That could happen. 
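Non's point about WireGuard's silence can be made concrete with a toy sketch. This is not WireGuard's actual code: real WireGuard computes a "mac1" field with BLAKE2s keyed by a hash of the server's static public key before doing anything else, and the placeholder key, label, and message layout below are invented for illustration. The only property preserved is the one that matters: without a valid MAC there is no reply of any kind, so a scanner cannot distinguish a WireGuard port from a closed one.

```python
# Toy sketch of WireGuard's "silent unless authenticated" behavior.
# NOT the real protocol: the key, label, and message layout are invented.
import hashlib
import hmac
from typing import Optional

SERVER_PUBKEY = b"\x01" * 32   # placeholder, not a real Curve25519 key

def mac_key(server_pubkey: bytes) -> bytes:
    # Modeled loosely on WireGuard's mac1 key derivation from the public key.
    return hashlib.blake2s(b"mac1----" + server_pubkey).digest()

def mac1(msg: bytes, server_pubkey: bytes) -> bytes:
    return hashlib.blake2s(msg, key=mac_key(server_pubkey),
                           digest_size=16).digest()

def handle_datagram(payload: bytes) -> Optional[bytes]:
    """Return a reply, or None -> drop silently (the port looks closed)."""
    if len(payload) < 16:
        return None
    body, mac = payload[:-16], payload[-16:]
    if not hmac.compare_digest(mac, mac1(body, SERVER_PUBKEY)):
        return None                  # a scanner learns nothing: no reply at all
    return b"handshake-response"     # only a client knowing the pubkey gets here

# A port scanner throwing junk at the port gets nothing back:
print(handle_datagram(b"random probe" + b"\x00" * 16))     # None

# A legitimate client that knows the server's public key gets a reply:
init = b"handshake-initiation"
print(handle_datagram(init + mac1(init, SERVER_PUBKEY)))   # b'handshake-response'
```

And because the transport is UDP, "no reply" is exactly what a closed port looks like, which is why ShieldsUP! and other scanners come up empty.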
My defense of the use of port knocking is that from an implementation standpoint, unlike anything like WireGuard that necessarily invokes a huge amount of complexity in order to validate a cryptographic certificate, port knocking adds an appealingly trivial layer of complexity while providing virtually absolute protection. In other words, what might be termed as its "security gain" is nearly infinite. And the port knocking service is inherently sitting behind the firewall which it's monitoring. So it's much more difficult to see how its failure could do anything other than fail to open a port. And all of this is, of course, what makes the study of security so interesting. So great points from our listeners. And, as always, great incoming feedback to securitynow@grc.com. Thank you, everybody, for that. One of our listeners, Richard Craver in Clemmons, North Carolina pointed me at something that was so interesting it needed sharing. First of all, here's what Richard wrote. He said: "Hi, Steve and Leo. I just finished the AGI episode. Interesting to ponder. I personally am not a particular fan of AI in general, as I see it as crowdsourcing knowledge that may or may not be correct. Science is based on challenging and testing prevailing assumptions and thought. AI, in my humble opinion, discourages critical thinking. But for good or bad, it's here." He said: "Below is a link to Tom Fishburne the Marketoonist, with a thought-provoking cartoon and short viewpoint message." And I have the cartoon in the show notes. It's got two frames. On the left, one guy is saying to someone else, "Once we train our AI, I can't wait to see the wide variety of new ideas it comes up with!" And in the foreground we see a conveyor belt with all different shapes and sizes and brightly colored bottles and containers of different sorts. And this conveyor belt is sending them into a box in the middle that divides the two frames, labeled "AI." 
On the right-hand side we see this guy with his hand up to his chin as if thinking, hmm. And what's coming out is a nearly identical set of almost the same shape and size and color bottles. So the AI has sort of generified everything. Okay. So the interesting information that Tom Fishburne shares, he writes: "It's still early days with AI generation tools. We're all still learning potentials and limitations. One watch-out is the bias toward homogeneity, the tendency for AI results to look alike. As AI predicts what to generate, the path of least resistance is an averaging of the content in its source material. Ian Whitworth once referred to this as 'The Great Same-ning,' writing: 'ChatGPT, Jasper, and all the rest are powerful conformity machines, giving you the ability to churn out Bible-length material about yourself and your business that's exactly the same as your competitors.'" Tom continues: "A couple months ago, Oxford and Cambridge researchers illustrated the risk of homogeneity in a study of AI-generated content in Nature magazine." And for anyone who doesn't know, Nature magazine is a serious magazine. Lorrie and I were subscribing to it for a while. But the articles were so dense that it was like, okay, well, we're just wasting our time on this. So, I mean, it's the real deal. He says: "The risk increases as AI gets trained not only on human-created content, but on other AI-generated content. As an example, the researchers studied an AI model trained on images of different breeds of dogs. The source material included a naturally wide variety of dogs - French Bulldogs, Dalmatians, Corgis, Golden Retrievers, et cetera, the works. But when asked to generate an image of a dog, the AI model typically returned the more common dog breeds (Golden Retrievers), and less frequently the rarer breeds (French Bulldogs). "Over time, the cycle reinforces and compounds when future generations of AI models are trained on these outputs. 
It starts to forget the more obscure dog breeds entirely, soon only creating images of Golden Retrievers. Eventually, the researchers found, there's 'Model Collapse'" - and I love that term, model collapse - "where the LLM is trained so much on AI-generated Golden Retriever images that the results turn nonsensical and stop looking like dogs at all. "Now," he writes, "substitute dog breeds for whatever you're trying to create new products, new packaging, new advertising, communication - and the risk is that all outputs devolve to look the same. A related study from the University of Exeter found that AI generation tools have the potential to 'boost individual creativity,' but with a 'loss of collective novelty.' The good news is that this baseline situation creates opportunities for those who can push against this new status quo. Homogeneity is ultimately at odds with distinctiveness. As with all tools, it's all in how you use them. You can't break through the clutter by adding to it." So anyway, I love that. You know, these conclusions feel intuitively correct to me, and the research cited above supports that intuition. Also, it's certainly true that there's an unrealized danger as the Internet's content becomes more and more AI-generated while our AI models are being continuously trained against the Internet's content. Future historians may wonder, what happened to all the French Bulldogs? And on that, Leo, let's take another break. LEO: Yes. STEVE: And then we're going to look at some more questions and feedback from our listeners. LEO: Good. Great. On we go with the show, Mr. G. STEVE: Okay, yes. So our listener Greg Haslett has an interesting problem. He said: "Steve, I have an EdgeRouter." You know, that was the router that we were loving for a while. LEO: Loved that. I still have one, yeah. STEVE: Yeah, it's a... LEO: I've upgraded now to the full Ubiquiti system. That impressed me so much. 
STEVE: Oh, and it was so inexpensive and so powerful in terms of the way it could be configured. So he said: "I have an EdgeRouter and created an IOT network. My problem is I cannot reach my ASUS RT-66, which is on the IOT network, to update its firmware." LEO: Oh, boy. STEVE: You know, so he created isolation, and now he's isolated. LEO: Yeah, it worked. STEVE: Yeah. He said: "Any quick ways to allow temporary access to the ASUS router? My last-ditch answer would be to back up the EdgeRouter" - meaning its config - "and reset to original settings, hopefully find the IP address of the ASUS and update the firmware, then restore the EdgeRouter from backup with IOT. Longtime listener and met you at the SQRL talk in Irvine." So that's very cool. So, okay. I'm not 100% certain that I completely understood Greg's problem and question. But I think I do. But my first thought is that maybe he's making things too complicated. Leave the EdgeRouter alone and just temporarily rearrange some wires. LEO: Take it out of the line. STEVE: Exactly. Rather than get fancy with reverting the EdgeRouter's configuration to its original settings, why not plug the ASUS RT-66 into the LAN where a PC is located and update its firmware? I suppose if Greg doesn't have a spare old wired Ethernet switch lying around - and I have to think he would, you know, who doesn't, they make great doorstops - then that could be a problem. But it's also possible to plug the ASUS RT-66 directly, point-to-point, into a PC's LAN socket. So if I understood Greg's question, it would appear that being less fancy and going old school might be the right solution. LEO: That is the issue with VLANing off your IOT and creating an IOT network. If the IOT device is done, you know, controlled through the cloud... STEVE: Right. LEO: ...then it's not a problem because you're going to on one VLAN contact the cloud. STEVE: Right. You go up the cloud, it comes back down, yup. LEO: Yeah. 
But more and more, and actually for security this is probably a good thing, and for long-term survivability it's a good thing, these guys are talking directly, you're talking directly to the IOT device, which of course isn't going to work if it's on a separate VLAN unless you create some rules. That's the other way around it. I ended up just giving up. I put it all on one. STEVE: Yeah. Our solution is to have, because we also want to have guests over who are bringing untrusted equipment... LEO: Right. STEVE: ...we have two radios. So we have our network, and then on the IOT network is a different access point. And so if I need to talk to something there, I just quickly switch my WiFi over to that. LEO: Yeah. We were doing that. But it's a pain in the butt, if you want to print, to switch to the secure/insecure VLAN, print, and then switch back. You know. STEVE: Yeah. And printing is a good example because, boy, printing is so security riddled and problematic. LEO: You don't want to put a printer on your network. STEVE: Not if you can help it, no. LEO: Yeah. So this is tough. It really is. That's the truth of it. STEVE: Oh. And while we're on the topic of old-school solutions that are, in this case, obvious in retrospect, our listener Troy was responding to something we were talking about last week about my having a problem typing on this horrible keyboard screen of my iOS device and wondering about a solution for reversing the dongle, the Bluetooth keyboard dongle that you put into your computer. He said: "Steve. Congrats on Security Now!. Hey, regarding typing long messages on the iPhone, I hope you know that you can connect a Bluetooth keyboard to your iPhone." And this is where the use of the expression "Doh!!" comes in. I confess I had completely forgotten that. And I should have remembered it because one of my first reactions to the loss of the wonderful physical clicky-button keyboard of my beloved Blackberry - oh, I loved it so much. 
But I had to switch to an iPhone because, you know, one has to. I added that little add-on keyboard that you could stick onto the bottom of the phone, which did, indeed, link the phone via Bluetooth. And it worked perfectly. So needless to say I have a cute little Bluetooth keyboard now, thanks to Troy's note, which allows me to quickly type on my iPhone. So thank you, Troy. Earl Rodd in North Canton, Ohio shared some facts about social media age restrictions. He said: "The recent book by Jonathan Haidt titled 'Anxious Generation'..." LEO: Okay. I know he loves it, and you're going to read his praise. STEVE: Okay. LEO: But that's not widely accepted. STEVE: Haidt is nonsense? LEO: Experts in the field said that it's not true. So go ahead. STEVE: So who said? LEO: So I will send you the article by, I think, what was her name, Odgers, who is an expert in the field. Jonathan Haidt is a polemicist. He's a social psychologist. STEVE: Psychologist. LEO: Yeah. And a lot of what he claims in the book is highly disputed by experts in the field. So it's convincing if you read the book. There's a lot of stuff, you know, when people are polemicists they write convincing books - Malcolm Gladwell does it, too - that aren't true. But they sound right, and a lot of people come away with this conviction. As a result, this is why there's that Australian law, there's this widespread thought that social networks are causing major mental illness issues with our kids. But experts disagree. Had to say that. Now go ahead. You can read his note. STEVE: Well, okay. LEO: I just want to inoculate people against what you're about to say. What he's about to say. STEVE: Okay. Okay. So I will because it gives me the context for my reactions to it. So he said: "The recent book by Jonathan Haidt, 'Anxious Generation,' has extensive discussion of the age limit issue. 
The main theme of the book is rather convincing evidence" - to your point, Leo - "that the dramatic (100-200%) increase in teen mental health problems which corresponds to the introduction of smartphones is in fact CAUSED" - he has in all caps - "by the use of those phones and, in particular, social media. "Haidt's argument rests on his work as a social psychologist combining knowledge of the vulnerability of early teens, due to the brain development happening at that time of life, with research on how social media is carefully designed to 'hook' young adolescents. If Haidt is right," our listener says, "and I think he is, the problem is VERY severe. We make a huge mistake equating how we older adults, who grew up before the smartphone era, use various apps with how adolescents use them during critical brain development years." And he says: "(Note: My adult children have been telling me this for years, that I cannot transfer how I use social media for just the few things I want to the experience of youngsters.)" And he says: "The book has an extensive discussion of what to do. In that section, Jonathan discusses some technical ideas, not at the technical depth of Security Now!, but also the social factors, like parental role, the problem of peers having more access, and how some methods can be neutralized. The book has references to extensive discussions of both social scientists like Haidt, and technical sources by people who have thought through a lot of ideas. While I share some skepticism of the effectiveness of age verification, I think the combination of laws requiring age verification, more parental awareness, and cooperation between schools and parents can have a very positive impact." 
So my response was to say that, you know, in our recent discussion I happened to also touch on a number of the same potential pitfalls of age restriction, such as parents being pushed by their own children to make exceptions for them, which is then followed by other kids complaining to their more strict parents that their peers have been given access by their parents, so why can't they have the same, and saying, "After all, how bad can it be if 16 year olds are able to have access?" I note also that, among other things, my wife Lorrie is an accomplished therapist. And while she rigorously honors the privacy of her clients, she's noted on a number of occasions that many of today's parents appear to be afraid of their own children, whom they appease by giving them anything they want. So how are such parents not going to capitulate to their children's demands, especially having previously established that pattern? So anyway... LEO: I'll point you, now that we've talked about it, to - this is a great place to start, Mike Masnick's article in which he quotes Candice Odgers, who is an actual expert on this stuff and has been doing this kind of research for years; and then his podcast about this, essentially debunking Haidt. Haidt is a polemicist. He is not an expert, period. STEVE: So do you not think, do you not conclude that there is something age-related, or that there is not damage, or that kids are not addicted, or what? LEO: Yeah. So the research shows that it's not the case. Period. He's saying something that makes sense. And this is the problem with a lot of these just-so stories. Oh, yeah, that makes perfect sense. That makes a lot of sense. But if you actually look at the research - by the way, you can read her article in Nature, your favorite magazine, all about this. The issue is, is there an increase in mental health issues with kids because it's more reported? There are a lot of - correlation does not equal causation, as you well know. 
And because the iPhone came out in 2007, they're correlating that to a rise in mental health issues. There are many other issues involved in this, including COVID and isolation of kids, stranger danger from the '80s, which made a lot of parents keep their kids at home instead of letting them out to play because they were so afraid of - by the way, this was also a specious argument - there were strangers in the neighborhood about to abduct them. We know perfectly well that the real danger to kids, as people may know, is people at home, their relatives. But this stranger danger actually prompted a lot of parents to say, oh, no more playing outside for you. That could be one of the causes. There are many things going on. Correlation does not equal causation. STEVE: And as we've said many times. LEO: And when you do the actual research, which many have done, including Candice Odgers, it is in fact under - it's problematic because it's very easy to say, oh, it's social media. We put an age limitation on social media, we limit iPhones, we give parents the power to stop doing all this stuff, it's all going to get better. And what you're not addressing, for instance, is the fact that schools no longer have mental health professionals, let alone nurses, in the school. There are a lot of other issues you're not addressing because you - oh, all fixed. STEVE: You've found the problem. LEO: You've found the problem. So I would recommend people look at Mike Masnick, who I think our audience trusts and likes; he did an excellent podcast with her about youth mental health, talking about Jonathan Haidt's book. The problem is it's become a political issue. 
Do you remember when Minow, the chairman of the FCC, said that television was a vast wasteland and ruining the brains of our young people? STEVE: And then we have the whole videogame phenomenon. LEO: Do you remember when Tipper Gore said video games are ruining our children? It's happened again and again. The problem with that kind of moral panic is you can be - you can focus on the wrong problem. STEVE: Right. LEO: And not really address the issues. So there is a huge replication crisis, or problem with the data that Haidt quotes. It's not been replicated. The actual experts who are working in this field, and have been working in this field for decades, say we actually don't see that. If you're interested, and everybody should be, watch this podcast. It's a great starting point. It's at Techdirt.com. It's the Techdirt podcast with Candice Odgers, O-D-G-E-R-S. Title, "Making sense of the research on social media and youth mental health." Actually, I think Haidt's on it. So that would be kind of interesting. STEVE: Well, and of course our interest for the podcast is just the idea that legislation is going to impose a new technical requirement. LEO: Right. Well, it's nonsense that Australia has said, no, nobody under 16 can use social media. Besides the, I mean, you can make the case that social media is how kids socialize today and will isolate a great many kids and cause worse problems. How do you do it? How do you... STEVE: Yes. LEO: And so there's no good technical way without violating human privacy, our own privacy, to identify who's an adult, who's not an adult. STEVE: Yes. And that is the interest of this podcast is what are they going to do? You know, like something is going to happen unless the law gets overturned and/or is implemented. The fines are the equivalent of 50 million Australian dollars, about 32.5 million U.S. dollars. LEO: Which makes me think companies like Meta and others will just pay the fine. 
STEVE: Do you think it's a one-time fine? And the other thing that I thought was odd was that YouTube is excluded. LEO: Yes. STEVE: It's not considered... LEO: Perfect example. Perfect example. It's nonsense. And by the way, the campaign in Australia was started by Rupert Murdoch and Rupert Murdoch's newspapers, who in the spring of this year launched a massive campaign and convinced the Australia legislature to do this. STEVE: Well, from a technology standpoint it's going to be fascinating to see what they come up with. LEO: We talked about it on Sunday, and I think the consensus of the panel was this is really mostly just kind of saying "Fix it," because it's more than a year away; right? STEVE: Yes, takes effect on November 20th of 2025. LEO: Yeah. So we think it's mostly just saber-rattling and trying to convince them, do something so that we can sit back on this law. But if not, we've got a problem. STEVE: We have a need for some technology there. LEO: Yeah, that doesn't exist. STEVE: So, finally, Dawn appreciates our Picture of the Week for audio-only listeners. She says: "Hello, Steve and Leo. I've listened to your show for a while now, and I really enjoy it. I love all things computers, technologies, et cetera, and there's one thing I can definitely say with 1000% assurance: There will ALWAYS [she has in all caps] be a need for this podcast, and experts such as yourselves to cover and explain it all. With the added challenge of putting the cookies on the bottom shelf where the kids can get them, which you are very good at doing. "I wanted to write you an email thanking you for describing the Pictures of the Week. I have to admit I got quite a bit of laughs from the one last week, where the little troublesome twosome were finding a way to get upstairs. Even now, as I write this, I'm chuckling. It means a lot to me that you guys describe the Pictures of the Week because I'm completely blind." LEO: Oh, interesting. 
STEVE: "Without your descriptions, I would not be able to get any enjoyment out of them." LEO: Very good. STEVE: She said: "Sometimes I think we do things like this without a second thought, and without knowing the impact that we have, and will have on someone when we do those things. This is one of them. Please keep the picture descriptions coming. Before you ask, I think one of my favorite Pics of the Week was the one that said 'Treat your passwords like your underwear.'" LEO: Change it daily. STEVE: She said: "I remember I just couldn't stop laughing for a long time after that one, and had to rewind the podcast a couple of times just for the laughs. I must admit I had never heard password safety put that way before. Thank you once again for the podcast and image descriptions, and please keep them coming. Dawn." LEO: Awesome. Thank you. STEVE: And Dawn, I hope you're listening. Thank you for your note, and I can promise that we'll keep the Picture of the Week descriptions coming. LEO: Yeah, you're very good about it. You realize that we have audio listeners, and they aren't seeing it. And so you're always very good about that. It does remind us, though, also, when you post images online, you should always use the alt tags in HTML. STEVE: Right. LEO: So that blind viewers who are using screen readers will actually know what that picture is. And I forget sometimes. I actually have a little thing on my Mastodon account that pings me when I post a picture without an alt tag and says "You didn't put your alt tags in. It's not too late. Go back and edit it." And I always do. Thank you, Dawn. It's nice to have you with us. STEVE: Okay. Our last break, and then we're going to catch up on the current status of Voyager 1. LEO: Ooh, I'm excited. STEVE: As its continuous, its, well, endless journey because it's way outside the sun's gravity field at this point. So... 
LEO: And just along the Australia thing, you remember that it was the Australian Parliament, a parliamentarian in Australia who said we don't have to worry about maths. Math doesn't, from our point of view, there's no need to pay attention to math. That doesn't matter. STEVE: Well, and I love - and this is another one of those examples of legislators ignoring the technology, even though they're legislating technology, I mean, saying that... LEO: It's hand-waving. It's hand-waving. STEVE: ...social media companies like some - and a subset of social media companies have to do something. And but we don't know how, but you can do it. It's like the EU saying, well, we want you to block CSAM, and we don't know how you're going to do it, but you have to do it without breaching anyone else's privacy. It's like, uh, what? LEO: Here it is. It was the Australian prime minister who said: "The laws of mathematics don't apply here." STEVE: Oh, boy. LEO: He's no longer prime minister. STEVE: Those pesky mathematicians. LEO: How dare they. Yeah, governments do that. They say, well, you'll figure it out. STEVE: Yeah, yeah, you guys are smart. LEO: You guys with the smart big brains, you figure it out. STEVE: Yup. LEO: Turnbull is no longer, I don't think, Malcolm Turnbull's no longer the prime minister. But math lives on, which is kind of interesting. STEVE: I love math. Math makes it all go around. LEO: Yeah, math is eternal. Math lasts longer even than Voyager. STEVE: And if you didn't have math, we wouldn't have Voyager 1, that's for sure. LEO: Mm-hmm. There you go. Yeah, I often say, when people say, oh, science, you know, science isn't always perfect, dude, you're listening to a technology podcast. All technology is, is science applied; right? Give me a break. That's all we've got. STEVE: Yes, we live in a noisy world, and yet the digital bits get from point A to point B perfectly. LEO: Somehow magically. Well, math doesn't apply here. That's, no, I don't know what that is. 
V'ger. STEVE: Okay. So our listener Rob Woodruff brought this bit of news to my attention. NASA's posting was titled "NASA's Voyager 1 Resumes Regular Operations After Communications Pause." And I'm going to share it because, as I said, it contains a bunch of interesting and amazing science and engineering information. And then we're going to even dig down a little deeper. So they wrote: "NASA's Voyager 1 has resumed regular operations following a pause in communication last month." LEO: Geez. STEVE: Yeah. "The probe had unexpectedly turned off its primary radio transmitter, called an X-band transmitter, and turned on the much weaker S-band transmitter. Due to the spacecraft's distance from Earth (about 15.4 billion miles, or 24.9 billion kilometers), this switch prevented the mission team from downloading science data and information about the spacecraft's engineering status. "Earlier this month, the team reactivated the X-band transmitter and then resumed collecting data the week of Nov. 18 from the four operating science instruments. Now engineers are completing a few remaining tasks to return Voyager 1 to the state it was in before the issue arose, such as resetting the system that synchronizes its three onboard computers. The X-band transmitter had been shut off by the spacecraft's fault protection system when engineers activated a heater on the spacecraft." Whoops. LEO: Okay. STEVE: "Historically, if the fault protection system sensed that the probe had too little power available, it would automatically turn off systems not essential for keeping the spacecraft flying in order to keep power flowing to the critical systems. But the probes have already turned off all nonessential systems except for the science instruments. So the fault protection system turned off the X-band transmitter and turned on the S-band transmitter because it uses lower power." 
Unfortunately, it also means it transmits at lower power, which means you can't get the data through, which is why they had stopped collecting data. They said: "The mission is working with extremely small power margins on both Voyager probes. Powered by heat from decaying plutonium that is converted into electricity, the spacecraft lose about four watts of power each year. About five years ago, some 41 years after the Voyager spacecraft launched, the team began turning off any remaining systems not critical to keeping the probes flying, including heaters for some of the science instruments. To the mission team's surprise, all of those instruments continued to operate despite reaching temperatures lower than what they'd been tested for. "The team has computer models designed to predict how much power various systems, such as heaters and instruments, are expected to use. But a variety of factors contribute to uncertainty in those models, including the age of the components and the fact that the hardware doesn't always behave as expected. "With power levels being measured to fractions of a watt, the team also adjusted how both probes monitor voltage. But earlier this year, the declining power supply required the team to turn off a science instrument on Voyager 2. The mission shut off multiple instruments on Voyager 1 in 1990 to conserve energy, but those instruments were no longer in use after the probe flew past Jupiter and Saturn. Of the 10 science instruments on each spacecraft, four are now being used to study the particles, plasma, and magnetic fields in interstellar space," which is where both probes are. "Voyagers 1 and 2 have been flying for more than 47 years and are the only two spacecraft to operate in interstellar space. Their advanced age has meant an increase in the frequency and complexity of technical issues and new challenges for the mission engineering team." Okay. 
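As a sanity check, the figures NASA quotes can be roughly reproduced with a few lines of arithmetic. This is only an illustrative sketch: the Pu-238 half-life of about 87.7 years and the roughly 470-watt initial RTG output are outside figures I'm assuming here, not numbers taken from NASA's posting.

```python
import math

# Illustrative sanity check on the figures quoted above.
# Assumed (not from the article): Pu-238 half-life ~87.7 years,
# initial RTG output ~470 W at the 1977 launch.

LIGHT_MILES_PER_SEC = 186_282          # speed of light in miles per second
distance_miles = 15.4e9                # Voyager 1's quoted distance

one_way_hours = distance_miles / LIGHT_MILES_PER_SEC / 3600
print(f"One-way signal time: {one_way_hours:.1f} hours")   # ~23 hours

HALF_LIFE_YEARS = 87.7
initial_watts = 470.0
years_flown = 47
power_now = initial_watts * 0.5 ** (years_flown / HALF_LIFE_YEARS)
decay_loss = power_now * math.log(2) / HALF_LIFE_YEARS
print(f"RTG output now: ~{power_now:.0f} W, "
      f"losing ~{decay_loss:.1f} W/year from decay alone")
```

Under these assumptions the plutonium decay alone costs a couple of watts per year; the roughly four-watt annual loss NASA cites also reflects gradual degradation of the thermocouples that turn the heat into electricity.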
So reading that, the article said: "The X-band transmitter had been shut off by the spacecraft's fault protection system when engineers activated a heater on the spacecraft." What it didn't tell us is why the JPL engineers turned on that heater. And there's even more fascinating information about that. Our listener Jeff Root in San Diego supplied the link to a story in The Register, of all places, titled "Best Job at JPL: What it's like to be an engineer on the Voyager project." This was posted two days later on the U.S.'s Thanksgiving Thursday. And it, too, is chock full of interesting science and engineering insight. So the Register wrote: "The Voyager probes have entered a new phase of operations. As recent events have shown, keeping the venerable spacecraft running is a challenge as the end of their mission nears." And of course "end of the mission" just means we don't know what happened; right? I mean, it's like, it's way past its design end of mission, and it keeps getting extended. So they wrote: "As with much of the Voyager team nowadays, Kareem Badaruddin, a 30-year veteran of NASA's Jet Propulsion Laboratory, divides his time between the twin Voyager spacecraft and other flight projects. He describes himself as a supervisor of chief engineers, but leaped at the chance to fill the role on the Voyager project. Suzanne Dodd, JPL Director for the Interplanetary Network Directorate, is the Project Manager for the Voyager Interstellar Mission. "Badaruddin told The Register: 'She knew that the project was sort of entering a new phase where there was likely to be a lot of technical problems. And so chief engineers, that's what they do. They solve problems for different flight projects.' "Dodd needed that support for Voyager. Badaruddin would typically have found someone from his group, but he said: 'I was just so excited about Voyager, I said, you know, look no further; right? I'm the person for the job.'" In other words, this was one he did not want to delegate. 
He said: 'I'm your engineer. You know, please pick me.' "So Badaruddin has spent the past two years on the Voyager project. After decades of relatively routine operation, following plans laid out earlier in the mission when the team was much larger, the twin Voyager spacecraft have begun presenting more technical challenges to overcome as the vehicles age and power dwindles. "The latest problem occurred when engineers warmed up part of the spacecraft, hoping that some degraded circuits might be 'healed' by an annealing process. Badaruddin explained that 'There's these junction field effect transistors (JFETs) in a particular circuit that have become degraded through radiation. We don't have much protection from radiation in an interstellar medium'" - remember, where this thing was never designed to function, right, because it wasn't expected to live this long. "'We don't have much protection in an interstellar medium because we're outside the heliosphere, where a lot of that stuff gets blocked. So we've got this degradation in these electronic parts, and it's been proven that they can heal themselves if you get them warm enough, long enough. And so we knew we had some power margin, and we were hopeful that we had enough power margin to operate this heater. And as it turned out, we didn't. It was a risk we took to try to ameliorate a problem that we have with our electronics. So now the problem is still there, and we realize that we can't solve it this way. And so we're going to have to come up with another creative solution.'" So The Register says: "The problem was that more power was demanded than the system could supply. A voltage regulator might have smoothed things out, but the Voyagers no longer have that luxury. Instead, engineers took a calculated risk and ran afoul of the then-innovative software onboard the spacecraft. 
The under-voltage routine of the fault protection software shuts down loads on the power supply; but since the Voyager team had already shut down anything that's not essential, there isn't much left for it to shut down. Badaruddin explained: 'So the under-voltage response doesn't do much except turn off the X-band transmitter and turn on the S-band transmitter. And that's because the S-band transmitter uses less power, making it the last safety net to save you.' "And save the mission it did. While the S-band is great for operations near Earth, such as the Moon, it's almost useless at the distance of the Voyager spacecraft. However, by detecting the faint carrier signal of the S-band transmission, the team was able to pinpoint that the problem had been the act of turning on the heater, even without X-band telemetry from the spacecraft. "The challenge for engineers isn't just the time it takes to get a command to the Voyagers and receive a response, but also checking and rechecking every command that gets sent to the spacecraft. The waiting is apparently not as frustrating as we might think. Badaruddin said: 'This is the rhythm we work in. We've grown accustomed to it. It used to be a very small time delay, and it's gradually grown longer and longer through the years.' "With duplicate physical hardware long gone, the team now works with an array of simulators. Badaruddin said: 'We have a very clear understanding of the hardware. We know exactly what the circuitry is, what the computers are, and where the software runs.' As for the software? It's complicated. There have been so many tweaks and changes over the years" - remember, 47 years - "that working out the exact revision of every part of Voyager's code has become tricky. Badaruddin said: 'It's usually easier to just get a memory readout from the spacecraft to find out what's going on out there.' 
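The under-voltage response described here is, at bottom, priority-ordered load shedding: drop the least essential loads until demand fits the available power. A minimal sketch of the idea, with load names and wattages invented purely for illustration (these are not Voyager's real power tables):

```python
# Toy model of an under-voltage fault-protection response: keep loads in
# priority order until the power budget runs out, shedding the rest.
# Load names and wattages are invented for illustration.

def shed_loads(loads, available_watts):
    """loads: (name, watts) pairs, ordered most- to least-essential.
    Returns (kept, shed) lists of load names."""
    kept, shed, budget = [], [], available_watts
    for name, watts in loads:
        if watts <= budget:
            kept.append(name)
            budget -= watts
        else:
            shed.append(name)
    return kept, shed

loads = [
    ("flight computers", 20),
    ("attitude control", 15),
    ("science instruments", 10),
    ("X-band transmitter", 12),   # power-hungry downlink, last in line
]
kept, shed = shed_loads(loads, available_watts=50)
print(shed)   # ['X-band transmitter']
```

In Voyager's case the final step is gentler than going silent: rather than simply cutting the shed X-band transmitter, the fault protection swaps it for the lower-power S-band one, whose faint carrier is what tipped off the team.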
"The challenge for the Voyager team is that the spacecraft are nearing the half-century mark, as is the documentation. He said: 'We have documents that were typewritten in the '70s that describe the software, but there are revisions. And so building the simulators, we feel really good about the hardware, but we feel a little less good about understanding exactly what each instruction does.' The latest bit of recoding occurred with the failure of one of Voyager's integrated circuits, which manifested itself as meaningless data last year." And of course we talked about that on the podcast at the time. "Badaruddin reminds us: 'The basic problem was figuring out what was wrong with no information. We could see a carrier signal; we knew we were transmitting in the X-band; we knew we could command the spacecraft because we could tweak that signal slightly with commands. So we knew the spacecraft was listening to us, and we knew the spacecraft was pointing at Earth because otherwise we wouldn't get a signal at all.' "The engineers went further down the fault tree, and eventually managed to get a minimum program to the spacecraft to get a memory readout. That readout could be compared to one retrieved when the spacecraft was healthy. 256 words were corrupted, indicating a specific integrated circuit. Code was then written to relocate instructions around that failed area." And remember, this is almost a light-day away at that point, a year ago. "The problem there is the code was very compact. There was no free space that we could take advantage of. So we had to sacrifice something." So they're patching on the fly on an operating machine, what is it, 15 billion miles away. That something that needed sacrificing was one of the Voyager's higher data rate modes, used during planetary flybys. And that makes sense; right? It's like, hey, what don't we need? 
Well, we don't need the high data rate mode used during planetary flybys because we're not going to be flying by any planets. So now back to the present. "The current challenge" - if you'll pardon the pun - "involves dealing with the probes' thrusters." And here's the problem, Leo. Silicon from bladders inside the fuel tanks has begun to leach into the hydrazine propellant. Since silicon doesn't ignite like hydrazine, meaning it doesn't get burned off, a tiny amount gets deposited in the thrusters and slowly builds up in the thruster capillaries. Badaruddin uses the analogy of clogging arteries. Eventually, the blockage will prevent the spacecraft from firing its thrusters to keep it pointed at Earth. "However, the pitch and yaw thrusters, each of which has three branches, are clogging at different rates. The current software works on the basis that branch 1, 2, or 3 will be used. But could it be operated in mixed mode, where branch 2 is used for the pitch thruster, but branch 3 is used for yaw? "Badaruddin notes: 'So that's a creative solution. It would be very complicated. This would be another modification in interstellar space to the software.' And getting it right the first time is not just nice to have, it's almost essential. By the time the results of a command come back from the Voyager spacecraft, it might be impossible to deal with the fallout of a failure." LEO: Wow. What do they write it in? Is it assembly language? What is it? STEVE: Oh, yeah. It's all individual, like, they have - they invented their own processor. LEO: Oh, of course. STEVE: They're not using any commercial processor. They invented a computer that reads this code. And that's where he's saying sometimes we're not sure what an instruction does, because somebody typed it in 1970 and may have said, oh, it's lunchtime, I'll get back to you later. LEO: Wow. Wow. This is amazing. STEVE: It is just incredible. LEO: Oh, my god, good stories, yeah. 
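The debugging trick Steve described a moment ago, pulling a memory readout and diffing it against one captured while the spacecraft was healthy, is simple to illustrate. A toy version follows; the values and word size are invented, and this is not Voyager's actual tooling or data:

```python
# Toy illustration of diagnosing a failed memory chip by diffing a fresh
# readout against a baseline captured while the spacecraft was healthy.
# Values and word size are invented, not Voyager's actual data.

def corrupted_addresses(healthy, readout):
    """Return addresses whose stored word no longer matches the baseline."""
    return [addr for addr, (good, now) in enumerate(zip(healthy, readout))
            if good != now]

healthy = [0o1234, 0o5670, 0o0042, 0o7777]
readout = [0o1234, 0o0000, 0o0042, 0o0001]
print(corrupted_addresses(healthy, readout))   # [1, 3]
```

In the real incident, a block of 256 consecutive bad words pointed at a single failed integrated circuit, and the fix was to relocate instructions around that region of memory.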
STEVE: He said: "The Voyager spacecraft are unlikely to survive another decade. The power will eventually dwindle to the point where operations will become impossible." LEO: Is it a nuclear power plant on that? STEVE: Yeah, yeah. It is a nuclear power. It is using decaying plutonium, the heat generated from the particle decay, to heat a thermocouple which generates the electric current to drive all of this. LEO: Oh. So it's a tiny bit of... STEVE: And it's been exponentially decaying for 47 years. LEO: Pretty good. STEVE: Since this thing was first launched. LEO: That's a long time, wow. STEVE: Yeah. So he says: "High data rates, which is to say 1.4 kilobits per second, will only be supported by the current Deep Space Network until 2027 or 2028. After that, some more creativity will be needed to operate Voyager 1's digital tape recorder. Badaruddin speculates that shutting off another heater (the Bay One heater) used for the computers would free up power for the recorder." I should mention that we're only able - the Deep Space Network, as I recall, is only out of Australia. And so it's only during a brief time window once a day as the Earth rotates that the Deep Space Network antenna is able to point at Voyager 1. And so Voyager 1 records its data during the dark period and then dumps it to us when it knows we're able to receive it. So he says: "Turning off the Bay One heater used for the computers would free up power for the recorder, according to the thermal model, but it'll be a delicate balancing act. And, of course, the recent annealing attempt demonstrated the limitations of modeling and simulations on Earth. "So does Badaruddin have a favorite out of the two spacecraft? He replies: 'Well, Voyager 2 is the one that's been flying the longest, and Voyager 1 is the one that's furthest from Earth. So they both have a claim to fame.' He said: 'To use another analogy, they're essentially twins. 
They're basically the same person, but they live different lives, and they have different medical histories and different experiences.'" LEO: What a great line. STEVE: "Badaruddin hopes to stick with the mission until the final transmission from the spacecraft. He said: 'I love Voyager. I love this work. I love what I'm doing. It's so cool. It just feels like I've got the best job at JPL.'" LEO: And he's, I'm sure, in his 60s if not 70s; right? STEVE: Yeah. LEO: He's been with it for 30 years with JPL. STEVE: Yeah. LEO: Wow. STEVE: So I just checked on the Voyager 1 mission status, which is what gave me the title for today's podcast. That intrepid little spacecraft is now so far away that light and radio signals take more than 23 hours to travel in each direction. Not round trip. Each direction. So two days round trip. So it's nearly an entire light-day distant. Yet Voyager 1 - and this is what boggles my mind - is managing to keep itself pointed at our Earth across all that distance, and we still have working bi-directional communication with it. This entire endeavor has been an astonishing example of incredible engineering. The original design - and this, too. The original design was flexible enough and software controlled enough that even though it was designed in the 1970s and launched on September 5th, 1977, all well before the Internet and all of the technology we now take for granted, this machine has endured and has exceeded everyone's expectations many times over. The story does make one principle absolutely clear: No pure hardware solution could have ever done this. No pure hardware solution would still be alive, functioning, and communicating after 47 years of space flight. Nor even could any fixed firmware hybrid hardware/software solution. 
The reason is that none of what has transpired since Voyager 1's original mission was redefined and extended, after it continued to perform so brilliantly, could have been anticipated by NASA's brilliant engineers in the mid-'70s. The sole key to Voyager 1's success today is that to an extremely large degree the original designers of the spacecraft put the machine's hardware under software control. The reason they did that way back in the '70s was different from the reason they're now glad they did that. They created a deeply software-based control system back then because software doesn't weigh anything, and the spacecraft didn't have an ounce of weight to spare. So the engineers of the '70s put their faith in software. And that faith, and the inherent dynamic redesign flexibility it enabled, has given the spacecraft a far longer life than it could have ever otherwise enjoyed because software doesn't weigh anything. LEO: Isn't that amazing. STEVE: And all of that said, yesterday's and today's software is ultimately at the mercy of hardware. You know? If the attitude control systems' capillaries ultimately become clogged with leached and deposited silicon, the spacecraft's ability to maneuver and keep itself pointing at the Earth will eventually be lost. At some point in the not too distant future it will still be alive out there, but we'll have lost contact with one another. You know, what an amazing accomplishment, Leo. LEO: It's a great story. STEVE: I mean, it makes you proud. LEO: It also - there's another lesson which is sometimes constraints force a kind of creativity that's better than if you have unlimited hardware and software, unlimited memory, unlimited storage. STEVE: It's why I'm pointing at that PDP-8 behind me. It came with 4K words of memory. And it was expandable to 16, I think, or 12. It's what I miss about the old days where you really - there was creativity and engineering instead of just asking ChatGPT for a program. LEO: Right. 
STEVE: You know, which it spits out from having ingested the Internet. LEO: Right. STEVE: It is a different world. LEO: Yeah. Fascinating. Well. You know, we've covered this story for a couple of years now, and it's... STEVE: As it's been - that intrepid little probe has been out there. LEO: And there are, I've mentioned already, there are some documentaries. There's one fairly recent one that covers the old folks. STEVE: And I watched it after your recommendation. It was fantastic. Really fun. LEO: So great, these guys. This is their life work. It's just really neat. Amazing. Thank you, Steve, once again, for a great show. As always, Steve hits it out of the park each and every time. I hope you listen. We do the show live on Tuesdays, right after MacBreak Weekly, which usually ends up being somewhere between 1:30 and 2:00 p.m. Pacific, let's say 5:00 p.m. Eastern time, 2200 UTC. You can watch us live on eight different platforms. Thanks to our Club TWiT members, of course, we are on Discord. That's where our Club TWiT members live. But we're also on YouTube, Twitch. We're on X.com. We're on Facebook. We're on LinkedIn. We're on Kick. We're even on TikTok. So you can watch us live there if you're around of a Tuesday evening. If not, of course there's on-demand versions of the show. We have a 64Kb audio version and a full video version you can watch at TWiT.tv/sn. Steve has the 64Kb audio, but he also has the 16Kb audio, which he hand crafts himself every week so that you can listen if you're bandwidth-impaired. And one of the bandwidth-impaired folks is our own Elaine Farris, who does the transcripts. So she downloads that and literally by hand, transcribes everything we say, does a beautiful job of that. STEVE: It's actually why we have the 16Kb. It was for Elaine that I created, I started doing that. LEO: That's so nice. So if you want to read along as you listen or use it for searching, that's also on his site. And of course the full show notes. 
And Steve does a really nice, better show notes than anybody I've ever seen. I mean, it's all written out there, lots of images, links, and you can also get that from Steve's site. You can get it emailed to you, as well. Steve has a couple of newsletters, one of which is the Security Now! newsletter, the show notes. And all you have to do to get on his mailing list is go to GRC.com, that's his website, GRC.com/email. What you're actually doing is validating your email, so that gives you the opportunity to email him. You have to validate it first because he doesn't want spam. It's a very effective technique against that. But you'll see there are two boxes that you could check. They are unchecked by default. But you could check them if you want to get those newsletters. GRC.com/email. Copyright (c) 2024 by Steve Gibson and Leo Laporte. SOME RIGHTS RESERVED. This work is licensed for the good of the Internet Community under the Creative Commons License v2.5. See the following Web page for details: https://creativecommons.org/licenses/by-nc-sa/2.5/.