Transcript of Episode #1039

The Sad Case of Scriptcase

Description: What AI website summaries mean for Internet economics. Time to urgently update Plex Servers (again). Allianz Life stolen data gets leaked. Chrome tests Incognito-mode fingerprint script blocking. Chrome 140 additions coming in two weeks. Data brokers hide opt-out pages from search engines. Secure messaging changes in Russia. NIST rolls out lightweight IoT crypto. Syncthing moves to v2.0 and beyond. "Alien: Earth" - first take. What can we learn from another critical vulnerability?

High quality (64 kbps) mp3 audio file URL: http://media.GRC.com/sn/SN-1039.mp3

Quarter size (16 kbps) mp3 audio file URL: http://media.GRC.com/sn/sn-1039-lq.mp3

SHOW TEASE: It's time for Security Now!. Steve Gibson is here. Lots to talk about. Allianz Life stolen data now leaked, including, yes, Social Security numbers. Oh, boy. Some new features in Chrome, some good, maybe some not so good. And NIST rolls out encryption for IoT devices. That's a good thing. That and a whole lot more coming up next on Security Now!.

Leo Laporte: This is Security Now! with Steve Gibson, Episode 1039, recorded Tuesday, August 19th, 2025: The Sad Case of Scriptcase.

It's time for Security Now!. Yes, indeedy, you wait all week, I wait all week for Tuesday because it's a chance to talk to this brilliant man right here, Mr. Steven "Tiberius" Gibson, the head, the chief, the man in charge of Security Now!. You know, now that I've been playing the piano, I think I can do - I could do Live Long and Prosper.

Steve Gibson: Oh, oh.

Leo: Oh, I've got to work harder on it.

Steve: And now that you're playing the piano, you may live long and prosper.

Leo: Well, one of the things you learn with the piano is to be able to kind of independently move the pinkie and the ring finger because there are ligaments tying them together. I can do it with one hand, but not the other. Anyway, hello, Steve.

Steve: Yo, my friend. So this is actually, I was wrong about it being a day near our birthday. It was August 19th, 2005.

Leo: Oh, man. Happy birthday!

Steve: That we recorded Episode Number 1. And our listeners who are paying attention are very focused on facts, which is I think why they like this podcast. I made some comment about, oh, we'd be 21, and a listener said, no, Steve, on your 20th birthday you are now 20.

Leo: But we begin our 21st year.

Steve: Yes, that is true. But we are now 20.

Leo: Happy birthday, Steve. Yay.

Steve: So, yeah.

Leo: Congratulations. Wow. Twenty years. It doesn't feel like that; does it? Or does it?

Steve: But that's the problem, Leo, is that it would be better if it felt like 20 years because time accelerates as you age. I don't understand why. It's like you're circling the drain or something so your velocity...

Leo: It's going faster.

Steve: Your velocity is increasing.

Leo: Faster and faster.

Steve: And so there's a relativistic time dilation effect, I think.

Leo: Ah. Happy birthday. That's what...

Steve: That just suggests that the next 10 are just going to fly by.

Leo: Whoo.

Steve: Yeah. Something weird happened as I was musing over today's topic.

Leo: Yes.

Steve: Some concepts that we've been toying with, some things we've been talking about the last couple years, gelled into a stronger awareness and statement of what it means that we're never going to get rid of bugs, and we're never going to arrest all the bad guys. Like it shifts the responsibility to where I think it should be. Anyway, I think I've got today a really interesting topic, which starts out being kind of strange. Today's title is "The Sad Case of Scriptcase." Which is, as we'll see, just another application. This happens to be - they call themselves a "low code website generator." The PHP code gets spit out, and you use a UI, a drag-and-drop UI through the browser to build websites.

And as I began looking into it, I kind of got this sinking feeling about what it means. And anyway, we're going to have a great time looking at that, but we're going to - I want to briefly touch on something we talked about last week that I've continued to feel. It's just one last thing I wanted to talk about, about how website summaries affect Internet economics, because as I work to educate myself further after we talked about this last week, I saw that this whole Cloudflare/Perplexity conflict actually was a catalyst for a lot of discussion about sort of not the user-agents and robots.txt files, but what does it mean, like how is the Internet being changed.

Leo: Yes.

Steve: And exactly as you have said, Leo...

Leo: That's the thing. That's the conversation, yes.

Steve: Yes. So I want to touch on that because I found a really interesting article online. Also it's time to urgently update Plex servers again. Allianz Life's stolen data - unfortunately, Leo, yours might be there, too.

Leo: Oh, it is. It's in there, yeah.

Steve: Has been leaked now onto the Internet as they threatened. Chrome is testing an incognito-mode-only, unfortunately, fingerprint script blocking, which is interesting. And Chrome 140, where that will appear in two weeks, also has some other things we're going to talk about. Data brokers, not surprisingly, are hiding their opt-out pages from search engines. Secure messaging changes are coming to Russia. NIST has rolled out their lightweight IoT crypto, and we're going to take a look at that. That's big news, too, because of course Leo you are encumbered, not encumbered, you're encrusted with AI things that are monitoring everything that's in your environment.

Leo: Dongles. You know, Lisa complained, I have to say. She said, "You are wearing another thing that's listening to everything we say?"

Steve: You don't wear them to bed, do you?

Leo: Well, they're in the bedroom because that's where they charge.

Steve: Oh, okay.

Leo: But I've stopped wearing them because she makes an excellent point. There's really probably, I mean, it's one thing to record everything I say. But to record everything everybody around me says is a little, maybe a bridge too far.

Steve: Maybe rude is...

Leo: Rude would be the word.

Steve: The four-letter word you were looking for.

Leo: My interest in these, ultimately as a tool, as an agentic tool, is genuine. I would someday love to have, you know, a little agent that knows everything that's going on. But we haven't resolved the privacy issue.

Steve: Well, and you are running a podcast that, you know, focuses on issues of AI.

Leo: This is my job.

Steve: And so you want to bring feedback. It lets you write off the cost of these, although they're not that expensive.

Leo: No, they're not that expensive, yeah.

Steve: So, no. Anyway...

Leo: Anyway, I feel bad.

Steve: So the point is that, if we get lighter weight crypto, then that means all of those little things that are communicating with other things can do so saving battery power and having much stronger encryption as they're communicating. So that's good. Also Syncthing has moved to v2.0.

Leo: Oh.

Steve: And actually beyond, pretty quickly. We'll talk about that. I have a first take about "Alien: Earth," the first two episodes of which aired last Tuesday, and the third one tonight.

Leo: And I watched so that we could talk about it.

Steve: Good. No spoilers here, but it'll be fun to talk with you about that. And then what can we learn, maybe finally, from yet another critical vulnerability. And of course we've got a fan favorite Picture of the Week that I think everyone will enjoy. So with any luck we've got the hang of this, Leo, after a full 20 years, as we move into year 21.

Leo: I should have baked a cake or something. I just - it's so exciting that we've done this for so long. And just so grateful to you, Steve, because, I mean, folks, those of you who listen to this show, and maybe some of you have listened to all 1039 episodes, just think of the wealth of learning you've got for free from this guy who works so hard to bring this.

Steve: I got an interesting bit of feedback from someone who joined us not that long ago. And he'd heard me referring to other people's feedback about going back and listening from the beginning. So, and he got caught up, and he didn't have anything else going on, so he started doing that. And he's at like Episode 400. And he said, oh, my god, there is so much back here.

Leo: It's so rich.

Steve: That, like, you know, and there were, like, a lot of deep dives, a lot of multi-episode tutorials about what packets are and the notion of deterministic routing and dropping packets and how CPU architecture is created. I mean, you know, everyone sort of thinks, oh, but that was 20 years ago. What could possibly be germane? Well, it's like a lot of that hasn't changed.

Leo: A word of warning, though. If you do want to listen to the entirety, Patrick tells me there are 76 days, 18 hours, 44 minutes, and 7 seconds worth of shows. So put a few months aside to do that. Do you want to know the average length of each show? They're getting longer. It was, or is, the average, an hour, 46 minutes, and 6 seconds per show. But the first one was only 18 minutes. So that brought the average down.

Steve: It's funny, too, because that's where the 18 came from. The reason I thought our birthday was yesterday was 18.

Leo: Oh, the length.

Steve: It was 18 minutes. It was 18 minutes, not 18th of August. So when I checked when I was talking to Benito when we were setting up here, I went back to make sure it was the 18th, and I found, oops, the 19th. Today.

Leo: You can go to the website. They're all there. TWiT.tv/sn1, and you'll find the very first episode. So congratulations.

Steve: "As the Worm Turns," I think was the...

Leo: As we begin, I'm going to say it properly, as we begin our 21st year in doing this show. What a great 20 years. Again, I don't understand how 20 years went by. That's a huge amount of time. I don't get it. But, you know.

Steve: One day at a time. One foot in front of the other.

Leo: Front of the other. These things happen.

Steve: And you get where you're going. That's right.

Leo: Unbelievable. Thank you, Steve. I really, I sincerely cannot thank you enough.

Steve: It's been great. Really been really - and I know from the feedback how much this podcast...

Leo: What a difference.

Steve: ...means to our listeners. I mean, it's really - careers have been launched, and I'm flattered by that.

Leo: Absolutely.

Steve: I'm humbled.

Leo: People grew up listening to this show. People got into the business because of this show, people who have got certificates and promotions and better jobs because of this show. You've done a world of good. And a lot of us...

Steve: Wait, wait. I wouldn't be here without you.

Leo: Well...

Steve: This was your dumb idea 20 years ago.

Leo: I had the idea, but then ever since I've just been a rider on the Steve Gibson train. Thank you, Steve. Really appreciate it. All right, Steve. I have not looked. I have not peeked.

Steve: Okay. Now, this picture needed a longer caption.

Leo: I see that.

Steve: So I gave it the caption, "Although it can prove awkward, escorting a terminated IT worker as they collect their belongings and leave the building is strongly advised."

Leo: Walk them out of the building. Don't let them back into the wire closet. Is that what you're saying?

Steve: That's what I'm saying. And you'll see why.

Leo: Let's see why here. Scrolling up. Oh, dear. Oh, this is your worst nightmare. Oh.

Steve: So what we have is a picture of what was a heavily populated rack of switches and routers of some sort - they just look like, you know, high-density switches - which were very highly populated with green Cat6 networking cables and some white ones, where they've just all summarily been clipped with wire cutters, leaving about a one-inch pigtail off the end of the RJ45 connectors.

Leo: Somebody was really angry.

Steve: So, and in a hurry. They didn't have much time, so they couldn't pull everything out. They just went through and went snip, snip, snip, snip, snip. So as I said, although it can prove awkward, escorting a terminated IT worker as they collect their belongings and leave the building...

Leo: I thought there was going to be a pun with "terminated." I thought maybe there was a pun in here. But no, this is the worst kind of termination.

Steve: Oh, boy.

Leo: Yikes. Wow.

Steve: Okay. So what I learned in following up a bit further on the whole Cloudflare versus Perplexity question...

Leo: Oh, good.

Steve: ...is that the Internet is facing a profound change that's being driven by the presence of AI web-summary generators. When I went poking around to better educate myself about this issue, I discovered that a lot of the portion of the Internet that thinks about such things had blown up over this, you know, over the Cloudflare/Perplexity thing. I mean, it was the catalyst. And by "this" I don't mean the mechanics of bots and user agents, which is what we were focused on last week, but over the fundamental change that users - and that's the key - users are driving in the way information is obtained from the Internet.

I found a terrific posting on a site called "Contrary Research." Last Friday they posted a piece titled "Debating the Open Internet: Cloudflare vs. Perplexity." I've got a link in the show notes for anyone who wants to go to that source material. They examined and explained both viewpoints of the debate, and toward the end they said this. They said: "Regardless of what people may think the Internet should do, it seems clear what it will do, which is to march to the beat of consumer preferences. Just ask Betamax, LaserDiscs, and the Concorde." And, you know, if this podcast's younger listeners are unaware of Betamax and LaserDiscs, that's the point.

They said: "What the consumer wants, the consumer tends to get, consequences be damned. And today the consumer is compelled by agentic Internet consumption. Many people believe the future of the Internet is what's now being called zero-click. Those seeking to bring that future to life see Cloudflare's concern as the worries of a bygone era."

And I think that's the key, more than anything else, and it signals a profound change in the economics of the Internet. While I've been working out, you know, all of this for myself and our listeners, who watch me do it in plain sight on this podcast, trying to figure out what AI means, I've observed that my own use of chatbot AI has evolved into using it as a sort of super Internet search engine. And I know that's what you're doing also, Leo.

Leo: Yeah, yeah.

Steve: And whereas I would once have spent 15 minutes poking around the Internet looking for an answer, starting from a page of Internet search engine result links, today I often start and finish my search simply by asking ChatGPT.

Leo: Yeah.

Steve: That's often all I need. You know, I'll get a satisfying answer almost immediately, and that will often be the end of my quest. The reason this represents a massive change in the economics of the Internet is that the Internet is still by and large advertisement driven. And in the old days - meaning before last year - yeah. Those 15 minutes of poking around, which no longer happens for me, I would have been exposed to many advertisements which would have served to finance the sites I was visiting. That's the traditional economic model that AI summarizing has flipped on its head and killed.

TechCrunch's August 6th headline was: "Google denies AI search features are killing website traffic." Whether or not and to what degree that might be true, the fact that it's a headline is the message. In mid-April, Forbes wrote: "Roughly 60% of searches now yield no clicks at all, as AI-generated answers satisfy them directly on the search results page. In addition, Google's AI Overviews have displaced top-ranked links by as much as 1,500 pixels - which is about two full screen scrolls on a desktop and three full screen scrolls on a mobile device - significantly lowering click-through rates even for highly ranked pages. Recent research has shown that AI Overviews can cause a whopping 15-64% decline in organic website traffic, based on industry and search type. This radical change is causing marketers to reconsider their whole strategy regarding digital visibility."

Four months ago, over in the SEO (search engine optimization) Reddit, a poster wrote: "In the recent months, one of our top performing website's visits decreased by 66%. And after some investigation, we noticed everything is going well. We still have the same positions and the same click-through rates. However, the only issue we see is that websites are not getting searches. It dropped by, like, more than 50%. When we search for it, we see it still on the top like normal. Are people not using Google search as often and relying more on AI? Are we missing something? Please advise and let us know if you are experiencing something similar."

And that posting began a thread that was followed up on by many people saying variations of "ChatGPT, Perplexity, and Google's AI Overviews." One person wrote: "I recently performed a study on SERPs (Search Engine Results Pages)." He said: "It's quite obvious that three things are happening: Zero-click is a real thing. Every person in the study expanded the AI Overview, and nobody opened the citation links. Second, some people scrolled to see up to the top five links, and fewer opened them. And finally, most people trust Google entirely, and don't fact check the AIOs." That's AI Overviews.

He said: "Those who don't trust Google were showing signs that they eventually will." He said: "(One tester said out loud 'Hmm. I don't feel I trust these entirely,' but out of all the queries they performed, they only briefly read the first result once)." And he finished: "And yes, Google has lost some market share to ChatGPT, Perplexity, Claude, and others."

Paraphrasing what the Contrary Research site said: Consumers usually wind up dictating what wins and what loses. It's quickly become clear - I'm doing it, too - that consumers simply want quick answers to their questions. They want them quickly and without a lot of muss and fuss. Given that so much of the web has been financed by search engines driving traffic to websites, which in turn generate revenue for themselves by presenting visitors with advertisements, Large Language Model Chatbots appear to be driving a generational change in the way the Internet finances itself. The political strategist James Carville is credited with coining the phrase "It's the economy, stupid." Meaning: "Nothing else matters." So it's going to be interesting to see what shape the next-generation Internet economy takes. And Leo, I have no idea what's going to happen.

Leo: No. We have to solve this, obviously, because the other side of it is, as we mentioned last week, AI needs content.

Steve: Yeah.

Leo: So if people who create content aren't getting paid to create content and disappear, the AI's going to suffer from it, too. So that's not the solution.

Steve: Yeah. I mean, it really, I mean, and there's been sort of a precarious feeling. I can't remember what the context was when we were talking about this years ago. But there was like a question of, you know, do we need all these websites? Like there seemed to be like just so many junky websites, just to show us ads. And it's like, whoa, you know, do we...

Leo: That, by the way, part of the argument is look what, you know, yeah, okay, we created an ad-supported Internet. But look what happened as a result.

Steve: Right.

Leo: It became the Internet of Crap.

Steve: Yes.

Leo: Plus, you know, people are blaming Google for a loss of traffic, but a lot of this also is because people have decided to monetize by putting themselves behind paywalls, which people get around, but definitely is going to hit your traffic. So...

Steve: I think I've seen the paywalls getting stronger, too.

Leo: Oh, yeah.

Steve: Yeah. I think...

Leo: It's a cat-and-mouse battle. Look at YouTube. I mean, it's back and forth and back and forth all the time. I just don't know what the answer is. I think the ad model is not a good answer. This is what we've concluded on TWiT. It's why we have the Club. But we don't want to do a paywall, either, because I don't want to - I want people who want to see the show, for instance, to be able to see it for free, ad supported for free. But if advertisers abandon podcasts, which they're, by the way, kind of in the process of doing right now, then I think the Club is the only sensible way to go forward. We need - we can't do it for free. That's the problem. And, you know, look, I'm on both sides of this equation because I'm a content creator, and I make my living on the support of our audience, whether through advertising or subscription. So I don't know what the answer is.

Steve: And keeping one's wife happy is important, too.

Leo: Yeah. Yeah. Yeah. That's true, too. I mean, I'm a believer in what we do, and I think it's really important what we do. I think the content we create is really important. I probably would still do it for free, but I would have to have another job. I have to make a living. I have to pay rent, to keep the lights on. Have to pay our hosts. So it's really an interesting challenge. And at the same time, yes, people want these AI summaries. People want what AI is giving them.

Steve: And I do, too. I mean...

Leo: Me, too.

Steve: ...it is a shortcut. It is, you know, and presumably - I was just telling you before we began recording, that I asked AI a very - I just put it into my Google search, a very specific question, and I got Google's AI Overview, and it was definitively incorrect. And so we've got to fix that, too. I mean...

Leo: Well, this is a problem with Google particularly because Perplexity and ChatGPT and others, while they do hallucinate, seem to do a much better job than Google does. Google's...

Steve: Well, you have to imagine that they were in a hurry to get something up on the page, too.

Leo: Maybe that's it. Yeah, they rushed it. I mean, I pay for a Google replacement called Kagi. We've interviewed the founder and creator of Kagi. It's a public benefit corporation. He was on Intelligent Machines a couple of weeks ago. And they have a Perplexity-style search orchestrator that's really, really, really useful. And, you know, the future of search is clearly doing this. Even despite the fact that Google's AI summaries are awful, there are good ways to do this. So we've got to find a way to make this work all around. I think, you know, the problem is AI companies, as much money, as much funding as they have, they are not an infinitely deep well of funding for content creators, and certainly The New York Times and Reddit and others have gone to them and said, "Give us money."

Steve: Well, they can't even fund themselves at this point.

Leo: Yeah, yeah. So that's not it. Part of this is that we're living in a fool's paradise. We thought the Internet was free. And for 20 years we've told people, we've communicated that, oh, it's free. It's not. It's not. So we need to find a way to make it work. I don't know if that's going to happen. I don't know what's going to happen. We live in interesting times, Mr. Gibson.

Steve: We do. And we're here to chronicle it to our best ability.

Leo: Yes, yes.

Steve: Last Thursday, Plex notified some of its users to urgently update their media servers due to a recently patched security vulnerability. Or, you know, a patch has been made available, but users haven't yet updated. So they've got to follow through and update; otherwise, you know, that patch sitting over at Plex doesn't do your server any good. Although the undisclosed vulnerability doesn't yet have an assigned CVE-ID, Plex did indicate that it impacts their Media Server versions 1.41.7.x to 1.42.0.x.

They said: "We recently received a report via our bug bounty program" - and so props for Plex having one - "that there was a potential security issue affecting Plex Media Server" through those ranges. "Thanks to that user, we were able to address the issue, release an updated version of the server, and continue to improve our security and defenses. You've received this notice because our information indicates that a Plex Media Server owned by your Plex account is running an older version of the server. We strongly recommend that everyone update their Plex Media Server to the most recent version as soon as possible, if you have not already done so."

So Plex Media Server version 1.42.1.10060 has this vulnerability patched and can be downloaded from the server management page or the official downloads page. And again, props to Plex for also being so proactive. You know, that is very nice to see. Our long-time listeners will recall, and actually it wasn't that long ago, Plex has experienced its share of critical and high-severity security flaws over the years.

It was in March of 2023 that CISA tagged a then three-year-old remote code execution flaw, which was numbered CVE-2020-5741, in the Plex Media Server as being actively exploited in attacks. That was after three years, actively exploited. And as we're going to see a little bit later, hackers don't bother with them, and there are many of them. So there were a bunch of them. Plex had explained two years earlier, at the time it released the patches, that successful exploitation can allow attackers to cause Plex server to execute malicious code.

Our listeners will also likely recall that it was a long-neglected Plex server running at a LastPass developer's home that was eventually found to be the cause behind the devastating LastPass security breach that led many of us to decide that it was finally time for us to change our password manager allegiances. The engineer had never updated their Plex server. This allowed the bad guys to surreptitiously install a keystroke logger onto the developer's PC, which then allowed them to obtain his LastPass authentication credentials, then compromise LastPass's network, their corporate vault, and their backups. So Plex is being much more proactive today, which is great to see. And anyone who may still be using a Plex server would be, should I say "well served" to make sure that they're running the latest release.

Three weeks ago we noted that Allianz Life's network and servers had been breached, and that they had lost control of their customer's data. So last week we learned that hackers had released that stolen data, exposing 2.8 million records worth of sensitive information on Allianz Life's business partners and their customers in ongoing Salesforce data theft attacks. What we learned last month was that Allianz Life had suffered a data breach when the personal information for what they said was the "majority" of its 1.4 million customers was stolen from a third-party, cloud-based CRM system on July 16th. Although Allianz Life did not name the CRM partner, it was reportedly part of a wave of Salesforce-targeted thefts carried out by the ShinyHunters extortion group. Yes, the ShinyHunters.

BleepingComputer reported: "Over the weekend, ShinyHunters and other threat actors claiming overlap with 'Scattered Spider'" - now, remember that Scattered Spider are the very potent social engineering guys - "and 'Lapsus$,' another group, created a Telegram channel called 'ScatteredLapsuSp1d3rHunters' to taunt cybersecurity researchers, law enforcement, and journalists while taking credit for a string of high-profile breaches. Many of these attacks had not previously been attributed to any threat actor, including the attacks on the Internet Archive, Pearson, and Coinbase.

"One of the attacks claimed by the threat actors is Allianz Life, for which they proceeded to leak the complete databases that were stolen from the company's Salesforce instances. These files consist of the Salesforce 'Accounts' and 'Contacts' database tables, containing approximately 2.8 million data records for individual customers and business partners, such as wealth management companies, brokers, and financial advisors. The leaked Salesforce data includes sensitive personal information, such as names, addresses, phone numbers, dates of birth, and Tax Identification Numbers, also known as Social Security Numbers, as well as professional details like licenses, firm affiliations, product approvals, and marketing classifications."

Leo: Oh, my god.

Steve: It's just awful. "BleepingComputer has been able to confirm with multiple people that their data in the leaked files is accurate, including their phone numbers, their email addresses, their tax IDs, and other information contained in the database." So again, props to Bleeping for, like, following up and actually contacting some of these people and saying, hey, noticed your data among that. Is that correct? And they said, oh, yeah, it is, unfortunately. "BleepingComputer contacted Allianz Life about the leaked database, but was told that they could not comment as the investigation is ongoing." And we know how these things go. It will be for years.

Leo: Yeah. We're never going to comment.

Steve: That's right. That's right. Well, like until it all dies down, and then no one cares anymore. And it's like, oh, well, who cares now?

They finish, saying: "The Salesforce data theft attacks are believed to have started at the beginning of the year, with the threat actors conducting social engineering attacks to trick employees into linking a malicious" - get this, Leo - "tricking employees into linking a malicious OAuth app with their company's Salesforce instances." That's just diabolical. "Once linked, the threat actors used the connection to download and steal the databases, which were then used to extort the company through email."

Leo: So it's social engineering.

Steve: Oh, yes, yeah. It was the, who are they, the Scattered Spider are the social engineering guys, you know, in this team. So bad guys convince an employee with sufficient access privileges that they are the company's IT department, or maybe an outside agency that's been tasked by the company with increasing the company's security profile. So the unwitting employee is instructed to download an OAuth application to strengthen their authentication, which they'll then use to authenticate. It would never occur to the employee that the OAuth app itself is malicious, and it's been modified, and that its use will be creating a backdoor for the bad guys to use to get in. So once again we see that, despite all of our fancy technology, it all depends upon people doing the right thing.

Leo: Yeah. And you've got to train them, and it's hard. It's hard.

Steve: I would argue that that's a four-letter word. The word we need is "impossible."

Leo: Yeah.

Steve: Which has many more letters. Unfortunately, it's the nature of security, right, that every single person must never even once do the wrong thing, since all that's needed is a single slip-up. And, you know, it's really not fair that the good guys must always be perfect every time, while the bad guys only need to find - or create - a single mistake once. I mean, the asymmetry of this is insane.

Leo: It's funny, that's exactly - one of our listeners was, maybe still is, responsible for security at West Point, the U.S. Military Academy. I said, "That's a terrible job." He said, "Yeah. I have to be perfect. I cannot make one mistake."

Steve: Yeah.

Leo: Unbelievable.

Steve: Yeah. BleepingComputer even had a conversation with the perps, which I love. They wrote: "Extortion demands were sent to the companies via email and were signed as coming from ShinyHunters. This notorious extortion group has been linked to many high-profile attacks over the years, including those against AT&T, PowerSchool, and the Snowflake attacks. While ShinyHunters is known to target cloud SaaS applications and website databases, they're not known for these types of social engineering attacks, causing many researchers and the media to attribute some of the Salesforce attacks to Scattered Spider.

"However, ShinyHunters told BleepingComputer that the ShinyHunters group and Scattered Spider are now one and the same. They said: 'Like we have said already repeatedly, ShinyHunters and Scattered Spider are one and the same. They provide us with initial access, and we conduct the dump and exfiltration of the Salesforce CRM instances. Just like we did with Snowflake.'"

Leo: It's synergy, man. It's a corporate synergy.

Steve: Yeah. We each have our roles. That's right. They said: "It's also believed that many of the group's members share their roots in another hacking group known as Lapsus$, which was responsible for numerous attacks in 2022 and 2023, before some of their members were arrested. Lapsus$ was behind breaches at Rockstar Games, Uber, 2K, Okta, T-Mobile, Microsoft, Ubisoft, and NVIDIA. Like Scattered Spider, Lapsus$ was also adept at social engineering attacks and SIM swapping attacks, allowing them to run roughshod over the IT defenses of billion- and trillion-dollar companies."

And they finish: "Over the last couple of years, there have been many arrests linked to all three collectives, so it's not clear if the current threat actors are old threat actors, new ones who have picked up the mantle, or are simply utilizing these names to plant false flags." So...

Leo: How do you feel about publicizing these guys, though? Maybe they're in it for the money; but it feels like, especially when they create a Telegram channel to taunt the journalists, that they really love the publicity.

Steve: Yeah.

Leo: They ate this stuff up.

Steve: Yeah.

Leo: I mean, it was us, Scattered Spider, heh heh heh.

Steve: Yeah. I think the good news is it probably stays within a relatively small audience.

Leo: Yeah.

Steve: I mean, we're talking about it. Bleeping Computer is.

Leo: It's security experts who know the names, not normal folks.

Steve: And, you know, no one cares unless they end up being victims. But the story is interesting, I think, because it is another in what has now become a long string of examples of the way modern attacks are now occurring. It is now the people, not only the technologies, that present the greatest source of vulnerabilities. Therefore, it's the people who are now being attacked.

Leo: Yeah.

Steve: You know, the only recourse I can imagine for any large company with many employees - remember I famously said back in the early days of that Sony breach, "I don't want that job" of, like, trying to secure Sony Entertainment. Just like your West Point guy.

Leo: He loved his job, by the way. I don't want to imply that he didn't like his job. He loved it. But he said it's stressful.

Steve: But it's challenging.

Leo: Yeah.

Steve: Yeah. So the only recourse I can imagine for any large company with many employees, you know, each and every employee of which presents a potential vulnerable point of entry for bad guys, is to unfortunately assume that inadvertent misconduct will occur on the part of any employee. So work to design a network architecture that inherently mistrusts its own users. That's the way our operating systems are now designed. You know, there's a well-understood concept of a system administrator versus its user.

Of course, it's easy for me to sit back here, you know, and armchair quarterback the network architecture that enterprises should design. I don't have the task of actually doing so, and I cannot imagine the difficulty of actually doing so. But for what it's worth, what I'm sure of, what all the evidence teaches us, is that the designers of any contemporary enterprise's information systems must design their systems under the assumption that malicious users will be authenticated on that enterprise's network.

It should be clear that having an impenetrable perimeter defense is important. But it's now equally clear that the battle has moved inside and is now being waged against individual employees inside that perimeter. The malefactor's goal is to penetrate an employee's human defenses, and from there to move laterally into the enterprise's network. This means that today's and tomorrow's rational security design needs to be resilient against attacks from the inside.

Leo: Well, and if you'll forgive me, but that's why you see advertisers like ThreatLocker promoting Zero Trust because that would have worked. That would have stopped it here.

Steve: And our Thinkst Canary.

Leo: The Thinkst Canary, which at least if somebody gets in, you would know that they're wandering around.

Steve: Yup.

Leo: And advertisers like Hoxhunt and other advertisers that do training. But I think this is where Zero Trust is great because you could install that OAuth as an employee, but it wouldn't be useful as malware until somebody with a higher level authorized it; right? And so you can - it's a lot easier to say, well, look, our customer service reps, as good as they are, we can't fully trust them. Anything that's going to roam the network has to be authenticated by somebody with a lot more skills and training.

Steve: Yeah, yeah.

Leo: You could see - it's funny because I've watched the flow of advertising. And you could see how it's moved more in that direction.

Steve: Well, and we know that the users chafe; right? It's because...

Leo: Oh, yeah. I don't blame them.

Steve: ...they used to, what do you mean, I have to authenticate to use the printer? I never used to have to do that.

Leo: Right.

Steve: You know? What do you mean, I have to, you know, I have to use my entry card to go use the bathroom? I never used to have to do that. You know, what, are you tracking me around the building? Yes.

Leo: Yes.

Steve: If something bad happens, we need to know where you were.

Leo: It's a rough world out there. Most of our listeners are CISOs or IT professionals or people who really are dealing with MSPs day in, day out. You have our support and our sympathy, yeah.

Steve: Yeah. And our next sponsor.

Leo: Oh. Well, I'm glad to tell you about our next sponsor. We have great sponsors. It's funny because for a long time, remember it was, well, our very first sponsor, Astaro, was about perimeter defenses. That was the gold standard. But over time that's evolved.

Steve: It's moved in.

Leo: It's not enough to just keep people out because you can't guarantee that's going to happen. Now, I've been using that EFF tool to see how secure my browser is. And I'm very happy to say that Safari blocks fingerprinting. I was really pleased to see that. And all the Safari derivatives block fingerprinting.

Steve: Cool.

Leo: Chrome, on the other hand, almost feels like they encourage it.

Steve: Fingerprinting? What's that? So I just fired up Chrome to see what version was shipping. It's been quite a while since I last launched it because that's not what I use. So I got the big What's New announcement page, you know, because they're like, oh, he hasn't used me for a while, so let's tell him. Help > About showed that we're currently at major v139. And the goodie that I want to talk about isn't due to land until major v140. Chrome 140 entered into beta two weeks ago, on August 6th. And it is scheduled to begin rolling out to its general audience two weeks from today, on Tuesday, September 2nd. So two weeks from today Chrome will have - get this - Script Blocking in Incognito Mode.

Leo: Oh.

Steve: Yeah.

Leo: That's interesting.

Steve: Yeah. The overview says - and what they're doing is they're doing it in an interesting way, which is kind of not what we want, but okay. We're going to understand this. Their overview says: "Mitigating API Misuse for Browser Re-Identification." Okay, mitigating API misuse for browser re-identification.

Leo: Isn't that what we're talking about with fingerprinting? That's fingerprinting; right?

Steve: Yes. That is fingerprinting. That's the fancy way of saying, yeah, like, you know, deleting your cookies and then getting your browser re-identified, you know, "otherwise known as Script Blocking," they said, "is a feature that will block scripts engaging in known, prevalent techniques for browser re-identification in third-party contexts. These techniques typically involve the misuse of existing browser APIs" - meaning, you know, JavaScript stuff that we talked about, like battery level and canvas drawing, you know, subtle changes in the pixels that end up being rendered - "to extract additional information about the user's browser or device characteristics. In other words, a fingerprint."

They said: "This feature uses a list-based approach" - okay - "where only domains marked as 'Impacted by Script Blocking' on the Masked Domain List (MDL)" - we'll explain all this in a minute - "in a third-party context will be impacted." In other words, blocked. They don't want to say that for some reason. Yeah, uh-huh.

"When the feature is enabled, Chrome will check network requests against the blocklist. We," says Google, "will use Chromium's subresource_filter component, which is responsible for tagging and filtering subresource requests" - meaning third-party - "based on page-level activation signals, and a ruleset is used to match URLs for filtering." So this is a little inside baseball, you know, developer jargon.

They said: "The enterprise policy name is PrivacySandbox FingerprintingProtectionEnabled." Okay. So the section headlined "Motivation" says: "Browser re-identification techniques have been extensively studied by the academic community, highlighting their associated privacy risks. We want to improve user privacy in Incognito mode" - but not otherwise - "by blocking such scripts from loading."

Okay. So just to be clear, this is not at all what, for example, Safari, to your point, Leo, or the Brave browser is doing. Brave is deliberately fuzzing the results of various fingerprintable modern browser techniques to prevent any and all known AND unknown first- and third-party fingerprint tracking against a user's wishes. What Chrome is doing is better than nothing, but it's a far cry from what Brave is doing. In the first place, Chrome is only doing anything for users who are in Incognito mode. And when in Incognito mode, based upon Google's description, Chrome will cross-reference the domain names of any third-party resource fetches against what they're calling their MDL, their Masked Domain List. And if a cross-reference is found, if a match is found, then they will proactively block the execution of any scripting by any resource returned from a fetch from any of those domains.
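As a sketch of that list-based approach - with hypothetical domain names, and a deliberately rough domain-matching rule standing in for Chromium's real subresource_filter logic and Public Suffix List handling - the blocking decision looks something like this:

```javascript
// Hypothetical sketch of Chrome 140's list-based script blocking.
// The real MDL is maintained on GitHub by Disconnect.me; these
// entries and the matching rule below are illustrative only.

const maskedDomainList = new Set([
  "tracker.example",      // hypothetical listed tracker
  "fingerprint.example",  // hypothetical listed fingerprinter
]);

// Reduce a hostname to a registrable domain. Very rough: Chromium
// actually consults the Public Suffix List for this.
function registrableDomain(hostname) {
  return hostname.split(".").slice(-2).join(".");
}

// Block a script only when (a) the user is in Incognito, (b) the fetch
// is third-party relative to the top-level page, and (c) the request's
// domain appears on the masked domain list.
function shouldBlockScript(topLevelUrl, requestUrl, incognito) {
  if (!incognito) return false;
  const topDomain = registrableDomain(new URL(topLevelUrl).hostname);
  const reqDomain = registrableDomain(new URL(requestUrl).hostname);
  const thirdParty = topDomain !== reqDomain;
  return thirdParty && maskedDomainList.has(reqDomain);
}

console.log(shouldBlockScript(
  "https://news.example/article",
  "https://tracker.example/fp.js",
  true)); // true: third-party, listed, and in Incognito
```

The point being that the block only fires for third-party fetches, in Incognito, against previously listed domains - nothing is fuzzed, and nothing outside the list is touched.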

So on the one hand, it's better than Brave in that all potentially troublesome scripting is blocked, completely blocked, you know, scripting just doesn't work, rather than allowed to run and be fuzzed. But on the other hand, it only applies while the user is viewing websites in Incognito mode, and it only blocks previously known and, you know, blacklisted troublesome domains. So, you know, it's better than nothing. And it also makes sense that Chrome would do this, since Chrome's MDL is already being used to deliberately obscure the user's IP, which is an extremely cool and useful feature, which I don't think Google and the Chromium developers have received enough credit for.

I've previously noted that, despite any other measures we users might take, our IP addresses are likely still providing the strongest possible tracking signal, since they so very rarely change. Given that, it's reasonable to ask what's the point of jumping through all of those other hoops with anti-fingerprinting and cookie erasing and all, if all of our browser fetches to third-party trackers will be made from the same IP?

The Chromium developers clearly understood this. The MDL, the Masked Domain List, is a list of domains from which the Internet IP address of someone using Incognito mode will be masked. In other words, Google actually takes it upon themselves to proxy any requests a user in Incognito mode might make to any third-party domain on the MDL. Meaning that Chrome doesn't request that domain directly. It requests it through Google so that the domain sees Google making the request, not the user.

That MDL is a public, GitHub-hosted list of domains that Chrome treats as higher risk for cross-site tracking. So when one of those domains loads in a third-party context in Incognito, Chrome provides extra identity protection by routing the request through privacy proxies so that untrusted third-party sites, what they see are requests arriving from what Google calls a "masked IP," rather than the user's actual Internet address. Which is extremely cool.

As for the MDL, Google defines their inclusion criteria for participating in the list, and Disconnect.me evaluates and maintains the list for the Chromium Project following the criteria that Google laid down. It's published publicly and maintained on GitHub. You know, that "naughty list" contains domains that commonly run as a third party across multiple sites - and, you know, basically they're trackers - and either participate in ads and marketing data flows, so serving/targeting/measuring ads or collecting user data, or which appear to collect device and user data that might be usable for cross-context re-identification. I mean, so they're working hard to the degree that they are to, you know, shut that down in third-party context. Additionally, Chrome independently detects widely used JavaScript fingerprinting patterns, which can also get a domain listed.

The IP proxying has been in place for most of this year. But someone must have also noticed that it would still be possible to run a powerful fingerprinting script through a proxy which would only be obscuring the user's IP address. In other words, sure, the proxy's good for masking the IP, but if you are still allowing fingerprinting through the proxy, then you're still allowing some way of tracking. So what's being added in two weeks to Chrome 140 is that, in addition, Incognito mode will be blocking third-party scripting in addition to the existing IP proxying. So props to Google and the Chromium team; you know? These are useful, good additions.

And we're getting a couple other things in two weeks from Chrome 140. Anyone who's ever been annoyed, as I have been, by the need to explicitly write JavaScript to encode text or binary data into URL-safe Base64 ASCII text and also go the other direction, decode Base64 back into its original form, essentially by hand, in JavaScript, will be happy to see this.

Google writes: "Base64 is a common way to represent arbitrary binary data as ASCII. JavaScript has Uint8Arrays to work with binary data, but no built-in mechanism to encode that data as Base64, nor to take Base64'd data and produce a corresponding Uint8Array." They said: "This is a proposal to fix that. It also adds methods for converting between hex strings and Uint8Arrays." So that's a handy new feature coming to JavaScript in two weeks in Chrome. And, you know, it is part of the W3C standard. So, I mean, W3C just keeps throwing all this stuff out there, and the various browsers are, you know, moving forward at whatever pace they are to incorporate the standard as we go, which is why there are always tables of which browser versions support which features and not, because everybody's always playing a game of catch-up because the W3C never stops throwing new stuff out there.
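For reference, the methods in question are Uint8Array.prototype.toBase64() and Uint8Array.fromBase64(), plus toHex() and fromHex(), from the TC39 arraybuffer-base64 proposal. This minimal sketch uses the built-in when it's present and otherwise falls back to a small hand-rolled encoder, so it runs on engines that haven't caught up yet:

```javascript
// New (per the TC39 proposal) built-ins shipping with Chrome 140:
//   Uint8Array.prototype.toBase64()  /  Uint8Array.fromBase64(string)
//   Uint8Array.prototype.toHex()     /  Uint8Array.fromHex(string)

const bytes = new TextEncoder().encode("Security Now!");

// Prefer the built-in; otherwise fall back to a minimal Base64 encoder
// so this sketch runs on engines that lack the new methods.
function toBase64(u8) {
  if (typeof u8.toBase64 === "function") return u8.toBase64();
  const alphabet =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  let out = "";
  for (let i = 0; i < u8.length; i += 3) {
    const [a, b, c] = [u8[i], u8[i + 1], u8[i + 2]];
    out += alphabet[a >> 2] + alphabet[((a & 3) << 4) | (b ?? 0) >> 4];
    out += b === undefined ? "=" : alphabet[((b & 15) << 2) | (c ?? 0) >> 6];
    out += c === undefined ? "=" : alphabet[c & 63];
  }
  return out;
}

console.log(toBase64(bytes)); // prints "U2VjdXJpdHkgTm93IQ=="
```

Once the built-ins are everywhere, the feature-detecting wrapper disappears and it's just `bytes.toBase64()` - which is the whole point of the addition.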

And here's a second biggie regarding something we've just been talking about recently: a web browser directly accessing the network of its own hosting machine, right, through localhost. Google writes: "Chrome 140 restricts the ability to make requests to the user's local network" - not just localhost but its local network - "requiring a permission prompt." They wrote: "A local network request is any request from a public website to a local IP address or loopback, or from a local website such as an intranet to loopback.

"Gating the ability for websites to perform these requests behind a permission mitigates the risk of cross-site request forgery attacks against local network devices, such as routers. It also reduces the ability of sites to use these requests to fingerprint the user's local network. This permission is restricted to secure contexts. If granted, the permission also relaxes mixed content blocking for local network requests, since many local devices cannot obtain publicly trusted TLS certificates for various reasons."

So all of this is great. This means that IPs within the same network as the browser's host machine will require an affirmative granting of permission before Chrome 140 and later will fetch anything from that local IP. For example, I currently access my cable modem at 192.168.100.1, and my pfSense firewall is at 192.168.0.1, and our ASUS router is at 192.168.1.1. So in two weeks, any attempt to access those devices through my browser, which is the way we access them, right, is through browser UIs, should produce some sort of "Are you sure?" permission request, like telling me what my browser is trying to do and saying something about this is on your own network. Do you want to go there? You know, is this what you're intending? So that seems, given how infrequently we need to do it from our browser, minimally intrusive and definitely worthwhile.
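A rough sketch of the address-space classification involved - simplified to IPv4 and a handful of ranges; the actual rules come from the Local Network Access spec, which also covers IPv6 and other special ranges:

```javascript
// Simplified sketch of the address-space check behind Chrome 140's new
// permission prompt. IPv4 only; the real spec handles more cases.

function addressSpace(ip) {
  const [a, b] = ip.split(".").map(Number);
  if (a === 127) return "loopback";                    // 127.0.0.0/8
  if (a === 10) return "local";                        // 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return "local"; // 172.16.0.0/12
  if (a === 192 && b === 168) return "local";          // 192.168.0.0/16
  if (a === 169 && b === 254) return "local";          // 169.254.0.0/16 link-local
  return "public";
}

// A "local network request" - one that triggers the permission prompt -
// is a request from a less-private address space to a more-private one.
function needsLocalNetworkPermission(pageIp, targetIp) {
  const rank = { public: 0, local: 1, loopback: 2 };
  return rank[addressSpace(targetIp)] > rank[addressSpace(pageIp)];
}

console.log(needsLocalNetworkPermission("203.0.113.7", "192.168.1.1"));
// true: a public website reaching into a private router address
```

So a public page poking at 192.168.1.1 gets gated behind the prompt, while one public site fetching from another public site proceeds as always.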

One of the things that the testers of the DNS Benchmark, you know, the one I'm working on, have noticed, since the Benchmark has always tested remote DNS resolvers to see whether they would block or resolve private IPs - none should - is that the once-common prevention of what's known as DNS rebinding attacks has apparently disappeared, fallen by the wayside from the public Internet.

A rebinding attack is something which we actually talked about a few months ago, when a public domain name was returning the IP 127.0.0.1. That can be used as a type of black hole to kill traffic, but doing that is not safe, and that was a malicious domain that we were talking about at the time. Returning 0.0.0.0 is a much better solution for null-routing a domain name. If a public domain were to return, for example, 192.168.1.1, then asking a browser page to connect to a public-appearing domain name would cause it to connect to a network's local ASUS router, in my case, which is almost certainly not what you would expect, or want, to have some JavaScript running in your browser to be doing.

So the abuse of this is known, as I said, as a DNS rebinding attack; and there is no clear reason for resolvers of public DNS domains to return non-routable IPs which have been reserved for use within private networks. But unfortunately, nearly all of the public resolvers out on the Internet will now happily do that; only a very few still filter those answers out. So I'm glad that Chrome is now taking proactive measures, and hopefully Firefox and other browsers will follow, because there was an attack we talked about a few years ago involving other protocols which routers were involving themselves in.

Essentially, routers were proxying some other protocols, and if you could determine the address of the router - that is, the user's gateway on their local network - you would be able to use other ports on that gateway and create some security vulnerabilities, which, you know, put all this on the map. And people were saying, okay, browsers should not be poking around behind their users' backs on their own local networks. And look how long it's taken for anything to happen to begin to fix that.

The Markup's headline was "We caught companies" - and this is not surprising, but the number of companies is somewhat surprising. "We caught companies making it harder to delete your personal data online." Now, I suppose we shouldn't be surprised, but I thought it was interesting. The article's tease said: "Dozens of companies are hiding how you can delete your personal data, The Markup and CalMatters found. After our reporters reached out for comment, multiple companies have stopped the practice." So this is why it's good to have people poking at things and looking at things and reporting on things and basically embarrassing companies into changing their practices. Unfortunately, unless we have that, companies will do it until they're found out.

The Markup wrote, explaining what they found. They said: "Data brokers are required by California law to provide ways for consumers to request their data be deleted. But good luck finding them."

Leo: Yep.

Steve: And wait till you hear the number of them, Leo. They wrote: "More than 30 of the companies which collect and sell consumers' personal information hid their deletion instructions from Google, according to a review by The Markup and CalMatters of hundreds of broker websites. This creates one more obstacle for consumers who want to delete their data. Many of the pages containing the instructions, listed in an official state registry, used code to tell search engines to remove the page entirely from the search results." Not something that can happen by mistake. "Popular tools like Google and Bing respect the code by excluding pages when responding to users."
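The "code" in question is typically a robots noindex directive, either a meta tag in the page or an X-Robots-Tag response header. Here's a hypothetical sample page and a toy detector - it assumes the name attribute comes before content, and real crawlers of course parse the full HTML and the response headers:

```javascript
// Hypothetical opt-out page carrying the kind of directive The Markup
// found: a robots meta tag telling search engines not to index it.
const optOutPage = `
  <html><head>
    <meta name="robots" content="noindex, nofollow">
    <title>Delete My Data</title>
  </head><body>...</body></html>`;

// Toy check: does the page carry a robots meta tag with "noindex"?
// (Assumes name= precedes content=; a real parser wouldn't.)
function hidesFromSearchEngines(html) {
  const m = html.match(/<meta\s+name=["']robots["']\s+content=["']([^"']*)["']/i);
  return m !== null && m[1].toLowerCase().includes("noindex");
}

console.log(hidesFromSearchEngines(optOutPage)); // true for this sample page
```

Which is why such a page can technically exist, satisfying the letter of the law, while never surfacing in a Google or Bing search.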

Okay. So upon reading that, I was tempted to suggest that users ask Perplexity. But anyway: "Data brokers nationwide must register in California" - get this. "Data brokers nationwide must register in California under the state's Consumer Privacy Act, which allows Californians" - like you and me, Leo - "to request that their information be removed, that it not be sold, or that they get access to it. After reviewing the websites of the 499 data brokers registered with the state..."

Leo: Wow.

Steve: 499.

Leo: It's a good business to get into. Anybody can do it.

Steve: That's right. "We found that 35," they wrote, "had code to stop certain pages from showing up in searches." Okay. There are 499 data brokers registered with the state. Who said the data broker business was not booming? We didn't. But in any event, 35 of those 499 had website pages containing search engine non-indexing flags.

The Markup said: "According to Matthew Schwartz, a policy analyst at Consumer Reports who studies the California law governing data brokers and other privacy issues, while those companies might be fulfilling the letter of the law by providing a page consumers can use to delete their data, it means little if those consumers cannot find the page. Matthew said: 'This sounds to me like a clever workaround to make it as hard as possible for consumers to find it.'"

Leo: Not that clever. It's pretty...

Steve: Yeah, right.

Leo: Ooh, clever.

Steve: Those who aren't doing it just never thought of it, apparently.

Leo: Yeah. Yeah. That's why it's only 30. That actually shocks me.

Steve: Yeah. "After The Markup and CalMatters contacted the data brokers, eight said they would review the code on their websites and remove it entirely, and another two said they had independently deleted the code before being contacted. The Markup and CalMatters later confirmed that nine of the 10 companies had removed the code. Two companies said they added the code intentionally" - get this - "to avoid spam at the recommendation of experts, and would not change it. The other 24 companies didn't respond to a request for comment; however, three removed the code" - silently, apparently - "after The Markup and CalMatters contacted them. After publication, one company that had not previously responded, that was USPeopleSearch.com, said it had removed the code. Most of the companies that did respond said they were unaware the code was on their pages." What?! How'd that get there? Uh-huh. Right.

"May Haddad, a spokesman for data company FourthWall" - this is one of the brokers - "said in an emailed response: 'The presence of the code on our opt-out page was indeed an oversight and was not intentional. Our team promptly rectified the issue upon being informed. As a standard practice, all critical pages, including opt-out and privacy pages, are intended to be indexed by default to ensure maximum visibility and accessibility." Okay.

The Markup and CalMatters later confirmed that the code had been removed as of July 31st. I still cannot get over that number, that one fewer than 500 registered data brokers exist in California.

Leo: Yeah, wow.

Steve: "Some companies," they wrote, "that hid their privacy instructions from search engines included a small link at the bottom of their homepage. Accessing it often required scrolling through multiple screens, dismissing multiple pop-ups for cookie permissions and newsletter sign-ups, and then finding a link that was a fraction of the size of the other text on the page." And of course this should not surprise anyone; right?

Leo: Yeah. One of our sponsors today is DeleteMe.

Steve: DeleteMe.

Leo: And this is - you could say, well, the state makes it possible for you to go to each of those 499 data brokers and request deletion. You could do it. Yes, you could do that manually, if you could find the link.

Steve: Yeah. You know, these companies are scraping and purchasing personal information from everywhere they can about everyone they can. So the last thing they're going to do is invite anyone to delete the data that they've purchased.

Leo: Right. Right. It's malicious compliance.

Steve: California law notwithstanding.

Leo: Right.

Steve: Unless the law were to explicitly and clearly state that their opt-out pages must be as accessible and searchable as any other page, you know, with opt-out links prominently displayed and as visible as any others, and with stiff fines imposed if these requirements are ignored, companies are going to do whatever they can to make it difficult. And they're going to always say, oh, we're sorry, we don't know how that code got in there. That wasn't supposed to be there. And then they'll reluctantly remove it.

Leo: Our anti-spam experts told us to do this.

Steve: That's right. We were told that we were going to get search engine spam if we didn't block those, if we didn't put that in there.

Leo: By the way, how hard is it to do their job when companies like Allianz basically give our Social Security numbers away for free? It's not hard to create those dossiers, is it. I mean, that's why there's 500 of these guys.

Steve: Yeah. You just go on the dark web, and you just suck them up.

Leo: Suck them up.

Steve: They gave two other examples. They said: "Consumers still faced a serious hurdle when trying to get their information deleted." They said: "Take the simple opt-out form for 'ipapi,' a service offered by Kloudend" - spelled with a K - "Inc., that finds the physical locations of Internet visitors based on their IP addresses. People can go to the company's website to request that the company 'Do Not Sell' their personal data or to invoke their 'Right to Delete' it. But they would have had trouble finding the form, since it contains code excluding it from the search results. A spokesperson for Kloudend described the code as an 'oversight,' and said the page had been changed to be visible to search engines. The Markup and CalMatters confirmed that the code had been removed as of July 31st.

"Telesign, a company that advertises fraud-prevention services for businesses, offers a simple form for 'Data Deletion' and 'Opt Out/Do Not Sell.' But that form is hidden from search engines and other automated systems, and is not linked to its homepage." Leo, how do you find it? "Instead, consumers must search about 7,000 words into a privacy policy filled with legalese to find a link to the page. A spokesperson for Telesign didn't respond to a request for comment." Wow. So, yeah. We're in an industry where our data is being collected without our permission. None of us asked for those big credit bureaus to collect and sell all of our information. It happened anyway. And they're all resisting its removal, to no one's surprise.

Leo, after this next break we're going to talk about the changes coming to messaging in Russia.

Leo: Yeah. Ooh, how exciting. From Russia.

Steve: Well, we actually have some Russian listeners, it occurred to me, so it may affect them.

Leo: Oh, good. I wonder if I should get my, gasp, what is it, Roskomnadzor voice.

Steve: Well, we'll be talking about Roskomnadzor.

Leo: Roskomnadzor. I'll get it ready.

Steve: Okay.

Leo: Just in case.

Steve: No, you're going to need it.

Leo: Okay.

Steve: So assuming that we may have some listeners in Russia, and I know that we do, I've heard from them...

Leo: Have you? Oh, wow, that's cool. That's neat.

Steve: Oh, yeah. Now, this may be of direct interest to them. And for everyone else it's at least interesting as another example of the changing Russian cyber landscape. Everyone - and here it comes, Leo - everyone's favorite Russian Internet watchdog, Roskomnadzor...

Leo: I'm sorry. I can't resist.

Steve: Perfect. It's coming up again in a second, "...has started restricting voice and video calls over Meta's WhatsApp messenger and Telegram."

Leo: For everybody?

Steve: Yes, everybody.

Leo: Okay.

Steve: Restricting voice - yes. This is why this is really important.

Leo: Yeah.

Steve: Restricting voice and video calls over Meta's WhatsApp Messenger and Telegram. And Roskomnadzor said the two messengers were used to commit fraud and terrorist activities.

Leo: Oh, yeah, sure.

Steve: Of course, that's what they're going to say.

Leo: Yeah.

Steve: But get this, there's actually a different reason. We know from our previous reporting that there has been some correlation between Telegram use and arrests in Russia. The assumption has been that while the content of any messaging using Telegram remained secret, the metadata, that is to say the fact that there had been messaging between two given endpoints, may have remained accessible to those in a position to monitor Telegram's digital traffic.

Forbes Russia reported last Monday that Russia's - get this, here it is - four largest telcos petitioned the government for the ban.

Leo: Oh, they wanted it. Because?

Steve: They argued that a ban would return traffic to the phone networks and increase their revenue. So that's the way things operate in Russia. Rather self-serving there. The ban also comes as the Russian government is pushing users over to its own soon-to-be-released, never to be trusted, national instant messenger app named Max.

And to that end, the Kremlin has ordered government officials to move their Telegram channels to the country's emerging domestic messaging app, Max. Officials will still be allowed to have accounts on other platforms, but the Max channels are now mandatory. The official Max accounts are expected to go live in the coming weeks, when the Max app is expected to come out of beta and become broadly available to the public. Okay. So that's kind of clever; right? No one is saying they cannot also use something else. But if every government official is required to have an account on Max, it's foreseeable that over time government employees will just, you know, gravitate to it, since they will know that all other officials will be there also.

And again, who's going to trust the official Kremlin instant messaging app?

Leo: Well, who wouldn't?

Steve: Yeah, that's right. That's right.

Leo: Hello, comrade. Are you listening?

Steve: So is Roskomnadzor.

Leo: So is Roskomnadzor.

Steve: That's right. Okay. So the United States' NIST, the National Institute of Standards and Technology, is the organization the entire world has come to rely upon to corral and organize technical domain experts and manage the complex development of current and next-generation technologies and protocols. Even though the results emerging from these efforts are open and free for the world to use, there is still a desperate need for there to be universally agreed upon standards for things like communications protocols, device interfaces, and encryption algorithms. Even Russia uses them. Nothing works for anyone unless we have, as a bare minimum, interoperability among interacting systems. NIST provides the required organization to see that we have at least that. So thank god for NIST.

In fulfilling that mission, last Wednesday NIST posted some welcome news under their headline "NIST Finalizes 'Lightweight Cryptography' Standard to Protect Small Devices," with the teaser "Four related algorithms are now ready for use to protect data created and transmitted by the Internet of Things and other electronics."

And NIST's announcement led with three bullet points. "First, many networked devices do not possess the electronic resources that larger computers do, but they still need protection from cyberattacks. NIST's lightweight cryptography standard will help. Second, the four algorithms in the standard require less computing power and time than more conventional cryptographic methods do, making them useful for securing data from resource-constrained devices such as those making up the Internet of Things. And finally, NIST has finalized the standard after a multiyear public review process followed by extensive interaction with the design community."

And of course this is crucially important for the future, so I want to share NIST's overview and their brief summary comments about these four new finalized and now standardized algorithms. They said: "NIST's newly finalized lightweight cryptography standard provides a defense from cyberattacks for even the smallest of networked electronic devices. Released as Ascon..."

Leo: Not the greatest choice.

Steve: No. Only one "s," luckily.

Leo: Okay.

Steve: "Ascon-Based Lightweight Cryptography Standards for Constrained Devices, and that's under NIST Special Publication 800-232, the standard contains tools designed to protect information created and transmitted by the billions of devices that form the Internet of Things, as well as other small electronics, such as RFID tags and medical implants." In other words, we really need this.

"Miniature technologies like these often possess far fewer computational resources than computers or smartphones do, but they still need protection from cyberattacks. The answer is lightweight cryptography, which is designed to defend these sorts of resource-constrained devices."

Okay. So I'll break in here just to comment that all around us we see everything becoming smaller and lighter and running on smaller and smaller batteries. And Leo, as I mentioned, your upper body has become encrusted with various AI monitoring, recording, and summarizing technologies.

Leo: Oh. Yes. I'm glad you finished that sentence.

Steve: In many cases they need to communicate through the air, and privacy may be important. Or take the case of wireless keyboards. We've seen how past keyboards used incredibly lame fixed-byte XOR encoding, which statically flipped some of the bits of each transmitted byte. Determining the byte for any given keyboard and decrypting all of the keystrokes would make a great junior high school computer science fair project, because it's about at the seventh- or eighth-grade level of difficulty. Keyboards were forced to go to Bluetooth or to proprietary systems to obtain greater security.
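For anyone curious just how weak that scheme is, here's a sketch (the key value, message, and function names are invented for illustration) showing a fixed-byte XOR "cipher" falling to a one-byte known-plaintext attack:

```python
# Hypothetical sketch of the fixed-byte XOR scheme described above.
# Nothing here comes from a real keyboard protocol.

def xor_encode(data: bytes, key: int) -> bytes:
    """The keyboard's entire 'cipher': XOR every byte with one fixed key byte."""
    return bytes(b ^ key for b in data)

SECRET_KEY = 0x5A                      # unknown to the attacker
sniffed = xor_encode(b"hunter2 is my password", SECRET_KEY)

# Known-plaintext attack: since XOR is its own inverse, XORing a single
# ciphertext byte with its guessed plaintext byte reveals the key instantly.
guessed_key = sniffed[0] ^ ord("h")    # guess that the first keystroke was 'h'
recovered = xor_encode(sniffed, guessed_key)

assert guessed_key == SECRET_KEY
print(recovered.decode())              # -> hunter2 is my password
```

One correct guess about one keystroke, and every past and future keystroke from that keyboard is readable. That is the seventh-grade difficulty level Steve describes.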

But unless we have encryption that is both secure and lightweight - meaning requiring very little or very economical computation - either battery life or security will need to be compromised. Having a NIST-approved standard that's both secure and lightweight at the same time will translate directly into superior IoT consumer products and much greater security.

They wrote: "NIST computer scientist Kerry McKay, who co-led the project with her NIST colleague Meltem Sonmez Turan, said: 'We encourage the use of this new lightweight cryptography standard wherever resource constraints have hindered the adoption of cryptography. It will benefit industries that build devices ranging from smart home appliances to car-mounted toll registers to medical implants. One thing these electronics have in common is the need to fine-tune the amount of energy, time, and space it takes to do cryptography. This standard fits their needs.'

"The standard is built around a group of cryptographic algorithms in the Ascon family, which NIST selected in 2023 as the planned basis for its lightweight cryptography standard after a multi-round public review process. Ascon was developed in 2014" - so it's 11 years old - "by a team of cryptographers from Graz University of Technology, Infineon Technologies, and Radboud University. In 2019 it emerged as the primary choice for lightweight encryption in the CAESAR competition. This all showed that Ascon had withstood years of examination by cryptographers."

Leo: Are you saying that Ascon came from Radboud at the Caesar Competition? Okay. I'm not going to say anything more. Just continue on, please.

Steve: "In the standard are four variants from the Ascon family that give designers different options for different use cases."

Leo: Could it be Ascon? Never mind.

Steve: Yeah. How about Azcon instead of Ascon?

Leo: Azcon. Azcon.

Steve: Azcon instead of Asscon. Okay. Azcon.

Leo: Much better. There you go. Thank you.

Steve: We'll pretend it's a Z.

Leo: Yes.

Steve: "The variants focus on two of the main tasks of lightweight cryptography: authenticated encryption with associated data (AEAD) and hashing."

Okay, now, AEAD algorithms are where the world has ended up with authenticated encryption because it is extremely useful. For example, I used it to securely store SQRL's user identity. For SQRL, there needed to be some parameters of the user's identity that were accessible without the user's secret key - in other words, stored as plaintext - and other parameters that needed to be protected by the user's secret, so they were stored encrypted. The information stored without encryption is what's known as "Associated Data."

So it's bound to the encrypted blob, but not itself encrypted. It all needed to be protected either way against tampering. If any bit of the stored data, whether the encrypted data or the visibly readable plaintext, was altered, the authentication of the entire package would be broken. That's AEAD. And these AEAD algorithms are very cool, with many applications.
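To make that binding concrete, here's a toy encrypt-then-MAC sketch using only Python's standard library. This is not Ascon and not production cryptography; the key derivation, the stream cipher, and all names are illustrative. The point is that one authentication tag covers both the ciphertext and the readable associated data, so tampering with either breaks verification.

```python
import hashlib
import hmac
import secrets

def seal(key: bytes, plaintext: bytes, associated_data: bytes) -> tuple[bytes, bytes]:
    """Toy AEAD sketch (encrypt-then-MAC): the plaintext is encrypted, the
    associated data stays readable, and ONE tag authenticates both."""
    enc_key = hashlib.sha256(key + b"enc").digest()
    mac_key = hashlib.sha256(key + b"mac").digest()
    # Toy keystream derived from the key. Do NOT use in production; a real
    # AEAD such as AES-GCM or Ascon-AEAD handles nonces and blocks properly.
    keystream = hashlib.shake_256(enc_key).digest(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
    tag = hmac.new(mac_key, associated_data + ciphertext, hashlib.sha256).digest()
    return ciphertext, tag

def open_sealed(key: bytes, ciphertext: bytes, associated_data: bytes, tag: bytes) -> bytes:
    enc_key = hashlib.sha256(key + b"enc").digest()
    mac_key = hashlib.sha256(key + b"mac").digest()
    expect = hmac.new(mac_key, associated_data + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        raise ValueError("authentication failed: data was tampered with")
    keystream = hashlib.shake_256(enc_key).digest(len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, keystream))

key = secrets.token_bytes(32)
ct, tag = seal(key, b"user's secret key material", b'{"version": 7}')
print(open_sealed(key, ct, b'{"version": 7}', tag))   # round-trips cleanly

# Tampering with the *plaintext* associated data breaks authentication:
try:
    open_sealed(key, ct, b'{"version": 8}', tag)
except ValueError as e:
    print(e)
```

Flip any bit of the ciphertext or the associated data and `open_sealed` refuses to decrypt, which is exactly the "protected either way against tampering" property Steve describes.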

So here's what NIST says about the four ASCON algorithms: "We have ASCON-128 AEAD." They said: "It's useful when a device needs to encrypt its data, verify the authenticity of the data, or crucially both. A common weakness of small devices is their vulnerability to 'side-channel attacks,' in which an attacker can extract sensitive information by observing physical characteristics like power consumption or timing." And boy, Leo, so much of this early podcast's episodes talked about side channel attacks because they used to be a real problem.

Leo: At least timing attacks, yeah.

Steve: Right. They said: "While no cryptographic algorithm is inherently immune to such attacks, ASCON is designed to support side-channel-resistant implementations more easily than many traditional algorithms. Devices that can benefit from this approach include RFID tags, implanted medical devices, and toll-registration transponders attached to car windshields."
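The power and timing observations NIST describes are physical, but the same class of leak shows up in plain software. As a sketch (the function names are mine, not from any specification), here is the classic timing side channel in tag verification, alongside Python's built-in constant-time comparison:

```python
import hmac

def leaky_verify(tag: bytes, expected: bytes) -> bool:
    # Returns at the FIRST mismatching byte, so response time reveals how
    # many leading bytes of an attacker's guess were correct -- a classic
    # timing side channel that lets a tag be forged byte by byte.
    if len(tag) != len(expected):
        return False
    for a, b in zip(tag, expected):
        if a != b:
            return False
    return True

def constant_time_verify(tag: bytes, expected: bytes) -> bool:
    # Examines every byte regardless of where mismatches occur, so the
    # running time carries no information about how close the guess was.
    return hmac.compare_digest(tag, expected)

assert leaky_verify(b"abcd", b"abcd") and constant_time_verify(b"abcd", b"abcd")
assert not leaky_verify(b"abXd", b"abcd") and not constant_time_verify(b"abXd", b"abcd")
```

Designing an algorithm so that constant-time, constant-power implementations are easy is precisely the property NIST is crediting Ascon with here.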

Then we have "ASCON-Hash 256," they wrote, "takes all the data it encrypts and uses it to create a short 'hash' a few characters long, which functions like a fingerprint of the data. Even a small change to the original data results in an instantly recognizable change in the hash, making the algorithm useful for maintaining the data's integrity, such as during a software update, to ensure that no malware has crept in. Other uses are for protecting passwords and the digital signatures we use in online bank transfers. It's a lightweight alternative to NIST's SHA-3 family of hash algorithms, which are widely used for many of the same purposes."
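As a quick illustration of that fingerprint property, here's the avalanche effect using SHA3-256, the heavier-weight NIST family the announcement mentions as the alternative; the message strings are made up for the example:

```python
import hashlib

update_ok = b"firmware v1.2.3 payload"
update_bad = b"firmware v1.2.4 payload"   # a single character differs

h_ok = hashlib.sha3_256(update_ok).hexdigest()
h_bad = hashlib.sha3_256(update_bad).hexdigest()

print(h_ok)
print(h_bad)
# The two digests differ completely (the "avalanche effect"), so even a
# one-character change to a software update is instantly detectable by
# comparing the published hash against the hash of what was received.
assert h_ok != h_bad
```

ASCON-Hash 256 provides this same integrity fingerprint at a fraction of the computational cost, which is the whole point for battery-powered devices.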

And then, finally: "ASCON-XOF 128 and ASCON-CXOF 128," they wrote, "are hash functions with a twist: Both algorithms allow the user to change the size of the hash. This option can benefit small devices because using shorter hashes allows the device to spend less time and energy on the encryption process.

"The CXOF variant also adds the ability to attach a customized 'label' a few characters long to the hash. If many small devices perform the same encryption operation, there is a small but significant chance that two of them could output the same hash, which would offer attackers a clue about how to defeat the encryption. Adding customized labels allows users to sidestep this potential problem."
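Ascon-XOF isn't in Python's standard library, but SHAKE128, NIST's existing extendable-output function (XOF) from the SHA-3 family, demonstrates the same idea of letting the caller pick the output length. The label trick below is only a rough stand-in for what Ascon-CXOF does internally:

```python
import hashlib

reading = b"sensor reading: 21.5C"

# SHAKE128 is an XOF: the caller chooses how much digest to squeeze out.
short_hash = hashlib.shake_128(reading).hexdigest(8)    # 8 bytes for a tiny device
long_hash = hashlib.shake_128(reading).hexdigest(32)    # 32 bytes when needed
print(short_hash)
print(long_hash)

# An XOF's shorter output is simply a prefix of its longer output.
assert long_hash.startswith(short_hash)

# Rough stand-in for CXOF-style customization: domain-separate with a label
# so identical inputs hashed in different contexts yield different digests.
# (Real CXOF takes the label inside the algorithm; this prefix is a sketch.)
tag_a = hashlib.shake_128(b"device-A|" + reading).hexdigest(8)
tag_b = hashlib.shake_128(b"device-B|" + reading).hexdigest(8)
assert tag_a != tag_b
```

The shorter the hash a device can get away with, the fewer cycles and the less battery it spends per message, which is the tradeoff these variants exist to serve.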

And I should note, if any of this sounds familiar to our listeners, it also sounds familiar to me because we talked about this, like, six years ago, back in 2019, when this was all happening, pre-standardization, which only happened last week. But once again, this podcast did cover all the important news of the time.

"McKay said the NIST team intends the standard not only to be of immediate use, but also to be expandable to meet future needs." She said: "We've taken the community's feedback and tried to provide a standard that can be easily followed and implemented, but we're also trying to be forward-looking in terms of being able to build on it. There are additional functionalities people have requested that we might add down the road, such as a dedicated message authentication code," you know, a MAC. "We plan to start considering these possibilities very soon."

So the world now has a new set of NIST-approved, well-vetted, easy-to-implement, lightweight and secure cryptography standards for the first time ever. I have a link to NIST's announcement and to the full 52-page specification of the cryptography and hashing for anyone who wishes to dig deeper. So, very cool that we now have that.

Leo: Do you think it's any less robust because it's small?

Steve: Oh, yeah. There's definitely a tradeoff in security. For example, I'm assuming, and I did not look, that ASCON-128 encryption uses a 128-bit key instead of a 256-bit key.

Leo: Ah, of course, yes.

Steve: So it's going to be, you know...

Leo: Still better than XOR, yes.

Steve: Oh my god, yeah. And the idea is that many applications do not need the kind of encryption and authentication security that, for example, a long-term digital signature must have.

Leo: Right.

Steve: You know, they're just sending a message to turn on the coffee pot or turn on the lights. So they want to prevent, you know, malicious spoofing. And that's why, for example, those variable-length hashes where, like, okay, we don't need an 8-byte hash verification. For our purposes, three bytes is enough.

Leo: Yeah, yeah.

Steve: And in turn you save a lot of time, and you save a lot of power. So there are absolutely applications where you can trade off the strength of your cryptography for power saving where any greater crypto is just overkill. I mean, really, really overkill because you don't expect a message to have a life of more than a few seconds.

Leo: It's right-sizing it. That makes perfect sense.

Steve: Yes. That's exactly what it is. It is right-sizing it. Syncthing had a major upgrade. And Leo, we're an hour and a half in. Let's take our almost, our second-to-the-last break, and then we're going to look at that and a couple other bits of trivia.

Leo: Yeah. Just checking my Syncthing to see which version I have. But we'll talk about that in just a bit. You and I are both Syncthing fans.

Steve: Yes, fans. So Syncthing's version announcement page started off with: "This is the first release of the new 2.0 series. Expect some rough edges and keep a sense of adventure."

Leo: Oh, wait a minute. Maybe I don't want to update. Holy...

Steve: Unh-unh. You know, there are places where a sense of adventure makes sense.

Leo: Not here. Unh-unh.

Steve: But Syncthing takes an honored place in the middle of my workflow, and "adventure" is not something I'm hoping to be treated to by my multi-system backup solution.

Leo: Yeah. I just looked, and I'm still on 1.30, where I think I probably want to stay. Gold Grasshopper.

Steve: Yeah, that's exactly where I was. And that's where I'm staying also.

Leo: Yeah, yeah.

Steve: What's more, in my case the UI pop-up warned: "This is a major version upgrade. A new major version may not be compatible with previous versions. Please consult the release notes before performing a major upgrade." Now, as I said, this is particularly salient for me because one of the systems I'm still syncing with Syncthing is a Windows 7 machine, and I had to turn off its automatic updating quite a while ago when Syncthing's newer release broke it, and it stopped working. I had to roll back to the previous version and then turn off automatic updating. So I'm not updating. By the year's end I plan to be consolidating my two locations into one, and that will spell the end of the Windows 7 machine. And I'm happy with Windows 10. But that hasn't happened yet.

But Syncthing, as our listeners know, is this podcast's favorite file synchronization tool. You and I both use it, Leo, and we could use anything there is in the world, and I've looked at them all, and I'm sure you have, too. This is the one we've chosen. So I wanted to quickly note the changes to Syncthing with its move to v2.0.

They said: "Database backend switched from LevelDB to SQLite." They said: "There's a migration on first launch which can be lengthy for larger setups. The new database is easier to understand and maintain and, hopefully, less buggy." Well, yes, let's have fewer bugs. That'd be good.

Also they changed, they said: "The logging format has changed to use structured log entries, a message plus several key-value pairs. Additionally, we now control the log level per package, and a new log level WARNING has been inserted between INFO and ERROR." And they talk about logging some more. And here's one that's interesting: "Deleted items are no longer kept forever in the database." We were just talking last week, I think, Leo, about deletion. "Deleted items are no longer kept forever in the database. Instead they are forgotten after 15 months. If your use case requires deletes to take effect after more than a 15-month delay, set the --db-delete-retention-interval command line option or corresponding environment variable to zero, or a longer time interval of your choosing." Presumably zero disables deletion completely.

They said: "Modernized command line option parsing. Old single-dash long options are no longer supported. For example, '-home' must now be given as '--home.'" And that's, you know, in keeping with standards that we're all familiar with from Linux and Unix and other, you know, modern OS command lines. "Rolling hash detection of shifted data is no longer supported as this effectively never helped." No idea what that even is. They said: "Instead, scanning and syncing is faster and more efficient without it." So, okay, good. Something was never useful, and they got rid of it. Now it's faster. Thank you very much. They said: "A 'default folder' is no longer created on first startup."

Leo: Oh, that's good because I always have to delete that. It makes me so angry. Yeah, I don't want...

Steve: Yup. Yup. Really annoying. And so I'm sure that they listened to all their users and said, why do we, you know, if we're using this thing, we really know what we're doing. Because I should just mention, Syncthing is not for the faint of heart.

Leo: No. Especially the command line version, let me tell you. That's fun.

Steve: Yeah. Here's a goodie.

Leo: [Crosstalk] XML files on [crosstalk].

Steve: Yeah. And here's a goodie: "Multiple connections are now used by default between v2 devices. The new default value is to use three connections: one for indexing metadata; two for data exchange." So that just seems like a nice performance improvement.

Leo: Yeah.

Steve: And here's something that might get some people. "The following platforms unfortunately no longer get prebuilt binaries for download at Syncthing.net and on GitHub, due to complexities related to cross compilation with SQLite." So that's DragonFly on amd64, illumos on amd64 and Solaris on amd64, Linux on PowerPC 64 - I don't think anybody is using that - NetBSD everywhere, OpenBSD 386 and OpenBSD on ARM, and Windows on ARM.

Leo: Ooh, that's a big one.

Steve: Yeah. Yeah, it is a big one. So, and they said: "The handling of conflict resolution" - I didn't understand this. "The handling of conflict resolution involving deleted files has changed. A delete can now be the winning outcome of conflict resolution, resulting in the deleted file being moved to a conflict copy." I know. What?

Leo: I'm sure it will be logical when it does it.

Steve: Yes. They're not going to do the wrong thing.

Leo: Yeah.

Steve: So anyway, the biggest functionality change is the decision not to retain deleted files forever. I assume that the long-term endless collection of every past deletion record - depending upon what the application may have been; lots of people have automation, like logging and scripting and who knows what - might have finally caused the development team to reassess their previous keep-it-forever policy. Still, a 15-month default seems ample. Which is probably why they chose that: a year and a quarter, essentially.

Multiple connections between peers, that seems like a nice addition. But I suppose that the lack of prebuilt binaries might be a bit of an inconvenience, especially as you've noted, Leo, for Windows on ARM. Building binaries from source, however, is a common occurrence for the various Linuxes and Unixes. So I don't imagine that anybody using OpenBSD or NetBSD is probably going to have a problem, you know, building their own binary. They're doing that for lots of other things. And I imagine that their package managers probably make that easy, you know, to manage anyway.

The final thing I'll mention is that, since the v2.0.0 release, which is the first notification I got, I looked at my - because I have Syncthing statically open on one of my screens. I mean, I use it a lot.

Leo: Yeah, yeah. I have a browser bookmark that's always there, yeah, yeah.

Steve: Yeah, yeah. So I saw in red, a red banner at the top, that notified me of the 2.0.0. Then I watched the sub-version number advance - I didn't touch it, of course, because also, I mean, if nothing else, it was a .0.0 release, which means let it stew for a while. The next time I looked, sure enough, 2.0.1, and then again the next time I looked, 2.0.2. So, you know, that's to be expected following feedback being received from a greater number of users after a release. And we don't know what sorts of "adventures" they may have had with the very first cut release. As I said, you and I don't need adventure from our backup solution.

Leo: No, no, no.

Steve: I scanned the changelog, and it appeared that the changes may have related mostly to the need for some users to build their own binaries. There were tweaks to the minimum compile-time library build versions and that kind of thing. So since my current Syncthing is the same as yours, 1.30.0 for 64-bits, and that was built recently - that was built on June 20th, less than two months ago, which is working perfectly for me - I see no need to go seeking "adventure" from my cross-device file syncing system. So I'll be remaining where I am until I decommission that old Windows 7 machine a couple months from now. If anyone's interested, I've got a link to the release's version tracking in the show notes.

And finally, before we talk about our main topic, I wanted to mention that IMDB's ranking of "Alien: Earth" has dropped from its stratospheric 8.8 to 7.8, still respectable, following last Tuesday's wider release of the first two episodes. And really I think that makes sense, given that the earlier release during Comic-Con would likely be a strongly skewed demographic. I have to admit that my wife Lorrie was somewhat bored by those...

Leo: Yeah, I couldn't - I didn't finish the first episode. And I love "Alien." I mean, it's not that I don't love "Alien." But it was okay.

Steve: So, you know, she will be glad that she'll only need to sit through another six hours of episodes in the first season. Because I'm curious enough that I want to see what the writers do with the various new pieces that they set in motion.

Leo: There's some interesting stuff in it.

Steve: Yes. For me, it was interesting to see all of the "Alien" mythology that was still present. You know, and actually to appreciate how much of it we have internalized from the previous movies.

Leo: Yes. There's a lot of canon, yeah.

Steve: Well, you see a ragged-edged hole in the floor, and you immediately think "Ah, yes, molecular acid for blood." Exactly. Or you see some egg-shaped pods split at their top sitting beneath a blue-tinged mist, and you think "Don't get your face too close to those!" You know, so there was a great deal of familiar comfort in what we saw during last week's two introductory episodes. And...

Leo: It's well done. It's beautifully produced.

Steve: Yes. Oh, they spent a lot of money, apparently around $250 million on this. So they're really - they're hoping that they're creating something that's going to have some future. And there were some promising new critters that we don't yet know much about. Maybe they'll be developed further. And I have to admit, Leo, I understand what you said, and I could see what my wife meant. The aliens themselves, you know, the "Alien" aliens that we know so well, they've become rather boring because we know them so well. We know what they look like.

Leo: Right.

Steve: We're aware of their entire life-cycle. You know. So we have a creature here that is pure animal, it has no language, it cannot be negotiated with, it is physically huge, ruthlessly brutal, and effectively unstoppable. So, yeah, while it's terrifying if it's in your neighborhood, it's also somewhat limited as a plot device because, I mean, it's just a berserker.

Leo: Right. You can't talk to them.

Steve: Right. What do you do with this thing except run as fast as you can?

Leo: Right.

Steve: So the value of the "Alien" franchise, actually, when you think about it, it's always been the human interest side of the crew's reactions to this creature and the events surrounding it. Without that, we only have what Bill Paxton's character said with some disgust in the second movie: "It's a bug hunt."

Leo: He also, by the way, said something else.

Steve: It's dry heat?

Leo: I think he said "Game over." Am I right?

Steve: He did.

Leo: "Game over."

Steve: That's right. He was wonderful. So the most interesting new feature, which I'm sure is the intended focus of the series, are the - and no spoilers here because everyone gets this immediately - are the recent earthbound experiments with a new hybrid...

Leo: Yes.

Steve: ...which is created by transferring a human consciousness into a fully synthetic, super-humanly strong, and highly intelligent body.

Leo: That's to me the most interesting part of this.

Steve: Yes. And to see what they're going to do with this.

Leo: Yeah.

Steve: The female leader of the group has already manifested an unexpected new ability, and I'm curious to know what she and her fellow hybrids will do next. So tonight's Tuesday. This evening I'll be watching Episode #3, not with super-high expectations of being blown away, but at least with some curiosity, to see what happens.

Leo: "Game over, man. Game over."

Steve: Oh, I do miss that second movie.

Leo: That was a great episode.

Steve: So good. Well, and that was James Cameron bringing everything he had to it, you know. He gave us "Terminator," and then he gave us that second "Alien" movie.

Okay. So that we don't break this in pieces, let's do our final sponsor insert and then - we're at about two hours, so its timing is right.

Leo: Good timing, yes, yes.

Steve: And then we're going to look at The Sad Case of Scriptcase, and the gelling of a final important message, why the responsibility is not that of the people who have bugs.

Leo: I think that's fair. Not the buggee, the bugger.

Steve: Oh. That's Ascon? Okay. For this week's podcast topic, I wanted to focus upon an interesting vulnerability that will not make any headlines. As I dug into the story I started to get a sinking feeling, not about it specifically, but about something we often touch on here, which is the state of today's cybersecurity environment. I titled today's podcast The Sad Case of Scriptcase, not because what I discovered about Scriptcase was special. What was sad about Scriptcase was that it was not special.

So let's first back up to take a look at the flaw, which somewhere around 2,800 of Scriptcase's users have remained vulnerable to, months after it was discovered and patched by its publisher. Then we'll look at what this all means.

This story here begins with a vulnerability disclosure posting by the well-known cybersecurity company Synacktiv. We mention them often. Their posting on the 4th of July was titled "Scriptcase - Pre-Authenticated Remote Command Execution." Now, everyone who follows this podcast will be well aware of the severity of the inherent problem of any pre-authenticated remote command execution. "Pre-authenticated" means that anyone anywhere can remotely execute commands on the targeted system without any need to be authenticated because this remote command execution is able to somehow be induced before any authentication is required of them. That being the case, we would not be surprised to see the severity of this was set to CRITICAL because indeed it is.

Synacktiv wrote: "Scriptcase is a low-code platform that generates PHP web applications. Developers use a graphical user interface to design and generate their website. 'Production Environment' is the name of an extension of Scriptcase that will be called 'production console' in the advisory for clarity. It's an administrative interface to manage database connections and directories. While Scriptcase itself is not necessarily deployed with the website, the production console almost always is.

"Pre-authenticated remote command execution is achieved by chaining two vulnerabilities. The first is the ability to change the administrator password of the production console under certain conditions, and the second is the simple authenticated remote command execution in the connection features where user input is directly concatenated to an ssh system command." Okay, that's just bad design, but okay.

Okay. So this seems rather like a straightforward mess for anyone who has this system deployed in the field. Synacktiv discovered a means for remotely changing the administrative password, and they also discovered that remote user-supplied data is being directly concatenated onto an SSH command.
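Synacktiv's advisory concerns PHP, but the concatenation bug pattern is language-agnostic. Here's a sketch in Python (the hostnames, function names, and validation rule are mine, invented for illustration) of both the vulnerable pattern and the usual fix: validate the input and pass the arguments as a list so no shell ever parses them.

```python
import re

def build_ssh_command_vulnerable(host: str) -> str:
    # BUG (the pattern Synacktiv describes): user input concatenated straight
    # into a command line destined for a shell. A "host" value such as
    # "x; cat /etc/passwd" runs the attacker's command on the server.
    return "ssh " + host + " uptime"

# Allow only plausible hostnames: alphanumeric labels with dots and hyphens,
# never starting with "-" (which would also block SSH option injection).
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9.-]*[A-Za-z0-9])?$")

def build_ssh_command_safe(host: str) -> list[str]:
    # Fix: validate, then build argv as a list for subprocess.run(shell=False),
    # so shell metacharacters in the input are never interpreted.
    if not HOSTNAME_RE.match(host):
        raise ValueError(f"invalid hostname: {host!r}")
    return ["ssh", host, "uptime"]

print(build_ssh_command_vulnerable("example.com; cat /etc/passwd"))  # injected!
print(build_ssh_command_safe("example.com"))
```

The vulnerable version happily embeds the semicolon and everything after it; the safe version rejects the input outright and, even for valid hostnames, hands the arguments to the OS without any shell in between.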

The ability for an anonymous unknown remote user to remotely change an administrative password is clearly a very big mistake. We all know that I draw a distinction between mistakes that happen, and policies or designs that are necessarily the product of some deliberation. When you hear that a system allows anyone to remotely change an administrator's password, you think that it must be a horrible bug. But digging into the details of this reveals that the author of this system failed to provide the safeguards we all live with daily. Specifically, any new password can be set without the user providing their current password. Wow. So this, too, was a design-level decision.

So here are the interesting details of this first mistake: There's a flaw in the logical flow of the system's PHP code. The first thing the "change_password" function, which can be invoked remotely, does is check to see whether the user's session has an "is_authenticated" variable defined for it.

Since the "is_authenticated" variable is only created inside the "initialize_session" function, the intent was that only someone who was already authenticated would be able to change their password. This perhaps excuses the lack of need to provide the current password; though if that had been required, this house of cards would not have collapsed.

The clear intention was that at the time the user was being authenticated, the "initialize_session" function would be called to initialize the session, and that would define the "is_authenticated" variable for the future. What the author of this code failed to take into account is that even a failed login attempt, trivially caused simply by directly calling the "login.php" function with an HTTP GET query, causes the "initialize_session" function to be called, and thus the "is_authenticated" variable would be created.

At that point, the system falsely believes that the user's session has been authenticated; and the "change_password" function, which checks for the presence of that variable, will then allow itself to be remotely called. And since that function requires no provision of the user's current password, any unauthenticated remote user is able to set whatever password they choose for the system's administrator simply by deliberately failing a first login attempt, setting the administrative password to anything they wish, then logging in for real.
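Here's a minimal Python sketch of that broken control flow, modeled on Synacktiv's description; all names, the session store, and the credential check are invented for illustration. The two bugs are visible on single lines: initialize_session() runs even for a failed login, and change_password() checks only that the session variable exists, not that it is true.

```python
# Hypothetical model of the flawed logic. Not Scriptcase's actual code.

sessions: dict[str, dict] = {}

def initialize_session(session_id: str) -> None:
    # Defines "is_authenticated" for the session. The original code's intent
    # was that this only ever happens for authenticated users.
    sessions[session_id] = {"is_authenticated": False}

def login(session_id: str, password: str) -> bool:
    initialize_session(session_id)       # BUG 1: runs even when login FAILS
    if password == "correct-horse":      # stand-in credential check
        sessions[session_id]["is_authenticated"] = True
        return True
    return False

def change_password_broken(session_id: str, new_password: str) -> bool:
    # BUG 2: tests whether the variable EXISTS, not whether it is True,
    # and never asks for the current password.
    if "is_authenticated" in sessions.get(session_id, {}):
        print(f"admin password changed to {new_password!r}")
        return True
    return False

# The attack: deliberately fail a login, then change the admin password.
assert login("attacker", "wrong-guess") is False   # login fails...
assert change_password_broken("attacker", "pwned") # ...yet this succeeds
```

The fix Synacktiv recommends maps directly onto those two lines: check the value of the flag rather than its presence, and require the current password before accepting a new one.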

So we have an unintended code path, coupled with the bad design pattern of not always requiring a user's current password when they're requesting its change, and we have the first half of a critical remote command execution vulnerability.

The Synacktiv guys wrote: "An attacker can arbitrarily reset the password of the administrator of the production console, and thus take it over. With this access, the attacker could retrieve database credentials and get access to them. As the production console is also vulnerable, the attacker could also leverage it to gain access to the server. Recommendation: Access to the password reset feature should be given only to authenticated users - that is, change the condition checked by the 'change_password' function. Also, it should be based only on the session cookie. The 'change_password' function should not take an email argument from the user, but extract it from the session.

"While waiting for an official fix from the vendor, one should restrict the access to the Scriptcase Production Environment extension by completely blocking" - and then they give a couple of PHP files that should be blocked - which "would be enough to prevent any unwanted connection, as well as the exploitation of the password reset vulnerability."

So at this point in the flow of the exploit chain, essentially, we've remotely logged on as the system's administrator. So now what? The "now what?" is the exploitation of CVE-2025-47228 - the first one was 27, this one is 28 - a shell injection allowing remote command execution. The exact sequence of actions here is somewhat too dense to explain verbally on the podcast, but the situation is similar to what we've already seen. The developer of this low-code PHP-driven website creation system developed a system to allow less sophisticated users to create a complex PHP-driven website using a graphical user interface. The idea was that it would not be necessary to understand PHP to create a website.

Unfortunately, Synacktiv's analysis of the design and implementation of the tool would lead an impartial observer to conclude that this tool's developer also failed to fully understand the operation of PHP enough to create a system that not only worked, but also worked securely.

So what are the real-world consequences of all this? For that we turn to VulnCheck's recent posting last Thursday, which is what brought all this to my attention. On August 14th, VulnCheck posted under their headline: "Scriptcase - Hunt It, Exploit It, Defend It." They began with three key takeaways.

"First, hundreds of Scriptcase instances remain exposed a month after disclosure, with attackers actively scanning for them. Second, exploitation is simple, requiring only a few curl commands once a target is found, allowing full remote code execution. And third, clear detection paths exist, including version strings, network signatures, and suspicious processes or PHP files in the webroot."

So they write: "One month ago, Synacktiv published their disclosure and deep dive on a vulnerability chain affecting Scriptcase. The vulnerabilities, CVE-2025-47227 and '28, are an unauthenticated password reset and an authenticated command injection that, when combined, give an unauthenticated attacker full remote code execution. And yet, despite public disclosure, functional exploits, and available patches, hundreds of Scriptcase instances remain exposed on the Internet. That leaves the obvious question: does this matter enough to go hunting?

"At VulnCheck, one way we determine if a vulnerability matters is by looking for targets online. The logic is pretty easy: If there are zero targets online, well, who cares? If there are many targets online, then we care. If it's somewhere in between 0 and many, well, it depends. Naively, we started with a Shodan query of the title 'Scriptcase.' The results were annoying. Annoying because they aren't real Scriptcase servers at all. These are honeypots, frankenpots that seemingly pollute every single query. We've written before about this problem in 'There Are Too Many Damn Honeypots,' and this is another textbook case.

"But the fact that there are honeypots suggests that others care about Scriptcase, too. So we grabbed a copy of the software, built a Shodan query to avoid the decoys and, in a rare win, even got a simple Google search to work for finding actual instances." And they said: "The AI-sloppification of Google has largely destroyed this, so this felt like a small miracle."

They said: "The VulnCheck Initial Access Intelligence team routinely develops queries for Shodan, FOFA, ZoomEye, and Censys to track down vulnerable targets. While building out our fingerprints for Scriptcase on these services, we also found that our friends over at driftnet had turned up a solid hit count of roughly 2,800 Scriptcase servers exposed to the Internet via their Scan Content functionality." So 2,800.

"Finally, it's not just researchers looking for Scriptcase. GreyNoise is tracking a couple dozen known malicious IPs scanning specifically for /scriptcase/. That's proof attackers are on the hunt, too. At the end of the day, you've got all the ingredients to answer 'Does this matter?' There are discoverable targets online. There's a public proof of concept. And attackers are actively looking for these systems. That matters."
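Since attackers are scanning specifically for /scriptcase/, one simple defender-side check is to look for that path in web access logs. A minimal Python sketch, where the sample log lines are made up for illustration:

```python
def find_scriptcase_probes(log_lines):
    """Return access-log lines whose request contains /scriptcase/."""
    return [line for line in log_lines if "/scriptcase/" in line.lower()]

# Made-up sample lines in common log format.
sample = [
    '203.0.113.7 - - [14/Aug/2025:10:01:02] "GET /scriptcase/devel/ HTTP/1.1" 404 -',
    '198.51.100.2 - - [14/Aug/2025:10:01:05] "GET /index.html HTTP/1.1" 200 -',
]
for hit in find_scriptcase_probes(sample):
    print(hit)  # flags only the /scriptcase/ probe
```

In practice you would feed this your real access log and treat any hit from an unexpected source address as a probe worth investigating.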

And now that we've determined that this matters, they turn to exploitation. They said: "If finding vulnerable Scriptcase servers is straightforward, exploiting them is even easier. Synacktiv's blog and proof-of-concept go into detail, but the reality is that it boils down to just a few curl commands. No custom tooling required. Once the password reset has been achieved, we can navigate to the production environment login page and authenticate with the new credentials." And they show Scriptcase's very nice-looking production environment login page, along with the password their reset attack had set, which they then supply to the login page, and it logs them in.

They said: "Once authenticated, we land in the production environment. With access to the production environment, we can move on to exploiting CVE-2025-47228, a command injection vulnerability in the connection creation and testing feature. The injection logic lives in a modified version of the third-party library ADOdb. First, the command is built in $str_command using attacker-provided variables. With a web shell or reverse shell in place, the exploitation chain is complete; but that's only half the story. For defenders, the question becomes 'How do you spot this activity before or after it happens?'

"Defenders should check whether their Scriptcase deployment is vulnerable. By default, the landing page exposes a version string in its HTML, which can be compared directly against patched releases. We built a passive version scanner to run across Shodan data, and 57% of observed instances still reported a vulnerable version." That's as of the publication date, which was last Thursday.
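VulnCheck's passive check can be sketched in a few lines: pull a dotted version string out of the landing-page HTML and compare it numerically against the first patched release. The regex and the version numbers below are illustrative assumptions, not Scriptcase's actual markup or release numbers:

```python
import re

def extract_version(html):
    """Pull a dotted version string (e.g. '9.10.023') out of landing-page
    HTML. The pattern is an assumption -- adjust to the real markup."""
    m = re.search(r"\b(\d+\.\d+\.\d{1,3})\b", html)
    return m.group(1) if m else None

def is_vulnerable(found, first_patched):
    """Compare dotted versions numerically; anything below the first
    patched release is flagged as vulnerable."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(found) < to_tuple(first_patched)

# Made-up HTML snippet and a hypothetical first-patched version.
html = '<div class="footer">Scriptcase 9.10.023</div>'
v = extract_version(html)
print(v, is_vulnerable(v, "9.12.006"))  # prints: 9.10.023 True
```

The numeric tuple comparison matters: a naive string comparison would rank "9.9" above "9.10".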

And they conclude: "Whether you're hunting, exploiting, or defending, the playbook is straightforward: Know how to find vulnerable targets, understand how the exploit chain works, and have a clear detection and response strategy in place. The attackers looking for /scriptcase/ aren't waiting for you to patch. And the sooner you close these holes, the less likely you are to see your own server in someone else's shell prompt."

Okay. So one final piece of this that I didn't yet share was back at Synacktiv's disclosure timeline in their disclosure. Although they did complain about this in their posting, they patiently waited until the Fourth of July before their public release. And there's no doubt the developer behind Scriptcase tried their patience. They show in their disclosure timeline that it was on February 18th of this year that they first sent a message to the editor at Scriptcase.

They got first contact live via tchat. I don't know if that means Telegram, or tchat might be on the website, because they do have a support chat on the website. So they sent their first message on February 18th, and the chat occurred on March 12th. On March 20th, their advisory report was sent to the editor. It took eight days, until March 28th, to get the first response from the editor. On April 4th, the editor asks to retest the vulnerability on the latest version. Meaning apparently they didn't even check it themselves. They said, well, test it on what we have now.

On April 29th, Synacktiv confirms the vulnerability still works on the latest version. Then on May 15th, Synacktiv contacts the editor for a status update on the progress of the vulnerability analysis because they had heard nothing. They waited two weeks. On May 30th, Synacktiv again contacts the editor for a status update. Still nothing. On June 5th, Synacktiv sent the exploitation script to the editor and basically said, okay, we're releasing this publicly in a month. So, you know, you've had many months to fix this, you guys. And on July 4th they make their public release, full disclosure, proof of concept, everything any attacker needs to attack these systems.

So, from their initial contact which occurred on the 18th of February to their eventual release of the exploitation details on the 4th of July, nearly five months elapsed, with Synacktiv typically responding within days and Scriptcase's side often responding either never or only after several weeks had transpired. As I've repeatedly observed, this bizarre system of vulnerability reporting and updating and patching - and often never patching - is badly broken.

Now, Scriptcase, which is at www.scriptcase.net, has one of those stunning, lovely, state-of-the-art websites with beautiful graphics, happy people, tasteful imagery and design that would inspire confidence in anyone who visited. The company is based in Orlando, Florida; and along the bottom of the first page the names of several of their more prominent 45,000-plus customers scroll by. If you wait a minute, you'll see the names of Bosch, HP, Hyundai, and Yamaha slide past.

One reason Scriptcase might have taken so long to respond to Synacktiv's many attempts to communicate is that their developers appear to be far too busy just trying to keep up, fixing the many other problems this product has. I thought that Microsoft was bad. Okay, Microsoft is bad. But these guys are even worse.

Leo: You think it's PHP is the problem?

Steve: I don't know. I don't think they know how PHP works, given - you should take a look at a changelog, Leo, scriptcase.net/changelog. Open those up. I'm just astonished. Every three or four days they do another release. Their changelog goes back 11 years. And 11 years of going back only gets them back to major release v8. They're now on major release v9. But it reveals that they have been updating this product every few days. Sometimes it's three days. Sometimes four. Sometimes five. I've seen a week go by. And this appears to have been going on since the early 2000s. Every few days they release another update, and their changelog reports, like 10 or so important-appearing things that they've just fixed.

Leo: Isn't that good?

Steve: Well, okay, except talk about update fatigue. You know...

Leo: Well, that's true.

Steve: Maybe they actually - maybe they are ex-Microsoft engineers. I don't know. Now, this development style, I have to say, drives me nuts. One of the reasons I stopped using GitLab for development tracking was its developers would never leave it alone. They were constantly spewing out new features, which were often mixed in with critical, must-patch-immediately, hands-waving-in-the-air updates. The process to update had never received much attention. It was not clean and seamless. Each one was a mess, and it was not possible to skip any. It was an endless series of incrementals, and it created a disaster. As our listeners know, I've also been annoyed by Notepad++'s author who, similarly, has a seemingly never-ending list of things he's fixing in his Notepad app.

So if today's - think about it. If today's model of vulnerability discovery, patching, and updating is already badly broken because users get tired of stopping everything they're doing to update some software that's already working fine for them, what do you imagine happens when new versions of a non-mission critical website authoring system are being offered daily or weekly? No one cares. And before long no one will bother to install updates. You are training your users. You're abusing them.

I looked through Scriptcase's changelog for the two CVEs that Synacktiv went so far out of their way to report and manage. There is no sign of them anywhere. Last Wednesday on August 13th they fixed four security problems: Missing Permissions Policy in the Scriptcase environment. Missing "Cache-Control" in the Scriptcase environment interface. Missing Content Security Policy Instances in the Scriptcase environment. And Missing 'X-Frame-Options' Header Instances in the Scriptcase environment. I'll just mention that those are not good to have missing, and they're fixing it now, after decades.
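Those four changelog items are all missing HTTP response headers, which are easy to audit. Here's a small sketch that checks a response's headers against that list; the header names come straight from the changelog entries, while the sample dictionary is made up for illustration:

```python
# Headers whose absence Scriptcase's August 13th changelog entries flagged.
REQUIRED = [
    "Permissions-Policy",
    "Cache-Control",
    "Content-Security-Policy",
    "X-Frame-Options",
]

def missing_headers(headers):
    """Return the required security headers absent from a response's
    header dict (comparison is case-insensitive, as HTTP headers are)."""
    present = {k.lower() for k in headers}
    return [h for h in REQUIRED if h.lower() not in present]

# Made-up example response headers.
print(missing_headers({"Cache-Control": "no-store", "X-Frame-Options": "DENY"}))
# prints: ['Permissions-Policy', 'Content-Security-Policy']
```

Running a check like this against your own deployments is a cheap way to catch the kind of decades-late header fixes described above before a scanner does.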

The week before, Tuesday, August 5th, they fixed another two. There was something that they labeled CVE (26024), which is a SHELL INJECTION (REMOTE COMMAND EXECUTION) in production environment. They said production environment needs to be updated. And then Duplicate HTTP Headers Detected in Scriptcase environments.

Now, the Shell Injection Remote Code Execution in production environment sure sounds like Synacktiv's, but it has a different CVE, and there's no mention of, or thanks to, Synacktiv anywhere. And nowhere is there a mention of the password change vulnerability. Maybe that didn't get fixed because these guys don't think it's important. I don't know. VulnCheck noted that more than half of the publicly accessible instances of Scriptcase were still vulnerable a month after their disclosure, and that dozens of known malicious IPs had been seen actively scanning for vulnerable systems. Dozens.

And we all know where this story ends; right? Every one of those enterprises that made the terminal mistake of giving this far-from-secure Scriptcase system any presence on the public Internet, almost certainly without need, will find itself ransomed and extorted. I never want to see that happen. No one ever deserves that. But the saddest thing is that the correct lesson will never be learned from experience.

While we've made an example out of Scriptcase, they are more the rule than the exception. There's now a massive industry composed of super-slick-appearing fancy websites which front for not-very-professionally designed software that nevertheless gets the job done and supports its own existence.

We've spent a great deal of time on this podcast examining the extreme difficulty of making any software securely publicly accessible. The only rational conclusion is that this should never be done unless "public accessibility" is the entire purpose of the software. Public accessibility is the entire purpose of a public web server, a public email server, or a DNS server. But it is assuredly not the purpose of the "low-code" Scriptcase website designer.

Scriptcase does not exist for the purpose of being on the public Internet. It has no purpose or reason for being widely visible on the Internet. And THAT is the lesson that should be learned from this. Not "Oops! A bug was found in some random software system we use, and before we could update with the patches, high-power super-skilled anti-Western genius hackers in China or Russia got into our system and are now holding us for ransom."

No! That is not the lesson. There will always be bugs, just like this, occurring in random networked software. Always. And the anti-Western genius hackers are also never going away. They're now part of the ecosystem, too. So we should not be waiting around for the day when all the bugs are gone and the hackers have been arrested. That day will never come, ever.

In the same way - and following the same philosophy - that today's IT designers need to design their networks so that malicious insiders cannot damage the company, the IT managers need to understand the only, only, only server-style systems that can be publicly visible to anyone everywhere are the servers that are expressly designed to be publicly exposed - those whose sole purpose is to offer widely available public services. THAT is the proper lesson to take away. It does not matter whether a server appears to require an identity authenticated login. It doesn't matter.

We've seen this over and over and over. How many times are we going to point the finger at this mistake or that mistake or they didn't update their software, before we start to realize that the actual mistake is ever attaching anything to the public Internet that does not, by virtue of its purpose, need to be widely visible to everyone everywhere.

I titled this podcast "The Sad Case of Scriptcase," not because what I discovered about Scriptcase was special, but because it was so sadly common. No company should have become a victim to Scriptcase's mistake because no company should have ever made their Scriptcase instance publicly visible to everyone, everywhere, on the Internet.

Leo: There's the key; right? Yes.

Steve: Yes. Any company that rigorously adopts - think about this. Any company that rigorously adopts and enforces the policy and philosophy of never having anything publicly visible to everyone everywhere unless that is the server's entire purpose will automatically - think about this - automatically be protecting itself from all of the Scriptcases now and in the future. Bugs are never going away, ever. And neither are bad guys. So it should be obvious that the only possible solution is to make certain that the bad guys can never get their hands on those bugs.

Leo: Fair enough. Certainly air gapping things is always a good way to secure them. Can't air gap everything.

Steve: Well, there is - what's happened is the world has adopted this belief that it's possible to authenticate.

Leo: Everything [crosstalk]. Yeah, yeah.

Steve: You cannot authenticate. You cannot authenticate. We see it, I mean, everything is authentication failure. So don't make it important. If you do not authenticate to a website, I mean, you login after you've gone in anonymously, but you're making an anonymous connection. You make an anonymous connection to an email server. And you anonymously ask for DNS. Anything that authenticates is bound to fail. So don't use authentication to protect yourself. That's not protection. It will fail.

Leo: Good to remember. A lesson for us all. Unless you have it.

Steve: We see it. I mean, we've been talking about - we've been talking around this for the last couple years, and it finally gelled for me as I was looking at yet another sad instance of this, 2,800 companies, many of whom are now ransomed, and they're going to be extorted because they put this crappy software on the Internet. If it's crappy software, keep it inside. It doesn't - it cannot defend itself against the Internet.

Leo: So this company isn't designing websites for people, just tools for people, which they choose to put online.

Steve: Which some of the user idiots put on the Internet.

Leo: Right.

Steve: It has no purpose being on the Internet. But because it says, oh, yeah, you know, you have to login to be an administrator, it's like, oh, let's put it on the Internet. You'll have to login. Except it turns out you don't.

Leo: Here's the problem. Nowadays many companies have a majority of remote workers. So...

Steve: And so we have overlay networks, we have VPNs, we have all kinds of ways of getting into the corporate network first.

Leo: Tailscale and things, yeah, yeah, yeah.

Steve: Yes. Then use it inside the network.

Leo: Yes.

Steve: And so all of the Shodan scanning, all of these scanners, they're looking for morons that have put insecure servers on the Internet.

Leo: Yeah.

Steve: And from now on, I'm calling them morons because it is their fault that they got hacked.

Leo: In modern business you do have to put stuff out in the public. But limit it. Constrain it as much as possible because that's always a vector for attack.

Steve: Not if it requires authentication. You do have to put things out in public, but you're putting your website out in public because you want everyone to visit.

Leo: Right. Our website. Right. I don't have to authenticate to visit GRC.

Steve: Bots come in. Bots are welcome here. You know, it's all anonymous. Authentication doesn't work. I mean, that's what we know. You know, this dumb Scriptcase thing, you're supposed to have to log in. Except it turns out you don't. Authentication doesn't work. And so you can't have all of this crapware stuck on the Internet where you have to authenticate. Hide it.

Leo: Yeah.

Steve: And, you know, we've been blaming the wrong person. We've been blaming the authors of crappy software. Well, yes, technically. And we're blaming hackers in Russia and China. Well, yes, technically. But if it wasn't ever exposed to the Internet, the bad guys could never find it, and the bugs could never hurt you.

Leo: Right, right, right.

Steve: Don't put this stuff on the Internet. Period.

Leo: Just the words "preauthenticated remote command execution" should send a chill down your spine.

Steve: Well, look at all the ransomware. Remember that page that monitors, by day, how many new victims, well, I mean, it was hourly. It was constant.

Leo: Yeah. I mean, I understand. I don't want to blame the victim. And yet there's plenty of culpability to go around. I mean, the people who wrote the software put the bugs in, but you didn't have to expose it to everybody else; right?

Steve: Right.

Leo: If you don't have to, don't, I guess.

Steve: Apparently there were 45,000-plus customers. 2,800 of them put this on the Internet. So obviously you don't have to put it on the Internet in order to use it.

Leo: Yeah, right.

Steve: And no one should have. And essentially they were trusting that you needed to log in using your administrative password. Turns out you don't.

Leo: What could possibly go wrong?

Steve: What could possibly go wrong? What is guaranteed to go wrong?

Leo: Right, right.


Copyright (c) 2014 by Steve Gibson and Leo Laporte. SOME RIGHTS RESERVED

This work is licensed for the good of the Internet Community under the
Creative Commons License v2.5. See the following Web page for details:
http://creativecommons.org/licenses/by-nc-sa/2.5/


