Transcript of Episode #1051

Amazon Sues Perplexity

Description: FFmpeg teaching assembly language for performance. The state of Nevada recovers after not paying ransom. A "rounding error" nets a clever attacker $128 million. Why would Chrome decide to start form-filling driver's licenses? The UK's six major telecom providers to block number spoofing. XSLT support being removed from browsers. Will anyone notice? Firefox introduces paid support options for organizations. Russia continues to fight against non-Russian Internet. Google acquires another Internet security company (Wiz). The EU to finally fix their cookie permission mistake. More countries drop Microsoft Office for open choices. More countries question and examine Chinese-made buses. Microsoft discovers some information leakage from LLMs. What does Amazon's lawsuit against Perplexity's agents mean for next-generation browsers?

High quality (64 kbps) mp3 audio file URL: http://media.GRC.com/sn/SN-1051.mp3

Quarter size (16 kbps) mp3 audio file URL: http://media.GRC.com/sn/sn-1051-lq.mp3

SHOW TEASE: It's time for Security Now!. Steve Gibson is here. FFmpeg says you ought to be using assembly language. Steve says, "Right on." Why would Chrome, the Chrome browser, start to offer to fill in your driver's licenses? Steve has a theory. Microsoft discovers a wild way you can get information out of LLMs. And finally, Steve takes a look at the fact that Amazon is suing Perplexity because they're using their agentic browser to buy things on Amazon. What's that all about? That and a whole lot more, coming up next on Security Now!.

Leo Laporte: This is Security Now! with Steve Gibson, Episode 1051, recorded Tuesday, November 11th, 2025: Amazon Sues Perplexity.

It's time once again for Security Now!, the show you wait - I wait - all week for. Every Tuesday we get together with this guy right here, Mr. Steve Gibson, to find out what's new in the world of security. More than 100,000 people listen every week, Steve.

Steve Gibson: And I wait for it as much as they do. What is going to happen this week? Who knows?

Leo: Well, let me guess. Ransomware. Security flaws. Actually you've got a story, your big story is a little different than the usual. But I'll let you tease what's coming up.

Steve: Well, it is because it's sort of the, well, if you had three feet, it would be the other shoe. It would be the...

Leo: The shoe after the other shoe, yes.

Steve: Yeah. After you've run out of your two feet, you're still holding this shoe, and then you dropped it because why do I have a third shoe? I only have two feet.

Leo: The third shoe will drop later in the show. What else?

Steve: Yes, it will. We have not yet looked at the whole different issue of agency as regards what our browsers may do for us. And it turns out that's different than the robots.txt file controversy that we got into with Cloudflare earlier, or the AI browser getting confused with text from the Internet versus text from its commander in the prompt injection issue. This is different. Today's podcast I just titled - and actually, Leo, this started out as just the first topic of news for the week. But as I fleshed out all the other news, it stayed big. And I thought, okay, let's just - let's focus on that as our main issue.

So today's title is "Amazon Sues Perplexity." Which is, well, first of all, boy, if you google that, your browser explodes with hits. I mean, the whole Internet went nuts over this because everyone recognizes that this is a big issue. Which we're going to get to for our 11/11/2025 Veterans Day episode of Security Now!, 1051. But we've got more stuff to talk about. We've got FFmpeg surprising everyone by deciding they need to teach people assembly language in order to get FFmpeg's performance up where it needs to be.

Leo: Okay.

Steve: And they made some claims that some notable industry people said, what? I don't think that's right. We'll talk about that. We've got the state of Nevada bragging, boasting about their recovery after not paying any ransom. Also, oh, a rounding error netted a very clever attacker $128-plus million in some DeFi, who knows what the hell is going on, but we'll talk about that. Also, why would Chrome decide to start autofilling driver's license numbers?

Leo: Oy.

Steve: That's an interesting question.

Leo: Don't want.

Steve: Uh-huh. The UK, six major telecom providers have decided that they're going to block number spoofing within the UK. Why didn't we think of that? XSLT is a feature that is being removed from all the browsers, but not tomorrow. Soon. But the question is, will anyone notice? And if it's something that you depend upon, well, you need to stop depending upon it. Kind of like Flash was, once upon a time.

Also Firefox has decided to introduce paid support options for organizations. What? Russia continues to fight against the non-Russian Internet. Okay. Sad for Russian citizens, I guess. Google has acquired another Internet security company. We'll talk about that. Oh, Leo, the EU looks like they're going to fix this whole cookie pop-up banner nonsense.

Leo: Oh, my god. No.

Steve: Yes.

Leo: Be still my heart.

Steve: I know. It's going to go away. It took them a few, what, years, many years...

Leo: Decade.

Steve: Yes, it's coming, yes. Also, more countries are dropping Microsoft Office in favor of open alternatives. We've got more countries worrying about Chinese-made buses phoning home. Microsoft has come up with a really interesting - at first it looks like, what? What? - leakage from LLMs, by looking at encrypted LLM conversation TLS packets. But the darn thing actually works. And then we're going to look at what Amazon's lawsuit against Perplexity's agents means for our next-generation browsers. So lots of good stuff to talk about.

I've got a little update. I have a nice bit of feedback from one of our listeners about SpinRite. An update on my DNS project at one year. We're done. And there was a third thing I don't remember, but we'll get to it. And of course a great Picture of the Week. So I think maybe, you know, a good podcast.

Leo: Once in a while you've got to, you know, keep making them, some of them will turn out. I'm just joking. They're always great. And we are excited about the Security Now!, now, now. Security Now! now. Now it's security. Now. But first - it'll be security in a minute. So sort of now.

Steve: Security now. Insecurity in a minute.

Leo: Insecurity temporarily. All right, Mr. Gibson. Picture of the Week time.

Steve: Picture of the Week.

Leo: Yes, sir.

Steve: So I gave this one the headline "An important consideration when you're able to decide where you should have your emergency."

Leo: Okay. Let's take a look. "Emergency phone not installed." That is absurd. "Please do not have an emergency at this location." Okay.

Steve: Again, an important consideration when you're able to decide...

Leo: Yeah, choose your - yeah.

Steve: ...where you should have your emergency. Okay, so for those who are not seeing the video, we have a partially installed emergency phone kiosk. But only the external framework is there. The phone equipment, I mean, obviously that mechanical structure has to go in first. Then the phone installers come along and put the guts in. So this has no guts at this point. So somebody who didn't want the appearance of this bright yellow emergency kiosk, which is probably familiar to those in the area from other similar bright yellow emergency kiosks. Didn't want anyone to believe that they could actually rely on this to report their emergency.

Leo: Don't run over there, no.

Steve: Yeah, right. There's a sign that's posted where the phone equipment would be, handset and keypad and things, saying, as Leo said, "Emergency phone not installed. Please do not have an emergency at this location."

Leo: No.

Steve: So, and the mailing went out yesterday afternoon to our subscribers, about 19,261 I think we're at now. And many of them noted that there was a strange droid with a light saber in the background.

Leo: It's a fire hydrant, folks, come on.

Steve: Yeah. And so I guess this must be like a heavy snow area?

Leo: Exactly.

Steve: Don't they normally have, like, those things to indicate where the curbs are.

Leo: Right.

Steve: And in this case, I guess, if there was a fire, and there was a lot of snow that was covering up the fire hydrant, which looks kind of stubby, actually, this is like a...

Leo: I'm wondering about this picture. That looks too much like a droid with a light saber. I'm starting to think there's a little tongue firmly planted in cheek there.

Steve: I do think that that is a pole, bright red pole, sticking up from a fire hydrant so that the fire equipment people, also known as firemen, are able...

Leo: Will know.

Steve: Will know where the buried, not very tall fire hydrant is. It would take about, what, two feet of snowfall to cover up that hydrant. And then you'd think, we know that there's no emergency phone service in this location, but there's got to be a fire hydrant around here somewhere. Fortunately, if there's a red post sticking up out of the snow, you go, ah, that's the fire droid that we can use to hook our hoses up to.

Leo: Yes.

Steve: So at this point we're exhausted, and it's time for another sponsor break. No. Just kidding.

Leo: All right.

Steve: The news is that assembly language lives. Which of course is a topic near and dear to me. Last Wednesday on the 5th, the official FFmpeg 'X' account tweeted: "FFmpeg makes extensive use of hand-written assembly code for huge (10-50x) speed increases, and so we are providing assembly lessons to teach a new generation of assembly language programmers. Learn more here." And they have a link to a GitHub account and page, and then a big picture in their tweet, FFmpeg/asm-lessons. And it generated a lot of interest. This was November 4th, early in the morning.

So, okay. People who posted to that thread, which this FFmpeg posting started, questioned whether a 10 to 50x speed improvement could possibly arise from coding in assembly versus an efficient high-level language. And much as I love assembly and choose it for all of my own work, I agree.

What I suspect must be going on is a very unfair comparison. All modern processor instruction sets have extremely powerful and fast special purpose vector and array-handling streaming instructions, which are heavily pipelined and designed to do the kinds of things that FFmpeg needs to do with audio and video. And those can be used when the entire solution has been deliberately designed around using them. So by comparison, a sort of more generic solution that did not use those super special purpose, you really can't do anything else with them but this, instructions would be massively handicapped by comparison.

So any naive implementation which did accomplish the same function, which was written in a high-level language but did not also take advantage of those special-purpose, you know, processor acceleration features, would absolutely not have a chance. But you don't have to forgo those instructions if you're using a high-level language. You can use those. You sometimes have to, you know, drop down briefly and manually request that instruction. But the current high-level languages all allow you to drop down and hand-code some things, because it is recognized that there are some places where assembly language can still be the right way to solve a problem, when there isn't some explicit special-casing that was done in the high-level language for a given processor architecture.

So anyway, I wanted to share this 'X' posting from the FFmpeg group because those tutorials posted over on GitHub, all available in French, Spanish, and English, might be of interest to anyone who is curious about assembly language. Since our listeners know that assembler is my preference, I'm often asked by our listeners and others how they should get started in pursuing, you know, if nothing else, just sort of dipping their toes into the water of assembly. So it might be that these FFmpeg asm lessons would be worth looking at. And they do offer a Discord server for asking questions and receiving answers. So I have the link there in the show notes at the bottom of page 2, and I just wanted to put it on everybody's radar.

Last May, an employee with the State of Nevada made the mistake of clicking on a malicious search engine ad which installed a malicious sysadmin tool from a spoofed website. Employee didn't know any better, and this was back in May. Three months later Nevada received ransomware demands which it declined to pay. Having finally recovered in full, last Wednesday the state's press release carried the headline "Nevada completes 28-day recovery from statewide cyber incident, refuses ransom, and releases After-Action Report."

What they said was the following: "Carson City, Nevada, November 5th, 2025: The Governor's Technology Office (GTO) today released the 2025 Statewide Cyber Incident After-Action Report detailing Nevada's 28-day recovery from an August ransomware attack. Guided by pre-established incident playbooks and vendor agreements, the State did not pay a ransom, restored statewide services within four weeks" - and actually they initially restored much more quickly. I want to cover this in detail because there's a template here that is useful and actually kind of impressive - "and recovered approximately 90% of impacted data." The other ten percent they're not trusting yet, so they want to be careful with that. "The remaining items, while still in control of the State, were not required for service restoration and are undergoing risk-based review with continued monitoring. The State will take appropriate notification or remediation actions if new information emerges."

They said: "Governor Joe Lombardo said: 'Nevada's teams protected core services, paid our employees on time, and recovered quickly, without paying criminals. This is what disciplined planning, talented public servants, and strong partnerships deliver for Nevadans.' State CIO Timothy D. Galluzi said: 'We executed, then communicated. Our staff and agency partners worked around the clock with expert vendors to contain the threat, rebuild securely, and bring services back online in measured phases.'

"The numbers are: 28 days to full service restoration across affected platforms. Around 90% of impacted data recovered. Residual items under risk-based review with enhanced monitoring. No ransom paid. Response executed under cyber insurance and pre-negotiated vendor agreements. 4,212 overtime hours by 50 State employees, at $210,600 direct overtime wages, fully-loaded estimated at $259,000. $1.314 million obligated to specialized partners - forensics, recovery, legal, engineering - to accelerate containment and rebuild."

Then they said: "How Nevada stepped up. Continuity of operations: Payroll processed on schedule. High-impact public safety and citizen-facing systems were restored in phased order. Speed and discipline: Around-the-clock State teams executed 24-7 playbooks alongside partners, enabling a 28-day full restoration, faster than many public-sector timelines for incidents of similar scope. Fiscal responsibility: Surge work was led by State staff. Even using conservative fully-loaded overtime costs, the State avoided hundreds of thousands of dollars versus an all-contractor model" - meaning they kept it in-house largely - "while retaining institutional knowledge and tighter change control.

"Within hours, Nevada engaged" - and I have a timeline I'll go over in a second. But they wrote: "...engaged pre-positioned experts for forensics, recovery, and legal/privacy support including Mandiant, Microsoft DART, Dell, SHI/Palo Alto, BakerHostetler" - that's their law firm - "and local engineering support from Aeris under cyber-insurance and statewide contracts. The complete after-action report outlines next-phase hardening and modernization, including the pursuit of a centrally managed Security Operations Center (SOC), unified Endpoint Detection & Response (EDR), identity hardening, OS and application control, and expanded workforce training to sustain resilience against evolving threats."

In other words, as a consequence of their direct hands-on involvement in this, rather than just throwing up their hands and bringing in outside people, they got a bunch of takeaways which are informing them how to do better next time, acknowledging that these threats are evolving. I cut out a lot of the glad-handing that was in that announcement. They seem rather pleased with themselves over this.

I was unable to find any indication of the size of the ransom demand they declined. I think it was never made public. But given the reporting of the event at the end of August, I imagine that the demand was hefty because the bad guys did bring the entire state to its knees. I mean, they were down. All of the automated services went offline. I mean, it was a sweeping attack. The Associated Press headline at the time was "Cyberattack shuts down Nevada state offices and websites, governor's office says." And Reuters headline read at the time "Nevada state offices close after wide-ranging 'network security event.'" You betcha.

So the most interesting data comes from their complete 30-page After-Action Report, which I'm not going to drag everyone through. But among that there were a couple interesting tidbits. We learn on August 24th, 2025 - get this - at 1:50 a.m. PDT, the State of Nevada Governor's Technology Office identified a system outage that resulted in multiple virtual machines going offline. Okay. 1:50 a.m. PDT on August 24th. Guess what day of the week August 24th is? If you said Sunday...

Leo: Friday? Saturday? Monday?

Steve: Yeah, Sunday. Sunday morning, 1:50 a.m., because you want nobody around. You want to surprise as much as possible, you want to get as many dastardly deeds done during as much time as you have before anybody is able to wake up to this. So very much like New Year's Eve or Christmas Eve sort of thing. So they wrote: "Initially locked out of the systems, the GTO team successfully" - that's the Governor's Technology Office team - "successfully regained access using backup credentials and discovered encrypted files alongside a ransom note. They isolated the affected VMs to prevent further spread of the ransomware. Legal counsel from BakerHostetler LLP was engaged and promptly brought in Mandiant, a leading cybersecurity firm under Google Cloud" - remember we talked about Google's purchase of Mandiant a while ago - "to conduct a privileged forensic investigation.

"The investigation revealed that the threat actor had infiltrated the system as early as May 14th [of this year] 2025, when a state employee unknowingly downloaded a malware-laced system administration tool from a spoofed website. This tool installed a hidden backdoor, which remained active despite Symantec Endpoint Protection quarantining the tool on June 26th. The attacker escalated their access by installing a commercial remote monitoring software on multiple systems, compromising both standard and privileged user accounts.

"By mid-August, the attacker had established encrypted tunnels and used Remote Desktop Protocol (RDP) to move laterally across critical systems, accessing sensitive directories including the password vault server. On August 24th, the attacker deleted backup volumes and deployed ransomware, encrypting VMs and disrupting critical services."

Elsewhere the report says: "Between August 16th and August 24th, the threat actor accessed multiple critical servers, including the password vault server, and retrieved credentials from 26 accounts. They meticulously cleared event logs to obscure their activities. On the day of the ransomware deployment, the attacker deleted backup volumes and altered security settings to facilitate the execution of unauthorized code. At 1:30 a.m. PDT, ransomware was deployed, encrypting VMs and disrupting critical services."

And as I said, not surprisingly, August 24th was a Sunday. So, very deliberately, at 1:30 a.m. on a Sunday morning, the attackers uncloaked and attacked. They relied upon no one being around, and minimal if any crew, even later in the morning on a Sunday, to enable their active attack to go unnoticed for as long as possible.

This report, as I said, pats themselves on the back frequently, and I've removed most of that since it's not informative, and it's frankly somewhat nauseating because, like, okay, we get it, guys. But in all fairness, Nevada's IT response was very impressive. On that Sunday morning at 1:52 a.m., the VMs that run the state were encrypted and went offline, crippling systems statewide. By 7:37 a.m. on that same Sunday morning, the incident had been escalated to the CIO and Governor's office. Only a little over two hours later, by 9:51 a.m. the credential lockout was lifted using backup credentials, and access to the internal systems was obtained. Encrypted files and that ransom note then were discovered. Two and a half hours after that, by 12:37 in the early Sunday afternoon, the affected VMs had been isolated to prevent further malware spread.

Four hours later, by 4:44 p.m., Nevada's legal counsel was added, and they added Google's Mandiant forensic group to the effort. And 15 minutes after that, at 5:03 p.m. on that same Sunday, recovery protocols were initiated, and post-attack recovery had begun. State government employees took an unplanned two-day vacation that following Monday and Tuesday, by which time systems were beginning to come back up and online, and they were able to return to work on Wednesday.

So we're talking about a full rallying response by dinnertime of the day it happened. The full recovery did take four weeks. It seems as though, you know, that might have been faster. We don't know the details of where that time went. But it does sound like, you know, they didn't overpower their response. They didn't bring in outside people who actually, you know, would need to be brought up to speed. They paid a ton of overtime, $1.3 million in overtime, to their own people in order to get this, you know, get back up and online quickly. But overall, Nevada is saying they spent $1.5 million rather than whatever the ransom was. And you can imagine it was, you know...

Leo: More than that, yeah, yeah.

Steve: Oh, yeah, 10, easily $10 million for a state to be decrypted, and the decryption keys possessed. Obviously, Nevada had good backups, and they were offline, so they did not get encrypted. And Nevada paid no ransom, which means they never got any keys from any bad guys.

Leo: Good.

Steve: So, you know, overall I would say this is quite an impressive response. This is what you would expect. And you'd have to imagine that they also showed their cyber security insurance firm that they were worth insuring, that they were going to be responsible, that they were not going to spend a ton of money. And so I would say that Nevada taxpayers should be impressed with this. This is the way, I mean, you'd rather not have that guy click the link. But as we've said before, this is now the low-hanging fruit. I sent a note out to a bunch of my - actually it's a group I've talked about before, my group of high school buddies that I'm still in touch with, because Ars Technica had a piece this morning about a threat that we've discussed several times already, but it's still so unknown.

And that was Ars Technica's point: this threat is still very little known. They're calling it the "ClickFix" attack. It's where you believe you're trying to prove that you're human through a new style of CAPTCHA. And of course CAPTCHAs change from time to time. And so you're instructed to press a button to copy something from your browser onto your clipboard, then to open the Run dialog in Windows and paste that command. Well, again, none of our - hopefully no one listening to this podcast would do this. But it turns out this is becoming extremely effective. The way I explained it to my group, who are non-technical, I said our contemporary browsers are all about containment. They are doing a very good job of containing all of the horrors and crap and malicious intent that is out on the Internet within the browser, within the browser's boundaries.

But if you copy something out of the browser into Windows, you've violated that containment. And nothing prevents that from happening, unfortunately, at the moment. You know, if the browser assumes that you want to copy something that you've seen online, well, okay. A URL or some text off a page...

Leo: You know what you're doing.

Steve: Yeah.

Leo: It's your machine. Go ahead.

Steve: You know? So what we're going to need to have is some sort of - I'm blanking on the word.

Leo: Something. We're going to need something, that's for sure.

Steve: Yeah, that's definitely the case. You copy something to your clipboard. Clipboard is the word I was looking for. We're going to need a clipboard source identification.

Leo: Yes.

Steve: So that if something is pasted from a browser, it's tagged as, like, special caution. And so that, for example, you just can't drop it into the run field of Windows and say "paste" without all kinds of warning sirens and stuff going off to prevent this kind of problem. So, you know, where the clipboard got its contents is something we're going to need to start tracking, rather than, as you said, Leo, just assuming that the user knows what they're doing because, eh...

Leo: No. Yeah.

Steve: No.

Leo: Clearly there's - they're not going to...

Steve: Too much.

Leo: Asking way too much.

Steve: But anyway, you know, props for Nevada.

Leo: Yeah, amazing.

Steve: You don't want to get hit by malware. But if you do, you want to be able to recover. You don't want to have to trust bad guys to give you your keys back. And we've seen that, even when you get the keys from the bad guys, as they pointed out, and they weren't wrong, private sector firms still take months to recover. So look at Jaguar, you know, what a disaster.

Leo: Yeah. Yeah.

Steve: So, good job. Okay. Now this is really interesting. And, wow, okay. Last week, Check Point Research published an incident report describing an arcane attack on a DeFi - a Decentralized Finance - platform called Balancer. And it occurs to me that saying "arcane attack on a DeFi platform" is an oxymoron. I mean, is like - or redundant. I don't know. I mean, because it's like, I mean, we have seen dumb, like, authentication mistakes being made where a third-party system was attached to the API, and so that credential got abused, which allowed them to sneak code into the devs of the DeFi platform. You know, we talked about all that. That's not this.

I'm not going to expend any great amount of effort in either me understanding the details, or expecting anyone listening to this to. My strongest advice to everyone listening would be don't worry about the details. And after you hear why, I imagine that you'll agree. But what happened here is still so very cool, even if it's borderline incomprehensible, that I wanted to share it.

Okay. So Check Point titled their report: "How an Attacker Drained" - and I would argue earned, but we'll see - "Drained $128 Million from Balancer Through a Rounding Error Exploit," Leo. This is just - this is so cool. Okay. Again, I don't even under - I can't begin to understand the details, but I'm going to share them so everyone can not understand them with me. Apparently, some attackers did understand this, and they literally leveraged - because this is somehow about leverage - they leveraged the crap out of it. So here's what Check Point...

Leo: That's the technical term, I believe, yes.

Steve: That's a technical term, yes. Check Point said: "On November 3rd" - right, so this just happened - "2025, Check Point Research's blockchain monitoring systems" - cool that we even have such things now - "detected a sophisticated exploit targeting Balancer V2's ComposableStablePool contracts." Whatever that is. "The attacker exploited arithmetic precision loss in pool invariant calculations..."

Leo: Well.

Steve: Again, okay.

Leo: Okay, yeah.

Steve: When you're going to have some invariant pool leakage involved, okay.

Leo: That's the problem right there, I bet you, yeah.

Steve: That's not good, right, "...to drain $128.64 million across six blockchain networks in under 30 minutes." They wrote: "The attack leveraged a rounding error vulnerability in the _upscaleArray function that, when combined with carefully crafted batchSwap operations, allowed the attacker to artificially suppress BPT (Balancer Pool Token) prices and extract value through repeated arbitrage cycles. The exploitation occurred primarily during attacker smart contract deployment, with the constructor executing 65 micro-swaps that compounded precision loss to devastating effect."

Leo: Yes, I would imagine.

Steve: And that was just the overview, folks.

Leo: The fact that they even figured this out is amazing; right?

Steve: That's why I would say arguably they earned this money.

Leo: They earned it.

Steve: Like, yeah, but...

Leo: Okay.

Steve: So they said: "Balancer V2" - just to add insult to injury I'll give you a little more - "uses a centralized 'Vault' contract that holds all tokens across all pools" - of course - "separating token storage from pool logic to reduce gas costs" - it's like, what? Is that a typo?

Leo: Oh, it's reducing gas costs. That's the reason. Yeah.

Steve: Of course, that's right, "and enable capital efficiency," which you would want. "This shared liquidity design meant a single vulnerability in pool math could affect all ComposableStablePools simultaneously" - well, of course - "which is exactly what happened in this attack. Balancer V2's Internal Balance system allows users to deposit tokens once and use them across multiple operations without repeated ERC-20 transfers." Well, naturally. "This system..."

Leo: This sounds like - this sounds like the DCOMbobulator thing. This is crazy.

Steve: I know. And it's true. "This system became critical to the attack. The exploit contract accumulated stolen funds in its internal balance during deployment, then withdrew them to the final recipient address in subsequent transactions. ComposableStablePools use Curve's StableSwap invariant formula to maintain price stability between similar assets. The invariant D" - that's capital D for those who are following along - "represents total pool value, and BPT price is calculated as D divided by totalSupply. However, the scaling operations that prepare balances for invariant calculations introduce rounding errors." Wouldn't you know.

"The mulDown function performs integer division that rounds down. When balances are small, in the 8-9 Wei range, that's W-E-I, we'll get to that in a second, this rounding creates significant relative errors - 'relative' is important here - up to 10% precision loss per operation."

Okay, now, the term "Wei," W-E-I, is important. A Wei is the smallest possible unit of Ethereum. Get this. One Ethereum is 10^18 Wei. So one Wei is far less than one trillionth of a cent in value. So some super clever individual realized that by using these incredibly small balances, the rounding error, which would normally be utterly insignificant, would result in up to a 10% precision loss per operation down at the 8-9 Wei range.

Leo: Jesus.

Steve: I'm sure not giving these people any of my money. Check Point then finishes their explanation by writing: "This precision error propagates to the invariant D calculation, causing abnormal reduction in the calculated value. Since BPT price equals D divided by total supply, the reduced D directly lowers BPT price, creating arbitrage opportunities for the attacker. Individual swaps produce negligible precision loss, but within a single batchSwap transaction containing 65 operations, these losses compound dramatically." I'll say. "The lack of invariant change validation allowed the attacker to systematically suppress BPT price through accumulated precision errors, extracting millions in value per pool." Okay.

Leo: Wow.

Steve: As I said, I'm not sure that I would call this an attack at all. I mean, technically, maybe. An extremely clever bad guy understood enough of the inner workings of this system - and apparently we're the minority. Or maybe not, Leo. I wouldn't call us a minority. But there are others. Obviously Check Point has some people who understand this gobbledy-gook. So, okay. But this guy understood the inner workings of a system to design an exploit of its inherent rounding error. And doing some other background research, it turns out this is understood. The fact that there's this rounding error down there has been known for quite a while. No one had figured out how to exploit it. He clearly started with a purely theoretical concept and made it work. And for his trouble he's now slightly more than $128 million richer, whoever he is and wherever he is.

So I'm not completely certain that he didn't earn it. What I am certain of is that none of my money, nor any money belonging to anyone I care about and have any influence over, is ever going to get anywhere near any of that wacky arcane technology. It all gives me the heebie-jeebies, which is another technical term. So no thank you. I suppose I'm old-fashioned, but I want to understand where I put my money, you know, even if it's under a mattress. Because, wow, you know, where did it go? We don't know. What do you mean you don't know? Well, you know...

Leo: It's just crazy.

Steve: Well, you know, it was a rounding error. A rounding error worth $128 million? Where's my money? Well, we don't know.

Leo: It's crazy. Crazy.

Steve: It drained out. It's gone.

Leo: Yeah, yeah.

Steve: So people paid for some monkey icons or something, and now Kevin is a lot richer than he used to be. I don't know. What I do know, Leo, is that we should...

Leo: Oh, I suspect I know, too.

Steve: I suspect you do.

Leo: When you say that.

Steve: Oh, and stay tuned because, after that, we're going to find out why Chrome thinks it's a good idea to begin autofilling people's driver's license numbers and states where they obtain them.

Leo: That's nuts. Just nuts.

Steve: And we know why, don't we.

Leo: Yes, we do. Do we? I don't know. I'm going to find out. I don't know if...

Steve: You're going to find out. It's not good.

Leo: And I have some good news. Anthony Nielsen came over and said, well, you've got to turn that on. And now you can see my screen. So I'll show your chart later on. I made Anthony drive all the way here to flip a switch. I'm sorry, Anthony. But I appreciate it. I could have sworn I flipped that switch myself earlier. But anyway...

Steve: Probably in the other direction.

Leo: Yeah, probably. You know, they need big buttons that say ON and OFF.

Steve: GOOD/BAD.

Leo: GOOD/BAD. All right, Steve. On we go.

Steve: So a little blurb from Google about a new feature in Chrome caught my eye, and not in a good way.

Leo: Uh-oh.

Steve: Get a load of this one. Google wrote: "Chrome now helps you fill in passport, driver's license, vehicle information, and more."

Leo: No.

Steve: They said: "Chrome already saves you time every day by securely filling in your addresses, passwords, and payment information. Today, we're making it even more helpful. For desktop users with enhanced autofill enabled, Chrome can now also fill in your passport and driver's license number, vehicle info (like license plate or VIN) and more. It can also better understand complex forms and varied formatting requirements, improving accuracy across the web.

"We've designed enhanced autofill to be private and secure. When you enter relevant info into a form, Chrome will save this data only with your permission and protect it through encryption. And before filling in saved info on your behalf, Chrome will also ask you to confirm, keeping you in full control of your data. Starting today, these updates are available globally in all languages, and we plan to support even more data types over the coming months."

Okay. Then their little sample screen shot shows a form being filled in with fields for "Driver's License Number" and "Issuing State." Huh. Gee, you know, we've all gotten along so well until now without that.

Leo: But it's so much work, Steve.

Steve: How often do we see websites asking us to provide them with our state-issued identification, such as a driver's license number and the issuing state? It does kind of make you wonder why the Chrome devs might all of a sudden be thinking that making government identification data easier to fill out for websites...

Leo: Now I get it.

Steve: ...might suddenly be useful and convenient when it has never come up before.

Leo: Hmm.

Steve: Anyone around here have any sudden need to prove who they are and how old they are? There's one other thing about this. Recall that Google wrote: "We've designed enhanced autofill to be private and secure. When you enter relevant info into a form, Chrome will save this data only with your permission and protect it through encryption. And before filling in saved info on your behalf, Chrome will ask you to confirm, keeping you in full control of your data."

Now, there's no doubt that they mean that. Even if the application for this information may be a concern, there's no doubt that Google will do their best to keep that data from leaking. The problem is, leaking is what data does. It leaks. Right?

Leo: Yeah, that's right.

Steve: I mean, that's what it does.

Leo: That's what it does.

Steve: Chrome is a good browser with excellent security. But it's still being constantly exploited and receiving patches to close zero-day vulnerabilities that have been discovered being used in the wild. This is not any criticism of Chrome and its Chromium engine. Firefox and Safari are in the same boat. Today's web browsers have grown so complex, and are never left alone - they're being constantly updated with the latest features - that they can probably never become completely impervious.

So to me, it's a convenience for my password manager to be able to fill out my credit card number and mailing/delivery address information. That comes in handy. But I memorized my California driver's license number 54 years ago. Right?

Leo: Yeah.

Steve: And aside from having to add a "0" in front of its most significant digit when California ran out of numbers, it has never changed. So I've had no problem entering it the, perhaps, what, maybe five or six times I've ever needed to provide my identity online, such as when I froze my credit reporting at the various agencies, or when I signed up for Social Security. Other than that, it doesn't come up very often.

But consider this: We're entering a very different universe if the world's most popular web browser designers for some reason believe that in the future we're going to be needing to provide our government identification information with sufficient regularity that enabling our web browser to do that for us will be a benefit.

And here's the other problem. Even if we trust Google to have done everything right about keeping that personally identifiable information secure and to never leak, how can we possibly trust all of the many individual websites that are, presumably, all going to be asking for this information often enough for Google to have added this feature to Chrome? We all know that websites cannot keep secrets. They don't. Just ask Troy Hunt's Have I Been Pwned site. And don't forget that massive database leak, Leo, you and I and hundreds of thousands of others all discovered had our searchable...

Leo: Socials.

Steve: ...our Social Security numbers searchable online. Further demonstration that websites leak. So this brings to mind that old adage about how to keep a secret: "Don't tell anyone." I don't plan to tell Chrome or Firefox or Safari - or even my trusted password manager - anything more about me than they really require knowing for my own convenience. And I don't need to give my driver's license number out, like, ever, with a few exceptions. If we get to a place where we're needing to frequently provide our driver's license numbers to random websites, then the Internet will have entered an entirely new era.

Leo: Yeah.

Steve: And not a good one.

Leo: No.

Steve: So I don't know what Google knows, but I hope they're busy implementing identity-protecting age-assertion technologies rather than storing my driver's license number in an encrypted secure format so it can be given out more easily, because I don't ever want to be in a position where that's happening.

Leo: Yeah, yeah. Wow. I didn't think of that till you said it. And then I realized, oy.

Steve: Yeah, why. We haven't needed it until now.

Leo: Now, all of a sudden, yeah.

Steve: What's changed? Well, we know.

Leo: I turn off all of that stuff - password, autofill, address. Even address autofill and credit card autofill. I don't think the browser's the right place for that stuff, to be honest.

Steve: Well, no. And as we know it's not multiplatform. They're not all as focused on it as our password managers are. And if it's on, then you end up with a collision of the autofill. Everybody's trying to fill the thing out, and it's like, whoa, wait, ho.

Leo: Right. Right. Right. Hold on there. No, yeah, and that's - I do keep it in Bitwarden, and I keep all that other stuff in Bitwarden. I presume that's relatively safe, if I need to fill it in. But like you, I never consciously memorized my driver's license number; but you enter it enough, it sticks.

Steve: I know. I don't know why, but I can, like, run through it. I know exactly what it is.

Leo: Yeah. Just it's not that long, for one thing.

Steve: No, exactly. And mine kind of has a little rhyme to it, so it's good.

Leo: Oh. Ooh, nice.

Steve: Okay. So it's not often that I find myself envious of life in the UK. Not that there's anything wrong with the UK. It's just kind of hard to beat Southern California, is all I'm saying.

Leo: Yeah. Boy, do they envy you, I'm just going to say.

Steve: But this next bit of news would certainly be welcomed by our UK-based listeners, and I wouldn't mind having some of it myself to go along with Southern California's sunshine. Last Wednesday the official gov.uk website posted this update under the headline "Spoofed numbers blocked in crackdown on scammers." The UK government wrote: "Scammers hiding behind fake numbers will be unmasked under a new partnership with Britain's biggest" - there are six of them - "phone companies to protect the public from fraud.

"A landmark new agreement between government and industry, signed at the BT Tower today, will see a raft of new measures to safeguard the UK's mobile network from fraud. It will make it harder than ever for criminals to trick people through scam calls, using cutting edge technology to expose fraudsters and bring them to justice. Scam calls and texts are a daily frustration for many, with criminals based abroad often impersonating trusted organizations like banks and government departments to deceive people to steal money or personal information.

"Britain's six largest mobile networks have committed to upgrade their network within the next year to eliminate the ability for foreign call centers to spoof UK numbers, making it clear that calls are originating from abroad, exposing scammers' lies. Data shows that 96% of mobile users decide whether to answer a call based on the number displayed on their screen, with three-quarters unlikely to pick up if it's from an unknown international number.

"Advanced call tracing technology will also be rolled out across mobile networks to give police the intelligence to track down scammers operating across the country and dismantle their operations. New commitments to boost data sharing with the police will shine a light on the mobile networks that let scam calls slip through their net, empowering customers and making it harder for scams to go undetected."

So in this regard I could easily wish that the U.S. would be as proactive as the UK. When you think about it, this is such a simple solution. Simply examine the telephone calls entering the UK. Just watch your national borders. It's trivial to know when a call coming in from outside the UK is carrying a spoofed originating UK phone number. UK citizens travelling abroad who actually do have valid UK originating numbers will need to be admitted, but the agreement specifically talked about foreign call centers spoofing known UK numbers, so presumably there's some way to handle them separately. And yay to the UK. You know, this would be something we could all use, lord knows.
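[For the show notes: the border-screening idea Steve describes is simple enough to sketch. Everything here - the field names, the roaming-verification check, the dispositions - is hypothetical, since the actual signaling the UK carriers will use hasn't been published.]

```python
# Hypothetical sketch of ingress screening: a call arriving over an
# international gateway while presenting a UK (+44) caller ID is blocked
# unless the home network confirms the subscriber is genuinely roaming.
def screen_call(claimed_number: str, ingress: str, roaming_verified: bool) -> str:
    claims_uk = claimed_number.startswith("+44")
    if claims_uk and ingress == "international":
        # A real UK subscriber calling home from abroad is admitted;
        # a foreign call center spoofing a UK number is not.
        return "allow" if roaming_verified else "block"
    if ingress == "international":
        return "flag-international"  # honestly labeled as a foreign call
    return "allow"

print(screen_call("+447700900123", "international", roaming_verified=False))
print(screen_call("+447700900123", "international", roaming_verified=True))
print(screen_call("+15551234567",  "international", roaming_verified=False))
```

The point is that the decision needs nothing beyond information the carrier already has at its border: the claimed number and which gateway the call actually arrived over.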

Leo: We've said this, you've said this for years with regard to ISPs. But if the phone companies did the same thing...

Steve: Yes. It's exactly like ISPs who are saying, wait a minute, you know, these packets do not have our IP, and they're saying that they do, so let's drop them.

Leo: Yeah.

Steve: Like, how hard is that?

Leo: And the phone company should do that. This phone call is pretending to come from 707 area code, but it's not. Why should I allow it? Because they make money is why, I'm sure.

Steve: Yes, I know. Yes. Well, it's good that they stepped up.

Leo: Yeah.

Steve: Okay. So this is really interesting. Something that makes a lot of sense is pruning old and aging technologies from our web browsers. Browser bloat is a very real thing. Not every idea that the Internet community comes up with gains or maintains a solid foothold. I mean, they clash; right? But unless proactive measures are taken to deliberately scrape the dead bits out of our browsers, they just don't go away on their own. And the last thing anyone wants is having "zombie code" taking up space and polluting browsers with old, unmaintained, and potentially exploitable code.

So it was in that spirit that Google recently announced the planned deprecation and eventual total removal of a feature that, hopefully, no one listening to this podcast is using and needs, nor knows anyone who is or does. And if you or your enterprise do, you have at most one year to replace it with some other solution because it is going away. And I should mention that moving to Firefox or Safari probably won't help because both of them are hopeful that Google will succeed in this.

Okay. So what's going away? Something that I suspect matters so little that most people listening have never even heard of it. It's called XSLT, which is the official abbreviation for "Extensible Stylesheet Language Transformations." XSLT is a declarative, template-based language that's used for transforming XML-formatted data, which is convenient to code but difficult to view, into other forms such as HTML. Here's what Mozilla posted about this just a few months ago, back in August.

Mozilla wrote: "Our position is that it would be good for the long-term health of the web platform and good for user security to remove XSLT, and we support Chromium's effort to find out if it would be web compatible to remove support." Which is an interesting way to phrase it, if it would be web compatible to remove support. Meaning, I think, how badly it breaks things. "If it turns out that it's not possible to remove support, then we think browsers should make an effort to improve the fundamental security properties of XSLT, even at the cost of performance.

"While it's important to not break existing web content, it's also important to prevent security vulnerabilities." Thank you. "XSLT," they wrote, "has been in maintenance mode in browsers and has been an ongoing source of security issues. Features and technology are sometimes removed from browsers for this reason, even when doing so breaks some existing content. Examples include Mutation Events, window.showModalDialog function, keygen, and plugins. The usage of XSLT is lower than that of Mutation Events at the time of their removal, and Flash was very commonly used.

"If it turns out not to be possible to remove the feature, we'd like to replace our current implementation," says Mozilla. "The main requirements would be compatibility with existing web content, addressing memory safety security issues, and not regressing performance on non-XSLT content. We've seen some interest in sandboxing LIBXSLT; and if something with that shape satisfied our normal production requirements, we would ship it."

Okay. So that was August. Wednesday before last, Google's Chrome group posted the headline "Removing XSLT for a more secure browser." And they wrote: "Chrome intends to deprecate and remove XSLT from the browser. This document details how you can migrate your code before the removal in late 2026." In other words, we're currently in late 2025, so you've got a year. Actually, things start getting a little dicey in March, as we'll see.

They wrote: "Chromium has officially deprecated XSLT" - Chromium has - "XSLT, including the XSLT Processor JavaScript API and the XML stylesheet processing instruction. We intend to remove support from version 155" - that's of Chrome - "November 17, 2026." So a year. "The Firefox and WebKit projects have also indicated their plans to remove XSLT from their browser engines. This document provides some history, context, explains how we're removing XSLT to make Chrome safer, and provides a path for migrating before these features are removed from the browser."

Okay. Google then provides a timeline for this removal where, starting next March, they cautiously tiptoe forward, disabling by default, but not yet fully removing, increasing portions of Chrome's XSLT support. But the more interesting part of this event, since I really hope no one cares about the loss of XSLT itself, is what we learn about the feature and code support evolution of the web through the lens of this event. Here's what Google shared about the past and present of XSLT, since we now pretty much know its future.

They wrote: "XSLT was recommended by the World Wide Web Consortium (W3C) on November 16, 1999" - funny how these November timelines line up, so end of the year 1999, 26 years ago - "as a language for transforming XML documents into other formats, most commonly HTML for display in web browsers." In other words, a web browser could retrieve an undisplayable XML-format document, and code in the browser could use XSLT - which is a declarative, nonprocedural, non-explicitly-executed, template-oriented language, kind of like CSS - to declaratively translate that XML document into HTML, which the browser would then stick into the DOM, the Document Object Model, and render on the screen for the user. So that's been a thing for 26 years.

"Before the official 1.0 recommendation, Microsoft took an early initiative by shipping a proprietary implementation based on the W3C working draft in" - get this - "Internet Explorer 5.0" - so, yeah - "released in March of 1999. Following the official standard, Mozilla implemented native XSLT 1.0 support in Netscape 6" - before we had Firefox, Netscape 6 - "in late 2000. Other major browsers, including Safari, Opera, and later Chrome, also incorporated native XSLT 1.0 processors, making client-side XML-to-HTML transformations a viable web technology in the early 2000s." So the W3C standardized on it, produced a specification, and by the early 2000s all the browser community had it. Meaning anybody could reasonably use it for presentation of information through a web browser, where the source of that was an XML document, which is anything but presentable.

Google said: "The XSLT language itself continued to evolve, with the release of XSLT 2.0 in 2007 and XSLT 3.0 in 2017. These updates introduced powerful features like regular expressions, improved data types, and the ability to process JSON." Not just XML. "Browser support, however" - this is interesting - "never followed. Today, all major browser engines only provide native support for the original XSLT 1.0 from 1999," 26 years ago. In other words, it wasn't important enough for them even to go to 2.0 in '07 or 3.0 in 2017. Stayed at 1.0.

Google wrote: "This lack of advancement, coupled with the rise of the use of JSON on the wire format, and JavaScript libraries and frameworks (like jQuery, React, and Vue.js) that offer more flexible and powerful Document Object Model manipulation and templating, has led to a significant decline in the use of client-side XSLT. Its role within the web browser has been largely superseded by these JavaScript-based technologies.

"So why does XSLT need to be removed? The continued inclusion of XSLT 1.0 in web browsers presents a significant and unnecessary security risk. The underlying libraries that process these transformations, such as LIBXSLT used by Chromium browsers" - and Firefox - "are complex, aging C/C++ codebases. This type of code is notoriously susceptible to memory safety vulnerabilities like buffer overflows, which can lead to arbitrary code execution. For example, security audits and bug trackers have repeatedly identified high-severity vulnerabilities in these parsers." And they cite two CVEs, 2025-7425 and 2022-22834, both in LIBXSLT. And I just misspoke, by the way, a moment ago. As far as I know Mozilla does not use the LIB. They implemented their own native code back in the early days, back in Netscape 6.

"Because client-side XSLT is now a niche, rarely-used feature, these libraries" - this is Google saying - "receive far less maintenance and security scrutiny than the core JavaScript engines, yet they represent a direct, potent attack surface for processing untrusted web content. Indeed, XSLT is the source of several recent high-profile security exploits that continue to put browser users at risk. The security risks of maintaining this brittle legacy functionality far outweighs its limited modern utility.

"Furthermore, the original purpose of client-side XSLT - transforming data into renderable HTML - has been superseded by safer, more ergonomic, and better-maintained JavaScript APIs. Modern web development relies on things like the Fetch API to retrieve data (typically JSON) and the DOMParser API to safely parse XML or HTML strings into DOM structure within the browser's secure JavaScript sandbox. Frameworks like React, Vue, and Svelte then manage the rendering of this data efficiently and securely. This modern toolchain is actively developed, benefits from the massive security investment in JavaScript engines, and is what virtually all web developers use today. Indeed, only about 0.02% of web page loads today actually use XSLT at all, with less than 0.001% using XSLT processing instructions."
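[For the show notes: Google's recommended replacement - fetch the XML, parse it, and render it with ordinary code - happens in JavaScript via the Fetch and DOMParser APIs, but the parse-then-template pattern itself is easy to illustrate. Here's a minimal Python stand-in for what a small XSLT 1.0 stylesheet used to do; the catalog XML is an invented sample.]

```python
import xml.etree.ElementTree as ET

# Invented sample of the kind of XML an XSLT stylesheet once rendered as HTML.
xml_doc = """<catalog>
  <book><title>First Book</title><year>1999</year></book>
  <book><title>Second Book</title><year>2007</year></book>
</catalog>"""

def books_to_html(xml_text: str) -> str:
    """Parse the XML, then emit the HTML a template-based transform produced."""
    root = ET.fromstring(xml_text)
    items = "".join(
        f"<li>{b.findtext('title')} ({b.findtext('year')})</li>"
        for b in root.iter("book")
    )
    return f"<ul>{items}</ul>"

print(books_to_html(xml_doc))
```

In the browser the equivalent is a fetch() of the XML, a DOMParser to turn it into a document, and a framework or template literal to build the markup - all running inside the heavily scrutinized JavaScript sandbox rather than an aging C parser.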

Okay. So, okay. To me, it sure sounds like they're doing an awful lot of apologizing for something that really just needs to die. On the other hand, even the end of the horrific Flash plugin - remember those nightmares, Leo? I mean, we dined out on Flash so often on this podcast, oh my lord, I mean, it was just such a problem. And even that, it took forever to finally say goodbye, which was painful. And it's true that for those vanishingly rare websites that are built in some fashion around XSLT and who will stop functioning without it, XSLT's complete disappearance from the web could prove to be a significant inconvenience.

So Google continued apologizing by writing: "This is not a Chrome or Chromium-only action. The other two major browser engines also support the removal of XSLT from the web platform, WebKit and Gecko. For these reasons, deprecating and removing XSLT reduce the browser's attack surface for all users, simplify the web platform, and allow engineering resources to be focused on securing the technologies that actually power the modern web, with no practical loss of capability for developers."

So what I love about this as a lesson is it's a perfect textbook example of the way all this should work. The web ecosystem needs to evolve to meet the evolving uses to which our web browsers are being put. But evolution doesn't only mean continually tacking on new feature after new feature without end. It necessarily also means trimming off the dead limbs so that the organism as a whole can remain as healthy as possible. This is never an easy thing to do because someone somewhere is going to see their website die through no fault of theirs. They will have been early adopters of an interesting technology that all browsers at the time built in and have supported ever since.

Unfortunately, their use of that technology has left them being such a minuscule minority of the world that the sane decision on the part of the web browsers is to discontinue their support and to say they're sincerely sorry, which Google clearly is. If XSLT could be left in there without compromising all Internet users, it would be left in there. It would be left alone. But this old code which still requires maintenance sees so little use that it makes much more sense to just remove it than it does to expose everyone to its dangers which require continual repair to deal with. So that's the way the web ecosystem goes. And, you know, it is the way it should go.

Leo: Yeah.

Steve: And speaking of the way it should go, Leo, the way I think this podcast should go is for me to have a sip of coffee while we take a break.

Leo: Coffee doesn't keep you up at night?

Steve: I don't drink it late in the day.

Leo: It's late in the day. It's 3:20.

Steve: Okay, that doesn't keep me up, no. And I do drink espresso, which has a strong flavor, but the caffeine is burned off by the additional roasting.

Leo: Right. I don't know. I can't - I have one cup in the morning, and if I have another one I won't sleep well. And I'm just jealous because I would love to drink coffee all day. Maybe I'll get some decaf. Although that seems like it should be anathema. But anyway, we will get back to the highly caffeinated Steve Gibson.

Steve: I like the caffeine bite. There is a bite.

Leo: Yeah, I know you do. Yeah, is that from the caffeine?

Steve: Yeah.

Leo: Oh. So decaf doesn't have that, huh.

Steve: No.

Leo: Oh, well. Oh, well. Now fully caffeinated, I give you Steve Gibson.

Steve: Okay. So while we're on the subject of web browsers, which we'll be looking at again for today's main topic, I wanted to share Mozilla's posting last Friday which carried the headline "Introducing early access for Firefox Support for Organizations." The pointer to this announcement described it as "Paid Firefox support for corporate customers," which made me curious. So this is what Mozilla said.

They said: "Increasingly, businesses, schools, and government institutions deploy Firefox at scale" - meaning everywhere - "for security, resilience, and data sovereignty. Organizations have fine-grained administrative and orchestration control of the browser's behavior using policies with Firefox and the Extended Support Release. Today, we're opening early access to Firefox Support for Organizations" - that's its official title - "a new program that begins operation in January of 2026." So in a month. Or a month and a half.

What Firefox Support for Organizations offers. They said: "Support for Organizations is a dedicated offering for teams who need private issue triage and escalation, defined response times, custom deployment options, and close collaboration with Mozilla's engineering and product teams."

So they said: "Private support channel accesses a dedicated support system where you can open private help tickets directly with expert support engineers. Issues are triaged by severity level, with defined response times and clear escalation paths to ensure timely resolution.

"Discounts on custom deployment: Paid support customers get discounts on custom deployment work for integration projects, compatibility testing, or environment-specific needs. With custom development as a paid add-on to support plans, Firefox can adapt with your infrastructure and third-party updates."

And finally: "Strategic collaboration: Gain early insight into upcoming development and help shape the Firefox Enterprise roadmap through direct collaboration with Mozilla's team." So some opportunity to steer Firefox's future.

They said: "Support for Organizations adds a new layer of help for teams and businesses that need confidential, reliable, and customized levels of support. All Firefox users will continue to have full access to existing public resources including documentation, the knowledge base, and community forums." So they're saying none of that's changing. "And we'll keep improving those for everyone in the future. Support plans will help us better serve users who rely on Firefox for business-critical and sensitive operations. If these levels of support are interesting for your organization, get in touch using our inquiry form, and we'll get back to you with more information."

So that's new and, you know, interesting. To me, at first blush this sounded like a bit of the result of a brainstorming meeting whose goal was to cook up new sources of revenue for Mozilla to, you know, help support Firefox. But I can also easily imagine that there has probably been some true demand for these services for which Mozilla had no such program. So organizations that wish to be able to depend upon Firefox and Mozilla will now have a way of being assured that they can do so, while paying for the privilege. I dropped a link to this announcement into the show notes. It's here in the middle of page 12 for anyone who's interested. And that blog posting contains links that allow you to follow up and get your organization listed.

So, you know, Firefox has been just, you know, free and open source, and it will continue to be so. But, you know, if there are organizations that have decided that they want to go fully Firefox, I can imagine if the price is right saying, yeah, you know, we'd like to have access to Firefox's developers on a shorter leash so that we are able to get attention where we need it, where and when we need it. So I can see that that makes sense.

Meanwhile, Russia's policy continues to starve their own citizens of Western services. Now Akamai has reported service disruptions throughout Russia after the Russian government started filtering Akamai's traffic. This has led to disruptions for some Russian Akamai customers. Akamai says, yeah, it's aware of the government's actions, but it's unable to do anything about it. Right? It's, you know, it's Russian bandwidth on Russian wires, and Akamai has a known block of IP addresses, so if Russia wants to say "No Akamai," they can. This may just be Russia issuing a "We're serious about this" warning, because they have not yet implemented a full blanket block, and Russia now requires foreign cloud providers, among which would be Akamai, to open local offices in-country and register themselves with the state.

So that may just be like, you know, a little bit of saber-rattling on Russia's part, saying, hey, you know, we told you. If you want to be bringing bandwidth into Russia, you've got to have a local office. And so far, most organizations are saying, eh, we don't think we want to do it that much. And in some cases, if the West is sanctioning, then it may not be legally possible for Western corporations to be running offices in Russia. And we know there's been a great exodus of that so far.

A number of times in the past year we've looked at the fine security work being performed by a company called "Wiz," and I've been forced to say, you know, W-I-Z as in Wizard, just to be clear. Another security firm, Mandiant, was also once independent, and we often covered their work. They were then gobbled up by Google to become a division of that ever-growing behemoth. So it's now time to report that Google's $32 billion acquisition of Wiz Security just passed U.S. regulatory approval. Although there are some other jurisdictions in which approval is still pending, it appears certain that Wiz will be joining Mandiant as a new Google property, you know, an Alphabet property. And so Google increases their Internet security offering group. And, you know, Mandiant's still doing great work. I imagine Wiz will be, too. It's just, you know, Google has so much money, they're just - they're spending some of it. And Leo?

Leo: Yes.

Steve: Believe it or not.

Leo: Oh, please, please.

Steve: I know.

Leo: Tell me it's true.

Steve: Looks good.

Leo: Don't tease me.

Steve: A recently obtained leaked copy of proposed changes to the EU's comically horrific GDPR regulation - which forced, among other things, all websites everywhere to constantly request their visitors' cookie preferences - shows that the requirements will finally be changed to work - oh, my god - the way they always should have. It's hard to believe. I've read the language.

The new regulations allow web browser users to configure their browsers - their browsers - once and for all to subsequently transmit their cookie, tracking, and direct marketing preferences to every website they visit.

Leo: OMG.

Steve: This would be a formalized variant of the DNT (Do Not Track) header or the GPC, the Global Privacy Control signal header. But it would be done by, you know, by GDPR regulations EU-wide, which as we know has global effect because I'm in Southern California, and I'm still getting cookie banners, thank you very much. The regulations also legally require every website - which is the part that matters - to silently comply with and obey any such preference transmission from a browser's headers.
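[For reference, today's voluntary version of this idea is already just a single request header. A browser with Global Privacy Control enabled adds something like the following to every request (the host shown is an example; the DNT header works the same way, though it never gained legal teeth):

```
GET / HTTP/1.1
Host: example.com
Sec-GPC: 1
DNT: 1
```

What the proposed GDPR amendment would add is the part that's always been missing: a legal requirement that sites silently honor the signal.]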

Once adopted, and following a six-month implementation grace period to give websites a chance to get up to speed, these amended requirements would be backed by the full weight, force, and effect of the EU's GDPR which, as we know, originally inflicted these cookie pop-ups on the entire world. So the constantly annoying cookie request banners would finally disappear, and users who care will be able to "set and forget" their preference in their browsers once and for all.

Leo: Of course, I just use uBlock Origin to block them, but still.

Steve: Yeah. Yeah.

Leo: It'd be nice to...

Steve: This will be, well, I mean, and this will be built into the browser, so much higher traction we can expect over time.

Leo: Right, right.

Steve: And I'll do things like have GRC display a banner when people don't have these set, just to let them know, hey, you know, you've got a browser...

Leo: That's a good idea.

Steve: ...that supports this. Maybe you want to think about turning it on. You bet.

Last week we also saw another pair of migrations away from dependence upon Microsoft's closed proprietary solutions. The International Criminal Court - got a kick out of this one, Leo - they dropped their use of Microsoft Office in favor of OpenDesk in response to the U.S. sanctioning some of its judges. So the U.S. sanctioned some judges over something that we didn't like that the International Criminal Court did. I saw it go by at the time. I don't remember now what it was. And so the ICC said, okay, we're going to switch over to OpenDesk, thanks very much.

Also, Austria's Armed Forces abandoned Office for LibreOffice, while the Austrian Ministry of Economy has moved from Microsoft's Azure over to NextCloud. So, you know, the non-domestic dependence on Microsoft's proprietary solutions is really changing. And I hope Microsoft, somebody there is paying attention because, you know, they've certainly been enriched by the global dominance they had, and it's still there, but it's waning. You know, there's handwriting on the wall.

Speaking of handwriting, recall that last week we noted that officials in Oslo, Norway became worried about the hidden and undocumented cellular radios they found scattered throughout their Chinese-made electric buses. So out of an abundance of caution they pulled the SIM chips out of all of them to shut those radios down because, you know, why not tell us why they're here, at least, if you're going to have them? I just wanted to follow up this week by noting that Norway's discovery has shaken assumptions so that investigations are now underway in several other countries, including Australia, Denmark, the UK, and the Netherlands. All of them are driving their buses into large, bus-size Faraday cages and saying, okay, what's up with you?

Leo: That's wild.

Steve: What's going on here? Yeah. Okay. So this is extremely cool, this next piece. And at first I, like, what? Are you - what? Microsoft's claim in the introduction of what they named their "Whisper Leak" attack brought me up short because what it was claiming to do seemed far from plausible. They proved otherwise.

They wrote: "Microsoft has discovered a new type of side-channel attack" - oh, and for our listeners who have not been listening for long, this is probably the best example of a side-channel attack on cryptography, on encryption, that we will ever see. I mean, this is so good. So if you've been wondering what side-channel is, and you haven't gone back to earlier episodes, we know that our truck-driving friend is catching up, he's probably up to Episode 100 now, he was on 52 or something when we last checked in with him, this is a perfect classic example of a side-channel attack.

So they wrote: "Microsoft discovered a new type of side-channel attack on remote language models. This type of side-channel attack could allow a cyberattacker in a position to observe your network traffic to conclude language model conversation topics, despite being end-to-end encrypted via Transport Layer Security. We've worked with multiple vendors to get the risk mitigated" - in other words, this has been fixed now - "as well as made sure Microsoft-owned language model frameworks are protected."

Okay. So, now, what? Microsoft is saying here that they've discovered some sort of side-channel attack on a fully encrypted TLS connection which can reveal large language model conversation topics. They then tell us why we should care, writing: "In the last couple of years, AI-powered chatbots have rapidly become an integral part of our daily lives, assisting with everything from answering questions and generating content to coding and personal productivity. As these AI systems continue to evolve, they're increasingly used in sensitive contexts, including healthcare, legal advice, and personal conversations. This makes it crucial to ensure that the data exchanged between humans and language models remains anonymous and secure.

"Without strong privacy protections, users may be targeted or hesitate to share information, limiting the chatbot's usefulness and raising ethical concerns. Implementing robust anonymization techniques, encryption, and strict data retention policies is essential to trust and safeguarding user privacy in an era where AI-powered interactions are becoming the norm.

"In this blog post, we present a novel side-channel attack against streaming-mode language models that uses packet network sizes and timings." Okay, uses packet sizes and timings. "This puts the privacy of user and enterprise communications with chatbots at risk, despite having end-to-end encryption." So, okay. It's not claiming to determine what they're saying. But it appears to be able to determine if the discussion is about a specific topic. Okay. So this is certainly not nothing. I'll let them finish.

They wrote: "Cyberattackers in a position to observe the encrypted traffic (for example, a nation-state actor at the Internet service provider layer, someone on the local network, or someone connected to the same WiFi router) could use this cyberattack to infer if the user's prompt is on a specific topic. This especially poses real-world risks to users by oppressive governments where they may be targeting topics such as protesting, banned material, election process, or journalism. Finally, we discuss mitigations implemented by cloud providers of language models to reduce the privacy attack risks against their users. Through this process, we have successfully worked with multiple vendors to get these privacy issues addressed."

Okay. So Microsoft's post then reminds us that packet length depends upon packet content. Less content means smaller packets. And also that the ciphertext that's encrypted from plaintext will have the same approximate length as the plaintext it encrypts. Next we have the fact that users of cloud-based AI prefer watching the AI generating and sending tokens of text as they're generated, sequentially. Right? In streaming mode, as it's called. As if the AI was busily typing on its computer on its end. This means that rather than waiting to receive the entire output all at once, the AI models are deliberately dribbling it out as it's being determined. That also means that the TLS protocol is similarly "dribbling out" individual encrypted packets one by one as they are being sent to the user, containing, in many cases, individual encrypted words.

And, finally, the timing of the individual dribbles contains some information about what the model went through in order to produce that next bit of dribble. It turns out that Microsoft did indeed discover and implement a successful side-channel attack, without ever having any access to the decrypted content, only using the individual sizes and timing of the TLS packets which were seen to be going back and forth.
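[To make the leak concrete, here's a minimal sketch - mine, not Microsoft's code; the 29-byte per-record overhead is an illustrative typical figure, not a constant of TLS - showing why streamed tokens expose their lengths to a passive observer:

```python
# Illustrative sketch: each streamed token rides in its own TLS record,
# so an eavesdropper sees the token's length plus a near-constant overhead.
TLS_OVERHEAD = 29  # illustrative per-record overhead in bytes

def observed_packet_sizes(tokens):
    # What a passive observer on the wire would measure, per record.
    return [len(t.encode()) + TLS_OVERHEAD for t in tokens]

reply = ["Money", " laundering", " is", " the", " process", "..."]
sizes = observed_packet_sizes(reply)

# Subtracting the (guessable) constant recovers every token's length
# without decrypting anything at all.
token_lengths = [s - TLS_OVERHEAD for s in sizes]
print(token_lengths)  # → [5, 11, 3, 4, 8, 3]
```

Add the inter-packet timings on top of those lengths and you have exactly the feature stream Microsoft trained its classifier on.]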

This attack does not allow an eavesdropper to broadly determine what's being discussed. But in the example they gave, they pre-trained their system, their cyber attacking system, with 100 examples of LLM prompt transactions regarding money laundering. They ask about money laundering 100 different ways. And they trained their recognizer on the LLM's replies only by examining the individual TLS packet timings and lengths that replies about "money laundering" generated from the LLM. And it worked. Once they had set everything up, they allowed their system to monitor the individual packet lengths and timings of 10,000 separate conversations, and 100% of the time it successfully identified the one conversation out of those 10,000 that was about money laundering.

Microsoft summed the threat up as follows: "For many of the testbed models, a cyberattacker" - many of the testbed models that Microsoft implemented, so they saw this happen - "a cyberattacker could achieve 100% precision (all conversations it flags as related to the target topic are correct) while still catching 5-50% of target conversations. In plain terms, nearly every conversation the cyberattacker flags as suspicious would actually be about the sensitive topic, no false alarms. This level of accuracy means a cyberattacker could operate with high confidence, knowing they're not wasting resources on false positives.

"To put this in perspective, if a government agency," they wrote, "or Internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics - whether that's money laundering, political dissent, or other monitored subjects - even though all the traffic is encrypted." They said: "Important caveat: these precision estimates are projections based on our test data and are inherently limited by the volume and diversity of our collected data. Real-world performance would depend on actual traffic patterns, but the results strongly suggest this is a practical threat, not just a theoretical one."

So this seems academically interesting, but not something that we would need to worry about. But when we recall Bruce Schneier's reminder - "Attacks never get weaker, they only ever get stronger" - you know, it seems like what might be a curiosity today could have the tendency to mature over time. So how to fix this?

They wrote: "We've engaged in responsible disclosure with affected vendors and are pleased to report successful collaboration in implementing mitigations. Notably, OpenAI, Mistral, Microsoft, and xAI have deployed protections at the time of writing. This industry-wide response demonstrates the commitment to user privacy across the AI ecosystem. OpenAI, and later mirrored by Microsoft Azure, implemented an additional field in the streaming responses under the key 'obfuscation,' where a random sequence of text of variable length is added to each response. This notably masks the length of each token, and we observed it mitigates the cyberattack effectiveness substantially. We've directly verified that Microsoft Azure mitigation successfully reduces attack effectiveness to levels we consider no longer a practical risk."
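[A hedged sketch of the mitigation idea as described - the "obfuscation" field name comes from Microsoft's post, but the padding scheme below is my guess at its general shape, not OpenAI's or Azure's actual implementation:

```python
import secrets
import string

def pad_stream_chunk(token: str, max_pad: int = 32) -> dict:
    # Append a random-length random string to each streamed chunk so
    # the encrypted record's size no longer tracks the token's length.
    pad_len = secrets.randbelow(max_pad + 1)
    padding = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return {"token": token, "obfuscation": padding}

chunk = pad_stream_chunk("laundering")
# The receiving client simply discards the "obfuscation" field; only
# its on-the-wire length was ever the point.
```

The elegance is that nothing about the protocol changes; the signal the attacker needs is simply drowned in random noise.]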

So as I said, here we have a beautiful example of a surprisingly effective side-channel attack, and a classic, perfect example of a side-channel attack in general, where the data being leaked is never seen, you know, never seen directly; but some indirect consequences of the specific data are observable and can allow a sufficiently clever attacker to infer what that data must have been for that inference to be true of the data. So just, you know, nice work on Microsoft's part, and not something we would ever think to protect or that needed protecting, but indeed it did.

Leo, break time. We're going to talk about a few miscellaneous bits, and then we'll tackle our topic.

Leo: And now, back to Steve.

Steve: A word from a listener. David Wright wrote: "Hi, Steve. I've bought numerous copies of SpinRite over the years..."

Leo: Really.

Steve: "...to support you."

Leo: Aww.

Steve: He's moved around from company to company.

Leo: Oh, okay. Oh, that makes sense.

Steve: And he said, hey, we need a corporate site license for SpinRite. He says: "But" - I loved it. He said: "But I've never actually needed to use it in anger." He said: "I've had problems over the years, but they all turned out to have other causes. Until last week." Now, this is a fresh email, so this just happened. He said: "My predecessor set up a NAS for the documentation of our Measuring and Control department," and he said, "installation and programming of the PLCs and associated technology. Their documentation 'drive'" - he had "drive" in quotes, meaning a NAS drive, it was connected by iSCSI to a server - "disappeared.

"Looking at the NAS, one of the drives was blinking red. Checking the NAS UI, the drive was also showing a fault there, but He Who Shall Not Be Named had set up the NAS, which was the main storage for all the department's documentation, with drive-spanning zero-redundancy RAID 0." Meaning the entire volume was at risk because of one drive.

He said: "I grabbed my copy of SpinRite, a USB drive adapter, and plugged it in. Twenty-four hours later the drive was back in its NAS, and we were busily copying their documentation over to our NAS. A new drive has been ordered, and I will be completely rebuilding their NAS with RAID 5 this time." He said: "With so much kit, it was one of those pieces that hadn't been checked since I took over, but it being a NAS with four large drives in RAID, you assume the person setting it up wasn't so idiotic as to use RAID 0. Needless to say, once the dust has settled and I have time to breathe, I'll be putting in an order for another corporate license. Best regards, David Wright."

So first of all, David, thank you. And I wanted to share David's story since it's a perfect contemporary example of SpinRite 6.1 still coming to the rescue of those who need it. With RAID configured so that any one of its four drives having a problem would endanger the entire storage volume, I'm unsure what someone would do if not for SpinRite. There are many data recovery specialist services. And if a drive has failed electrically or mechanically, so that it requires a PC board swap or, god help you, a head replacement, then there's no alternative. Software is not going to be able to help you there.

But that sort of catastrophe is exceedingly rare. Usually they'll have a drive for a week or more, so you're down for that period of time, and then charge several thousand dollars. They take advantage of people's desperation to have their data back, of course. And we've heard many times from ex-employees of these services who learned about SpinRite from their employer or their ex-employer, that the first thing those companies do is run SpinRite over the drive themselves. So, you know, many days and dollars can usually be saved, as David here just reported he did, by giving SpinRite a try yourself. And, you know, save thousands and save a week and get your data back. So anyway, thank you, David. I appreciate the feedback.

While I'm on the topic of GRC software, I'll mention that Saturday evening I dropped the 62nd development release of our forthcoming commercial version of the Benchmark. And I am so pleased with the way it has turned out. As is so often the case when I begin one of these journeys, I only ever have some rough idea of what the end result will be. And this is one reason I learned long ago - actually it was with SpinRite 3.1 - to never guess when that will be. People say "When, when, when?" I go, "I would tell you if I knew." But I don't know because I don't know what it's going to be.

In this case, as we know, I mostly set out with the goal of adding the three newer protocols that the freeware Benchmark doesn't support: IPv6, DNS over TLS, and DNS over HTTPS. But what we have wound up with after a year of work, because it was November last year, is a far more advanced and enhanced result. It now does things like quickly and automatically "sidelining," right from the get-go, resolvers which it quickly determines will be unable to compete. This allows it to spend its time much more accurately measuring the performance of the DNS resolvers at the head of the pack, rather than giving equal time to, and wasting time on, the stragglers at the end. And this behavior can be tuned, since there are also several new expert-level knobs that can be turned on the software.

Through statistical analysis of the spread of timing results, we also learned that the original single-pass timing of 150 queries, which are made up of the top 50 domains on the Internet, which is what the freeware version has always done, turns out that was unable to yield sufficient certainty due to packet timing variations. It's easy to obtain an average, you know, four readings will do that. But it's surprising to see how many queries must be made to obtain 95% statistical certainty of what that average value actually is, rather than by chance it being higher than it actually would be in practice. So the new version of the benchmark makes five passes by default, though that can be set to any number you want.
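[The underlying statistics can be sketched in a few lines. Assuming roughly normal timing jitter, the number of samples needed to pin the mean down to a given margin at 95% confidence grows with the square of the jitter. The numbers below are illustrative, not the Benchmark's actual figures:

```python
import math

def queries_for_confidence(stddev_ms: float, margin_ms: float, z: float = 1.96) -> int:
    # Smallest n such that the sample mean lies within ±margin_ms of the
    # true mean with ~95% confidence (z = 1.96), given per-query jitter.
    return math.ceil((z * stddev_ms / margin_ms) ** 2)

print(queries_for_confidence(10, 5))  # ±5 ms with 10 ms jitter → 16 queries
print(queries_for_confidence(10, 1))  # ±1 ms with 10 ms jitter → 385 queries
```

That quadratic growth is why a handful of queries yields a plausible-looking average but nothing like 95% certainty about it.]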

And if someone, for example, wished to measure, collect, and process timing data over a much larger time span, like for example run the Benchmark for two days, the Benchmark's actual running speed can now be set so that a run which would, for example, normally take 30 minutes could be set to take 50 hours, with each resolver queried 750 times over a much wider span, which allows you to then get that average. So, and even so you can still do a benchmark in three minutes.

So anyway, there are many, many more features; and I am so pleased with the outcome of this past year's work. The gang in the newsgroup has now had the Benchmark for several days. Nobody's found a problem. It's working perfectly for everyone. We're done. So I'll be working on the documentation to get that ready for the release, which should be, you know, a week or two from now. So I'm very excited.

And while we're on the subject of GRC projects, recall that about a month ago, near the start of October, there was a time when all of GRC's weekly Security Now! podcast email suddenly went to Gmail's spam folders. Our listeners, I don't know how they even saw them or found them. In fact, Leo, you said that you'd check your Gmail spam folder once a week to see if anything important has gone there. So obviously Gmail makes mistakes. I was horrified because I had done nothing different; but suddenly, like, all of the Gmail from our listeners - and we have a huge percentage of listeners who either use Gmail as their email provider or have their own personal domain that Google handles for them - it all went into junk. It was all routed that way.

So we soon learned from our listeners that Google had apparently suffered some sort of internal glitch because many other people's email which was bound for Gmail, which had never had any trouble, was also going into its recipients' spam folders. So it wasn't anything that I did, nor really anything that Google was doing deliberately. I think that there was some just internal glitch inside of Google for a few days. And the weekly Security Now! mailing happened to hit then.

But since I'm planning GRC's second-ever full mass mailing to our more than 150,000 subscribers once the commercial version of GRC's DNS Benchmark that I was just talking about is ready, the possibility that, you know, Gmail recipients among those 150,000-plus might get routed into spam scared the you-know-what out of me. So even though I was certain I had originally gotten all of the spam stuff fixed correctly, I returned my focus to our SPF, DKIM, and DMARC DNS records. All of the various test sites said that everything I had set up was all working correctly, it was hunky-dory, that the records restricting the spoofing of email from GRC were all correct.

Yet a look at Google's user-reported spam history and chart told a very different story. Users were apparently being annoyed by spoofed email pretending to come from GRC. Spammers were forging GRC.com as their sender, I suppose because GRC's been around a long time.

So what I discovered was that, even though my anti-spam DNS records were well locked down, there were two optional parameters missing from our DMARC DNS record. The bits that were missing are called "alignment modes," and those can either be "relaxed" or "strict." And what I discovered was that, if they're not specified, they default to being "relaxed," as in "none," because spam was getting through. So I added two additional values: adkim=s and aspf=s, both for strict. And it took a while, it took Google a while for the records to propagate. Probably Google is caching them internally because it doesn't want to be constantly checking all of the DNS for incoming email sources. So I was, like, on pins and needles for a while.
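[For anyone wanting to check their own records, a DMARC policy carrying the strict alignment flags Steve describes looks like this - a hypothetical example domain and report address, not GRC's actual record:

```
_dmarc.example.com.  IN TXT  "v=DMARC1; p=reject; adkim=s; aspf=s; rua=mailto:dmarc-reports@example.com"
```

When the adkim and aspf tags are absent, both alignment modes default to relaxed, which is exactly the gap Steve found.]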

But I have in the show notes - and Leo, you were showing it, thank you - the recent chart from Google showing that - I think that's the last 90 days, September, October, November, yes. So basically through September and October, there were instances of users reporting incoming spam that was pretending to be GRC. It had nothing to do with GRC. I never sent it. No one at GRC ever sent it. It was bad guys thinking that maybe if we pretend to be Gibson Research Corporation, that has a spotless email reputation, we'll be able to get through. And they were. And as a consequence, Google was saying to me, you know, we're not so sure about GRC email.

Well, the good news is adding those last two specifications finally locked it down tight. And as we can see in that chart, it's been a flat line at zero ever since early October. There have been periods in the past where it was also a flat line for a while, so I've been holding my breath. But at this point it looks like we've exceeded the longest stretch we've ever gone without any spam problem. So anyway, I just wanted to share this. If there are listeners, and I know there are because I've heard from you, who are running your own email servers, it turns out this is important. A long time ago I spent a lot of time getting SPF and DKIM all set up right, and I never discovered that those two fields had to be specified in order to get true protection. Apparently you get some, but not what Google needs.

Leo: So you have to say "strict adkim" and "strict aspf."

Steve: Yes.

Leo: And then you'll get through.

Steve: Yes. And then when an email comes in to a provider who has previously probably obtained that record from GRC, they'll see that our instructions, GRC's instructions are, if this doesn't strictly align with SPF, then reject it. Absolutely it is not valid. And so it was relaxed until I said treat that as strict. And the SPF, I mean, SPF is Sender Policy Framework. It just says, it's so simple, it says "These are the IPs that are allowed, that will ever generate valid email from GRC." And actually it's just one IP. It's something .201, client.grc.com. And I've said this is the only IP that will ever generate valid email from GRC. And I've been saying it for years, but without also saying, "And I'm serious about it."
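[An SPF record of the kind Steve describes - a single authorized IP with a hard fail for everything else - would look like this, using an illustrative address from the documentation range rather than GRC's real one:

```
example.com.  IN TXT  "v=spf1 ip4:203.0.113.201 -all"
```

The trailing "-all" is the hard-fail part: mail from any other source IP should be rejected outright.]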

Leo: Strict. I'm being strict.

Steve: Yeah. I'm being strict, darn it. I mean, and to me it's crazy that - why would I, what value is having an SPF record and a DKIM record if they're being treated in a relaxed fashion?

Leo: Well, so somebody could use different subdomains probably; right? So could be mail.grc?

Steve: No. No.

Leo: No.

Steve: There are mechanisms for having - for, like, specifying ranges of IPs or subdomains. And even so...

Leo: You could still be strict.

Steve: Yeah. I think, I mean, I actually kind of know. The reason is that you want, before you lock this down, you want it to be in a reporting mode.

Leo: Right.

Steve: Where you can monitor bounces...

Leo: Right, see how it's doing, yeah, yeah.

Steve: Yes, to make sure that you've got it all right so that you don't get email that is rejected when it shouldn't be. Like valid mail you're sending that gets sent to spam. That wasn't the problem. It was invalid mail that bad guys were sending as GRC were being seen as legitimate. So, you know, false positives instead of false negatives. So anyway, problem solved. Yay. And when we get this - I'm now confident, increasingly confident. Again, I've seen weird spells where we've not been spoofed. But given that I made this change, and after waiting a little bit, it's gone absolutely to zero with not a single exception, where before it looked like the Rocky Mountains in the graph. It's like, okay, I think maybe we've got it.

Leo: So the whole point of this is that somebody does not spoof you to send their spam.

Steve: Correct.

Leo: And Google was assuming that mail coming from you was in fact spam.

Steve: Yes. And the problem is, they have a very low tolerance. It's 0.3%. If it's over 0.3% of users saying I don't want this, you get in trouble with Google. So 0.3% is three out of a thousand.

Leo: Right.

Steve: If I sent a thousand pieces...

Leo: So somebody must have done that, though; right? They must have clicked - you could do that by accident, though. It's very easy to click that button, spam button.

Steve: That's what I was thinking, except that now it's gone to zero, and we've had many of our mass mailings, not a single recipient has said this is spam. So what was happening was bad guys' spam, I mean, it was spam. It was like, you know...

Leo: Oh, it wasn't from you, yeah.

Steve: It was like, you know, how to stay hard longer from GRC.com.

Leo: I haven't gotten that email.

Steve: And it's like, no, we didn't send this. And so people were saying, this is spam. And unfortunately, I was being blamed for other people's...

Leo: It was getting associated with your domain.

Steve: Yes. And again, it was like 20%. Well, the reason it was 20% was I'm not sending any email at all. And so one out of five people were clicking on spam, saying this is spam.

Leo: Right. Makes sense.

Steve: Yeah. Turns out, you know, spam is a problem.

Leo: It's a little bit of a problem, yeah.

Steve: Who knew?

Leo: Yeah. And I wouldn't mind except that I still get tons of spam in my Gmail account, so...

Steve: Oh, Gmail is - it's entertaining, actually, to look at the spam folder in Gmail.

Leo: Oh, my god.

Steve: Because, I mean, you can look in the morning, and just since, like, earlier in the morning you've got, like, just a torrent of spam. The good news is that Google has this ability to view across all their subscribers, so it's very apparent when all these people are getting the same come-on email.

Leo: Well, that's the theory is this kind of community spam filtering is the best way to do it. But maybe because I've had laporte@gmail forever, I get so much spam, even not into my spam box.

Steve: Yeah.

Leo: Most of it's in French. Maybe that's why.

Steve: Yeah, yeah. Well, anyway, so my message is, it really does look like it is possible, no matter how popular your domain is with spammers to abuse: if you get this SPF and DKIM and DMARC all set up correctly, with everything set for the strictest enforcement possible, then spam being spoofed as coming from you won't get through to any compliant recipient provider. It'll go into people's spam folders.

Leo: If you want to see, just to show you how much spam is not being filtered, this is my laporte@gmail primary inbox. Let's see. I get a request for something. It's all in French. Just missed your call, says Jen. Here's an invoice for your account from Airtel. I mean, I don't know if it's - it's got to be spam. I don't know what it is. It's why I don't use this address anymore, which is why I'm willing to tell people what it is.

Steve: And I do think that, like, some of this is typos.

Leo: It's people trying to send to a real French person.

Steve: Yeah.

Leo: "Bonjour. Your personal training account has been updated." Notice Google translated it. Thank you. "Join us at the Indigenous Speakers Universe at Vancouver Island University." But, see, this is a cc to all of these people whose names are visible in here. I mean, this is crazy. "Roof inspections for M Street." I don't live on M Street. Okay. I love all the French stuff, too. "Attention populaire."

Steve: I notice that Kimberly wrote you. I think she wrote to me, too.

Leo: Yeah, Kimberly, you know, she gets around.

Steve: She does.

Leo: "Hey Laporte. It's my email." She doesn't know my first name because it's just laporte@gmail. All right. I'm sorry. I'm glad you fixed it.

Steve: Okay. Oh, I am, too. I feel very relieved. I just wanted to spread the news so that if any of our listeners have any problem like that, turns out it can be - it appears, again, I'm couching everything in it so far, and I'm crossing my fingers. And, boy, I'll know when I sent out 150,000 pieces of email.

Leo: Oh, man. Holy cow.

Steve: Yeah, it's going to be good. Okay. We are at two hours. Let's take our final break, and then we're going to look at the question which is entirely gray. I don't normally have a gray area feeling about things. But in this case, yeah, I don't know. This is an interesting issue.

Leo: You know, we talked about it on Sunday. I'm very curious what you think about it. It has to do with agentic browsers doing your shopping on Amazon, yeah. We'll talk about it in just a minute. Yeah, I mean, I think I'm gray, too. I was not - I understand from both points of view. But anyway, we'll get to that in moment. Okay, Steve. Let's talk about this. I think it's a very interesting story.

Steve: Yeah. Yeah, yeah. Okay. So some time ago we examined the robots.txt file which is sort of where this controversy began. And as we know, they were originally provided by sites as an aid to help keep web search spiders out of trouble. Controversy arose when Cloudflare decided to become much more proactive on behalf of their users when they believed robot AI agents, whether scraping for content or browsing on behalf of their users, were being deliberately deceptive and were also deliberately disobeying the clearly expressed wishes of those users. Then last week's podcast was "Here Come the AI Browsers," which looked at the vulnerabilities that could arise when AI browsers encountered remote website content which they might confuse for user instructions.

Today we have a third aspect of the AI web browser amalgam, which is AI browsers acting on behalf of their users. The Guardian's headline read "Amazon sues AI startup" - I thought that was interesting they'd call it a startup, I guess - "over browser's automated shopping and buying feature," which it follows with the tease, "Amazon accuses Perplexity of covertly accessing customer accounts and disguising AI activity as human browsing." Okay, now, the idea that Perplexity almost certainly does this is not news. Although questions were raised over Cloudflare's possible misinterpretation of Perplexity's automated agent actions, as a web technology developer, I was left with no questions there. It seemed obvious to me that the evidence revealed deliberate shenanigans on Perplexity's part.

So let's see what The Guardian's reporting adds to this. They wrote: "Amazon sued a prominent artificial intelligence startup Tuesday over a shopping feature in the company's browser, which can automate placing orders for users. Amazon accused Perplexity AI of covertly accessing customer accounts and disguising AI activity as human browsing."

Okay, so, you know, duh. It's the Internet, Amazon. And Amazon has done quite well thanks to the Internet. Right? In fact, they owe their entire existence to the Internet. So what's wrong with having a browser working on our behalf? That's the real question, and that's what we're going to examine today.

The Guardian continued, writing: "Amazon's lawyers wrote: 'Perplexity's misconduct must end. Perplexity is not allowed to go where it has been expressly told it cannot. That Perplexity's trespass involves code rather than a lock pick makes it no less unlawful.'" Whoa. Okay. So "expressly told it cannot" certainly sounds as though someone has been caught ignoring and bypassing those pesky robots.txt files again. But this time we don't have some bridge tollgate analogy. This time we're talking about the content owner becoming very upset.

The Guardian continues: "Perplexity, which has grown rapidly amid the boom in AI assistants, has previously rejected the U.S. shopping company's claims, accusing Amazon of using its market dominance to stifle competition. Perplexity wrote in their blog post: 'Bullying is when large corporations use legal threats and intimidation to block innovation and make life worse for people.'

"The clash highlights an emerging debate - and it is a debate - over regulation of the growing use of AI agents, autonomous digital secretaries powered by AI, and their interaction with websites. In the lawsuit, Amazon accused Perplexity of covertly accessing private Amazon customer accounts through its Comet browser and associated AI agent and of disguising automated activity as human browsing. Perplexity's system posed security risks to customer data, Amazon alleged, and the startup had ignored repeated requests to stop. Amazon said: 'Rather than being transparent, Perplexity has purposely configured its CometAI software to not identify the Comet AI agent's activities in the Amazon Store.'" Well, imagine that.

"In the complaint, Amazon accused Perplexity's Comet AI agent of degrading customers' shopping experience and interfering with its ability to ensure customers who use the agent benefit from the tailored shopping experience Amazon curated over decades. 'Third-party apps making purchases for users should operate openly and respect businesses' decisions on whether to participate,' Amazon said in an earlier statement.

"Perplexity earlier said it had received a legal threat from Amazon demanding that it block the Comet AI agent from shopping on the platform, calling the move a broader threat to user choice and the future of AI assistants. Perplexity is among many AI startups seeking to reorient the web browser around artificial intelligence, aiming to make it more autonomous and capable of handling everyday online activities, from drafting emails to completing purchases. Amazon is also developing similar tools, such as 'Buy For Me,' which lets users shop across brands within its app; and Rufus, an AI assistant to recommend items and manage carts.

"The AI agent on Perplexity's Comet browser acts as an assistant that can make purchases and comparisons for users. The startup said user credentials remain stored locally" - just like they do for us now - "and never on its servers. The startup said users had the right to choose their own AI assistants, portraying Amazon's move as an attempt to protect its business model. Perplexity added: 'Easier shopping means more transactions and happier customers. But Amazon doesn't care. They're more interested in serving you ads.'"

Leo: I think that's true. I hate to say it.

Steve: I do, too, Leo. We were just saying last week, the reason we're not using Alexa - and yes, I've just said the "A" word.

Leo: Or the Fire TV, or the Fire tablets, or any of the Amazon stuff, it's that they're ads. It's all ads.

Steve: Yes. And I was going to, I was going to do that initially because in researching it looked like it had the best voice recognition technology available, and I want that. The good news is Apple is really gung-ho on HomeKit and pushing forward into that market in the future. And I trust Apple more than any other organization in the world to do the right thing. And we're an Apple shop except for Windows, so yeah.

Leo: Amazon makes more money on advertising than it does on product sales. That's the fact.

Steve: Yeah. Yeah. So guess what, you know, not Google and not Amazon, thank you very much. So using the CometAI browser to shop is a much more pleasant experience for its user because they won't be exposed to Amazon's constant visual bullying and repeated appeals to purchase stuff. I'm a heavy Amazon user, and I'm quite familiar with the need to often decline their multiple come-ons along the way to the final purchase conclusion. I mean, what about this, and how about that? And, oh, you left this, and you were looking at this before. What about that? It's like, just let me have the "Am I done yet?" button, please.

So this question of the "agency" of AI agents I think is very interesting, and it's not at all cut and dried. For example, what if, rather than using Perplexity's CometAI browser, we used an AI Chrome browser extension to do the same thing? In that scenario we would be using an authentic Chrome browser, but an add-on AI agent would be viewing the pages and clicking the links and pressing the buttons on our behalf. So Amazon is attempting to tell the world that we're unable to make our lives better and easier while purchasing stuff from them? You know, they certainly wouldn't like that scenario, the Chrome AI add-on, because it's going to do the same thing that Perplexity's CometAI has built in.

Since the entire Internet pretty much blew up over this new battle last week, I mean, it was something to see the coverage of this, and since the rights and roles of AI agents promises to be one of the crucially important issues of our near future, I want to spend a bit more time on it today before we move on. TechCrunch weighed in on this with their coverage last week titled "Amazon sends legal threats to Perplexity over agentic browsing." Here's what TechCrunch reported.

They said: "Amazon has told Perplexity to get its agentic browser out of its online store, the companies both confirmed publicly on Tuesday. After warning Perplexity multiple times that Comet, its AI-powered shopping assistant, was violating Amazon's terms of service by not identifying itself as an agent, the ecommerce giant sent the AI search engine startup a sternly worded cease-and-desist letter, Perplexity wrote in a blog post entitled 'Bullying is not innovation.'

"Perplexity lamented in the blog post: 'This week, Perplexity received an aggressive legal threat from Amazon, demanding we prohibit Comet users from using their AI assistants on Amazon. This is Amazon's first legal salvo against an AI company, and it is a threat to all Internet users."

And of course I completely agree. This is important. As I noted above, the AI add-on to Chrome thought experiment demonstrates that this is a question with a very soft border. Where exactly does the AI agency begin and end? Does Amazon, like, refuse to allow us to do anything?

TechCrunch continues: "Perplexity's argument is that, since its agent is acting on behalf of a human user's direction, the agent automatically has the 'same permissions' as the human user. The implication is that it doesn't have to identify itself as an agent. Amazon's response points out that other third-party agents working at the behest of human users do identify themselves. Amazon's statement explains: 'It's how others operate, including food delivery apps and the restaurants they take orders for, delivery service apps and the stores they shop from, and online travel agencies and the airlines they book tickets with for customers.'

"If Amazon is to be believed, then Perplexity could simply identify its agent and start shopping. Of course, the risk is that Amazon, which has its own shopping bot called Rufus, could block Comet or any other third-party agentic shopper from its site. Amazon suggests as much in its statement, which also says: 'We think it's fairly straightforward that third-party applications that offer to make purchases on behalf of customers from other businesses should operate openly and respect service provider decisions whether or not to participate.'

"Perplexity claims that Amazon would block the shopping bot" - and I'm sure they would, I mean, they already said cease and desist - "because Amazon wants to sell advertising and product placements. Unlike human shoppers, a bot tasked with buying a new laundry basket presumably wouldn't find itself buying a more expensive one, or getting lured into buying the latest Brandon Sanderson novel and a new set of earphones 'on sale.'

"If all of this sounds a bit familiar, that's because it is. A few months ago, Cloudflare published research accusing Perplexity of scraping websites while specifically defying requests from websites blocking AI bots. Interestingly, many people came to Perplexity's defense that time because this wasn't a clear-cut case of web crawler bad behavior. Cloudflare documented how the AI was accessing a specific public website when its user asked about that specific website. Perplexity fans argued that this is exactly what every human-operated web browser does.

"On the other hand, Perplexity was using some questionable methods to do that accessing when a website opted out of bots, like hiding its identity. As TechCrunch reported at the time, the Cloudflare incident foreshadowed the challenges to come if the agentic world materializes as Silicon Valley predicts it will. If consumers and companies outsource their shopping, travel bookings, and restaurant reservations to bots, will it be in the best interest of websites to block bots entirely? How will they allow and work with them? Perplexity may be right in that Amazon is setting a precedent. As the 800-pound gorilla in ecommerce, Amazon is clearly saying that the way this should work is for an agent to identify itself and let the website decide."

So I think that what makes this such an interesting debate is that the issue is anything but black and white. What has evolved is being called "the attention economy." But the commandeering of our attention comes at a cost to us, a cost that we often have no control over and might prefer not to pay. So one reading of what is happening is that new AI agency tools are appearing which promise to return to us some of the control that's been deliberately taken away. When we visit a web page, we're its captive audience. We're subjected to whatever it wishes to do to us. It's true that we could leave. Nothing is forcing us to remain. But there might be something there we want. If it were possible to avoid the nonsense and get only the bits we want, that seems like a clearly pro-user thing. It's no wonder that the agent concept is appealing to people.

I believe that this is critically important because the way this shakes out will determine the shape of our future. My feeling is that user rights will ultimately prevail and that Amazon and others will be forced to grin and bear it, much as websites have had to tolerate the presence of ad blockers.

Leo: I mean, should a website be able to say you can't use this browser to visit me?

Steve: No.

Leo: No. I mean, technically they can. They could. But should they be, I mean, it seems unreasonable. And then the next step is should a website be able to say you can visit us, but not with an adblocker? Websites do that all the time.

Steve: Yeah.

Leo: You would think Amazon would want - if I go to Amazon using an agentic browser to buy something, you would think Amazon would want me as a customer. But apparently not.

Steve: And as you said, if they're actually generating more revenue from advertising than sales...

Leo: They're not quite yet, but I suspect that that's, I mean, they made - their ad sales went up 24% last quarter. I mean, they're making a lot of money on ad sales.

Steve: And it's product placement; right? It's like, I'm searching for this.

Leo: Exactly.

Steve: And there's four other things in front of the thing I want.

Leo: Yeah. It's the Amazon Picks. It's the Amazon Recommends.

Steve: And it's what Google used to do. Remember when Google's page came up, and it was a beautiful white page with 10 links that were actually all good, and that's all that was there? And now it's all sponsored crap.

Leo: Yeah. And so that's why people - and the other reason people use an agentic browser is I know what I want. Just go get it and look for the best price for me. It's just it automates something that they, you know, could do by themselves, but it's a lot easier.

Steve: And Amazon's also worried because when I wanted to get that inexpensive Samsung phone, I ended up buying it from Best Buy, where I never go. But if I told an agent that I'm looking for the Samsung whatever it is, get me the best price because that's all I care about, my default is Amazon, and it would have broken that default.

Leo: Yeah. Yeah, isn't that interesting.

Steve: And suddenly created competition where there wasn't any for Amazon.

Leo: Right. It's a fascinating story. I'm glad you brought it up. And I am still kind of - it's such a different world that we're living in, and our rules, our value systems don't really extend to this kind of new world we're living in. And we're not sure...

Steve: Talking about, you know, automating much of what the user does, there was a beautiful article in Vox this morning. Oh. I don't have it on the tip of my tongue. But it was well written and fun, about the probable form of the coming AI Apocalypse. And basically, you know, we're going to have our experience with computers automated for us.

Leo: Yes.

Steve: And I'm sorry, Amazon, but you're a target of this.

Leo: That's the future, yeah.

Steve: You have been living off of human eyeballs, and humans are deciding they want to sub that out.

Leo: Yeah. And you kind of made it that way by making it so unpleasant.

Steve: Yes, exactly, exactly. Yeah, we were a captive audience.

Leo: Right.

Steve: And now we've found a way to get...

Leo: You got greedy, yeah.

Steve: And you've become dependent upon our captivity.


Copyright (c) 2014 by Steve Gibson and Leo Laporte. SOME RIGHTS RESERVED

This work is licensed for the good of the Internet Community under the
Creative Commons License v2.5. See the following Web page for details:
http://creativecommons.org/licenses/by-nc-sa/2.5/


