Transcript of Episode #625

Security Politics

Description: This week we discuss the continuing Marcus Hutchins drama, the disclosure of a potentially important Apple secret, a super cool website and browser extension our listeners are going to appreciate, trouble with extension developers being targeted, a problem with the communication bus standard in every car, an important correction from ElcomSoft, two zero-days in Foxit's PDF products, lava lamps for entropy, the forthcoming iOS 11 Touch ID kill switch, very welcome Libsodium audit results, a mistake in AWS permissions, a refreshingly forthright security statement, a bit of errata, miscellany, and a few closing-the-loop bits from our terrific listeners.

High quality (64 kbps) mp3 audio file URL: http://media.GRC.com/sn/SN-625.mp3

Quarter size (16 kbps) mp3 audio file URL: http://media.GRC.com/sn/sn-625-lq.mp3

SHOW TEASE: It's time for Security Now!. Steve Gibson is here and, as always, whew, just in the nick of time. He's going to explain what that Apple Secure Enclave Processor exploit means and whether it's time to worry. He's also got more thoughts on Marcus's arrest and maybe why the feds are after him. There's a whole lot more. It's a jam-packed edition of Security Now!, coming up next.

Leo Laporte: This is Security Now! with Steve Gibson, Episode 625, recorded Tuesday, August 22nd, 2017: Security Politics.

It's time for Security Now!, the show where we cover your privacy and security online with this cat right here, the man in charge of the operation, Mr. Steve Gibson of GRC.com. We were talking reverse Polish notation before the show today.

Steve Gibson: Ah, yes, our favorite calculator modes.

Leo: Well, did you see that calculator? It was Andy's pick for MacBreak Weekly. It was a perfect reproduction on an iPad of the HP-65. It was amazing.

Steve: Nice. I did not see it.

Leo: You didn't see it? Oh, you would love it. I bet you even have an HP around, probably four in the freezer; right?

Steve: Yeah, I actually just, you know, here's two within reach at the moment. One is the 15C Scientific, and the other is the 16C. So they're never far away. And in fact just last week I changed one out because the "8" key was beginning to get a little funky, and I was noticing I was having some entry errors. So I thought, okay, I've got 12 more of these.

Leo: Wait a minute. You use it?

Steve: Oh, daily, yes. Remember, I'm an engineer, too, Leo.

Leo: But we have computers for that now; don't we?

Steve: No, nothing is [crosstalk].

Leo: So look at this. You will love this. This is a perfect reproduction of an HP-65. And what's really cool, you know all the cards that they came with.

Steve: Yeah, yeah, yeah, the little magnetic...

Leo: They're all available for free.

Steve: Oh, that's nice.

Leo: So the next time you have to do your capacitance of parallel plates calculations, you just load that card into the HP-65. Watch how they put the card in.

Steve: Oh, my lord. Nice.

Leo: And it reads it, and there's a little description of it. You've got your paper tape here. You can advance it. Even the - what's really nice, Andy was pointing this out, even the segments on the display, they're not just using some font. They're actually really drawing it.

Steve: Nice.

Leo: Pretty sweet, huh?

Steve: It's very cool.

Leo: I think it's one guy in Switzerland who makes this software. It's the HP-65 or rather RPN-65 Pro is what he calls it. [Crosstalk] HP.

Steve: Well, and we've talked - I have on every one of my iDevices a copy of PCalc. And I think it's available on Android.

Leo: Oh, I love PCalc, yeah.

Steve: I mean, it's - when I'm out and about, I don't carry a calculator with me. I'm beyond that stage. But that's actually absolutely my go-to calculator. But when I'm here, and I can physically hold this thing in my hand, the HP is just - it's just perfect.

Leo: Yeah, that makes sense, yeah.

Steve: We started the podcast recording, didn't we, a while ago.

Leo: Oh, yeah. I'm recording. Yeah, why?

Steve: So this is No. 625 for August 22nd. And I was really struggling for a name for this because nothing really stood out. But I want to talk about a number of things swirling around Marcus Hutchins, who as we've discussed for the last couple weeks was picked up by U.S. law enforcement at the Las Vegas airport while trying to leave and return to his home country, Britain. And there's an interesting technical side to this which caused me to call this podcast "Security Politics" because, as often happens, when we don't have enough facts, guesses fill the void. And there is some interesting stuff that I wanted to address about what is going on in this realm, so we're going to talk about that.

We've got the disclosure of a potentially important Apple secret that I wasn't able to watch you guys discuss on MacBreak because I was busy pulling all this together. So we'll have my take on that. A super cool website that our listeners are going to love that includes a browser extension. Trouble with Chrome extension developers being targeted by attackers. A problem with the communication bus standard in all of our cars, which cannot be fixed. An important correction from ElcomSoft regarding that benchmark we shared last week, which was wrong in a very important and critical way. Two zero-days in Foxit's PDF products. Lava lamps actually being used by a company we know well for entropy.

Leo: Ah.

Steve: The forthcoming iOS 11 Touch ID kill switch. A very welcome Libsodium audit result from our friend Matthew Green. A mistake in AWS permissions and what that means. A refreshingly forthright security statement from a company, I don't even know what they're making, but I just loved what they said in their "What could possibly go wrong?" It's so forthright and upfront. A bit of errata, some miscellany, and then - time permitting, and I didn't expect we'd have much, so I only pulled two - interesting closing-the-loop bits from our terrific listeners.

Leo: Nice.

Steve: So I think a great podcast. So, wait. They have their own top-level domain?

Leo: Yeah, AWS. You didn't notice that. All of the ads about AWS...

Steve: Yow. Well, there's...

Leo: Hey, it's only a couple hundred grand; right?

Steve: There's an expression of clout.

Leo: Yeah.

Steve: Holy...

Leo: No, you can buy it. No, you can buy it. When ICANN announced this - WordPress bought .blog. And it was like, I don't remember the exact price, but it was like $150,000.

Steve: Oh, wait a minute, where's my wallet? I need that.

Leo: You need a GRC, steve.grc.

Steve: Finally, a use for those bitcoins that I've been holding onto.

Leo: How many do you have? You said you got 50 in one blow.

Steve: Yeah. I woke up the next morning after the podcast, and there was 50.

Leo: Now, you couldn't do that now. In fact, all the big...

Steve: No.

Leo: Does Mark still do his bitcoin mining thing in Arizona?

Steve: Yes.

Leo: He does. Because the key to it now, and this is - by the way, Satoshi Nakamoto, very smart, the whole thing was planned almost as if he knew exactly what he was doing, to cost more over time in terms of resources, particularly power.

Steve: We discussed it in our bitcoin episode [SN-287] years ago: there was a deliberate, pre-planned exponential difficulty curve that the coin would follow. And it automatically scaled independent of the rate at which hardware became faster, because it used time, so even a sudden breakthrough in the ability to solve these problems wouldn't throw it off. And the problems are coming up with an input to a hash whose output has some number of zero bits. And the more zeroes required in the output, the more difficult it is to come up with such an input - you don't care about the non-zero bits, you just have to come up with something that generates some number of zeroes at the right-hand side of the hash.

And so, for example, if you just needed one zero, well, half of the things you put in would end up hashing to a zero in the least significant bit. If you required two zeroes in the two least significant bits, one quarter of random guesses of what to put into the hash function would give you two zeroes in the least significant places, and so on. So by slowly increasing the number of zeroes that are required, this scales the difficulty of coming up with a value which hashes to that many zeroes in that number of least significant bits. And so the algorithm is independent of the rate of change of hashing ability. I mean, it was immaculately conceived.
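
In code, the scheme being described looks something like this - a minimal sketch in C, assuming OpenSSL's SHA256() is available and using the trailing-zero-bits convention from Steve's example rather than Bitcoin's actual target encoding:

    /* Minimal proof-of-work sketch: try nonces until the hash of
     * (data + nonce) ends in the required number of zero bits.
     * Assumes OpenSSL for SHA256(); the difficulty value and the
     * trailing-zero convention follow the discussion above, not
     * Bitcoin itself. */
    #include <stdio.h>
    #include <stdint.h>
    #include <openssl/sha.h>

    /* Count trailing zero bits of a 256-bit digest. */
    static int trailing_zero_bits(const unsigned char d[SHA256_DIGEST_LENGTH])
    {
        int bits = 0;
        for (int i = SHA256_DIGEST_LENGTH - 1; i >= 0; i--) {
            if (d[i] == 0) { bits += 8; continue; }
            for (unsigned char b = d[i]; (b & 1) == 0; b >>= 1) bits++;
            break;
        }
        return bits;
    }

    int main(void)
    {
        const char *data = "block contents";   /* hypothetical payload */
        int difficulty = 16;                   /* require 16 zero bits */
        unsigned char digest[SHA256_DIGEST_LENGTH];
        char buf[256];

        for (uint64_t nonce = 0; ; nonce++) {
            int len = snprintf(buf, sizeof buf, "%s:%llu", data,
                               (unsigned long long)nonce);
            SHA256((const unsigned char *)buf, (size_t)len, digest);
            if (trailing_zero_bits(digest) >= difficulty) {
                printf("solved with nonce %llu\n",
                       (unsigned long long)nonce);
                return 0;
            }
        }
    }

Each additional required zero bit halves the odds that any given guess succeeds, which is exactly the scaling of difficulty Steve describes.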

Leo: It's brilliant, yeah.

Steve: And we covered all this back in our bitcoin podcast [SN-287]. So if anyone's interested, it's all laid out in a podcast years ago.

Leo: But the upshot of it is that it's not merely better hardware, and there's custom ASICs and stuff now. Really it's the cost of power.

Steve: Yes.

Leo: So the reason Mark can do it, and it makes economic sense, I guess, is his power must be really cheap. But most...

Steve: Correct. In California it is no longer economic...

Leo: You can't do it.

Steve: No, you cannot mint bitcoin.

Leo: No, power's too expensive.

Steve: Apparently there's a massive facility under Niagara Falls.

Leo: Yeah, because they have hydro, yeah.

Steve: Exactly.

Leo: But most miners are now in China, and I suspect subsidized by the Chinese government. And they're mostly near hydroelectric plants. And I think it only makes sense because they're paying virtually nothing for their power. Anyway, you have what, 50?

Steve: Yes. Back in the day my one i7 machine, left running overnight - and back then that's how many - so that's how many bitcoin you got when you solved one problem.

Leo: Wow.

Steve: And so I woke up, and it said, oh, 50 bitcoin. It's like, oh. And, you know, that wasn't a big deal back then. Now we're at $4,000 for a bitcoin.

Leo: And I read in Forbes earlier today that one guy thinks it's going to go to something like $600,000 per coin. So you, sir, have a retirement plan.

Steve: Yeah, I've got to find those. I have them somewhere.

Leo: Do not cash in those bitcoins. In fact, it's worth enough to buy .steve, if you wanted it.

Steve: Oh, don't tempt me.

Leo: You could have 50 bitcoin or .steve.

Steve: That's the ultimate vanity domain.

Leo: OMG.

Steve: *.steve.

Leo: Oh, man. I'd like .leo. I really would like .leo, man.

Steve: Oh. Well, you know, maybe if you just brush your tongue some more, Leo...

Leo: There's probably some other hoops to jump through.

Steve: ...you can convince your wife to give you permission to...

Leo: Yeah. "Honey, can I buy a TLD?" "I don't know what that is, but if you need it...." "Yeah, I do."

Steve: Ooh, well...

Leo: Apparently from 2012 the price of an application for your own TLD is $185,000.

Steve: But it is not that much.

Leo: It's not going up at the rate Bitcoin's going up, that's for sure.

Steve: That's certainly low, Leo. That's, oh, boy. Okay. So the caption on this photo reads - our Photo of the Week. Quote: "Every employee has been implanted with biometric access RFID tags enabling secure access control to our extremely security-sensitive facility. It is utterly state of the art and cannot be penetrated."

Leo: And there's the picture.

Steve: The lesson, of course, being, ah, yes, the human factor can defeat any form of security that has been designed.

Leo: So somebody's put a big rock keeping the door open.

Steve: Yes. Meanwhile they're all limping around with, like, bandages on their thumbs because they've been implanted, and they're waiting for this thing to heal. And someone walks over and says, oh, screw this, puts a rock out to hold the door open. It's like, okay, yeah.

Leo: Well, now, doesn't Level 3 do like one of those airlock things? You've got double doors. You can't have somebody butt-riding you.

Steve: Yeah. I was told it's all computer controlled, camera monitored. And you have automated access. But the guy told me something I will never forget. He says: "When you leave, make absolutely sure this door is closed because our system monitors it, and all hell will break loose if this door remains open for longer than is reasonable" for someone to walk through carrying, you know, maybe two people at each end of a big server kind of a thing. So it's not allowed to remain open for long. And there are people on-premise. There's a security guard who will come running with his hand on his hip in order to make sure that nothing nefarious is going on. So, yeah.

Speaking of nefarious or not, Marcus Hutchins. One distressing piece of information which came to light that we haven't covered here is that Marcus, who as we know pled not guilty, is facing six charges and up to 40 years in prison. So I'm sure all of us feel as I do. We need justice to be done, and I'm so glad that, as I have said each week, that one thing we can be assured of is that this has come to the attention of people who can provide him with a first-class defense so that there will be no miscarriage. And for that I'm glad.

We also learned that the U.K. knew what was going to happen. They were aware that Marcus - who, as we know, helped them. He stopped the WannaCry worm which was decimating the U.K.'s entire NHS, their National Health Service IT infrastructure. GCHQ knew that he was under investigation by the FBI before he traveled to America, and that he would be walking into a trap that had been set for him before he was then arrested by U.S. authorities for these alleged cyber offenses. Multiple reports indicate that the British government allowed him to wander into this trap because it saved them the headache of what would otherwise have been a highly charged extradition proceeding with the U.S., who is of course an ally. So they just said, oh, have a nice trip.

Okay. So what's developing in this case? Any time something important is happening, and this is why I titled this "Security Politics," people will form opinions. And then those opinions evolve into positions. Egos become engaged, and those positions start being defended, often before or beyond the point where they're supported by fact. It's the politics of humanity. And this is what's starting to develop in this interesting case with Marcus because there's a lot of background, but it doesn't have the context that is required for us to know what it means.

So during the past week there have been a number of new reports, some suggesting that Marcus was more deeply involved in the development of the Kronos banking malware, others arguing that the code was lifted from him by the Kronos malware authors, and some even alleging that he was more directly involved with the creation of the WannaCry worm. So some of this confusion surrounds the origins and refinement of a commonly employed programming technique known as "function hooking" and the need for something called a "trampoline." These are well-established, well-understood terms of art for antiviral software, also employed by malware. Now, to create somewhat of a historical background, 2.5 years ago, on January 8th, 2015, Marcus at his MalwareTech.com blog posted a two-part tutorial, an explainer about these technologies, function hooking and trampolines.

So to give everyone a sort of a sense for who he is in this context, he wrote, 2.5 years ago, 2015: "A lot of my articles have been aimed at giving a high-level insight into malware for beginners, or those unfamiliar with specific concepts. Today I've decided to start a new series designed to familiarize people with malware internals on a programming level. This will not be a tutorial aimed towards people creating sophisticated malware, but security enthusiasts looking to better understand it."

So then he has a topic, "Inline Hooking." He says: "What is it?" Okay. "Inline hooking is a method of intercepting calls to target functions, which is mainly used by antiviruses, sandboxes, and malware. The general idea is to redirect a function" - that is, a function call, a call to a function - "to our own" - he meant code - "so that we can perform processing before and/or after the function does its work. This could include checking parameters, shimming the function, logging calls to it, spoofing its returned data, and filtering calls. Rootkits tend to use hooks to modify data returned from system calls in order to hide their presence" - and we discussed rootkits years ago using exactly this technology.

And again, we discussed rootkits and this technology years before Marcus posted this. So this wasn't his invention, and he's not claiming that it was. He says: "...while security software uses them" - that is, hooks - "to prevent/monitor potentially malicious operations. The hooks are placed by directly modifying code within the target function, called 'inline modification,' usually by overwriting the first few bytes with a jump instruction. This allows execution to be redirected before the function does any processing." And he concludes, or I'm concluding my quote of the top of his post: "Most hooking engines use a 32-bit relative jump which is opcode hex E9, which takes up five bytes of space." And then he goes into detail in his second part, the assembly language code, mixed C and assembly, of the actual implementation of this technology.
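
For concreteness, here is roughly what planting that five-byte E9 patch looks like - a minimal C sketch for 32-bit Windows, with the caveat that a real hooking engine must also relocate the instructions it overwrites, which is what the trampoline discussion below covers:

    /* Overwrite a function's first five bytes with JMP rel32 (0xE9).
     * The 32-bit displacement is measured from the byte after the
     * five-byte jump instruction. Windows-specific; sketch only. */
    #include <windows.h>
    #include <stdint.h>
    #include <string.h>

    static int write_jump(void *from, void *to)
    {
        DWORD old;
        uint8_t patch[5];
        int32_t rel = (int32_t)((uintptr_t)to - ((uintptr_t)from + 5));

        patch[0] = 0xE9;                    /* JMP rel32 opcode */
        memcpy(&patch[1], &rel, sizeof rel);

        if (!VirtualProtect(from, sizeof patch,
                            PAGE_EXECUTE_READWRITE, &old))
            return 0;
        memcpy(from, patch, sizeof patch);
        VirtualProtect(from, sizeof patch, old, &old);
        FlushInstructionCache(GetCurrentProcess(), from, sizeof patch);
        return 1;
    }

The displacement arithmetic is the whole trick: E9 jumps are relative, so the patch has to account for its own five-byte length.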

I looked at it this morning. I've never seen that code before. Yet I read it for the first time, and I was completely familiar with every aspect of it. So I don't know, from what Marcus subsequently tweeted that has gotten him in some trouble, I don't know - he may have felt that he invented it. It may be a refinement of something, I mean, of what had to be common practice 2.5 years ago. I mean, rootkits are older than Marcus is. So I don't quite understand why he then later said what he did. But, for example, I have code from 1988 which does this because I had a...

Leo: You might not want to say that out loud.

Steve: Well, I had a developer working for me named Michael Toutonghi, who was brilliant. And we were developing a super high-performance disk cache back in the DOS days and the early Windows days that we called "Propel." And Mike needed to hook the operating system and the DOS compression engine and all this stuff. And this is the way you do that. So Mike went on to Microsoft, where he became one of very few Distinguished Engineers, which is a formal designation of the top people they've ever had. And Mike was the original architect of the entire .NET system. He did all that.

So my point is that, back in 1988, this is what we were doing. And this reference to the trampoline, if you have a function, and you want to hook it, you want to intercept some other call to it, well, you need to replace the beginning of it with a jump instruction to you so that, when something else tries to invoke that subroutine, instead of seeing the beginning, instead of encountering the beginning of the subroutine, it encounters the jump instruction you have stuck there.

So, but notice that, in putting a jump instruction there, you have had to overwrite the first few instructions - the first few bytes at least, five bytes, which is why he referenced the size of the 32-bit jump instruction - you've had to overwrite the original beginning of the function. So what is done is there's an instruction interpreter as part of the hooking system which reads the instructions at the beginning of the function you're going to hook, in order to find an exact instruction boundary, that is, the exact number of bytes and whole instructions that can then be relocated to your own code.

So you first look at what's there, figure out what the instructions are that you're going to smash by putting a jump there. You copy those instructions to your hook. Now the jump that you put there jumps to what's called the "trampoline," because you're going to jump off of it again. So you jump onto the trampoline, which executes the beginning instructions that you had to overwrite to put your jump instruction there and then jumps back and continues execution. So what you've done is you've hooked that function. Anyone who calls that runs through you first before going in.

So you have two things you can do. You can call, after your hook gets control, you can call the function yourself, that is, you call your own little trampoline, which then invokes the function. And when it's done it comes back to you. That allows you to filter the result. Or you can inspect the parameters that are being used first and, for example, abort the function, just return to the caller, so don't do what it said. For example, there's a function called "virtual protect" which turns on protection for virtual memory. Well, you could just short-circuit that so software thinks it's calling the virtual protect function, but it's neutered. It doesn't happen.
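
Putting those pieces together, a hook installation looks roughly like the following C sketch. It reuses the hypothetical write_jump() routine from the sketch above and assumes, purely for illustration, that the target function's first five bytes happen to be whole instructions; a real engine disassembles to find an exact boundary, as just described:

    /* Sketch of a trampoline hook. Assumptions: the hypothetical
     * write_jump() from the earlier sketch; a target whose first
     * five bytes are complete instructions; an illustrative
     * int(int) signature for the hooked function. */
    #include <windows.h>
    #include <stdint.h>
    #include <string.h>

    #define PATCH_LEN 5

    int write_jump(void *from, void *to);      /* from sketch above */

    typedef int (WINAPI *target_fn)(int);

    static uint8_t trampoline[PATCH_LEN + 5];  /* stolen bytes + jump */
    static target_fn call_original;            /* call-through pointer */

    /* Our hook runs first; calling through the trampoline runs the
     * displaced instructions, then jumps back into the original. */
    static int WINAPI my_hook(int arg)
    {
        /* Inspect or rewrite arg here, or return early to neuter
         * the call entirely, as with the VirtualProtect example. */
        return call_original(arg);
    }

    static void install_hook(void *target)
    {
        DWORD old;
        int32_t rel;

        /* 1. Save the bytes our jump is about to smash. */
        memcpy(trampoline, target, PATCH_LEN);

        /* 2. Append a jump from the trampoline back to target+5. */
        trampoline[PATCH_LEN] = 0xE9;
        rel = (int32_t)(((uintptr_t)target + PATCH_LEN) -
                        ((uintptr_t)&trampoline[PATCH_LEN] + 5));
        memcpy(&trampoline[PATCH_LEN + 1], &rel, sizeof rel);
        VirtualProtect(trampoline, sizeof trampoline,
                       PAGE_EXECUTE_READWRITE, &old);
        call_original = (target_fn)(void *)trampoline;

        /* 3. Redirect the function's entry point to our hook. */
        write_jump(target, (void *)my_hook);
    }

The trampoline is just those displaced bytes plus a jump back, which is how it earned the name: control bounces onto it and immediately off again.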

So the point is this is all super well understood. Everybody, you know, for decades has been doing this. But then, oddly, a month later, Marcus tweeted something. And his tweet contains the "F" bomb, and you know me, I don't use it casually, and I avoid it on this podcast. But I decided, and for a while I had concatenated it or hyphenated it, and I thought, it just doesn't convey his sense. And it's important for us to understand who he is.

So I'm going to read his tweet. I have a link to it in the show notes. On February 7th - so remember, his posting was January 8th, 2015. A month later, on February 7th, he posted, and it's still there on Twitter: "Just found the hooking engine I made for my blog in a malware sample. This is why we can't have nice things [bleep]."

So he's not happy. And it's clear he's not happy. Maybe it was an exact copy of what he did. But, I mean, it's not like they couldn't have gotten it elsewhere. It's not like this wasn't like a well-known, well-established technique. It was. And one of the things that's confused people is his use of a particular instruction.

Dan Goodin, writing for Ars Technica, said: "Shortly after his arrest in Las Vegas two weeks ago, the tweet resurfaced" - that is, this tweet from 2.5 years ago - "and almost immediately it generated speculation that the malware Hutchins was referring to was Kronos. An analysis of Kronos" - and, by the way, that's the well-known banking malware which is a known source of trouble. "An analysis of Kronos soon showed that one portion used an instruction that was identical to the one included in the code Hutchins published in January 2015."

Okay. Now, the instruction in question is what's called an "atomic" operation. It exists in the Intel instruction set. It's a compare-and-exchange instruction. There are separate compare instructions, and there are exchange instructions where you swap two values. But performing those two functions separately, that is, doing a compare and then a conditional exchange, allows the possibility that a thread context switch could occur, that is, an interrupt could occur.

Back in the old days you had a single processing core, but a multiuser, multitasking, or more properly a multithreading, environment. What that means is that many things are going on at once, but you only have one actual CPU. So typically there's a hardware timer that is ticking in the background. And when it ticks, it yanks control away from wherever the processor was back to the OS, which then gives another thread some time to run. So this gives the illusion that all the threads are running at once, where in fact we're just switching among them very quickly.

Well, there are so-called "race conditions" that you can get into where, for example, you have something that needs to remain coherent in a multithreaded environment. Say, for example, you want to increment a 64-bit value, but you've only got a 32-bit chip. Well, the math is not hard. We know that you increment the low 32 bits. And if the carry overflows, that is, if it wraps around to zero, then that means you need to increment the high 32 bits to create a 64-bit counter.

But you cannot do that. It's hard for people to grasp. But you cannot do that safely in a multithreaded environment because there is a chance that you could increment the low side of that counter. And before your code has had the chance to immediately execute the conditional increment of the high 32 bits, it's interrupted, and another thread comes along and increments the low side. Well, now, if the first thread caused a wrap, then it didn't have a chance to finish yet. So now this other thread increments the low side, but it won't wrap.

Anyway, you can see what happens: this breaks a counter that was intended to reliably count the number of times something happened. But because the increment isn't atomic, because there's no way to do a 64-bit increment at once, we run into a problem. So in the old days, we would turn interrupts off: before doing the increment, you would disable the hardware interrupts so nothing could interrupt your code in just that tiny interval of sensitivity where you must not be interrupted, to sort of fake an atomic operation. Then you'd immediately reenable hardware interrupts.

But now we've got multiple cores. And so we don't have virtual multithreading, where we're switching around between threads. We have hardware multithreading, where actual physical cores are all working at once. So what had to be designed was instructions that could solve this problem. Thus the compare-and-exchange is one of the - there are several of these Intel instructions that are explicitly and deliberately thread safe, meaning that it does everything it needs to do at once in a single instruction, and you don't have to worry about it being, like, having to do a compare-and-test and a conditional exchange, which might get fouled up if thread-changing occurred. My point of all this is that this is the way you solve that problem. So Marcus's use of this instruction is like saying, gee, what instruction would I use if I had to add two numbers? Uh, how about the add instruction?
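
In modern C, that whole dance is wrapped up in the standard atomics, which compile down to exactly this kind of Intel compare-and-exchange instruction (the CMPXCHG family, with a LOCK prefix for multicore safety). Here is a minimal sketch of both the broken pattern and the fix:

    /* The racy 64-bit increment versus the compare-and-exchange fix,
     * using C11 atomics. atomic_compare_exchange_weak() retries until
     * the update lands without another thread racing in between. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* BROKEN with threads: load, add, and store can interleave. */
    static uint64_t racy_increment(volatile uint64_t *c)
    {
        uint64_t v = *c;   /* another thread can run right here...   */
        *c = v + 1;        /* ...and this store then loses an update */
        return v + 1;
    }

    /* Safe: retry the compare-and-exchange until nobody raced us. */
    static uint64_t atomic_increment(_Atomic uint64_t *c)
    {
        uint64_t expected = atomic_load(c);
        while (!atomic_compare_exchange_weak(c, &expected, expected + 1)) {
            /* on failure, expected is reloaded; just try again */
        }
        return expected + 1;
    }

    int main(void)
    {
        uint64_t plain = 0;
        racy_increment(&plain);  /* fine alone, broken with threads */

        _Atomic uint64_t counter = 0;
        atomic_increment(&counter);
        atomic_increment(&counter);
        printf("%llu\n", (unsigned long long)atomic_load(&counter));
        return 0;  /* prints 2 */
    }

A simple counter could just use atomic_fetch_add(), but the compare-and-exchange loop shows the general pattern this instruction makes possible, which is the point Steve is making about its use in both Marcus's code and Kronos.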

Leo: I'm sure a jury will understand that. That's just...

Steve: Yeah, I mean, and this is the problem. This is why I stopped being an expert witness: these technical things are so clear to everyone who understands them. But I told you the story, Leo, about the judge who was on oxygen. I mean, I'm not kidding, he had a green oxygen tank next to him. I was an expert witness for NEC because Princeton Graphic had sued them over an ad about the MultiSync monitor being able to have a long life because it would adapt to whatever you plugged it into, which was true. But Princeton Graphic didn't have that technology, and they were annoyed. And so I was trying to explain to this judge, who literally was on oxygen, about, okay...

Leo: At least he was awake.

Steve: No, actually, he was nodding off, too. So maybe he needed to turn it up a little higher. But this is the problem where, I mean, our listeners, you and I, we get this. But again, this is why we need a technically competent defense team, with lots of charts and graphs, that can explain that there's nothing magic about this instruction. It has a purpose for which Marcus used it. And arguably there is no other way to solve that particular problem than to use it. And so the fact that some other code also used it absolutely doesn't mean that they got the idea from him or that he gave it to them.

I mean, first of all, remember, this was a public posting, a blog posting. So a month later he tweeted he was annoyed that apparently - and my sense is maybe he was claiming a little more ownership of this than he should. As I said, this is the way I would have solved the problem if someone said, okay, Steve, write some code, Windows code that does this. I'd just sit down, and I would write what Marcus wrote. That's what an engineer who understood the problem and what tools were available would use.

So anyway, again, exactly as you put your finger on it, Leo, the problem is the truth of this rests in some details which I'm just hoping a very good defense will be able to bring to light because that would be important.

Leo: Well, is that specifically what he's being accused of is that?

Steve: No.

Leo: We don't know.

Steve: Yeah. We don't yet know what the allegations are against him. Well, six counts of something with apparently up to 40 years of prison time. So that just can't be allowed. And, you know, as I was reading what he wrote, I'm thinking, no wonder he's popular. He's so literate, too. I mean, that was beautifully written, his description of the way the hooking and trampoline works.

Okay. So Part 2 is - that's Kronos. So what of WannaCry? Dave Aitel, the head of the security firm Immunity, had a different immediate response to the reports of Marcus being, quote, and I'm just paraphrasing, "the hero of WannaCry," which of course we all understand because he created that domain - he registered that domain and shut down the propagation.

So in a posting, in Dave Aitel's posting at the Immunity blog earlier this month, he wrote: "But let me float my and others' initial feeling when MalwareTech got arrested: The kill switch story" - and here I will use an abbreviation because it's not important - "was clearly BS. What I think happened is that MalwareTech had something to do with WannaCry, and he knew about the kill switch. And when WannaCry started getting huge and causing massive amounts of damage, say to the NHS of his own country, he freaked out and, quote, 'found the kill switch,' unquote."

Leo: Ah. That's interesting.

Steve: Yeah, I know. "This is why he was so upset to be outed by the media." And then Dave says: "Being afraid to take the limelight is not a typical white hat behavior, to say the least." And then Dave continues, and backs off a little bit or, like, adds a little more depth. He says: "That said, we need to acknowledge the strategic impact law enforcement operations as a whole have on national security cyber capabilities, and how the lighter and friendlier approach of many European nations avoids the issues we have here in the states."

He writes: "Pretty much every infosec professional knows people who have been indicted for computer crimes by now. And in most cases the prosecution has operated in what is essentially an unfair, merciless way, even for very minor crimes. This has massive strategic implications when you consider that the U.S. Secret Service and FBI often compete with Mandiant for the handling of computer intrusions, and the people making the decisions about which information to share with law enforcement have an extremely negative opinion of it. In other words," he writes, "law enforcement needs to treat hacker cases as if" - and I like this - "they are the LAPD prosecuting a famous actor in Hollywood. Or at least that's the smartest thing to do strategically, and something the U.S. does a lot worse than many of our allies."

So we're all aware, I mean, addressing David's point, we're all aware of the very real concern, to draw an analogy, over a hopefully fictional, highly virulent super virus being developed purely for study in a lab, somehow later escaping from the lab into the wild and wreaking havoc upon humanity. Now, imagine a malware author who recently learned of the disclosed EternalBlue technology, which we believe was developed and weaponized by the NSA, and who has a deep technical background in malware operation, being unable to resist the temptation of experimenting with that technology, and who creates an instance of a highly virulent worm which carries cryptomalware - why not - as its payload. It makes it a little more exciting and a little more real.

And because this researcher is not insane, he or she builds in a kill switch, just in case. Then somehow, some way, it finds its way out, like it's scanning. And within the researcher's network there was an unknown or unappreciated vulnerability. But, like, it gets loose, as viruses and worms will, and escapes its containment, exploding onto the public Internet. In this scenario, this wasn't what the researcher ever intended, but it happened nevertheless.

So now what does he do? He discovers his own responsibly installed kill switch and immediately registers the domain to shut down this creation of his which escaped his control. I certainly hope that's not what happened because then we're faced with a moral dilemma while, as Dave noted above, U.S. and global law enforcement won't have any dilemma. They will throw the book at Marcus.

So anyway, this is what's going on on the Internet in the technical forums, people combing through the evidence, looking at the code, and also just using, as Dave did, sort of a gut feel for how reasonable the story is that, wow, wasn't that great, you know, Marcus happened to find this wacky domain and thought, oh, I wonder what that does, and it shut down. I mean, remember how skeptical we were about why any actual malware author who wanted to be wreaking havoc with cryptomalware would put a kill switch in. It's like, why? It seems antithetical to the intent of that worm, except if it was an experiment. I mean, if someone responsible, fundamentally responsible, did this in the lab and knew enough to give it a kill switch, like just in case, and the worst happened. I mean, again, I think eventually we'll have answers to some of the questions in this case, which I think is fascinating.

Leo: Now, if he wrote it, whatever his intentions are, I think he's in trouble.

Steve: Precisely my point. That's right. U.S. law enforcement will have no moral dilemma.

Leo: Yeah. Well, it reminds me of the Morris worm; remember? Robert Tappan Morris - I think that was his name - when he wrote that worm, didn't write it to be destructive - it was the very first computer worm - and was a little horrified that it was so effective. Didn't stop anybody from prosecuting him.

Steve: Okay. So I did not have a chance to hear you guys talk about the loss of or the escape of the Secure Enclave key. So I'll explain my take on it, and then I'd like you to add what you guys discussed on MacBreak.

Leo: I'll echo Rene's thoughts on it, yeah.

Steve: So, okay. Apple had been keeping a secret. It's unclear so far how important keeping that secret had been, but it is secret no more. We do know that cryptographic security does require some secret keeping. And as our longtime listeners will know, because we've discussed this in the past, the breakthrough in cryptographic maturity, which was made quite some time ago, occurred when we switched from keeping algorithms secret to developing keyed algorithms, where the algorithms themselves could be made public, and the keys for specific instances of their use were what was kept secret.

And when you think about it, just that, switching from secret algorithms to public algorithms