Transcript of Episode #505

Listener Feedback #211

Description: Leo and I discuss the week's major security events and answer questions and comments from listeners of previous episodes. We tie up loose ends, explore a wide range of topics that are too small to fill their own episode, clarify any confusion from previous installments, and present real-world application notes for any of the security technologies and issues we have previously discussed.

High quality  (64 kbps) mp3 audio file URL:

Quarter size (16 kbps) mp3 audio file URL:

SHOW TEASE: It's time for Security Now!. Steve Gibson is here. We are going to talk. Lots of questions. Lots of answers. At the end, a little discussion of favorite programming languages. It's all coming up next on Security Now!.

Continuing with Part II:

Leo: Question 1, Bobby in Idaho. He appears to be a troublemaker. He wants more on, get this, brute-forcing encryption. Sure, Bobby. We'll tell you everything. Steve's glad to help out. In Episode 501 you explained how you know when you've successfully broken encryption by brute force: because the result is checked against the authentication. So if the encryption isn't authenticated, how would you brute force it? Hmm, hmm, hmm?

Steve: Okay. So I answered a listener's question about this a couple weeks ago, and I explained that proper encryption is always wrapped in a layer of authentication, because there are many ways to mess with encryption if it is not authenticated. That is, if the attacker is allowed to make a change, even to the encrypted data, there are well-known attacks that can cause trouble. So you authenticate what looks like noise - because it's encrypted, it'll be maximum entropy - and you authenticate it separately to prevent the recipient from ever considering it valid if it's been changed. Thus, when you're brute forcing, you know the moment authentication succeeds that you have found the right password.
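Steve's point - that the authentication tag is what tells a brute forcer it has succeeded - can be sketched with the Python standard library. This is a toy, not real crypto: the bulk cipher is a SHA-256 counter-mode keystream, the key derivation is a bare hash (a real design would use a slow, salted KDF), and the password list is invented for the demo.

```python
import hashlib
import hmac

def derive_key(password: str) -> bytes:
    # Toy KDF -- a real system would use scrypt or PBKDF2 with a salt.
    return hashlib.sha256(password.encode()).digest()

def keystream(key: bytes, n: int) -> bytes:
    # SHA-256 in counter mode as a stand-in stream cipher.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(password: str, plaintext: bytes) -> bytes:
    key = derive_key(password)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()  # authenticate the ciphertext
    return ct + tag

def try_open(password: str, blob: bytes):
    key = derive_key(password)
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        return None                                   # wrong password: tag fails
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

blob = seal("hunter2", b"wire the funds at 9am")
# Brute force: only the guess whose authentication tag verifies survives.
hits = [p for p in ("123456", "password", "hunter2", "letmein")
        if try_open(p, blob) is not None]
```

The brute forcer never has to understand the plaintext; a verifying tag is the success signal.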

So Bobby's saying, okay, if you don't have that outer authentication wrapper, then what? And the answer is heuristics. You literally look at the output from each test key and ask whether it could possibly be correct. It's a little bit like what Turing did with the Bombe against the Enigma machine: it was looking for possible solutions, so the machine would stop when the constraints they had imposed on it were met, and then they would look.

So, again, if you're decrypting something, you typically know what it is. For example, say it's email. Email in a language - and you know what the language is going to be - will have, when it's decrypted, some very definable characteristics. For example, ASCII is a seven-bit code in an eight-bit byte. So the high bits will all be off. Now, the chance of using the wrong key and misdecrypting something such that a chunk of it comes out wrong, but with all the high bits off, is vanishingly small.

So a perfect example: you simply decrypt the first block of the ciphertext using every possible key, checking - given that you know it's ASCII - whether all the high bits are off in your decrypted result. If not, you know it cannot be the right key. If they are off, then you can be a little more sure by checking the next block with that same key, making sure that those bits, too, are all off. At that point you pretty much know that you've got it nailed. Then, of course, you look at it and see if it makes sense.

There's some vanishingly small chance that you would get a wrong key that produces ASCII gibberish, but with all the high bits off. It probably won't happen. But if it does, well, that's like the Bombe stopping at a setting that could have been possible. They would then try those settings in an actual Enigma machine with a different message from the same day to see whether it decrypted properly. If it did, they had it solved. If not, they pressed "keep going" on the Bombe, and it picked up where it left off. So basically, brute forcing without authentication means you look at each output and apply some heuristic to ask, could this be possible? If it could be, the search stops, and then a human looks at it and goes, uh, no, keep going; or, yeah.
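The high-bit heuristic itself can be sketched with a toy cipher and a deliberately tiny 16-bit key space, so the search finishes in moments. Everything here is an invented stand-in (a real search faces a vastly larger key space, and the cipher is just a SHA-256 keystream); the point is the winnowing: reject any key whose output has a high bit set, confirm survivors on a second block, and hand the survivors to a human.

```python
import hashlib

def keystream(key: int, n: int) -> bytes:
    # Toy cipher: SHA-256 in counter mode, keyed by a small integer.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key.to_bytes(4, "big") + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(key: int, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def all_high_bits_clear(block: bytes) -> bool:
    # ASCII: a seven-bit code in an eight-bit byte, so every byte < 0x80.
    return all(b < 0x80 for b in block)

secret_key = 0xBEEF
ciphertext = xor(secret_key, b"Meet me at the usual place at midnight tonight.")

survivors = []
for key in range(2**16):
    block = xor(key, ciphertext[:16])           # decrypt just the first block
    if all_high_bits_clear(block):              # cheap first test
        both = xor(key, ciphertext[:32])        # confirm on the second block too
        if all_high_bits_clear(both[16:]):
            survivors.append(key)               # a human inspects the survivors
```

A wrong key passing both 16-byte checks by chance has roughly a 2^-32 probability, which is why two blocks are usually enough to know "you've got it nailed."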

Leo: You'll know.

Steve: Yeah, exactly. You'll know.

Leo: You'll know.

Steve: You'll know when you get it right.

Leo: I love that seven-bit in an eight-bit byte because, I mean, that's great. That could be done by a computer very quickly.

Steve: Yup.

Leo: And do we see any eight-bit set? No? Okay.

Steve: Got it.

Leo: Question 2, Brian Tillman in Wyoming, Michigan - by the way, that'd be a fun exercise: what would be a quick, simple, effective test against various kinds of data to know you've done the brute force correctly? Wouldn't that be fun?

Steve: Yes.

Leo: Kind of like a little puzzle? Brian Tillman in Wyoming, Michigan is haunted - haunted, I tell you - by the idea of malware in disk drives: Back in Episode 496 or '7, you related evidence that had been found of malware that deliberately alters disk drive firmware to malicious ends. Following up in the Q&A later, you elaborated on how it was not practically possible to know whether a drive's firmware had been infected.

But drives are usually made in countries whose governments are extremely interested in learning U.S. business secrets. I see these devices - hard drives, flash drives, memory cards - as a great way to implement industrial espionage. Who knows what might be embedded in these drives, and how would we even know? You said we wouldn't. No one reverse-engineers the majority of these devices. Their designs are considered closed and proprietary. They could be collecting data or doing anything. We wouldn't even know. I'm haunted, too, now. What is your answer, Steven?

Steve: We're in trouble. No, you know, through this podcast I have often commented how odd it is to me that, for example, the Chinese government and populace use Windows, which comes from a U.S. company. And we've talked about problems where routers were being infected in transit, either by unknown agents or, unfortunately, by our own NSA, in order to have them modified. I guess I'm sort of bemused.

You know, right now at this point in time we're all - the whole Trans-Pacific trade agreement is in the news, where the President is negotiating to try to figure out how we extend trade and increase opportunities. And some of the people that we're trading with, as Brian notes, are diplomatically, politically hostile. Yet one of the things we're doing is we're swapping hardware that we cannot see inside of, that we can't see into. There was an article I ran across about a hand scanner that was coming from China that had malware embedded in it such that it waited until it got plugged into a certain type of system, and then infiltrated the network. And, I mean, so this is not science fiction. This stuff is being found. So the problem with drives is that their firmware is not readable. So we don't know what's in there. And the drive manufacturers consider their firmware proprietary.

Now, maybe at some point in the future the technology will evolve to a point where we'll end up with something like open source drive firmware, the way we've sort of had open source everything else occurring over time. It used to be OSes were all closed source. Now very good ones are open source, and so forth. And that might happen if this becomes a big enough problem that people say, look, we're not going to buy a drive whose firmware we can't have vetted by a third party, because there is just too much opportunity for real mischief.

But the fact is we're in this phase now where the value to the attacker is rapidly increasing, and truly responsible management and oversight is hugely lacking. We're taking so much on faith that this stuff is what it says it is and isn't more, and we have no way of verifying it. Of course the famous expression is "trust, but verify." Well, all we can do is one of those two. And we really are just hoping for the best.

Leo: There's trust in everything. We've talked about this before. You can't drive down the street without trusting that the other guy's not going to cross the line and slam into you.

Steve: Yup. Yup.

Leo: The problem is we're trusting people we don't know and don't trust.

Steve: Yes. And at scale. That's the other thing, too. If someone is suicidal and decides to take themselves out with your car, well...

Leo: It's limited in its impact; right.

Steve: They've taken you out, too. But now they're done. Whereas, if a nation-state is able to infiltrate a vast number of networks, I mean, the problem is these things really scale, these sorts of attacks.

Leo: Michael S. McElrath in Flint, Texas - sounds like a flinty place - worries about SSD warranty returns: I have an issue most users may not consider when returning warrantied solid state drives: They are prone to just quitting without any notice. No further access is possible. Bam. Gone. My dead SSD has my personal/company tax filings, my personal files including bank and credit card information, and my customer files: facility layouts, production data. In other words, crucially privacy-sensitive content. Now it's dead. For the $200 cost of the drive, would you return it on the assurance that they will erase any information they find? TNO. SSDs must be encrypted at all times, especially during the warranty period. Thank you. Michael in Texas.

Steve: So it has been my experience, and I know it's been many people's experience, that SSDs, very much like hard drives, can fail in either of two ways. They can fail slowly, or they can just turn into a doorstop. I have seen SSDs that, just as Michael says, just bang, they're gone. It stops being an SSD. Hard drives, notoriously, have catastrophic failure modes of their own: a head crash, or the servoing mechanism dies, or the spindle motor just doesn't spin anymore, or the heads stick to the platter - what's called "stiction" - and refuse to allow the drive to spin up. Again, no more data is coming off of that drive unless you can break that contact between the heads and the surface.

So Michael's point, and it's a good one, is that if you intend to avail yourself of an SSD's warranty, there's all the more reason to apply external encryption. He didn't mention this, but drives today have the "security feature set," as it's called, which allows you - typically via the BIOS - to give the drive a password, which will then encrypt the contents of the drive. The problem is that that password can be removed at the factory. So it's a weak solution, and specifically one you can't use if you're worried about the factory, while warrantying the drive, poking around in it. You need TrueCrypt or a similar add-on whole-drive encryption solution.

And so it's just something to consider: if you want to take advantage of the warranty, and you're worried about a dead drive - which of course is not really dead; they can certainly look at the chips and get the data off, exactly as Michael suggests. It's just dead to the interface, for whatever reason, who knows why. You might consider that it's worth putting your own external whole-drive encryption on that drive, because then it's just a drive full of noise.

Leo: You know, people may say, well, I'm not going to worry about warranty returns on my SSD. But you know where this really hits is on your smartphone. Because how often, I mean, it's not unusual at all that your smartphone dies, and you bring it back to the store, and they say, "Oh, yeah, sorry, here's a new one." Well, all your stuff's on there.

Steve: Right.

Leo: So if you're using an iPhone, you're okay, right, because Apple encrypts.

Steve: Right, yup.

Leo: But on Android devices, not all of them encrypt by default. Might be good to turn that on. Nexus 6 does, but others may not.

Steve: Yup, that's exactly right.

Leo: And it's the same issue, exactly the same issue.

Steve: Yes, yes.

Leo: Maybe more so.

Steve: And unfortunately there are a lot of immature people who would get their jollies from poking around in someone else's business. I take great pride in the fact that, back in the early days of SpinRite, I would sometimes receive drives from customers whose problems were beyond what SpinRite could do. Their file system was messed up, and they absolutely had to have something. I never looked in. It just doesn't interest me, as a point of pride. But I know that there are people who have a different approach, unfortunately, and they just get a big kick out of that.

Leo: Question 4 comes from Andy Martin in Los Angeles. He wants a quick message encryption tutorial: Steve, I've been listening to Security Now! for two years. Oh, there's your problem, Andy. You need at least eight more years. You understand encryption, but I'm more interested in the programming behind it. Have you ever written a tutorial on how to implement secure messaging? It seems so easy to do, but it appears no one wants to take the time to do it right. Surely there is an open source library out there that makes this easy.

This is the flow I would want: Imagine that I manage the server, and I want no way to decrypt any messages. Every client generates their own private key and public key. They push the public key to the server. Now Client 1 wants to send a message to Client 2. The message has to be authenticated, then encrypted; right? How can the client encrypt it so that the other client can decrypt it? It seems that just knowing the public key is not enough because then anyone with the public key can decrypt it. Mmm, no. And frankly, I don't know what "encrypt with public key" even really means at the algorithmic level. I think you need a few more years, Andy.

Anyway, then the client sends the message to the server, which routes it to the other client, who could then decrypt and magically authenticate it. Help! If the client ever lost their private key, they would just need to re-make a public key/private key pair and push it to the server again, losing any messages that might come while they had not yet posted their new public key. I think he needs some help, Steven.

Steve: So, okay. We have discussed this at length, and I don't want to go over the whole thing again. I will just say that the server, as Andy notes, holds all the public keys, and so the server is what's called a "key server." If Client 1 wants to send something encrypted to Client 2, you first choose a random number. So you need a good source of randomness - "entropy," as we call it. And you use that as the symmetric key for encrypting the bulk of the message, because public key crypto is so slow that it's never practical to do bulk encryption with it. Instead, you use the public key encryption only to encrypt this random key that you generated out of thin air and used to encrypt the bulk.

So the idea is that you have, from the key server, you have the target's public key, which means that something you encrypt with their public key can only be decrypted with their private key. So you take their public key, which you receive from the key server, use it to encrypt the key, the symmetric key that encrypted the bulk of your message. And you could also, for example, use your private key to sign the result. Then you send it either to the server to relay or directly to Client 2, whichever makes more sense. Now Client 2 has it. They get your public key from the key server, which allows them to verify the signature on it, which could only have been made by somebody having Client 1's private key. And so they verify that. Then they've got...

Leo: So key understanding, fundamental concept, the public key, which can be distributed widely, freely, publicly, can be used to do two things: to encrypt and to verify. Right?

Steve: And to decrypt.

Leo: Public key?

Steve: Yes, yes. Because...

Leo: Don't you have to have...

Steve: Oh, no, I'm sorry, to...

Leo: Authenticate and encrypt.

Steve: ...encrypt and verify the signatures, right.

Leo: Yeah. That's important. The decryption requires the private, closely held, only I have it key.

Steve: Exactly.

Leo: You don't let that one out of your sight.

Steve: Exactly. And so that's what Client 2 does after verifying using Client 1's public key that it actually is coming from Client 1 and has not been modified because authentication will verify that. It'll both verify who signed it and that it has not been modified. And then Client 2 uses their private key in order to decrypt that symmetric key, which is that random number, that big random key that Client 1 generated. And then that they use to decrypt the message. So that's the whole flow.
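The whole flow just described - random symmetric key, bulk encryption, wrapping the key with the recipient's public key, signing with the sender's private key - can be sketched end to end. This is a toy: textbook RSA with tiny fixed primes applied byte by byte, and a hash-based keystream standing in for the bulk cipher, purely to make the key handling visible. Nothing here is secure at these sizes; a real implementation would use a vetted library.

```python
import hashlib
import secrets

# Textbook RSA with tiny, fixed primes -- utterly insecure, illustration only.
PUB1, PRIV1 = (3233, 17), (3233, 2753)   # Client 1 (sender):    p=61, q=53
PUB2, PRIV2 = (8633, 5), (8633, 845)     # Client 2 (recipient): p=89, q=97

def rsa_apply(key, values):
    # One modular exponentiation per byte; serves as encrypt/decrypt/sign/verify.
    n, exp = key
    return [pow(v, exp, n) for v in values]

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Toy bulk cipher: SHA-256 in counter mode as a keystream, XORed in.
    ks = b""
    ctr = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, ks))

# --- Client 1: random symmetric key, bulk-encrypt, wrap the key, sign ---
message = b"Meet at noon."
sym_key = secrets.token_bytes(16)             # random key out of thin air
ciphertext = stream_xor(sym_key, message)     # bulk encryption is symmetric
wrapped = rsa_apply(PUB2, sym_key)            # wrap with recipient's PUBLIC key
digest = hashlib.sha256(ciphertext).digest()
signature = rsa_apply(PRIV1, digest)          # sign with sender's PRIVATE key

# --- Client 2: verify the signature, unwrap the key, decrypt the bulk ---
assert rsa_apply(PUB1, signature) == list(digest)  # verify: sender's PUBLIC key
recovered = bytes(rsa_apply(PRIV2, wrapped))       # unwrap: own PRIVATE key
assert stream_xor(recovered, ciphertext) == message
```

Note how each key is used exactly as in the discussion: the recipient's public key can only wrap, the recipient's private key unwraps, and the sender's private key produces a signature anyone can check with the sender's public key.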

Now, because Andy's right that there's a need for a library, it has been created. Dan Bernstein, the world-famous cryptographer, has a library called NaCl. Unfortunately, it is not broadly cross-platform, but it has been taken, extended, and made absolutely cross-platform. That's called "libsodium." So GitHub has libsodium, L-I-B-S-O-D-I-U-M, which is basically an API-compatible, cross-platform recoding of Dan Bernstein's NaCl. It has a simple API that does everything anyone wants. It uses state-of-the-art, efficient elliptic curve crypto. And it's got functions like: encapsulate this message, where here's the public key of the recipient and the private key of the sender, and I want you to encrypt it and authenticate it and give me the result. It does all the work for you. It's been heavily scrutinized. I'm using several of the functions for SQRL, specifically the digital signature stuff, because that's what we need.

Anyway, that's what you want to use, libsodium from GitHub. And there's something also I just discovered called GitBook, which is for eBooks. And there is a beautiful libsodium book on how to use libsodium in the repository. So, and it's available in all kinds of eBook formats. So there's no excuse. Use that library, read that book, you'll have everything you need, Andy.

Leo: What about OTR, Off The Record? Because that's used often for messaging.

Steve: Right. So there the requirements are different because that's a real-time protocol.

Leo: Ah.

Steve: And you need to have a real-time interchange between the endpoints.

Leo: Right.

Steve: This system that we just laid out is completely static. I mean, it's very much like the original PGP model. This approach has been around now for years.

Leo: Right, right. And it solves all the criteria that Andy laid out for what he wants.

Steve: Yup.

Leo: And the thing is, if you share your public key, as I do, and I don't know, do you share your public key? You should.

Steve: Don't have one.

Leo: You should make one. I'm going to send you an invite to this place. It's JavaScript based, although if you prefer you could do it at the command line. That's what I do. You know what? It solves the problem of key signing. Because one of the problems with PGP is it's a self-generated key. So you want to create a web of trust of people who've said, yes, that's Leo's key, I know that. This kind of solves that problem. You don't have to go to a key-signing party. Instead you use your other accounts - your GitHub or your Twitter or your Reddit account or your website - to validate that, yes, that's me, THE Leo Laporte. And then on this site I've got my key. So from this you can actually get my public key right there and add it to your keychain. And that way you can do two things. Again, you can encrypt to me - not decrypt, encrypt to me - and authenticate messages from me and say, oh, yeah, Leo sent it. It's unchanged.

Steve: Yeah, I guess I just don't have the need. I mean, I've never...

Leo: Well, I'll tell you why you might. Well, we talked about miniLock. I guess you could use that.

Steve: Right.

Leo: The idea that maybe somebody would want to send you a private message. And since you use Twitter, a public messaging system, they could use encryption to do that over Twitter.

Steve: Yeah. But again, I mean, no one ever has.

Leo: So there you go. If you don't need it...

Steve: I just don't have the problem.

Leo: don't need it.

Steve: No.

Leo: That solves that. Thank you, Andy. Jim in Grand Rapids coming up with an update on PCIe SSDs and AHCI/NVMe. Whoa, well. It's about time. Actually, let's do it now. I can't stand the suspense.

Steve: It's a quickie.

Leo: It's a quickie. Two weeks ago you had a listener question about SpinRite and SSDs on the PCI Express bus. I know I'm the 8,000th person to point this out, but while most PCIe SSDs on the market now use the AHCI interface, the newest drives are using something called NVMe, the Non-Volatile Memory Express interface. Just a heads up.

Steve: So, yes. Apple, in fact, is using the new NVMe on this latest MacBook because it turns out that AHCI, powerful and fast as it is, is not as fast as you can make an SSD. That is, if you squeeze an SSD through a SATA interface, an S-A-T-A interface, it can go, what, 6Gbps? That's as fast as SATA 3 will go. But with an SSD, we're inherently coming from a spinning-media mindset. The SSD, solid-state disk - it's even got "disk" in its name - sort of says, okay, there are all these disks in the world with this SATA interface; I want to be plug-compatible with the SATA interface. So that's the way SSDs got into the market. And they were faster than spinning disks, but they still had this serial notion, that is, that the data would be transferred serially. But even at 6Gbps, it turns out there's no reason for it to be serial. Essentially, it's just RAM. You can have it all, right now.

So what the designers came up with - and this has been a few years in coming - is an Intel-backed spec, this NVMe interface. Essentially, whereas AHCI, the Advanced Host Controller Interface, gives you a single command queue, NVMe is parallel. You can have multiple queues. You can ask the NVMe storage device - I keep saying "SSD" - to just give you everything it has. And it's just, blam, there you go.

So people have asked, "What about SpinRite?" And my answer is that the way I have already designed the 6.1 that I've got running is that it is multi-interface. And it enumerates the PCI bus to determine what's there and then uses the driver it needs. I don't know exactly where in the cycle I'm going to do this. 6.1 adds many features. But for compatibility, AHCI, it doesn't use the - I'm trying to think what - the Mac. I've already got it running on the Mac keyboards, which was a problem for SpinRite. And it will understand the GPT partition table. I don't want to slow 6.1 down to add anything else.

So I'm committed to getting 6.1 out ASAP. But the whole point of the 6 series is to create a foundation for 7. So I also need to support USB natively, rather than through the BIOS. That has been slated for 6.2. So I don't know where NVMe will fit. Maybe it'll be 6.2 instead, or maybe it'll be 6.3. But I will not stop working on SpinRite, believe me, because my goal is to catch up and have it running at perfectly screaming speed on everything. So the 6 series, with releases following one after the other, will support NVMe. And it turns out I've already dug into it, and it's not difficult to do. So SpinRite'll have it.

Leo: M. Weber, who is on the move, wonders about the WiFi location confusion: My plane from Orange County - oh, I've had this happen - landed in Dallas. And while we were still taxiing to the gate, I pulled up a map on my Android phone to get directions from the rental car facility to my client's office. To my shock, to my horror, the driving directions told me it would take more than 20 hours to drive there. Once I got over my shock, I realized the map was showing my starting point in San Jose. Then I remembered your discussion from a few weeks ago of how the location service uses the available WiFi devices to establish coordinates. I turned off WiFi, ignored the location service's begging me to turn WiFi back on "Because it's so much more accurate with WiFi." Hah. And problem solved. I guess someone's portable WiFi hotspot had been pegged as stationary. Seems like a pretty big flaw in a system that we are more and more reliant on. Thanks, Steve, for a timely and practical show. I have an alternative explanation for that, by the way.

Steve: Which is that it knew where he was?

Leo: No, that he was on the WiFi of the plane. And often the case, especially with commercial WiFi providers, I've had this happen on cruise ships, it identifies its location as the home office.

Steve: Interesting.

Leo: So he was using Boingo on the plane, Boingo WiFi, which is in San Jose. And so it says that's where we are. On the cruise ship, I kept going back to Venice. I'd be in Croatia. It was amazing. In, like, 30 seconds I could go back and forth and back and forth. So that's kind of not really...

Steve: Well, and the point I wanted to make was that this is an imperfect system.

Leo: Yeah.

Steve: GPS is perfect, but we don't always have it. And cellular...

Leo: Yeah. You're in a plane, dude.

Steve: Right. And cellular is another means, a more reliable means, but it doesn't provide the granularity that we would like. So what's happened is, in classic "let's use a heuristic" fashion, we are doing something which could fail, much like the heuristic I suggested for brute-force cracking encryption. It could give you a false positive. It could be wrong. You could get the wrong key and still by some miracle have all the high bits off. Very unlikely, but possible. Similarly, hotspots do move around. So it's probably better than not using it. But you would hope, for example, that the software would look at all the hotspots in the area and see whether, first of all, they make sense - like, does it make sense that all of these are in the same relative proximity to each other - and then reject the one which is clearly some sort of a crazy false positive, and go from there. So maybe the software could have been smarter. But it's a classic heuristic.
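The sanity check Steve suggests - look at all the hotspot-derived position fixes, see whether they cluster, and reject the crazy outlier - might look something like this sketch. The coordinates (four fixes near Dallas plus one database entry stuck in San Jose) and the 50 km threshold are invented for the demo; a real location service would weigh signal strength, fix age, and more.

```python
import math
import statistics

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in kilometers.
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def filter_outliers(fixes, threshold_km=50):
    # Component-wise median is a robust center estimate: one wild point
    # barely moves it. Drop any fix implausibly far from that center.
    center = (statistics.median(lat for lat, _ in fixes),
              statistics.median(lon for _, lon in fixes))
    return [f for f in fixes if haversine_km(f, center) <= threshold_km]

fixes = [(32.90, -97.04), (32.91, -97.03), (32.89, -97.05),
         (32.90, -97.02), (37.33, -121.89)]   # last one: stale San Jose entry
plausible = filter_outliers(fixes)
```

With the outlier rejected, averaging the surviving fixes would put the phone in Dallas, not 20 driving hours away.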

Leo: Well, if you think about it, if you're in a plane, and you're enclosed in metal, you probably don't have access to other WiFi, just the plane's WiFi.

Steve: Right.

Leo: GPS may not be working because you don't have line of sight to the sky. So it's going to use what it's got, which is the location of the WiFi router, which is back in San Jose. It doesn't - the router in the plane doesn't identify where it is. It identifies where the Internet's coming from or whatever, where the company was. And Boingo, of course, is in San Jose.

Steve: Right.

Leo: So I think that that's probably what it was.

Steve: Yup, makes sense.

Leo: It's a great conundrum. And I bet you, as soon as it got a GPS signal, it would reject the spurious WiFi signal.

Steve: Yeah, yeah.

Leo: And certainly by turning off WiFi you did that. You said no, no, no, no. Look and see where you are.

Steve: Bad information coming in here.

Leo: Right, right. Let's take a break, come back with more. You have already agreed, by the way, I hope, to supply some cute little tips for The New Screen Savers?

Steve: Absolutely, I have.

Leo: We can record those after the show some week and just, you know, they can be little things like, you know, turn off your whatever, WiFi, or don't, I don't know, whatever it is that you think would be important.

Steve: Yeah. I'm chatting with...

Leo: Jerry or Karsten or Lisa or...

Steve: A gal. I'm blanking on her name.

Leo: Oh, Tonya. We have so many producers on the show.

Steve: I'm chatting with Tonya tomorrow morning about doing that.

Leo: Excellent. And then after a show some week we'll record those. The New Screen Savers launch is May 2nd, right after the radio show, about 3:00 p.m. Pacific, 6:00 p.m. Eastern time.

Steve: Talk about a lot of buzz. People are very, very excited.

Leo: They're very happy. Lot of buzz going on in the Comcast headquarters, too, apparently, but that's another story. If you have an idea for the show, or we are actually looking for questions for our Help Me! segment, you can email And don't forget, we're going to be live, 3:00 p.m. Pacific, 6:00 p.m. Eastern time, 2200 UTC, Saturday, May 2nd. And we decided - this is a great opportunity. One of the reasons we're doing it, we wanted to do a variety show so we could have little tips and stuff and bits from everybody. Our experience on New Year's Eve was what told us that.

Steve: How did the dry run go on Sunday?

Leo: Great. It's going to be a great show.

Steve: Yeah.

Leo: You know, it's going to be so much fun. Everybody's, you know, people have wanted me to do this for 10 years. And but now we have the horsepower to do it, I think. So I'm excited. New Screen Savers at TWiT. Actually, you know, it says "newscreensavers," but I think is actually the email. I think this is wrong. Do Unless we've got two addresses.

All right. More questions for Steverino, starting with Patrick in Central Minnesota. He wants to remove RC4. I want it gone. What? In the last two podcasts, you've described the unfathomable realization that many banking sites, like Bank of America, are using the bad RC4 cipher as their main communication medium. You also described how a browser offers a list of ciphers to the bank's server, and then it chooses one, in theory the best of the bunch, but apparently sometimes the bad one, RC4. So can I just remove that from the list? In fact, let's take all the weak ciphers out of the list so my browser just says, hey, I don't do that. And then Bank of America will be forced to choose a better one. If I can't get rid of RC4, can I configure my browser to not even mention it in the cipher list? Seems like this would solve a lot of problems. And if a site only accepts RC4, well, I don't want to talk to it anyway.

As a side note, I've been a listener of Leo's shows since day one. I got hooked on Leo from Tech TV until that channel went belly-up without any explanation as to why it was suddenly gone. Oh, I could tell you some stories, my friend. Every so often I'd google him to see what he was up to, and then one day he made his first podcast with some of the old crew. Those first ones were kind of crazy, but his personality always carried a lot of weight on the show. Thank you. Wow, he's come a long way. I'm sure glad you two found each other. Thank you for many, many years of "netcasts," with a wink to Leo.

Steve: So, okay. So first of all, I realized why banks had RC4.

Leo: Oh, why?

Steve: First in the list. Not quite as unfathomable as I had been saying.

Leo: Probably has something to do with IE6 or something.

Steve: Actually, it has to do with some of the attacks that we found, like BEAST.

Leo: Ah.

Steve: Which were attacks on the block ciphers. Block ciphers - any of them, like AES - use some sort of mode, like cipher block chaining. And as we've covered on the podcast, various little nicks have appeared in those modes, where if the attacker has access to the communications and can generate lots of traffic, there are games they can play that the block cipher modes are specifically vulnerable to, BEAST being the first one. And our listeners who've been with us the longest will remember that when BEAST was revealed, the recommendation was to move RC4 to the top of the list.

Leo: Oh.

Steve: Because RC4, while it's an old, creaky cipher, oddly enough has no known attacks against it. There was a paper maybe six months ago where the keying of RC4 was further brutalized. But there are actually no known practical attacks against RC4 itself. Whereas now, with BEAST and then Lucky 13 and then POODLE, all of these are attacks against block ciphers. So this sort of leaves us in a conundrum, because people feel nervous about RC4. People just don't like it because it's so simple. That's actually one of the reasons I do like it. And if you just warm it up first - run it for a while and discard that early output - it scrambles up its starting state, and then it's a fantastic, very fast cipher. In fact, some of RC4's original designers have, in effect, fixed it: by making a very few changes, they have strengthened it. But people just aren't liking a stream cipher. What's different is that it is a stream cipher. It produces a pseudorandom bitstream which you XOR your plaintext with to get the ciphertext. That's different from a block cipher, which takes your plaintext in chunks of bits, in blocks, mutates each block into a different block, and then does some fancy interblock linking to create the so-called "chain."
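RC4's simplicity really is striking: the whole cipher is a key-driven shuffle of 256 bytes plus a byte-at-a-time generator. Here's a minimal sketch, for illustration only, not production use. The drop parameter models the "warm it up first" idea - often called RC4-drop - of discarding the first chunk of keystream; the key and message are invented for the demo.

```python
def rc4(key: bytes, data: bytes, drop: int = 0) -> bytes:
    # Key-scheduling algorithm (KSA): the key shuffles a 256-byte state.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudorandom generation (PRGA): emit one keystream byte per step,
    # XOR it into the data. Encryption and decryption are the same operation.
    out = bytearray()
    i = j = 0
    for k in range(drop + len(data)):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        if k >= drop:                      # "warming up": skip early output
            out.append(data[k - drop] ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"secret key", b"Attack at dawn", drop=768)
pt = rc4(b"secret key", ct, drop=768)      # XOR with the same stream decrypts
```

Because the keystream depends only on the key, running the same function over the ciphertext recovers the plaintext, which is exactly the stream-cipher property being described.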

So, I mean, the dilemma we have is that we want to promote state-of-the-art stronger ciphers. The later versions of TLS have mitigations against these block ciphers. So people who have TLS 1.2, for example, are going to be okay. But people who don't then have a problem. At the same time, all the browsers quickly added mitigations, you know, prevention for like the BEAST attack. But not everyone is known to be using the latest browser. So what's someone to do? Weirdly enough, there is no good solution.

Ivan Ristic over at SSL Labs is right in the middle of this because he's trying to give people a letter grade, A through F. And he's decided that, if you use RC4, if RC4 is present, you cannot get better than a B because he wants to encourage people to remove RC4. GRC has an A now, and we don't support the RC4 cipher. There just is no reason to. So what banks should do is remove it, except then the concern is that there may be some creaky old person browser something somewhere that still needs it. I doubt that's true. The problem is, as long as everybody is worried that removing it might break something, nobody at either end will remove it. And then, if the bank puts it first in the list, that's what you're going to use. At the same time, there's actually nothing wrong with using it. There are no known attacks against it, against the cipher itself. And the browsers have solved problems where the browser, where the client has been able to.

So we're sort of in this weird place. For what it's worth, it looks like Windows is the only operating system that through some registry manipulation will allow you to turn off RC4. I haven't bothered to do it, but I did some digging around. And if you just google, like, "removing cipher suites from Windows," that will take you to a page where Microsoft explains it. You go into the registry, you flip some bits, and your system will no longer run the RC4 cipher. It'll just remove it from the list. Now, Firefox won't get the benefit of that because it's got its own security suite. Once upon a time it had a way of doing that, and they removed that. So there's nowhere, there's no way now to take it out in Firefox.
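For reference, the mechanism Steve describes looks roughly like the following .reg fragment. These SChannel key names are a sketch based on Microsoft's advisory on disabling RC4; verify them against Microsoft's current documentation before applying anything to a production machine.

```
Windows Registry Editor Version 5.00

; Disable the RC4 cipher suites in SChannel system-wide.
; Key names per Microsoft's "disabling RC4" advisory -- verify before use.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]
"Enabled"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000
```

As Steve notes, this only affects software that uses SChannel; Firefox carries its own TLS library and ignores these settings.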

Leo: Ah, not so fast.

Steve: Oh, yeah?

Leo: It looks like Aurora, the development edition, has added that. In fact, "disables use of RC4 except for temporarily whitelisted hosts."

Steve: Nice. Nice, nice, nice.

Leo: So it's back, baby. This is Aurora, is, what, the developer edition; right?

Steve: Nice. So we're right on the edge of making this transition.

Leo: If you use Firefox or Windows.

Steve: Yeah.

Leo: I bet you Google and others will follow suit; right?

Steve: Yeah. I think people are just getting to the point where it's like, okay, it's just time to stop using it. And what Ivan is going to do at SSL Labs is, I think by September of this year, if you still offer it on your server, you get an F.

Leo: Good.

Steve: He's going to deprecate servers this summer to a D. And if it's still there in September, F. It's just time to yank it because what you really want to do is just stop using it. Just let's not use it. Then we don't have that whole issue.

Leo: Chatroom has also given me a link to this Microsoft security advisory update for disabling RC4.

Steve: Nice.

Leo: Not sure when this went out, but it was for all versions of Windows from 7 and up. And it gives you some information, as you said, for what to modify in the registry to completely disable RC4.

Steve: Yup.

Leo: So you can do that, as you said.

Steve: So it can be done.

Leo: Yup. Good news. Actually this - I'm really glad you had that question. Very interesting. Tim Trott, Marianna, Florida, continues the conversation, wonders about SSL and a shared server: SSL requires a dedicated IP address. I run a server at where all except e-commerce sites are on a shared IP address. I've been assigned 16 IPs for a total of 185-plus websites. Wow. Will I be forced to obtain IPv6 assignments, which cost money at Rackspace, for my remaining shared IP hosted sites in order to give them each their own IP address? Some SSL certs cost more than the retail price for hosting, so I will welcome Let's Encrypt. That's the free thing we were talking about.

Steve: Yup.

Leo: And will that require a dedicated IP? And what will happen to self-signed certs, which I of course have never used, he says.

Steve: Okay. So here is the story. It is no longer the case that SSL requires a dedicated IP. It used to be true because the negotiation with the certificate took place before the encrypted TCP tunnel was first used. So what you had was an IP-to-IP connection where you were negotiating SSL and needed to verify the server's domain name. That's why you had to have - that's why the server had to assert its certificate, its domain name, based on the IP that you were connecting to, since it wasn't until that was done that the client could then make its query with a Host header and say, hi, here's the site I'm hoping to reach. So consequently, the IP address had to be bound to the certificate. That changed with TLS.

There's something called SNI, I want to say server name - I'm blanking on the acronym, SNI, Server Name Indication. And that is an option which has always been available, starting with TLS, which solves this problem completely. And it was somewhat worrisome maybe five years ago. But everybody now supports at least TLS 1.0, which is essentially SSL 3.0, but, again, has additional features. TLS 1.0 and on allows the initial handshake from the client to the server to have an extension field saying which domain the client is going to be connecting to. That puts the domain name in the handshake for the first time ever, not just the IP. Which allows a multi-homed server hosting many different websites to recognize that extension field and then supply the matching certificate from the array of certificates that it has. Go look up "server name indication" on Wikipedia, and you'll see a list of all the browsers and all the servers that support it at each end. Basically, everybody supports it at each end. So it is now something, Tim, that you are safe to use. So you could put all your commercial sites with free certificates from Let's Encrypt behind one IP if you wanted to.
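As a sketch of what SNI looks like from the client side, Python's standard ssl module (my choice of illustration, not anything Tim needs to use) exposes it as the `server_hostname` argument, which is carried in the ClientHello's SNI extension so a shared-IP server can pick the matching certificate. The hostname below is a placeholder.

```python
import socket
import ssl

def cert_subject_via_sni(hostname: str, port: int = 443):
    """Open a TLS connection, sending `hostname` in the ClientHello's
    SNI extension, and return the subject of the certificate the
    server selected for that name."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        # server_hostname goes out in the SNI extension, letting a
        # multi-homed server choose among its array of certificates.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["subject"]

# Modern ssl builds report SNI support via this flag.
print(ssl.HAS_SNI)
```

Calling `cert_subject_via_sni("www.example.com")` against a shared-IP host would return the certificate chosen for that name; the same IP queried with a different hostname would get a different certificate.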

Leo: Yay. And an update, we were talking a little bit about the challenge we were having going HTTPS on because we use so many different...

Steve: Caching.

Leo: Caching, well, it's not just caching. You know, I totaled it up. We have a CDN, that's Cachefly. And of course we will be pulling from that if you want to watch the video or listen to the audio on the website. We have an API server, Apiary. The website is served from a Node.js hosting service called Heroku. And the API itself is served from a Drupal host called Acquia. So at least four. There may be others involved. However, thank you for bringing up the issue because I did - at our scrum last week. The next day I said to the developers...

Steve: What's the story?

Leo: Yeah, hey, you think we could go HTTPS? And they very strongly said yes, we should do that. Matt, who's great, really a good programmer, said, "Yeah, I strongly encourage you to do that." And they came back, and they said, "We can do this. It's not as hard as we thought." You know, he said, "Let me look into it because, yeah, you're right, it raises some issues." Not as hard as we thought. There's some chaining or something. And it's going to cost us another $2,000 for the time involved in doing that, which is nothing, as far as I'm concerned, 1% of the total cost, less than, and well worth it.

Steve: Nice, nice.

Leo: So we will be HTTPS everywhere.

Steve: Yay. Using DigiCert EV certificates.

Leo: With a DigiCert EV, which is really neat.

Steve: Wonderful. Wonderful.

Leo: Yes, and we'll be green. We won't just be HTTPS, we'll be green, baby.

Steve: Yup, baby.

Leo: And proud of that. So it was great. The developers said "Yes, thank you for asking, we will do this." Yeah, so that was good. Joe Laba, Question 9, in Metropolitan Detroit says, "But, but, but..." of TrueCrypt: Steve, apparently I missed something somewhere. I remember you saying that there was no legal way for anyone to fork the TrueCrypt source code. But it sounds like someone - not just one, many - are going to be doing just that. I think it would be great, but what happened to make this possible? It was - TrueCrypt, we did point out, is not actually open source.

Steve: No, it's not. It's open license. And people are just doing it anyway because...

Leo: What are they going to do?

Steve: It's there, yeah.

Leo: Come out of the dark and sue you?

Steve: Yeah, I mean, it is absolutely true, the letter of the law of the TrueCrypt license says this is yours to inspect, but not to modify. And so they were making it available exactly in the open source spirit that you've often talked about, Leo, of we need to be able to inspect it. And doubtless they were helped by people over the years finding problems in the source. And look at the audit. It found some problems. Nothing major, but better to have found them than not, and only because it didn't need to be reverse-engineered. The source was open. The license says you can't change it. Well, the developers also said, and we're going away, and we're going to remain anonymous, and you're never going to find out who we are, and we're not supporting it any longer. So people are like, well, fine, we're going to take it and keep it alive because...

Leo: And my suspicion is the existing developers would embrace that.

Steve: Yes. I think, again, there's their officially stated policy, and there's their, eh, fine, you know, we made it very clear we are disassociated with it. Nobody come crying to us.

Leo: They're done.

Steve: Yup.

Leo: But that's a good question because we did say that. We were talking about that.

Steve: Yeah.

Leo: Last question.

Steve: And it's been reaffirmed.

Leo: Robert Lowery of Kansas: Steve and Leo, thanks for the great podcast. I've been a listener for blah, blah, blah. SpinRite saved my bacon, blah, blah, blah. Fan, blah, blah, blah. I want to turn the tables a bit and ask Leo a question: You occasionally mention that you enjoy programming, and you're obviously able to converse with Steve about some fairly in-depth programming topics. I'm curious what you use your programming skills to create? What languages are you most comfortable using? If you're going to get up to speed on a new language, how do you approach learning? Have a great day. Get out and grill. Believe me, Robert, I am. Robert in Kansas. He's obviously a beef farmer. Well, we know you love assembly language.

Steve: Yup, that's my language.

Leo: But you're a professional programmer. I am the farthest thing from a professional programmer. I'm a hobbyist. I love it. I did write some software that was relatively widely used, but it's been many moons, 20, more than 20, almost 30 years ago, in assembly language, for the Macintosh, for the early Macintosh.

Steve: That's right, 68000.

Leo: I was running a BBS, and one of the first - I wrote two things that I released to the world, open source, by the way, before there was even open source. I just made it public domain and published the source code. Yeah, 68000 assembler is beautiful. But I love C. I learned C. And somewhere, yeah, I have my Kernighan & Ritchie right here. This is actually - this is not the original. The original is so beaten up that I actually bought a second copy of this. So C was my first - BASIC was my first language on Atari. I learned assembly, which - assembly's beautiful. I agree with you. I love assembly. And there's something magical about getting down to how the machine works.

Steve: There's just nothing below it.

Leo: Yeah. And it's useful, very useful, in understanding that. And then C I learned. I love Forth, believe it or not, but that's when it started becoming a hobby; right? That's when it was, like, I just collect these. And I do collect languages. And I have books on every possible language. You know, once you learn C, you could pretty much learn anything.

Steve: Yeah. Although Forth, Forth is a write-only language.

Leo: Wrong. Forth can be written so that it looks like English prose.

Steve: Okay.

Leo: Because the essence of Forth is you create your own primitives out of Forth primitives.

Steve: True, true.

Leo: And you can write a sentence in Forth that actually...

Steve: That is true.

Leo: It was created by Charles Moore for controlling telescopes. And I think it's still used...

Steve: Yeah, astronomy.

Leo: Yeah. But I loved it. It was a stack-based language. It was just wild architecture.

Steve: And there's, like, weird - it pops up. Like there's something to do with UEFI having, like, a Forth interpreter in it because...

Leo: Yeah, well, Forth is good in embedded because it's compact.

Steve: Yes, yes.

Leo: Very compact. So most languages are like C, they're imperative languages. But you go back to Forth and even earlier, you go back to Lisp, these are languages that are very different. And so I've, believe it or not, I've been learning Lisp, as my new thing is Common Lisp. I decided I wanted to go back to the beginning, a language as old as I am.

Steve: Yeah, well, and in fact Lisp in particular has a really rich heritage. I mean, it is a...

Leo: Oh, it's amazing.

Steve: And for it to have continued the way it has for so many years, I mean, because it is in a class of its own.

Leo: And it's interesting because it actually has started to influence modern languages. You know, everybody followed the C root. But as time has gone by, many elements of Lisp, and that style of programming...

Steve: And as computers get more powerful - I mean, one of the reasons we have C is that it was written on one of the PDPs, it was written on the PDP-11, when you needed - really, the whole concept was the smallest increment above assembly language that was machine independent. So for me, that's why it's my second language. And in fact I wrote a big chunk of SQRL in C because it needed to be cross-platform. I implemented the GCM authenticated encryption technology. And because there wasn't a public domain, open source library, I created one and made it available to everyone because you need that as part of SQRL. And C was my choice for implementing it because I needed to offer the source, and for it to be able to be compiled for iOS or Android, you know, the ARM platform or whatever.

But what I loved about C was these guys, they first wrote the OS in assembly language, PDP-11 assembly language. And then they said let's recode this in something that is really, I mean, just like the smallest step away that gives us more expressability. We can do expressions. We can do the sorts of things, flow control that is more elegant. And that's C. But then again, that was because they were still dealing with very small, very low-power mini computers. Today, we just have so much more power to do much more powerful languages, dynamic languages.

Leo: I think, though, the point, really, is that the language - so all languages are what we call Turing complete. They can all be made to do the same thing. But there are some languages, for whatever reason, maybe my personality, yours, or whatever, that we just kind of get better, and we're more fluent and expressive in. And that's what you're kind of looking for, if you're a programmer. And I have to say Python, I love Python. I used Perl for a long time, wrote a lot of little bits of utilities and grep stuff in Perl. I love Ruby. Ruby's gorgeous. It's kind of, after Python, Ruby was the next logical step. There are more modern languages. Go from Google is really great for concurrency. Each has its kind of merits. And Haskell, there's a great book called "[Learn You a Haskell for Great Good!]"

Steve: That's a wacky language.

Leo: But it's really - but it's, by the way, it's going back to Lisp. That's what's so interesting. So I finally said, you know what, I learned Scheme, which is a derivative, in Racket. But I want to go back and go to Common Lisp. And, by the way, it's really fun. And it's actually very easy to learn.

Steve: And it's available everywhere.

Leo: It's free. In fact, you know what, I'll show you, I'm using Emacs to do it because Emacs is written in Lisp. Emacs is the programming language - or actually it's really a lifestyle more than an editor. Richard Stallman wrote it, and he wrote it in Lisp. But I'm using a thing called Slime, which is a mode for Lisp programming, makes it very easy. So I'm actually in what's called a REPL, a Read Evaluate Print Loop, that lets you enter in code and execute it immediately. In fact, you see I had a typo in here. It dumped me out into the debugger. This is the debugger. You can't see it because the color contrast is bad. But it looks fine on my screen. And it tells me, oh, you've got a typo. So let me go back here. Let me go back up in my Emacs. And the problem is this parenthesis. So I've already entered it in, but I'm going to reenter it. I'm going to make that a brace. Hit return, it's going to put it back down there, it's going to execute it. It warned me, it says you've redefined this function. That's all right. I'm in Emacs, executing Lisp, which is awesome.

Steve: Nice, yup.

Leo: It's awesome. And so this is an IDE, in a way, like a modern IDE, in a very old-fashioned form, in essentially command-line. So, now, he said how do you learn? There's some really great - the web is wonderful now. Look for - there are series of books that teach a variety of languages. "How to Think Like a Computer Scientist" is one, and they have every language, although it was, I think, originally Python. And you'll find these online. If you look at my programming folder, I have a lot of links to various places and things and ways to learn. You should also look at "The Hard Way," "[Learn] X the Hard Way." This is a style of teaching, and many different languages have "[Learn] X the Hard Way" books. It's really fun. There's a lot of - if you want to learn, of course there's Codecademy. They teach JavaScript, which is not a bad language to learn. You taught yourself JavaScript, Steve. Was it hard?

Steve: Yup, yup. No, it was, well...

Leo: It's very C-like; isn't it?

Steve: Yeah, I've been programming forever, but, yeah.

Leo: Yeah, but it's like C. Well, you know what the rule about programming is, you've got to do it every day. At least an hour or two every day. Because then, if you don't, you have to relearn the language. Randal Schwartz told me this. He's a Perl guru. Randal is such a guru...

Steve: That's especially true for Perl.

Leo: Yeah. Because you forget. You don't do it for four days...

Steve: Oh, lord.

Leo: You've got to get the book out.

Steve: Yeah. I've got the whole - GRC's news server uses a Perl frontend for adding a whole bunch of features. I mean, that I wrote.

Leo: You wrote it?

Steve: I wrote a Perl wrapper around the news server.

Leo: Oh, respect.

Steve: And it does all kinds of extra stuff. But, boy, I have to go look at my source, and I go, you know, I sort of like relearn Perl from looking at what I wrote before and go, oh, yeah, that's the way I do that. What I would say, answering the question of how you learn a language, is start with a book and read it until you start getting antsy. At some point you just kind of like, you start feeling like, come on, I want to get going, I want to - and then solve a problem.

Leo: Yeah.

Steve: That's the key, is solve a problem. Think of something that you want to do in that language, and put yourself about that task. Because it is by solving a specific problem that you will then realize, oh, I'm not quite as ready as I thought. And so basically it slaps you down a little bit. It'll put your antsiness back in its place. And then you'll go about finding the answers that you need for how to solve the problem. But that's - I think that's the key. You just can't, like, do nothing because it's just, at some point...

Leo: No, you have to write something.

Steve: Yeah, exactly, you've got to create something.

Leo: That's actually, for me, that's the biggest challenge is, oh, well, what problem should I solve? So it's nice to have a problem to solve. Somewhere, and I'll find it, there is a document I found once, a guy who said, "I learn a lot of languages as a professional programmer. I have 10 things, if I want to learn a language, I have to solve these 10 problems. And by the time I've solved all 10 in that language, that dialect, I'm fluent." I wish I could find it. I thought I had it in my bookmarks here, but...

Steve: I've also seen that. I'm kind of like, I know...

Leo: It was like on a news - it was a news server somewhere I saw years ago, and I copied it. I have a PDF of it. The other thing I'd recommend, I really recommend, and we've mentioned this before, is a book which is designed to teach people to think about programming in a kind of a - more than just kind of get out there and write code way. This is from MIT Press. It's free and it's online. And this actually uses a Lisp dialect called Scheme that's free and easily available. It's a good teaching language.

Steve: Nice.

Leo: But it's fun for me. I'm not a pro; I'm too old to be a professional programmer. I wish I had been in my youth because I love it. But it's kind of like doing crossword puzzles.

Steve: What was the kid's programming language that Alan Kay did? There was a...

Leo: Well, he did Logo, Turtle Graphics. But I think you're thinking of Smalltalk. And then Scratch.

Steve: No, Scratch, Scratch is what I was thinking of, yes. Logo...

Leo: Yeah. Scratch is a Smalltalk, yeah.

Steve: Logo, then Smalltalk, then Scratch was a Smalltalk variant. And that's another...

Leo: Scratch is still there. A great place to go is And Scratch is very much like that - we were talking about that Android app inventor from MIT. It's the same exact idea where you click blocks together to make it do things. Yeah, this is Alan Kay's, still alive, this thing. In fact, Scratch, I think the one laptop per child is - much of the UI is written in Scratch.

Steve: Nice.

Leo: Smalltalk is a great language to learn, by the way.

Steve: Yup, yup.

Leo: What a good language. And that's test-driven programming, really some really good disciplines built into Smalltalk. And it's the original object-oriented language. It's, see, it's fun. You can just go on and on.

Steve: Yeah, there's been so much, there's so much depth and history here.

Leo: I love it.

Steve: And I've often thought I would do someday what you are doing, which is just sort of decide I'm going to learn another language and pick it up.

Leo: Really fun. Really, really fun. And I wanted to start, I wanted to kind of do foundational work. And so for me, going back to Common Lisp and starting there...

Steve: I think that's very neat.

Leo: like tearing everything down and starting over.

Steve: Yup.

Leo: It's actually quite simple.

Steve: And look how much fun you're having. I mean, that's the whole point. Have fun.

Leo: So much fun. Steve, we are, speaking of fun, we are done. But always fun to do this show, learn so much from it. And I hope you all enjoy it as much as I do. We do Security Now! every Tuesday, right after MacBreak Weekly, about 1:30 Pacific, 4:30 Eastern time, 2030 UTC at Please tune in. Love it when you're in the chatroom. It really adds to the show for me. And then of course you can participate in other ways. Steve is on the Twitter, @SGgrc. You can send him questions there, or comments, or suggestions. He reads those. If you have questions for the show itself, you can go to and fill out the form there. That's the best way. Steve also has lots of other stuff at GRC, including SpinRite, the world's best hard drive maintenance utility, a must-have.

Steve: It works.

Leo: If you've got a hard drive, you need SpinRite. It works. You might also be interested in all the other freebies he's got there. See, you only, really, you only pay for one thing. Everything else is free at, including 16Kb audio versions and the transcriptions of this show. We offer full fidelity audio, soon to be stereo, soon to be joint stereo versions of this show.

Steve: Ah.

Leo: I don't know why, but - oh, I do know, actually I do know why, and I can't say.

Steve: Okay.

Leo: But a partner wants them in stereo, let's put it that way. So we thought, well, we'll just go stereo. So Steve will be slightly left, I'll be slightly right. Or something.

Steve: Interesting. Interesting.

Leo: Not so much that if you were listening in one ear you wouldn't understand it. It's just slight stereo. A little fullness to it. We also have video, if you want to watch. It's a fascinating thing to watch. You could see Steve's blinking lights from his PDP-8. Those are all at, or look for Security Now! on all your favorite podcatchers and the TWiT apps and all that stuff. Steve, always a pleasure.

Steve: Well, and next week the - I have read Matt Blaze's testimony that he'll be giving to Congress tomorrow. And this is the whole issue of government's backdoor or front door or golden key or whatever. And there are some - there's some subtlety to it, but some really good points that he makes. And so it's my intention to share his testimony with our audience for next week's podcast. I think everyone will find it really interesting. And so we'll do that, and then we'll spend the rest of the podcast talking about it.

Leo: Oh, I can't wait.

Steve: Yeah. Because, I mean, this is the big, this is really the big question. Is our legal system going to force people, force encryption technologies to have some way for law enforcement to decrypt? The government and law enforcement desperately want it. And everybody who understands why it's a bad idea, even those who aren't concerned about privacy, but actually understand why it's a bad idea, Matt understands it. And he raises some points that I had never considered. And so I think it's going to be a fascinating podcast.

Leo: Oh, I can't wait. Next week.

Steve: Yup.

Leo: See you later, Steve.

Steve: Thanks, my friend.

Copyright (c) 2014 by Steve Gibson and Leo Laporte. SOME RIGHTS RESERVED

This work is licensed for the good of the Internet Community under the
Creative Commons License v2.5. See the following Web page for details:
