Tuesday, August 8, 2017

Strategic Plateaus in the Cyber domain

One thing that I think surprises many people who don't play video games is how similar the strategies across all of them are. It's as if Chess, Checkers, and Go all had the same basic gameplay.

In most online shooters, you have characters with a high "skill ceiling" that require precise aim and maneuvering, and others which have the ability to soak up damage or cause area effects or heal their friends, which generally require more positioning and strategy understanding.

And as new characters are introduced to a game, or existing characters are tweaked, the strategies that work best overall change.

In Overwatch, the most popular game among hackers right now, you have "Dive Comp" and "Deathball Comp". These represent "fast, deadly characters and chaotic rampage" vs. "healthy, armored characters and slow advance". If you're going with the right team composition and strategy, you can overcome even very serious disadvantages in your "mechanics" (shooting skill, reaction times, etc.). I.e., your team gains an asymmetric advantage until the other teams copy you and catch up.

Which technique works best is generally called "the current meta", and it trickles down from the pro players to the very lowest ranks of Overwatch (where nobody should honestly care, but they still really do). New meta shifts in Overwatch, despite the continual changes introduced by every patch, are extremely rare, perhaps once a year! The game designers say this is essentially because people are bad at finding and testing new strategies. It is a rare skill: you almost have to be pretty good at a new strategy before you can know whether it even works. I call this a strategic plateau, because it LOOKS like the meta is still one way, but it's really another way, yet to be discovered until someone gets good enough at some new way of operating.

And yet the cyber domain is choppier than any computer game could ever be. Things change at a tremendous rate, and yet people generally treat the "Cyber Meta" as a static thing! Either we are in the "Botnet Meta" or the "Worm Meta". We either do "client-side attacks" or we do "SQLi attacks". So many people think the cyber meta is whatever the West Coast's VC-funded machine tells them it is at RSA or in Wired Magazine!

Getting this right is a big bet - some might point to recent events by saying it is a bet of global importance. Investment in a high end "Man on the Side" technology stack can run you into the billions. You'd better hope the meta doesn't change until your investment pays off. And what are the strategic differences between TAO-style organizations and the Russian/Chinese way? It's possible to LOSE if you don't understand and adapt to the current up-to-date Meta of the domain you are in, no matter what your other advantages are.

Grugq has a whole talk on this, but everyone is going to divide it differently in their head and be really crazy about it, the way people are when I use Torbjorn on attack. Also, why isn't "Kaspersky" in my spreadsheet yet! :) Also: Do you have a similar spreadsheet? IF SO SHARE.

No matter how you define the "Deathball" or "Dive Comp" of the cyber domain, you also need to analyze in depth how modern changes in the landscape affect them and make them stronger or weaker. "Bitcoin and Wikileaks as a service" may have replaced "Russian Intel" as a threat against giant teams of operators, for example. Endpoint defenses and malware analysis and correlation may have advanced to the point where remote worms have become much stronger in the meta.

But the real fun is in thinking up new comps to run - before QUANTUMINSERT was done, someone had to imagine it fully fledged in their head. Before the Russians could run a destructive worm from a tiny contractor team that hit an accounting firm, someone had already known, with certainty, that it would work. And so that's the real question I'm asking everyone here. What's the next meta? What does your dark shadow tell you?

Monday, August 7, 2017

DDIRNSA posts about VEP

The former DDIRNSA (who just retired) posted this today, and I assume it accurately reflects his feelings on the VEP debate.


There's nothing in there that would surprise anyone who regularly reads this blog, though: essentially, he gives no credence to the argument that we should be giving up all our vulnerabilities to vendors.

Likewise, he appears to be miffed that people are blaming WannaCry/NotPetya on the NSA, as you might expect.

Oh, I also want to mention the things he didn't say would be good compromises, which tend to be offered as "halfway points" by people who have never been in this business. He didn't say "Let's only keep 0day for a few months" or "Let's only keep certain kinds of 0day - the not-important ones". All those ideas are terrible, and they get offered again and again by various policy arms as if they will magically get better over time.

Saturday, August 5, 2017

The Killswitch story feels like bullshit

If you haven't watched the INFILTRATE keynote from Stephen Watt here, then you need to do that, especially if you are a lawyer who specializes in cyber law. INFILTRATE is where you hear about issues that will affect the community in the future, and you should register now! :)

But let me float my own and others' initial feeling when MalwareTech got arrested: the "killswitch" story was clearly bullshit. What I think happened is that MalwareTech had something to do with WannaCry, knew about the killswitch, and when WannaCry started getting huge and causing massive amounts of damage (say, to the NHS of his own country) he freaked out and "found the killswitch". This is why he was so upset to be outed by the media.

Being afraid to take the limelight is not a typical "White Hat" behavior, to say the least.

That said, we need to acknowledge the strategic impact law enforcement operations as a whole have on national security cyber capabilities, and how the lighter and friendlier approach of many European nations avoids the issues we are having here in the States.

Pretty much every infosec professional (yes, even the ones in the IC!) knows people who have been indicted for computer crimes now. And in most of those cases, the prosecution has (as in the video above) operated in what is essentially an unfair, merciless way, even for very minor crimes. This has massive strategic implications when you consider that the US Secret Service and FBI often compete with Mandiant for the handling of computer intrusions, and the people making the decisions about which information to share with Law Enforcement have an extremely negative opinion of it.

In other words: law enforcement needs to treat hacker cases the way the LAPD treats a case against a famous actor. Or at least, that's the smartest thing to do strategically, and something the US does a lot worse than many of our allies.

Wednesday, August 2, 2017

0days: Testing, and Timing

Perfect timing is everything...
So there's another reason why nation states use 0days: testing. Testing exploits is particularly hard. All software is hard, but exploits are, by definition, "things that are not supposed to work". This means you are testing them not only against a few VMs you have lying around, but also against, say, every version of AVG's free antivirus you have ever seen, on every possible version of Windows, with every possible configuration.

As you can imagine, that problem is "exponential" in the way computer scientists use the term when they mean "a complete freakin' nightmare". Of course, the only REAL test is whether an exploit "works in the wild". That is a whole different level of "working", and many exploits are labeled "works in the wild" to say "yes, this one is at that next level of quality".
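To see why the test matrix gets called "exponential", here is a toy sketch. Every number and product name below is a made-up illustration, not real data: each new dimension of the target environment multiplies the number of configurations you have to re-verify after every change to the exploit.

```python
from itertools import product

# Illustrative assumptions only - the real matrix is far larger.
windows_builds = ["XP-SP3", "7-SP1", "8.1", "10-1607", "10-1703"]  # 5 OS builds
av_products    = ["AVG-Free", "Defender", "None"]                  # 3 AV choices
av_versions    = ["v1", "v2", "v3", "v4"]                          # 4 versions each
host_configs   = ["default", "DEP+ASLR", "EMET"]                   # 3 hardening setups

# Every tweak to the exploit must be re-tested against every combination.
matrix = list(product(windows_builds, av_products, av_versions, host_configs))
print(len(matrix))  # 5 * 3 * 4 * 3 = 180 configurations, for even this toy model
```

Add one more dimension (locale, service pack, driver set) and the count multiplies again, which is why "passed the lab" and "works in the wild" stay such different claims.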

And some exploits, even very good exploits, like ETERNALBLUE appears to be, fail the testing. (According to reporting, it was known as "ETERNAL BLUESCREEN" since it often crashed targets.)

When exploits fail the testing phase, you don't give up on them. You pass them off to different exploitation teams for more analysis. You wait for the code in the target process to change (which often works). You wait for another bug to be found that can be combined with this one. Persistence, which is a key metric of your operational success, is not just about the hacking part, in other words.

And even if your exploit is PERFECT, you still have many things to do before you can use it. It needs to be integrated into your toolset. People need to be trained on it. Targets need to be collected and triaged. Operational security notes need to be written. What do you do when the exploit fails? Does it leave logs? How do you clean those up? How do you tune your defenses to detect whether the Russians are already using this vulnerability (and have therefore tuned their OWN defenses to detect it)?

All of these things take time and we could, in some cases, be talking several years. Your average penetration testing gig is maybe two weeks long. These processes are similar, but not the same. So be careful extrapolating operational work from penetration testing too much. The Grugq has a good presentation you can read on this evolution here.

Needless to say, attacking with vulnerabilities that are already well known has a negative impact on your OPSEC. But it may also mean that the targets you most care about (which have an average patch-testing cycle of 14 days) fall out of your reach.
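That timing claim can be made concrete with a toy calculation (the 14-day figure comes from the paragraph above; everything else is assumed): once a vulnerability is public, a non-0day exploit's useful life against a given target is roughly whatever remains of that target's patch cycle.

```python
# Toy model, illustrative only: remaining utility of a known (non-0day)
# vulnerability against a target with a fixed average patch-testing cycle.
def exposure_window_days(days_since_disclosure: int, patch_cycle_days: int = 14) -> int:
    """Days left before the average target has deployed the patch."""
    return max(0, patch_cycle_days - days_since_disclosure)

print(exposure_window_days(0))   # fresh n-day: about 14 days of utility
print(exposure_window_days(10))  # 4 days left
print(exposure_window_days(30))  # 0: that target is out of reach
```

A 0day, by contrast, has no disclosure clock running against it, which is the whole point of the post above.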

Tuesday, August 1, 2017

Do you need 0days? What about oxygen?

I always enjoy it when people say that you don't need 0days to gather cyber intelligence as a nation state, such as in today's SearchSecurity article about the BlackHat discussion on the VEP.

Technically, you don't need covert intelligence at all. Open Source information can be just as good in many cases. But then, there are also cases (and I'm struggling to avoid bombast here) where covert collection is desired. And from a military standpoint, there are many cases where hidden pre-placement on an enemy network is desired.

The answer to "Do you need 0days" is "Yes."

Intelligence and military work is quite different from penetration testing work. This should go without saying, but let's delve a bit into the "how" to see why exactly 0days are so useful.

First of all, in penetration testing you rarely sit on a target network for months or years collecting data like you do in intelligence. And you rarely need that data to be "untampered with", i.e., we don't want our signals intelligence collection to be double agents feeding us false data. Implants in general have received a lot less attention in the public penetration testing sphere than in the intelligence sphere. FLAME is still generations ahead of what a typical penetration testing company would use. I say this because our "somewhat similar to FLAME" framework INNUENDO is in that market space, and the people who buy it are typically large banks looking to emulate nation-state threats, not small and midsize penetration testing companies.

The thing is this: Using a non-0day exploit means IDS systems can silently catch you, and then burn and turn your implant network against you. This is a non-trivial risk. Human lives are OFTEN ON THE LINE and when they are not, billion dollar SIGINT programs are.

In intelligence, you need to overcome every network visibility and management tool the defender has, and the defender only has to detect you once. Also in many cases you simply cannot fail when doing intelligence operations in the cyber domain. In penetration testing you can get away with writing a report that says "You have no unpatched vulnerabilities on your system." This is, most of the time, what the customer really wants!

In intelligence work you have a much higher bar. Get in, get out, be undetected, for years at a time, and the consequences for failure are unimaginable. This is where 0days fit in, as part of a mature intelligence capability that takes into account the real risk structure of the world of mirrors.

Monday, July 31, 2017

Rebecca Slayton can write and has a cool name

This is a really great paper you can read here. I highly recommend it for its much more in-depth analysis of how expensive offense probably is. At the end, she goes into a cost/benefit analysis of both the offensive and defensive sides of Stuxnet, which in my opinion is the weakest part of the paper. You can't say vulnerabilities cost zero dollars just because they came from internal sources. And you can't just "average" the two possibilities you have and come up with a guess for how much something cost.

I mean, the whole paper suffers a bit from that: if you're not intimately familiar with putting together offensive programs, there are many, many moving pieces you don't account for in your spreadsheet. That's something you learn when running any business, really. On the other hand, she's not on Twitter, so maybe she DOES have experience in fedworld and just doesn't want to go into depth?

Also, there's no discussion of opportunity costs. And a delay of three months on releasing a product, equally true for web applications and nuclear bombs, can be infinitely expensive.

But aside from that, this is the kind of work the policy world needs to start doing. Five stars, would read again.

I mean, the simpler way of saying this is the NSA mantra, which is that whoever knows the network better, controls it. And defenders obviously have a home-field advantage... :)

Quotas as a Cyber Norm

Originally I chose this picture as a way of illustrating perspective around different problems we have. But now I want a giant scorpion pet! So win-win!

Part of the security community's issue with the VEP is that it addresses a tiny problem that Microsoft has blown all out of proportion for no reason, and distracts attention from the really major and direct policy problems in cyber, namely supply chain attacks and backdoored standards and updates.

Vulnerabilities have many natural limits, like giant scorpions needing oxygen. If nothing else, it costs money to find, test, and exploit them, even assuming they are infinite in supply, which I assure you they are not. Likewise, vulnerabilities can be mitigated by a company with a good software development practice; there is a way for a vendor to handle that kind of risk. A backdoored cryptographic standard or supply chain attack cannot be mitigated, other than by investing a lot of money in tamper-proof bags, which is probably an unreasonable thing to ask Cisco to do.

Deep down, forcing the bureaucracy to prioritize actions that have no "cost" to the government but high risk for an American company makes a lot more sense than something like the VEP, which imposes a prioritization calculus on something that is already high cost to the government.

Essentially what I'm asking for here is this: limit the number of times a year we intercept a package from a vendor for backdooring. Maybe even publish that number? We could do this only for certain countries, perhaps? There are so many options, and all of them are better than "we do whatever we want to the supply chain and let US companies bear those risks."

Likewise, do we have a policy on asking US companies to send backdoored updates to customers? Is it "whenever it's important, as we secretly define it"?

Imagine if China said, "Look, we backdoor two Huawei routers a year for intelligence purposes." Is that an OK norm to set?