Wednesday, February 13, 2019

Cyber-Security Analogy, Cont.

My previous analogy was... well, not an exact description of things. For example, a denial-of-service attack would be more like someone creating a bunch of clones to try to get through the gate. So many are trying to get through that it creates a really long line, and all the other (legitimate) visitors get fed up with the wait and leave.

And you can also imagine that everyone trying to get through the gate has a unique number identifying their origin (in the real world, an IP address). You can fake the number, of course, but you have to have a number of some sort. The clones might all share the same number (one attacker using one computer to generate a flood of attacks) or they might all have unique numbers (one attacker using a botnet under their control to generate attacks). The gate guards can potentially use that number to identify attackers and clear them out.
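To make the "guards using that number to clear out clones" idea concrete, here's a minimal sketch of a per-source rate limiter. This is my own toy illustration (the names and limits are made up) - real firewalls and load balancers use fancier machinery like token buckets and sliding windows, but the core idea is the same: count requests per origin and turn away anyone flooding the gate.

```c
#include <assert.h>
#include <string.h>

/* Toy sketch: track request counts per source address and refuse
 * any source that floods the gate.  Illustrative only. */

#define MAX_SOURCES 1024
#define FLOOD_LIMIT 100   /* requests allowed per source */

struct source {
    char addr[46];        /* big enough for an IPv6 address string */
    int  count;
};

static struct source table[MAX_SOURCES];
static int n_sources = 0;

/* Returns 1 if the request should be let through the gate,
 * 0 if this source has exceeded the limit and is "cleared out". */
int allow_request(const char *addr)
{
    for (int i = 0; i < n_sources; i++) {
        if (strcmp(table[i].addr, addr) == 0)
            return ++table[i].count <= FLOOD_LIMIT;
    }
    if (n_sources < MAX_SOURCES) {
        strncpy(table[n_sources].addr, addr, sizeof table[n_sources].addr - 1);
        table[n_sources].addr[sizeof table[n_sources].addr - 1] = '\0';
        table[n_sources].count = 1;
        n_sources++;
        return 1;
    }
    return 0;   /* table full: fail closed */
}
```

Note that this only works if the number is honest. Clones that each show up with a different number - spoofed addresses, or a botnet of genuinely different machines - blow right past a per-source table, which is exactly what makes distributed attacks so hard to filter.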

But...

That's not really why I created the analogy in the first place. I did it to create a different frame of reference, so I could look at the problem in a different way.

For example, much of cyber-security is focused on handling the daily attacks... figuring out ways of improving security at the gate, or blocking up the not-so-secret doors in the walls, or training people so that they don't throw ropes over the wall to let an attacker in.

Each of those fields has its own fast-paced, cat-and-mouse development. Someone finds a new 'secret' door in the wall. If it's an enemy, they may keep it to themselves (a 'zero-day' attack that nobody knows about and can't defend against) or try to share it with everyone. If it's a defender, they may try blocking it up with bricks.

If/when both sides grow aware of it, there's a race between the defender trying to block it up and the attacker trying to get through... an enemy wizard creates a new spell to find the door, and spreads that information to all the people interested in getting into the castle. Does the door get blocked before an attacker gets through? Who knows?

All of that is necessary just to stay on top of things, but it doesn't really change the nature of the game.

Or perhaps it does, in the long run. Maybe. If the defenders can find and secure all the doors faster than new ones are found and exploited.

Maybe, someday, getting into the castle will become so difficult that most of the casual attacks drop off.

It'll probably be a long time before that happens, though.

So, what would change the nature of the game?

From my (very superficial, newbie) awareness, there are a couple of different ideas on how to handle this.

For example, some people want to just rebuild the castle entirely, making sure that this time there are no secret doors or hidden passageways. (I think this gets into Trusted Computing, as well as the push for more secure software, holding software providers liable for vulnerabilities, and probably some other stuff I don't really know much about).

There are quite a few challenges to this goal, though. Imagine trying to rebuild a castle while you're still living and working in it. And that's assuming you can make something entirely secure in the first place. (There are arguments about that, and I don't have enough experience to have my own opinion on it. Systems are complex, and it's possible that we can't secure them entirely... but fixing a security flaw like the infamous buffer overflow doesn't necessarily create another vulnerability elsewhere, so in theory you should be able to secure it all? Maybe? Let me get back to this when I have a better idea of what I'm talking about.)
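As an aside, for anyone curious what that "infamous buffer overflow" actually looks like - here's a tiny sketch in C. The function names are my own invention; the bug pattern is the classic one. A fixed-size buffer is like a room in the castle wall: write past its end and you're scribbling on whatever memory happens to be next door, which an attacker can sometimes turn into control of the program.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* UNSAFE (illustrative only -- do not use):
 *
 *   void greet_unsafe(const char *name) {
 *       char buf[8];
 *       strcpy(buf, name);   // no bounds check: a long name overflows buf
 *   }
 */

/* The fix is boring: always pass the buffer size along.
 * snprintf() never writes more than the size you give it,
 * and always null-terminates the result. */
void greet_safe(char *out, size_t out_size, const char *name)
{
    snprintf(out, out_size, "Hi, %s", name);
}
```

The fix has been known for decades; the hard part is that millions of lines of old code were written the unsafe way, which is part of why "brick up every door" takes so long.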

For anyone unaware - quite a bit of computer technology is concerned with "backwards compatibility". That way all your old programs and things will still work on the new system. It also, unfortunately, means that technology has all these 'kludges' - remnants of things that were necessary back when computers were built a certain way, but aren't now. Or rather, they're only needed for backwards compatibility. And early computing was more trusting than we are now, so some security issues are intrinsic to decisions made way back when. If you could redesign everything from scratch, incorporating what we now know, things might possibly be different. But that would require a massive investment of time, energy, and money. There are, apparently, still numerous computers running really ancient software because businesses rely on it and haven't been able to find an alternative on anything more recent. (Many tech people seem to have stories of someone finding an old system nobody knows the purpose of anymore, powering it off because something that old can't possibly be important, and discovering that doing so made it impossible for the business to function.)

Anyways. Rebuilding from scratch seems massively complicated, though there's some potential to the idea. Especially if you go with a gradual rollout, so businesses can adjust as their existing systems wear out and they have to buy new ones. (Sort of like the transition from IPv4 to IPv6, though you still have a lot of systems that require the ability to use both.)
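(A tiny illustration of why that dual-stack requirement is a pain: the two address families aren't even the same shape - IPv4 addresses are 4 bytes, IPv6 addresses are 16 - so any code that parses, stores, or compares addresses has to handle both. This sketch uses the POSIX inet_pton() routine; the little wrapper function is my own.)

```c
#include <assert.h>
#include <arpa/inet.h>

/* Returns 1 if `text` is a valid address in the given family
 * (AF_INET for IPv4, AF_INET6 for IPv6), 0 otherwise. */
int parses_as(int family, const char *text)
{
    unsigned char buf[16];   /* 4 bytes used for IPv4, 16 for IPv6 */
    return inet_pton(family, text, buf) == 1;
}
```

An address valid in one family simply isn't valid in the other, so dual-stack software ends up carrying two code paths for years.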

There's also the idea that we could secure the castle if we just held software vendors accountable for their software. That is, the hidden doors in your wall don't necessarily come just from what you built - or from Microsoft, or Linux, or Apple. The operating system might have vulnerabilities, but any software you add to your computer can come with vulnerabilities of its own. So even if your operating system is secure, even if all known doorways are bricked up, anything you download and install (an internet browser, a game, an application to manage your finances, anything) might also create a hidden door in your castle walls.

And software companies, apparently, are well known for releasing products as quickly as they can... and fine-tuning them after they've gone public. Think of every new Windows operating system, or the time it took for Pokemon Go to smooth out all the bugs. The public acts as the final phase of testing for much of what gets released.

And, well... software is complicated. You can test, and test, and test, and you probably won't find everything until you're running it for real. Like when we changed our warehouse management system at my old company... I tested the heck out of it, but some things didn't become apparent until we were dealing with the number of users and orders we handled on a daily basis. It's hard for a test environment to duplicate everything, and some problems are probably inevitable.

But that doesn't seem to be the real problem with this solution. After all, it shouldn't be too hard to set a reasonable standard for what counts as 'an inevitable part of producing a new product' and what counts as 'sheer laziness and the desire to make money quick'.

The real problem is that, in our current environment, nobody wants to add regulation that will unduly burden software vendors. Or maybe we don't need regulation - maybe we just need a really big lawsuit holding someone (Adobe, or Microsoft, or Google, or Apple, etc.) accountable for the losses a business suffered when an attacker exploited a vulnerability in their product. Assuming you can, since we all tend to sign those Terms of Service agreements pro forma, and they'll probably be used to deny liability for any such thing. Still, if software vendors have to pay a really large penalty for any mistakes in their code, they'll probably start spending more money on development and security testing before releasing something new.

And if the case is big enough, and the vendor penalized enough, everyone else in the industry will take notice and start doing the same.

It may not stop attacks entirely, but it would at least make sure that there were fewer hidden doors to find. Of course, it would probably also slow down the software development process.

So those are just a few possible solutions. There's another way of looking at this, though.

I came up with the analogy I did because I wanted to think about what it meant, this mass distribution of the ability to hack into a target. The 'wizards' creating 'spells' that anyone could use.

Hmmm, before I focus on that I want to talk about something else.


Actually, given how long this is getting, I'm gonna stop right here and start a new post.

