So you’ve walked into a new job, you’ve been given the task of securing all the things, and you’re wondering: where do I start? I’ve been there; I think most of us have. It’s a major ask and a bit overwhelming, so I’d like to share some of the questions I ask when I’m in a new environment. (You can apply this to any environment, not just a new one.)
I start by asking 5 fundamental questions, 5 things that I feel cover the core, the basics, the initial things any org must be doing to even have a chance. These aren’t random; they are in order, and if you read other “top” lists you will notice I didn’t invent these or make them up. They are also not exhaustive; there are a GREAT MANY other things you can and should be doing, but if you can’t nail these 5, you might be sabotaging your success.
These have nothing to do with the latest buzzwords, and they are not sexy, but if your org cannot master this stuff, you have next to no chance of dealing with today’s exotic targeted attacks (or hell, the non-exotic off-the-shelf stuff either).
These 5 questions resolve to 5 main focus areas, which in turn are made up of a multitude of processes (more than I mention here). Initially I pick just 4 processes in each area to deliver, because I want to scope the work. That gives me 20 deliverables to measure and work on. I try to make these action oriented: things I can actually do or implement to improve security. They tend to be a process of some kind, or an artifact that can be seen, measured and demonstrated. As with all good processes, they are not complete until they are documented and measured. If you don’t like the 4 I chose, pick your own.
First lets talk about tracking/measuring our progress.
I use a modified COBIT maturity model with 4 basic levels, 0-3. Why only 4? I don’t think many orgs ever get to “optimized”, and I would move on to something else before working on that. Use 5 if you like.
I use these process maturity scores and colors to produce a heat map, with the colors showing progress. I go through and “score” the environment on each process or artifact, honestly, so I know how much work it will take to get to green. Green means an established, repeatable process that is documented and measured. That is a wholly achievable goal for any process or artifact, and this whole approach is about achievable goals that have a real impact on security.
The heat map approach lets me baseline an org and show real, easy-to-identify progress that execs can digest at a glance. They can see a hexagon go from red to orange to yellow to green, and with that simple visual change they can immediately see the areas that need resources and work. I’ve used the model to justify what I am working on, and to justify new resources, multiple times. I can’t think of a simpler way to show progress or highlight issues to a busy exec.
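The underlying data is simple enough to sketch in a few lines. Here is a minimal illustration of the scoring model: 5 focus areas, 4 processes each, scored 0-3 on the trimmed COBIT-style scale, with each score mapped to a heat map color. The area names, process names and scores below are placeholders, not a prescription.

```python
# Trimmed COBIT-style maturity scale: 0 (nothing) through 3 (documented
# and measured). Colors match the heat map described above.
COLORS = {0: "red", 1: "orange", 2: "yellow", 3: "green"}

# Illustrative baseline scores for 5 areas x 4 processes each.
scores = {
    "Asset Management":   {"onboarding": 1, "sunsetting": 0, "rogue discovery": 0, "ownership": 2},
    "Vulnerability Mgmt": {"scanning": 1, "patching": 2, "config baselines": 0, "passwords": 1},
    "Monitoring":         {"central logging": 0, "time sync": 1, "egress visibility": 0, "alert handling": 0},
    "Incident Response":  {"policy": 1, "process": 0, "testing": 0, "recovery": 1},
    "Policy & Metrics":   {"enterprise policy": 1, "awareness": 1, "dev training": 0, "metrics": 0},
}

for area, processes in scores.items():
    avg = sum(processes.values()) / len(processes)
    print(f"{area:20s} avg={avg:.1f}")
    for name, score in processes.items():
        print(f"  {name:18s} {score} ({COLORS[score]})")
```

Rescoring periodically and diffing against the baseline is what turns this table into the progress story for the execs.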
So on to the 5 questions.
Do you know what is on your network? Do you know who owns it? How it got there? How important is it?
How can you be sure you are securing all the things if you have no idea what “All The Things” consists of? Asset Management flows to vulnerability management and on to a great many IT processes (in fact it can be said that if your IT department is not doing asset management, you cannot do effective security).
The basic premise is that everything that is on the network was put there by us, in a manner that we approve of and is being managed by us.
This means we put it there, we baselined/configured/hardened it, and we are patching, logging, backing up and monitoring it. You can’t do all that unless you have good processes to onboard new systems, sunset old systems and discover rogue systems. All key parts of your asset management program.
So now we know what we have, we should measure its security, right? We need to check that it’s being patched, that it was deployed with a secure configuration, and maybe measure it against some baselines. To do this we need to do some vulnerability scanning.
Vulnerability scanning will measure the patching and configuration management processes of the IT organisation, not just report on vulnerabilities.
Having vulnerabilities is a sure sign that the patch management and/or configuration management processes are broken.
I like to throw password management into this area: how passwords are being created, used and stored in the environment, for both users and admins. This includes auditing for default passwords, reviewing password policy, and looking at how and where passwords are stored and who has access to them.
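The default-password audit is one of the more mechanical checks: try a short list of vendor defaults against each device account record. This is only a sketch; the device records and defaults list are invented, and a real audit would pull both from your asset inventory and vendor documentation rather than hard-coding credentials in a script.

```python
# Common vendor default (user, password) pairs -- illustrative only.
KNOWN_DEFAULTS = {("admin", "admin"), ("admin", "password"), ("root", "toor")}

# Hypothetical device account records pulled from the asset inventory.
devices = [
    {"host": "switch01", "user": "admin", "password": "admin"},
    {"host": "fw01", "user": "admin", "password": "Str0ng!Passphrase"},
]

# Flag any device still sitting on a known default credential.
flagged = [d["host"] for d in devices
           if (d["user"], d["password"]) in KNOWN_DEFAULTS]
print("devices still on default credentials:", flagged)
```

The count of flagged devices also makes a natural metric for the heat map: it should trend to zero and stay there.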
Ok next up we want to know what our assets are doing. What is happening on our VPN? Our firewalls? Our network? Our Active Directory/LDAP?
Before we can know all that, we have to ensure things are being logged in a correct manner and to a central location. We also want to look at time synchronisation across systems, so that we can compare timestamps between logs.
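A quick way to see why time sync matters: check each host’s clock against a reference and flag anything outside a tolerance. The host clocks below are placeholders; in practice you would pull them from each box (an NTP query, SNMP, or `date` over SSH) rather than hard-coding them.

```python
from datetime import datetime

# Reference time (e.g. from your NTP source) and each host's reported
# clock at the same moment -- illustrative values only.
reference = datetime(2011, 1, 10, 12, 0, 0)
host_clocks = {
    "web01": datetime(2011, 1, 10, 12, 0, 2),   # 2s off: fine
    "db01":  datetime(2011, 1, 10, 11, 57, 40),  # 140s behind: a problem
}

MAX_SKEW_SECONDS = 5
for host, clock in host_clocks.items():
    skew = abs((clock - reference).total_seconds())
    status = "OK" if skew <= MAX_SKEW_SECONDS else "OUT OF SYNC"
    print(f"{host}: skew {skew:.0f}s {status}")
```

A host that is minutes out of sync will make its log entries appear to happen before or after related events elsewhere, which quietly breaks any cross-system correlation.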
If we don’t have a good view into the network traffic that is going into and out of our egress points (we did identify those right?… right?), now would be a great time to look at doing that.
We have no hope of detecting incidents in our org if we are not logging, if we are not watching the network, if our times are not synched.
Many orgs reach directly for the SIEM in times like these and you might wonder why that is not step 1.
Before you can automate a process, you must be able to do it manually.
Before you can SIEM, you must be able to manually correlate events between logs from various systems. A SIEM is not a magical system that makes everything in your environment spontaneously start logging things to a central location in a useful manner.
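Manual correlation, the thing a SIEM eventually automates, can be as simple as joining two log sources on a common field within a time window. Here is a toy sketch matching VPN logins to AD logons for the same user; the log records and the two-minute window are invented for illustration.

```python
from datetime import datetime, timedelta

# Simplified (user, timestamp) events from two log sources.
vpn_logins = [("alice", datetime(2011, 1, 10, 9, 0, 5)),
              ("bob",   datetime(2011, 1, 10, 9, 3, 0))]
ad_logons  = [("alice",   datetime(2011, 1, 10, 9, 0, 20)),
              ("mallory", datetime(2011, 1, 10, 9, 4, 0))]

# Correlate: same user, events within a short window of each other.
WINDOW = timedelta(minutes=2)
correlated = [(u, t1, t2)
              for u, t1 in vpn_logins
              for v, t2 in ad_logons
              if u == v and abs(t2 - t1) <= WINDOW]

print(correlated)  # VPN sessions with a matching AD logon
```

If your logs aren’t central, parseable and time-synced, even this trivial join is impossible, and no SIEM purchase will fix that for you.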
Once we can do all this, we must put processes in place to deal with the alerts and reporting that result. It is said that a security tool with no process attached to its output might as well not be running.
Stuff will go wrong. Laptops will be stolen, malware will be found, bad links will be clicked, and we all know it. We know it’s not if but when. So what do we do when it happens? Who do we tell? Who is in charge? Who fixes what?
Start by writing a high-level policy for incident response, something that outlines the who, the why and the when. When you have that, work on the what: the incident response process. This is more in-depth and more thorough, with details on the kinds of incidents you might encounter, how to document them, how to communicate them, and so on.
It is said that no plan survives contact with the enemy, and in the same way, very few incident response processes survive their first use in a real incident.
So test it often, and start simple. Test it, change it, evolve it, grow it, and retest it in more complex ways as it matures. Don’t let the first test of your incident response process and capabilities be a real incident.
A core piece of incident response is getting back to business. Depending on the kind of incident this can be simple or monumental. Plan for it. Back things up and have plans to rebuild systems. If you started with asset management, you already have a starting point for which systems are important and need to be recovered first if the incident is massive.
We need some rules, we need some guidance, and we all need to be working from the same playbook. Our IT staff need a set of rules, our users need some education, and our developers need both. We might have external rules we need to follow too. We might also need to measure some things.
We need a master plan for security, the basic document that lays out our core rules: the enterprise security policy. It forms the core from which the other policies flow.
Our users need training and awareness. Security doesn’t come naturally to everyone and people need to be taught what is important, why it is important and their role in the bigger picture.
Users need to be educated regularly on what our rules are and how to be secure. They are the single biggest vulnerability we have. Even after we secure everything, a single user can undo it all with one uneducated click.
We also need to measure some things. The org probably wants to know how secure it is, what its money is buying, and what we are doing to secure all the things. If we’ve done things right, just about every process we implement should have some kind of metric. This serves to show others that the work is being done, and to keep us honest; if no one is looking at the metrics, it’s easy to drop the ball and cut corners. I’ve often used metrics to drive IT to implement a process: simply say you will measure something, and they will implement process to drive that metric in the right direction.
Get your metrics from good security, don’t let metrics drive the security.
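To make that concrete, here is one example of the kind of process metric I mean: patch compliance as a simple percentage. The host data is invented; a real version would be fed by the vulnerability scanning results from earlier.

```python
# Hypothetical per-host results from the vulnerability scanner.
hosts = [
    {"name": "web01",  "missing_patches": 0},
    {"name": "web02",  "missing_patches": 3},
    {"name": "db01",   "missing_patches": 0},
    {"name": "mail01", "missing_patches": 1},
]

# Patch compliance: percentage of hosts with nothing outstanding.
compliant = sum(1 for h in hosts if h["missing_patches"] == 0)
metric = 100.0 * compliant / len(hosts)
print(f"patch compliance: {metric:.0f}% ({compliant}/{len(hosts)} hosts)")
```

A metric like this measures the patching process, which is the point: the number should improve because the process improved, not because someone gamed the scan.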
Now these translate into 5 programs with action-oriented deliverables. (Part 2 coming soon.. ish)
Late night last night I got the code cleaned up and submitted to msfdev.
Should probably be in the Metasploit svn in a few days, I know those guys are super busy with an upcoming release.
In the meantime you can download it from GitHub – http://github.com/Zate/Nessus-Bridge-for-Metasploit
I’m a Google fan, I admit it fully. I use lots of their services, I like their stuff, and I am at peace with their delving into my personal space. It’s gonna happen; unless you choose to live your life 100% offline, you are trading personal privacy for access to services.
Their latest creation (which has been around for a while, just not public) is their very own URL shortener, goo.gl. It does the usual things like tracking metrics, and it does one other thing I think is really cool: it creates a QR code for your URL.
Here is one I created earlier (ha, sounds like a cooking show).
http://goo.gl/YgTu.qr for the url http://goo.gl/YgTu
Very cool. I like QR codes. For those of us with smart phones, a simple scan of the code and you can open the site.
I do wish the service had an easy way to copy the new urls to the clipboard though.
So what else can it do? Well, #1, I want it to tie in with their Safe Browsing service (http://www.google.com/safebrowsing/diagnostic?site=google.com) so that I can’t create a URL to a known bad site. I’d also like them to regularly scan the URLs and disable those that link to malware. There are lots of URL shorteners, they definitely pose a security risk, and it’s about time someone took the step of removing bad URLs.
Thoughts?
So one of the major “issues” with the Nessus plugin for Metasploit right now is that it does not handle large reports well. Even the usual db_import_nessus doesn’t handle large reports well, because it reads the entire file into one big blob and then parses it.
The nexpose importer and the nmap importer both use REXML Stream Processors.
So tonight I copied the nmap_xml.rb file and am working on making it process Nessus v2 files. I am hoping that both the Nessus plugin and db_import will benefit from these changes.
I’ve been looking at it for a few days and kind of avoiding it, because it’s difficult and is going to mean spending large portions of my time fumbling through the current importer, learning how it works well enough to know what to modify.
Well, it turns out it’s simpler than I thought. (more…)
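The actual importer is Ruby and uses REXML’s stream API, but the streaming idea itself is easy to sketch in Python with `ElementTree.iterparse`: handle each ReportItem as it arrives and free it, instead of loading the whole .nessus file into memory. The inline XML below is a minimal stand-in for a real Nessus v2 report, not the full schema.

```python
import io
import xml.etree.ElementTree as ET

# Tiny stand-in for a Nessus v2 report (real files can be hundreds of MB).
sample = io.BytesIO(b"""<NessusClientData_v2><Report>
  <ReportHost name="10.0.0.5">
    <ReportItem port="445" severity="3" pluginID="12345"/>
    <ReportItem port="80" severity="1" pluginID="67890"/>
  </ReportHost>
</Report></NessusClientData_v2>""")

findings = []
host = None
for event, elem in ET.iterparse(sample, events=("start", "end")):
    if event == "start" and elem.tag == "ReportHost":
        host = elem.get("name")          # remember which host we're inside
    elif event == "end" and elem.tag == "ReportItem":
        findings.append((host, elem.get("port"), elem.get("severity")))
        elem.clear()                     # free the element; memory stays flat

print(findings)
```

Because elements are processed and discarded as the parser hits their closing tags, memory use stays roughly constant regardless of report size, which is exactly the property the one-big-blob approach lacks.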