On Organizational Maturity and Cybersecurity

Malicious compromise is a profitable industry with constant innovation, and it has produced a repeating cycle in which new cybersecurity risks generate new mitigations.
While the cycle is understood and mitigations may be available, they are only useful if the organization knows it needs them, has people who can use them, and has processes in place to ensure they are used. In other words, security tools are only useful if the organization has the maturity to treat cybersecurity as important. In this post I go over the importance of understanding the maturity level of your cybersecurity preparedness.

In the not-so-distant past the prevailing approach to corporate cybersecurity was the application of the principle of least privilege, construction and maintenance of network firewall solutions, and stringent mail security. Cybersecurity has evolved as elevation-of-privilege and lateral-movement attacks became common and it became obvious that networks were not the boundary they once were. Email became just one more SaaS service which must be secured. The common routine now is that bad people see opportunities to make money or do mischief, which drives them to think of new ways to compromise environments. The result is a repeating cycle: new cybersecurity risks come on the scene, security products and improvements are identified and created, and companies apply the threat mitigations. In essence, innovation by bad people drives reactive security innovation by software companies and cybersecurity organizations. At this point, this cycle is generally understood as a fundamental truth of the state of security and the cloud.

Malicious compromise is a profitable industry with constant innovation.

While the cycle is understood by many, there is a lot of work still needed to improve the overall security postures of organizations. The availability of security suites and tools is integral to that improvement, but security tools are only handy if the organization knows it needs them, has people who can use them, and has processes in place to ensure they are used. In other words, security tools are only useful if the organization has the maturity to treat cybersecurity as important. Assessing cybersecurity maturity is important whether you are gauging your own environment, preparing your organization to do better, or considering a new group to work with.

I characterize cybersecurity maturity in three stages (or postures) an organization can be in: Beginning, Forming, and Established. Let’s go over some characteristics of those stages in the lists below.

Beginning

  • This stage benefits most from pervasive trends in new technology and security initiatives, and typically involves a lot of discovery effort to see the state of the environment.
  • No established processes for identifying flaws/weaknesses in the organization and patching them to reduce the likelihood of events.
  • No established processes, plans, or dedicated owners for incident handling if an incident were to occur.
  • Nonexistent to minimal planning for post-incident recovery to get the business back on its feet.
  • Small or no dedicated team devoted to security preparedness and operations.
  • No security specific tooling to alert the organization of potential problems.
  • Potentially unaware of compliance obligations related to security and preparedness.
  • Not meeting compliance obligations for data security and business continuity.
  • Security controls and recoverability are put into place reactively during or immediately following an event or events (e.g., adding MFA following an incident).

Forming

  • This stage is characterized by team building, product selection and use, and new initiatives to add security functionality and more secure service hygiene.
  • Building/establishing processes to mitigate risks.
  • Identified people and teams who are accountable for security preparedness.
  • Dedicated people who are accountable for responding to incidents.
  • Security teams typically formed from admins who are familiar with authentication and authorization technologies, application behavior, or IT operations.
  • Endpoint (host, network endpoint) detections in use or beginning to be used.
  • BYOD control in progress or established.
  • In discovery mode to determine organization and/or industry specific compliance needs.
  • Inventory efforts for line-of-business resources and services (applications, data, automation) underway.
  • Prioritization and documentation of business-critical configurations and services underway.
  • Deployments of privileged identity and access management solutions started.
  • Reactive end user training on security dos and don’ts.
  • Multi-factor authentication requirements are defined, and deployments are underway.
  • SIEM use and analysis is routine.
  • Beginning use of privileged access management and privileged identity management.

Established

  • Established organizations are continually working to maintain and enhance the security of the business with dedicated experts, tools, and strategies.
  • Security preparedness and incident response have established processes.
  • Business interrupting events have established plans and action owners.
  • Established code repositories and pipelines for security settings and resource configurations.
  • Risk mitigation planning.
  • Deep understanding of the organizational compliance needs.
  • Compliance needs met or plans on how to meet them underway.
  • Uses advanced security aggregators for monitoring and risk mitigation (Microsoft Defender for Cloud, for example).
  • Identity sourcing is cloud driven, and applications have standard security criteria they must adhere to.
  • Governance reviews and automation for compliance.
  • Routine privileged access management and privileged identity management.
  • Established privileged access management processes and tooling where needed.
  • End user training campaigns to continually bolster security by education and testing.

Organizations in the Beginning stage are those most likely to experience a sudden and profound revelation of the need for better security. The cause of the revelation is often an audit which didn’t go well, something malicious which interrupts business services, or perhaps a security breach. As of this writing, this stage typically comprises small to medium sized businesses (1,000 seats or fewer), as many larger organizations have already gotten wise to the problems.

In the Forming stage, there are plans or the beginnings of plans for what it will take to establish sufficient visibility and control. There is awareness of and investment in security at executive levels, albeit the investments are newer. In this stage, things begin mobilizing to make the business secure and, while it is possible that an event could still occur, the organization has dedicated responders and tools to deal with it. Most enterprise organizations are at least in the Forming stage at this point, in many cases after unfortunate events prodded things along.

Those who are in the Established stage still have risks related to business interruptions and compromise, yet this is the stage all should be striving to achieve. Having reached the Established stage doesn’t mean the work is done; it means that breaches or business interruptions are still likely to occur, and if they do the blast radius will be reduced and recovery fast.

It should be expected that the organization will mature faster in some areas and slower in others. However, if things are going well there will be steady progression toward greater maturity and thus greater security and resilience. There is no defined path to cybersecurity maturity, nor a clear indicator of when an organization is mature. These things are subjective and must be assessed by the organization against its business priorities.

Since there is no defined path, it should be expected that it will take time and effort to gain security maturity in an organization. It won’t happen overnight. Think of it as steering a cruise ship: turns are wide and slow and take time. And, to push the analogy further, you have to have the right crew to make the ship do what you want.

What is the cybersecurity maturity of your organization? A better question is, where do you need to invest to get security where you need it to be?

PowerShell Repositories

Over the years, I had the opportunity to lead engagements on many types of scenarios. Quite a few of those scenarios required automation of some kind to make complex tasks doable, and from them I learned that I really enjoy solving complex problems and creating automation with code. PowerShell was my language of choice for most of these scenarios due to the language’s extensibility and ease of use, and to make the code accessible I would post it online at the TechNet Script Gallery.

The TechNet Script Gallery hosted an amazing collection of intellectual property in the form of scripts in all languages. It was a truly community-driven repository and a unique one in that it was centered specifically on the Microsoft product areas. Unfortunately, the TechNet Gallery was retired in 2020 and nearly all that intellectual property was lost.

Which explains why I have had so many people reaching out to me to get copies of PowerShell code! Thankfully, I make it a point to save copies of content I write in case people ask, or I need to adapt it for some future situation.

I recently moved much of the code I originally posted on the TechNet Script Center to GitHub. I’ll be posting more over time, but for now here are 14 repositories which I hope will help you as much as they have helped others.

The scripts below are divided into four categories:

  • Environment discovery: Code which enumerates environmental configuration, looks for performance conditions, or retrieves data.
  • Security checks: Code which reports on security-specific conditions or configurations.
  • Data queries, auditing, and analysis: Code which enables logging or auditing, retrieves data, or distills information.
  • Tools: Code which performs a focused task, such as running a script under another identity or searching the directory.

Environment discovery

SensibleTim/GetTrustTopology: What Active Directory trusts are present in an environment and how they are configured is one of those things which isn’t important until everything depends on it. This script will query Active Directory for the details of all configured trusts and put those details into a text file. (github.com)
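
For a taste of the idea, here is a minimal sketch using the ActiveDirectory module’s Get-ADTrust (the repository’s script may gather more detail; the output path is just an example):

    Import-Module ActiveDirectory
    # Enumerate every trust the domain knows about and dump the details to text.
    Get-ADTrust -Filter * |
        Select-Object Name, Direction, TrustType, ForestTransitive, SelectiveAuthentication |
        Format-List |
        Out-File -FilePath .\TrustTopology.txt   # example output path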

SensibleTim/GetADObjectData: A PowerShell script which will do Active Directory searches for a specified object’s attribute values and AD replication metadata without needing PowerShell modules or other dependencies. (github.com)
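
A minimal sketch of the dependency-free search approach with System.DirectoryServices (the account name is a placeholder, and the replication metadata portion is omitted):

    # Search the current domain without importing any modules.
    $searcher = New-Object System.DirectoryServices.DirectorySearcher
    $searcher.Filter = '(sAMAccountName=jdoe)'   # placeholder account
    $result = $searcher.FindOne()
    if ($result) {
        # Print every returned attribute name and its value(s).
        $result.Properties.PropertyNames |
            ForEach-Object { '{0}: {1}' -f $_, ($result.Properties[$_] -join '; ') }
    }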

SensibleTim/CheckMaxConcurrentApi: This PowerShell script checks local servers (member and DCs) for NTLM performance bottlenecks (aka MaxConcurrentAPI issues) and provides a report. (github.com)
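
The underlying signal is the Netlogon performance counter set (available on Server 2008 R2 and later); here is a sketch of a spot check, assuming you run it on the server in question:

    # Sustained nonzero waiters or growing timeouts point at NTLM
    # (MaxConcurrentAPI) pressure on the secure channel.
    $counters = '\Netlogon(*)\Semaphore Waiters', '\Netlogon(*)\Semaphore Timeouts'
    Get-Counter -Counter $counters |
        Select-Object -ExpandProperty CounterSamples |
        Format-Table Path, CookedValue -AutoSize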

Security checks

SensibleTim/CheckCertChaining: A PowerShell scripted solution for doing validity checks (aka chaining) of certificates on Windows hosts. (github.com)
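
The core of such a check can be sketched with the .NET X509Chain class (the revocation mode here is an assumption; the repository’s checks may go further):

    # Attempt to build the chain for each machine certificate and report status.
    Get-ChildItem Cert:\LocalMachine\My | ForEach-Object {
        $chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
        $chain.ChainPolicy.RevocationMode = 'Online'   # assumption: online CRL/OCSP checks
        $valid = $chain.Build($_)
        [pscustomobject]@{
            Subject = $_.Subject
            Chains  = $valid
            Status  = ($chain.ChainStatus.StatusInformation -join '; ').Trim()
        }
    }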

SensibleTim/GoldenTicketCheck: This script queries the local computer’s Kerberos ticket caches for TGTs and service tickets which do not match the default domain duration for renewal. This script is not certain to point out golden tickets if present; it simply points out tickets to be examined. Details of the tickets are presented at the PS prompt. (github.com)
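
The raw material for this kind of review comes from the local ticket cache; a quick sketch of the simplest possible peek (manual comparison against your domain defaults still required):

    # List each cached ticket's target and renew time for eyeball review.
    klist | Select-String -Pattern 'Server:', 'Renew Time:'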

SensibleTim/DetectCiphersConfig: This PowerShell script checks the local Windows computer’s registry to see what is configured for cipher and Schannel (TLS) use. (github.com)
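
A sketch of the registry read involved (absent keys simply mean the OS defaults are in effect):

    $base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
    # Walk any explicitly configured protocol keys and show their switches.
    Get-ChildItem -Path $base -Recurse -ErrorAction SilentlyContinue | ForEach-Object {
        $props = Get-ItemProperty -Path $_.PSPath
        [pscustomobject]@{
            Key               = $_.Name
            Enabled           = $props.Enabled
            DisabledByDefault = $props.DisabledByDefault
        }
    }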

SensibleTim/SHA1SigCertCheck: Microsoft and others have deprecated the use of certificates which have SHA1 signatures (http://aka.ms/sha1). This PowerShell script makes it easy to determine if a certificate was signed with SHA1 and whether the deprecation applies. (github.com)
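
The heart of the test can be sketched in a few lines against the machine store (the store location is an example):

    # Flag certificates whose signature algorithm is SHA-1.
    Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.SignatureAlgorithm.FriendlyName -match 'sha1' } |
        Select-Object Subject, NotAfter, @{ n='SigAlg'; e={ $_.SignatureAlgorithm.FriendlyName } }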

Data queries, auditing and analysis

SensibleTim/AADAuditReport: This script uses the Graph API to search an Azure AD tenant’s audit logs for a specified period up to the current time. The tenant must have Azure AD Premium licensing and AAD auditing enabled, and at least one user must be assigned an AAD Premium license for this to work. Results are placed into a CSV file for review. (github.com)
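
The original script predates Microsoft Graph, but the same query can be sketched today against the Graph directory audit endpoint (assumes you already hold a bearer token in $token with AuditLog.Read.All; pagination omitted):

    # Pull the last week of directory audit events and save them to CSV.
    $since   = (Get-Date).AddDays(-7).ToString('yyyy-MM-ddTHH:mm:ssZ')
    $uri     = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?`$filter=activityDateTime ge $since"
    $headers = @{ Authorization = "Bearer $token" }
    (Invoke-RestMethod -Uri $uri -Headers $headers).value |
        Export-Csv -Path .\AuditReport.csv -NoTypeInformation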

SensibleTim/ADFSReproAuditing: This PowerShell script can be used to easily and in an automated way turn on ADFS tracing and collect only the data captured during the problem reproduction, saving the engineer hours of time in reviewing the data. (github.com)
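
The enable/reproduce/export flow looks roughly like this (the channel name varies by AD FS version, so treat it as an assumption):

    wevtutil sl "AD FS Tracing/Debug" /e:true    # enable tracing
    # ... reproduce the problem ...
    wevtutil sl "AD FS Tracing/Debug" /e:false   # stop tracing
    wevtutil epl "AD FS Tracing/Debug" C:\Temp\AdfsTrace.evtx   # export for review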

SensibleTim/ADFSSecAuditParse: This script will parse an ADFS Security event log file (EVTX) and search for audit events related to a specific user or other criteria. The script will report each ADFS login instance matching the given criteria during a stated time frame. (github.com)
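
A hedged sketch of the parsing side with Get-WinEvent (the provider name, file path, and user string are all assumptions):

    # Filter a saved ADFS security log for audit events mentioning a user.
    Get-WinEvent -Path .\AdfsSecurity.evtx |
        Where-Object { $_.ProviderName -eq 'AD FS Auditing' -and $_.Message -match 'jdoe' } |
        Select-Object TimeCreated, Id, Message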

SensibleTim/SetCertStoreAudit: There are scenarios where it’s helpful to turn on file object auditing of the certificate store on a computer for a user. This script sets auditing on the certificate file objects, which is otherwise difficult to enable due to the complexity of the permissions on certificates. (github.com)
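
Setting a SACL on a key file boils down to something like this sketch (the path and principal are examples; object-access auditing must also be enabled in policy):

    $keyFile = 'C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys\<keyfile>'   # example path
    # Audit successful reads of the private key file by anyone.
    $acl  = Get-Acl -Path $keyFile -Audit
    $rule = New-Object System.Security.AccessControl.FileSystemAuditRule('Everyone', 'Read', 'Success')
    $acl.AddAuditRule($rule)
    Set-Acl -Path $keyFile -AclObject $acl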

Tools

SensibleTim/StartScriptAsProcess: This PowerShell script can be used to run another PowerShell script using a specific identity in Windows. (github.com)
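
The essence can be sketched in two lines (the script path is an example):

    # Prompt for the alternate identity, then run the script as that identity.
    $cred = Get-Credential
    Start-Process -FilePath powershell.exe -Credential $cred -ArgumentList '-NoProfile', '-File', 'C:\Scripts\Task.ps1'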

SensibleTim/FindSPNs-inForest: This script accepts a parameter of a Kerberos ServicePrincipalName string and searches the local forest for that string using the .NET DirectorySearcher class. (github.com)
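
A sketch of the Global Catalog search at the core of it ($spn is whatever you are hunting for):

    $spn      = 'MSSQLSvc/sql01*'   # example SPN pattern
    # Search the whole forest via a Global Catalog, no modules required.
    $forest   = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()
    $root     = New-Object System.DirectoryServices.DirectoryEntry("GC://$($forest.Name)")
    $searcher = New-Object System.DirectoryServices.DirectorySearcher($root, "(servicePrincipalName=$spn)")
    $searcher.FindAll() | ForEach-Object { $_.Properties['distinguishedname'][0] }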

SensibleTim/GetUserGroups: This script finds all groups a specific principal is a member of. It includes all group scopes and SIDHistory memberships as well. (github.com)
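
One way to sketch the nested-membership part is via token groups, which also expand SIDHistory (the UPN is a placeholder; unresolvable SIDs will throw and error handling is omitted):

    # Build a token for the account and translate each group SID to a name.
    $id = New-Object System.Security.Principal.WindowsIdentity('jdoe@contoso.com')
    $id.Groups |
        ForEach-Object { $_.Translate([System.Security.Principal.NTAccount]) } |
        Sort-Object Value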

A Day at the SPA

This blog post was originally published July 9, 2007, on the TechNet blog https://blogs.technet.com/ad. It is worth noting that the capability discussed in this article is now available in the Windows Performance Toolkit, and the toolkit also allows for customization of the performance analyzers.

Ah, there’s nothing like the stop-everything, our-company-has-come-to-a-complete-halt emergency call we sometimes get where the domain controllers have slowed to a figurative crawl, with nearly all other business likewise emulating a glacier owing to logon and application failures and the like.

If you’ve had that happen to one of your domain controllers, then you are nodding your head now and feeling some relief that you are reading about it and not experiencing that issue right this moment.

The question for this post is: what do you do when that waking nightmare happens (other than considering where you can hide where your boss can’t find you)?

Well, you use my favorite, and the guest of honor for this post: Server Performance Advisor. Otherwise known as SPA.

 Think of SPA as a distilled and concentrated version of the Perfmon data you might review in this scenario.    Answers to your questions are boiled down to what you need to know; things that are not relevant to Active Directory performance aren’t gathered, collated or mentioned.  SPA may not tell you the cause of the problem in every case, but it will tell you where to look to find that cause. 

So, I’ve talked about the generalities of SPA, now let’s delve into the specifics.  Well, not all of them, but an overview and the highlights that will be most useful to you.

SPA’s AD data collector is comprised of sections called Performance Advice, Active Directory, Application Tables, CPU, Network, Disk, Memory, Tuning Parameters, and General Information.

Before you reach all the hard data in those sections, though, SPA gives you a summary at the top of the report.  It’ll look something like this:

Summary
  CPU Usage (%): 2
  Top Process Group (CPU %): lsass.exe, 98
  Top Activity (CPU %): SamEnumUsersInDom, 5
  Top Client (CPU %): Rockyroaddc01.icecream.dairy.org, 5
  Top Disk by IO Rate (IO/sec): Disk 0, 11

Performance Advice is self-explanatory and is one of the big benefits of SPA over other performance data tools.  It’s a synopsis of the more common bottlenecks that can be found, with an assessment of whether they are a problem in your case.  Very helpful.  It looks at CPU, Network, Memory and Disk I/O and gives a percentage of overall utilization, its judgment on whether the performance seen is idle, normal or a problem, and a short detail sentence that may tell more.

The Active Directory portion gives good, collated data and some hard numbers on AD specific counters.  These are most useful if you already understand what that domain controller’s baseline performance counters are; in other words, what the normal numbers would be for that domain controller based on what role it has and the services it provides day to day.  Though, SPA is most often used when a sudden problem has occurred, and so at that point establishing a baseline is not what it should be used for.

The good, collated data includes a listing of clients with the most CPU usage for LDAP searches.  Client names are resolved to FQDNs and there is a separate area that gives the results of those searches.

AD has indices for fast searches and those indices can get hammered sometimes.  The Application Tables section gives data on how those indices are used.  This information can be used to refine the queries an application issues to the database (if they traverse too many entries to get you a result, for example), to suggest that you need to index something new, or to indicate that you need to examine and perhaps fix your database using ntdsutil.exe.

The CPU portion gives a good snapshot of the busiest processes running on the server during the data gathering.  Typically, this would show LSASS.EXE as being the busiest on a domain controller, but not always, particularly in situations where the domain controller has multiple jobs (file server, application server of some kind perhaps).  Generally speaking, having a domain controller be just a domain controller is a good thing.

Note: If Idle has the highest CPU percentage then you may want to make sure you gathered data while the problem was actually occurring.
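
As a present-day aside, you can approximate this busiest-process snapshot with PowerShell; a rough sketch, to be run while the problem is occurring:

    # Top five processes by instantaneous CPU (values can exceed 100 on multi-core).
    (Get-Counter '\Process(*)\% Processor Time').CounterSamples |
        Where-Object { $_.InstanceName -notin '_total', 'idle' } |
        Sort-Object CookedValue -Descending |
        Select-Object -First 5 InstanceName, @{ n='CPUPercent'; e={ [math]::Round($_.CookedValue, 1) } }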

The Network section is one of the most commonly useful ones.  Among other things, this summarizes the TCP and UDP client inbound and outbound traffic by computer.  It also tells what processes on the local server were being used in conjunction with that traffic.  Good stuff which can give a “smoking gun” for some issues.  The remaining data in the Network section is also useful, but we have to draw the line somewhere or this becomes less of a blog post and more like training.

The Disk and Memory sections will provide very useful data, more so if you have a baseline for that system to tell you what is out of the normal for it.

SPA is a free download from our site and installs as a new program group.  Here’s where you can get it (install does not require a reboot):

http://www.microsoft.com/downloads/details.aspx?familyid=09115420-8c9d-46b9-a9a5-9bffcd237da2&displaylang=en

A few other things to discuss regarding SPA.

  • It requires Server 2003 to run.
  • As I stated above, when you have a problem is the worst time to establish a baseline.
  • The duration of the test can be altered depending on your issue.  The default time set for it is 300 seconds (5 minutes).  Keep in mind that if you gather data a great deal longer than the duration of the problem then you run the risk of averaging out the data and making it not useful for troubleshooting.
  • In the same way that there are ADAM performance counters, SPA has an ADAM data collector.
  • The latest version (above) includes an executable that can kick this off from a command line, and which can be run remotely via PsExec or similar.
  • Server 2008 Perfmon will include SPA-like data collectors…or so I hear.
  • SPA will not necessarily be the only thing you do, but it’s a great starting place to figure out the problem.

See?  A day at the SPA can really take the edge off of a stop-everything, our-company-has-come-to-a-complete-halt emergency kind of day.  Very relaxing indeed.

The Road to Take

A lot of what we end up doing in the information technology field is not really about the technology itself but rather choosing the right figurative road to take with respect to technology. This paradigm can be applied to most software and IT industry roles. This article introduces a new blog which will help you choose the best technology roads to take with Identity and Security technologies.

A lot of what we end up doing in the information technology field is not really about the technology itself but rather choosing the right figurative road to take with respect to technology. Those who are successful in IT know how to bring together the needs of an organization with what is available and figure out which is the right technology to meet those needs.

This paradigm can be applied to most software and IT industry roles.

Product managers are an example of this. Software product managers work with their customers to determine the things the software needs, the engineering teams to see what is possible and practical, and ultimately with the business planning and marketing to ensure that the approach will meet the business needs.

IT architects are another example. They are given requirements from their business leaders and internal customers and work with internal and perhaps external partners to implement the desired solutions which comprise the enterprise architecture.

On a smaller and more iterative scale, consultants and engineers are presented with a problem or problems which will be solved by information technology, and they must choose the right solution and implement it.

It’s all about determining the right path and taking the organization down it.

Of course, determining the right road to take is the hardest thing to do. Deriving what is needed is only the first step. Gaining enough technical expertise and experience so that the right solution can be selected and implemented is the next step. This is why people who are the best at choosing the right technology road to take are often the people who implemented similar solutions before.

I have had extraordinary experiences helping organizations choose the right information technology roads to take: from being a field consultant early in my career, to providing Security and Identity solutions at Microsoft, to developing the Microsoft engineering delivery organizations, to building new Azure AD features. I have been blessed by the opportunity to lead and contribute in so many companies and institutions, in all parts of the globe, over the years.

In this blog I will recount some of those experiences but also provide my insights, repost old blog posts from my AD blog circa 2006-2014 along with updates and commentaries, and provide PowerShell code I wrote which was useful in Security and Identity scenarios with the hope it will help others too. I’ll also discuss technical leadership and my thoughts on technology trends.

We may not be going to the same destination, but hit the road with me. It’ll be an interesting walk.