Andrew Maloney
Andrew is an Air Force veteran, a seasoned executive, and a security expert. With nearly 20 years' experience in roles ranging from hands-on security practitioner to business and thought leadership, he has seen the market evolve and has a keen understanding of the challenges facing the industry.

Recent Posts

Six Cybersecurity Predictions for 2022 (No, We’re Not Going to Talk About Ransomware)

Posted by Andrew Maloney on Dec 6, 2021 8:50:00 AM

If you had to sum up your thoughts on cybersecurity in 2021 in one word, “ransomware” would probably be it. There’s no doubt ransomware dominated headlines this year, and it makes sense that many cybersecurity predictions will focus on this ongoing epidemic.

Read More

Topics: cybersecurity, Digital Transformation, API

Moving Past Universal Data Centralization

Posted by Andrew Maloney on Oct 5, 2021 11:11:07 AM

As we discussed in our last blog on SOC evolution, once upon a time things were much simpler than they are today. Data volumes were initially small and largely network based, which enabled organizations to house all data on-prem, and tuck it behind a cozy perimeter where they could stack protective security technologies. Back then, organizations wanted to centralize all their data into one data store so they could manage logging and compliance alongside the detection and response capabilities they needed.

As technology evolved, new capabilities with different data types and data formats required extensibility beyond the typical network log that contained IP addresses, ports, and the number of bytes transferred. The inclusion of new data sources like threat intelligence, file monitoring, users, and applications blew up data volumes and started to erode the pipe dream of a single, centralized data store.

The “Original Gangster” (OG) - technological limitations

As the saying goes, the proof is in the pudding. Thinking back to my time at ArcSight, the promise of a single pane of glass from which users could drive their security operations was never realized, and it has remained out of reach ever since.

For example, when ArcSight started out with its Enterprise Security Manager (ESM) product, it was using Oracle as the primary relational database for its back end. We learned that Oracle could be configured for large numbers of small transactions, such as writing event logs with volumes in the thousands per second, or alternatively as a data warehouse with a much smaller number of bulk data imports on which to run queries and reports to get answers. Oracle could not handle both well - high volumes of consistent write activity with a significant number of large queries against the dataset for analysis - which is, in essence, what SIEM required.
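
To make that tension concrete, here is a minimal Python sketch of the two workloads a SIEM backend has to serve at once. The schema and values are hypothetical, and sqlite3 merely stands in for the relational databases of that era; the point is that a constant stream of small writes and heavy analytical scans hit the same table, and tuning for one degrades the other.

```python
import sqlite3
import time

# In-memory stand-in for the relational backend (hypothetical schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (ts REAL, src_ip TEXT, dst_port INTEGER, bytes INTEGER)")

def write_event(src_ip: str, dst_port: int, nbytes: int) -> None:
    """Workload 1: sustained high-volume inserts (thousands per second)."""
    db.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
               (time.time(), src_ip, dst_port, nbytes))

def top_talkers(limit: int = 10):
    """Workload 2: large analytical scans over the same, ever-growing table."""
    return db.execute(
        "SELECT src_ip, COUNT(*), SUM(bytes) FROM events "
        "GROUP BY src_ip ORDER BY SUM(bytes) DESC LIMIT ?", (limit,)
    ).fetchall()
```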

This is important because it forced the first of many changes, which led to the decentralization of centralized data way back in 2007.

ArcSight introduced Logger, which had a homegrown database intended to overcome some of the technical challenges associated with the relational databases of the time. The introduction of Logger created a tiered architecture, and encouraged customers to send the bulk of their data to distributed logger tiers and forward a subset of those events to ESM for specific analytics and analysis.
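
As an illustration only (the field names and forwarding rule below are made up, not ArcSight's actual logic), a tiered architecture amounts to routing every event to the bulk log tier while forwarding only a filtered, high-value subset for real-time correlation:

```python
from typing import Callable

def make_router(forward_rule: Callable[[dict], bool],
                log_tier: list, analytics_tier: list) -> Callable[[dict], None]:
    """Build a router: everything lands in the log tier; a subset is forwarded."""
    def route(event: dict) -> None:
        log_tier.append(event)            # the bulk of the data stays here
        if forward_rule(event):           # only a subset reaches the analytics tier
            analytics_tier.append(event)
    return route

# Hypothetical rule: forward only high-severity or authentication events.
route = make_router(
    lambda e: e.get("severity", 0) >= 7 or e.get("category") == "auth",
    log_tier=[], analytics_tier=[],
)
route({"severity": 9, "category": "net", "src_ip": "10.0.0.5"})
```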

In theory this could have worked, but even single vendors with multiple solutions failed to build the integrations customers desired, and therefore never delivered on the promise of a single pane of glass. As a result, analysts were still constantly pivoting from ESM to loggers to other products - looking at multiple interfaces, asking multiple questions, and having to learn and use multiple query languages. None of this was intentional on ArcSight's part, nor is it a criticism of ArcSight. I’m simply trying to communicate that gaining insight into decentralized data has been a genuinely difficult problem for a long time, and it continues to prove challenging to this day.

The reality problem - this sh$% is expensive, oh and that other stuff

You may be thinking to yourself, “That is so old school. Technology has changed, and scale isn’t an issue today.”

I largely agree; however, technology was only the initial problem. Many companies successfully cracked the code, building highly scalable, distributed collection, indexing, search, and analytics capabilities with their own intellectual property or by leveraging big data technologies like the Apache stack, so scale is largely a thing of the past. Still, universal data centralization and a single pane of glass remain out of reach. There are three main reasons for this: cost, types of data, and politics and bureaucracy.

  • Cost - Technologies that centralize data are expensive, and ingestion-based pricing has forced organizations to be selective about what data goes to the central repository and what does not - namely, very verbose, high-volume data sources such as network and endpoint telemetry, both of which are extremely valuable in security investigations.

  • Types of Data - To support security investigations, context is king and, generally speaking, contextual data is point-in-time, meaning you must get it from the source of truth rather than from an archive (see the sketch after this list). A few simple examples: one is threat intelligence, which is queried directly from threat intelligence platforms and ages out almost as fast as it is created.
    Another example is Identity and Access Management (IAM). You can’t figure out from a central repository who a user is, what role they have, what access they currently hold, or whether their account is active, locked, or disabled. You need to go to the IAM system that authenticates the user and grants access in real time.

    A third example is asset management information, which can reside within configuration management databases (CMDBs) or vulnerability management systems and provides insight into the current state of an asset: its operating system, vulnerabilities, what it’s used for, and how critical it is. Asset information has always been difficult to pin down, but in a world of here-today-gone-tomorrow cloud systems, the data needed to answer these questions could live anywhere.

  • Politics and bureaucracy - Disheartening as it is, it's not only government that is laden with politics and bureaucracy. These two challenges exist in almost every business in the world.
    Security suffers when teams struggle to gain access to systems and data owned by other teams or departments, and when the priorities of those functions are misaligned or competing. This often creates a wall that security professionals are unable to work around. I can’t tell you how many times in my career I’ve heard, “I’d like to have that data, but ‘John’ runs that team or system and he won’t play nice.”
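
To illustrate the alternative to archiving this context centrally, here is a hedged Python sketch of fetching point-in-time context from each source of truth at investigation time. All endpoints and fields are hypothetical, not any specific vendor's API; the shape of the idea is what matters.

```python
import json
import urllib.request

# Hypothetical endpoints for each authoritative source of truth.
SOURCES = {
    "threat_intel": "https://tip.example.com/api/indicators/",
    "iam": "https://iam.example.com/api/users/",
    "cmdb": "https://cmdb.example.com/api/assets/",
}

def fetch_context(source: str, key: str) -> dict:
    """Query the system of record at investigation time, so the answer reflects
    current state (indicator age, account status, asset criticality) rather
    than a stale archived copy."""
    with urllib.request.urlopen(SOURCES[source] + key) as resp:
        return json.load(resp)

# Example enrichment of a single alert (would run against real endpoints):
# context = {
#     "intel": fetch_context("threat_intel", "203.0.113.9"),
#     "user":  fetch_context("iam", "jdoe"),
#     "asset": fetch_context("cmdb", "web-prod-01"),
# }
```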

In short, I’m saying that universal centralization has always been a lofty goal. It was unattainable in a much simpler time, but in today's world, it is honestly impossible. Companies should stop trying to fit the square peg into a round hole, leave their decentralized data where it lives, and embrace new approaches to accessing, gaining context, and acting on that data with modern capabilities.

Read More

Topics: Data Centralization, Centralizing Data, Security Investigations, Universal Data Centralization

The Journey to Modern Security Operations

Posted by Andrew Maloney on Sep 23, 2021 1:48:09 PM

Security operations is not a new concept. In fact, it has earned quite a few gray hairs in its roughly three-decade history, which got its start around the mid-1990s with Log and Search. Each maturation of security operations has become more complex than the last, over time incorporating compliance, detection and response, intelligence, and real-time threat hunting, and leaning toward fusion centers, along with a whole host of other continuously developing capabilities.

The progression had been ongoing, but somewhat measured and predictable. Its evolution had been closely aligned with new technology innovations and new methods of adopting those innovations to deliver business outcomes.

Then COVID-19 suddenly hit, and we saw a mass acceleration of what many called the “digital transformation.” Memes by the dozens found their way into our social feeds, talking about how it wasn’t the CEO, the CIO, or even business strategy and foresight that led this transformation. It was COVID.

Businesses went into pandemonium and adversaries took advantage, using the chaos to advance their nefarious agendas. As the workforce shifted from offices to remote work literally overnight, attack surfaces were not just increased but expanded to the point where they were hard to discern, and with that expansion came a corresponding increase in business risk.

For several reasons, all predominantly related to the power of human resilience in some way, shape, or form, we adapted to the new normal. Companies sped up their plans to move to the cloud. They started exploring the concepts of a perimeter-free world and zero trust models, making years’ worth of digital transformation progress in a matter of months. In fact, according to the CyberRes 2021 State of Security Operations report, 85% of organizations increased their adoption of cloud-based security solutions in the past year, and 99% of organizations now have at least some part of their security operations solutions deployed in the cloud.

Yet somehow, amid all this modernization and embrace of new technologies and capabilities, the methods upon which the foundation of security operations is built have been completely overlooked, and the status quo has prevailed.

It is time for companies to rethink how they bring efficient security operations into the post-pandemic world. Most security operations centers are still living in metaphorical houses built on traditional on-premises foundations. From SOC floor layouts, to governing processes, to daily standups and basic communication flows, organizations are spending too much time trying to figure out how to extend legacy methodologies into the cloud, resulting in a Frankenstein approach - neck bolts, stitches, and all - largely based on the concept of universal data centralization. Perhaps organizations should instead be thinking about new ways to realize the potential of their full cybersecurity ecosystems, embracing the data silos that extend across multiple environments.

Read More

Topics: cybersecurity, Security Operations, Digital Transformation

Same Cybersecurity Obstacles, Different Day

Posted by Andrew Maloney on Sep 2, 2021 10:10:29 AM

Dark Reading published an interesting story earlier this week entitled Ten Obstacles that Prevent Security Pros from Doing their Jobs. None of the obstacles is particularly surprising - they're mostly the same ones we’ve been dealing with for years, such as lack of budget. What is striking about the list is that six of the 10 obstacles are directly related to security investigations. And even for “lack of budget,” threat hunting is cited as a prime area where investment is lacking.

Read More

Topics: cybersecurity, Security Investigations, Cybersecurity Obstacles, Visibility, CISO

Cybersecurity Investigations and M&A: How to Accelerate Integration

Posted by Andrew Maloney on Aug 24, 2021 9:29:07 AM

In a recent conversation, a friend was pondering if she’d been impacted by the recent T-Mobile breach. “I know my personally identifiable information has been included in several big breaches in the past, and I’m sure it’s been sold a million times over. I’ve never been a T-Mobile customer, yet T-Mobile acquired Sprint, and I was a Sprint customer for years. Do you think my data has been compromised as a result?”

Read More

Topics: cybersecurity, Mergers and Acquisitions, M&A

XDR: What Does Extended Detection and Response Really Mean?

Posted by Andrew Maloney on Jul 26, 2021 10:33:53 AM

If you do a search for “extended detection and response,” you will find several different definitions. In general, definitions of Extended Detection and Response (XDR) focus on either a single vendor covering all the different areas of security or an open model that incorporates multiple vendors. By looking at analyst definitions and finding the commonalities, you can get a better sense of what XDR really means.

Read More

Topics: XDR, Hybrid XDR, Open XDR

Will XDR Help the Future of Modern SOC?

Posted by Andrew Maloney on Jul 8, 2021 12:15:00 AM

We’re all seeing the market buzz

Extended Detection and Response (XDR) is getting a lot of attention these days. Given that two leading endpoint detection and response (EDR) vendors, SentinelOne and CrowdStrike, recently announced acquisitions of Scalyr and Humio, respectively, it seems more vendors are pivoting to enter the XDR market every day.

Read More

Topics: SOC, NDR, XDR, EDR, SIEM, NTA, UEBA, Hybrid XDR, Open XDR

Top Challenges with Data Centralizing for Threat Investigations

Posted by Andrew Maloney on Apr 22, 2021 11:35:21 PM

Threat investigations are one of the most important tasks security analysts face today. To quantify their importance and complexity, here are a couple of statistics from IBM’s “Cost of a Data Breach Report 2020.” According to the report, the average time to detect and contain a data breach caused by a malicious actor was 315 days. That's a long time. Additionally, we’ve all heard the saying that “time is money,” so how about this: “Organizations that are able to contain a data breach in less than 200 days saved an average of $1.12 million compared to organizations that took more than 200 days to contain a breach.” That is pretty compelling.

Read More

Topics: cybersecurity, incident response, Data Centralization, Centralizing Data