Series A funding validates demand and will scale our unique ability to deliver faster, more efficient security operations

Posted by Dhiraj Sharan on Oct 21, 2021 6:30:00 AM



Earlier this week, we were excited to announce our oversubscribed $15 million Series A round of financing, led by new investor SYN Ventures with participation from existing investors ClearSky Security and South Dakota Equity Partners. The funding further validates the market demand for our one-of-a-kind solution that gives companies full control of security investigations within a single, unified interface.

Query.AI was founded to solve a major problem for enterprises today – ransomware, breaches, and other cyberattacks continue to increase at record rates while the amount of enterprise data explodes and becomes ever more decentralized, distributed across cloud, third-party SaaS, and on-prem environments. Businesses of all sizes collect data from a wide range of sources, including AWS, Google Cloud Platform, Azure, Microsoft 365, multiple SaaS applications (typically 50-100), plus a ticketing system. This is compounded by scale challenges with gigantic-volume data sources like DNS, Flow, Proxy, CloudTrail, and Endpoint data.

In addition, multinational corporations silo data to meet regulatory compliance requirements across different countries and regions. While centralization may still have its place for compliance and retention, the onslaught of decentralized data has rendered universal data centralization models impractical for security investigations. As data volumes increase exponentially, so do the alerts that security teams need to review and investigate across an inordinate number of tools.

Among the range of tools, organizations are using SOAR and XDR in an attempt to solve the problems of data decentralization, but they’re not the answer. Enterprises have found that SOAR requires time-consuming software engineering efforts to build playbooks and manage API integrations. In fact, according to a study by the Ponemon Institute, the average organization spends $2.7 million per year on engineering work to integrate disparate security data, and yet only 23 percent consider their security engineering efforts very valuable. And, while XDR definitions are all over the map, XDR still relies on a single platform provider to do all the collecting, aggregating, correlating, and analyzing. To try to adjust, SOC analysts are also increasingly relying on their endpoint protection or a focused threat detection product, but neither option gives them the full picture to truly assess what is happening in their environment.

In the end, SOC analysts spend their days doing swivel-chair analytics, pivoting between siloed tools to manually correlate the data to determine what they should investigate before they can actually respond. It’s an exhaustive, time-consuming, and burnout-inducing way to work for security teams that are already stretched thin.

The Query.AI security investigation platform solves this problem by serving as the connective tissue that provides real-time insight for security data across platforms no matter where it resides – the cloud, third party SaaS, or on-prem systems. Our API-enabled platform does not require the transfer or duplication of data. It simultaneously normalizes, aggregates, enriches, visualizes, and analyzes alert data that lives across cybersecurity systems with a single, unified browser interface. And, it makes security operations teams more productive much faster by giving them the flexibility to ask questions via text, natural language, or Unified Query Language, and helping them quickly understand data relationships so they can initiate response actions.
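The idea of querying data where it lives, rather than copying it into a central store, can be sketched in a few lines. The per-source search functions and their result shape below are hypothetical placeholders, not the actual Query.AI API; in practice each would call that platform's own API (CloudTrail, an EDR, a SaaS audit log, and so on).

```python
import concurrent.futures

# Hypothetical per-source search functions standing in for real platform APIs.
def search_cloud_logs(indicator):
    return [{"source": "cloud", "indicator": indicator, "hits": 2}]

def search_endpoint(indicator):
    return [{"source": "endpoint", "indicator": indicator, "hits": 0}]

def search_saas_audit(indicator):
    return [{"source": "saas", "indicator": indicator, "hits": 1}]

def federated_search(indicator):
    """Fan one question out to every source in parallel and merge the
    normalized results -- the underlying data is never copied centrally."""
    sources = (search_cloud_logs, search_endpoint, search_saas_audit)
    results = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(fn, indicator) for fn in sources]
        for future in concurrent.futures.as_completed(futures):
            results.extend(future.result())
    return results
```

The design point is that the analyst asks one question in one place; the fan-out, normalization, and merge happen behind the single interface.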

The pain point is real, and the market is responding to our solution. We’re already generating revenue from several enterprise-level organizations, many of which are MSSPs for parent companies with numerous affiliates. We have an extremely healthy pipeline, and the new funding will go toward scaling customer support, the continued expansion of the Query.AI security investigations platform, as well as its expanding library of integrations with additional technology providers across cloud, third-party SaaS, and on-prem environments.

The entire Query.AI team is excited for the opportunity to continue our work to help enterprises accelerate cybersecurity investigations and efficiently respond to and mitigate threats.

Want to learn more about our innovative security investigations platform? Book a demo, today!

Read More

Topics: Data Centralization, Centralizing Data, SOC, XDR, Security Investigations, Decentralized Data, SOAR

Moving Past Universal Data Centralization

Posted by Andrew Maloney on Oct 5, 2021 11:11:07 AM

As we discussed in our last blog on SOC evolution, once upon a time things were much simpler than they are today. Data volumes were initially small and largely network based, which enabled organizations to house all data on-prem, and tuck it behind a cozy perimeter where they could stack protective security technologies. Back then, organizations wanted to centralize all their data into one data store so they could manage logging and compliance alongside the detection and response capabilities they needed.

As technology evolved, new capabilities with different data types and data formats required extensibility beyond the typical network log that contained IP addresses, ports, and the number of bytes transferred. The inclusion of new data sources like threat intelligence, file monitoring, users, and applications blew up data volumes and started to erode the pipe dream of a single, centralized data store.

The “Original Gangster” (OG) - technological limitations

The proof is in the pudding, as some have said. Thinking back to my time at ArcSight, the promise of a single pane of glass for users to drive their security operations was never realized, and it has long remained out of reach.

For example, when ArcSight started out with its Enterprise Security Manager (ESM) product, it was using Oracle as the primary relational database for its back end. We learned that Oracle could be configured for large numbers of small transactions, such as writing event logs with volumes in the thousands per second, or alternatively as a data warehouse with a much smaller number of bulk data imports on which to run queries and reports to get answers. Oracle could not handle both well - high volumes of consistent write activity with a significant number of large queries against the dataset for analysis - which is, in essence, what SIEM required.

This is important because it forced the first of many changes, which led to the decentralization of centralized data way back in 2007.

ArcSight introduced Logger, which had a homegrown database intended to overcome some of the technical challenges associated with the relational databases of the time. The introduction of Logger created a tiered architecture, and encouraged customers to send the bulk of their data to distributed logger tiers and forward a subset of those events to ESM for specific analytics and analysis.
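That tiered routing decision can be sketched simply: everything lands in the bulk logger tier, and only a filtered subset is forwarded upstream. The severity threshold and event shape here are illustrative assumptions, not ArcSight's actual forwarding logic.

```python
# Hypothetical threshold: only events at or above this severity
# are forwarded to ESM for correlation and analysis.
ESM_FORWARD_MIN_SEVERITY = 7

def route_event(event):
    """Send every event to the bulk logger tier; additionally forward
    high-severity events upstream to ESM. Returns the destination list."""
    destinations = ["logger"]
    if event.get("severity", 0) >= ESM_FORWARD_MIN_SEVERITY:
        destinations.append("esm")
    return destinations
```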

In theory this could have worked, but even single vendors with multiple solutions failed to create the integrations that customers desired, and therefore never delivered on the promise of a single pane of glass. As a result, analysts were still consistently pivoting from ESM, to loggers, to other products. They were looking at multiple interfaces, asking multiple questions, and having to learn and use multiple query languages. This was obviously not intentional on the part of ArcSight, and is certainly not a criticism of ArcSight. I’m simply trying to communicate that gaining insights into decentralized data has been a really difficult problem for a long time, and it continues to prove challenging to this day.

The reality problem - this sh$% is expensive, oh and that other stuff

You may be thinking to yourself, “That is so old school. Technology has changed, and scale isn’t an issue today.”

I largely agree; however, technology was only the initial problem. Many companies successfully cracked the code and built highly scalable, distributed collection, indexing, search, and analytics capabilities, either with their own intellectual property or by leveraging big data technologies like the Apache stack, so scale is largely a thing of the past. Even so, universal data centralization and a single pane of glass remained out of reach. There are three main reasons for this: cost, types of data, and politics and bureaucracy.

  • Cost - Technologies that centralize data are expensive, and the alignment to ingestion-based pricing has forced organizations to be selective and decide what data should go to their central repository and what should not, namely very verbose high-volume data sources such as network and endpoint which are both extremely valuable in support of security investigations.

  • Types of Data - To support security investigations, context is king and, generally speaking, contextual data is point-in-time, meaning you must get it from the source of truth and not from an archive. There are a few simple examples of this. One is threat intelligence, which is queried directly from the threat intelligence platforms and ages out almost as fast as it is created.
    Another example is Identity and Access Management (IAM). You can’t figure out from a central repository who a user is, what role they have, their current access, or whether their account is active, locked, or disabled. You need to go to the IAM system that authenticates the user and grants access in real time.

    A third example is asset management information that can reside within configuration management databases (CMDB) or vulnerability management systems, which provide insight into the current state of an asset, the operating system, vulnerabilities, what it’s used for, and how critical it is. Asset information has always been difficult to lock down, but in the world of here-today, gone-tomorrow cloud systems, the data necessary to answer these questions could live anywhere.

  • Politics and bureaucracy - Disheartening as it is to admit, it's not only our government that is laden with politics and bureaucracy. These two challenges exist in almost every business in the world.
    Security is negatively impacted when teams struggle to gain access to systems and data that are owned by different teams or departments, and when the priorities of those functions are misaligned or competing. This often creates a tension and a wall that security professionals are unable to work around. I can’t tell you how many times I’ve heard in my career, “I’d like to have that data, but John runs that team or system, and he won’t play nice.”
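The "types of data" point is worth making concrete: an archived copy cannot tell you an account's current state, so the lookup has to go to the live IAM system. The in-memory directory below is a hypothetical stand-in for that system; a real investigation would call the IAM provider's API directly.

```python
import datetime

# Hypothetical stand-in for a live IAM directory (the source of truth).
IAM_DIRECTORY = {
    "jdoe": {"role": "admin", "status": "locked"},
}

def user_context(username):
    """Fetch point-in-time identity context from the source of truth.
    A centralized archive could not answer whether the account is
    currently active, locked, or disabled."""
    record = IAM_DIRECTORY.get(username)
    if record is None:
        return {"user": username, "status": "unknown"}
    return {
        "user": username,
        "as_of": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        **record,
    }
```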

In short, I’m saying that universal centralization has always been a lofty goal. It was unattainable in a much simpler time, but in today's world, it is honestly impossible. Companies should stop trying to fit the square peg into a round hole, leave their decentralized data where it lives, and embrace new approaches to accessing, gaining context, and acting on that data with modern capabilities.

Read More

Topics: Data Centralization, Centralizing Data, Security Investigations, Universal Data Centralization

The Journey to Modern Security Operations

Posted by Andrew Maloney on Sep 23, 2021 1:48:09 PM

Security operations is not a new concept. In fact, it’s earned quite a few gray hairs in its roughly three-decade history, which got its start around the mid-1990s with Log and Search. Each maturation of security operations has become more complex than the last, over time incorporating compliance, detection and response, intelligence, real-time threat hunting, and leaning towards fusion centers, as well as a whole host of other continuously developing capabilities.

The progression had been ongoing, but somewhat measured and predictable. Its evolution had been closely aligned with new technology innovations and new methods of adopting those innovations to deliver business outcomes.

Then COVID-19 suddenly hit, and we saw a mass acceleration of what many called the “digital transformation.” Memes by the dozens found their way into our social feeds, talking about how it wasn’t the CEO, the CIO, or even business strategy and foresight that led this transformation. It was COVID.

Businesses went into pandemonium and the adversaries took advantage, using the chaos to advance their nefarious agendas. With the workforce shifting from offices to remote work literally overnight, attack surfaces were not just increased but expanded to a point where they were hard to discern, and with that expanded attack surface came a corresponding increase in business risk.

For several reasons, all predominantly related to the power of human resilience in some way, shape, or form, we adapted to the new normal. Companies sped up their plans to move to the cloud. They started exploring the concepts of a perimeter-free world and zero trust models, making years’ worth of digital transformation progress in a matter of months. In fact, according to the CyberRes 2021 State of Security Operations report, 85% of organizations increased their adoption of cloud-based security solutions in the past year, and 99% of organizations now have at least some part of their security operations solutions deployed in the cloud.

Yet somehow, in all this modernization and embracing of new technologies and capabilities, the methods upon which the foundation of security operations are built have been completely overlooked, and the status quo has prevailed.

It is time for companies to rethink how they bring efficient security operations into the post-pandemic world. Most security operations centers are still living in metaphorical houses built on traditional on-premises foundations. From SOC floor layouts, to governing processes, to daily standups and basic communication flows, organizations are spending too much time trying to figure out how to extend legacy methodologies into the cloud, resulting in a Frankenstein approach with neck bolts and stitches largely based on the concept of universal data centralization. Perhaps organizations should be thinking about new ways to realize the potential of their full cybersecurity ecosystems, embracing the data silos that extend across multiple environments.

Read More

Topics: cybersecurity, Security Operations, Digital Transformation

Same Cybersecurity Obstacles, Different Day

Posted by Andrew Maloney on Sep 2, 2021 10:10:29 AM

Dark Reading published an interesting story earlier this week entitled Ten Obstacles that Prevent Security Pros from Doing their Jobs. None of the obstacles is particularly surprising – mostly the same ones we’ve been dealing with for years, such as lack of budget, etc. What is striking about the list is that six of the 10 obstacles are directly related to security investigations. And, even for “lack of budget,” threat hunting is cited as a prime area where investment is lacking. 

Read More

Topics: cybersecurity, Security Investigations, Cybersecurity Obstacles, Visibility, CISO

Cybersecurity Investigations and M&A: How to Accelerate Integration

Posted by Andrew Maloney on Aug 24, 2021 9:29:07 AM

In a recent conversation, a friend was pondering if she’d been impacted by the recent T-Mobile breach. “I know my personally identifiable information has been included in several big breaches in the past, and I’m sure it’s been sold a million times over. I’ve never been a T-Mobile customer, yet T-Mobile acquired Sprint, and I was a Sprint customer for years. Do you think my data has been compromised as a result?”

Read More

Topics: cybersecurity, Mergers and Acquisitions, M&A

Making the 1-10-60 Rule a Reality

Posted by Query.AI on Aug 4, 2021 12:30:00 AM

In today’s digitally-transformed world, developers can spin workloads up and down in a matter of minutes. Despite the fleeting nature of these resources, threat actors can still exploit misconfigurations in them as part of an attack. With time of the essence, the security operations center (SOC) needs to respond to new alerts quickly. Yet, the volume becomes overwhelming.
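The 1-10-60 rule in the title is a response-time benchmark: detect an intrusion within 1 minute, investigate it within 10, and contain or remediate it within 60. A minimal sketch of checking an incident's timings against those thresholds (the phase names and input shape are illustrative assumptions):

```python
# The 1-10-60 benchmark, expressed as per-phase limits in minutes.
THRESHOLDS_MIN = {"detect": 1, "investigate": 10, "remediate": 60}

def meets_1_10_60(times_min):
    """Return, per phase, whether an incident met the 1-10-60 benchmark.
    `times_min` maps each phase name to its elapsed minutes; a missing
    phase is treated as never completed."""
    return {
        phase: times_min.get(phase, float("inf")) <= limit
        for phase, limit in THRESHOLDS_MIN.items()
    }
```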

Read More

Topics: threat hunting, 1-10-60 Rule, Investigate

XDR: What Does Extended Detection and Response Really Mean?

Posted by Andrew Maloney on Jul 26, 2021 10:33:53 AM

If you do a search for “extended detection and response,” you will find several different definitions. In general, Extended Detection and Response (XDR) focuses on either a single vendor being utilized to cover all the different areas of security or an open model that incorporates multiple vendors. However, by looking at analyst definitions and finding the commonalities, you can get a better sense of what XDR really means. 

Read More

Topics: XDR, Hybrid XDR, Open XDR

A New Paradigm to Meet the Executive Order Incident Response Mandate

Posted by Query.AI on Jul 18, 2021 11:25:04 PM

The Executive Order on Improving the Nation’s Cybersecurity (Executive Order) sets out an ambitious plan for enhancing federal agency and supply chain security. Covering everything from cloud-first initiatives to zero trust architecture, the Executive Order will likely have a wider reach than just Federal Civilian Executive Branch (FCEB) agencies. For security operations center (SOC) teams, Section 6, “Standardizing the Federal Government’s Playbook for Responding to Cybersecurity Vulnerabilities and Incidents,” has the most significant impact on their day-to-day activities.

Read More

Topics: cybersecurity, SOC, NIST, data, National Institute of Standards and Technologies

Will XDR Help the Future of Modern SOC?

Posted by Andrew Maloney on Jul 8, 2021 12:15:00 AM

We’re all seeing the market buzz

Extended Detection and Response (XDR) is getting a lot of attention these days. Given that two leading endpoint detection and response (EDR) vendors, SentinelOne and CrowdStrike, recently announced acquisitions of Scalyr and Humio, respectively, it seems more vendors are pivoting to enter the XDR market.

Read More

Topics: SOC, NDR, XDR, EDR, SIEM, NTA, UEBA, Hybrid XDR, Open XDR