Interview with Gravwell CEO, Corey Thuen


Gula Tech Adventures recently became an investor in a very exciting data storage and analytics company named Gravwell. I was very impressed with the ability of the solution to consume and analyze just about any type of data at amazing scales and query speeds. I conducted an interview with their CEO, Corey Thuen, and published it below.

What led you to found Gravwell?

Gravwell was founded due to an explicit need for large-scale, fully unstructured data ingest. Gravwell’s founders came out of the security field, working on emergent behavior analysis in large-scale emulytics and performing offensive security analysis on embedded industrial control systems. The existing offerings fell short in three core areas: speed, cost, and flexibility. We set out to build a platform that enabled very high speed ingest and search, extremely flexible data handling, and didn’t twist the knife when a customer wanted to actually engage the analysis platform.

The Gravwell founders can point to two very specific moments when we knew Gravwell had to be built. The first was when we were attempting to analyze the emergent behavior of a large network (on the order of 50k peer points) as the routes converged; after spending significant effort to build a custom module capable of translating the network traffic and flows into text format (existing unstructured data platforms require text data), we found that the act of converting to text was becoming the dominating factor in our ingest rate. The second was during an industrial control system audit where we were struggling to demonstrate to a customer what a flaw in their control system actually meant to the process. We built a small custom system to simply graph the state of the process as seen by the network and by the HMI; visualizing the two different process states made it easy for the customer to understand that the HMI was lying to them, and drove home the point that a platform capable of unstructured analytics on binary data was sorely needed.

What sort of use cases does Gravwell allow users to accomplish?

Gravwell strives to be an “ingest first and ask questions later” platform for users who may not have a strong handle on exactly what they are ingesting or what questions may arise in the future. Gravwell is a truly unstructured ingest and search platform, right down to the bytes. Unstructured storage and query allow system administrators and DevOps engineers to move quickly without spending time normalizing data. Hunt operators and incident responders don’t have to worry about what can and cannot be ingested, nor make the difficult decision to throw data away. Whether ingesting text logs, network packet captures, or industrial control system (ICS) sensor data, Gravwell allows users to ingest and query ground-truth data in its native format.

Gravwell’s unique core competencies enable a wide variety of use cases, but we’re focusing on some initial offerings that stem from the founders’ backgrounds. In the security space, Gravwell has been used as a hunting platform and as a way to augment existing security personnel and help overcome the cybersecurity skills shortage. The platform helps sort through “alert overload”; customers of Gravwell can identify meaningful alerts and hunt all the way down to the ground-truth root-cause data. One practical example is our pre-built Gravwell dashboards for Security Onion that analyze Bro, Suricata, and Snort output, along with some custom packet capture analytics.

In the ICS space we recognize that the process is the crown jewel and that your security operations center (SOC) should be process-aware. For all customers, we offer Gravwell integration services directly to ensure customer success. In ICS, this results in a solution built for each unique process that often starts 100% passive and moves to active once value and safety are demonstrated. Properly integrated, Gravwell can combine elements of a Historian, HMI, and SIEM to provide holistic “ground truth” insights. When hunting a potential breach, it’s imperative to know whether attackers controlled the process. For those interested in unifying the IT and OT SOCs, Gravwell is the only option that can handle all of the disparate data types. The cyber kill chain isn’t relegated to one area: phishing attacks against OT personnel have potential impact on the process network, and Gravwell enables organizations to have complete and unerring visibility.

How is this different than the traditional Splunk or Elastic user experience?

Gravwell exists at the intersection of Splunk and Elastic: we extract features at search time via a pipeline architecture, in a manner similar to Splunk, but allow for unlimited data ingest and search, similar to Elastic. Where Gravwell truly separates itself from both is in the speed and flexibility offered by our highly concurrent storage and search system. Gravwell ingests raw binary data using an open ingest API, then queries it using a highly concurrent pipeline that scales with modern hardware. The pipeline architecture dramatically reduces the cognitive overhead and time required compared to a MapReduce architecture, yet our concurrency allows us to take advantage of modern hardware that is scaling in core counts, not CPU frequency.
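To make the “ship raw bytes now, ask questions later” idea concrete, here is a minimal sketch in Go of what pushing schema-free entries to a collector can look like. This is not Gravwell’s actual ingest library or wire protocol; the framing, port, hostname, and tag names below are assumptions invented purely for illustration.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"net"
	"time"
)

// Entry is a hypothetical raw record: a timestamp, a tag, and an opaque byte
// payload. It is NOT Gravwell's real ingest format; it only illustrates the
// idea that the payload stays raw and any parsing happens later, at search time.
type Entry struct {
	TS   time.Time
	Tag  string
	Data []byte // raw bytes: a log line, a packet, an ICS register dump, etc.
}

// sendEntry frames an entry as [tag len][tag][unix nanos][payload len][payload]
// and writes it to the collector. The framing is an assumption for illustration.
func sendEntry(conn net.Conn, e Entry) error {
	var buf bytes.Buffer
	binary.Write(&buf, binary.BigEndian, uint16(len(e.Tag)))
	buf.WriteString(e.Tag)
	binary.Write(&buf, binary.BigEndian, e.TS.UnixNano())
	binary.Write(&buf, binary.BigEndian, uint32(len(e.Data)))
	buf.Write(e.Data)
	_, err := conn.Write(buf.Bytes())
	return err
}

func main() {
	// Hypothetical collector address; any TCP listener that speaks the framing above.
	conn, err := net.Dial("tcp", "indexer.example.internal:4023")
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// No schema and no normalization before ingest: a text syslog line and raw
	// packet bytes travel through exactly the same path.
	entries := []Entry{
		{TS: time.Now(), Tag: "syslog", Data: []byte("<34>1 sshd: Failed password for root")},
		{TS: time.Now(), Tag: "pcap", Data: []byte{0x00, 0x1b, 0x21, 0x3c}},
	}
	for _, e := range entries {
		if err := sendEntry(conn, e); err != nil {
			fmt.Println("send failed:", err)
			return
		}
	}
}
```

The point of the sketch is that nothing about the data has to be decided up front; feature extraction is deferred to the query pipeline.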

One illustrative operation that highlights the differences between Gravwell and other solutions is a query in which we extracted tunneled layer 2 packets from text HTTP proxy logs purely through analytics. Gravwell ingested the raw proxy logs without any pre-existing knowledge of the tunnel API. At query time the layer 2 packets were extracted, decoded, and passed to a network packet processor. The chain of modules allowed us to extract command and control traffic passing through the tunnel and identify additional traffic destined for internal machines. No prior knowledge was necessary, and Gravwell was able to aggressively scale across many CPU cores.
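Outside of the Gravwell pipeline itself, the flavor of that extraction can be sketched with the open-source gopacket library: pull an encoded blob out of a text proxy log line and hand the raw bytes to a packet decoder. The log format, field name, and packet bytes below are fabricated for illustration and are not drawn from the actual engagement.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
)

// A made-up proxy log line whose last field carries a hex-encoded layer 2
// frame smuggled through an HTTP tunnel (the format is illustrative only).
const logLine = "10.0.0.5 - POST /sync 200 " +
	"tunnel=00112233445566778899aabb0800" + // Ethernet header (dst MAC, src MAC, IPv4 ethertype)
	"4500002800004000400600000a000005c0a8010a" + // IPv4 header: 10.0.0.5 -> 192.168.1.10
	"c00001bb00000000000000005002ffff00000000" // TCP SYN to port 443

func main() {
	// Pull the hex blob out of the text log line.
	fields := strings.Fields(logLine)
	blob := strings.TrimPrefix(fields[len(fields)-1], "tunnel=")

	raw, err := hex.DecodeString(blob)
	if err != nil {
		fmt.Println("not a hex payload:", err)
		return
	}

	// Decode the raw bytes as an Ethernet frame and walk the resulting layers.
	pkt := gopacket.NewPacket(raw, layers.LayerTypeEthernet, gopacket.Default)
	for _, layer := range pkt.Layers() {
		fmt.Println("layer:", layer.LayerType())
	}
	if ip, ok := pkt.Layer(layers.LayerTypeIPv4).(*layers.IPv4); ok {
		fmt.Printf("tunneled traffic: %s -> %s\n", ip.SrcIP, ip.DstIP)
	}
}
```

In a search pipeline, that same decode-and-inspect step simply becomes one module in the chain, applied at query time to whatever the text extraction stage hands it.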

What sort of scale can the Gravwell technology deliver for users who can deploy a VM or enterprises who can deploy dozens of servers?

Gravwell is a highly concurrent and distributed platform that was built from the ground up to take advantage of modern hardware. Our founders engineered the platform to scale well at the CPU and storage layers. The highly concurrent storage system scales across all types of storage, whether it be hundreds of terabytes of magnetic storage or high-speed non-volatile storage. Gravwell can multiplex across an array of high-speed flash or XPoint NVMe drives and age old data out to compressed long-term storage. The concurrent nature of our query language means that storage is almost always the bottleneck, and calculating throughput is as simple as calculating the throughput of your storage system.

Gravwell has been benchmarked on various tiers of AWS instances, and we have seen ingest rates on a single m3.xlarge instance in excess of 1 million entries per second. On a small two-node cluster composed of magnetic storage and older-generation E5 processors, we can ingest at over 200 MB/s and query in excess of 10 GB/s when the data is well cached. For installations with high-speed hot storage and a large pool of cold storage, we expect customers to be able to handle in excess of 300 GB/day and 10+ concurrent users on a single well-equipped Intel E5 or AMD EPYC node. Larger enterprises with the resources to deploy multiple nodes can expect to handle multiple terabytes of data per day.
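As a back-of-envelope illustration of the “storage is the bottleneck” sizing above, here is a short sketch that reuses the quoted 200 MB/s, 10 GB/s, and 300 GB/day figures; it is arithmetic only, not an additional benchmark.

```go
package main

import "fmt"

func main() {
	// Figures quoted above: ~200 MB/s sustained ingest on magnetic storage,
	// ~10 GB/s query rate when data is well cached, ~300 GB/day sizing target.
	const diskMBps = 200.0
	const cachedGBps = 10.0
	const dailyGB = 300.0

	secondsPerDay := 24.0 * 60 * 60

	// Theoretical ceiling if the storage tier ran flat out all day.
	ceilingGB := diskMBps * secondsPerDay / 1000
	fmt.Printf("storage-bound ceiling: %.0f GB/day\n", ceilingGB)

	// Time to brute-force scan one day of data straight off disk vs. from cache.
	fmt.Printf("full scan of %.0f GB from disk:  ~%.0f minutes\n", dailyGB, dailyGB*1000/diskMBps/60)
	fmt.Printf("full scan of %.0f GB from cache: ~%.0f seconds\n", dailyGB, dailyGB/cachedGBps)
}
```

In other words, a single node with 200 MB/s of sustained storage throughput has a raw ceiling of roughly 17 TB/day, so the practical 300 GB/day sizing leaves ample headroom for concurrent queries and multiple users.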

Where can readers go to learn more and what sort of evaluations or demos do you offer?

The main source of information is, of course, our website at https://www.gravwell.io. I highly recommend checking out our blog (https://www.gravwell.io/blog), where we post interesting data analytics stories, walkthroughs, feature highlights, and product updates. You can also interact with us on Twitter (@gravwell_io), Facebook, or LinkedIn.

As for evaluations, we just opened up our “drafthouse” cluster to give potential customers an opportunity to try Gravwell with our data. This is the same cluster we used to conduct our FCC comment research, and it gives participants access to Reddit comments, SSH and other syslog data, pcap network traffic, the Shodan data feed, and more. You can sign up for trial access to this data here.

