As one of 2020’s most anticipated public listings, Snowflake Inc. started trading on September 16 and snatched the title of the largest software IPO ever recorded. With a market valuation of more than $70 billion, the data warehousing company makes previous record-holders look cheap by comparison.
The company’s performance on the New York Stock Exchange has raised skepticism about stock market behavior. While some are still wondering whether the market has lost its mind, investors are drawn to Snowflake’s technology and its promise for the future of data in the cloud.
Snowflake’s valuation comes as a wake-up call for the data warehousing market, where it competes directly against big names: legacy database software companies (such as Oracle and IBM) and cloud service providers (Google, Amazon, and Microsoft). With its fresh perspective on handling data in the cloud, Snowflake’s slice of the pie is steadily increasing.
But why is this warehousing company attracting customers and investors and what is it about its technology that is so promising for the future?
Snowflake – what is it?
Simply put, Snowflake is database software for the cloud. The company was founded to alleviate a central pain point of organizations seeking to make better sense of their data: defining and implementing a data architecture for the cloud. With more and more companies moving away from on-premise solutions, compute infrastructures built in the past century need to find a new home in the cloud. Amidst this massive transition, Snowflake’s goal is to smooth the process and help companies build better data architectures in the cloud.
Snowflake calls itself a cloud data platform that can be plugged into any of the existing public cloud service providers. In the traditional IaaS/PaaS/SaaS layering of cloud offerings, Snowflake bridges the gap between infrastructure and applications by providing out-of-the-box architectures optimized for data workloads.
Its main offering is a completely managed data warehouse that can be deployed onto any public cloud. But the real value lies under the hood: Snowflake’s service runs on an architecture that is optimized for the cloud and maximizes its capabilities.
Snowflake tackles limitations of legacy cloud data systems
Snowflake is built to tackle the inherent limitations of on-premise solutions and legacy cloud data systems. Whether an organization already has a cloud environment in place or still maintains in-house infrastructure, Snowflake helps it tackle two common challenges.
1. Data Latency
One limitation of legacy data architectures is data latency: the time data takes to travel from ingestion to analysis and insight. Modern workloads require a variety of data sources to be integrated and need support for different data formats. The classic pitfall is an infrastructure cloud that resembles a legacy data center, with isolated data warehouses each responsible for a different portion of the data. Similarly, traditional cloud architectures load data only periodically, in batches running daily, weekly, or monthly. But modern analytics needs require much faster processing and workloads that can serve data engineers in real time.
To match businesses’ hunger for analytics and data processing, optimized cloud architectures are needed. That way, data engineers, data analysts, and business users alike get near-instant access to data.
2. Cloud architectures are not inherently optimized for data
Architectures built for the cloud require expertise in integrating the right cloud services. While most cloud service providers offer data warehouses and data lakes, these are not always used to their full potential.
For organizations that have already moved to the cloud, simply being in the cloud is no guarantee of appropriate data workloads. Making the most of a cloud-provided architecture means configuring a variety of tools and services to work well with one another: data storage solutions, data integration tools, transformation pipelines, and ingestion procedures.
For organizations that are still working on-premise, moving to the cloud solves only half of their problem. Traditional storage and processing capabilities have become outdated given the ever-increasing amounts and sources of data. With more data and data sources available, new business use cases and analytics needs have arisen. Keeping up with these needs is the main requirement of cloud-based data processing frameworks.
Regardless of where they are built, legacy cloud architectures have one thing in common: they are not inherently optimized for efficient data workloads, they require heavy system-management effort, and they are fragmented. In a fragmented environment, the cloud promise of “pay for what you use” is hard to achieve, which has a real economic impact on cloud service users.
This is where Snowflake jumps in with its cloud data platform, providing architectures that fully exploit the public clouds’ scale and compute capabilities.
Snowflake’s features
From an architectural standpoint, the magic behind Snowflake is quite simple: its cloud data platform separates computational workloads from storage. At the core of the architecture is the database storage service, around which different service layers are built. This storage layer holds both structured and semi-structured data in a single source of truth. Virtual warehouses are then built on top of it, making it possible to run multiple workloads in parallel against the same underlying data.
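As a conceptual sketch only (not Snowflake’s actual implementation), the separation of storage and compute can be pictured as independent “virtual warehouse” workers running queries in parallel against one shared storage layer; the warehouse names and data below are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical shared storage layer: a single source of truth
# that is never copied per workload.
shared_storage = {
    "orders": [
        {"id": 1, "amount": 120.0, "region": "EU"},
        {"id": 2, "amount": 80.0, "region": "US"},
        {"id": 3, "amount": 200.0, "region": "EU"},
    ]
}

def warehouse_query(name, predicate):
    # Each "virtual warehouse" reads the same underlying data;
    # scaling compute does not require duplicating storage.
    rows = [r for r in shared_storage["orders"] if predicate(r)]
    return name, sum(r["amount"] for r in rows)

# Two independent workloads run concurrently on the same data.
with ThreadPoolExecutor() as pool:
    analytics = pool.submit(warehouse_query, "analytics_wh",
                            lambda r: r["region"] == "EU")
    finance = pool.submit(warehouse_query, "finance_wh",
                          lambda r: r["amount"] > 100)

print(analytics.result())  # ('analytics_wh', 320.0)
print(finance.result())    # ('finance_wh', 320.0)
```

The point of the sketch is the shape of the design: compute units come and go independently, while the storage layer stays single and shared.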
Snowflake’s features make its cloud data platform stand out among data engineering tools. It assists data engineers and analysts across the entire data workload: from data ingestion all the way to data transformation and delivery.
Below is a high-level overview of Snowflake features that assist data engineers in building better workloads:
- Allows storage of both structured and semi-structured data. This data then becomes available in its raw form or can be extracted into any desired format.
- Simplifies the process of working with semi-structured data. JSON files can be transformed and loaded using an SQL-like language.
- Provides a multi-cluster, shared-data architecture that allows data to be loaded while it is being analyzed. This reduces time spent on analysis and data wrangling.
- Allows simultaneous data workloads on the same data source without queuing. Thus, engineers and business analysts are enabled to collaborate when extracting insights from their data.
- Provides a broad ecosystem of partners, allowing further expansion of the analytics flow with visualization tools and custom BI dashboards.
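To make the semi-structured data point concrete: in Snowflake itself, a nested JSON document stored in a VARIANT column can be turned into tabular rows with SQL (for example via the FLATTEN table function). The following is a hedged, self-contained Python sketch of what that flattening conceptually does; the record and field names are hypothetical:

```python
import json

# Hypothetical semi-structured record, as it might be ingested
# into a VARIANT-style column.
raw = json.loads("""
{
  "customer": "acme",
  "orders": [
    {"sku": "A1", "qty": 2},
    {"sku": "B7", "qty": 5}
  ]
}
""")

def flatten_orders(record):
    # Conceptually similar to flattening a nested array in SQL:
    # each array element becomes its own row, repeating the
    # parent-level fields alongside it.
    return [
        {"customer": record["customer"], "sku": o["sku"], "qty": o["qty"]}
        for o in record["orders"]
    ]

rows = flatten_orders(raw)
print(rows)
# [{'customer': 'acme', 'sku': 'A1', 'qty': 2},
#  {'customer': 'acme', 'sku': 'B7', 'qty': 5}]
```

The value of doing this inside the warehouse is that analysts can query raw JSON with familiar SQL instead of maintaining a separate preprocessing pipeline.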
Conclusion & Snowflake benefits
The main selling point of Snowflake is that it enables cloud architectures that truly implement the “pay for what you use” pricing scheme. On top of that, its cloud architectures enable customers to focus more on data strategy and less on the implementation and maintenance of intricate cloud architectures.
A comprehensive list of benefits for organizations can be seen in the image below:
The Fast Way To Get Started with Snowflake
Blue Orange Digital has vast experience assisting organizations in their transition to cloud environments. Developing within Snowflake takes data expertise, and your internal IT team may not have the capacity to build this out. If you want the efficiency benefits of the Snowflake environment sooner, you don’t have to wait for IT to become less busy. Drop us a line and let us know; the Blue Orange Digital data science team is happy to help you get started with Snowflake.
For more on AI and technology trends, see Josh Miramant, CEO of Blue Orange Digital’s data-driven solutions for Supply Chain, Healthcare Document Automation, and more.
Josh Miramant is the CEO and founder of Blue Orange Digital, a data science and machine learning agency with offices in New York City and Washington DC. Miramant is a popular speaker, futurist, and strategic business and technology advisor to enterprise companies and startups. He is a serial entrepreneur and software engineer who has built and scaled three startups. He helps organizations optimize and automate their businesses, implement data-driven analytic techniques, and understand the implications of new technologies such as artificial intelligence, big data, and the Internet of Things.
He has been featured on IBM ThinkLeaders and Dell Technologies, and Blue Orange Digital was named among NYC’s “Top 10 AI Development and Custom Software Development Agencies” as reviewed on Clutch and YahooFinance. The agency specializes in predictive maintenance, unified data lakes, supply chain/grid/marketing/sales optimization, anomaly detection, and recommendation systems, among other ML solutions for a multitude of industries.
Main Image source: Canva