
Partner Focus: How Snowflake and IntelliShift Harness Data to Deliver Powerful Analytics Solutions to Enterprise-Level Operations

Strategic partner Snowflake joined IntelliShift’s ConnectedOps virtual event. Snowflake account manager Eric Szenderski offered a deep-dive look at the company’s data warehouse technology. IntelliShift data solutions engineer John McKenna also took the stage to discuss how IntelliShift leverages the partnership to deliver powerful analytics solutions to enterprise customers.

Below is the transcript of this session. You can also watch the session and view all sessions from ConnectedOps 2020.

ConnectedOps 2020 Snowflake partner session

Eric Szenderski: Hello everyone. My name is Eric Szenderski, I am a corporate sales manager here at Snowflake. I’ve been with the organization for a little over a year and a half. I’m going to provide insight into how we address challenges around rapidly growing data volumes and how we enable organizations, like IntelliShift, to be more data-driven. So, as you probably know, or maybe you don’t know, Snowflake went public last month and recorded one of the largest initial public offerings in tech history, so pretty exciting stuff.

But, back in 2012, our founders set out to solve some common challenges around legacy on-premise and cloud-hosted relational databases. They wanted to break down data silos and make it easier to scale the infrastructure to support exploding data volumes and a growing number of data users as well. So, from scratch, they built the first relational database designed 100% for the cloud. We went generally available in 2015, and in just five short years, we now have over 3,000 customers, seven of the Fortune 10, and over a thousand technology and services partners. But really, at its core, Snowflake is an entirely new cloud-native database and data platform that can instantly scale up and down as data volumes grow and business demands change, because of this new and unique architecture.

But, before we jump in and start talking about how Snowflake can advance your data and predictive analytics initiatives, and really what we’ve done with IntelliShift, I wanted to quickly level-set on the current state and why many organizations struggle to advance their data initiatives, given that data is now considered the world’s most valuable resource.

So, 3,000 customers, right? What are the most common requests from these organizations? The first, performance; the second, concurrency; and the third, simplicity. So, performance, or speed, really. This isn’t surprising, given the information age that we live in, where we have the ability to answer any question instantly. Our customers want to replicate that within their enterprise; they want to foster a data-driven culture. So, if there’s a question in operations, or another business user wants to answer something, they want to encourage that curiosity. So, real-time data, and, not surprisingly, they typically want it at a fraction of the cost. The second, concurrency. Many of our customers utilize Snowflake to power internal BI applications, such as a BI tool like Looker.

Take Capital One, for example. Prior to Snowflake, their infrastructure could not support the hundreds, if not thousands, of users running reports simultaneously. The infrastructure teams wanted to give their customers the ability to answer questions in real time, with real-time data, reducing the time spent waiting for these reports to load. And, the third, simplicity. Organizations, maybe like Red Bud Logistics, often have a web of technologies that they’ve stitched together over time. Whenever a new use case arose, a technology to support it arose with it. Each tool was a risk, because it was yet another piece that, if not maintained properly, could bring down the entire data pipeline. And beyond the potential for pipeline failures, each was storing data, which created data silos and made it impossible to obtain a holistic view of the enterprise.

So, for example, data silos, right? There’s CRM data that typically resides in something like a Salesforce cloud. There’s ERP data residing in SAP, while the website and IoT data lives somewhere within a data lake, typically AWS S3 or (unclear 00:03:58), mainly because relational databases cannot store that data in its raw, semi-structured form, which creates an even more fragmented view of the enterprise. What’s interesting is that many of the CEOs and other business executives I work with don’t realize that their IT and engineering teams have to go through a series of processes and steps to transform that GPS and sensor data from its raw form into a format where it can be loaded and joined with these other data sets for reporting purposes. These steps, or workloads, as we refer to them internally, require significant compute resources, so teams typically have to schedule these jobs to run overnight to avoid impacting business users during regular business hours. This processing, or what we call normalizing, can take hours, sometimes days in some cases, so BI teams and business users are constantly working with outdated, stale information. And they want to wrap it all into a simple, easy-to-use platform that’s going to unify all of that information into one location.
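To make that normalization step concrete, here is a minimal Python sketch of the kind of flattening work described above. The event shape and field names are hypothetical, but the pattern, reshaping raw nested GPS JSON into flat columns before a traditional relational database can load and join it, is typical:

```python
import pandas as pd

# Hypothetical raw GPS events as they might arrive from devices:
# nested JSON that a traditional relational database cannot store as-is.
raw_events = [
    {"device": {"id": "truck-101", "fw": "2.3"},
     "ts": "2020-10-05T08:15:00Z",
     "pos": {"lat": 40.7128, "lon": -74.0060},
     "speed_mph": 42.5},
    {"device": {"id": "truck-102", "fw": "2.3"},
     "ts": "2020-10-05T08:15:02Z",
     "pos": {"lat": 40.7130, "lon": -74.0049},
     "speed_mph": 38.0},
]

# "Normalize" the nested structure into flat columns so the events can be
# bulk-loaded and joined with CRM/ERP tables for reporting.
flat = pd.json_normalize(raw_events)

# Columns become: device.id, device.fw, ts, pos.lat, pos.lon, speed_mph
print(flat.head())
```

At fleet scale, this flattening runs over billions of events, which is why it has traditionally been scheduled as an overnight batch job.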

So, what’s actually stopping our customers from having a high-performance, highly scalable, and easy-to-use platform? Historically speaking, infrastructure is fixed. Storage layers are tightly coupled with the compute layer; you cannot scale one without the other. If data volumes explode, or there’s a business need to ingest 15 years’ worth of historical information, terabytes or petabytes of data, you’re required to purchase extra compute nodes or clusters even if there isn’t a business need for that extra computing horsepower.

There’s a predetermined amount of data that you can store, and a predetermined amount of compute resources that can support the enterprise at times of peak usage. So, think Monday morning: the entire organization is running reports, sales, finance, operations reviewing the fleet activity over the weekend, while the marketing team is attempting to aggregate the results of their weekend campaign to understand conversions, leads generated, inquiries, etcetera. To ensure these reports actually run in a timely manner, the database administrators are required to keep the knobs and levers of the system tuned for peak performance, which requires a lot of upkeep and frequent maintenance over time.

Because the storage and compute resources are so inflexible and rigid, what we often see is that customers need to estimate out two, sometimes five, years to ensure that there’s enough compute and storage capacity to support their growing data volumes. At this point, one of two things typically happens: our customers either completely overestimate and are stuck paying for an expensive stack that has the ability to support very large data volumes but is never utilized to its fullest capacity, or, the flip side of the coin, they completely underestimate, not a great thing, and the system reaches its limits far sooner than expected. It crashes, the business is negatively impacted, BI tools and reports are extremely slow, and at this point the IT team has to go through the entire process of planning and manually scaling again, so there’s more downtime, more maintenance involved, and a lot more overhead.

So, what our customers love about Snowflake, and what I really love about Snowflake for IntelliShift and IntelliShift customers, is the way we’ve addressed the performance, concurrency, and simplicity requests. Our cloud data platform enables organizations to scale their infrastructure instantly and effortlessly, without downtime, without performance tuning, to meet the demands of the business automatically. So, workload isolation: because we’ve separated the storage from the compute layer, and we’ve also isolated compute from compute, customers can dedicate independent compute resources to specific workloads and specific jobs. Those data processing steps I mentioned earlier, which your engineering team typically has to schedule to run overnight, can now be completed in real time throughout regular business hours, because they use a group of compute resources that are separate from those powering the marketing analytics workloads, which are separate from the sales, operations, and finance workloads, and so on. Unlimited clusters of compute to serve any workload, instantly and effortlessly.
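As an illustration of that workload isolation, here is a minimal sketch using the snowflake-connector-python package. The warehouse names and credentials are hypothetical, but the pattern, dedicating independent virtual warehouses to the data engineering and BI workloads so that neither can starve the other, is the one described above:

```python
import snowflake.connector

# Hypothetical credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    user="ANALYTICS_ADMIN",
    password="...",
    account="my_account",
)
cur = conn.cursor()

# One warehouse dedicated to heavy data engineering jobs...
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS ETL_WH
    WITH WAREHOUSE_SIZE = 'LARGE'
         AUTO_SUSPEND = 60      -- suspend after 60s idle to save credits
         AUTO_RESUME = TRUE
""")

# ...and a separate one for customer-facing BI dashboards. Because the two
# warehouses draw on independent compute, ETL jobs can run during business
# hours without slowing down reports.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS BI_WH
    WITH WAREHOUSE_SIZE = 'MEDIUM'
         AUTO_SUSPEND = 60
         AUTO_RESUME = TRUE
""")

conn.close()
```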

With traditional platforms, however, that’s definitely not the case. Other IoT customers come to us to elevate product analytics, which requires streaming data from thousands of GPS devices that generate billions of data events per month in real time. Unfortunately, those platforms are not able to scale to those volumes, and they require several engineers to constantly tune and troubleshoot the database, which diverts their time and attention from building reports and dashboards. So, across all of these areas, from infrastructure to physical design, tuning, availability, and maintenance, lies an endless number of parameters to control and knobs to turn, if you will. Time spent here is not time doing what the company needs them to do, which is actually working with and obtaining insights from their data.

So, our approach eliminates the time spent on those low-value-add tasks and performs them for our customers, so that all your team has to do is simply load and query data. We’ve made semi-structured data, like those GPS data events, a native object type within our database, eliminating the need to process and normalize that data prior to ingesting it, which ultimately reduces the time to insight. This enables organizations like Red Bud Logistics and IntelliShift to ingest billions of GPS events in real time without affecting their customer-facing dashboards and internal reporting applications.
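A sketch of what that looks like in practice (the table, field names, and credentials are hypothetical): the semi-structured GPS event lands in a VARIANT column exactly as it arrived, and Snowflake’s path notation reaches into the nested JSON at query time, with no upfront normalization step:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    user="ANALYTICS_ADMIN", password="...", account="my_account",
    warehouse="ETL_WH", database="FLEET", schema="RAW",
)
cur = conn.cursor()

# Raw JSON events are stored untouched in a single VARIANT column.
cur.execute("CREATE TABLE IF NOT EXISTS GPS_EVENTS (payload VARIANT)")
cur.execute("""
    INSERT INTO GPS_EVENTS
    SELECT PARSE_JSON('{"device": {"id": "truck-101"},
                        "ts": "2020-10-05T08:15:00Z",
                        "pos": {"lat": 40.7128, "lon": -74.0060},
                        "speed_mph": 42.5}')
""")

# Path notation queries the nested fields directly, so the data is
# reportable the moment it is ingested.
cur.execute("""
    SELECT payload:device.id::STRING  AS device_id,
           payload:ts::TIMESTAMP_NTZ  AS event_ts,
           payload:pos.lat::FLOAT     AS lat,
           payload:pos.lon::FLOAT     AS lon
    FROM GPS_EVENTS
""")
print(cur.fetchall())
conn.close()
```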

So, what are the actual benefits from our customers’ perspective? 84% have realized a competitive advantage, 96% have decreased the administration costs and effort required to manage their infrastructure, and 95% are better able to manage organizational risk, due to the fact that there is one platform for all of their workloads. What started out as a data warehouse back in 2012 quickly expanded to support the data lake and data science use cases that are so popular today, because of the unique architecture where our storage and compute layers scale independently. When companies take complete advantage of their data, there is a significant impact on how they operate and compete in the market. When you’re able to centralize all of your data, it’s far easier to make better, quicker business decisions. We can get all of your users all of the data they need, when they need it, fostering that data-driven culture, and we help our customers focus on delivering a great customer experience, improving revenue, reducing cost, and reducing risk, all from one global platform that supports all of your data workloads, from data engineering to sharing data seamlessly with your partners and customers through the Snowflake Data Exchange. It’s all possible.

So, what were we actually able to accomplish with IntelliShift? At an extremely high level, going back to those requests we talked about at the beginning of this presentation: scalability, for one, along with performance, simplicity, and overall cost savings. Prior to moving their analytics workloads into Snowflake, the IntelliShift team was experiencing limitations with their existing infrastructure and required a platform that could support a customer-facing analytics application, as well as more robust analytics, machine learning, and artificial intelligence. The actual results: better operations intelligence products for you, and less burden on their team, so that they can continue to innovate and drive value for their customers. And, with that, I’ll hand it off to John McKenna to talk about why they chose Snowflake and what that actually means for you.

John McKenna: Hello. My name is John McKenna, I’m a data solutions engineer here at IntelliShift, and I’m going to speak with you briefly today about why we chose Snowflake as our data analytics platform to better support you, our customers.

So, without further ado: we assembled criteria to evaluate Snowflake that included performance, scalability, integration, reliability, security, and cost. In evaluating Snowflake, we did extensive testing on performance, scalability, and integration, along with evaluating its reliability, security features, and costs, and we found that these capabilities were outstanding and really separated Snowflake from most of the industry in terms of value to the customer and value to us. Regarding performance, Snowflake’s massively parallel processing, its ability to scale on demand, and its overall architecture allow us to provide superior performance and virtually limitless capacity to serve your data analytics needs.
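To illustrate what scaling on demand can look like, here is a minimal sketch; the warehouse name and credentials are hypothetical, and the resize statement is standard Snowflake DDL rather than IntelliShift’s actual operational tooling:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    user="ANALYTICS_ADMIN", password="...", account="my_account",
)
cur = conn.cursor()

# Before a heavy month-end reporting run, size the warehouse up...
cur.execute("ALTER WAREHOUSE BI_WH SET WAREHOUSE_SIZE = 'XLARGE'")

# ...and size it back down afterwards. The resize takes effect without
# downtime and without redistributing any data.
cur.execute("ALTER WAREHOUSE BI_WH SET WAREHOUSE_SIZE = 'MEDIUM'")

conn.close()
```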

In regards to integration, they have native support for all of the advanced analytics tools and technologies available in the marketplace today, so that we can incorporate predictive analytics, machine learning, and artificial intelligence capabilities into our products. They’re a cloud-based service with redundancy and automated backups, which ensures that our products are reliable, dependable, and available to you when you need them. They have included all of the latest security features, so we’re confident that your information is safe and secure: data is encrypted and compressed, and there’s multi-factor authentication for access. And they can do all of this at a lower total overall cost than other technologies available on the marketplace today.
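On the authentication point, a small sketch of one way a client can connect (the user and account are hypothetical): snowflake-connector-python’s browser-based authenticator hands login off to a flow where multi-factor authentication and single sign-on can be enforced, and traffic between client and platform is encrypted in transit:

```python
import snowflake.connector

# Hypothetical user/account; authenticator="externalbrowser" opens the
# browser-based login flow, where MFA/SSO policies are enforced.
conn = snowflake.connector.connect(
    user="jmckenna@example.com",
    account="my_account",
    authenticator="externalbrowser",
)
conn.close()
```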

So, how does all of this benefit you, our customers? Well, regarding reporting capabilities, it allows us to provide faster report processing, enables you to use longer reporting time periods, and keeps larger data sets online and available for your reporting purposes. Currently, many of our reports are constrained to a couple of weeks; this technology will allow us to open that up to monthly, quarterly, and yearly reporting. It’s also the backbone technology behind our newer features, including the Silent Passenger dashboard, the Inspect dashboard, and, coming soon, an Operator Safety dashboard.

These provide key operating metrics and key performance indicators to you, our customers, so that you have better insight into your fleet management operations. It’s also a key technology component enabling our new analytics products that are due to release very soon, including Operations IQ and Fleet IQ, which provide capabilities for ad-hoc analytics and operating metrics, custom dashboards, and extracting key operating metrics so that you can analyze them offline in Excel or other tools you might use. It’s also a key component of our future products that will incorporate machine learning, artificial intelligence, and predictive analytics. All of these features are going to give us the ability to support your operational excellence, as sketched in the example below.
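To show what those longer reporting windows can look like, here is a hypothetical quarterly roll-up over the kind of raw GPS event table sketched earlier; the warehouse, database, table, and column names are assumptions for illustration, not IntelliShift’s actual schema:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    user="REPORTING", password="...", account="my_account",
    warehouse="BI_WH", database="FLEET", schema="ANALYTICS",
)
cur = conn.cursor()

# With a full year of events kept online, a quarterly report becomes a
# single aggregate query instead of a stitched-together export.
cur.execute("""
    SELECT DATE_TRUNC('quarter', payload:ts::TIMESTAMP_NTZ) AS quarter,
           payload:device.id::STRING                        AS device_id,
           AVG(payload:speed_mph::FLOAT)                    AS avg_speed,
           COUNT(*)                                         AS events
    FROM RAW.GPS_EVENTS
    WHERE payload:ts::TIMESTAMP_NTZ >=
          DATEADD('year', -1, CURRENT_TIMESTAMP())::TIMESTAMP_NTZ
    GROUP BY 1, 2
    ORDER BY 1, 2
""")
for quarter, device_id, avg_speed, events in cur.fetchall():
    print(quarter, device_id, avg_speed, events)
conn.close()
```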

I’d like to thank you for your time today, and enjoy the conference.

Watch the video of this session, “Partner Focus: How Snowflake and IntelliShift Harness Data to Deliver Powerful Analytics Solutions to Enterprise-Level Operations.”

View all sessions from ConnectedOps 2020.