Eric Herzog, IBM | DataWorks Summit 2018


>> Live from San Jose in the heart of Silicon
Valley, it's theCUBE, covering DataWorks Summit 2018, brought to you by Hortonworks. >> Welcome back to theCUBE's live coverage of DataWorks here in San Jose, California. I'm your host, Rebecca Knight, along with my co-host, James Kobielus. We have with us Eric Herzog. He is the Chief Marketing Officer and VP of Global Channels
at the IBM Storage Division. Thanks so much for coming
on theCUBE once again, Eric. >> Well, thank you. We
always love to be on theCUBE and talk to all of theCUBE analysts about various topics,
data, storage, multi-cloud, all the works.
>> And before the cameras were rolling, we were talking about how you might be the biggest CUBE alum in the sense of you've been on theCUBE more times than anyone else.

>> I know I'm in the top five, but I may be number one, I have to check with Dave
Vellante and crew and see. >> Exactly and often
wearing a Hawaiian shirt. >> Yes. >> Yes, I was on theCUBE
last week from Cisco Live. I was not wearing a Hawaiian shirt. And Stu and John gave me a hard time about why I was not wearing a Hawaiian shirt? So I made sure I showed
up to the DataWorks show- >> Stu, Dave, get a load. >> You're in California with
a tan, so it fits, it's good. >> So we were talking a little bit before the cameras were rolling and you were saying one of the points that is sort of central
to your professional life is it's not just about the
storage, it's about the data. So riff on that a little bit. >> Sure, so at IBM we believe
everything is data driven and in fact we would argue that data is more valuable than oil or diamonds or plutonium or platinum
or silver or anything else. It is the most valuable asset,
whether you be a global Fortune 500, whether
you be a midsize company or whether you be Herzogs Bar and Grill.

So data is what you use
with your suppliers, with your customers, with your partners. Literally everything around your company is really built around the data, so it's about most effectively managing it and making sure, A, it's always performant, because when it's not performant they go away. As you probably know, Google did a survey that after one or two seconds they go off your website, they click somewhere else, so it has to be performant. Obviously in today's 24/7, 365 company it needs to always be
resilient and reliable and it always needs to be available, otherwise if the storage
goes down, guess what? Your AI doesn't work,
your Cloud doesn't work, whatever workload, if
you're more traditional, your Oracle, Sequel, you know
SAP, none of those workloads work if you don't have a
solid storage foundation underneath your data driven enterprise. >> So with that ethos in
mind, talk about the products that you are launching,
that you newly launched and also your product
roadmap going forward.

>> Sure, so for us everything really is that storage is this critical foundation for the data driven,
multi Cloud enterprise. And as I've said before on theCUBE, all of our storage
software's now Cloud-ified so if you need to automatically tier out to IBM Cloud or Amazon or Azure, we automatically will move
the data placement around from one premise out to a Cloud and for certain customers
who may be multi Cloud, in this case using multiple
private Cloud providers, which happens due to either legal reasons or procurement reasons
or geographic reasons for the larger enterprises,
we can handle that as well. That's part of it, second thing is we just announced earlier today an artificial intelligence,
an AI reference architecture, that incorporates a full stack from the very bottom,
both servers and storage, all the way up through the top layer, then the applications on top, so we just launched that today.
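The automated tiering Herzog mentions can be sketched as a simple age-based placement policy. This is a hypothetical illustration only; the tier names and age thresholds below are invented for the sketch and are not IBM's actual products or defaults:

```python
from datetime import datetime, timedelta

# Hypothetical age-based tiering policy: tier names and thresholds
# are illustrative, not IBM's actual configuration or API.
TIER_RULES = [
    (timedelta(days=30), "on-premises flash"),
    (timedelta(days=180), "on-premises disk"),
]
CLOUD_TIER = "cloud object storage"  # e.g. IBM Cloud, Amazon, Azure

def place(last_access: datetime, now: datetime) -> str:
    """Pick a storage tier based on how recently the data was accessed."""
    age = now - last_access
    for max_age, tier in TIER_RULES:
        if age <= max_age:
            return tier
    return CLOUD_TIER
```

Cold data falls through the rules and lands in the cloud tier, which is the "automatically tier out" behavior described above.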

>> AI for storage management or AI for running a range of applications?
>> Regular AI, artificial intelligence from
an application perspective. So we announced that
reference architecture today. Basically think of the
reference architecture as your recipe, your blueprint, of how to put it all together. Some of the components are from IBM, such as Spectrum Scale
and Spectrum Computing from my division, our servers
from our Cloud division. Some are open source, TensorFlow, Caffe, things like that. Basically it gives you what the stack needs to be, and what you need to do
in various AI workloads, applications and use cases. >> I believe you have
distributed deep learning as an IBM capability,
that's part of that stack, is that correct?
>> That is part of the stack, it's like in
the middle of the stack.

>> Is it, correct me if I'm
wrong, that's containerization of AI functionality?
>> Right. >> For distributed deployment?
>> Right. >> In an orchestrated Kubernetes
fabric, is that correct? >> Yeah, so when you look at
it from an IBM perspective, while we clearly support
the virtualized world, the VMwares, the Hyper-Vs, the KVMs and the OVMs, and we will continue to do that, we're also heavily invested
in the container environment. For example, one of our other divisions, the IBM Cloud Private division, has announced a solution that's
all about private Clouds, you can either get it hosted at IBM or literally buy our stack- >> Rob Thomas in fact demoed it this morning, here.
>> Right, exactly.

And you could create-
>> At DataWorks. >> Private Cloud initiative,
and there are companies that, whether it be for security purposes or whether it be for legal
reasons or other reasons, don't want to use public Cloud providers, be it IBM, Amazon, Azure, Google or any of the big public Cloud providers, they want a private Cloud and IBM either A, will host it or B,
with IBM Cloud Private. All of that infrastructure is built around a containerized environment. We support the older world,
the virtualized world, and the newer world, the container world. In fact, our storage allows you to have persistent storage in a container environment, Docker and Kubernetes, and that works on all of our block storage and that's a freebie, by the
way, we don't charge for that.
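The persistent storage for containers described above is typically consumed through a Kubernetes PersistentVolumeClaim. Here is a minimal sketch of such a claim built as a plain Python dict; the claim name and storage class name are hypothetical stand-ins, not documented IBM identifiers:

```python
# Minimal sketch of the Kubernetes object that requests persistent
# block storage for a container. The storage class name below is a
# hypothetical placeholder for whatever the block-storage driver registers.
def persistent_volume_claim(name: str, size_gi: int, storage_class: str) -> dict:
    """Build a PersistentVolumeClaim manifest as a plain dict."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = persistent_volume_claim("analytics-data", 100, "ibm-block-storage")
```

A pod that mounts this claim keeps its data across container restarts, which is the point of persistent storage in a Docker/Kubernetes environment.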

>> You've worked in the
data storage industry for a long time, can you talk a little bit about how the marketing
message has changed and evolved since you first
began in this industry and in terms of what
customers want to hear and what assuages their fears? >> Sure, so nobody cares
about speeds and feeds, okay? Except me, because I've been
doing storage for 32 years. >> And him, he might care. (laughs) >> But when you look at it,
the decision makers today, the CIOs, in 32 years, including seven start ups, IBM and EMC, I've never, ever, ever, met a CIO who used to
be a storage guy, ever. So, they don't care. They know that they need storage and the other infrastructure, including servers and networking, but think about it, when the
app is slow, who do they blame? Usually they blame the storage guy first, secondarily they blame the server guy, thirdly they blame the networking guy.

They never look to see
that their code stack is improperly done. Really what you have to
do is talk applications, workloads and use cases which is what the AI reference architecture does. What my team does in non AI workloads, it's all about, again, data driven, multi Cloud infrastructure. They want to know how you're going to make a new workload fast AI. How you're going to make
their Cloud resilient whether it's private or hybrid. In fact, IBM storage
sells a ton of technology to large public Cloud providers that do not have the initials IBM. We sell gobs of storage to
other public Cloud providers, big, medium and small. It's really all about the applications, workloads and use cases, and that's what gets people excited. You basically need a position,
just like I talked about with the AI foundations, storage
is the critical foundation. We happen to be, knocking on wood, let's hope there's no earthquake, since I've lived here my whole life, and I've been in earthquakes,
I was in the '89 quake.

Literally fell down a bunch
of stairs in the '89 quake. If there's an earthquake, as great as IBM storage is, or any other storage or
servers, it's crushed. Boom, you're done! Okay, well you need to make
sure that your infrastructure, really your data, is covered
by the right infrastructure and that it's always resilient, it's always performing
and is always available. And that's what IBM drives
is about, that's the message, not about how many gigabytes
per second in bandwidth or what's the- Not that we can't spew that stuff when we talk to the right person but in general people don't care about it. What they want to know is, "Oh that SAP workload took 30 hours and now it takes 30 minutes?" We have public references
that will say that. "Oh, you mean I can use eight
to ten times less storage for the same money?" Yes, and we have public
references that will say that.

>> So that's what it's really about, so storage has really moved from a speeds and feeds nerd sort of thing, and now all the nerds are doing AI and Caffe and TensorFlow and all of that, they're all hackers, right? It used to be storage
guys who used to do that and to a lesser extent server guys and definitely networking guys.

That's all shifted to the software side so you got to talk the languages. What can we do with Hortonworks? By the way we were named in Q1 of 2018 as the Hortonworks infrastructure
partner of the year. We work with Hortonworks
all the time, at all levels, whether it be with our channel partners, whether it be with our direct end users, however the customer wants to consume, we work with Hortonworks very closely and other providers as well
in that big data analytics and the AI infrastructure
world, that's what we do.


>> So the containerizations
side of the IBM AI stack, then the containerization capabilities in Hortonworks Data Platform 3.0, can you give us a sense
for how you plan to, or do you plan at IBM,
to work with Hortonworks to bring these capabilities,
your reference architecture, into more, or bring their
environment for that matter, into more of an alignment
with what you're offering? >> So we haven't made an exact decision on how we're going to
do it, but we interface with Hortonworks on a continual basis. >> Yeah. >> We're working to figure
out what's the right solution, whether that be an integrated
solution of some type, whether that be something
that we do through an adjunct to our reference architecture or some reference
architecture that they have but we always make sure, again, we are their partner of
the year for infrastructure named in Q1, and that's
because we work very tightly with Hortonworks and
make sure that what we do ties out with them, hits
the right applications, workloads and use cases,
the big data world, the analytic world and the AI world so that we're tied off, you know, together to make sure that we
deliver the right solutions to the end user because
that's what matters most is what gets the end users fired up, not what gets Hortonworks or IBM fired up, it's what gets the end users fired up.

>> When you're trying to get
into the head space of the CIO, and get your message out there, I mean what is it, what
would you say is it that keeps them up at night? What are their biggest pain points and then how do you
come in and solve them? >> I'd say the number one pain point for most CIOs is
application delivery, okay? Whether that be to the line of business, put it this way, let's
take an old workload, okay? Let's take that SAP example,
that CIO was under pressure because they were trying, in this case it was a giant retailer who was shipping stuff every
night, all over the world.

Well guess what? The green undershirts in the wrong size went
to Paducah, Kentucky and then one of the other
stores, in Singapore, which needed those green
shirts, they ended up with shoes and the reason is, they
couldn't run that SAP workload in a couple hours. Now they run it in 30 minutes. It used to take 30 hours. So since they're shipping every night, you're basically missing
a cycle, essentially and you're not delivering the right thing from a retail infrastructure perspective to each of their nodes, if you will, to their retail locations.

So they care about what do they need to do to deliver to the business
the right applications, workloads and use cases
on the right timeframe and they can't go down,
people get fired for that at the CIO level, right? If something goes down, the CIO is gone and obviously for certain companies that are more in the modern mode, okay? People who are delivering stuff and their primary transactional
vehicle is the internet, not retail, not through partners, not through people like IBM, but their primary transactional
vehicle is a website, if that website is not
resilient, performant and always reliable, then guess what? They are shut down and
they're not selling anything to anybody, which isn't true if you're Nordstrom's, right? Someone can always go into the store and buy something,
right, and figure it out? Almost all old retailers have
not only a connection to core but they literally have
a server and storage in every retail location
so if the core goes down, guess what, they can transact. In the era of the internet,
you don't do that anymore.

Right? If you're shipping
only on the internet, you're shipping on the internet, so whether it be a new workload, okay, or an old workload. If you're doing the whole IoT thing, for example, I know a company
that I was working with, it's a giant, private mining company. They have those giant, like
three story dump trucks you see on the Discovery Channel. Those things cost them a
hundred million dollars, so they have five thousand
sensors on every dump truck. It's a fricking dump truck but guess what, they got five
thousand sensors on there so they can monitor and make sure they take proactive action
because if that goes down, whether these be diamond mines or these be Uranium
mines or whatever it is, it costs them hundreds
of millions of dollars to have a thing go down. That's, if you will, trying to take it out of the traditional, high tech area, which we all talk about,
whether it be Apple or Google, or IBM, okay great, now let's
put it to some other workload.
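The proactive monitoring those five thousand truck sensors enable can be sketched as a per-sensor limit check; the sensor names and safe limits below are invented for illustration, not taken from any real mining system:

```python
# Hypothetical sketch of proactive sensor monitoring: compare each
# sensor's latest reading against a per-sensor safe limit and flag
# the ones that need attention before a failure takes the truck down.
def sensors_needing_service(readings: dict, limits: dict) -> list:
    """Return the sensors whose latest reading exceeds its safe limit."""
    return sorted(
        name for name, value in readings.items()
        if name in limits and value > limits[name]
    )

# Illustrative limits and readings for one truck.
limits = {"hydraulic_psi": 3000, "engine_temp_c": 110, "brake_wear_pct": 80}
readings = {"hydraulic_psi": 3150, "engine_temp_c": 95, "brake_wear_pct": 85}
flagged = sensors_needing_service(readings, limits)
```

At fleet scale this check runs continuously over the sensor stream, which is what drives the proactive maintenance described above.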

In this case, this is the use of IoT, in a big data analytics environment with AI based infrastructure,
to manage dump trucks. >> I think you're talking about what's called, "digital twins" in a networked environment
for materials management, supply chain management and so forth. Are those requirements growing in terms of industrial IOT
requirements of that sort and how does that effect
the amount of data that needs to be stored,
the sophistication of the AI and the stream competing that needs to be provisioned? Can you talk to that? >> The amount of data is
growing exponentially. It's growing at yottabytes
and zettabytes a year now, not at just exabytes anymore. In fact, everybody on their
iPhone or their laptop, I've got a 10GB phone, okay? My laptop, which happens
to be a PowerBook, is two terabytes of flash, on a laptop. So just imagine how much
data's being generated if you're doing in a giant factory, whether you be in the warehouse space, whether you be in healthcare,
whether you be in government, whether you be in the financial sector and now all those additional regulations, such as GDPR in Europe
and other regulations across the world about what you have to do with your healthcare
data, what you have to do with your finance data, the
amount of data being stored.

And then on top of it, quite honestly, from an AI big data analytics perspective, the more data you have,
the more valuable it is, the more you can mine it. It's as if the world was just oil; forget the pollution side, let's assume oil didn't cause pollution.
solar, you'd be using oil and by the way you need
more and more and more, and how much oil you have
and how you control that would be the power. That right now is the power of data and if anything it's getting
more and more and more.

So again, you always have to be able to be resilient with that data, you always have to interact with things, like we do with Hortonworks or
other application workloads. Our AI reference architecture
is another perfect example of the things you need to do to provide, you know, at the base
infrastructure, the right foundation. If you have the wrong
foundation to a building, it falls over. Whether it be your house, a
hotel, this convention center, if it had the wrong
foundation, it falls over. >> Actually to follow the oil analogy just a little bit further, the
more of this data you have, the more PII there is and
it usually, and the more the workloads need to scale
up, especially for things like data masking.
>> Right. >> When you have compliance
requirements like GDPR, so you want to process the data but you need to mask it first, therefore you need clusters
that conceivably are optimized for high volume, highly
scalable masking in real time, to drive the downstream app, to feed the downstream applications and to feed the data scientist,
you know, data lakes, whatever, and so forth and so on? >> That's why you need things
like incredible compute, which IBM offers with the Power platform.
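The masking step described above can be sketched as a salted one-way hash over the PII fields of each record before it flows downstream. The field names and inline salt are illustrative assumptions; a production GDPR pipeline would need proper key management and a vetted masking scheme:

```python
import hashlib

# Hypothetical sketch of pre-processing PII masking: replace sensitive
# fields with salted one-way hashes so downstream analytics and data
# scientists never see the raw values. Field names and salt are made up.
PII_FIELDS = {"name", "email", "card_number"}

def mask_record(record: dict, salt: str = "rotate-me") -> dict:
    """Replace PII values with truncated salted hashes; keep other fields."""
    return {
        key: hashlib.sha256((salt + str(value)).encode()).hexdigest()[:16]
        if key in PII_FIELDS else value
        for key, value in record.items()
    }

masked = mask_record({"name": "Ada", "email": "ada@example.com", "amount": 42})
```

Because the hash is deterministic for a given salt, masked records can still be joined and aggregated downstream without exposing the original identities.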

And why you need storage that, again, can scale up.
>> Yeah. >> Can get as big as you need it to be, for example in our reference architecture, we use both what we call Spectrum Scale, which is a big data analytics
workload performance engine; it's multithreaded, multitasking. In fact one of the largest
banks in the world, if you happen to bank with them, your credit card fraud is
being done on our stuff, okay? But at the same time we have what's called IBM Cloud Object Storage
which is an object store, you want to take every one
of those searches for fraud and when they find out that no one stole my MasterCard or the Visa, you still want to put it in there because
then you mine it later and see patterns of how people
are trying to steal stuff because it's all being
done digitally anyway.

You want to be able to do that. So you A, want to handle it
very quickly and resiliently but then you want to be able
to mine it later, as you said, mining the data.
>> Or do high value anomaly detection in the moment to be able to tag the more anomalous data that you can then sift through later or maybe in the moment
for realtime litigation. >> Well that's highly compute intensive, it's AI intensive and it's
highly storage intensive on a performance side
and then what happens is you store it all for,
lets say, further analysis so you can tell people, "When
you get your Amex card, do this and they won't steal it." Well the only way to
do that, is you use AI on this ocean of data, where
you're analyzing all this fraud that has happened, to look at patterns and then you tell me, as
a consumer, what to do.
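The in-the-moment fraud tagging discussed here can be sketched as a simple z-score test of a charge against the cardholder's recent history; the three-sigma threshold and the sample amounts are illustrative assumptions, not how any real bank scores transactions:

```python
from statistics import mean, stdev

# Hypothetical sketch of in-the-moment anomaly tagging: flag a charge
# whose amount sits far outside the cardholder's recent history, so it
# can be blocked now and mined for patterns later.
def is_anomalous(amount: float, history: list, threshold: float = 3.0) -> bool:
    """Flag an amount more than `threshold` std devs from the historical mean."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [12.0, 9.5, 14.0, 11.0, 13.5, 10.0]  # recent charge amounts
flag = is_anomalous(950.0, history)
```

Both flagged and unflagged transactions would still be stored, so the full history can be mined later for the fraud patterns described above.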

Whether it be in the financial business, in this case the credit card business, healthcare, government, manufacturing. One of our resellers actually
developed an AI based tool that can scan boxes and cans
for faults on an assembly line and actually have sold
it to a beer company and to a soda company that
instead of people looking at the cans, like you
see on the Food Channel, to pull it off, guess what? It's all automatically done. There's no people pulling the can off, "Oh, that can is damaged"
and they're looking at it and by the way, sometimes
they slip through. Now, using cameras and this
AI based infrastructure from IBM, with our storage
underneath the hood, they're able to do this. >> Great. Well Eric thank you
so much for coming on theCUBE. It's always been a lot
of fun talking to you. >> Great, well thank you very much. We love being on theCUBE and appreciate it and hope everyone enjoys
the DataWorks conference.

>> We will have more from
DataWorks just after this. (techno beat music).
