58: GreyBeards talk HCI with Adam Carter, Chief Architect NetApp Solidfire #NetAppHCI

Sponsored by:

In this episode we talk with Adam Carter (@yoadamcarter), Chief Architect, NetApp SolidFire & HCI (Hyper Converged Infrastructure) solutions. Howard talked with Adam at TFD16 and I have known Adam since before the acquisition. Adam did a tour de force session on HCI architectures at TFD16 and we would encourage you to view the videos of his session.

This is the third time NetApp has been on our show (see our podcast with Lee Caswell and Dave Wright and our podcast with Andy Banta) but this is the first sponsored podcast from NetApp. Adam has been Chief Architect for SolidFire for as long as I have known him.

NetApp has FAS/AFF series storage, E-Series storage and SolidFire storage. Their HCI solution is based on their SolidFire storage system.

NetApp SolidFire HCI Appliance


NetApp’s HCI solution is built around a 2U 4-server configuration, where three of the nodes are new, denser SolidFire storage nodes and the fourth node is a VMware ESXi host. That is, they have a real, fully functional SolidFire AFA SAN storage system built directly into their HCI solution.

There’s probably a case to be made that this isn’t technically an HCI system from an industry perspective and looks more like a well-architected CI (converged infrastructure) solution. However, they do support VMs running on their system, it’s all packaged together as one complete system, and they offer end-to-end (one throat to choke) support over the complete system.

In addition, they spent a lot of effort improving SolidFire’s already great VMware software integration to offer a full management stack that fully supports both the vSphere environment and the embedded SolidFire AFA SAN storage system.

By using a full SolidFire storage system in their solution, NetApp gave up on the low-end (<$30K-$50K) portion of the HCI market. But to supply the high IO performance, multi-tenancy, and QoS services of current SolidFire storage systems, they felt they had to embed a full SAN storage system.

With other HCI solutions, the storage activity must contend with other VMs and kernel processing on the server. And in these solutions, the storage system doesn’t control CPU/core/thread allocation and as such, can’t guarantee IO service levels that SolidFire is known for.

Also, by configuring their system with a real AFA SAN system, new additional ESXi servers can be added to the complex without needing to purchase additional storage software licenses. Further, customers can add bare metal servers to this environment and there’s still plenty of IO performance to go around. On the other hand, if a customer truly needs more storage performance/capacity, they can always add an additional, standalone SolidFire storage node to the cluster.

The podcast runs ~23 minutes. Adam was very easy to talk with and had deep technical knowledge of their new solution, industry HCI solutions and SolidFire storage. It was a great pleasure for Howard and me to talk with him again. Listen to the podcast to learn more.

Adam Carter, Chief Architect, NetApp SolidFire

Adam Carter is the Chief Product Architect for SolidFire and HCI at NetApp. Adam is an expert in next generation data center infrastructure and storage virtualization.

Adam has led product management at LeftHand Networks, HP, VMware, SolidFire, and NetApp bringing revolutionary products to market. Adam pioneered the industry’s first Virtual Storage Appliance (VSA) product at LeftHand Networks and helped establish VMware’s VSA certification category.

Adam brings deep product knowledge and broad experience in the software defined data center ecosystem.

57: GreyBeards talk midrange storage with Pierluca Chiodelli, VP of Prod. Mgmt. & Cust. Ops., Dell EMC Midrange Storage

Sponsored by:

Dell EMC Midrange Storage

In this episode we talk with Pierluca Chiodelli (@chiodp), Vice President of Product Management & Customer Operations at Dell EMC Midrange Storage. Howard talked with Pierluca at SFD14 and I talked with him at SFD13. He started working at EMC as a customer engineer and has worked his way up to VP since then.

This is the second time (Dell) EMC has been on our show (see our EMCWorld2015 summary podcast with Chad Sakac) but this is the first sponsored podcast from Dell EMC. Pierluca seems to have been with (Dell) EMC forever.

You may recall that Dell EMC has two product families in their midrange storage portfolio. Pierluca provides a number of reasons why both continue to be invested in, enhanced and sold on the market today.

Dell EMC Unity and SC product lines

Dell EMC Unity storage is the outgrowth of the unified block and file storage first released in the EMC VNXe series storage systems. Unity continues that tradition of providing both file and block storage in a dense 2U configuration, with dual controllers, high availability, and AFA and hybrid storage models. The other characteristic of Unity storage is its tight integration with VMware virtualization environments.

Dell EMC SC series storage continues the long tradition of Dell Compellent storage systems, which support block storage and which invented data progression technology.  Data progression is storage tiering on steroids, with support for multi-tiered rotating disk (across the same drive), flash, and now cloud storage. SC series is also considered a set it and forget it storage system that just takes care of itself without the need for operator/admin tuning or extensive monitoring.

Dell EMC is bringing together both of these storage systems in their CloudIQ, cloud based, storage analytics engine and plan to have both systems supported under the Unisphere management engine.

Also Unity storage can tier files to the cloud and copy LUN snapshots to the public cloud using their Cloud Tiering Appliance software.  With their UnityVSA Software Defined Storage appliance and VMware vSphere running in AWS, the file and snapshot data can then be accessed in the cloud. SC Series storage will have similar capabilities, available soon.

At the end of the podcast, Pierluca talks about Dell EMC’s recently introduced Customer Loyalty Programs, which include: Never Worry Data Migrations, Built-in Virtustream Storage Cloud, 4:1 Storage Efficiency Guarantee, All-inclusive Software pricing, 3-year Satisfaction Guarantee, Hardware Investment Protection, and Predictable Support Pricing.

The podcast runs ~27 minutes. Pierluca is a very knowledgeable individual and although he has a beard, it’s not grey (yet). He’s been with EMC storage forever and has a long, extensive history in midrange storage, especially with Dell EMC’s storage product families. It’s been a pleasure for Howard and me to talk with him again. Listen to the podcast to learn more.

Pierluca Chiodelli, V.P. of Product Management & Customer Operations, Dell EMC Midrange Storage

Pierluca Chiodelli is currently the Vice President of Product Management for Dell EMC’s suite of midrange solutions, including Unity, VNX, and VNXe from heritage EMC storage and Compellent, EqualLogic, and Windows Storage Server from heritage Dell Storage.

Pierluca’s organization is comprised of four teams: Product Strategy, Performance & Competitive Engineering, Solutions, and Core & Strategic Account engineering. The teams are responsible for ensuring Dell EMC’s mid-range solutions enable end users and service providers to transform their operations and deliver information technology as a service.

Pierluca has been with EMC since 1999, with experience in field support and core engineering across Europe and the Americas. Prior to joining EMC, he worked at Data General and as a consultant for HP Corporation.

Pierluca holds one degree in Chemical Engineering and a second in Information Technology.


56: GreyBeards talk high performance file storage with Liran Zvibel, CEO & Co-Founder, WekaIO

This month we talk high performance, cluster file systems with Liran Zvibel (@liranzvibel), CEO and Co-Founder of WekaIO, a new software defined, scale-out file system. I first heard of WekaIO when it showed up on SPEC sfs2014 with a new SWBUILD benchmark submission. They had a 60 node EC2-AWS cluster running the benchmark and achieved, at the time, the highest SWBUILD number (500) of any solution.

At the moment, WekaIO is targeting the HPC and Media & Entertainment verticals for their solution, which is sold on an annual capacity subscription basis.

By the way, a Wekabyte is 2**100 bytes of storage, or ~1 trillion exabytes (an exabyte being 2**60 bytes).
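The arithmetic is easy to verify from the 2**100 figure above:

```python
# A Wekabyte, as quoted above, is 2**100 bytes; an exabyte is 2**60 bytes.
WEKABYTE = 2 ** 100
EXABYTE = 2 ** 60

exabytes_per_wekabyte = WEKABYTE // EXABYTE
print(exabytes_per_wekabyte == 2 ** 40)  # True
print(exabytes_per_wekabyte)             # 2**40 = 1,099,511,627,776, i.e. ~1.1 trillion
```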

High performance file storage

The challenge with HPC file systems is that they need to handle a large number of files and large amounts of storage, with high throughput access to all that data. Where WekaIO comes into the picture is that they do all that plus can support high file IOPS. That is, they can open, read or write a high number of relatively small files at impressive speed, with low latency. Such workloads are becoming more popular with AI/machine learning and life sciences/genomic microscopy image processing.

Most file system developers will tell you that they can supply high throughput OR high file IOPS, but doing both is a real challenge. WekaIO is able to do both while at the same time supporting billions of files per directory and trillions of files in a file system.

WekaIO has support for up to 64K cluster nodes and has tested up to 4000 cluster nodes. WekaIO announced an OEM agreement with HPE last year and is starting to build out bigger clusters.

Media & Entertainment file storage requirements are mostly just high throughput with large (media) file sizes. Here WekaIO has more competition from other cluster file systems, but their ability to support extra-large data repositories with great throughput is an advantage here as well.

WekaIO cluster file system

WekaIO is a software defined storage solution. Whereas many HPC cluster file systems have separate metadata and storage nodes, WekaIO’s cluster nodes are combined metadata and storage nodes. So as one scales capacity (by adding nodes), one not only scales large file throughput (via more IO parallelism) but also scales small file IOPS (via more metadata processing capability). There’s also some secret sauce to their metadata sharding (if that’s the right word) that allows WekaIO to support more metadata activity as the cluster grows.

One secret to WekaIO’s ability to support both high throughput and high file IOPS lies in their performance load balancing across the cluster. Apparently, WekaIO can be configured to constantly monitor all cluster nodes for performance and to balance all file IO activity (data transfers and metadata services) across the cluster, to ensure that no one node is overburdened with IO.

Liran says that performance load balancing was one reason they were so successful with their AWS EC2 SPEC sfs2014 SWBUILD benchmark. One problem with AWS EC2 is the unpredictability of node performance: when running EC2 instances, “noisy neighbors” can impact node performance. With WekaIO’s performance load balancing running on AWS EC2 node instances, they can just redirect IO activity around slower nodes to faster nodes that can handle the work, in real time.

WekaIO performance load balancing is a configurable option. The other alternative is for WekaIO to “cryptographically” spread the workload across all the nodes in a cluster.
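The two placement policies can be sketched in a few lines. This is purely illustrative (the names and structure are ours, not WekaIO’s actual code): latency-aware balancing picks the currently fastest node, while the deterministic “cryptographic” alternative hashes each IO’s key to spread work evenly:

```python
import hashlib

# Illustrative sketch (not WekaIO's implementation) of the two placement
# policies described above: latency-aware load balancing vs. a deterministic
# ("cryptographic") spread of work across cluster nodes.

def pick_node_balanced(nodes, latency_ms):
    """Send the next IO to the node currently reporting the lowest latency."""
    return min(nodes, key=lambda n: latency_ms[n])

def pick_node_hashed(nodes, io_key):
    """Deterministically spread IOs across nodes by hashing the IO's key."""
    digest = hashlib.sha256(io_key.encode()).digest()
    return nodes[int.from_bytes(digest[:8], "big") % len(nodes)]

nodes = ["node-a", "node-b", "node-c"]
latency = {"node-a": 4.0, "node-b": 0.3, "node-c": 1.2}  # node-a has a noisy neighbor
print(pick_node_balanced(nodes, latency))   # node-b, the currently fastest node
```

The hashed policy needs no performance monitoring but cannot route around a slow node; the balanced policy can, which is presumably why it helped so much on EC2.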

WekaIO uses a host driver for POSIX access to the cluster. WekaIO’s frontend also natively supports (without the host driver) the NFSv3, SMB3.1, HDFS and AWS S3 protocols.

WekaIO also offers configurable file system data protection that can span 100s of failure domains (racks), supporting from 4 to 16 data stripes with 2 to 4 parity stripes. Liran said this was erasure-code like but wouldn’t specifically state what they are doing differently.
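The trade-off behind those stripe widths is straightforward to sketch. Assuming a generic N-data + K-parity layout (WekaIO’s actual scheme, which Liran declined to detail, may differ), wider data stripes buy capacity efficiency while more parity stripes buy fault tolerance:

```python
# Hypothetical N-data + K-parity stripe math for the widths quoted above
# (4-16 data stripes, 2-4 parity stripes). Not WekaIO's actual scheme.

def usable_fraction(data_stripes, parity_stripes):
    """Fraction of raw capacity that holds user data in an N+K stripe."""
    return data_stripes / (data_stripes + parity_stripes)

for n in (4, 16):
    for k in (2, 4):
        print(f"{n}+{k}: {usable_fraction(n, k):.0%} usable, "
              f"tolerates {k} failure domain loss(es)")
```

At the extremes, 4+4 leaves only 50% of raw capacity usable, while 16+2 leaves about 89% usable but survives fewer simultaneous failures.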

They also support high performance storage and inactive storage with automated tiering of inactive data to object storage through policy management.

WekaIO creates a global name space across the cluster, which can be sub-divided into one to thousands of file systems.

Snapshotting, cloning & moving work

WekaIO also has file system snapshots (read-only) and clones (read-write) using a redirect-on-write methodology. After the first snapshot/clone, subsequent snapshots/clones are only differential copies.
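Redirect-on-write can be sketched in a few lines. This toy version (illustrative only, not WekaIO’s implementation) shows why a snapshot is cheap: it freezes the current block map with no data copy, and later writes bind the live map to new data rather than overwriting in place:

```python
# Minimal redirect-on-write sketch (illustrative only, not WekaIO's code).
# A snapshot freezes the current block map at zero data-copy cost; later
# writes go to new data, so each snapshot effectively holds only the
# blocks that changed after it was taken.

class RowVolume:
    def __init__(self):
        self.live = {}        # live block map: block number -> data
        self.snapshots = []   # frozen block maps (read-only views)

    def write(self, blkno, data):
        # Redirect-on-write: rebind the live map to new data rather than
        # overwriting in place; frozen maps keep seeing the old data.
        self.live = {**self.live, blkno: data}

    def snapshot(self):
        self.snapshots.append(self.live)   # a map reference, no data copy
        return len(self.snapshots) - 1

    def read(self, blkno, snap=None):
        view = self.live if snap is None else self.snapshots[snap]
        return view.get(blkno)

vol = RowVolume()
vol.write(0, "v1")
s0 = vol.snapshot()
vol.write(0, "v2")             # redirected; snapshot s0 still sees "v1"
print(vol.read(0))             # v2
print(vol.read(0, snap=s0))    # v1
```

A read-write clone would start the same way but get its own live map, diverging from the source as both are written.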

Another feature Howard and I thought was interesting was their DR-as-a-Service-like capability. That is, using an onprem WekaIO cluster to clone a file system/directory and tiering that clone to an S3 storage object. An AWS EC2 WekaIO cluster can then import the object(s) and re-constitute that file system/directory in the cloud. Once on AWS, work can occur in the cloud and the process can be reversed to move any updates back to the onprem cluster.

This way if you had work needing more compute than available onprem, you could move the data and workload to AWS, do the work there and then move the data back down to onprem again.

WekaIO’s RtOS, network stack, & NVMeoF

WekaIO runs under Linux as a user space application. WekaIO has implemented their own Realtime O/S (RtOS) and high performance network stack that run in user space.

With their own network stack they have also implemented NVMeoF support for (non-RDMA) Ethernet as well as InfiniBand networks. This is probably another reason they can have such low latency file IO operations.

The podcast runs ~42 minutes. Liran has been around data storage systems for 20 years and as a result was very knowledgeable and interesting to talk with. Liran almost qualifies as a Greybeard, if not for the fact that he was clean shaven ;). Listen to the podcast to learn more.

Liran Zvibel, CEO and Co-Founder, WekaIO

As Co-Founder and CEO, Mr. Liran Zvibel guides long term vision and strategy at WekaIO. Prior to creating the opportunity at WekaIO, he ran engineering at social startups and Fortune 100 organizations, including Fusic, where he managed product definition, design and development for a portfolio of rich social media applications.


Liran also held principal architectural responsibilities for the hardware platform, clustering infrastructure and overall systems integration for the XIV Storage System, which was acquired by IBM in 2007.

Mr. Zvibel holds a BSc. in Mathematics and Computer Science from Tel Aviv University.

55: GreyBeards storage and system yearend review with Ray & Howard

In this episode, the Greybeards discuss the year in systems and storage. This year we kick off the discussion with a long running IT trend which has taken off over the last couple of years. That is, recently the industry has taken to buying pre-built appliances rather than building them from the ground up.

We can see this in all the hyper-converged solutions available today, but it goes even deeper than that. It seems to have started with the trend in organizations to get by with fewer people.

This led to a desire to purchase pre-built software applications and now appliances, rather than building from parts. It just takes too long to build, and lead architects have better things to do with their time than checking compatibility lists and testing and verifying that hardware works properly with software. The pre-built appliances are good enough, and doing it yourself doesn’t really provide that much of an advantage over the pre-built solutions.

Next, we see the coming NVMe over Fabrics storage systems as something of a countertrend to the previous one. Here we see some customers paying well for special purpose hardware with blazing speed that takes time and effort to get working right, but the advantages are significant. Both Howard and I were at the Excelero SFD12 event and it blew us away. Howard also attended the E8 Storage SFD14 event, which was another example along a similar vein.

Finally, the last trend we discussed was the rise of 3D TLC and the failure of 3DX and other storage class memory (SCM) technologies to make a dent in the marketplace. 3D TLC NAND is coming out of just about every fab these days, resulting in huge (but costly) SSDs in the multi-TB range. Combine these with NVMe interfaces and you have msec access to almost a PB of storage without breaking a sweat.

The missing 3DX SCM tsunami some of us predicted is mainly due to the difficulties in bringing new fab technologies to market. We saw some of this in the stumbling with 3D NAND but the transition to 3DX and other SCM technologies is a much bigger change to new processes and technology. We all believe it will get there someday but for the moment, the industry just needs to wait until the fabs get their yields up.

The podcast runs over 44 minutes. Howard and I could talk for hours on what’s happening in IT today. Listen to the podcast to learn more.

Howard Marks is the Founder and Chief Scientist of DeepStorage, a prominent blogger at Deep Storage Blog and can be found on twitter @DeepStorageNet.


Ray Lucchesi is the President and Founder of Silverton Consulting, a prominent blogger at RayOnStorage.com, and can be found on twitter @RayLucchesi.

54: GreyBeards talk scale-out secondary storage with Jonathan Howard, Dir. Tech. Alliances at Commvault

This month we talk scale-out secondary storage with Jonathan Howard, Director of Technical Alliances at Commvault. Both Howard and I attended Commvault GO2017 for Tech Field Day this past month in Washington DC. We had an interesting overview of their Hyperscale secondary storage solution, and Jonathan was the one answering most of our questions, so we thought he would make a good guest for our podcast.

Commvault has been providing data protection solutions for a long time, using anyone’s secondary storage, but recently they released a software defined, scale-out secondary storage solution that runs their software on a clustered file system.

Hyperscale secondary storage

They call their solution Hyperscale secondary storage, and it’s available both as a hardware-software appliance and as a software-only configuration on compatible off-the-shelf commercial hardware. Hyperscale uses the Red Hat Gluster cluster file system, which together with the Commvault Data Platform provides a highly scalable secondary storage cluster that can meet anyone’s secondary storage needs while providing high availability and high throughput performance.

Commvault’s Hyperscale secondary storage system operates onprem in customer data centers. Hyperscale uses flash storage for system metadata but most secondary storage resides on local server disk.

Combined with Commvault Data Platform

With the sophistication of the Commvault Data Platform, one can have all the capabilities of a standalone Commvault environment with software defined storage. This allows just about any RTO/RPO needed by today’s enterprise and includes Live Sync secondary storage replication, onprem IntelliSnap for on-storage snapshot management, Live Mount for instant recovery (using secondary storage directly to boot your VMs without having to wait for data recovery), and all the other recovery sophistication available from Commvault.

Hyperscale storage is capable of doing up to 5 Live Mount recoveries simultaneously per node without a problem but more are possible depending on performance requirements.

We also talked about Commvault’s cloud secondary storage solution which can make use of AWS S3 storage to hold backups.

Commvault’s organic growth

Most of the other data protection companies came about through mergers, acquisitions or spinoffs. Commvault has continued along, enhancing their solution while basing everything on an underlying centralized metadata database. Their codebase was grown from the ground up and supports pretty much any and all data protection requirements.

The podcast runs ~50 minutes. Jonathan was very knowledgeable about the technology and was great to talk with. Listen to the podcast to learn more.

Jonathan Howard, Director, Technical and Engineering Alliances, Commvault

Jonathan Howard is a Director, Technology & Engineering Alliances for Commvault. A 20-year veteran of the IT industry, Jonathan has worked at Commvault for the past 8 years in various field, product management, and now alliance facing roles.

In his present role with Alliances, Jonathan works with business and technology leaders to design and create numerous joint solutions that have empowered Commvault alliance partners to create and deliver their own new customer solutions.