132: GreyBeards talk fast embedded k-v stores with Speedb’s Co-Founder & CEO Adi Gelvan

We’ve been talking a lot about K8s of late so we thought it was time to get back down to earth and spend some time with Adi Gelvan (@speedb_io), Co-founder and CEO of Speedb, an embedded key-value store and drop-in replacement for RocksDB that significantly improves on its IO performance for large metadata databases.

At Adi’s last job they were searching for a key-value store or database to manage the substantial metadata they needed. After looking at RocksDB, they found it had a number of performance problems, especially as the amount of metadata grew. Speedb was specifically designed to address the problems they found. Listen to the podcast to learn more.

RocksDB is an embedded key-value store engine that manages the metadata for just about every open source project that needs one. RocksDB is Facebook’s open source fork of Google’s LevelDB database.

The main issue with RocksDB is that when you have a lot of metadata (key-value pairs), its performance suffers from highly variable latency and write stalls.

Most RocksDB users are aware of these problems and turn to sharding the database to address them (by essentially shrinking the amount of metadata under management within a single node/instance).

Historically, key-value stores used B+-trees to store data. B+-trees are great for reading, but bad for writing. Namely, the B+-tree usually had to be rebalanced when entries were added and potentially when they were updated. This could cause a cascade of read-write IO throughout the tree, delaying the original IO.

Log Structured Merge trees (LSM-trees) were created to reduce these write problems while at the same time providing B+-tree speed for reading. Essentially, an LSM-tree is an in-memory sequence of (sometimes sorted) key-value pairs that can be written (destaged) to multiple sorted string table (SST) files on some backing store. A hierarchical index is maintained in memory to identify which SSTs hold which key-value data.
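
To make the LSM idea concrete, here’s a toy Python sketch of the write path (not RocksDB’s actual code, just an illustration): writes land in an in-memory memtable, and when it fills up it’s flushed out as a sorted, immutable SST.

```python
# Toy LSM-tree write path: an in-memory memtable that flushes to sorted,
# immutable "SST" tables once it grows past a threshold. Illustrative only --
# real engines like RocksDB add write-ahead logs, bloom filters, etc.
MEMTABLE_LIMIT = 4   # tiny limit so a flush is easy to see

memtable = {}        # latest value per key, held in memory
sst_files = []       # list of flushed SSTs, newest last

def flush_memtable():
    """Destage the memtable as one sorted string table (SST)."""
    global memtable
    sst = sorted(memtable.items())   # sort by key on the way out
    sst_files.append(sst)
    memtable = {}

def put(key, value):
    memtable[key] = value
    if len(memtable) >= MEMTABLE_LIMIT:
        flush_memtable()

def get(key):
    """Check memory first, then SSTs from newest to oldest."""
    if key in memtable:
        return memtable[key]
    for sst in reversed(sst_files):
        for k, v in sst:
            if k == key:
                return v
    return None

for i in range(10):
    put(f"key{i:03d}", f"value{i}")
print(get("key002"), len(sst_files), "SSTs flushed")
```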

RocksDB uses LSM-tree in-memory data structures to buffer writes. When memory fills up, the LSM tree is destaged to one or more SST files on backing store. However, when first written, SSTs aren’t necessarily in sorted key order, and they may contain key-value entries that duplicate what’s already in other SSTs.

So earlier versions of SSTs need to be read back in, compacted (duplicate key-value entries deleted), sorted, and written back out. The earliest version of the SSTs is considered level 0 (L0), the next (first-level compacted and sorted) version is L1, and this process can go on, generating L2 through Ln SSTs. We would call this garbage collection; the metadata world calls it compaction.
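
Compaction itself is essentially a merge that drops stale duplicates. A rough sketch of the idea (again, not Speedb’s or RocksDB’s actual algorithm):

```python
# Toy compaction: merge several overlapping SSTs (newest last) into a single
# sorted SST, keeping only the newest value for each key. This is the
# read-and-rewrite work that generates the extra IO discussed above.
def compact(ssts):
    merged = {}
    for sst in ssts:                 # iterate oldest to newest...
        for key, value in sst:
            merged[key] = value      # ...so newer values overwrite older ones
    return sorted(merged.items())    # one sorted, de-duplicated SST

l0 = [
    [("a", 1), ("c", 3)],            # older SST
    [("a", 9), ("b", 2)],            # newer SST: "a" supersedes the older one
]
print(compact(l0))                   # [('a', 9), ('b', 2), ('c', 3)]
```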

But each time an SST is written out, that’s another read of all the key-value pairs AND another write to storage. In SSDs we would call these repeated writes write amplification. It turns out that RocksDB can have up to 30X write amplification for a key-value entry. This means that instead of being written just once or twice, an entry is written (and reread) up to 30 times. This IO takes away bandwidth and processing power from normal metadata read and write activity, which impacts IO performance.
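
A quick back-of-the-envelope calculation shows why that hurts (the figures below are made up for illustration, only the 30X and 5X multipliers come from the discussion):

```python
# Back-of-the-envelope write amplification: if every key-value pair is
# rewritten at each compaction pass, total device writes are roughly
# user_bytes * write_amp. Figures here are illustrative, not measured.
user_writes_gb = 10          # metadata the application actually wrote
write_amp_rocksdb = 30       # up to ~30X per the discussion
write_amp_speedb = 5         # Speedb's claimed figure for large stores

print("RocksDB device writes:", user_writes_gb * write_amp_rocksdb, "GB")
print("Speedb device writes: ", user_writes_gb * write_amp_speedb, "GB")
# 300GB vs 50GB of background IO competing with normal reads and writes
```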

As GreyBeards know, storage (and flash) garbage collection can lead to unpredictable latencies and system busy times. Intense garbage collection (for SSDs) can seemingly hold off or stall all other IO for some amount of time during this activity. This is the main reason why RocksDB has highly variable latencies and write stalls.

Garbage collection is not an issue when you have limited amounts of metadata entries (key-value pairs), but as you get more entries, ongoing garbage collection can become a serious impediment to performing IO. When we say “large metadata stores” we are talking 30GB of metadata, with probably billions of key-value pair entries.

There appear to be two dimensions to (RocksDB) LSM-tree/SST file performance. One is the number of levels allowed and the other is the size of the SST files.

Speedb determined that two dimensions weren’t sufficient to solve RocksDB performance problems. And sharding the database seemed to be putting the burden on the customer to fix the issue. So Speedb restructured their LSM-trees and SSTs to create 3 or more dimensions to tune for database performance.

With Speedb’s restructured LSM-tree and SST files, they reduce write amplification for large metadata databases from 30X to 5X. That alone could easily increase system performance by a factor of 6.

Adi mentioned that for one cloud based customer, they were able to double performance with 1/4 the (cloud instance) server hardware, essentially providing an ~8X improvement in performance over RocksDB.

Adi also mentioned that they are targeting system developers with large metadata stores. Luckily, Speedb is a fully RocksDB-compatible replacement, which means it should take developers only ~30 minutes to convert a system to use Speedb.

We also asked about pricing. Adi said there are two current pricing models: 1) OEMs pay a revenue share to use Speedb and 2) non-OEMs can license the product on a per node per month basis. Given Speedb’s node efficiency over RocksDB, fewer nodes should be required to support the same performance for any given metadata store.

Adi also mentioned they are in the process of releasing an open source version of Speedb that incorporates some of the enterprise product. This way developers can try Speedb to see how it works, for free. It won’t be the complete product but it’s better than native RocksDB.

Adi Gelvan, Co-Founder and CEO Speedb

Adi Gelvan is co-founder and CEO of Speedb, a data management startup that provides a drop-in replacement for the RocksDB embedded storage engine.

Adi is a former IT infrastructure manager with over two decades of experience in management, commercialization and executive sales positions. Adi specializes in leading global software technology companies, like Infinidat and SQream, to outstanding growth.

Adi holds a double academic degree in mathematics & computer science.

131: GreyBeards talk native K8s data protection using Veritas NetBackup with Reneé Carlisle

The GreyBeards have been discussing K8s storage services a lot over the last year or so and it was time to understand how container apps and data could be protected. Recently, we saw an article about a Veritas-funded survey discussing the need for data protection in K8s. As such, it seemed a good time to have a talk with Reneé Carlisle (@VeritasTechLLC), Staff Product Manager for NetBackup (K8S), Veritas.

It turns out that Veritas NetBackup (NBU) has just released its 2nd version of K8s data protection. It’s gone completely (K8s) native. That is, Veritas has completely re-implemented all 3 tiers of NBU as K8s micro services. Moreover, the new release still supports all other NBU infrastructure implementations, such as bare metal or VM NBU primary server/media server services. It’s almost like all the data protection NBU has offered the enterprise over the years is now also available for K8s container apps. Listen to the podcast to learn more.

To make use of NBU K8s, backup admins establish named gold, silver, bronze backup policies selecting frequency of backups, retention periods, backup storage, etc. Then DevOps would tag a namespace, pods, containers, or PVs with those data protection policy names. Once this is done, NBU K8S will start protecting that namespace, pod, container, or PV.
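
As a rough illustration of what the DevOps side of that tagging might look like, here’s the official Kubernetes Python client attaching a policy label to a namespace. The label key and policy value below are hypothetical, not NetBackup’s documented convention, so treat this as a sketch of the mechanism rather than the product’s actual API.

```python
# Hypothetical example: label a namespace so a backup policy can select it.
# The label key/value ("protection-policy: gold") is made up for illustration;
# consult the NetBackup K8s docs for the actual convention.
from kubernetes import client, config

config.load_kube_config()                # use the local kubeconfig
v1 = client.CoreV1Api()

patch = {"metadata": {"labels": {"protection-policy": "gold"}}}
v1.patch_namespace(name="payments", body=patch)

ns = v1.read_namespace(name="payments")
print(ns.metadata.labels)                # {'protection-policy': 'gold', ...}
```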

In addition, backup admins can include or exclude specific K8s namespace(s), pod(s), container(s), labels (tags), or PVs to be backed up with a specific policy. When that policy is triggered it will go out into the cluster to see if those K8s elements are active and start protecting them or excluding them from protection as requested.

NBU K8s has an Operator service, Data Mover services and other micro services that execute in the cluster. That is, at least one Operator service must be deployed in the cluster (recommended to be in a separate namespace but this is optional). The Operator service is the control plane for NBU K8S services. It will spin up data movers when needed and spin them down when done.

The Operator service supports a CLI but, more importantly to DevOps, a completely implemented RESTful API service. It turns out the CLI is implemented on top of the NBU (Operator) API. With the NBU API, DevOps CI/CD tools or other automation can perform all the data protection services needed to protect K8s.

One historical issue with backup processing is that it can consume every ounce of network/storage and sometimes compute power in an environment. The enterprise-class data movers (or maybe the Operator control plane) have various mechanisms to constrain or limit NBU K8S resource consumption so that this doesn’t become a problem.

But as the Operator and its Data Movers are just micro services, if there’s a need for more throughput, more can be spun up, or if there’s a need to reduce bandwidth, some of them can be spun down, all with no manual intervention whatsoever.

Furthermore, NBU K8s can be used to restore/recover PVs, containers, applications or namespaces to other, CNCF-compliant K8s infrastructure. So, if you wanted to, say, move your K8s namespace from AKS to GKE, or from on-prem to Red Hat OpenShift, it becomes a simple matter of moving the last NBU backup to the target environment, deploying NBU K8s in that environment and restoring the namespace.

NBU K8s can also operate in the cloud just as well as on prem and works in any CNCF compatible K8s environment which includes AKS, EKS, GKE, VMware Tanzu and OpenShift.

In the latest NBU K8s they implemented new, enterprise class Data Movers as micro services in order to more efficiently protect and recover K8S resources. Enterprise class Data Movers can perform virus-scanning/ransomware detection, encryption, data compression, and other services that enterprise customers have come to expect from NBU data protection.

NBU K8S accesses PV, container, pod and namespace data and metadata using standard CSI storage providers and normal K8s API services.

As mentioned earlier, in the latest iteration of NBU K8s, they have completely implemented their NBU infrastructure natively as containers. That adds K8s auto-scaling and full CI/CD automation via APIs to all the rest of the NBU infrastructure operating completely in the K8s cluster.

So, now backup admins can run NBU completely in K8s or run just the Operator and its data mover services connecting to other NBU infrastructure (primary server and media servers) executing elsewhere in the data center.

NBU K8s supports all the various disk, dedicated backup appliance, object/cloud storage, or other backup media options that NBU uses. So that means you can store your K8s backup data in the cloud, on secondary storage appliances, or anyplace else that’s supported by NBU.

Licensing for NBU K8s follows currently available Veritas licensing, such as front-end TB protected; subscription and term licensing options are available.

Reneé Carlisle, Staff Product Manager, Veritas NetBackup (K8S)

Reneé (LinkedIn) has been with Veritas Technologies for eleven years in various focus areas within the NetBackup Product Management Team. In her current role she is the Product Manager responsible for the NetBackup strategic direction of Modern Platforms including Kubernetes and OpenStack. She has a significant technical background in many of the NetBackup features including Kubernetes, virtualization, Accelerator, and cloud.

Prior to working for Veritas, she was a customer running a large-scale NetBackup operation as well as a partner implementing, designing, and integrating NetBackup in many different companies.

130: GreyBeards talk high-speed database access using Apache Arrow Flight, with James Duong and David Li

We had heard about Apache Arrow and Arrow Flight as a high-performing database with access speeds to match for a while now, and finally got a chance to hear what it was all about with James Duong, Co-Founder of Bit Quill Technologies/Senior Staff Developer at Dremio, and David Li (@lidavidm), Apache PMC and software developer at Voltron Data.

First, Apache Arrow is an open source, in-memory database (GitHub repo) for columnar data that enables lightning-fast access and processing of data. Apache Arrow Flight is a set of interfaces, protocols, and services that parallelizes access to load and unload Arrow data over the network, from storage to memory and back, very fast. Listen to the podcast to learn more.

Columnar databases are all the rage these days and have more or less taken over from row-oriented databases. With a row-based database, data is stored (and accessed) row by row. In a columnar database, data is stored in columns, i.e., all data for one column is stored in sequence and then the next column is stored in sequence. Columnar databases can be queried/processed faster than row databases (depending on whether you are looking at/accessing multiple columns per row or not). And columnar data should compress better as all the data in a single column is of the same type.

Also, the fact that columns are located contiguously in memory means that if you process a column at a time, CPU data caches should work better. This is because they can grab a whole vector (a column’s worth of data) with one request.
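
Here’s a small Python/pyarrow sketch of that row-vs-column distinction: the same three records, but Arrow lays each column out as one contiguous array, so a per-column aggregate only touches that column’s buffer.

```python
# Row vs columnar layout for the same three records.
import pyarrow as pa
import pyarrow.compute as pc

rows = [  # row-oriented: each record stored (and read) as a unit
    {"id": 1, "price": 9.99, "qty": 2},
    {"id": 2, "price": 4.50, "qty": 5},
    {"id": 3, "price": 7.25, "qty": 1},
]

# Columnar: each column is one contiguous Arrow array
table = pa.table({
    "id":    [r["id"] for r in rows],
    "price": [r["price"] for r in rows],
    "qty":   [r["qty"] for r in rows],
})

# A single-column aggregate only touches the "price" buffer
print(pc.sum(table["price"]))   # 21.74
print(table.schema)
```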

Arrow data is processed and accessed in record batches. These are 2D segments which represent all the columns for a sequence/set of rows. Record batches are the unit of parallelism in Arrow and Arrow Flight. So an Arrow client operating on one CPU thread/core/chip or server could be processing one record batch while another thread, core, or server processes a different record batch.
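
In pyarrow, a record batch is exactly that 2D slice: a set of equal-length column arrays covering some range of rows. A minimal example:

```python
# A record batch: all columns for a contiguous run of rows -- the unit of
# parallelism in Arrow and Arrow Flight.
import pyarrow as pa

batch = pa.RecordBatch.from_arrays(
    [
        pa.array([1, 2, 3, 4]),
        pa.array(["a", "b", "c", "d"]),
    ],
    names=["id", "label"],
)
print(batch.num_rows, batch.num_columns)   # 4 2

# A large dataset is just many batches; each can be handed to a different
# thread, core, or server for processing.
table = pa.Table.from_batches([batch, batch])
print(table.num_rows)                      # 8
```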

Arrow Flight (GitHub RPC format doc repo) is an RPC framework that includes APIs, protocols, standards (for on-storage, on-wire and in-memory formats) and libraries used to transfer Arrow data and metadata (record batches) across the network. A typical system has both Flight clients and Flight services.

Arrow Flight currently uses Google’s gRPC for data transfers. gRPC is an open source remote procedure call (RPC) service that supports within data center, across data centers and out to the edge processing services. Although Arrow Flight is currently implemented on top of gRPC, other network protocols will be supported in the future.

What makes Arrow Flight so fast is its ability to support parallel transfers. That is, customers can configure Arrow (Flight) clients across clusters of servers and Arrow (Flight) services residing on one or more other servers. Any client can request metadata and record batches from any endpoint (Flight service) in the data center. And yes, Arrow data can be supplied from multiple endpoints by being mirrored/replicated. All data transfers can operate in parallel across all Flight clients and services, with no known bottleneck other than the network.
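
That parallel access pattern shows up directly in the pyarrow Flight client API: a GetFlightInfo call returns a list of endpoints, and each endpoint’s ticket can be fetched independently. The server URI and query below are placeholders, and the loop reads endpoints serially for simplicity where a real client would spread them across threads or machines.

```python
# Sketch of the Arrow Flight access pattern with pyarrow. The server URI and
# command are placeholders -- point these at a real Flight service.
import pyarrow.flight as flight

client = flight.connect("grpc://flight-server.example.com:8815")

# Ask the service how the dataset is laid out...
info = client.get_flight_info(
    flight.FlightDescriptor.for_command(b"SELECT * FROM events")
)

# ...then fetch each endpoint's record batches. Each endpoint could be read
# by a different thread or machine in parallel; this loop does it serially.
tables = []
for endpoint in info.endpoints:
    reader = client.do_get(endpoint.ticket)
    tables.append(reader.read_all())

print(sum(t.num_rows for t in tables), "rows fetched")
```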

A single stream of Arrow Flight data was able to deliver 20GB/sec. The fact that you can have any (?) number of Arrow Flight data streams in operation at the same time makes that a very interesting number.

Also, Arrow data can be stored on or sourced from typical data lakes such as Azure Data Lake, AWS S3, Google Cloud storage, etc.

Another advantage of Arrow Flight is the ability to use the same format on the wire and in storage. Normally JDBC (and ODBC) have on-storage and on-wire formats which require format conversion (serialization) to move data from storage/memory to the wire and another conversion (deserialization) to move data from the on-wire format back to the in-storage/memory format. Arrow Flight does away with serialization and deserialization of data altogether and uses the same format on the wire and in storage.

Arrow Flight SQL allows Arrow processing of SQL database data. My understanding is that customers using non-Arrow databases such as Oracle, SQL Server, Postgres, etc. can use Arrow Flight SQL to provide Arrow in-memory database processing/query execution for their data.

Arrow and Arrow Flight are primarily used to process data analytics workloads, but Arrow also has a new execution engine, the Arrow Gandiva project, that enables vectorized processing of Arrow data. This is a special execution engine for Arrow that supports X86 cores with AVX instructions, (NVIDIA) GPUs, and FPGAs.

There’s also an open source package, Fletcher, used to create Arrow and Arrow Flight processing HDLs so that customers can add Arrow data processing and Arrow Flight data transfer functionality to custom-built FPGAs.

One challenge with open source software is support for problems/bugs that crop up. An active developer community helps, but enterprise customers require professional, on-call 7×24 (5×12?) support for all their critical (and most non-critical) software. Voltron Data (David’s company) provides paid-for support for Arrow Flight and Arrow data services.

The other major problem with open source software has been use complexity. At the moment the Arrow Flight team is very responsive in clarifying documentation and is trying to make it easier to use. But today, Arrow Flight is mostly a set of APIs, libraries and connectors that end users can use to stand up Arrow (Flight) clients and servers to transfer Arrow data between them.

James Duong, Co-Founder Bit Quill Technologies & Sr. Staff Developer at Dremio

An Apache Arrow contributor, cofounder at Bit Quill Technologies, and contributor to Dremio Corporation projects, James Duong has worked with databases for over 15 years, from backend query engines to drivers and protocols. He’s worked with a variety of relational, big data, and cloud databases including Dremio, SQL Server, Redshift, and Hive.

Previously at Simba Technologies, James architected and built connectors for sources, as well as designing the Simba Engine SDK for developing connectivity solutions for any data source.

Bit Quill Technologies, the company James helped co-found, builds back end software in the data and cloud space. Bit Quill has built a name for itself as a producer of high-quality software, a collaborative approach to design and development, and a love for good tech and happy people.

Balancing his passion for the data ecosystem with a young family, James occasionally steps away from it all to go hiking.

David Li, Apache Arrow PMC and software engineer at Voltron Data

David is a PMC member for Apache Arrow and a software engineer at Voltron Data (formerly known as Ursa Computing). Prior to that, he worked on data services and Apache Arrow at Two Sigma.

David holds an M.Eng. in Computer Science from Cornell University.

129: GreyBeards talk composable infrastructure with GigaIO’s Matt Demas, Field CTO

We haven’t talked composable infrastructure in a while now but it’s been heating up lately. GigaIO has some interesting tech and I’ve been meaning to have them on the show but scheduling never seemed to work out. Finally, we managed to sync schedules and have Matt Demas, field CTO at GigaIO (@giga_io) on our show.

Also, please welcome Jason Collier (@bocanuts), a long time friend, technical guru and innovator to our show as another co-host. We used to have these crazy discussions in front of financial analysts where we disagreed completely on the direction of IT. We don’t do these anymore, probably because the complexities in this industry can be hard to grasp for some. From now on, Jason will be added to our gaggle of GreyBeard co-hosts.

GigaIO has taken a different route to composability than some other vendors we have talked with. For one, they seem inordinately focused on speed of access and reducing latencies. For another, they’re the only ones out there, to our knowledge, demonstrating how today’s technology can compose and share memory across servers, storage, GPUs and just about anything with DRAM hanging off a PCIe bus. Listen to the podcast to learn more.

GigaIO started out with pooling/composing memory across PCIe devices. Their current solution is built around a ToR (currently Gen4) PCIe switch with logic and a set of pooling appliances (JBoG[PUs], JBoF[lash], JBoM[emory], …). They use their FabreX fabric to supply rack-scale composable infrastructure that can move (attach) PCIe componentry (GPUs, FPGAs, SSDs, etc.) to any server on the fabric, to service workloads.

We spent an awfully long time talking about composing memory. I didn’t think this was currently available, at least not until the next version of CXL, but Matt said GigaIO, together with their partner MemVerge, is doing it today over FabreX.

We’ve talked with MemVerge before (see: 102: GreyBeards talk big memory … episode). But when last we met, MemVerge had a memory appliance that virtualized DRAM and Optane into an auto-tiering, dual-tier memory. Apparently, with GigaIO’s help they can now attach a third tier of memory to any server that needs it. I asked Matt what the extended DRAM response time to memory requests was and he said ~300ns. And then he said that the next gen PCIe technology will take this down considerably.

Matt and Jason started talking about High Bandwidth Memory (HBM), which is internal to GPUs, AI boards, HPC servers and some select CPUs, and which stacks synchronous DRAM (SDRAM) into a 3D package. 2nd gen HBM silicon is capable of 256 GB/sec per package. Given this level of access and performance, Matt indicated that GigaIO is capable of sharing this memory across the fabric as well.

We then started talking about software and how users can control FabreX and their technology to compose infrastructure. Matt said GigaIO has no GUI but rather uses Redfish management, a fully RESTful interface and API. Redfish has been around for ~6 yrs now and has become the de facto standard for management of server infrastructure. GigaIO composable infrastructure support has been natively integrated into a couple of standard cluster managers, for example, CIQ Singularity & Fuzzball, Bright Computing cluster managers and SLURM cluster scheduling. Matt also mentioned they are well plugged into OCP.
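
For a sense of what Redfish-style management looks like from automation, here’s a generic Python sketch that walks the standard Redfish service root. The host and credentials are placeholders, and any GigaIO-specific composition resources would live under their own schema that isn’t shown here, so take this as a sketch of the standard, not GigaIO’s actual endpoints.

```python
# Generic Redfish walk: the /redfish/v1/ service root and Systems collection
# are part of the DMTF Redfish standard; host and credentials below are
# placeholders. Vendor-specific (e.g., composition) resources are not shown.
import requests

BASE = "https://fabrex-mgmt.example.com"
AUTH = ("admin", "password")

root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
print("Redfish version:", root.get("RedfishVersion"))

systems_uri = root["Systems"]["@odata.id"]
systems = requests.get(f"{BASE}{systems_uri}", auth=AUTH, verify=False).json()

# List each composed/physical system and its power state
for member in systems.get("Members", []):
    system = requests.get(f"{BASE}{member['@odata.id']}", auth=AUTH,
                          verify=False).json()
    print(system.get("Name"), system.get("PowerState"))
```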

Composable infrastructure seems to have generated new interest with HPC customers that are deploying bucketfuls of expensive GPUs with their congregation of compute cores. Using GigaIO, HPC environments like these can, overnight, go from maybe 30% average GPU utilization to 70%. Doing so can substantially reduce acquisition and operational costs for GPU infrastructure. One would think the cloud guys might be interested as well.

Matt Demas, Field CTO, GigaIO

Matt’s career spans two decades of experience in architecting innovative IT solutions, starting with the US Air Force. He has built federal, healthcare, and education-based vertical solutions at companies like Dell, where he was a Senior Solutions Architect. Immediately prior to joining GigaIO, he served as Field CTO at Liqid. 

Matt holds a Bachelor’s degree in Information Technology from American InterContinental University, and an MBA from Concordia University Austin.

128: GreyBeards talk containers, K8s, and object storage with AB Periasamy, Co-Founder & CEO MinIO

Sponsored by:

Once again Keith and I are talking K8s storage, only this time it was object storage. Anand Babu (AB) Periasamy, Co-founder and CEO of MinIO, has been on our show a couple of times now and it’s always an insightful discussion. He’s got an uncommon perspective on IT today and what needs to change.

Although MinIO is an open source, uber-compatible, S3 object store, AB more often talks like a revolutionary, touting the benefits of containerization, scale and automation with K8s. Object storage is just one of the vehicles to help get there. Listen to the podcast to learn more.

We started our discussion on the changing role of object storage in applications. Object storage started out as an archive solution. But then, over time, something happened, modern database startups adopted object storage to hold primary data, then analytics moved over to objects in a big way, and finally AI/ML came out with an unquenchable thirst for data and object storage was its only salvation.

Keith questioned the use of objects in analytics. Both AB and I pointed out that Splunk (and Spark) fully support objects. But Keith said R (and Python) data scientists prefer to use the protocols they learned in school, and these were all about files (CSV, JPEG, JSON). AB said what usually happens is that this data is stored in object storage and then downloaded onto local disk as files to be processed. That’s not to say that R or Python can’t process objects directly, but when they don’t, the ultimate source of data truth is object storage.

Somehow, we got onto the multi-cloud question. AB said the multi-cloud is really all about containers and K8s. When customers talk multi-cloud, what they really mean is they want applications that can run anywhere, in any cloud, on premise, or anyplace else for that matter.

I thought multi-cloud was a DR solution. But AB reiterated it’s more a solution to vendor lock-in. What containerization gives IT is the option (ability) to run applications anywhere, but IT is not obligated to exercise that option unless it makes sense.

AB said that devs today don’t develop apps in the cloud anymore. They develop locally using minikube; once it’s working there, they add CI/CD tool chains and then move it to its final resting place (the cloud or wherever it ultimately needs to run). It turns out containers, YAML files, scripts, etc. are small and trivial to upload, migrate, or move to any internet location. And with ubiquitous K8s support available everywhere, they can move anywhere unchanged.

But where’s the data? AB said anywhere the app executes. It’s never moved; it takes too much time and effort to move that much data. But as an application moves, any data it generates grows in that location over time.

We next turned to how MinIO was supported in K8s. AB mentioned they have a DirectPV CSI driver that creates a distributed PV to support MinIO services on local disks. In this way, containers needing access to MinIO S3 object storage can directly allocate data to user storage.

Then we asked about opinionated stacks. AB said most customers don’t want these. They may have some value in preserving an infrastructure environment, but customers are better off transitioning to containerization and building any stack within those containers and the K8s cluster services.

On the other hand, MinIO object storage is available with the same S3 API on bare metal, VMware, OpenShift, K8s, every public cloud, and most private clouds as well. The advantage of the same, single storage interface available everywhere can’t be beat.
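
That “same interface everywhere” point is easy to see in code: the exact same client calls work whether the endpoint is a laptop, an on-prem cluster, or a cloud deployment. The endpoint, credentials, bucket and object names below are placeholders, not anything from the discussion.

```python
# Same S3 calls against any MinIO (or S3-compatible) endpoint -- only the
# endpoint and credentials change. Values here are placeholders.
from minio import Minio

client = Minio(
    "minio.example.com:9000",      # swap for localhost, on-prem, or cloud
    access_key="YOUR-ACCESS-KEY",
    secret_key="YOUR-SECRET-KEY",
    secure=True,
)

bucket = "training-data"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload a local file as an object, then list what's under the prefix
client.fput_object(bucket, "images/cat-0001.jpg", "cat-0001.jpg")

for obj in client.list_objects(bucket, prefix="images/", recursive=True):
    print(obj.object_name, obj.size)
```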

MinIO recently closed a new funding round of $103M. AB mentioned they had new investments from Intel and Softbank, but I was more interested in plans he had for the new cash. And Keith asked where the new funding left MinIO with respect to its competitors in this space.

AB said it was never about the money, it was more about what you did with your team that mattered in the long run. AB’s imperative was to enter an existing market with a better product and succeed with that. Creating a new market plus a new product always costs more, takes longer and is riskier.

As for the new funds, there are really two ways to go: 1) improve the current product or 2) create a new one. My sense is that AB leans towards improving the current product.

For instance, MinIO is often asked to support a different object storage API. But AB’s perspective is that S3 was an early bet that paid off well by becoming the de facto standard for object storage. Supporting another API would divide his resources and probably make their current product worse, not better. AB mentioned they are getting 1.1M downloads of their Docker container version, so they seem to be succeeding well with the current product.

Anand Babu (AB) Periasamy, Co-founder and CEO

AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).

AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat’s Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory’s “Thunder” code, which, at the time, was the second fastest in the world.

AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.