
Storage Developer Conference

SNIA Technical Council



About Us

Storage Developer Podcast, created by developers for developers.

Latest Episodes

#115: Accelerating RocksDB with Eideticom’s NoLoad NVMe-based Computational Storage Processor

RocksDB, a high-performance key-value database developed by Facebook, has proven effective at exploiting the high data speeds made possible by Solid State Drives (SSDs). By leveraging the NVMe standard, Eideticom's NoLoad presents FPGA computational storage processors as NVMe namespaces to the operating system and enables efficient data transfer between the NoLoad Computational Storage Processors (CSPs), host memory and other NVMe/PCIe devices in the system. Presenting Computational Storage Processors as NVMe namespaces has the significant benefit of requiring minimal software effort to integrate computational resources. In this presentation we use Eideticom's NoLoad to speed up RocksDB. Compared to software compaction running on a Dell PowerEdge R7425 server, our NoLoad, running on Xilinx's Alveo U280, delivered a 6x improvement in database transactions and a 2.5x reduction in CPU usage while reducing worst-case latency by 2.7x. Learning Objectives: 1) Computational storage with NVMe; 2) Presenting computational storage processors as NVMe namespaces; 3) Accelerating database access with NVMe computational storage processors.
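For context on the namespace idea above, here is a minimal Python sketch (not Eideticom's tooling) that lists NVMe controllers and their namespaces from the standard Linux sysfs layout; a CSP exposed as an NVMe namespace would simply appear in this listing alongside ordinary SSD namespaces:

import os

SYSFS_NVME = "/sys/class/nvme"   # standard Linux sysfs location for NVMe controllers

def list_nvme_namespaces():
    # A computational storage processor presented as an NVMe namespace shows up
    # here exactly like an ordinary SSD namespace (e.g. nvme0n1, nvme1n1, ...).
    if not os.path.isdir(SYSFS_NVME):
        print("no NVMe controllers found")
        return
    for ctrl in sorted(os.listdir(SYSFS_NVME)):          # nvme0, nvme1, ...
        ctrl_path = os.path.join(SYSFS_NVME, ctrl)
        try:
            with open(os.path.join(ctrl_path, "model")) as f:
                model = f.read().strip()
        except OSError:
            model = "unknown"
        namespaces = [d for d in os.listdir(ctrl_path) if d.startswith(ctrl + "n")]
        print(f"{ctrl}: model={model!r} namespaces={namespaces or ['<none>']}")

if __name__ == "__main__":
    list_nvme_namespaces()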

42 MIN · 3 d ago

#114: NVM Express Specifications: Mastering Today’s Architecture and Preparing for Tomorrow’s

Since the first release of NVMe 1.0 in 2011, the NVMe family of specifications has continued to expand to support current and future storage markets, adding new features and functionality. With that natural, organic growth, however, comes additional complexity. To refocus on simplicity and ease of development, the NVM Express group has undertaken a massive effort to refactor the specification. The upcoming refactored specification, NVMe 2.0, integrates the scalable and flexible NVMe over Fabrics architecture into the NVMe base specification, meeting the needs of platform designers, device vendors and developers. But how can developers optimally design their products using the new NVMe 2.0 specification? This session will provide attendees with the following insights:
• An overview of the existing specification structure, its logic and its limitations
• Highlights of how developers use the current specification before refactoring
• How the refactored specification enables companies to architect their products with better awareness of future areas of innovation
• Details on how new features and functionality will be included in the refactored specification
• How developers can leverage the refactored NVMe 2.0 specification to bring new products to market simply and efficiently
• An examination of the current projects and how to contribute
Learning Objectives: 1) Overview of the current NVMe specification structure; 2) Introduction to NVMe 2.0: how the refactored specification enables companies and developers to bring new products to market simply and efficiently; 3) The new features and functionality that will be included in NVMe 2.0, and how to get involved in current projects.

50 MIN · 2 w ago

#113: Latency is more than just a number

Over the years, SSD QoS has become more important to a variety of storage market segments. Traditional latency reporting methods do not always accurately depict QoS behavior, which makes it hard to understand what events lead to a given QoS level and how to mitigate the latency events that degrade it. Defining correct statistical techniques for large populations of latencies deepens our understanding of what drives levels of QoS. Advanced techniques, such as machine learning and AI, allow a deeper understanding of what drives QoS and how to correctly manage large quantities of latencies. New visualization techniques enhance our ability to understand latency behavior and define the critical scenarios that drive latency. Learning Objectives: 1) Identify shortcomings of current QoS reporting; 2) Generate more reliable QoS values; 3) Techniques to broaden understanding of groups of latencies; 4) Identification of critical transitions in latency; 5) Identify inaccuracies that inhibit understanding of QoS.
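As a small illustration of why averages hide tail behavior, the following sketch uses synthetic, made-up latency data and NumPy percentiles to contrast the mean with p99/p99.9/p99.99 values; it is illustrative only and not material from the talk:

import numpy as np

# Synthetic latency population (microseconds): mostly fast completions plus a
# small tail of slow outliers, loosely mimicking an SSD under background work.
rng = np.random.default_rng(seed=42)
fast = rng.normal(loc=90, scale=10, size=99_000)
slow = rng.normal(loc=4_000, scale=500, size=1_000)
latencies_us = np.clip(np.concatenate([fast, slow]), 1, None)

print(f"mean   : {latencies_us.mean():8.1f} us")
print(f"median : {np.percentile(latencies_us, 50):8.1f} us")
for p in (99, 99.9, 99.99):
    print(f"p{p:<6}: {np.percentile(latencies_us, p):8.1f} us")
# The mean looks healthy while p99.9/p99.99 reveal the excursions that actually
# determine the QoS an application experiences.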

51 MIN · NOV 6

#112: Computational Storage Architecture Development

With the formation of the Computational Storage TWG and growing market interest in these new and emerging solutions, it is imperative to understand how to develop, deploy and scale these new technologies. This session walks through the new definitions, shows how each can be deployed, and presents use cases for NGD Systems Computational Storage Devices (CSDs). Learning Objectives: 1) Learn the different kinds of Computational Storage; 2) Understand the use cases for each type of solution; 3) Determine the ease of deployment and the value of these solutions.

50 MIN · OCT 29

#111: SMB3 Landscape and Directions

SMB3 has seen significant adoption as the storage protocol of choice for running private cloud deployments. With the recent advances in persistent memory technologies, we will take a look at how we can leverage the SMB3 protocol in conjunction with SMB Direct/RDMA to provide very low latency access to persistent memory devices across the network. With the increasing popularity of cloud storage, technologies like Azure Files, which provide seamless access to cloud-stored data via the standard SMB3 protocol, are seeing significant interest. One of the key requirements in this space is the ability to run SMB3 over a secure, firewall-friendly internet protocol. We will take a quick look at some work we are doing to enable SMB3 over QUIC, a recent UDP-based transport with strong security and interoperability properties. We will also explore some work we have done to enable on-the-wire compression for SMB3. Learning Objectives: 1) Learn how we can use SMB3 to set up direct RDMA access to remote persistent memory; 2) Using QUIC as a transport for SMB3; 3) How can we use data compression algorithms to optimize SMB data transfer?

45 MIN · OCT 15

#110: Datacenter Management of NVMe Drives

This talk describes work going on in three different organizations to enable scale-out management of NVMe SSDs. The soon-to-be-released NVMe-MI 1.1 standard will allow management from host-based agents as well as BMCs. This might be extended to allow support for Binary Encoded JSON (BEJ) for host agents and BMCs that want to support the Redfish standard. We will also cover supporting work going on in the SNIA Object Drive TWG and the DMTF. Learning Objectives: 1) Principles and limitations of scale-out datacenter management; 2) An understanding of the NVMe-MI standard; 3) A Redfish profile for NVMe drives; 4) Inside-the-box and outside-the-box management networks; 5) The Platform Level Data Model (PLDM).
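To make the Redfish angle concrete, here is a hedged sketch of how a management client might walk a Redfish service to list drives using Python's requests library; the /redfish/v1/ service root and the Drive properties are standard, but the BMC address, credentials, and exact resource layout shown here are assumptions that vary by platform:

import requests

BMC = "https://bmc.example.com"     # hypothetical BMC address
AUTH = ("admin", "password")        # placeholder credentials

def get(path):
    # Fetch a Redfish resource and return its parsed JSON body.
    r = requests.get(BMC + path, auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

root = get("/redfish/v1/")                       # standard Redfish service root
systems = get(root["Systems"]["@odata.id"])      # ComputerSystem collection
for member in systems.get("Members", []):
    system = get(member["@odata.id"])
    storage_link = system.get("Storage", {}).get("@odata.id")
    if not storage_link:
        continue
    for stor_ref in get(storage_link).get("Members", []):
        storage = get(stor_ref["@odata.id"])
        for drive_ref in storage.get("Drives", []):   # array of links to Drive resources
            drive = get(drive_ref["@odata.id"])
            print(drive.get("Id"), drive.get("Protocol"), drive.get("CapacityBytes"))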

43 MIN · OCT 9

#109: Real-world Performance Advantages of NVDIMM and NVMe

As NVDIMMs enter the realm of standard equipment on servers and storage arrays, and NVMe is standard equipment for servers and consumer devices alike, what is the actual performance advantage of using NVDIMM over NVMe, or NVMe over SAS or SATA SSDs? First, we'll review some purely synthetic benchmarks of single devices using different storage technologies and see how they differ. Then, we'll move to a more real-world environment and see what performance gains can be had. One use of NVDIMMs is as a transaction log to allow quick acknowledgement of write operations. In our real-world scenario, we discuss the performance differences of using NVDIMMs, NVMe flash, or SAS/SATA flash as the SLOG, or "write cache", for an OpenZFS pool. Learning Objectives: 1) Performance differences between different storage media and storage transports for transactional workloads; 2) Basic overview of OpenZFS and how a SLOG works; 3) Impact of low-latency NVDIMM and NVMe storage on application and user latency.
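To give a feel for the synchronous-write path that a SLOG accelerates, here is an illustrative Python timing sketch (not from the talk) that measures how quickly O_SYNC writes are acknowledged at a given path; pointing it at datasets whose pools use NVDIMM-, NVMe-, or SAS/SATA-backed log devices would surface the latency differences discussed:

import os, sys, time

def sync_write_latency(path, block=4096, iterations=1000):
    # Each O_SYNC write is only acknowledged once the data is stable, which on
    # an OpenZFS dataset exercises the ZIL/SLOG for synchronous writes.
    buf = os.urandom(block)
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_SYNC, 0o600)
    samples = []
    try:
        for _ in range(iterations):
            t0 = time.perf_counter()
            os.write(fd, buf)
            samples.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    samples.sort()
    return samples[len(samples) // 2], samples[int(len(samples) * 0.99)]

if __name__ == "__main__":
    # /tank/synctest.bin is a hypothetical file on the pool under test.
    target = sys.argv[1] if len(sys.argv) > 1 else "/tank/synctest.bin"
    p50, p99 = sync_write_latency(target)
    print(f"median {p50 * 1e6:.0f} us, p99 {p99 * 1e6:.0f} us per synchronous 4 KiB write")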

44 MIN · OCT 2

#108: SPDK NVMe: An In-depth Look at its Architecture and Design

The Storage Performance Development Kit (SPDK) open source project is gaining momentum in the storage industry for its drivers and libraries for building userspace, polled-mode storage applications and appliances. The SPDK NVMe driver was SPDK's first released building block and remains its most well-known. The driver's design and architecture are heavily influenced by SPDK's userspace polled-mode framework, which has resulted in some significant differences compared to traditional kernel NVMe drivers. This session presents an overview of the SPDK NVMe driver's architecture and design, a historical perspective on key design decisions, and a discussion of the driver's advantages and limitations. Learning Objectives: 1) Gain a deeper understanding of the architecture and design of the SPDK NVMe driver; 2) Identify the key design differences between a userspace polled-mode driver and a traditional kernel-mode driver; 3) Describe the key advantages and limitations of SPDK and its polled-mode NVMe driver.
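To illustrate the polled-mode idea in the abstract, the toy Python sketch below (a conceptual analogy, not SPDK's C API) shows an application thread that busy-polls a completion queue and reaps completions via callbacks instead of blocking until the device signals:

import collections, threading, time

submission_q = collections.deque()   # toy stand-ins for hardware queue pairs
completion_q = collections.deque()

def fake_device():
    # Pretend hardware: pull submissions, "process" them, post completions.
    while True:
        if submission_q:
            completion_q.append(submission_q.popleft())
        time.sleep(0.0001)

def submit(cmd_id, callback):
    submission_q.append((cmd_id, callback))

def process_completions(max_completions=32):
    # Polled-mode reaping: check the queue, invoke callbacks, never block.
    reaped = 0
    while completion_q and reaped < max_completions:
        cmd_id, callback = completion_q.popleft()
        callback(cmd_id)
        reaped += 1
    return reaped

threading.Thread(target=fake_device, daemon=True).start()
done = []
for i in range(8):
    submit(i, lambda cid: done.append(cid))
while len(done) < 8:   # the application's poll loop: no blocking wait for an interrupt
    process_completions()
print("completed commands:", sorted(done))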

34 MIN · SEP 17

#107: The Long and Winding Road to Persistent Memories

Persistent Memory is getting a lot of attention. SNIA has released a programming standard; NVDIMM makers, with the help of JEDEC, have created standardized hardware for developing and testing PM; and chip makers continue to promote upcoming devices, although few are currently available. In this talk two industry analysts, Jim Handy and Tom Coughlin, will describe the state of Persistent Memory and show a realistic roadmap of what the industry can expect to see and when to expect it. The presentation, based on three critical reports covering New Memory Technologies, NVDIMMs, and Intel's 3D XPoint Memory (also known as Optane), will illustrate the Persistent Memory market, the technologies that vie to play a role, and the critical economic obstacles that continue to impede these technologies' progress. We will also explore how advanced logic process technologies are likely to make persistent memories a standard ingredient in embedded applications, such as IoT nodes, long before they make sense in servers. Learning Objectives: 1) The state of emerging memory technologies; 2) The technologies that will be used in future NVDIMMs; 3) Emerging memory use in embedded and enterprise applications; 4) The costs of making emerging memories.

49 MIN · AUG 27

#106: Container Attached Storage (CAS) with openEBS

Applying microservice patterns to storage gives each workload its own Container Attached Storage (CAS) system. This puts the DevOps persona in full control of the storage requirements and brings data agility to Kubernetes (k8s) persistent workloads. We will go over the concept and implementation of CAS, as well as its orchestration. Learning Objectives: 1) Review modern applications and their storage needs, under the notion that applications have changed but someone forgot to tell storage; 2) The problems of using technologies like user-space I/O, in particular SPDK among others; 3) Looking at DevOps and the k8s model, how can we put the power of user-space storage in developers' hands? Virtio for containers? Direct access from the Go runtime, for example via SPDK?; 4) We have tried both and would like to share the outcome with you.
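As a concrete example of per-workload storage, here is a hedged sketch using the official Kubernetes Python client to request a PersistentVolumeClaim against an openEBS StorageClass; the class name openebs-hostpath, the PVC name, and the namespace are assumptions that depend on how openEBS is installed:

from kubernetes import client, config

config.load_kube_config()        # uses the local kubeconfig
core = client.CoreV1Api()

# Each workload requests its own volume; the openEBS provisioner behind the
# StorageClass turns the claim into a dedicated, container-attached volume.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-app-data"),           # hypothetical name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="openebs-hostpath",                    # assumed StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "5Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC demo-app-data created; reference it from the workload's pod spec.")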

39 MIN · AUG 20
