
SNIA Webcast: Encryption & Key Management for Storage Systems

I’ll be presenting at a BrightTalk seminar about key management today. The details:

US Date/Time:  Tuesday, June 7, 2016 – 1:00 pm PT / 4:00 pm ET

Register: https://www.brighttalk.com/webcast/663/196469

Moderator: Thomas Rivera, Senior Technical Associate, Hitachi Data Systems

Panelists: Tony Cox, Director of Strategy and Alliances, Cryptsoft; Eric Hibbard, CTO of Security and Privacy, Hitachi Data Systems; Walt Hubis, Owner, Hubis Technical Associates; Tim Hudson, CTO, Cryptsoft

About the Webcast

This Webcast will explore the fundamental concepts of implementing secure enterprise storage using current technologies, and will focus on the implementation of a practical secure storage system. The high-level requirements that drive the implementation of secure storage for the enterprise, including legal issues, key management, currently available technologies, and fiscal considerations, will be explored.

There will also be implementation examples that will illustrate how these requirements are applied to actual system implementations.

There will be a Q&A at the end for the audience to ask questions of the panelists.
 

Encryption, Key Management, and ISO/IEC 27040

SNIA has recently published a paper that I worked on with Eric Hibbard of Hitachi Data Systems and the SNIA Security Technical Work Group. This paper provides some insights into the requirements of ISO/IEC 27040 Information technology — Security techniques — Storage Security.

The paper is entitled Storage Security: Encryption and Key Management:

Abstract: The ISO/IEC 27040:2015 (Information technology – Security techniques – Storage security) standard provides detailed technical guidance on controls and methods for securing storage systems and ecosystems. This whitepaper describes the recommended guidelines for data confidentiality, including data in motion encryption, data at rest encryption, and key management. The practical implications of these recommendations are discussed from both an end user and storage vendor perspective.

2015 Self-Encrypting Drive Market and Technology Report

Tom Coughlin (Coughlin Associates) and I have been working on a report on self-encrypting drives (SEDs) in the marketplace. Here’s the release:

2015 Self-Encrypting Drive Market and Technology Report Released

Analysis and Projections for SED HDDs and SSDs

San Jose, CA—April 14, 2015—A newly released report from Coughlin Associates on the self-encrypting drive market, the 2015 Self-Encrypting Drive Market and Technology Report, provides 38 pages of in-depth analysis of factors preventing and promoting growth of self-encrypting HDDs and SSDs. Projections out to 2019 for SED capable and SED enabled hard disk drives and solid state drives show expected growth of these products and their applications.

The major conclusions from this report are:

  • By 2017 we project that 100% of all HDDs shipped will be SED capable, driven by implementation of this capability into commercial HDD controllers.
  • By 2018 about 11% of all HDD units shipped will shift to SED-enabled or SED-promoted products, driven by demand for security.
  • By 2018 the high, median and low estimates for SED enabled adoption for HDDs are 85 M, 70 M and 54 M units.
  • By 2014 almost all SSDs were SED capable, and by 2015 all of them had this capability.
  • Although SED feature implementation in 2018 will be 100% of the roughly 236 M SSDs shipped that year, fewer than 24 M of those units are projected to be used for security and data protection purposes.

Various factors behind the slow market adoption of the technology in its early history:

  • slow corporate IT spending due to uncertainty and tight IT budgets in the last few years,
  • lack of knowledge about the difference between HW-based encrypted SEDs and SW-encrypted solutions,
  • lack of training of OEMs and integrators on the use and advantages of SEDs,
  • legal issues limiting the use of encrypted drives in some countries,
  • a limited initial market mainly driven by government mandates,
  • until recently, a lack of common standards and a continuing lack of product certification,
  • lack of secure auditing facilities, and
  • concerns about data availability, largely due to key management issues and operating system support.

Various factors that favor the continued growth of SEDs:

  • cost parity of SEDs to non-self-encrypting storage devices will make it easier to get these products adopted universally,
  • with SEDs there is no discernible encryption time as there is with SW encryption,
  • SEDs don’t have the performance overhead that SW encryption running on the host has, leading to better overall system performance,
  • SEDs may have a somewhat longer useful life than drives used in a software encrypted system, due to increased reads and writes with SW encryption,
  • because the encryption key is stored on the storage device, it cannot be accessed by hacking the host, as it can be with SW encryption,
  • SEDs are less complex to implement in storage array encryption solutions,
  • government mandates and regulations are increasing the requirements for privacy and favor the use of SEDs, particularly those with FIPS 140 certification,
  • secure erase reduces re-provisioning and end-of-life costs, and is the only effective way to make data on an SSD inaccessible.

Purchase of the report and the accompanying PowerPoint presentation with report figures and tables provides the most definitive information on the self-encrypting HDD and SSD markets and technology.

The 2015 Self-Encrypting Drive Market and Technology Report is now available from Coughlin Associates. To get a copy, send in a completed order form from the report brochure at http://www.tomcoughlin.com/techpapers.htm, call us at 408-978-8184, or email us at tom@tomcoughlin.com.

Atomic Commands Part IV: Wrapping it Up

At the May T10 meeting in Vancouver BC, a revision of the atomic write proposal was brought in by Samsung, HP, and NetApp (SBC-4 SPC-5 Atomic writes and reads (14-043) [Martin, Ballard, Knight]) and was accepted by the committee. This provides for a single atomic write and associated information regarding atomic boundaries and granularities. It is unlikely that any vectored command will be proposed unless a strong champion arrives.

Such a champion is unlikely to appear in the near term, given the unprecedented consolidation now occurring in the solid state storage industry. For example, LSI was acquired by Avago and then sold the SandForce organization to Seagate, while Sandisk acquired Fusion-io. It is more likely that the features that flash technologies can provide will become available through proprietary software stacks for the time being.

Perhaps the most interesting work around persistent memory is occurring in the SNIA Non-Volatile Memory Programming technical work group (NVMP). This group has been dealing with some of the changes required by software applications to accommodate using non-volatile memory as storage. Discussions around atomics and transactional operations are routine in this work group, and participation by industry leaders has been steadily growing, a good indication of interest in this technology. The first draft specification is available now, and work is continuing on further topics, including remote persistent memory and isolation.

Flash Memory Summit 2014

I’ll be speaking at several events during the Flash Memory Summit, August 5-7 in Santa Clara, CA:

  • On Monday, there’s a SNIA Storage Tutorial session where I’ll be presenting my Secure Storage Tutorial.
  • I’ll be giving a really brief introduction to the Non-Volatile Memory Programming technical work group at the SNIA SSSI Solid State Storage Reception on Monday evening.
  • I’ll be participating in an open session Storage Security: Back to the Future (Security Track) where I’ll be talking about the impact of the upcoming ISO/IEC 27040 specification.

Atomic Commands Part III: Atomic Vectored Commands

The real value of vectored commands and atomic commands becomes apparent when they are combined into vectored atomic writes. A database update that requires multiple writes to update a record (e.g., multiple data fields, or a data field and its links) will be significantly sped up if the writes can be combined into a single vectored write command that is performed atomically. In this case, the data segments defined in the vectored command are either all completed successfully or, if an error occurs, all restored to their original values before the atomic write was attempted. In other words, if the vectored write operation succeeds, all of the data segments will contain the new data; if not, all of the data segments will contain the data they held before the vectored write operation was attempted. This aggregation of writes, along with the atomicity property, can lead to a significant improvement in database performance.
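
To make the all-or-nothing behavior concrete, here is a minimal sketch in C, simulated against an in-memory buffer that stands in for the device. The segment structure and the atomic_vectored_write() helper are my own illustration of the semantics described above, not the command format or interface from any T10 proposal.

/*
 * Minimal sketch (not from any standard or proposal): the all-or-nothing
 * semantics of a vectored atomic write, simulated against an in-memory
 * "device". The struct and helper names are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DEV_SIZE 64

struct segment {                 /* one entry in the vectored command's list */
    size_t offset;               /* starting offset on the device            */
    size_t length;               /* number of bytes to write                 */
    const unsigned char *data;   /* new data for this segment                */
};

/* Applies every segment and returns 0, or returns -1 and leaves the device
 * exactly as it was before the call.                                        */
static int atomic_vectored_write(unsigned char *dev, size_t dev_size,
                                 const struct segment *segs, size_t nsegs)
{
    unsigned char *snapshot = malloc(dev_size);
    if (!snapshot)
        return -1;
    memcpy(snapshot, dev, dev_size);             /* remember the old data      */

    for (size_t i = 0; i < nsegs; i++) {
        if (segs[i].offset + segs[i].length > dev_size) {
            memcpy(dev, snapshot, dev_size);     /* error: roll everything back */
            free(snapshot);
            return -1;
        }
        memcpy(dev + segs[i].offset, segs[i].data, segs[i].length);
    }
    free(snapshot);                              /* success: all segments new  */
    return 0;
}

int main(void)
{
    unsigned char device[DEV_SIZE];
    memset(device, 'o', sizeof(device));         /* 'o' marks the old data     */

    struct segment update[] = {                  /* non-contiguous record update */
        { 0,  4, (const unsigned char *)"NEW1" },
        { 32, 4, (const unsigned char *)"NEW2" },
    };

    if (atomic_vectored_write(device, sizeof(device), update, 2) == 0)
        printf("all segments written\n");
    else
        printf("nothing written, old data preserved\n");
    return 0;
}

If the helper returns an error, reading the buffer afterward still shows only the old data, which is exactly the guarantee the vectored atomic write proposal is after.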

So, what’s the problem?

First, there are issues with vectored commands around error reporting. If one of the segment writes fails, how do you tell the initiator where the command failed? If the writes are all atomic, it doesn’t matter, since all of the segments revert to the old data. It is interesting to note the position of the error for failure analysis, but that information can be obtained in a variety of ways, including vendor-specific methods.

Another problem is related to support for bi-directional commands. Here’s a flow diagram for a typical, non-vectored READ command:

Vectors Figure 2

And here’s the flow for a vectored read command:

Vectors Figure 3

In this case the initiator needs to send the segment descriptor list to the target (a data out phase) and then turn the bus around to receive the incoming data (a data in phase), which makes this a bi-directional SCSI command. This has been the single biggest objection to vectored commands. Many implementations of SCSI transports were not designed to accommodate this type of bus transaction. And it’s not just a firmware change – much of this low-level processing has been embedded in SCSI controller state machines.

Finally, implementing vectored atomic write operations is difficult in both traditional rotating media and array controller systems. For these types of systems, the expense and maintenance of such functionality simply does not make economic sense. But because flash memory based storage systems typically use some sort of write logging mechanism, implementing this functionality in them is remarkably easy.
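
As a rough sketch of why this is easy for flash (my own simplification, not any vendor’s actual flash translation layer): new data is written to freshly allocated pages first, and the logical-to-physical mapping is updated only after every segment has landed, so a failure before that final remap step leaves the old data visible.

/* Illustrative only: a toy remapping layer, not a real FTL. */
#include <stdio.h>
#include <string.h>

#define LBAS      8                   /* logical blocks exposed to the host  */
#define PAGES     32                  /* physical flash pages                */
#define PAGE_SIZE 16

static char flash[PAGES][PAGE_SIZE];  /* raw flash pages                     */
static int  l2p[LBAS];                /* logical-to-physical mapping         */
static int  next_free = LBAS;         /* simple bump allocator for new pages */

/* Stage new pages, then "commit" by swinging the mapping entries. Until the
 * mapping is updated, reads still resolve to the old pages, so a failure
 * part-way through leaves every logical block with its old contents.        */
static int atomic_update(const int *lbas, char data[][PAGE_SIZE], int n)
{
    int staged[PAGES];

    if (n > PAGES - next_free)
        return -1;                              /* nothing visible has changed */

    for (int i = 0; i < n; i++) {               /* 1. write to unused pages    */
        staged[i] = next_free++;
        memcpy(flash[staged[i]], data[i], PAGE_SIZE);
    }
    for (int i = 0; i < n; i++)                 /* 2. commit: remap the LBAs   */
        l2p[lbas[i]] = staged[i];               /*    (one metadata update in  */
    return 0;                                   /*    a real FTL's log)        */
}

int main(void)
{
    for (int i = 0; i < LBAS; i++) {            /* start with "old" data       */
        l2p[i] = i;
        snprintf(flash[i], PAGE_SIZE, "old-%d", i);
    }

    int lbas[2] = { 1, 5 };                     /* non-contiguous update       */
    char data[2][PAGE_SIZE] = { "new-1", "new-5" };

    atomic_update(lbas, data, 2);
    for (int i = 0; i < LBAS; i++)
        printf("LBA %d -> %s\n", i, flash[l2p[i]]);
    return 0;
}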

…to be continued…

Atomic Commands Part II: Vectored Commands

Another factor that’s important to consider in the atomic write story is the advent of PCIe based storage. Multiple vendors now produce PCIe cards that can provide terabytes of storage on a single card. The NVMe interface was developed to take advantage of this class of device, but T10 was not far behind: the SCSI over PCIe (SOP) and PCIe Queuing Interface (PQI) specifications were developed in response and are now reaching maturity. These developments have allowed database servers to become much more efficient by providing access to significant amounts of data at much higher speeds – usually by several orders of magnitude.

With this increase in speed, the system overhead required to process each SCSI command CDB becomes a much larger part of the total time required to process a write or read operation. With rotating media, it may take many milliseconds to complete a write command due to the latency inherent in the physical media. In this case, the time required to process the command CDB is very small compared to the overall operation. For flash devices, access times are measured in microseconds, and processing the CDB becomes a significant portion of completing the write operation.
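
For a rough sense of scale (these figures are my own illustrative assumptions, not measurements from any proposal): if the host and target together spend on the order of 20 µs handling a CDB, that is only 20 µs / 5,000 µs ≈ 0.4% of a 5 ms rotating-media write, but 20 µs / 100 µs = 20% of a 100 µs flash write – a roughly fifty-fold jump in relative overhead.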

Vectors Figure 1

A solution to this problem is to define vectored read and write operations. These commands permit read and write operations to multiple data segments which, unlike normal read or write commands, do not have to be contiguous. This is analogous to the scatter/gather lists employed in typical HBA interfaces and provided in the PQI and SOP specifications. Historically, there has been strong resistance to vectored commands within the T10 committee due to the complexity of error processing. Vectored commands also make little sense for rotating media, where the latencies are large and a queue of many individual commands works as well and is more easily implemented and managed. But for flash memory based storage with a PCIe interface, atomics may provide a strong motivator.
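
For readers more familiar with the host side, the closest everyday analogue is POSIX writev(), which hands a list of non-contiguous memory buffers to the kernel in a single call. The sketch below is only an analogy: writev() gathers from scattered buffers but writes them to one contiguous range and makes no atomicity guarantee, whereas the proposed SCSI commands target non-contiguous ranges on the device itself.

/* Host-side analogy only: one call, many segments (POSIX writev). */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/uio.h>

int main(void)
{
    const char *hdr = "record-header ";
    const char *fld = "updated-field ";
    const char *lnk = "link-entry\n";

    struct iovec iov[3] = {                 /* the "segment descriptor list" */
        { .iov_base = (void *)hdr, .iov_len = strlen(hdr) },
        { .iov_base = (void *)fld, .iov_len = strlen(fld) },
        { .iov_base = (void *)lnk, .iov_len = strlen(lnk) },
    };

    int fd = open("vectored-demo.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = writev(fd, iov, 3);         /* one system call, three buffers */
    if (n < 0)
        perror("writev");
    else
        printf("wrote %zd bytes in a single vectored call\n", n);

    close(fd);
    return 0;
}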

…to be continued…

Atomic Commands

At the January T10 meeting, the proposal for an initial atomic read and write command was accepted in the T10 Commands, Architecture, and Protocol (CAP) committee, only to be rejected at the plenary meeting. In the March meeting, a greatly simplified proposal for atomic writes was presented that did not make it out of committee (4 yes, 7 no, and 6 abstain). The atomic read and write proposal, along with several variants, has been in play for over three years and has had innumerable revisions. What’s going on here?

Some background: an atomic write is one which writes some data to a storage device. When the command completes, either the new data has been successfully written to the device or, in the case of an error, none of the data has been written and the data that was present before the write is preserved. This avoids the problem of a torn write, where only part of the specified data range was written correctly and the remainder may be either old data – or worse – undefined. In database applications this sort of behavior can lead to an inconsistent database. This is usually avoided by writing a log record to the database, which is used by the database application to recover from the failure and make the database consistent. But this requires an additional write for each database record update and slows the system. As large flash-based storage systems become commonplace in database servers, this behavior becomes doubly problematic. Not only do these additional writes reduce overall system performance, they are also a further write burden on the flash memory system of the storage device.
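
A minimal sketch of the extra write this implies, using hypothetical helper names of my own (write_and_flush() simply stands in for a write-plus-flush I/O path): without a device-level atomic write, every record update costs a log write plus the record write; with one, the log write made for this purpose disappears.

/* Illustrative only: counting the writes behind a single record update. */
#include <stdio.h>
#include <string.h>

#define RECORD_SIZE 32

static int device_writes;   /* counts writes reaching the storage device */

/* Hypothetical stand-in for a real write-plus-flush I/O path. */
static void write_and_flush(const char *where, const void *buf, size_t len)
{
    (void)buf;
    device_writes++;
    printf("  %zu bytes flushed to %s\n", len, where);
}

static void update_record(const char *new_image, int have_atomic_write)
{
    if (!have_atomic_write) {
        /* Extra write: log the record image first so recovery can repair
         * a torn in-place update after a crash or power loss.            */
        write_and_flush("log", new_image, RECORD_SIZE);
    }
    /* The record update itself; with a device-level atomic write this is
     * the only write needed, since a failure leaves the old image intact. */
    write_and_flush("table", new_image, RECORD_SIZE);
}

int main(void)
{
    char record[RECORD_SIZE] = "new-row-image";

    printf("without atomic write support:\n");
    device_writes = 0;
    update_record(record, 0);
    printf("  total device writes: %d\n", device_writes);

    printf("with atomic write support:\n");
    device_writes = 0;
    update_record(record, 1);
    printf("  total device writes: %d\n", device_writes);
    return 0;
}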

In the past, this was never an issue because atomic writes are difficult to implement on disk drives. For rotating media it is more cost effective to have the application simply do another write to cover the relatively rare error, rather than build that complexity into cost sensitive disk drives. With the advent of flash memory based storage systems, implementing atomic writes is much easier thanks to the way flash memory is written and managed. Reducing the number of write operations is also a big plus for flash due to the relatively limited write capacity of most flash based storage devices.

Another factor that’s important to consider in the atomic write story is the advent of PCIe based storage. Multiple vendors now produce PCIe cards that can provide terabytes of storage on a single card. The NVMe interface was developed to take advantage of this class of device. T10 was not far behind when the SCSI over PCIe (SOP) and the PCIe Queuing Interface (PQI) specifications were developed in response, and they are now reaching maturity. These developments have allowed database servers to become much more efficient by providing access to significant amounts of data at much higher speeds – usually by several orders of magnitude.

These changes have allowed servers to keep large databases in local storage with speeds approaching that of random access memory. More virtual machines can be instantiated on each server and have improved performance over network attached storage while reducing overall data center costs and power consumption. And SCSI is an ideal interface for such installations thanks to a mature and well developed protocol for handling multiple initiators. Recent work in the committee is under way to provide and manage millions of LUNs from a single SCSI target. While NVMe is a lightweight and low latency interface, it does not have the breadth or maturity of the SCSI protocol for handling large numbers of initiators or logical units.

…to be continued…

T10 (SCSI) March 2014 Meeting

[Photo: New Orleans, 2014]

The March T10 meeting week was held in New Orleans, Louisiana this year. SOP-1, PQI-1, and SBC-3 all closed public review with no comments. SPC-4 should go to public comment in May 2014. The atomic read and write proposals were overhauled after being rejected at the January plenary meeting, but all to no avail. Work on the initial draft of Zone Block Recording (ZBR) continues in force.

My full notes are here, the official T10 minutes for CAP are here, and the T10 minutes for the plenary sessions are here.