Feb 20, 2015

Since we have had hard drives, we’ve been trying to make them larger and larger, just like any other data storage medium. I believe that for mechanical disks, no single parameter is more significant than sheer size. Recently Seagate blessed us with its first SMR (“shingled magnetic recording”) hard drive, a technology that once again brings more capacity, but with a few drawbacks. Drawbacks significant enough to make me want to talk about SMR and its competing technologies today, and also about why SMR is here already while the others are not.

1.) Steamroller tactics

As I said, we’ve always been trying to increase the storage space of disks. There are usually two ways of achieving this, one of which is particularly challenging. The easy method? Just cram in more platters. And with the platters, more read/write heads. This clearly has its limits of course. Traditionally, the maximum number of platters that could be operated safely next to each other in a regular half-height 3.5″ HDD was 5. Keep in mind that for a 7200rpm drive, we have 120 platter rotations within a single second! Smaller 2.5″ platters as used in certain enterprise disks or the WD Raptor drive can spin even faster, at 10,000rpm or 15,000rpm, which means 166 and 250 rotations per second respectively.

Rotational speeds this high mean there’s going to be a lot of air turbulence in there, and air is essential to keep the heads floating at very low altitudes (freaking nanometers, for Christ’s sake!) over the platters. Too much disturbance and you get instabilities and potentially fatal head crashes.

A Hitachi 2TB 5-platter disk

A Hitachi DeskStar 7K2000 2TB, 5-platter, 10-head disk (click to enlarge)

Recently, Hitachi Global Storage dared to replace air-based designs with low-density helium-filled drives, enabling them to pack an unbelievable 7 platters into a normal 3.5″ drive, accessed by 14 heads, all made possible by lower gas turbulence and resistance. This also enabled them to use lower-powered, lighter motors to spin the platters. Seagate struck back by using shingled magnetic recording for inexpensive disks and six platters for enterprise disks – despite the conventional air filling. The reason Seagate hasn’t introduced SMR to the enterprise markets yet is said drawbacks. But there is no way we can keep packing more and more platters and heads into disks of the same volume – there is only so much space, and 7 platters is insane already.

2.) The easiest way out may not always be a smooth ride…

So what is SMR, and what made it appear on the markets well before its competing technologies, like HAMR (heat-assisted magnetic recording), MAMR (microwave-assisted magnetic recording) and BPMR (bit-patterned media recording)?

Basically, price made SMR happen.

But let’s just dive into the tech first.

As you may have heard, the term “shingled” is derived from roof shingles and the way they overlap when being put on a roof. That’s for gabled roofs of course, not flat ones. ;) Regular hard drives have sectors sitting in line on what we call a track. Then there is a slight gap, and next to the track there is another track, and so on. Like rings sitting on the disk. The sectors tend to be of the same physical size (in area covered), which is why data can be read the fastest on the outermost parts of the platter – more sectors pass by the head for an equal amount of angular movement.

This is considered to waste space though. Modern read heads can read much narrower tracks than write heads are able to store safely on the disk’s magnetic film. Individual bits are currently stored using roughly [20-30 grains of magnetized material] on the disk’s film. So, ~20 grains with the same magnetic orientation, spread across a few nanometers. It seems read heads can cope with less though, so the industry (or academia?!) came up with this:

SMR structural comparison

A structural comparison: a normal disk to the left, and shingles to the right. As one can see, the shingled magnetic recording disk allows for packing more data into the same space, despite individual tracks being wider when written.

So, yeah. No gap anymore, no more wasted space, right? The gap was actually there because the edges of the tracks are not too well defined by classic write heads, so reading safely could be tricky when you pack them too close together. The heads used on SMR disks aren’t any different technologically. They’re just wider, writing fatter tracks, which enables the head to write more well-defined track edges. Now the read head can pick up the narrower tracks even without any gaps between them. Thus, we can pack stuff tighter and still read it back without corruption. But… how do we modify written data?! Each track looks like it’s been partially overwritten by its fat-ass successors. Let’s see what would happen if we attempted to write “the regular way” to a part of a filled-up shingled disk surface, as compared to a normal one:

Writing to an SMR surface is a problem

Writing to an SMR surface is a problem. Writes within the structure overwrite adjacent tracks, because write heads are wider than read heads to be able to create strong enough magnetic fields for writing.

On the normal drive (to the left) we just write. No problem. On the SMR disk we write. And… oops. It’s not actually a write, it’s a write+destroy. At the example density above, by writing three sectors in direct sequential order (maybe a small 1.5kiB file), we effectively overwrite six additional sectors “to the right”, because the write head is too wide. For writing 1.5kiB, we potentially corrupt 3kiB more. Maybe even more when using 4kiB sectors instead of 512-byte ones. The effective amount of destroyed data depends on how much overlap there actually is of course – but there will always be data loss!
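To put some numbers on that, here’s a tiny Python sketch of the collateral damage (a toy model with made-up geometry – the real overlap depends on the actual head widths, which we don’t know):

```python
# Toy model of a destructive in-place SMR write (made-up geometry).
# Assumption: the write head clobbers OVERLAP sectors' worth of data
# downstream for every sector it writes, as in the illustration above.
OVERLAP = 2          # downstream sectors destroyed per sector written
SECTOR_BYTES = 512   # classic sector size

def collateral_damage(sectors_written, sector_bytes=SECTOR_BYTES, overlap=OVERLAP):
    """Bytes of downstream data corrupted by a naive in-place write."""
    return sectors_written * overlap * sector_bytes

# Writing three 512-byte sectors (1.5 kiB) corrupts 3 kiB downstream:
print(collateral_damage(3))                      # 3072
# With 4 kiB sectors, the same three-sector write corrupts 24 kiB:
print(collateral_damage(3, sector_bytes=4096))   # 24576
```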

So how do we rewrite?! Well, we do it like SSDs do in such a case – which is why SSDs also need the TRIM command and/or garbage collection algorithms. First, we need to read all affected data, which is basically everything down-cylinder from the write location (called “downstream” in SMR lingo). The reason is that if we rewrite just the six additional sectors right next to the affected ones, we lose the next six, and so on.


We read everything downstream into a cache memory in as many cycles as there are tracks downstream, then we write downstream in as many cycles, including the original write. This is known in a less extreme form from solid-state disks as a “read-modify-write” cycle:

Classic HDD writes vs. read-modify-write on SMR disks

Classic HDD writes vs. read-modify-write on SMR disks

So, what does that mean? Let’s sum up the operations:

Regular hard drive:

  • Wait for platter to rotate and seek head to first target sector in track
  • Write three sectors in direct succession

SMR hard drive:

  • Wait for platter to rotate and seek head to target track + 1
  • Read three sectors in direct succession, store in cache
  • Wait for platter to rotate and seek head to target track + 2
  • Read three sectors in direct succession, store in cache
  • Wait for platter to rotate and seek head to target track + n
  • Read three sectors in direct succession, store in cache
  • (Repeat until we hit end of medium* or band)
  • Seek head to target track
  • Write original three sectors
  • Wait for platter to rotate and seek head to target track + 1
  • Rewrite three previously stored sectors, recalled from cache
  • Wait for platter to rotate and seek head to target track + 2
  • Rewrite three previously stored sectors, recalled from cache
  • Wait for platter to rotate and seek head to target track + n
  • Rewrite three previously stored sectors, recalled from cache
  • (Repeat until we hit end of medium* or band)
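The asymmetry between those two lists can be counted up in a few lines of Python. This is just a sketch following the steps above; real firmware batches writes in cache to soften the blow, so treat the numbers as a worst case:

```python
def op_counts(tracks_downstream):
    """(seek_or_rotate_waits, track_reads, track_writes) for one naive
    SMR rewrite, following the step list above: read every downstream
    track into cache, then write the original data plus all cached
    tracks back."""
    n = tracks_downstream
    waits  = n      # one wait per downstream track read
    waits += 1      # seek back to the target track for the original write
    waits += n      # one wait per downstream track rewrite
    return waits, n, n + 1

# Regular drive: 1 wait, 0 reads, 1 write. SMR with 6 downstream tracks:
print(op_counts(6))   # (13, 6, 7)
```

With zero downstream tracks (`op_counts(0)`) this degenerates to the regular drive’s single wait and single write, which is exactly why the write position within a band matters so much below.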

*As you can see, this is crazy. If we write to the topmost sector, we have to rewrite all the way downstream – this could be millions of sectors and seeks for a small file, affecting the entire radius of the platter! This is why SMR doesn’t completely do away with track gaps. It’s just that tracks are now grouped into bands of arbitrary size to limit the read-modify-write impact. Let’s have a look at two side-by-side 7-track-wide SMR “bands”, both being written to:

Shingled media - organized into bands

Shingled media – organized into bands 7 tracks wide

From this we can learn two things: bands can mitigate the severity of the issue, and the amount of work depends on where within a band we write. The farther downstream, the smaller the latency hit we will have to endure, and the fewer seeks and the less write overhead we’ll have. Bands can’t be too wide, as write performance would deteriorate too much. Bands can’t be too narrow either, or we’ll lose too much of the density advantage, because we’d have more band gaps using up platter real estate.
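A quick sketch of the second point – how much rewriting a write costs depending on where in the band it lands (assuming, as above, that the rewrite always runs to the end of the band and never beyond it):

```python
def downstream_tracks(band_width, track_in_band):
    """Tracks that must be read-modify-written when writing to the given
    track of a band (0 = first/upstream track). Assumes the rewrite
    always runs to the end of the band, never beyond it."""
    return band_width - 1 - track_in_band

BAND = 7  # band width from the illustration above
for t in range(BAND):
    print(f"write to track {t}: {downstream_tracks(BAND, t)} downstream tracks to rewrite")
```

The first track of a 7-track band drags six tracks of rewriting along; the last track behaves like a regular drive.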

Let’s look at an overview regarding band width efficiency:

SMR band width efficiency

SMR band width efficiency[1] (click to enlarge)

I won’t go into all the detail about what this means, but I believe the part about reserved non-shingled storage on the disk in particular is pretty much unusable in today’s scenarios. So, please pay attention to the green lines, where f=0. The number r is simply the track width in nanometers. By looking at the graph we can learn that the sweet spot for the number of tracks per band is maybe around 10 to 25 or so. Beyond that we don’t gain much by saving on band gaps, and below that, the density increase isn’t large enough.

This makes me think that Seagate went with a rather “low” band width for their current SMR drive (the [Seagate Archive HDD v2]), as the platter capacity increase was only +250GB, so from 1TB to 1.25TB for the first SMR generation, and then +333GB with 1.33TB platters in the final generation hitting the market. So they got to an areal density increase factor of just 1.33×, which may correspond to 6 tracks per band, maybe 8 or 10 depending on track width in nanometers (I do not have solid data on track widths of any modern drives, especially not Seagate’s SMR disks). Some rumors say “5 – 10 tracks per band”, which does seem right considering my math.
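For the curious, here’s the toy geometry behind my 6-tracks-per-band guess. I’m assuming a write head twice as wide as the final shingled track pitch and a band gap one write-track wide – both made-up numbers, not Seagate’s real geometry:

```python
def density_gain(n_tracks, k=2.0, band_gap_in_write_widths=1.0):
    """Areal density gain of an SMR band over conventional recording.

    Toy model (my assumptions): the write head lays tracks k times wider
    than the final shingled pitch, and each band is followed by a gap one
    write-track wide. All widths are measured in shingled-pitch units.
    """
    # Conventional: n tracks, each at full write width.
    conventional = n_tracks * k
    # Shingled: n-1 tracks at read pitch, one full-width track, plus gap.
    shingled = (n_tracks - 1) + k + band_gap_in_write_widths * k
    return conventional / shingled

print(round(density_gain(6), 2))    # 6 tracks per band lands near 1.33x
print(round(density_gain(25), 2))   # diminishing returns for wide bands
```

With these assumptions, 6 tracks per band gives exactly the 1.33× gain the platter capacities suggest, and widening the band approaches the raw 2× pitch ratio ever more slowly.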

Probably bad enough, but hey.

As mentioned, SMR disks – like SSDs – do not expose their inner structure to the operating system as-is, as this would require new I/O schedulers, file systems, applications[2] and so on. Instead, they go for a fully firmware-abstracted[3] approach, presenting only a “normal hard drive” to the OS, like any SSD would do too. All the nasty stuff happens on the drive itself, implemented 100% in the drive firmware.

File systems also need to be considered. A file system that fragments quickly will scatter larger files all across the disk, potentially across a multitude of bands. Rewriting such a file on a fragmented file system that can’t do [copy-on-write] will likely be painful even with firmware optimizations and cached/delayed writes in place. Exposing SMR to the file system would help a lot, but would also mean a lot of work on the file system developers’ side, and I just don’t see that happening outside of seriously expensive large-scale systems. Current file systems like FAT, exFAT, NTFS, ReFS, EXT2/3/4, XFS, btrfs, ZFS, UFS/FFS and so on simply don’t understand SMR bands. To my knowledge there is no file system that would. It’s likely going to be handled like 512e Advanced Format – all the magic happens below the fake layer presented to the operating system.
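To illustrate why fragmentation hurts so much here, a little sketch counting how many bands a rewrite would touch. The geometry is completely made up – real drives hide their band layout from the host entirely:

```python
def bands_touched(fragment_lbas, sectors_per_track=1000, tracks_per_band=7):
    """Which SMR bands a rewrite would hit, given the starting LBAs of a
    file's fragments. Purely illustrative geometry: real drives do not
    expose their band layout to the host at all."""
    sectors_per_band = sectors_per_track * tracks_per_band
    return sorted({lba // sectors_per_band for lba in fragment_lbas})

# A contiguous file stays inside one band; a fragmented one triggers a
# read-modify-write cycle in every band it touches:
print(bands_touched([100, 600, 1100]))            # one band
print(bands_touched([100, 250_000, 9_000_000]))   # three scattered bands
```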

2a.) Pricing

Now, in the beginning I said the main reason for SMR appearing right now is price. The thing is, with SMR you can still use the same platters, and mostly also the same heads as before, with just minor modifications like the wider write head. It’s much less of a radical hardware design challenge, and more of a data packing and organization solution that saves a lot of money. To give you an idea, let’s compare some actual prices from the EU region as of 2015-02-20 (only drives actually in stock), source: [Geizhals] (German). Drives compared roughly target the same market, or at least share as many properties as possible, like warranty periods, 24/7 qualification, URE ratings, etc.:

Some regular drives:

  • Western Digital Purple 6TB: 251,52€ (Price per GB: 4,19¢)
  • Western Digital Red 6TB: 263,75€ (Price per GB: 4,40¢)
  • Hitachi GST Deskstar NAS 6TB: 273,32€ (Price per GB: 4,56¢)
  • Hitachi GST Ultrastar He8 8TB: 660,95€ (Price per GB: 8,26¢. This is an enterprise helium drive with 5 years of warranty and an URE rating of <1 bit read error in 10^15 bits read, so it’s hardly comparable. It’s the only other 8TB drive available though, which is why I’d like to show it here too.)
  • Seagate Surveillance HDD 7200rpm 6TB: 395,14€ (Price per GB: 6,59¢)
  • Seagate NAS HDD 6TB: 394,44€ (Price per GB: 6,57¢)

SMR Drive:

  • Seagate Archive HDD v2 8TB: 259,–€ (Price per GB: 3,24¢)

So as you can see, the price per GB of the shingled disk is simply unmatched here. First of all, it has no direct competitors, and it’s much cheaper per GB than the 6TB disks of the competition! Heck, even its absolute price is lower in most cases. Seagate does state that the drive is meant for specific workloads only though. In essence, the optimal way to use it would be to treat it as a WORM (write once, read many) medium, as read performance is not impacted by SMR. But how bad is it really?
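Just to double-check my own math, the ¢/GB figures above come out of a trivial calculation:

```python
# Recomputing the price-per-GB figures (EUR prices as of 2015-02-20,
# drive sizes in GB; only a subset of the list above).
drives = {
    "WD Purple 6TB":              (251.52, 6000),
    "WD Red 6TB":                 (263.75, 6000),
    "Seagate Archive HDD v2 8TB": (259.00, 8000),
}
for name, (price_eur, size_gb) in drives.items():
    print(f"{name}: {price_eur / size_gb * 100:.2f}¢/GB")
# The 8TB SMR drive comes out cheapest per GB of the whole comparison.
```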

2b.) Actual performance numbers

Those are extremely hard to come by at this stage, as there are no real reviews yet. All I could find are some inconclusive tests [here] (German). Inconclusive, because no re-writes or overwrites were tested. So far all that can be said is that the disk seems to show rather aggressive write caching, which does make some sense for grouping data together so whole bands can be written at once. We’ll have to wait a bit longer for any in-depth analysis though.

If anything comes up, I will add the links here.

For now, let’s continue:

3.) SMR super-charged: 2-dimensional reading (TDMR)

I’ll be brief about this, as the basic ideas of 2D reading are “relatively” simple. Here we’re trying to make the tracks even narrower, up to a level where a single read head might run into issues when it comes to staying on track and maintaining data integrity while reading. The idea is to put two more read heads on the head assembly: one that reads at a position shifted slightly toward track n-1, and one that reads shifted to the opposite side, toward track n+1. Like this:

Multiple read heads

Multiple read heads for two-dimensional magnetic reading[4]

From the differences between the read heads’ acquired data, a more precise 2D map can be constructed, making it easier to decide what the actual data must be, and what’s just interference from nearby tracks. The end result of course being an even higher density and increased data integrity.

To save money, one could also retain the single read head setup and read the adjacent tracks in additional passes. Naturally, this would be much slower and possibly less safe.

TDMR readback

Two-dimensional magnetic readback[4]

From our current standpoint it is hard to tell how much more density can be gained by TDMR-enhanced SMR, and at what cost exactly. The basic problems of SMR aren’t solved by TDMR at all however, as the data is still organized in shingled bands. For regular drives, TDMR doesn’t make much sense, as the written tracks are more than wide enough for a single read head anyway. This could be considered useful for a second or third generation of SMR technology, if ever released. It may have other uses too however, see section 5.

4.) The future #1: Heat/microwave-assisted magnetic recording (HAMR/MAMR)

There are technologies in the making that aim at increasing density by employing whole new methods for reading and writing, without necessarily tampering with the data organization on the platters. HAMR and the less-known MAMR are two of them.

I mentioned before that current drives store a single bit by magnetizing about 20-30 individual grains on the actual surface of the disk. This number cannot easily be reduced any further for data integrity reasons, and the grain size can’t be reduced either, because of a magnetic limitation known as the super-paramagnetic wall[4]. In essence, you need a certain amount of energy to change the polarity of a bunch of grains. Due to this effect, the smaller the grains, the more energy you must concentrate on a smaller spot to make a write operation work.

As the required magnetic energy increases on extremely dense small-grain tracks, the field becomes too strong and will affect nearby tracks, and data might be lost. Also, the heads must shrink together with the tracks, and it becomes harder and harder – and eventually impossible – to even generate fields that strong with downscaled write heads.
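The usual back-of-the-envelope form of this wall is the thermal stability ratio K_u·V / (k_B·T) – the grains’ anisotropy energy versus thermal energy – which should stay above roughly 60 for about ten years of data retention. A quick sanity check with illustrative material values (my assumptions, not a real media spec):

```python
# Thermal stability ratio K_u*V / (k_B*T), the quantity behind the
# super-paramagnetic wall. Material values are illustrative assumptions.
K_B = 1.380649e-23   # Boltzmann constant in J/K

def stability_ratio(grain_nm, k_u=3.0e5, temp_k=300.0):
    """K_u*V / (k_B*T) for a cubic grain with the given edge length in nm.
    k_u ~ 3e5 J/m^3 is a rough ballpark for conventional magnetic media."""
    volume_m3 = (grain_nm * 1e-9) ** 3
    return k_u * volume_m3 / (K_B * temp_k)

# Rule of thumb: the ratio should stay above ~60 for long-term retention.
for nm in (10, 9, 8, 7):
    print(f"{nm} nm grain: ratio ~ {stability_ratio(nm):.0f}")
# Shrinking grains quickly drops the ratio below the wall; raising k_u to
# compensate demands write fields too strong to generate... or heat.
```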

The super-paramagnetic wall – or rather the grains themselves – has an interesting property though. The wall shifts with the material’s temperature, because temperature affects the grains’ coercivity. The colder the surface becomes, the more energy is required to write to it. The hotter it becomes, the easier writing gets, with less powerful magnetic fields required.

Given that the wall can be pushed around at will by changing the material’s coercivity, researchers came up with a few solutions based on Lasers and Microwave emitters. So basically: shoot the surface with a Laser or with Microwaves to heat it up with pinpoint accuracy, then write with a small head’s weak field, and profit!

HAMR head with Laser emitter

HAMR head with Laser emitter[4]

MAMR head with microwave emitter

MAMR head with Microwave emitter[4]

Clearly, to make that happen, several challenges must be mastered. First of all, you’d want a disk surface which doesn’t mess with the Microwaves or get messed up by them. Alternatively, the surface must have properties that do not limit the effectiveness of the Laser. In both cases, it needs to house smaller grains. And on top of that, we need highly miniaturized Laser or Microwave emitters – and we’re talking about the nanometer level here. Additionally, robustness has to be ensured, which might be an issue with the head permanently shooting Laser beams around or heating itself up with Microwaves.

This is why HAMR/MAMR development is extremely hard and extremely expensive. And not just development; the entire hard drive manufacturing process would need to change considerably, creating additional costs. None of this is true for SMR.

Naturally, a working HAMR/MAMR solution doesn’t have to mean the end of SMR. It may give us a way to keep pushing out large disks without SMR’s implications for certain professional markets, and even larger SMR-based disks for regular end users. Currently, it seems that HAMR is getting the most attention, and MAMR is likely never going to see the light of day.

Seagate HAMR prototype

Seagate has already demonstrated a working HAMR prototype. MAMR is nowhere to be seen. 20TB is the goal here according to Seagate.

5.) The future #2: BPMR (bit-patterned magnetic recording)

Another approach towards dealing with the density issue is to deposit the magnetic film layer so that the grains sit on nanolithographically created, elevated “islands”. When doing so, the grains show a strong coupling of their exchange energy, a quantum-physical effect. That coupling means that grains will follow the magnetic orientation of their neighbors more willingly, and will also all stay in the same orientation together. By doing this, the energy required for altering the magnetic orientation becomes proportional to the island’s volume, not the volume of the individual grains representing a single bit. So what we could do is use smaller grains. Or maybe even make the “1 grain per bit” dream a reality by putting a single grain onto each island of a bit-patterned medium.

Bit-patterned media

Bit-patterned medium[5]
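One way to see the payoff is the thermal energy barrier K_u·V, which now scales with the whole island’s volume instead of a single grain’s. A quick illustration (made-up material values and sizes, not a real media spec):

```python
# Energy barrier scaling for an exchange-coupled island vs. a lone grain.
# K_u and the sizes below are illustrative assumptions only.
K_B = 1.380649e-23   # Boltzmann constant in J/K

def stability_ratio(volume_m3, k_u=3.0e5, temp_k=300.0):
    """Thermal stability ratio K_u*V / (k_B*T); roughly 60+ is needed
    for long-term data retention."""
    return k_u * volume_m3 / (K_B * temp_k)

grain  = (5e-9) ** 3    # a single 5 nm grain on its own
island = 20 * grain     # ~20 grains coupled into one island, one volume

print(round(stability_ratio(grain)))    # lone 5 nm grain: thermally unstable
print(round(stability_ratio(island)))   # the coupled island clears the bar
```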

So far it’s been said that staggered BPMR would be the easiest to manufacture and might enable manufacturers to pack data even tighter (think: hexagonal pattern alignment), although this might require two-dimensional magnetic reading again, so: three read heads to eliminate cross-read interference. Since TDMR wouldn’t imply SMR in this case – there is no shingling with BPMR – the two could be used together without any issues for the end user.

Staggered bit-patterned surfaces

Staggered bit-patterned surfaces[6]

Also, servo patterns are making progress, which are just as important as the actual data islands for maintaining proper head positioning:

A BPMR servo pattern to the right with non-staggered data tracks/islands to the left

A BPMR servo pattern to the right with non-staggered data tracks/islands to the left[7]

Even with all that goodness, the super-paramagnetic limit will still apply here, albeit in a reduced fashion. A BPMR head could however once more be equipped with a Laser, thus combining BPMR with HAMR technology to further downscale the islands. Throw staggered island tracks and triple read heads for two-dimensional reading into the mix, and the density possibilities grow even further.

Needless to say, this costs a truckload of money. BPMR would require even more drastic changes in how hard drives are manufactured, as we’d need 20 – 10nm nanolithography technology, a different platter composition, and new write and read heads, plus the software – or rather firmware – to properly control all of that.

6.) Conclusion

I don’t believe we’ll really see HAMR anytime soon. On top of that, SMR might not be just an intermediate technology, merely filling the gap until we can do better and bring the Lasers to the storage battle. When you look at HAMR, it becomes clear that SMR or SMR+TDMR remain feasible in conjunction with it. I believe what we’ll see is an even stronger segmentation of hard drive markets, where fast writers might be available only in certain expensive enterprise segments, and slower-(re)writing SMR-based media might serve other markets like cold data / cloud storage and data archiving. HAMR would enhance both of them, whether SMR is on board or not. Anything with a considerably higher need for reads could be using SMR for a long time to come.

HAMR or not, any disk featuring SMR will definitely be larger and also feature the implications discussed here.

Then comes BPMR, likely after all other technologies described here. Now this baby is something else. Shingling tracks is pretty much out of the question here, as grains and the bits they represent can no longer overlap, becoming essentially atomic. If BPMR ever hits the markets – and I’m guessing it will in 10-15 years – the time for SMR to disappear will have come. But that’s so far into the future that calling SMR “intermediate” now might be a bit premature. I’m guessing it may stay for a decade or more after all, even if I consider it quite ugly by design.

People have needed to pay increasing attention to which drives they buy over the past years, and with SMR that will only intensify. Users will need to have a proper idea of their I/O workloads and of each and every storage technology available to them to make good decisions – so that nobody runs off and buys a bunch of SMR disks to build a random-write-heavy mail server with them.

Good thing I’m building my new RAID-6 soon enough to still get SMR-free drives that I can consider “huge” despite them being non-shingled. ;)

[1] Gibson, G.; Ganger, G. 2011. (Carnegie Mellon University Parallel Data Lab). “Principles of Operation for Shingled Disk Devices“.

[2] Dunn, M.; Feldman, T. 2014. (Seagate, Storage Networking Industry Association). “Shingled Magnetic Recording Models, Standardization and Applications“.

[3] Feldman, T.; Gibson, G. 2013. “Shingled Magnetic Recording“. Usenix ;login: vol. 38 no. 3, 2013-06.

[4] Wood, R. 2010. (Hitachi GST). “Shingled Magnetic Recording and Two-Dimensional Magnetic Recording“. IEEE SCV MagSoc, 2011-10-19.

[5] Toshiba. 2010. “Bit-Patterned Media for High-Density HDDs“. The 21st Magnetic Recording Conference (TMRC) 2010.

[6] A*Star. “Packing in six times more storage density with the help of table salt“.

[7] Original unscaled image is © Kim Lee, Seagate Technology.

CC BY-NC-SA 4.0 The future of hard drives; Is shingled magnetic recording really a good solution? How we are trying to boost HDD capacity and what I think… by The GAT at XIN.at is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

  9 Responses to “The future of hard drives; Is shingled magnetic recording really a good solution? How we are trying to boost HDD capacity and what I think…”

  1. This is a poorly written article. Content wise its OK, but the sentence structures chosen by the author sound too much nontechnical. Had to go through thrice to keep track of the disturbing sentence structures and grammar.
    Please pay attention to it. I know its a basic problem of some native English speakers who try to exercise the same type of informality even during an academic communication.

    • Hello Hasan,

      My apologies for my lack of linguistic skill. I believe my problem is, that I’m simply not a native speaker (My native tongue is German). Most of my English skills come from other non-native speakers from all over the planet, which includes Asian people mostly, but also people from some European and South American countries so far. That, plus some English from games and movies, but it seems that’s just not enough influence.

      I’d love to improve my writing, but I fear I’m unable to do much. :( To really improve, I’d need to spend some time in an actual English speaking country, but as it stands that’s just not going to happen. I’m sorry to have made the reading so exhausting for you…

      Do you think it would help if I’d try to write more concisely, making the sentences shorter?

      • Thanks for a great article. I’m a native English speaker and your English is excellent. Hasan has the audacity to bring up sentence structure and grammar, and then proceeds to say “the author sound too much nontechnical”. FFS. I wouldn’t read too much into a critique from someone that doesn’t seem to know the difference between “its” and “it’s” (or is just too lazy to punctuate properly).

      • I agree with RJP. The comment critical of your layperson grammar was not only audacious, it was inconsistent with your audience and purpose. Your article is a real eye-opener and an excellent explanation of the underlying tech, and cost/benefit/deficit trade-offs.

        In English, my response to your article is “Bravo!”. In your native tongue, I say “ausgezeichnet!”

        • Hello Philip,

          Thank you very much for your words! 8-) I still feel that Hasan wasn’t entirely wrong though. Some of my sentences do tend to become unnecessarily complex and lengthy I think, making them tedious to read? But improving that is surprisingly hard in practice. ;)

          I will keep my style informal though. I’m not even qualified to write “academic” articles, given that I’m not an academic in the first place.

  2. Didn’t perpendicular recording promise 10 TB hard drives? (Without using more than four platters).

Developments in the last few years have not exactly given us more data on a platter. It has stayed at about 1 TB/platter. There are a few 1,25 and 1,33 TB/platter drives, but they aren’t cheap (because they’re the ones with a lot of platters).

    I’m really wondering how flash-based drives will do. Of course that will run into other limits, and I’m really wondering what will happen.

I don’t actually remember what we were promised at the dawn of perpendicular recording, all I do remember is that it allowed us to breach the 2TB barrier. It got us all the way to 4TB alright, but it seems there are limits just too coercive to allow us to go much further without drastic measures (like 6 platters in air-filled drives, helium-filled 7-stack drives or SMR for the 1.25/1.33TB ones).

Naturally, SSDs will remain king of the hill when it comes to random I/O, but I assume that pricing will stay prohibitive in the 4-8TB areas and beyond for flash-based storage. As I said, I see more market segmentation heading our way. SSDs for areas heavily depending on random I/O with a no-holds-barred policy on pricing. Regular non-SMR drives for high-capacity needs with data integrity being an issue (think rebuilds with large drives requiring an URE rate of at least <1 in 10^15 bit reads). Mission-critical large-scale storage? Well: Regular Enterprise HDDs!

      Then, the cloud server markets. Who cares, if some image file or some archive of some user gets damaged if it’s just one in a million? Capacity’s what counts? Then: SMR drives! Just make the rebuilds URE fault-tolerant (all modern RAID controllers I know can do that), and done! With RAID-6, chances nobody’ll ever notice are pretty high anyway!

      And end users? Who cares about them anyway, just feed them SSDs for the speed sensation (an approach quite valid) and SMR drives for the data messies (also relatively valid an approach). Power users are going to know what’s coming anyway, making their choices wisely. ;)

      • I’ve got a hard drive with EIGHT platters:
        Yeah, baby!

        I also have a ST225 around, but somehow I don’t have a controller for it anymore. (no shit) :-)

        I think there’s also a market for large slower SSDs in the consumer area. At least it’s quiet, and no desktop user is going to cry foul when it explodes with a queue depth of 32. Uhm, let me rephrase that: most SSDs are ‘slower’ consumer drives which explode (or something with less effect) when hitting a queue depth of 32.
Nothing wrong with that, but I’d like a few cheaper server-quality SSDs to put in several projects and desktop computers which should not fail easily (because I’m the field circus).

        For downloaded^H^H^H^H^Howned media I might be buying two of those SMR drives soon, the price per GB is quite low! It compares to 3TB drives, which currently have the lowest price per GB.

        • But that’s unfair, the ST11200 is a full-height drive! ;) And for the MFM one you’d probably also need an ancient operating system? Or do modern OSes still work with ancient MFM controllers on ISA?

About the Seagate 8TB SMR drive: I now know somebody who owns one and has started to publish [benchmarks] (German). I have asked for a simple sequential rewrite test, but the guy has not yet performed it. For most people, random rewrite would be much more interesting anyway, but I am not yet sure how to simulate that using synthetic benchmarks. Somebody suggested IOmeter, maybe I’ll take a look to see whether random rewrite testing is possible.
