Seagate Mach.2 Dual Actuator HDDs | Investigating How They Actually Work


“Art of Server”

In this video I’m investigating the Seagate Mach.2 dual actuator HDD technology. These drives have the potential to make a massive leap forward in HDD performance both for bandwidth and IOPS. We’ll examine what Seagate has to say about their technology and I will share with you my thoughts….

source

 



37 Comments

  1. If you have been in the game long enough, you know that ALL hard drive manufacturers produce bad drive models from time to time. So saying you don't recommend manufacturer X because of problem Y is frankly dumb as hell. Besides, at this point there are only Seagate, Western Digital and Toshiba left standing. At work I currently have hundreds of Seagate drives in use and the failure rates are low, astoundingly low given that a couple hundred of the drives are over 10 years old.

  2. I'm using smartctl on both Proxmox nodes and TrueNAS SCALE, and neither shows the messages you have. Did you configure anything to get these (very helpful) messages?
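
    A minimal sketch of one way to compare what each host's smartctl reports, assuming smartmontools is installed; the device names are examples only, and the extra dual-actuator detail may simply come down to a newer smartctl build:

    ```python
    # Hypothetical helper: show the smartctl version, the devices it can see,
    # and identity info for one drive, so output can be compared between hosts
    # (e.g. Proxmox vs TrueNAS SCALE). Device paths are examples only.
    import subprocess

    def run(cmd):
        """Run a command and return combined stdout/stderr as text."""
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.stdout + result.stderr

    # Differences in reported detail between systems often come down to the
    # smartmontools version installed on each host.
    print(run(["smartctl", "--version"]).splitlines()[0])

    # Enumerate the devices smartctl knows about on this host.
    print(run(["smartctl", "--scan"]))

    # Identity info for one drive (replace /dev/sda with your device).
    print(run(["smartctl", "-i", "/dev/sda"]))
    ```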

  3. The question is, does it make sense to theoretically have twice the IOPS? If it brings higher ownership and maintenance costs, it will be harder to compete with ultra-high-IOPS NVMe SSDs. I hope for ultra-high capacity and ultra-low power consumption (motor rotation speed). I wish HDD manufacturers would keep placing Intel 3D XPoint NVRAM on the hard drive's controller board, whether as a buffer, as cache, or for storing metadata (the data used to describe the data on the disc).

  4. It's not that new an idea. I recall someone experimenting with an actuator on each side of the drive, so you had two heads for each platter. It was passed over because the platters had to be smaller to fit the second actuator on the other side while keeping the drive at a standard size. They should have each platter's head move independently. You could (internal to the drive) treat each platter as a single 'drive' in a RAID array.

  5. Exos? That's a server-grade drive, ain't it?
    At 0:58, this animation, on Seagate's behalf, means a single thing: it was offloaded to a third-party PR team that doesn't have a clue.

  6. Sounds more like half than twice. Sorry, Seagate. Come back when you have two independent sets of heads and that will really impress me. How nobody has done that yet is beyond me. Heads on both sides of the drive. You could put two independent controller boards on it too, for use with cluster file systems from two different hosts, or use the multiple paths for more throughput. I bet you could physically put three sets of heads on the same platter, 120 degrees apart, though at that point you have no hope of it resembling the standard form factor. I wonder if that would increase heat a lot.

  7. You need to get a couple of drives and set up a system with a SAS 2 and SAS 3 backplane and install the drives in the backplane. Do the tests both with a single connection to the backplane and with a redundant SAS connection, which would use both SAS channels on the drives. Then redo the tests.

    I would also set up a basic install of TrueNAS SCALE and do the tests with the drives in a mirror and in a RAIDZ1, checking whether TrueNAS can actually handle these drives properly.

    There was a big argument on the old TrueNAS forums last year, with a lot of half-information tossed out, where someone bought a bunch of these drives and the pool they created only showed half the capacity; testing confirmed only half the capacity was recognized in their configuration. They wanted to know what happened to the other half of the drive capacity. The argument was never really solved, and I think the OP sent the drives back and conventional drives were installed. (Maybe you got one of them.) I believe there have been random reports of new drives in certain systems not reporting the correct capacity or acting weird, dropping known-good and new drives, etc.

  8. I'm NOT a fan either!! My home is in Santa Cruz, CA, and Scotts Valley, CA (where Seagate is located) is just 7 miles up the road, off of HWY 17. They have ALWAYS had bearing problems! SUGGESTION: use CTRL-L to clear the screen before each command you run. It's sometimes hard to see the bottom of the screen!

  9. As others have said, I really don't like this design from a fault-tolerance standpoint. You really need greater than RAID 6 or RAIDZ2, as a single drive could bring your system precariously close to a failed array, unless you stagger them so each half sits in a different vdev or zpool. Even then it increases the odds of degrading multiple pools at the same time. I'd rather see this technology integrated into a single drive and increase the overall throughput through SAS. I mean, you mentioned or actually demoed how you can do a RAID 0 through the operating system, and while that's nice, it'd be nicer to see that done at the disk level, bypassing the need to mess around at the operating system level. Because I'm thinking this might be a nightmare on something like TrueNAS SCALE.
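
    A rough sketch of the staggering idea, with made-up device names: the two halves of each dual-actuator drive go into different RAIDZ vdevs, so one physical failure costs each vdev at most one member instead of knocking two members out of the same vdev.

    ```python
    # Illustrative layout only: keep the two host-visible halves of each physical
    # dual-actuator drive in separate vdevs. Device names below are hypothetical.
    drives = {
        "drive0": ["sda", "sdb"],
        "drive1": ["sdc", "sdd"],
        "drive2": ["sde", "sdf"],
        "drive3": ["sdg", "sdh"],
    }

    # All "first halves" form vdev A, all "second halves" form vdev B.
    vdev_a = [halves[0] for halves in drives.values()]
    vdev_b = [halves[1] for halves in drives.values()]

    # Losing one physical drive now removes exactly one member from each vdev,
    # which two RAIDZ1 vdevs can survive. If both halves sat in the same RAIDZ1
    # vdev, a single physical failure would take the whole pool down.
    print("zpool create tank",
          "raidz1", *("/dev/" + d for d in vdev_a),
          "raidz1", *("/dev/" + d for d in vdev_b))
    ```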

  10. Mirrored stripes may work well with this setup. But one issue for SOME people is that if their OS has a limit on how many physical disks they can have (Unraid), then this would count as 2 disks towards their license. And I definitely wouldn't use this for parity.

  11. Interesting, but I still don't trust Seagate with my data. As you touched on, care would be needed for ZFS use, as the potential for failure is higher if you don't spread the device's two halves across different vdevs.

  12. So HDD manufacturers are deliberately crippling their products: for double the speed you do not need a separate actuator, you only need an ASIC that can drive more read/write heads at once! The surface bit rate of this Mach.2 drive is exactly the same as on a conventional drive (or even less, if it needs two servo surfaces compared to one). So why does no vendor have a higher-performing chipset? All they do is mux all the heads (10+ now) into 2-3 channels for the ASIC.

  13. So all this made me wonder: if they present one drive as two parts, then when the drive fails both parts are gone, because they share the same electronics and so on. So if you make a ZFS pool with RAIDZ1 and two parts are gone, will that lead to a broken pool? Would it be safer with these drives to make a RAIDZ2 pool instead, to counter for the split personality of these drives…

  14. If we want to see real performance in mechanical disks, we must use the two-head technology for a disk group patented by Seagate. Fitting two disks into one box just saves space.

    If we can use two or more read-write heads for a disk group, then we can start talking about real performance in mechanical disks.

    This is just a little vaccine for the survival of mechanical disks; we need real solutions.

  15. I believe the disk logic should create an internal RAID 0 and present it to the OS as a single drive, keeping the performance gain. Then you could add it to the pool as a normal HDD without concerns about how to treat your halves.
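
    The models in the video don't do this in firmware, but the effect can be approximated from the host side, along the lines of the mdadm RAID 0 demo mentioned above. A sketch with hypothetical device names (the command is destructive, so treat it as illustration only):

    ```python
    # Host-side workaround sketch: stripe the two halves of one dual-actuator
    # drive into a single md device, then hand that device to the pool as one
    # "disk". Device names are hypothetical; mdadm --create wipes the members.
    import subprocess

    half_a = "/dev/sda"   # first actuator's half (example)
    half_b = "/dev/sdb"   # second actuator's half (example)
    md_dev = "/dev/md0"   # resulting striped device

    subprocess.run([
        "mdadm", "--create", md_dev,
        "--level=0",            # RAID 0: stripe for throughput, no redundancy
        "--raid-devices=2",
        half_a, half_b,
    ], check=True)

    # The md device can now be used as a single drive; any redundancy still has
    # to come from the pool layout built on top of it.
    print(f"created {md_dev} from {half_a} + {half_b}")
    ```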

  16. Interesting stuff! Will be curious to see how much they sell for. Any idea what capacities they go up to?

    Obviously there is risk in using these with, say, ZFS, but I think with the right setup it could work out. Spread across lots of mirrors perhaps? Or maybe it'd be too complicated for its own good, and if you really want better performance you should just look to flash. Either way, really neat, thanks for sharing!

  17. Can I split these two parts into two different vdevs, combine them with other similar dual-actuator disks to form RAIDZ in one pool, and still get the parity I need? Another question: since it still occupies a single SATA/SAS port, will the HBA card's channels effectively be reduced to half when attaching this kind of disk?

  18. I see this as incredibly dangerous… Imagine you are running your ZFS pool as RAIDZ1 and the controller or motor dies on a Mach.2 unit; this means that 2 "drives" drop out of the zpool and data loss occurs. Whilst it's true you could build your pool around this feature, it's a mistake just waiting to happen.
