I read way too much; sadly, I came across a post this morning about data storage technologies that [http://www.enterprisestorageforum.com/management/features/article.php/3867506/Top-10-Data-Storage-Technologies-That-Coul will die]. Most of them I think are fairly obvious, but some are just wrong and some lack any explanation. I’ll start with the biggest error:
Firstly, scripts have nothing to do with data storage other than being generally useful things. The gobsmacking phrase “scripts don’t automate well” had me stunned. Of _course_ scripts automate well – they’ve been the basis of Unix sysadmin for decades. The implication that the alternative (GUIs, in their article) automates well is of course laughable. Perhaps they mean that instead of the data retention policy being set by scripts (er, how?) it would be set automatically by backup software. That’s how any decent backup system already works – after all, you want to say “please keep three copies of this, at least one a week old and one off site at all times”.
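That kind of retention policy is itself trivially scriptable, which is rather the point. Here is a minimal sketch in Python – the function name, the policy values, and the (timestamp, offsite) representation are all hypothetical, not taken from any real backup product:

```python
# Sketch of a script-driven retention policy: "keep three copies,
# at least one a week old and one off site at all times".
# All names and policy values here are illustrative assumptions.
from datetime import datetime, timedelta

def select_backups_to_keep(backups, now, min_copies=3,
                           min_age=timedelta(weeks=1)):
    """backups: iterable of (timestamp, is_offsite) tuples.
    Returns the set of copies the policy requires us to retain."""
    keep = set()
    # Always keep the newest copies, up to min_copies.
    by_age = sorted(backups, key=lambda b: b[0], reverse=True)
    keep.update(by_age[:min_copies])
    # Ensure at least one copy is older than min_age.
    old = [b for b in backups if now - b[0] >= min_age]
    if old:
        keep.add(max(old, key=lambda b: b[0]))  # newest qualifying old copy
    # Ensure at least one off-site copy is retained.
    offsite = [b for b in backups if b[1]]
    if offsite:
        keep.add(max(offsite, key=lambda b: b[0]))
    return keep
```

A cron job calling something like this, then deleting everything not in the returned set, is exactly the sort of automation scripts have always done well.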
“All the rage in the nineties”. Well, and the noughties too. Yes, RAID-1 is inefficient compared to RAID-5 or RAID-6. If you have a 10-disk array, with RAID-1 you would have 5 disks of storage; with RAID-5 (assuming a hot-swap spare) you’d have 8 disks; with RAID-6, 7 disks. As disks have become larger at a faster rate than their performance has increased, their rebuild times have grown – leading many to say that [http://blogs.zdnet.com/storage/?p=162 RAID-5 is obsolete]. However, where does this leave RAID-1 – is it, as the article claims, obsolete too? Well, at large array sizes, if you need high write performance (I’ll assume you have a decent write-caching controller), RAID-5/6 should be fine but RAID-1 will be faster – and with modern large disk sizes, surely the wasted space matters _less_ than it used to, since performance, not size, is the limiting factor. But the real reason RAID-1 won’t disappear is small arrays. If you only have two disks, RAID-1 is the only sane choice. Personally I’d suggest net/iSCSI/SAN-booting such small systems anyway, if it’s practical.
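The capacity figures above can be sketched as a tiny helper – a simplified model (mirroring or parity overhead only, ignoring controller specifics), matching the text’s assumption of a hot-swap spare for RAID-5/6 but not RAID-1:

```python
def usable_disks(total, level, hot_spare=False):
    """Usable data disks in a simple model of RAID levels 1, 5, 6.
    Ignores controller overhead; hot_spare reserves one whole disk."""
    n = total - (1 if hot_spare else 0)
    if level == 1:
        return n // 2   # every disk is mirrored by another
    if level == 5:
        return n - 1    # one disk's worth of parity
    if level == 6:
        return n - 2    # two disks' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

# The 10-disk example from the text:
print(usable_disks(10, 1))                 # RAID-1: 5
print(usable_disks(10, 5, hot_spare=True)) # RAID-5 + spare: 8
print(usable_disks(10, 6, hot_spare=True)) # RAID-6 + spare: 7
```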
At last, one I do agree with, but I’ll state some reasons. Tapes are horrifically expensive – often more so than disk drives. That’s before you take into account the cost of a tape drive (I’ve certainly seen Fibre Channel drives sold for £6-12K, although that was a few years ago). However, the two nails in tape’s coffin for me are reliability and (lack of) random access. The former is a personal bugbear – I’ve seen more failures with tape libraries than with any other computing hardware. It’s just not possible to make something as complex as a tape mechanism as reliable as the far simpler mechanics of a disk. I don’t know about long-term archival reliability, which can be important to certain sectors.
In my opinion they’ve missed the one technology which will change data storage more than any other – SSDs. In terms of pure IOPS it is now possible to replace huge storage arrays with a single, cheap SSD. Granted, that SSD won’t last very long – but it is orders of magnitude faster than an HDD at random I/O. Such a change in performance hasn’t occurred in computing for a long time – it opens up [http://www.theregister.co.uk/2010/03/12/password_cracking_on_crack/ new ways] to solve problems. SSDs themselves won’t be an obsolete technology; rather, I wonder what they will render obsolete.
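To put rough numbers on that claim – and these figures are illustrative assumptions, not benchmarks – a 15K RPM drive manages on the order of 180 random IOPS, while even a cheap consumer SSD of this era can do tens of thousands:

```python
# Back-of-envelope: how many 15K RPM spindles does one SSD replace
# on random IOPS? Both figures below are assumed ballpark values.
import math

HDD_IOPS = 180      # assumed per 15K RPM spindle
SSD_IOPS = 30_000   # assumed for a single cheap SSD

spindles = math.ceil(SSD_IOPS / HDD_IOPS)
print(spindles)  # HDDs needed to match one SSD on random IOPS
```

On those assumptions you need well over a hundred spinning disks to match a single SSD – which is why an IOPS-bound workload can shed an entire array.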
Now over to you to poke holes in _my_ article.