We are regularly told that checking our own bodies for signs of change is a
good thing. Early diagnosis of disease gives more of a fighting chance of
curing the problem. So, in the IT world, where we assume all of our backups
have been taken successfully, how often should we be checking the results and
ensuring the backup will work on the fateful day we need to do a restore?
This question was posed by Federica Monsone on Twitter this week. Here’s
an attempt to provide an answer.
First of all, let’s consider the whole point of taking backups. Excluding
the inappropriate use of backup for archiving, the backup process is there to
ensure you can maintain continuous access to your data in the event of
unforeseen circumstances. Usually (but not exclusively) these are data loss
due to equipment or power failure, data corruption (whether software bug or
malicious), a... (more)
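As an illustration (mine, not from the post): a minimal sketch in Python of what a scheduled verification pass might look like, assuming each backup file is written with a companion .sha256 manifest at backup time. The directory layout and file naming here are hypothetical, not any particular product’s.

# Minimal sketch of an automated backup verification pass (illustrative only).
# Assumes each backup file has a companion "<name>.sha256" manifest written
# at backup time; the paths and naming are hypothetical.
import hashlib
import pathlib
import sys

BACKUP_DIR = pathlib.Path("/var/backups")  # hypothetical location

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_all() -> bool:
    ok = True
    for manifest in BACKUP_DIR.glob("*.sha256"):
        data_file = manifest.with_suffix("")  # strip ".sha256"
        expected = manifest.read_text().split()[0]
        if not data_file.exists() or sha256_of(data_file) != expected:
            print(f"FAILED: {data_file}")
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if verify_all() else 1)

Worth noting: a checksum pass only proves the media still holds what was written. The only real proof the backup will work on the fateful day is a periodic trial restore to scratch storage.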
So there we have it. A week after Dell announced their intention to acquire
3PAR, HP have put their cards on the table and trumped the Dell bid with an
updated offer of $24 a share, a one-third increase over Dell’s $18/share
offer.
HP have been pretty acquisitive in the storage arena over the last few years,
picking up LeftHand, Ibrix and others. So why 3PAR, and why now?
Defensive positioning – 3PAR being acquired by any of the major vendors
could weaken HP’s position in the mid-to-enterprise market. EVA is a
fading product and unlikely to be the first ch... (more)
It’s been a busy couple of weeks for me, with two lots of travelling in
different directions. Last week I was in Barcelona at the HP Converged
Infrastructure (CI) Event, and the week before at HDS’s launch of the new
VSP (Virtual Storage Platform). Both HP and HDS released their new platform
at the same time (HP as the P9500, HDS as the VSP) and both originate from
Hitachi in Japan. As is usual with these releases, both companies claim to
have been integral to the product’s development, and perhaps they were, each
in different areas. Lots of other things have been happening too.
The Ris... (more)
There’s no denying that virtualization platforms such as VMware and Hyper-V
have revolutionized the way in which computing resources are deployed.
Physical servers were usually under-utilized and took time and effort to
deploy. These servers also consumed data center space, power and cooling.
Virtualization reduced hardware costs, cut the environmental footprint by
saving on power and cooling, and improved the utilization of physical
hardware compared with dedicated server environments.
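To make the utilization point concrete, a back-of-the-envelope sketch; the figures are illustrative, not from the article.

# Back-of-the-envelope consolidation arithmetic (illustrative figures only).
physical_servers = 20        # dedicated hosts, each ~10% utilized
avg_utilisation = 0.10
target_utilisation = 0.70    # a typical goal for a virtualized host
total_demand = physical_servers * avg_utilisation      # 2.0 "server-equivalents"
hosts_needed = -(-total_demand // target_utilisation)  # ceiling division -> 3
print(f"{physical_servers} physical servers -> {int(hosts_needed)} virtualized hosts")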
Read the rest of this article at Datamation.
It’s pretty easy to pick holes in the current legacy storage products,
especially when it comes to integration within both public and private cloud
deployments. However, it’s worth discussing exactly what is required when
implementing cloud frameworks, as the way in which storage is deployed is
radically different from the traditional model of storage operations. In
this post we will look at why traditional methods of storage management need
to change and how that affects the way in which the hardware itself is used.
This leads to a discussion on APIs and how they are essential... (more)
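To illustrate the kind of API-driven model this points towards, here is a sketch in Python of programmatic volume provisioning. The endpoint, payload fields and create_volume helper are all hypothetical, not any vendor’s actual API.

# Sketch of programmatic storage provisioning via a REST API (hypothetical
# endpoint and payload; no specific vendor's API is being described).
import json
import urllib.request

API_BASE = "https://storage.example.com/api/v1"  # placeholder URL
TOKEN = "..."                                    # auth token, obtained elsewhere

def create_volume(name: str, size_gb: int, tier: str) -> dict:
    payload = json.dumps({"name": name, "sizeGB": size_gb, "tier": tier}).encode()
    req = urllib.request.Request(
        f"{API_BASE}/volumes",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A cloud orchestration layer would call this directly, with no storage
# administrator in the provisioning path:
# vol = create_volume("tenant42-data", size_gb=100, tier="gold")

The design point is that provisioning becomes a call an orchestration layer can make on demand, rather than a ticket queued for a storage administrator.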