This is the third of eight in a series titled 8 Must Haves for the IT Director

Number 3 – Thin Provisioning

Dell Compellent significantly reduces the cost of storage by enabling you to purchase and manage
fewer disk drives now and in the future. With other storage systems, physical disk capacity is
preallocated when the volume is created. Administrators estimate how much capacity may be required
for a given application and allocate “extra” space to accommodate growth. If the volume created is
500 GB, all 500 GB are set aside for that application. No other applications can use any of the preallocated
disk space, and none of it can be reclaimed later if actual utilization falls short of staff estimates.
In most cases, only a fraction of the preallocated capacity is ever actually used,
resulting in the accumulation of purchased but “stranded” storage.

Such inefficient disk utilization inflates capital expenditures, operating expenditures and, ultimately,
your total cost of ownership (TCO). Administrators are forced to buy more capacity than they need
upfront, even though the price per GB is sure to fall. Over time, as capacity is consumed (or stranded), even
more capacity must be purchased, further expanding the data center footprint. And all of this storage
must be provisioned manually, a time-consuming process that often requires downtime. In the end,
regardless of how much data is truly stored, all of these disks require continuous power and cooling.

Dell Compellent Thin Provisioning software, called Dynamic Capacity™, completely separates allocation
from utilization, eliminating preallocated but unused capacity. Administrators can provision a virtual
volume of any size upfront, yet physical capacity is consumed only when data is actually written to disk. That
means you purchase only the capacity you need to store your data today, then continue saving by expanding the
system on demand, adding the right capacity at the right time as your business needs change. In most
cases, organizations can regain 40 to 60 percent of disk space that would otherwise have been lost to preallocation.
You can even reclaim capacity from volumes provisioned on legacy systems using Thin Import.
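
To make the allocation-versus-utilization distinction concrete, here is a minimal Python sketch. It is not Compellent code; the block size, class, and names are illustrative assumptions. The thin volume advertises its full logical size but draws physical capacity from the shared pool only when a block is first written.

    # Conceptual sketch, not Compellent code: thin vs. thick provisioning accounting.
    BLOCK_SIZE = 4096  # bytes per logical block (illustrative)

    class ThinVolume:
        """Advertises its full logical size but consumes physical capacity
        only when a block is first written."""

        def __init__(self, name, logical_size_gb):
            self.name = name
            self.logical_blocks = logical_size_gb * (1024 ** 3) // BLOCK_SIZE
            self.allocated = set()  # indexes of blocks that have actually been written

        def write(self, block_index, data):
            if block_index >= self.logical_blocks:
                raise ValueError("write past end of volume")
            self.allocated.add(block_index)  # physical capacity is consumed only here

        def physical_used_gb(self):
            return len(self.allocated) * BLOCK_SIZE / (1024 ** 3)

    # A 500 GB thick volume strands 500 GB on day one; this thin volume
    # consumes only what has actually been written.
    vol = ThinVolume("app01", logical_size_gb=500)
    vol.write(0, b"application data")
    print(f"{vol.name}: logical 500 GB, physical {vol.physical_used_gb():.6f} GB used")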

Next up in our 8 Must Have series is #4, Automated Tiered Storage.


This is the second of eight in a series titled 8 Must Haves for the IT Director

Number 2 – Storage Virtualization

The term “virtual storage” has about as many definitions as there are vendors that provide it. In its most basic form, Compellent Storage Virtualization means that logical disk volumes are not directly associated with physical disk devices. A single volume, for example, might be spread across many physical drive types and RAID levels.

Here’s Compellent’s overview:

Dell Compellent virtualizes enterprise storage at the disk level, creating a dynamic pool of shared storage resources available to all servers. With read/write operations spread across all drives, multiple requests can be processed in parallel, boosting system performance. Dell Compellent Storage Virtualization allows users to create hundreds of volumes in seconds to support any virtual server platform and optimize the placement of virtual applications.

How to Increase Performance with Storage Virtualization

  • Create any size virtual volumes without allocating drives to specific servers or dealing with complicated capacity planning and performance tuning
  • Present network storage to servers simply as disk capacity, regardless of tier, RAID level or server connectivity
  • Automatically restripe data across all drives in the storage pool when adding disk capacity
  • Dynamically scale the storage pool and implement system upgrades without disruption
  • Use virtual ports to increase port capacity, disk bandwidth, I/O connectivity and port failover

While these are all important bullets, I’d like to add my own from a “benefit” perspective (a small sketch of the striping idea follows this list). For an IT administrator, Compellent Storage Virtualization:

  • Eliminates “hot spots” because individual drives are not targeted at specific apps
  • Improves performance by utilizing all available spindles; the system gets faster as it gets larger.
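
As a rough illustration of what “not directly associated with physical disk devices” means, here is a hypothetical Python sketch. The drive names and simple round-robin placement are my own simplification, not Compellent’s actual algorithm; the point is that consecutive logical blocks of a volume land on different drives in the shared pool, so every spindle contributes to every volume’s I/O.

    # Conceptual sketch, not Compellent's algorithm: a logical volume whose blocks
    # are spread across every drive in a shared storage pool, so no single drive
    # is tied to a single application.

    class StoragePool:
        def __init__(self, drives):
            self.drives = drives  # every drive in the pool, regardless of tier or RAID level

        def locate(self, volume_id, logical_block):
            # Consecutive logical blocks land on different physical drives, so a
            # single volume's reads and writes are serviced by all spindles in parallel.
            offset = sum(ord(c) for c in volume_id)            # deterministic per-volume offset
            index = (offset + logical_block) % len(self.drives)
            return self.drives[index], logical_block // len(self.drives)

    pool = StoragePool(["ssd0", "ssd1", "fc0", "fc1", "sata0"])
    for blk in range(5):
        drive, physical_block = pool.locate("vol-app01", blk)
        print(f"vol-app01 logical block {blk} -> {drive}, physical block {physical_block}")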

Next up in our 8 Must Have series is #3, Thin Provisioning.


This is an excerpt describing Dell Compellent’s move from older PCI-X to PCI-E and its impact on customers. This was a response to a customer’s concern about having to upgrade their Compellent gear. Although Compellent has a good story on minimizing “forklift” upgrades, there comes a point when technology must be refreshed. Does anyone out there still use 5.25″ floppies?

Comment by Bob Fine, Dell Compellent Marketing:

“Full disclosure – I work for Dell Compellent. I manage the Compellent product marketing team. There are two macro level transitions here – the industry wide transition away from PCI-X to PCI-e and the transition from SATA to SAS.

We do offer a variety of ways for our customers to avoid forklift upgrades as much as possible. In the case of SATA technology, the industry has shifted away from SATA to SAS. This isn’t a Compellent decision, but across the entire drive industry. Compellent delayed the end of life long past when drive shipments ended. We still provide Copilot support for SATA, although the drives and enclosures are no longer available for upgrades or new orders.

For many of our customers, they can use a PCI-e SAS card in their existing controller and leverage this new drive technology and avoid a forklift upgrade that some vendors require. Unfortunately our older controllers only have PCI-X interfaces, and PCI-X SAS interface cards are not available from our vendors as part of the industry transition away from PCI-X.

A key Compellent advantage is that moving to the latest controller allows use of all your existing Fibre Channel enclosures and drives along with SAS, something that most competitors do not support.

I’d welcome the opportunity to discuss this further offline from the blog.

- Bob Fine”

Our take: Having been involved with hundreds of enterprise storage projects over 20+ years, we know one truism in data storage: every piece of storage hardware will eventually need to be replaced, so plan on a periodic refresh. I once worked on a Fed Gov project that had a 70-year data retention policy. Although extreme, the requirement forced us to build into our design the ability to migrate data forward (i.e. refresh) from older media to newer. That meant we could not tie our applications to specific locations and/or mount points. Locations are best kept in a database that can be revised over time.

I appreciate Bob’s response, and it is obviously a sincere effort to explain the need for a refresh. The only problem I have is when a marketing team with no real applicable technology experience makes claims it cannot back up.


This is the first of eight in a series titled 8 Must Haves for the IT Director

Number 1 – Fluid Data Architecture

Dictionary.com defines Fluid as:

flu·id
   /ˈfluɪd/ [floo-id]
noun
1.
a substance, as a liquid or gas [or data], that is capable of flowing and that changes its shape at a steady rate when acted upon by a force tending to change its shape.

adjective
2.
pertaining to a substance that easily changes its shape; capable of flowing.
3.
changing readily; shifting; not fixed, stable, or rigid: fluid movements.

The Importance of Fluidity of Data

The underlying goal of scaling out large virtual environments is cost savings, both from using fewer machines and from needing less labor to manage them. One administrator can now manage dozens of virtual environments across multiple application service level agreements.

Individual application availability and performance is where the Fluid Data Architecture shines. Just as liquid seeks its own level, Fluid Data methods balance data placement with natural request history. More frequently used data is placed on closer, faster disk spindles. The concept is not new; Hierarchical Storage Management (HSM) systems have been around since the 1960s and 70s. What has changed is the built-in intelligence and granularity of data placement.

Each block in a Compellent Fluid Data system has metadata characteristics

Dell Compellent’s Fluid Data Architecture

OK, let’s cut to the chase. The key to Compellent’s “put the data where it’s needed at the precise time it’s needed” fluid data process is intelligence... at the BLOCK LEVEL. I didn’t mean to shout, but this is important. Each block carries with it information that other systems simply don’t have.

Check out the graphic to the left. Note that usage and access characteristics accompany each block. By using this metadata, the system can make real decisions about where to place the data.

Data progresses from one tier to another using metadata

This simple tagging of usage information at the block level means the Compellent operating system can easily determine where best to place the data. If, over time, data has not been accessed and still resides on expensive Tier 1 spindles, it can be migrated down to Tier 2 disks, freeing up space.

Likewise, Tier 3 data that suddenly becomes active can be migrated back up to Tier 1 automatically. This built-in process is called “data progression management” and is the key to the Fluid Data Architecture.
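
For illustration only, here is a hypothetical Python sketch of the data progression idea. The tier names, the 14-day idle threshold, and the promotion/demotion rules are my assumptions, not Compellent’s actual policy; the point is that each block’s access metadata drives a periodic sweep that demotes idle blocks to cheaper tiers and promotes busy blocks back up.

    # Conceptual sketch, not Compellent's Data Progression code: each block carries
    # access metadata, and a periodic sweep moves blocks between tiers based on use.
    import time
    from dataclasses import dataclass, field

    TIERS = ["tier1_ssd", "tier2_15k_fc", "tier3_sata"]  # fastest/most expensive first
    DEMOTE_AFTER_SECONDS = 14 * 24 * 3600                # illustrative: 14 days idle

    @dataclass
    class Block:
        tier: str = "tier1_ssd"
        last_access: float = field(default_factory=time.time)
        access_count: int = 0

    def progress(block, now=None):
        """Demote idle blocks toward Tier 3; promote busy blocks toward Tier 1."""
        now = time.time() if now is None else now
        idx = TIERS.index(block.tier)
        if now - block.last_access > DEMOTE_AFTER_SECONDS and idx < len(TIERS) - 1:
            block.tier = TIERS[idx + 1]      # cold data drifts down to cheaper disk
        elif block.access_count > 100 and idx > 0:
            block.tier = TIERS[idx - 1]      # hot data is pulled back up
        return block.tier

    # A block untouched for 30 days is demoted from Tier 1 to Tier 2.
    blk = Block()
    print(progress(blk, now=time.time() + 30 * 24 * 3600))  # -> tier2_15k_fc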

Since we are focusing on IT Director “Must Haves”, it is important to translate this technical feature into benefits.

A Fluid Data Architecture enabled by block-level metadata means:

  • You buy fewer drives and use more Tier 3 storage, saving on power and space
  • You simplify your IT infrastructure with zero-touch management
  • You scale without limits on a persistent, technology-independent platform
  • You recover instantly and set up multi-site replication in minutes

Next up in our 8 Must Have series is #2, Storage Virtualization.
