Removable storage devices: Is there a faster way?

There was a time when we burned optical discs such as CD-ROM media to move data around (that replaced floppies for some of us!). Then came the USB era, and all types of storage options made it easy to move data across systems or to carry with us as IT professionals.

Through the years, we have seen USB media become a standard of sorts for removable media on client systems, Windows systems in particular. It’s not without flaws, but it has historically gotten the job done. But what about large amounts of data that reside on removable drives? Is there a better, faster way to move and access data from removable storage? I believe so: meet Thunderbolt!

I recently had a chance to check out the new Thunderbolt Hard Drive enclosure by StarTech. This enclosure connects up to two disks via the new-ish Thunderbolt interface.

image

The Thunderbolt enclosure is tough, small and ready to be on the go.

Now, you may have heard of Thunderbolt in the Mac OS circles, but you can get it for Windows client systems too; and if you do, life is good. Why so? Well, Thunderbolt is a very fast transfer interface, up to 10 gigabits per second. If you use solid-state drives (SSDs) as part of your removable media over USB, you may be missing out on their speed. Thunderbolt can literally blow USB 3.0 (max speed of 5 gigabits per second) off the backplane; only the newest USB 3.1 is on par with Thunderbolt’s speed.
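To put those line rates in perspective, here is a quick back-of-envelope calculation. These are theoretical interface maximums only; real-world throughput will be lower due to protocol overhead and the drives themselves, and the 500 GB dataset size is just an example:

```python
# Rough transfer times for a hypothetical 500 GB dataset over each
# interface at its theoretical line rate (no overhead accounted for).
GBIT_PER_GB = 8  # gigabits per gigabyte

interfaces = {
    "USB 3.0":     5.0,   # Gbps
    "Thunderbolt": 10.0,  # Gbps
    "USB 3.1":     10.0,  # Gbps
}

dataset_gb = 500
for name, gbps in interfaces.items():
    seconds = dataset_gb * GBIT_PER_GB / gbps
    print(f"{name:12s} ~{seconds / 60:.1f} minutes")
```

At these rates, doubling the interface speed simply halves the best-case transfer time, which is why the jump from USB 3.0 to Thunderbolt matters most for large datasets.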

You may have Windows client systems with DisplayPort interfaces; these look like Thunderbolt, but in many systems they only carry video. Thunderbolt, as a pure standards-based interface, can carry both storage and display traffic. You can add a Thunderbolt controller to your Windows systems of interest (and many Mac OS systems have had Thunderbolt for a long time) to get high performance from your removable media. If you are moving large amounts of data and have invested in solid-state drives, the StarTech enclosure with a controller for Windows systems may be something to consider. This enclosure in particular can hold two drives (it pops open and they mount inside), as shown in the figure below:

image

Add power and I/O and you are ready to go!

The notable miss here is that Thunderbolt is not catching on for server systems. The Thunderbolt controllers in the wild are either options on some new PC builds or add-in cards. Regardless of the hassle involved, if you want the fastest access to (presumably) fast drives such as SSDs, this is likely the easiest way to get that high I/O. Add the occasional Mac OS system into the mix, and your high-speed interoperability challenges become moot.

Have you moved to Thunderbolt for removable media on your Windows client systems? If so, how much have you used it? Share your usage strategies below.

Blog disclaimer: A review unit was provided for short term evaluation and was sent back.

Veeam Availability Suite v8: NOW AVAILABLE!

It’s almost too good to be true, but today is the day! v8 is out!

We’ve been promoting v8 for quite a while; in fact, it feels like forever. But today it is a reality that you can have in your hands right at Veeam.com. There are so many new features, it is hard to pick a favorite, but I’m thinking Veeam Cloud Connect is my favorite new category. Here’s a quick rundown of some of my other favorites:

  • NetApp Plug-In: Veeam Explorer for Storage Snapshots and Backup from Storage Snapshots for this popular array type in VMware environments.
  • Veeam Explorer for SQL and Active Directory: Critical application restores made very easy.
  • Encryption: Great way to secure data at source, in-flight and at rest with a key safeguard if you lose the password.
  • EMC Data Domain Boost: This is a real winner if you need to keep a lot of full backups on disk. DD Boost is an incredible ingest technique that extends deduplication from the target back to the source and does the synthesized I/O on the appliance. Great stuff.

Those are just my favorites, but you should take a look at the What’s New document. Yeah, yeah, I know it is 12 pages, but it’s the best way to take all of the new features in.

We’ve put all of this and more at Veeam.com, so the smart option would be to spend some time reading all of the documents before just throwing v8 into production. You could also download the free edition or a trial and give the new features a try in an isolated environment. Note that if you have Hyper-V in your environment, it’s a bit harder to “co-exist” on v7 and v8 simultaneously, as the transport service and other components are installed on the Hyper-V host. So if you have Hyper-V in your environment, don’t upgrade (or even add a production host) until you are ready to do so. VMware environments can run v7 and v8 side by side, as the APIs don’t care who is doing the backup.

You may wonder, “What do I do if I want to upgrade?” Well, here are a few tips:

  • Start here for your upgrade.
  • Are you finally ready to go to Windows Server 2012? Maybe use the Configuration Backup as a mechanism to move your Veeam console to a new OS. (Or convert the console from physical to virtual, or the reverse.)
  • Did you get your new licenses? You can log in to the customer portal (CP.Veeam.com) to get them.
  • Is there anything you want to “change” before your upgrade, such as moving tape drives around or adding new storage or connectivity? Draw all of those changes up and then install the upgrade. Have a question? Ask me! (Twitter is best: @RickVanover, or comments below)

And of course there are more things to consider, but they all start to “depend” after that. I’m really happy that this is now available; our R&D and marketing teams have put a lot into this release (and more is to come, just wait!) and I’ll take a quick breath and then prepare for more. Oddly, I’m on holiday right now and feel kinda bad being away from it, but the team is bigger now. With that, go forth and protect, my friends! Let me know if you have any questions, comments or observations.

UCS is #1 in Americas and more!

One benefit of being a Cisco Champion is the ability to get pre-release information on critical Cisco news. Just yesterday, I got an invite to a same-day embargoed WebEx about some critical Cisco Unified Computing System (UCS) news. Cisco UCS is more than just servers; it’s well-connected and well-managed infrastructure to deliver anything that may come across the wire in the data center of today.

In 2009, when Cisco entered the market with UCS servers, I was rather uninterested; in fact, I think I was interviewed by Beth P. at TechTarget and may have used the phrase “temporary milestone,” as at the time the UCS server came on the market with market-leading RAM and CPU capabilities. Also at the time of release, the UCS server had zero market share in a segment that has low margins (compared to storage), can be viewed as commodity, and is very much a passionate topic in the datacenter, as server admins don’t historically have the brand promise that network administrators have had with Cisco.

That was 5 years ago. With the news today, Cisco has gone from last to first in some key measures. The numbers come from the IDC Worldwide Quarterly Server Tracker, which is the basis for most of these statistics.

Here are some statistics from Cisco’s news today, I’ll share some selected slides (you can view them all here):

image

IDC reports that for the first quarter of 2014 (compared to when Cisco UCS entered the market in 2009), Cisco UCS has a 40% market share in North America for x86 blade servers. Note that Cisco UCS also has chassis or “rack” servers; that’s the C-Series. The blades are the B-Series.

image

The worldwide number for x86 blades has Cisco UCS at 26.3%, the #2 position in just 5 years.

Market share is one thing, but year-over-year growth is also very intriguing. Look at this for server growth (not an IDC number, but a revenue interpretation):

image

Congratulations to the Cisco UCS team! This is an incredible accomplishment in a very tough market segment!

I believe there is an additional story here, however. All C-Series and B-Series Cisco UCS servers do one key thing: offer unified fabric communication and management. This means they are more than just a server. The market is responding; there is a shift, and converged infrastructure solutions are not just validated, but market leading. I don’t see any data that puts the C-Series next to x86 non-blade servers, so one can assume it’s not as good a story as the B-Series (x86 blade) numbers above.

I’ve long evangelized the benefits of the Cisco UCS Fabric Interconnect, partly because of my role at work, where it works excellently for backups, but also because it is an awesome technology. The bigger story of ultimate connectivity, ultimate management and best-of-breed components top to bottom makes Cisco UCS a solid compute platform. Dare I say the natural choice?

Newest AwesomeSauce: Veeam Explorer for SQL

SQL Server is one of those applications that we all care very much about; so much so that we want some serious protection offerings. That’s why we are very excited to announce Veeam Explorer for Microsoft SQL. It solves a real problem for SQL Servers by adding restore options that address everything from whole-database issues all the way down to a specific transaction, and of course we can still restore the whole VM and more.

Let’s break down the three new things that Veeam Explorer for Microsoft SQL brings to the table:

1. Whole DB Recovery: For most situations (90%+) this is going to be what will get you out of a jam quickly for one or more databases. Using Veeam Explorer for SQL, you can simply right-click on the database and restore it from the full image-based backup that was taken.
clip_image002
Of course, this all depends on your recovery requirements; in certain situations, you may want to restore just the one database behind one application without touching the other databases on the same SQL Server that support other applications. This is an easy restore scenario, and like the additional options described below, it does not require SQL expertise.

2. T-Logs and Point in Time. The second scenario that comes with Veeam Explorer for SQL is that transaction log backups and point-in-time recovery options are now available. This is made possible by new options that work in conjunction with the image-based backup, which already has over 27 restore scenarios from one agentless backup. The figure below shows how you can configure SQL transaction log backups:

clip_image003
The transaction log backups and point-in-time recovery options allow Veeam customers to address recovery situations that can’t quite be covered by the first option. For example, if you set up a SQL Server backup, you are now able to restore one or more databases to a specific logged point. This is enabled by a new feature that copies the transaction logs at the interval specified in the figure above. The Veeam Explorer for SQL wizard then goes into a truly awesome set of restore options; the first uses the transaction logs to generate a point-in-time recovery, as shown below:

clip_image005
This reads the image backup we have, plus the transaction logs we’ve gathered in the backup engine, to deliver this specific restore scenario; by some estimates, it can address 30% of SQL restore situations where a specific, advanced recovery is needed.
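Conceptually, a point-in-time restore amounts to picking the newest full image backup at or before the target time and replaying the transaction logs gathered since. Here is a minimal sketch of that selection logic, with hypothetical timestamps and a made-up helper function (this is not Veeam’s actual engine, just an illustration of the idea):

```python
from datetime import datetime

def pick_restore_chain(full_backups, log_backups, target):
    """Pick the newest full backup at or before the target time, plus
    every transaction log backup taken between that full and the target.
    Purely illustrative; datetimes stand in for real restore points."""
    base = max(b for b in full_backups if b <= target)
    logs = sorted(l for l in log_backups if base < l <= target)
    return base, logs

# Hypothetical schedule: nightly fulls at 22:00, log backups every hour.
fulls = [datetime(2014, 11, 6, 22), datetime(2014, 11, 7, 22)]
logs = [datetime(2014, 11, 7, 23)] + [datetime(2014, 11, 8, h) for h in range(12)]

base, chain = pick_restore_chain(fulls, logs, target=datetime(2014, 11, 8, 9, 30))
print(base, len(chain))  # base full from Nov 7 22:00, 11 logs to replay
```

The shorter the log backup interval you configure, the finer the granularity of points in time you can restore to.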

3. Transaction rollback. The third excellent feature that Veeam Explorer for SQL brings is the ability to roll back a specific transaction that caused you grief. Based on our support case data and what we gather from the forums, this is the rarest restore case (1-5% of situations). But what if you need it and you don’t have a SQL DBA?

This is where Veeam Explorer for SQL will shine!

You can go to the next level by selecting the specific-transaction option in the wizard above and finding the database action that needs to be undone. The database will then be restored to the state right before the undesired transaction, as shown in the figure below:
clip_image006

As you can see, Veeam Explorer for SQL Server brings serious options to your data protection arsenal – the best part is that you don’t need to be a SQL DBA!

We don’t yet have a beta for this tool – but stay tuned!

ATLSECCON Session: Data Protection Mishaps to Avoid

This week I am in Halifax, NS for a Veeam-sponsored event, Atlantic Security Conference (ATLSECCON).

image

I had a speaking session accepted there: Data Protection Security Mishaps that you can avoid. Here is the description:

When it comes to data protection, the risks are high. Too many times, companies take adequate protections for live workloads, but are the same standards applied to the durability of the data protection scheme? Different backup technologies offer different opportunities and risks for securing the backup data.
In this breakout session, join backup expert Rick Vanover for practical security tips for data protection administrators to avoid being the next headline. Topics covered in this session include:
• Storage security strategies for backups
• Managing multiple security techniques
• Identifying backdoors from data protection solutions
• Implementing controls for each step of the data protection process

image

The session was very well attended and I got some great feedback! So, here’s the gist of my presentation:

Download PPTX: https://www.dropbox.com/s/k5pj45srxx6sd2r/ATLSecCon%20-%20Session.pptx

Here is a summary list of the mishaps to avoid on what I presented:

  • Today it’s more than tapes falling off the truck.
  • The primary systems may be protected well, but the data protection application has many surfaces and is subject to the same security rules.
  • Identify surface areas of data protection solutions. Kicker: You may have more than one data protection solution.
  • Monitor restores. A redirected restore could breach security profiles. A recommended solution is the Veeam Restore Activity Report.
  • Have a monitoring and logging framework in place now. It’s a lot harder to set it up after an incident and know what to look for.
  • Identify where data protection logging exists. In addition to the aforementioned report, some components may have logging as well (tape moves, modules within the data protection solution, etc.).
  • Storage for backups is an afterthought in most organizations. Primary storage may be secured well; backup storage should be held to the same standards.
  • Know what frameworks are in use. VMware vSphere or System Center Virtual Machine Manager administrators can take a backup of a VM, even if they don’t have access to the guest operating system.
  • Don’t “lock your keys in your car”: Don’t rely on CIFS or SMB for backup storage that is managed by Active Directory. Why? What happens when you need to restore Active Directory? Same for storing VM backups inside of your VM infrastructure. What if that’s the problem?
  • Don’t store backups at home. Get indoor public storage; it’s very affordable, has 24/7 access and can be a cost-effective alternative to storing backups (tape/disks) at home.
  • Don’t “overdo” deduplication. Don’t double or triple dip on deduplication (additional security surface areas and minimal gain for a lot of I/O and CPU consumption). Additionally, beware of a Windows Server 2012 deduplicated volume encapsulated in a VHD or VHDX and copied or otherwise silently exiting the datacenter.
  • Watch the encryption vs. performance discussion. Make sure different parties don’t “Temporarily” disable volume encryption because backups are slow…
  • Use the 3-2-1 rule. This simple, timeless rule can address almost any failure scenario:
    • Keep 3 different copies of your data
    • On 2 different media
    • 1 of which is off-site

A special thank you to those who attended, and to the ATLSECCON board for allowing me to present and Veeam to sponsor!

What can you do to avoid the next cloud failure?

Companies investing in cloud-based solutions should do so very carefully, like any other technology or business decision. In an era where not all cloud solutions are made for the long haul, there needs to be some clear insight into what is a good decision today and into the future. We’ve seen two key cloud failures recently in the form of services ceasing: the first happened last year when cloud storage provider Nirvanix filed for bankruptcy, and the other recent example is Symantec Backup Exec.cloud shutting down. Aside from offerings being closed down, we’ve also seen outages of cloud-based solutions that can impact applications or content delivery.

image

The reality is that a cloud-based solution may not make it; there is a very diverse set of services for companies to choose from today, and the benefits of cloud-based solutions don’t always apply to all organizations. It is also a natural conclusion to plan for some form of outage. This applies to traditional hardware and software products as well, so the decision process isn’t new; it does, however, need different handling.

So, what can you do to avoid the next cloud failure? It starts with a full examination. Companies can latch onto the business benefits that a cloud-based solution brings, but part of the admission process should include a plan for evacuation. To put it another way, the cloud has infrastructure too. Things can go wrong, and it needs to be managed and protected. This applies both to the providers of a cloud-based solution and, as a fiduciary responsibility, to those who subscribe to it. Taking this approach going into a cloud-based solution will make a material difference in what needs to happen should a cloud failure occur.

I advise companies to take the following points into a cloud-based solution investment:

  • Ensure portability to another cloud, or back on-premise
  • Design the specification of the cloud-based solution to be ready for another public cloud, even if you have already chosen one
  • Give extra consideration to application dependencies on a particular cloud

Do you see any risk of more cloud failures? I’m sure we’ll see them, but few have been broadly impactful thus far. Share your predictions in this interesting category below.

Product Review: Generator Interlock for Standby Power

If you are like me, you want to have a backup plan. I do this in my professional role for data protection, so why not do the same at home? Note: Consult a qualified electrician for your panel modifications.

When I lived in West Michigan, we had many multi-day power outages, so we installed an interlock kit for a safe generator power feed. Since we’ve moved to this new house, I decided to go ahead and get a kit installed here. Here is the generator we bought in 2005 or so:

image

On the right side, the 4-pole interface is a 240-volt, 20-amp interface (L14-20P). Now, this generator is nice, but it doesn’t have the legs to run everything in the house; namely, I have to turn off the Rickatron lab datacenter and avoid running the air conditioner and electric dryer. Heat (natural gas), the kitchen, garages and lighting, however, can run on this generator. I’d like to have had a 30-amp / 9,000-watt or so unit, but this is what I have.
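For context on “doesn’t have the legs,” a quick power budget shows why some loads have to stay off. The arithmetic is simple (watts = volts × amps); the individual load figures below are hypothetical, just to illustrate budgeting against the feed:

```python
# The L14-20 feed caps the draw at 240 V x 20 A = 4800 W, below this
# generator's 5500 W continuous rating, so the connector is the limit.
feed_watts = 240 * 20

# Hypothetical running loads (watts) to budget against that feed.
loads = {"furnace blower": 800, "refrigerator": 700,
         "kitchen circuits": 1000, "lighting and TV": 700}
total_load = sum(loads.values())
print(feed_watts, total_load, feed_watts - total_load)  # 4800 3200 1600
```

Big resistive and motor loads (air conditioner, electric dryer, a lab datacenter) would each consume most of that remaining headroom, which is why they stay off when running on generator power.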

While I don’t live in such a rural area now, there is always the risk of a power outage. And the way I see it, the problem is solved either way:

Power goes out: I’m good.

Power stays on: I’m good.

My natural choice in this situation is a manual kit. I call it a “double-throw bypass switch,” but basically it’s an interlock kit. I have a Siemens electrical panel at home and bought the right kit for my house and feed from InterlockKit.com.

Here is how these systems work:

  • The panel has two breakers added to bring in power (in my case, two 20-amp feeds from my generator).
  • The Interlock switch keeps these two breakers off until there is a power incident.
  • When you have a power incident, you connect your generator feed and start the generator.
  • Then throw the two breakers to provide a closed-system feed from the generator that is safe and to code.

Let’s walk through the steps. The picture below is my panel after the Interlock kit has been installed and is the normal running configuration when I have municipal power:

image

The top-right two breakers are the input from the generator, and the interlock kit keeps them off during normal situations (when municipal power is on).

It’s a good idea to test the system, for the following reasons:

  • You become familiar with how the process works
  • You know the pieces and parts work
  • You will extend the life of the generator by keeping it running occasionally

To hook up the generator, a proper installation has a weatherproof box installed on the exterior of the house (with adequate-gauge wiring going to the panel; again, leverage that qualified electrician). Part of the solution is a long cable going from the weatherproof box to the generator, shown below (the other end of this cable goes to the generator):

image

Once the wiring is in place, I can switch the panel to use the generator feed. Note the three steps below:

  • Stop the municipal feed
  • Switch the interlock
  • Activate the feed from the generator


image

While the generator (5,500 running / 7,000 max watts, 20 amps) doesn’t have the full power for this house, it does keep the heat on, the kitchen going, and all lights as well as TV and cable. This solves my concern about what to do if/when the power goes out. Further, this is the “few-hundred-dollar solution” compared to auto-switch standby systems:

Generator:  $400-600

Interlock Kit: up to $150

Electrician: up to $300

If you get a full-house, auto-switch generator, it can easily become a $10,000 solution. Further, those generators run on natural gas, which you can’t assume will be available at all times. I keep enough fuel for two days of generator runtime, which is nice to ensure I’m managing that process.

Final verdict: Interlock Kit A+ | Highly Recommend.
