Defrag This | Read. Reflect. Reboot.

Noble Truth #4: Downtime Is Not an Option

Ipswitch Blog | May 4, 2015


“I guess it’s okay if our network is down for a little while. I’m sure nobody will notice.”

File that one under the “things-an-IT-department-should-never-say” category. Unfortunately, downtime does happen. Not because IT departments are okay with it, but because their networks aren’t optimized for peak performance. More on this in a moment.

As we mentioned in a previous post, downtime means different things to different personas. To the business user, it’s a frustrating but ultimately forgivable offense. To the customer, it’s a sign to take their business elsewhere. And to the business owners (and we’re including IT departments in this category), it’s a broken SLA and potentially millions in lost revenue. That’s right, millions.

Here’s a snippet from our eGuide on the 9 Noble Truths of Network, Server and Application Monitoring that backs up the claim:

You can’t afford downtime—its cost to a modern company can easily exceed $500,000 per hour. According to Dun & Bradstreet, the productivity impact of downtime alone is estimated at more than $46 million per year for a Fortune 500 enterprise.

That’s a lot of millions. And as reliance on networks grows in the coming years, those numbers will only increase. In fact, an Aberdeen study found that the average cost of downtime rose 38% from 2010 to 2012, and we have no doubt it has kept climbing in the years since.

So the question isn’t really whether downtime is an option (clearly, it’s not). The real question is why it still occurs.

One reason is network overload. As we’ve said before, network complexity has exploded in recent years. There are more devices in use than ever before, consuming more and more data. Setting aside denial-of-service attacks and other security-related incidents, downtime usually occurs because of internal bottlenecks; in other words, links running over capacity. IT must prioritize limited bandwidth for business-critical applications and quickly unclog bottlenecks so slowdowns don’t turn into downtime.
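To make that concrete, here is a bare-bones sketch (in Python, for a Linux host) of what watching a link for over-capacity can look like. The interface name, link speed and 80% alert threshold are assumptions chosen purely for illustration; this shows the general idea, not any particular product’s approach.

# A minimal, illustrative sketch of spotting an over-capacity link by sampling
# byte counters from /proc/net/dev on Linux. The interface, link speed and
# threshold below are assumptions, not recommendations.
import time

LINK_SPEED_BPS = {"eth0": 1_000_000_000}  # assume a 1 Gbps uplink
UTILIZATION_ALERT = 0.80                  # flag links above 80% utilization

def read_counters():
    """Return {interface: (rx_bytes, tx_bytes)} parsed from /proc/net/dev."""
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:        # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters

def check_utilization(interval=10):
    """Sample twice, compute per-link throughput, and flag saturated links."""
    before = read_counters()
    time.sleep(interval)
    after = read_counters()
    for iface, speed in LINK_SPEED_BPS.items():
        if iface not in before or iface not in after:
            continue
        rx_bps = (after[iface][0] - before[iface][0]) * 8 / interval
        tx_bps = (after[iface][1] - before[iface][1]) * 8 / interval
        utilization = max(rx_bps, tx_bps) / speed
        if utilization >= UTILIZATION_ALERT:
            print(f"ALERT: {iface} is at {utilization:.0%} of capacity")

if __name__ == "__main__":
    check_utilization()

Run on a schedule, a check like this turns “the network feels slow” into “the uplink has been above 80% for the last ten minutes,” which is the difference between guessing and fixing.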

IT teams would do exactly that, of course, except that without the right monitoring tool, identifying those bottlenecks is an extremely time-consuming, frustrating exercise. If IT can’t pinpoint problems quickly (i.e., if they’re guessing at the root cause), the business impact becomes crippling. To minimize business risk and the cost of downtime, IT teams need to allocate network resources effectively, mitigate issues before users are impacted, and rapidly find and fix any problems that do occur.
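For a taste of what “pinpointing problems quickly” looks like in practice, the toy check below probes a couple of hypothetical business-critical endpoints and reports exactly which host and port is down or slow. The hostnames, ports and 200 ms threshold are made-up examples, not a substitute for a real monitoring tool.

# A minimal sketch of an availability/latency check over TCP. The endpoints
# and thresholds are hypothetical, for illustration only.
import socket
import time

CRITICAL_SERVICES = [
    ("erp.example.com", 443),
    ("db.example.com", 5432),
]
LATENCY_WARN_MS = 200  # assumed "slowdown" threshold before it becomes downtime

def probe(host, port, timeout=3.0):
    """Attempt a TCP connect; return latency in ms, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

def run_checks():
    for host, port in CRITICAL_SERVICES:
        latency = probe(host, port)
        if latency is None:
            print(f"DOWN: {host}:{port} is unreachable")
        elif latency > LATENCY_WARN_MS:
            print(f"SLOW: {host}:{port} answered in {latency:.0f} ms")
        else:
            print(f"OK:   {host}:{port} ({latency:.0f} ms)")

if __name__ == "__main__":
    run_checks()

The point isn’t the script. The point is that an answer like “db.example.com:5432 is unreachable” beats a guess every time.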

In today’s non-stop world, systems must be up 24/7. The business requires nothing less. Sure, there are excuses, but no one is interested in hearing them.

If you’re interested in learning more about ensuring that downtime doesn’t plague your organization, then you’ll want to download our latest eGuide.

9 Noble Truths of Network, Server and Application Monitoring »


Topics: monitoring

