
The Future of IT Security at Omaha Compliance and Security Summit

 


Thanks to all who showed up for yesterday's 3rd Annual Corporate Compliance and Security Summit! It was a resounding success, with about 200 security and compliance professionals turning out to see presentations from Jim Nelms (CISO for the Mayo Clinic) and SSA Justin Kolenbrander (FBI Cyber Task Force), as well as breakout sessions with Solutionary, ISC2-Omaha, Cosentry, and Continuum.

In addition to the presentations, the Omaha World Herald's Barbara Soderlin interviewed Jim Nelms after his presentation for an excellent piece on the role of IT security professionals. Here is a snippet:

Businesses cannot completely prevent information security breaches, but they can manage risk better by evaluating its probability scientifically, rather than through perception or fear, he [Jim Nelms] told The World-Herald after his talk. Then it's the security chief's role to be able to explain the risks to other executives in a common language that they can understand.

Omaha business owners who are struggling to understand how to best protect their data or intellectual property shouldn't be shy about asking a consultant for help, Nelms said.

“You cannot grow security organically — the industry moves too fast,” he said, with risks evolving over the past 18 months to include “advanced volatile threats” — malicious software programs that damage a computer network's memory, then disappear without a trace, making them hard to detect.

For the full article, head on over to the Omaha World Herald.

Grace Hopper- The Most Influential Computer Scientist You Hadn’t Heard of Until Today

 

As you may have noticed, Google has a very special doodle adorning the search page today, signifying what would be the 107th birthday of Rear Admiral Grace Hopper- famed pioneer of computer science and programming.

Many people outside of the IT industry have never heard of Grace Hopper, so let's start with a little bit of background. Hopper is frequently referred to as the "Mother of Computing," and with good reason- she was one of the team members who developed the MARK I at Harvard back in the 1940s, and she went on to help develop the UNIVAC as well as the COBOL programming language. COBOL was one of the first programming languages written using commands similar to plain English, and it is still in use in government and military agencies.

Grace Hopper is well known amongst those who enjoy learning about the early days of computers. She was the one who popularized the term "debugging," for example, after clearing a moth out of an electromagnetic relay in the MARK I's successor, the Mark II.

She was also famous for her lectures and presentations across the country in the later years of her life, using 11.8-inch lengths of wire to demonstrate the distance electricity can travel in a nanosecond. To drive the point home, she also brought a 984-foot coil of wire representing the distance electricity can travel in one microsecond- a timescale still roughly 1,000 times faster than a relay in the original MARK I. This is a great illustration of "Moore's Law" in action; although Moore's Law technically refers to the number of transistors on integrated circuits, many use it as a more general way to describe how quickly computing hardware performance improves year over year. Grace Hopper had a way with presentations that made technology understandable- she also appeared on Letterman and other media outlets, and her grace (no pun intended) and intellect were an inspiration for many young technologists, myself included.
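If you want to check the arithmetic behind those wire lengths, it is just distance = speed of light × time. Here is a quick back-of-the-envelope sketch (using the vacuum speed of light; a signal in real copper travels a bit slower):

# Back-of-the-envelope check of the "nanosecond wire" lengths.
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # meters per second, in a vacuum
METERS_TO_INCHES = 39.3701

nanosecond = 1e-9   # seconds
microsecond = 1e-6  # seconds

ns_inches = SPEED_OF_LIGHT_M_PER_S * nanosecond * METERS_TO_INCHES
us_feet = SPEED_OF_LIGHT_M_PER_S * microsecond * METERS_TO_INCHES / 12

print(f"one nanosecond  ~ {ns_inches:.1f} inches of wire")   # ~11.8 inches
print(f"one microsecond ~ {us_feet:.0f} feet of wire")       # ~984 feet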

All of this is to say that not only did Grace Hopper help push the science of computing forward, but she also changed how we think about computer technology as an industry.

It's also easy to forget how many of the earliest programmers, from the MARK I era in the 1940s all the way through the '60s and '70s, were women. Now, hardware and programming are heavily skewed towards men, with women making up less than 30% of workers in the field (landing it somewhere between coal mining and animal processing in terms of the proportion of the workforce made up of women). It's a trend worth mulling over (and fixing), because we want to make sure we don't miss out on the "Grace Hoppers" of the future.

Grace Hopper has left her mark on the world, so join Google (and us) in celebrating her and the longstanding impact she had on computer science.

   

Kevin Dohrmann, Chief Technical Officer and Co-Founder, Cosentry

Kevin has over 25 years of experience in the technology industry including technical support, international call center operations management, Internet telephony and the World Wide Web. He cofounded an Internet telephony company that provided Web-based telephone services to large U.S. and European carriers and has been involved in various aspects of Internet and Web development since 1992. Kevin has published many articles on technology networking topics and is a frequent speaker on technology trends and the impact technology has on business.

Cosentry and the Data Center Industry in the News

 


For those of you outside of the city, Cosentry was covered extensively in a feature article by the Omaha World Herald this week. If you haven't seen it yet, you can take a look at the article here:

As businesses’ data storage needs expand, Cosentry adds to its Papillion center

Cosentry has been a fixture in Omaha for the past 12 years, starting in 2001 with the Bellevue data center and Workgroup Recovery center (you can read our full history over here). Over the years it has grown into the largest provider of data center services in the Midwest, with 6 data centers across 4 states. We have grown significantly over time, making the Inc. 5000 list for seven consecutive years.

Along the way, we have had clients ranging from Fortune 100 companies to local business owners, and we do our best to stay true to our philosophy- being a trusted IT partner to organizations, taking on their infrastructure and IT needs so they can return to spending time and resources on the business that they worked so hard to build.

Now, one of the really interesting things this article does is address the industry we have grown alongside for the past decade.

First, there is what Data Center Knowledge described as a "shortage of data center space," given that demand has outpaced construction. One of the reasons we have grown so consistently over the past decade is that the cost and resource benefits of colocation are so dramatic that building your own data center generally lands at the bottom of the list.

This means that most businesses are, or should be, looking to colocation facilities or virtualization options to keep costs down and reliability high. Some regions are underserved by local colocation centers, though, either because there isn't enough space or because the space available doesn't meet compliance and regulatory requirements. We think Cosentry has a part to play in solving this discrepancy.

 

Second, there is a broader look at the sea change in data and reliability over the past few years. The article opens with an example from Black Hills Corp., which checks and evaluates its readings every 15 minutes to better "manage capacity and predict demand." It's easy to take for granted, but it is pretty amazing that current technology allows this level of information, supporting both real-time evaluation and long-term predictive analytics.
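To make that concrete, here is a minimal, hypothetical sketch of how 15-minute readings might feed both a real-time view and a rough longer-term trend. The numbers and the rolling-window approach are illustrative only; they are not Black Hills Corp.'s actual system.

from collections import deque

# Hypothetical 15-minute readings (say, kW of demand); in practice these would
# stream in from meters or sensors rather than live in a list.
readings = [512, 498, 530, 545, 601, 588, 570, 615, 640, 602]

window = deque(maxlen=4)  # the last 4 readings = the most recent hour
for value in readings:
    window.append(value)
    hourly_avg = sum(window) / len(window)          # "real-time" view
    print(f"latest={value}  last-hour avg={hourly_avg:.1f}")

# A crude long-term trend: compare the first and second halves of the history.
midpoint = len(readings) // 2
trend = sum(readings[midpoint:]) / (len(readings) - midpoint) - sum(readings[:midpoint]) / midpoint
print(f"demand trend over the period: {trend:+.1f}")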

Data centers are something of a self-perpetuating machine in this way, given that this large amount of data and analytics requires additional space and power. The equipment used to store and analyze this data then churns out more data to analyze. As part of what EMC describes as "the Internet of Things," the data center industry stands as one of the primary drivers of, and beneficiaries of, the movement to big data. Virtualization and ecommerce do the same thing, which is just another reason the need for data centers is growing.

 

We hope you enjoyed the article, and we are looking forward to what's coming next!

DDoS 101: Mitigation and Prevention

 


Distributed Denial of Service (DDoS) attacks against Internet-based companies have become commonplace over the course of the last several years. Some MSPs provide Internet services at a level that exceeds the scale of most denial of service attacks, which is one of the best protection measures available right now. However, any individual company could become the target of a DDoS attack that exceeds the data center's capabilities or the capacity of the client's equipment or services.

Organizations that have multiple geographically diverse, load-balanced configurations are much less likely to be affected by a noisy neighbor or a DDoS attack on a neighbor, because they have much more excess capacity to absorb the noise of a DDoS attack along with the good traffic intended for clients.

There are additional products and services that can be purchased to mitigate and prevent a DDoS attack aimed at a client. Large Tier 1 Internet backbone carriers (AT&T, Level 3, and CenturyLink) all have packet-analysis DDoS prevention and mitigation services that analyze each IP packet destined for your servers, tossing out the DDoS traffic and passing through only legitimate packets. These services are available for several thousand dollars per month, depending on the amount of capacity to protect.

As long as the scale of the DDoS attack is less than the amount of processing you purchased, you will be protected. Once those service volumes are exceeded, the same problem will still occur.

Prevention appliances and services are available from Arbor Networks, Prolexic, Neustar, Black Lotus, and several others. These options sit at the edge of your network as an appliance, or provide a dedicated network and/or cloud-based filtering capability, and use a variety of methods to filter out bad traffic and allow only good packets through. Again, this protection holds only up to the level of throughput of the device you have placed into service.
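As a rough illustration of the filtering idea, the toy sketch below drops traffic from any source that exceeds a per-second packet budget. This is only the simplest possible behavior; it is not how any particular carrier service or appliance actually works, since real products weigh many signals beyond raw packet rate.

import time
from collections import defaultdict

# Toy per-source rate limiter: a stand-in for the packet-analysis idea,
# not a real DDoS mitigation product.
PACKETS_PER_SECOND_LIMIT = 100

# source_ip -> [packets seen in the current window, window start time]
counters = defaultdict(lambda: [0, 0.0])

def allow(source_ip, now=None):
    """Return True if a packet from source_ip should be passed through."""
    now = time.monotonic() if now is None else now
    count, window_start = counters[source_ip]
    if now - window_start >= 1.0:         # start a fresh one-second window
        counters[source_ip] = [1, now]
        return True
    if count < PACKETS_PER_SECOND_LIMIT:  # still within this window's budget
        counters[source_ip][0] = count + 1
        return True
    return False                          # over budget: drop the packet

# A flood from one source gets clipped at the limit; normal traffic passes.
print(sum(allow("203.0.113.9", now=0.5) for _ in range(150)))  # -> 100
print(allow("198.51.100.7", now=0.5))                          # -> True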

Experts can work with you to make sure that you understand the risks and costs of DDoS mitigation using appliances and carrier-based services to meet your financial and uptime requirements. There is real danger from DDoS attacks for many companies that rely on uptime for revenue, and the world knows it- DDoS solutions, from bandwidth expansion to behavior analysis, are among the most widely researched new technologies in the industry.

   

Kevin Dohrmann, Chief Technical Officer and Co-Founder

Kevin has over 25 years of experience in the technology industry including technical support, international call center operations management, Internet telephony and the World Wide Web. He cofounded an Internet telephony company that provided Web-based telephone services to large U.S. and European carriers and has been involved in various aspects of Internet and Web development since 1992. Kevin has published many articles on technology networking topics and is a frequent speaker on technology trends and the impact technology has on business.  Kevin graduated from Iowa State University.

Managed Hosting and High Availability Websites- An Interview

 


An Interview with John Grange, Product Lead for Cosentry’s Hosting Services

John Grange has been in the hosting industry for years and is currently the product lead for Cosentry's Hosting Services, built from the ground up to maximize the same qualities clients find appealing about Cosentry's other services- availability, reliability, and security. In this interview, John clarifies what managed hosting means, what makes it a good choice for certain organizations, and how best to implement it.

Q: John, can you please describe what a high availability hosting configuration is?

A: With websites, intranets, or even custom applications, it is always important to maximize uptime. A high availability hosting configuration allows certain components of your hosting environment, say a web server or database server, to fail without your users experiencing any downtime. More importantly, in a high availability environment, failover happens almost instantly and without any human intervention which makes high availability a must for mission critical applications.

A high availability hosting solution also provides increased scalability and performance. If you are expecting more users because of a campaign or an important event, it's very easy to simply add another web server to handle the increased load. Additionally, each individual server is doing less work and able to produce faster load times for your users.

High availability environments typically include:

  • Multiple web or application servers that are replicated so they contain identical data.
  • One or more load balancers that direct traffic to the different web or application servers. In the event of a problem, the load balancers refrain from directing user traffic to the problem server.
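As a conceptual sketch of those two pieces working together (the server names and health flags are made up; this is not Cosentry's actual load-balancing grid), round-robin distribution simply skips any server that has failed its health check:

import itertools

# Hypothetical pool of replicated web servers behind a load balancer.
servers = {
    "web-01": {"healthy": True},
    "web-02": {"healthy": True},
    "web-03": {"healthy": False},  # failed its last health check
}

rotation = itertools.cycle(servers)  # simple round-robin order

def pick_server():
    """Return the next healthy server; skip any server marked unhealthy."""
    for _ in range(len(servers)):
        name = next(rotation)
        if servers[name]["healthy"]:
            return name
    raise RuntimeError("no healthy servers available")

# Requests flow only to web-01 and web-02 until web-03 passes a health check.
print([pick_server() for _ in range(4)])  # ['web-01', 'web-02', 'web-01', 'web-02']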

Q: You have seen a lot of company websites. Why is high availability important in a hosting solution?

A: High availability is an incredibly important attribute of a hosting solution because downtime often translates to lost dollars, whether through reduced productivity, lost sales, or even reputation damage. Amazon famously reported that every 100 milliseconds of added load time cost it 1 percent in sales. Google says a half-second delay results in a 20% drop in traffic.

A high availability hosting environment can drastically reduce downtime while improving performance, which drives real business value.

Q: John, can you give us some examples of situations where a high availability hosting solution makes sense?

A: Mitigating the risk of downtime is something that you really should do regardless of the application. But certain applications or scenarios carry a greater importance, and steps should be taken to reduce the risk of downtime or poor performance. Examples of these are:

  • Large websites
  • Corporate intranets
  • E-commerce
  • Critical applications
  • Marketing campaign sites

Q: Isn't high availability difficult to implement? Doesn't it add a lot of complexity?

A: You can't get around the fact that advanced configurations such as high availability add complexity to a hosting environment. The key is adequately weighing complexity against capability and cost. It is completely possible to reduce your risk of downtime to almost zero; however, the costs associated with doing so would be enormous. So you need to map out your objectives and then create an implementation plan.

Your implementation plan should address these items:

  • Identify your traffic and resource needs 
  • Determine your uptime needs, taking into account planned maintenance and updates
  • Identify the points of failure from your users all the way to your servers and then determine what redundancy you need to meet those uptime requirements
  • Design an architecture based on the needs and objectives you've determined through planning
  • Install and configure the servers and devices
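One way to quantify the redundancy step is simple availability arithmetic: components in series must all be up, while redundant (parallel) components only fail if every replica fails. Here is a minimal sketch with illustrative, made-up availability figures (not Cosentry SLA numbers):

# Toy availability math. Series: everything must be up.
# Parallel: the tier only fails if all replicas fail at once.

def series(*availabilities):
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(availability, replicas):
    return 1 - (1 - availability) ** replicas

web = parallel(0.99, replicas=2)   # two replicated web servers
db = parallel(0.995, replicas=2)   # redundant database stack
lb = 0.9995                        # a single load balancer (a point of failure)

total = series(lb, web, db)
downtime_hours = (1 - total) * 365 * 24
print(f"estimated availability: {total:.5f}  (~{downtime_hours:.1f} hours/year of downtime)")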

Q: A lot of companies offer hosting capabilities.  How does Cosentry implement high availability hosting solutions?

A: Our approach is distinctive in that we leverage our Managed Hosting Platform, which features built-in high availability components such as a load balancing grid and a redundant database stack, while also working hands-on with clients to make sure the environment is tailored to the application we're hosting.

This approach allows our clients to realize efficiencies and cost savings from our existing platform, while receiving a consultative, hands-on service to ensure we're meeting individual needs. We've gone to great lengths to develop a turn-key process for bringing our clients into high availability environments, allowing their people to focus on the business and not on the hosting infrastructure.

John also put together a "hosting roadmap," designed to help organizations make smart choices about their hosting needs. You can download it at the link below.

Welcome Brad Hokamp, Cosentry's New CEO

 


 

Cosentry came out with some big news this week- we officially announced Brad Hokamp as the company's new Chief Executive Officer! The energy here is buzzing, and we are incredibly excited about the vision that Brad is bringing to Cosentry.

Brad has been working in the data center, hosting, and cloud industry for over 25 years, with companies of various sizes, and he has overseen growth wherever he has gone. He has a very strong idea of what Cosentry should be, and I would like to share it with you here.

Brad's vision for Cosentry's future is simple and direct- Cosentry will continue operating as our clients' trusted Midwestern partner. We actually sit down with our clients, understand their business needs, and develop comprehensive IT solutions that help them pursue their goals. This focus on client service, along with a history of expertise and reliability, is what drove us to become the largest and most trusted provider of data center services in the region, and it will lead us as we grow.

The result is that Cosentry will continue investing in the resources that will truly take your organization to the next level. We will sit down with you, advise you about your choices, and develop solutions that meet your needs.  What sets us apart is that we don’t just sell products to you- we partner with you, developing the strategy that will help your organization rise above the fray.

So that’s the plan. This vision of our company, the decision to embrace the core values that made Cosentry the leader in Midwestern data center services, is why we knew Brad was the right choice for us. 

So join me in welcoming Brad Hokamp to the Cosentry family.  It’s an exciting time to be at Cosentry, and I can’t wait to see what happens next.

Orchestrating Success with Hybrid Clouds

 


 

In the past, we have discussed all sorts of topics around cloud computing, and the value of hybrid clouds.  For this post, I wanted to cover what the next big step in cloud computing will be- but first, let’s have a look at the big players:

 

The cast:

Public cloud- Like Amazon's and Microsoft's offerings, the public cloud is a virtual environment divvied up amongst the customer base. This is the most affordable cloud option, but it is also the least reliable (neighbors can eat into your resources) and the least secure.

Private cloud- This is a cloud environment with dedicated resources. It can either be closely managed for you in a managed cloud setting, or run as a hands-off, self-service virtual private data center.

Hybrid cloud- some combination of public cloud, private cloud, and possibly colocation.

Orchestration software- the user-facing software that allows a user to control their cloud services.

 

The plot: 

Businesses need the cloud to stay relevant- that much is clear to any enterprise business that has any sort of digital presence- so the real trick is how to make cloud computing work as well as possible for your organization. As we’ve mentioned before, hybrid solutions tend to be the best option for most companies.  A company uses the private cloud to store their most important and sensitive data, and uses the more affordable public cloud to store their less important data.

After a company has chosen to follow this route (as most eventually do), the next question is how the cloud will be managed. To get the most out of your cloud setup, you have to be able to scale your resources up and down as necessary, back up your data, and facilitate the flow of information between your environments. There are a couple of ways to go about this: first, you can simply pay a company to take care of it for you- this would be a managed hybrid solution. On the other end of the spectrum, you can control your cloud yourself with an orchestration layer.

For those of you who are thinking a couple of steps ahead, you can see how the orchestration layer on top of your cloud resources can be the most powerful aspect of the process. Most companies, given the chance, would choose a more affordable and customizable cloud design, which means the right orchestration layer can give a company new, unparalleled flexibility in virtualization- as long as it is user friendly enough.
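To make the orchestration idea a bit more concrete, here is a deliberately simplified, hypothetical sketch of the kind of policy such a layer automates: sensitive workloads land on the private cloud, everything else goes to the cheaper public pool, and capacity scales with demand. A real orchestration tool would drive provider APIs rather than an in-memory dictionary.

# Hypothetical orchestration policy: a toy model, not any real cloud API.
clouds = {"private": [], "public": []}

def place(workload, sensitive, instances=1):
    """Route sensitive workloads to the private cloud, the rest to public."""
    target = "private" if sensitive else "public"
    clouds[target].extend([workload] * instances)
    return target

def scale(workload, target, delta):
    """Add or remove instances of a workload in the chosen cloud."""
    if delta > 0:
        clouds[target].extend([workload] * delta)
    else:
        for _ in range(min(-delta, clouds[target].count(workload))):
            clouds[target].remove(workload)

place("customer-db", sensitive=True)                   # stays on dedicated resources
place("marketing-site", sensitive=False, instances=2)  # cheap public capacity
scale("marketing-site", "public", +3)                  # burst for a campaign
print({name: len(pool) for name, pool in clouds.items()})  # {'private': 1, 'public': 5}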

 

Ending:

The future of cloud computing lies in the orchestration layer.  With a user-friendly orchestration tool that can easily manage both a private and public cloud, and facilitate between the two, a company can scale quickly, keep costs down, and keep their cloud customized just the way they want it.  Look for cloud providers to be focusing on building intuitive orchestration tools as we enter the cloud age.

Guest Post: Benefits of BYOD and Cloud Computing

 



Adding mobility to enterprise applications has become a top priority for businesses in all areas of the world. People want reliable, secure access to their work-related data on any device they bring into or use outside of the work environment. Whether it's an iPad, iPhone, Droid, Kindle, HP laptop or even a smart TV, people expect to be able to work anywhere and on any device they choose.

A recent article posted by CIO online reported that 77 percent of the 1,300 companies surveyed about mobile cloud strategies expect mobile apps and services to soon become the standard means through which employees access IT systems for work. That's a huge number!

OK – so we all understand the importance of mobility. But, how do we get there?

The problem for many IT managers is how to add mobility quickly and cost effectively, while ensuring these applications are built in such a way that they won't compromise secure company data.

Many people are familiar with native apps, which are downloaded onto a device from an app store and launched straight from the device's home screen. The problem with native apps is that they are costly to develop and maintain: one app must be developed for each platform (Apple's iOS, Microsoft's Windows, Google's Android). That doesn't bode well when you need multi-device, cross-platform functionality.

A better alternative for many organizations is HTML5. With HTML5, one application will work on any platform or device. New advancements in the technology mean that performance is nearly on par with native apps, and secure cloud hosting of your data is equivalent to the security offered by on-premises apps.

If you are still using legacy apps, converting them to current .NET or other technology will not only help you get rid of old code, but it will also get you to a point where you can add HTML5 mobility to your business. It might take an up-front investment to convert apps and migrate them to the cloud, but your total cost of ownership will be lower because you are freeing up valuable resources currently spent maintaining old apps and dealing with downtime and emergency recovery. Not to mention, new, better apps promote greater productivity, and that leads to a higher ROI.

If you are thinking about adding mobility to your business, consider your mobile strategy first. Ask what you want to accomplish. Enterprise mobility, for example, will expand your user base and user adoption. Mobile apps can also create a competitive advantage in terms of productivity and generating more profits. Another study cited by Dell said mobile apps added 240 productivity hours per mobile employee each year.

On the marketing side, mobile apps tend to give organizations a competitive advantage over similar businesses that don't have one.

For additional questions on custom development, legacy application conversion and migration or mobile enablement, contact us today.

 

This is a guest blog post from Matt Dillon, Vice President of Development at AppShark. Cosentry partners with AppShark to offer application migration, cloud deployment, HTML5 mobility and managed hosting services.

The 5 Major Themes of EMC World 2013

 


All right, we are a few days out of EMC World, and it is time to take a look back at what we learned over the course of the week. EMC has been taking an interesting strategy with its partners- as the dominant player in storage, EMC is consolidating its hold on the industry by bringing together companies like VMware, Pivotal, and RSA; however, instead of folding them into the parent company, EMC lets them continue operating reasonably independently so each can build the best products for its segment of the industry. It is a fascinating strategy, and those of us in the data center industry will get to reap the benefits.

The show was full of amazing keynote speakers. For those of you who weren't there, you can find the speeches at EMC TV. If you want to see the main theme of each keynote in adorable cartoon format, I recommend checking out the EMC World blueprints gallery.

Now, I thought I would take a minute and touch on the major announcements and themes from this year's epic EMC World, boiled down into 5 main points.

1. Getting it Right with ViPR

ViPR signifies a major evolution in the entire virtualization industry- it can take and unify hardware from any array manufacturer under the ViPR banner, automatically deploying storage resources. This way your valuable software can be running on anything, anywhere. Some other companies have tried similar products, but EMC built ViPR from the ground up to be used by the coming generation of the storage industry. I'm excited to see where the journey to Software Defined Storage (SDS) takes us.

2. Big Data, Fast Analytics

It was the central theme of the keynote from Pivotal's Paul Maritz: What do we do with big data? The number of devices generating data in the world is growing exponentially, already beating out the amount of information coming from people. If every device generates terabytes of data, how do we decipher the meaning behind it all? Pivotal's goal is to build programs that actually analyze the data as it is generated- fast data, they call it. It won't be a surprise to see the need for this sort of data analysis explode over the next couple of years.

3. Where Does Security Go Next?

The digital security world is changing. Perimeter protection is great, but as time passes it is becoming less effective, as malevolent forces find ways around perimeter roadblocks as fast as organizations put new ones up. It's similar to the vaccine problem- we always need to stay one step ahead of the game to keep ourselves safe from harm. Art Coviello of RSA wants to change the game by taking security in a new direction: real-time analysis of behavior patterns to recognize the bad guys.

4. Software Defined Everything

Software Defined Storage (SDS) and Software Defined Data Centers (SDDC) were the stars of the show, for EMC and VMware respectively. EMC's primary update on software defined storage was the ViPR announcement, but the whole show was indicative of its new company strategy: EMC will be putting its considerable resources behind the SDS idea. VMware has been pushing the software defined data center for years now, and adding ViPR into its tried and true mix of automation, security, and converged infrastructure will put it on a strong path to that goal.

5. Obligatory Vblock Update

We can't let EMC World pass us by without an update on our favorite converged infrastructure product- the Vblock. If you have forgotten, the Vblock is a "data center in a box," which ships with all of the storage, server blades, switches, and so on in one large cabinet. It's built with the best enterprise-class technology from EMC, Cisco, and VMware, so it is no surprise that VCE's awesome infrastructure offering now owns 57 percent of the market. Look for that number to get bigger and bigger as more organizations realize the benefits of installing or running the cloud on Vblock technology.

So, we have talked a bit about the main announcements from EMC World, but I do want to take a minute to say- it was an incredible show. From the amazing A/V effects, to the informative presentations, to the personal DJs located in the exhibition halls, the show was bombastic on nearly every level. To all of those who were not able to make it- it is never too soon to start lobbying your boss for a ticket to EMC World 2014.

EMC World Blog, Day 2- Federation of Partners

                       


Day 2 of EMC World has come to an end, and today's focus was how EMC's federation of partners will build a path forward to the Software Defined Data Center (SDDC) and Software Defined Storage (SDS). Partners ranged from Pivotal, EMC's newest independent spin-off, to VMware, longtime ally and partner in the VCE corporation.

                       

Pivotal

Big data, big data, big data. That is the name of the game for Pivotal. Paul Maritz got up on stage and started talking about the Internet of Things- how, in a few years, people won't generate the bulk of the information online. There will be tens of billions of devices, from airplanes to cellphones to glasses to houses, generating terabytes of data about their needs and activities on a daily basis, and properly analyzing it all is one of the biggest issues as we move into the future.

Pivotal plans to address the main needs in this space:

1. Storing and reasoning over large amounts of data

2. Rapid app deployment

3. The ability to ingest a wide stream of information

4. Upgrading legacy apps to reflect these changes

Pivotal has built what they describe as an operating system (one that sits on the "hardware" of the cloud) that allows a high level of automation and analysis of the data flow. While describing this process, a great quote came up: "The enemy of reliability is the human." The goal here is to build an OS that automatically processes all of this data in a reliable and consistent way, and then presents that information consumably. It was an impressive presentation, and I look forward to hearing more from them.
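As a minimal sketch of the "analyze it as it is generated" idea (the data and names are assumed; this is nothing from Pivotal's actual stack), incoming device readings update running statistics the moment they arrive instead of waiting for a later batch job:

# Toy "fast data" aggregation: update statistics as each event arrives,
# rather than storing everything and analyzing it in a later batch job.
running = {}  # device_id -> {"count": events seen, "mean": running average}

def ingest(device_id, value):
    stats = running.setdefault(device_id, {"count": 0, "mean": 0.0})
    stats["count"] += 1
    # Incremental mean update, so no per-device history needs to be kept.
    stats["mean"] += (value - stats["mean"]) / stats["count"]

for event in [("engine-7", 92.1), ("engine-7", 95.4), ("thermostat-3", 21.5)]:
    ingest(*event)

print(running)  # {'engine-7': {'count': 2, 'mean': 93.75}, 'thermostat-3': {'count': 1, 'mean': 21.5}}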

Isilon

Isilon focused on what they like to call the "transformation squeeze"- keeping up traditional infrastructure while trying to make the move to new applications. To that end, they announced the new version of Isilon OneFS, which will automate storage efficiency as well as integrate the Common Event Enabler to allow API access by third-party tools. The goal here is to enhance ROI while keeping regulatory compliance in mind.

VMware

VMware is still making big steps forward in its attempt to software-define anything that moves. The main focus here was the Software Defined Data Center (SDDC), which has been their goal for some time. VMware always seems to know where the industry needs to go next, and gently pushes the change-resistant types in that direction. Right now, over 80% of servers are still in the physical realm, but that is going to change quickly- just look at how companies have been using the Vblock.

This year, they are featuring NSX, a combination of VMware development and Nicira technology that builds a new networking layer. They previously established that the network was one of the big problems in moving toward software defined data centers, so that is the primary issue they are tackling today.

                       

                       

The big theme of this conference is becoming clearer and clearer, though. With ViPR coming forward as the main announcement of day 1, and looking at the direction all of these independent partners are moving, it is pretty clear that EMC has a gargantuan task on its hands. Everything needs to be tied together in a way that allows it to function as a cohesive whole- the software defined data center will drive storage and cloud progress for years to come. It is the next step, and I look forward to seeing how they build on this idea on Day 3.
