High Availability and Fault Tolerance

Most users I’ve talked with have a difficult time differentiating these two kinds of systems. Hopefully, this article sheds some light on how these two IT infrastructure approaches differ from each other, and on how to match each of them with its respective Service Level Agreement.

High Availability means that the infrastructure has been set up so that it STILL ALLOWS FOR MINOR INTERRUPTIONS, because of the following factors:

  • Components are not fully fault tolerant.
  • Components are designed and deployed so that each one has a redundant counterpart. An example is a pair of database servers mirrored to each other: if one fails, the other takes over the processing. The servers are not fault tolerant in the strict sense, but downtime is minimized if the second server takes over automatically, or is switched over manually, when the first server goes down.

In short, high availability is a combination of several components, each ready to take over the processing load if its counterpart goes down. There can still be a small amount of downtime, whether the failover is configured to happen automatically or the components are switched over manually when a counterpart fails.

Normally, creating a highly available setup entails keeping two copies of everything on two separate pieces of equipment, so that if the first goes down, the second carries on until the first is diagnosed and repaired.
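To illustrate the idea in software terms, here is a minimal sketch in Python of the health-check-and-failover logic that sits in front of such a redundant pair. The host names and port are hypothetical placeholders; this is only meant to show the concept, not to stand in for a production load balancer.

    import socket

    # Hypothetical primary/standby pair -- replace with your own hosts and port.
    PRIMARY = ("db-primary.example.local", 5432)
    STANDBY = ("db-standby.example.local", 5432)

    def is_alive(host, port, timeout=2.0):
        """Return True if a TCP connection to the server succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def pick_active_server():
        """Prefer the primary; fall back to the standby if the primary is down.

        The short gap between the primary failing and this check noticing it
        is the 'minor interruption' that high availability accepts.
        """
        if is_alive(*PRIMARY):
            return PRIMARY
        if is_alive(*STANDBY):
            return STANDBY
        raise RuntimeError("Both servers are down -- no availability left")

    if __name__ == "__main__":
        host, port = pick_active_server()
        print(f"Routing traffic to {host}:{port}")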

Fault Tolerance, on the other hand, is the same as zero tolerance: ZERO TOLERANCE for downtime. It simply means that the infrastructure cannot go down or become unavailable. Normally, such an infrastructure uses fault-tolerant equipment containing two motherboards, two hard disks, two power supplies and two sets of memory modules, all integrated within a single central resource unit (what we call a chassis) and running in active-active mode, meaning both sets are working and replicating in real time. This ensures that if one component goes down, the other keeps working, eliminating the unavailability factor.
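As a rough software analogy of that active-active principle, the sketch below (plain Python, with made-up in-memory "replicas") writes every update to both units synchronously, so either unit can keep serving if the other dies. Real fault-tolerant hardware does this at the component level, in lockstep; this is only an illustration of the idea.

    class Replica:
        """Hypothetical in-memory unit standing in for one redundant component."""
        def __init__(self, name):
            self.name = name
            self.data = {}
            self.alive = True

        def write(self, key, value):
            if self.alive:
                self.data[key] = value

        def read(self, key):
            if not self.alive:
                raise RuntimeError(f"{self.name} is down")
            return self.data[key]

    def replicated_write(units, key, value):
        # Active-active: every write goes to all units in real time,
        # so losing one unit loses no data and causes no downtime.
        for unit in units:
            unit.write(key, value)

    def replicated_read(units, key):
        # Any surviving unit can answer the read.
        for unit in units:
            if unit.alive:
                return unit.read(key)
        raise RuntimeError("All units are down")

    if __name__ == "__main__":
        a, b = Replica("unit-A"), Replica("unit-B")
        replicated_write([a, b], "balance", 1000)
        a.alive = False                             # one component fails...
        print(replicated_read([a, b], "balance"))   # ...service continues: 1000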

In reality, high-availability components are cheaper per unit, but the overall implementation is more expensive because of:

  • Two units of each piece of equipment.
  • Two licenses for each piece of equipment.

On a per-unit basis, fault-tolerant equipment costs more, but implementation-wise it turns out cheaper because:

  • You would only get one fault-tolerant machine.
  • You would only pay for one license per piece of equipment, instead of two.
  • It is easier to manage and maintain.

Whichever way it goes, it is up to each user and each business to decide whether high availability or fault tolerance works best for their scenario. But consider these:

  • If the business requires second-by-second updates on transactions, fault tolerance is advised. Examples are banks and stock exchanges.
  • If the business does not, then high availability is probably better.

But then consider the costs:

  • For high availability, the costs involved include two units of each piece of equipment in the infrastructure (for example: two application servers, two database servers, two routers, two firewalls), all configured as redundant pairs.
  • For high availability, you would also need to license two application servers, two database servers, and so on.
  • Also for high availability, you would need skilled staff to manage the redundancy of the servers. It becomes very costly if, say, the business has to manage redundant Oracle RAC servers.

For the cases I’ve enumerated above, the cost of fault-tolerant equipment pays for itself in the long run. Although the market has recognized these facts and has started creating solutions for businesses’ fault-tolerance requirements, we still see only a handful of such products today, apart from a few server vendors that offer 100% fault-tolerant solutions. Other manufacturers have also started introducing semi-fault-tolerant solutions, but these may not deliver on the promise of 100% availability.

Let’s wait around two more years for the market to mature in this area.


LTE For Business Use

Hi readers, good to be back.

Over the past few weeks, we have been bugged by several questions from readers on the feasibility of using LTE-4G on their corporate networks. Here is our take on this.

LTE-4G is not an entirely new technology. LTE, for those not familiar with it, means Long Term Evolution. It has previously been marketed alongside 4G and HSDPA. I don’t know about the hype behind it, but it will be the mid- to long-term communications channel for mobile networks. It still uses the cellular network as its main point of connectivity and will be the standard for data communications for mobile users.

With regard to the questions we have been encountering, I don’t think it is yet feasible for business users to integrate LTE into their main network as the primary means of connectivity. It would be best to first check the following factors, which determine how reliable a connection an internal business network can get:

  1. Strength of signal
  2. High availability requirements
  3. Functionality of the network
  4. Availability of other connection options
  5. Availability of equipment that can bridge the external LTE network to the internal network

Here in the Philippines, LTE is quite new and options are limited in terms of carriers, signal strength and equipment. Yes, LTE is great for mobile users going around the city with their LTE-capable handsets or dongles, but I think that would be it. It may not yet be capable of carrying the high volumes of data required on corporate networks. Migrating your internal T1 or even DSL links to LTE is not feasible yet; maybe soon, but not until the telcos have established stability and an appropriate service-level standard for this technology.

Hey! Is that a Cloud?

Nice to be back here after a long absence.

Well, our absence was mainly due to further studies we have undertaken, including applying the results of those studies to our business model.

Also, I have been talking with lots of people about technology, and we think one of the most misconceived and misunderstood terms in computing these days is CLOUD COMPUTING.

What actually is CLOUD COMPUTING? Well, it’s NOT simply having your applications run on the Internet, and it is also not accessing your own network over the Internet via VPN. If those definitions do not hold, then what actually is CLOUD COMPUTING?

Per Wikipedia, CLOUD COMPUTING is the delivery of computing and storage capacity as a service. By “as a service” we mean that it is paid for, not purchased outright: you run your applications, your infrastructure and your platform as though you were subscribing to them for a fixed or variable period. A CLOUD COMPUTING SERVICE is already there, up and running; all a user has to do is subscribe, be given access, and run. This is different from opening up your own servers and applications over the Internet for your internal users. CLOUD COMPUTING also implies that the facility is shared with other subscribers, security issues notwithstanding.

Most vendors now market around the cloud, but many of them are not actually familiar with it, which adds to the confusion. Only a few vendors genuinely qualify as “valid” cloud providers. We will not be naming names here; it is up to readers to determine who is really telling the truth in their cloud marketing.

There are many things we want to discuss about the CLOUD, but it would be better to subdivide these topics for future posts… watch out for them!

The Year of the Breach

As the year comes to a close, news headlines have been dominated by reports of high-profile security attacks, some launched by “hacktivists” such as Anonymous and LulzSec.

But something  larger was brewing. Amidst hacktivists’ attacks on Sony, HBGary and NATO, highly sophisticated, clandestine attackers—the kind with the rarefied expertise, deep pockets and specialized resources typically only seen in nation-state adversaries—were actively infiltrating a broad range of targets.

These attacks were different: they were patient, stealthy and leveraged a potent combination of technical skill and social savvy.  Some used clever social engineering to get a foothold into their target organizations, while others used zero-day vulnerabilities—previously unknown holes in software—to penetrate defenses. 

While advanced attacks have happened for years, IT security experts observed that recent attacks had grown bolder and more frequent. They were also highly targeted, customized, well researched and, in many cases, employed both technical and social components.

The term used to describe such complex, sophisticated attacks was “advanced persistent threats” (APTs), but as IT security experts quickly pointed out, APTs are only as advanced as they need to be to get the job done. A concrete definition is elusive and, as one expert cautioned, “Defining it could limit us and lead us to be blindsided. We need to constantly revisit the characteristics because they’re always changing.”

Much of the discussion focused on the techniques of highly organized attackers. Such advanced threats, which include APTs, span from corporate espionage to hacktivism.

This article distills key insights from those discussions and aspires to advance the industry’s dialog on advanced threats, spur disruptive innovation and share what we have learned from some of the most seasoned professionals in information security.

From a Cookie-Cutter Approach to Adaptive

In 2000, the ILOVEYOU worm crippled more than 50 million PCs. The delivery mechanism was simple but effective: an e-mail showed up in your inbox with a subject line of “ILOVEYOU.” When people clicked on the e-mail’s attachment, titled “LOVE-LETTER-FOR-YOU,” they were infected with a computer worm. While the damage was significant, a partial solution to this problem came in the form of antivirus software: a signature could be deployed to antivirus agents that would identify the file as malicious and arrest its actions.
Today, generic malware is still profuse, but signature-based defenses, at either the network or host layer, can greatly decrease the odds of infection. What makes recent advanced threats different is that they defy signatures. In the world of advanced threats, malware evolves quickly, and security experts have described several cases of special-purpose malware custom-developed specifically for their targets. Some were compiled within hours of launching the attack.

It became clear that enterprises targeted by highly organized attackers cannot depend on signature-based “bread and butter” security tools as a sole means of defense. While the payloads of some advanced threats were fairly standard, entry strategies were often custom-tailored.

Attackers typically used social networking sites to gather intelligence and identify specific users within an organization. Some of the main infection vectors cited were e-mail, Skype and instant messages, with malware payloads in the form of PDFs, compressed HTML, script files, executables and attachments. Customization of attack techniques extends all the way through data exfiltration.

Advanced threats often use sophisticated methods for compressing, encrypting and transmitting data to other compromised organizations, leaving little evidence of the origin of the attack or the destination for stolen information. This move from generic to tailored, from cookie-cutter to adaptive, means that security organizations need to think beyond signatures and re-evaluate how effective their current defenses are.

Remember that people, not technology, are the Achilles’ heel of most defensive strategies.

People are the Weakest Link

“People are the weakest link” is perhaps the biggest cliché in information security. Security experts have long understood that users make bad choices, click on links they shouldn’t and install malware through simple ruses.

Corporate IT departments deploy multiple controls to help deal with this threat: e-mail filtering solutions catch many attacks before they make it to users, malicious links are blocked by the network, network scanners look for malicious content, and host-based antivirus (the last line of defense) tries to stop what slips through the cracks.

This process works well for generic, shotgun attacks in that signatures can be updated quickly to immunize users. Advanced attackers, however, are now creating highly credible scenarios in which they convince users to click on dialog boxes warning of fake software updates, retrieve content from quarantined areas and act (unknowingly) on behalf of the attacker.

Attackers have become dangerously adroit at using our weaknesses and behaviors against us, creatively leveraging people inside the company to help accomplish their goals. “Internet scams are supposed to be sloppy, but they work.”

Advanced threats defy that stereotype. Experts put a fine  point on it: “The perimeter is not a firewall; it’s our users. They don’t treat their computer as an adversary; they treat it as a tool—an extension of themselves—so they don’t question what it tells them.”

Addressing the people problem will take more than technology. Organizations need to
drive a sense of personal responsibility for security among employees.

Attackers Aim for Advantage, Not Infamy

Advanced attacks are typically not the product of hobbyists. These attacks often require months of planning, mapping out internal networks by looking at the fringes.

The reconnaissance can go much further: targeting key employees, deconstructing their life by scouring social media, custom-crafting an attack so that it is stealthy, patient, and very effective.

Cybercriminals, the ones who look to steal credit card numbers and other
commoditized and sellable data, have become increasingly sophisticated but advanced
attacks are different. Increasingly, they focus on espionage—stealing specialized data
that may be of high value and strategic importance to the commissioning entity, which
can be foreign governments, rival corporations and organized crime groups. The entities behind advanced attacks literally mean business.

Also, the entities perpetrating many advanced attacks are substantively different from the hacktivist groups that have attracted attention in recent times. Hacktivists want to embarrass their targets and expose their activities, taking pride in publishing their conquests.

Many advanced attackers, in contrast, have the goal of stealth. They do not want to be
discovered or seek publicity.

Some advanced threats are now masquerading as hacktivist attacks, with the goal of confusing forensics and placing blame on groups that are often eager to accept it. This pattern makes it difficult to gauge the scale of advanced threats: a willing scapegoat makes post-incident attribution particularly problematic.

The New Normal: Act as Though You Are Already Hacked  

The events of the year have shown that determined adversaries can always find exploits through people and in complex IT environments. It’s not realistic to keep
adversaries out. Organizations should plan and act as though they have already been breached.

Three foundational principles of security are compartmentalization, defense in depth and least privilege. In combination, these three tenets dictate that if one system (or person) is compromised, it should not result in a compromise of the entire system.

While simple in concept, these tenets have proven complicated to implement in practice. Organizations have long relied on the notion of a “perimeter,” where a big thick wall—in the form of firewalls and gateway defenses—guards the organization, with good guys (insiders) on one side of the wall and attackers on the other.

Security perimeters are now considered a construct of the past. Boundaries are nearly impossible to define in modern organizations. The inclusion of partially trusted users such as customers, suppliers, contractors, service providers, cloud vendors and others has made organizational boundaries very porous. Beyond the eradication of traditional organizational boundaries, the consumerization of IT has brought a rash of unmanaged devices into the enterprise and exposed the organization to services (and suppliers) that are opaque.

IT consumerization has also blurred the line between the business lives and
the personal lives of employees. We have moved from the illusion of a perimeter-driven defense to living in a state of compromise.

Accepting that some systems, some people, and some services may already be under the control of attackers changes information security strategy. It forces a return to the core principles of compartmentalization, defense-in-depth, and least privilege.

Organizations need to focus on closing the exposure window and limiting damage through efforts to compartmentalize systems, stop sensitive data egress and contain malfeasance. This new model also demands that we rethink old habits of sharing sensitive corporate information—such as source code, product plans and strategic roadmaps—using collaborative processes that presume perimeter defenses can keep attackers out.

Security improves through greater situational awareness: gaining the ability to
understand what’s happening beyond our network boundaries to detect threats on the horizon. Organizations get smarter by looking beyond their infrastructure and observing  the ecosystem. The ecosystem approach to security relies on organizations actively sharing information with other organizations about threats. It also demands greater visibility into the security of suppliers and service providers within one’s supply chain.

The key is to know what digital assets are important to protect, where they reside, who
has access to them and how to lock them down in the event of a breach. This ability to
tighten the net before and during an attack is key, and it requires a mature process for
incident handling. Incident response should not be considered exclusively a security
function. Instead, it is an organizational competency that must be developed and
continually honed well before an attack occurs. If organizations are planning responses
as an attack unfolds, they are too late. A competency approach allows remediation
activities to kick in automatically—like a reflex.

The Road Ahead

The reality of advanced threats demands a disruptive approach to defense—one where
enterprises can be agile and thrive in a contested environment. This approach must be
applied holistically: approaching advanced threat defense not as a discrete function but
as a natural consequence of robust but agile security.

Many of the holes that exist today come from an unmanageably complex IT infrastructure. Given that information security is a “weakest link” problem, only through understanding our assets, processes and endpoints do we have a chance at effective defense. Unraveling complexity and fielding a successful defense means that we also need to think creatively about the range of attacker motivations, which can extend far beyond data theft.

With every new technology, we have the ability to weave security into its fabric, to begin anew. We are at the start of an industry-wide move to cloud-based services and systems, on the cusp of a sea change in technology. There is a new mantra going around the industry: “If we can’t get it right with cloud, shame on us.”

Today more than ever, security is an ecosystem problem in which every constituent has a responsibility. Attackers are collaborating, sharing information, going after the supply chain, co-opting careless insiders and evading our long-relied-upon defenses. We need disruptive collaboration and innovation in defense. Through collaboration, information sharing and increasing our agility, we can successfully fend off APTs and other advanced threats.

Happy Holidays! And a blessed new year to all!

 

Solving Google Chrome’s High CPU Usage

Most of us know that Google Chrome is the fastest browser today. It loads fast, it starts fast and it really navigates fast. 

But how many know that Chrome uses so much of the processor’s resources that on PCs with lower-powered processors, everything suddenly slows to a crawl?

My laptop has a dual-core processor and 2 GB of RAM (the standard off-the-shelf package). The only difference is that I use this laptop as a network management and monitoring unit. I’ve installed a processor and RAM monitoring facility (naaah, I am not using Windows gadgets, which consume lots of RAM); instead, I am using Rainmeter and the necessary skins.

What I’ve noticed is that when I was using Mozilla Firefox, CPU utilization ranged from only 8-15% at about 45% RAM usage. Neither CPU nor RAM usage went up when using multiple tabs in Firefox.

Recently, I tried Chrome, and with the same browsing behavior, I noticed that my CPU and RAM usage spiked even when I was confined to a single tab. Worse, it increased further once I had at least three tabs open.
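If you want to measure this yourself, rather than eyeballing a desktop gadget, here is a minimal sketch, assuming the third-party psutil package is installed, that adds up CPU and RAM usage across all Chrome processes over a one-second sample.

    import time
    import psutil  # third-party package: pip install psutil

    def chrome_processes():
        """Yield every running process whose name looks like Chrome."""
        for proc in psutil.process_iter(["name"]):
            name = (proc.info["name"] or "").lower()
            if "chrome" in name:
                yield proc

    def measure(interval=1.0):
        """Total CPU and RAM percentages used by Chrome over one sample interval."""
        procs = list(chrome_processes())
        for p in procs:
            p.cpu_percent(None)        # prime the per-process CPU counters
        time.sleep(interval)
        cpu = ram = 0.0
        for p in procs:
            try:
                cpu += p.cpu_percent(None)
                ram += p.memory_percent()
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass                   # a tab/process may have vanished mid-sample
        return cpu, ram

    if __name__ == "__main__":
        cpu, ram = measure()
        print(f"Chrome total: {cpu:.1f}% CPU, {ram:.1f}% RAM")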

As I am writing this blog article using Chrome, I am only using 6% of my CPU and 40% of my RAM. Yes, I have three open tabs and I have multiple processes running in the background.

The secret?

I found out that Chrome, in its default installation, protects users from phishing and malware in real time. Sure, that sounds like a small deal, but if it consumes 80% of your CPU and 80% of your RAM, it’s definitely a big deal.

What I did was simply disable this functionality under Options – Under the Bonnet. Right there and then, my CPU usage went down from 60% to 14%, my RAM usage went down from 70% to 45%, and everything was back to normal.

Of course, I am not saying you should browse without phishing and malware protection entirely, because it is important.

But it surely works. I am also attaching a screenshot of my resource monitoring results here.

-ViZ-

Five Myths on Tape Storage

Technological change is part of life, and that is true whether you are chillaxing in your living room or doing your daily battles in the data center at the office. Machines and devices come and go, capabilities change, usually for the better, and technology seems to get smaller and faster.

With this in mind, using tape storage for your business data can feel like being in Jurassic Park. After all, the tape products that used to surround us, like video and cassette tapes, have already disappeared, replaced by optical disks, flash and USB devices that offer large capacities, ease of use and better reliability.

But our living rooms and the data center are two different places. No doubt vendors now push disk storage that is bigger and faster, but of course, they are selling technology. The reality is that tape storage is STILL A KEY COMPONENT IN AN ENTERPRISE DATA CENTER. I’ve seen tape still in use in today’s large enterprise data centers, where it remains a key component of backup strategies, and there is no doubt that tape still runs in the biggest data centers today. I think this will hold true well into the future.

But myths still surround the use of tape. My early IT years were spent managing tapes, so I can claim some authority on this matter. If you look closely at how these technologies work, these myths do not hold up.

MYTH #1: TAPE IS MORE EXPENSIVE THAN DISK

On acquisition cost alone, tape costs less per gigabyte of data than disk. And if a company decides to adopt tape, or is already using it, expanding capacity costs less with tape than with disk. In my last acquisition, one 100 GB tape cost around 800 pesos. Try looking at how much a disk of the same capacity costs per gigabyte; a quick comparison is sketched below.
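As a back-of-the-envelope illustration, the arithmetic looks like this. Only the 800-peso / 100 GB tape figure comes from my own purchase; the disk price used here is a hypothetical placeholder, so plug in a current quote.

    # Cost-per-gigabyte comparison. The tape figure is from my last purchase;
    # the disk price below is a HYPOTHETICAL placeholder -- substitute a real quote.
    tape_price_php, tape_capacity_gb = 800, 100      # one 100 GB tape cartridge
    disk_price_php, disk_capacity_gb = 4500, 100     # hypothetical disk of equal capacity

    tape_cost_per_gb = tape_price_php / tape_capacity_gb   # 8 pesos per GB
    disk_cost_per_gb = disk_price_php / disk_capacity_gb   # 45 pesos per GB (illustrative)

    print(f"Tape: {tape_cost_per_gb:.0f} PHP/GB  vs  disk: {disk_cost_per_gb:.0f} PHP/GB")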

Tape also costs less to operate than disk in terms of energy. Since disk arrays run with multiple processors and several high-capacity power supplies, tape drives consume only a fraction of the energy.

Tape libraries are also highly scalable at lower cost. Compare the cost of running a tape jukebox with that of running an extensive RAID array.

MYTH #2: TAPE IS CHEAPER TO BUY BUT MORE EXPENSIVE TO OPERATE

I will not go into much detail on this, but everything boils down to TCO (total cost of ownership), which includes the purchase price, operating costs, the man-hours needed to operate it, maintenance costs and other factors.

MYTH #3: TAPE IS GONE… NO ONE IS USING IT ANYMORE.

Data center managers are a conservative lot, and they do not want to be the last ones standing with a technology the rest of the world has left in the dust. That is not a worry with tape storage, because most of the biggest enterprises in the Philippines (and even globally) use some form of tiered storage strategy with tape as the foundation layer where most of their data resides.

Banks have a lot of regulations they need to follow regarding data storage, availability and security. And surprisingly, most top banks in the Philippines and all 10 of the world’s largest banks rely on… get this… tape storage products, specifically from IBM and Oracle. The same can be said for the three biggest telecommunications companies and most of the largest pharma firms here in the Philippines.

MYTH #4: TAPE IS UNRELIABLE

That may hold true for our cassette and VHS tapes, where everything could be swallowed and destroyed by a dirty tape path. But in terms of data center use, tape is more reliable than disk storage. We have experienced several NAS disks going down, leaving users clueless about what happened and with no trace of their data. I have not yet heard of a tape drive or tape backup failing in the same way. But let us assume you want to move everything to disk and drop your tape storage completely. You need a large RAID configuration, and you are going to have to pay for a RAID controller. DISKS ARE KNOWN TO FAIL, and so are these controllers. In fact, 85% of our data recovery clients come to us because of failures in disks and their controllers.

MYTH #5: TAPE IS A BIGGER SECURITY ISSUE THAN DISKS

Yes, tapes are movable and portable. But there is such a thing as “tape encryption”, which works much like “disk encryption”. Tape encryption is transparent, works at the tape-drive level itself and runs at full speed without degradation. An encrypted tape is useless to someone who steals it. Even more, a stolen tape is useless to someone without a tape drive… Compare that to a disk, which can be configured to run almost anywhere.

Conclusion:

Disk storage is important, no doubt about it, but beware of vendors selling exclusively disk based solutions. There is a lot of misinformation out there that paints tape technology as inferior to disk and as a last generation solution on the verge of extinction.

Tape and disk storage systems can and should co-exist in a tiered storage strategy that uses the less expensive tape tier for tasks like long-term backup and archiving. If you are looking to add storage capacity to your enterprise data center, make sure you know the facts and are wary of vendors who lack a complete portfolio and sell only disk.

Just because tape storage technologies disappeared from your living room does not mean they should do the same in your data center.

So let us visit the museum and see whether any tapes and tape drives are on display… Chances are, we won’t find any. Not in the next few years, at least.

 

Disaster Recovery: Maximize IT Protection and Minimize Downtime

We all define disasters differently. Some companies view them in terms of disruptions caused by accidental or malicious data loss or damage, some as disruptions due to power loss or other natural calamities, some in terms of equipment and system failures, while others view them as nothing at all, ignoring these causes of disruption because they feel they do not need to prepare or that their business is indestructible.

No matter how we view disasters, we all know that today we need a strategy that provides protection, fast recovery and high availability for these systems, so that a disruption has minimal or no effect on the business as a whole.

Basic backup does not cut it. You need effective ways to keep your employees working, keep communications running, and maintain business as usual during a disaster.

This area of IT and of the business deserves a close look.