Simple Is Better

Enterprises and businesses often assume that having something their competitors do not have makes them better. That may hold true for products and services, but for IT infrastructure it is not always the case.

We have seen infrastructures ranging from the most ordinary to the most extensive and complex, but everything boils down to two things: how well the setup serves the needs of the business, and how manageable it is.

The trouble is that infrastructure architects tend to make everything too complex, complicating the management of these infrastructures and even the lives of the users themselves. In the end it all boils down to productivity, yet these architects seem to believe that complexity equals competitiveness and productivity, along with that internal feeling of accomplishment that comes from running a “high-tech” infrastructure.

But think about this:

– Do these architects use the infrastructure on a daily basis?

– If something conks out, how soon can the infrastructure recover, and how does the downtime affect the business's daily operations?

Of course, it is not just the infrastructure we are talking about. It also includes all the internal and external processes and procedures that relate to the infrastructure as a whole.

Technology should not be too complex. It is supposed to be a tool that makes everything easier.

Linux vs. Mac OS X

I was not planning to write about the most popular desktop and mobile operating systems today, but I was asked a question the other day…

“If Windows gets that many security breach attempts, then I would say better use MacOS instead of Linux, for Linux's popularity would make it the hackers' next nest…” My reaction? None at first. But after a minute or so, I asked the person I was talking to a question.

“What are you after? Better security? Stability? The number of applications it can run?” He could not answer at first, but later said, “Uhhrm, for daily use only, nothing special, no applications, just browsing the Internet and a bit of e-mail.” I smiled, knowing that this guy is obviously one of those people more concerned with their physical image, with how they look while surfing inside Starbucks: an iced mocha in one hand, a MacBook in front of him, and his iPhone 4 beside him.

Being a long-time Linux user and sort of a fan, I'll make no bones about where my preference lies, and I think the success of the Mac is mostly a matter of marketing. Whatever your own beliefs, though, there is no denying that there are certain things Linux clearly does better than Mac OS X. If you are trying to decide on a platform for your business, these factors are worth keeping in mind.

1. Hardware

Hand in hand with the question of flexibility is the fact that OS X, like Windows, is very restrictive about the hardware it will work with, requiring pretty much the latest and greatest to run well. Try it on anything less, and you will pay the price.

One of Linux's most endearing virtues, on the other hand, is its ability to run on just about anything. In fact, there are even Linux distributions (“distros” for short; think CentOS, Ubuntu, Debian, SUSE) designed for really limited computing environments, such as Puppy Linux and Damn Small Linux.

With OS X, Apple tells you what hardware you must have; with Linux, you tell it what you’ve got and go from there.

2. Cost

Of course, if Mac OS X is that choosy about hardware, you have to go out and buy yourself a Mac, which commands a premium and may cost more than the usual Windows-based machine. Linux (well, depending on the distro you choose) runs on practically all hardware and is free of charge. All you need is a stable, fast Internet connection to download it, and you will do fine.

Sure, there are proprietary vendors who will try to convince you that Linux's long-term total cost of ownership is higher. That, however, is just a myth. For one thing, as I have previously noted, such arguments typically do not factor in the cost of being locked in to a particular vendor.

There are also numerous studies confirming Linux’s cost advantages. Then, of course, there’s all the anecdotal evidence in the form of governments and organizations around the globe turning to Linux in growing numbers every day.

3. Customizability

I understand that there are people who are willing to live in a cage and are genuinely content doing things the way Apple dictates. For me, those restrictions are unacceptable.

With Linux, everything can be customized and configured to behave exactly the way you want. There are several desktop environments to choose from, such as KDE and GNOME, among others.

4. Security

The main reason Mac OS X enjoys a reputation for security is its smaller installed base: hackers do not really care about it. When it comes to actual security design, however, Linux leaves it behind.

For one, Linux users are not automatically given administrator privileges on their computers, so viruses and malware do not get automatic access to everything on the machine. When a computer is compromised, the most the attacker can usually do is delete that user's local files and programs, not wreck the entire system.
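
To make that concrete, here is a minimal sketch (assuming an ordinary, non-root Linux account) of what happens when a compromised user process tries to touch a system file:

    import os

    # On a stock Linux desktop you run as an unprivileged user, so a
    # write outside your home directory fails at the kernel's permission check.
    try:
        with open("/etc/hosts", "a") as f:
            f.write("# malware was here\n")
    except PermissionError:
        print(f"Denied: uid {os.geteuid()} cannot write to /etc")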

With Mac OS X, as with Windows, social engineering is very easy: just convince the user to click on something and BOOM! Where did everything go?

Apple also tries to protect users by keeping the innards of its operating system secret and out of view, even more aggressively than Microsoft does with Windows. The only people who can watch for vulnerabilities are… Apple's own engineers and developers.

With Linux, on the other hand, there is a world of users examining the code every day. No wonder, then, that Linux vulnerabilities can be found and fixed more quickly.

5. Ease of Use

Maybe I should have left this last item out, but personally I find Mac OS X very hard to use. Kinda strange for a computer techie like me, I know.

Again, to the person I was talking to the other day: swallow this.

Migrating Your Existing Analog PABX Solution to VoIP?

Technology changes very rapidly, so suppose an SMB has an existing analog PABX system: does it need an upgrade? The benefits of VoIP are very clear, and the upgrade procedure, if done properly, is not that scary. But consider these steps first:

a) Set a reasonable budget.

b) Identify what drives the need for an upgrade.

c) Answer the following questions:

  • Do you need to network remote locations to headquarters?
  • How much intra-company calling is occurring between locations? For example, is there a benefit to transferring customers, vendors or other parties between various locations?
  • Do you have locations and/or clients outside the Philippines? Do you have workers that routinely travel between locations?
  • Do you have remote workers (both permanently remote and “road warriors”)?
  • Are there special applications (call center, integrated voicemail, unified communications) driving the move to VoIP?
  • Is the telephony staff/IT staff administering multiple PBXs in multiple locations?

d) Do you need a single-vendor or multi-vendor solution? There are pros and cons to each. While the multiple-vendor approach may provide a superior experience for users, sorting out responsibilities and accountability between vendors when a problem hits can diminish or completely negate the value of the approach. Good contracts and savvy management can make it work, but the single-vendor approach may carry lower management costs and simplify the resolution of any issues.

To help you make the right decision, create a list of “must haves” and draft a “good, better, best” approach to evaluating each vendor’s ability to meet your needs. By establishing a clear understanding of why your company is moving to VoIP you will ensure you choose the best approach and vendors to drive the overall success of the project.

The Implementation Stage

SMBs that are starting from scratch with VoIP should consider a network assessment to ensure the infrastructure can support the new technology. Examine connectivity to the wide area network (WAN), the Internet, and the public switched telephone network (PSTN) to determine whether the infrastructure can handle VoIP. Here are some key considerations (a back-of-envelope bandwidth sketch follows the checklist):

  • Wiring: CAT5 or better is preferred
  • Switches: Do you have Power over Ethernet (PoE)? Are the switches Layer 3?
  • WAN: Does the WAN support voice traffic? Is the WAN managed or unmanaged?
  • PSTN: How many lines and what type of lines are at each location?
  • Power: VoIP phones require power, as opposed to a digital set that gets power from the cabinet. Power can be supplied by local power, power injectors or PoE switches. What power solution will work best for the business environment?
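
To put rough numbers on the WAN question above, here is a back-of-envelope sizing sketch in Python. It assumes the common G.711 codec at 20 ms packetization (a 160-byte voice payload plus 40 bytes of IP/UDP/RTP headers, 50 packets per second); treat it as an estimate, not a substitute for a real assessment:

    # Rough per-call bandwidth for G.711 voice over a WAN link.
    PAYLOAD_BYTES = 160   # voice payload per packet at 20 ms packetization
    HEADER_BYTES = 40     # IP (20) + UDP (8) + RTP (12)
    PACKETS_PER_SEC = 50  # 1000 ms / 20 ms

    kbps_per_call = (PAYLOAD_BYTES + HEADER_BYTES) * 8 * PACKETS_PER_SEC / 1000  # 80 kbps
    concurrent_calls = 30  # assumption: size this to your own busy-hour peak

    needed_kbps = kbps_per_call * concurrent_calls
    print(f"~{kbps_per_call:.0f} kbps per call, ~{needed_kbps / 1000:.1f} Mbps for {concurrent_calls} calls")

Note that layer-2 overhead (Ethernet framing, VPN encapsulation) adds more on top of that 80 kbps figure, which is why planning figures of roughly 87 to 100 kbps per G.711 call are common.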

Once you’ve established a business plan and the infrastructure is ready, you are nearly finished. Since you can never be too prepared, here are some common pitfalls and best practices to incorporate into your approach:

  • Prepare for updates: Although there may be fewer phone systems to maintain, software and hardware updates come more frequently with VoIP than with time-division multiplexing (TDM). Organizations must plan for this and work around it to keep systems up to date with minimal downtime.
  • Plan ahead: Without proper planning and implementation, end-user adoption can suffer. Consider the time of year most conducive to a successful rollout; for example, not tax season for accountants or the holiday season for retail businesses. Thoughtful, accurate timing helps businesses avoid unforeseen issues.
  • Take it slow: Businesses often assume they must implement every aspect of VoIP at the same time. It is usually better to take a phased approach, rolling out enhancements, features, and functions to the end-user community gradually. Identify key areas and start there first.
  • Test, Test, Test: Test and prioritize the network up front so you know whether you have enough bandwidth for your traffic; this ensures the quality of service from your VoIP meets your expectations (a crude delay and jitter probe is sketched after this list). Next, while deploying the system, be sure to test all locations before putting them into production. Lastly, document your network and system settings carefully for future deployments and troubleshooting.
  • Communicate with end users: Establish proactive communication with employees both before and after the deployment. This prepares them for the change and encourages them to accept it as a benefit to themselves as well as the company. It is also important to provide sufficient training and support for questions and concerns; proper training leads to higher end-user acceptance and lower long-term support costs.
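
On the “Test, Test, Test” point, delay and jitter matter as much as raw bandwidth; common rules of thumb call for under roughly 150 ms of one-way delay and under roughly 30 ms of jitter for good voice quality. Below is a crude probe, a sketch only: it measures TCP connect times to a host you choose (the hostname here is a placeholder) rather than real RTP media, so treat the output as a sanity check, not a VoIP assessment:

    import socket
    import statistics
    import time

    def probe(host, port=443, samples=20):
        """Measure TCP connect round-trips as a crude delay/jitter proxy."""
        rtts = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=2):
                pass  # connect, then close immediately
            rtts.append((time.perf_counter() - start) * 1000)  # milliseconds
            time.sleep(0.1)
        return statistics.mean(rtts), statistics.pstdev(rtts)

    avg_ms, jitter_ms = probe("example.com")  # placeholder: point at your SIP provider
    print(f"average {avg_ms:.1f} ms, jitter ~{jitter_ms:.1f} ms")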

In the end, change is good, and inevitable. With proper planning, testing, and a business-driven approach, the move to VoIP saves money on local and international calling as well as on cabling costs for new buildings. Further, switching to VoIP lets the same staff handle network, data, and telephone setup and maintenance, saving time and headaches. If your business has not already made the transition, think about it.

Open Source Software and IT Security

Open Source Software, or OSS, is computer software whose source code is available to the general public with relaxed or non-existent intellectual-property restrictions (or under an arrangement such as the public domain), and it is usually developed with the input of many contributors.

This type of development also brings the expertise of multiple persons to bear on the design and architecture of a software program, making it robust and capable of doing the job for which it is being designed.

The openly viewable nature of a program's source means that when possible problems are found, they can be quickly addressed and adapted, under the scrutiny of more than one company or programming team.

When choosing between proprietary and open source security solutions, many organizations are misled by open source myths. As a result, they ask the wrong questions when evaluating their options and unnecessarily limit their IT solutions.

Is it risky to trust mission-critical infrastructure to open source software? Why should we pay an open source vendor when open source is supposed to be free? Will a shift to open source add complexity to our IT infrastructure?

These questions all arise from open source myths that this blog article will explain and dispel, allowing IT decision makers to focus on more important organizational issues: return on investment, ease of use, agility, reliability, and control.

First Myth: Open Source Software Is Too Risky for IT Security

Many IT decision makers have a knee-jerk reaction to OSS, especially when it comes to security. They believe OSS is most appropriate for do-it-yourself technology geeks working in their basements. It might be fine for a company with an obsessive technology savant on staff, but for the rest of us, OSS is unproven, complex, and risky.

This is a myth. The fact is that OSS is already an integral part of enterprise network infrastructures. A recent Network World magazine article looked at the state of open source adoption within the enterprise and found it pervasive. It states, “Most of the packaged security appliances for everything from firewalls to security information management are built on the same BSD Unix and Linux distributions as the application servers you build yourself.”
A recent Forrester Research report further argued that enterprises should seriously consider open source options for mission-critical infrastructure. “Although fewer than half of the large enterprises in Europe and North America are actively using or piloting open source software, a majority of those are using it for mission-critical applications and infrastructure,” the report said.

Second Myth: Open Source is Free

Another myth is that open source is free of charge and that, as such, generic open source implementations can save thousands of dollars. A common question open source vendors face every day is, “Why should we pay for something we can download for free?”

Certainly, OSS can be downloaded for free, but that is where “free” begins and ends. There are certainly other advantages to OSS, such as strong community support, continuous upgrades, and the ongoing improvement of projects by those using them. All of these advantages are technically free to any user, but someone must manage, evaluate, and then support whatever open source product your organization adopts.

If your organization would rather concoct its own OSS security suite from scratch, it is possible to do so; however, be prepared to invest vast amounts of IT capital in the effort. A company must not only install and configure the individual projects; blending multiple projects together so that they interoperate in harmony, and stay maintainable through security patches and other upgrades, is a vastly complex task.
For example, while installing an intrusion detection (IDS) component alongside a VPN solution on the same platform is technically possible, it takes a highly detailed understanding of many different factors to ensure the traffic is processed properly: the VPN tunnel traffic must first be decrypted and then run through the IDS engine, ensuring that encrypted traffic carried by the tunnel does not smuggle in malicious payloads. Making things operate together is an essential part of deploying an effective security system.
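
The ordering is the whole point, and a tiny sketch makes it plain. Everything here (the vpn and ids objects and their methods) is hypothetical; it only illustrates that decryption must happen before inspection:

    def handle_packet(packet, vpn, ids):
        # Decrypt first: if the IDS only ever sees ciphertext, a
        # malicious payload riding inside the tunnel slips past it.
        if vpn.is_tunnel_traffic(packet):
            packet = vpn.decrypt(packet)
        # Only then inspect the cleartext.
        if ids.inspect(packet).is_malicious:
            return None   # drop
        return packet     # forward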

A final issue is accountability. If homespun open source security fails, who is to blame? Is it the software itself? Perhaps, but what if it was configured improperly? Is it some other product within your infrastructure that has created a conflict? It is possible, but you will need to search through bulletin boards or wait for an expert within the community to answer the question you post to find out. Is it your own IT staff member who managed the project? After all, that is exactly the person you will have to ask to get an answer. As the cliché states, you get what you pay for.

Third Myth: Open Source Vendors Add Little Value to OSS Projects

There is sometimes a perception that paying for open source-based products is a waste of money, since the projects a company bases its product on can be acquired for free (see Myth #2), and that companies attempting to commercialize OSS do not add anything substantial enough to justify the prices they charge. Some also question the legality of charging money for products based on the work of others.

This myth is partially based on a common misunderstanding of open source licenses. Under the most common open source license, the GPL, vendors are free to distribute and sell OSS as long as they follow the rules of the license and add value. In various products, vendors not only harness existing projects and code bases to build their solutions, but also contribute back to the community in the form of features, performance improvements, financial support, and more. This further strengthens the community, which benefits from the commercialization and can continue to evolve. Examples are the many versions of Linux and the Apache web server.

Companies that commercialize open source software and add value, such as documentation, guides, interfaces, and interoperability, create what are known as “mixed source” or “hybrid” solutions: a blend of open source and proprietary components. These solutions give customers the best of both worlds; they are based on a solid open source foundation while also offering support, documentation, QA testing, and upgrades. That final level of polish makes the solution stable, manageable, and realistically deployable at far more companies than an open-source-only offering.

Fourth Myth: Proprietary Solutions Are More Reliable than OSS

As mentioned at the start of this blog, closed-source proponents like to call the reliability and dependability of OSS into question. But if today's security solutions, open source and proprietary alike, start from the same Linux or Apache foundations, then those tasked with securing the world's networks have already rejected that premise. If security experts trust open source, why shouldn't you?

Proprietary solutions do present many advantages, such as technical support, training, pushed updates, integration via APIs, and innovative GUIs. Today, however, mixed-source vendors are adding these same advantages to lower-cost OSS alternatives. Add to that the fact that the open source community actively resists much of what customers dislike about proprietary solutions: vendor lock-in, high initial costs, lack of feature upgrades and additions, and escalating maintenance contracts.

Open-source licenses discourage the kind of secrecy that has plagued proprietary software for decades, secrecy that has led to vulnerabilities and to the inability to enhance or customize the software. When something goes wrong in an open source security project, distributors cannot deny, hide, or downplay the issue; the OSS community actively polices itself and discourages anything other than openness.

Fifth Myth: OSS and its security is too complex for SMEs

There is some truth to this myth. Projects like Snort (a popular open-source intrusion detection project) are certainly designed with expert users in mind, and they may work poorly (or not at all) if users are not familiar with their approach and implementation options.

Even if implemented correctly, an end-user must then ensure the program remains updated and continues to work correctly with the rest of the network security programs that are deployed. Fortunately, more and more software vendors are adapting open source projects to the demands of the market, making them very flexible and capable of being deployed in diverse network scenarios with ever-increasing ease.

There is also a second myth at work here: that proprietary solutions are somehow simple. They are not, and the idea that open source is an all-or-nothing commitment is equally false. Proprietary vendors who follow the traditional shrink-wrap model work a lot harder to lock customers into their product lines. They will not guarantee interoperability with competing products, and for the features they do not offer, they will point you to equally expensive partner solutions. Moreover, many users of proprietary solutions are bemoaning a new data center problem: appliance overload. Even proprietary vendors with a broad range of security offerings tend to deliver them as separate, standalone products. These many layers add cost and complexity to the IT infrastructure, and they present multiple points of failure that could undermine security if even one appliance is misconfigured or out of date.

With mixed-source solutions, your organization can put together a best-in-class security lineup without the associated costs or complexity. You also gain the flexibility to change your security posture as you see fit, without fear of breaking contracts, voiding warranties, worrying about interoperability, or throwing away existing investments by being forced to abandon legacy products that still work perfectly well.

Conclusion

At the root of a myth there usually exists some level of truth, or a situation that caused and then propagated the myth. That kernel of truth is then twisted, diluted, and often lost among incorrect opinions and common misconceptions.

Open source is steeped in history and capability, yet it remains daunting to those who have not been educated in this exciting area of development. This massive community has created some truly remarkable tools; however, it continually faces mixed reactions to the adoption of its ideas and projects. That is mostly because the community focuses more on creation than on marketing, so end-user awareness suffers.

Mixed-source security solutions give customers the best of both worlds: the low cost and reliability of open source, plus the technical support, training, and user-friendly interfaces of proprietary products. These are no longer just tools for the gifted.

Choosing The Right Application Software

Last night, someone asked me, “If you are to choose, would you choose a software package that comes off the shelf? Or would you want it developed from scratch?”

I paused, then gave my answer: I would still go for development from scratch.

But in reality, there are many considerations to be made. Drawing on almost 17 years of implementing software and infrastructure solutions in various companies, here is what, based on my experience, should be considered:

a) Time, and how urgently the application needs to be running.

This really matters. Is the application urgently needed? Does its absence hit your business operations, costing you money each day it is not there? Does it hurt your productivity? And if the application were in place, would it help your business even without customizations, simply running out of the box at first?

If so, then another question follows: how long will it take for the base application to be installed and configured?

b) Does the packaged application support more than 70% of your business processes?

If it does, then it will probably work. What you may need are some additions, customizations, or reporting interfaces to cover the remaining 30% or so of unsupported business processes. If it supports only 50% or less, forget it; you will have nothing but headaches trying to make it work.

c) For a software package, are the source code and the systems design and logic documentation available so you can do the customizations in-house?

If they are, good for you. You will have plenty of options for customizing: do it with your own team, hire contract programmers, or outsource everything.

But here is the bad news. MOST SOFTWARE PACKAGES DO NOT COME WITH THE SOURCE CODE OR THE DOCUMENTATION. The developer or the software vendor keeps them as the ACE UP THEIR SLEEVE. To purchase the code and documentation, YOU MAY SHELL OUT MORE MOOLAH, often costing MORE THAN THE actual software package.

d) How about support? Is it available?

More often than not, YES, it is available, but at a very STEEP PRICE. Most support from the software provider is billed on a PER-HOUR BASIS, and if you sum it all up, IT CAN COST AROUND FOUR TO FIVE TIMES THE PRICE OF THE ORIGINAL SOFTWARE PACKAGE.

Another support issue is location. Is the support team based where you hold your business, or in another country? If they are in another country, ARE YOU EVEN IN THE SAME TIMEZONE?

I have had businesses complain that their support comes from the States and it takes a day to get a response. Often, people from their internal support team have to stay overnight, adding expenses for allowances and overtime, just to talk to their application support in the States.

So where does it leave you?

As I stated above, if the application is not so critical that the business loses money each day it is absent, it is much better to have it DEVELOPED FROM SCRATCH. Why?

a) The business can define what it needs and how it needs it from the time the project is conceived and planned. That alone will save you a lot. ONE WORD OF CAUTION, THOUGH: MAKE THE REQUIREMENTS DEFINITION AS COMPLETE AND AS CLOSE TO WHAT YOU NEED AS POSSIBLE THE FIRST TIME AROUND.

Work closely with your business analysts, systems analysts, and project managers on this. It will be costly if you forget to define something you will need later, and both the analysis-and-design phase and the programming phase will suffer for it.

b) Do the development in-house, or through local outsourcing firms based where you operate your business. Why? To resolve support issues quickly. If the application has bugs or goes down, you can expect someone to attend to your issues immediately.

BUT ANOTHER WORD OF CAUTION. Studies have shown it is more cost-efficient to have an application developed by an outsourcing firm, because most outsourcing firms price by the package of work, not per hour. Developing internally with employees means hiring more people and paying more in salaries and benefits, while hiring in-house contract programmers means paying higher monthly rates for output that can be questionable.

I have had the experience of implementing around three software packages, one from the States and two from my country of origin. All of them were really difficult to implement and proved costly; we spent almost three times the initial cost of the software. I have also implemented applications built from scratch by internal programmers; although that takes a lot of effort, work, patience, and time, it was worth it. I have used contract programmers too, but the results were so-so and I had to shell out money just to cover their hourly rates. Not worth it. And I have implemented software developed from scratch by a third-party development firm… Oh yeah…

And yes, if you will be using a third-party software development firm, make sure your contract covers everything from start to finish, and assign a really good project manager on your end to run the project and coordinate properly with the developer.

Still not satisfied? Well, there is another solution… OPEN SOURCE.

Most OPEN SOURCE software packages are downloadable from the Internet, and most of them support business operations; I have yet to see a business segment not served by some open source software. And yes, these OPEN SOURCE packages come complete with SOURCE CODE and DESIGN AND LOGIC DOCUMENTATION. If you go this route, make sure that:

a) You have evaluated the software properly and

b) You have the resources to support the implementation (like programmers, analysts, etc.)

But then, there is always the option of outsourcing the support..

-viz-

Web 2.0: What Is It All About?

By now you may have heard the term Web 2.0, and you may be asking: why? Was there such a thing as Web 1.0, or Web 1.5, that merits coining the term Web 2.0?

To be honest, yes, there is such a thing as Web 1.0, and Web 2.0 is its generational successor. But what is Web 2.0?

According to Wikipedia, Web 2.0 is commonly associated with web applications that facilitate interactive information sharing, interoperability, user-centered design, and collaboration on the Internet. In layman's terms, Web 2.0 sites are the ones that deliver those capabilities. Where Web 1.0 meant static, single-purpose websites, Web 2.0 means many functions on a single website.

Another difference is that Web 2.0 is a system in which online users become participants rather than mere viewers: there is interactivity between the site and its users, instead of the users just reading the site.

Everyone knows Facebook, right? Yes, it is Web 2.0, because you can access different applications all on one site, never leaving the Facebook domain.

You might now be thinking: what good does Web 2.0 do for me?

With Web 2.0, information can be pulled from a number of different places, and it can be personalized to meet the needs of a single user. Applications can be built on the existing applications that make up the Web 2.0 interface. It could be said that Web 2.0 allows the mass population to communicate with each other and spread ideas, rather than receiving their information from a single authority. Based on these descriptions, it should be easy to see the advantages of this system: information flows freely, and people can express their ideas without fear of repression. Web 2.0 could make the Internet a true democratic system, a digital democracy.
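
As a small illustration of “pulling information from a number of different places,” here is a sketch using only Python's standard library. The feed URLs are made up, and it assumes each endpoint returns a JSON list of items with "published" and "title" fields:

    import json
    from urllib.request import urlopen

    FEEDS = [
        "https://example.com/news.json",     # hypothetical endpoints
        "https://example.org/weather.json",
    ]

    def aggregate(feeds, limit=20):
        items = []
        for url in feeds:
            with urlopen(url, timeout=5) as resp:
                items.extend(json.load(resp))
        # "Personalize" by showing the newest items first.
        items.sort(key=lambda item: item.get("published", ""), reverse=True)
        return items[:limit]

    for item in aggregate(FEEDS):
        print(item.get("title", "untitled"))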

The population as a whole would become better informed. Instead of getting information from one source that may have an agenda, people can draw from multiple sources, which lets them make better decisions about the world around them. A good example is the ability to read newspapers from countries other than the one you reside in: you can view events from more than one perspective, which makes you a more well-informed person. Another powerful advantage of Web 2.0 is communication. It has become obvious that the Internet is one of the greatest communication mediums in the world.

In my opinion, the Internet surpasses even the telephone and the printing press, because the masses can communicate with each other without the oversight of governments or corporations. This has created an environment where ideas and freedom are allowed to flow unrestricted. People can communicate across the world for a fraction of what they would pay for an international phone call. Web 2.0 will also make the Internet more personalized: everyone has different needs, and Web 2.0 allows each individual to receive information tailored to their needs and interests (except perhaps in China, where everything can be censored).

And if there are advantages, there are also disadvantages. One of these is dependence. If your connection goes down, so does your information source. If your connection, server, or host goes down, so does your marketing. And if everything is interconnected, including your supply-management and customer-information systems, then down go your suppliers and your customers too… As you can see, everything is linked, and one failure can cascade into another.

There is nothing wrong with dependency, as long as a stable backup is in place, whether that means systems or procedures. As long as the business is not affected and can continue in case of failure, everything will work out.

-viz-

Disaster Recovery: You Don't Need One? This Might Make You Change Your Mind

I have talked with lots of people about their businesses' preparedness for when disaster strikes. Most of them say:

“I do not need it, our business is too small to be disrupted.”
“Yes, we are prepared, we back up everything.”
“Hmmmm, that is too expensive.”
“Well, maybe next time…”

While the danger disasters pose to small and medium-sized companies has increased significantly over the past decade, there are more disaster recovery solutions available, at a variety of price points, than ever before. Why? One reason is the emergence of replication as a means of data protection, and its evolution into an essential component of both disaster recovery and operational backup for applications.

Remember: if your IT stops, so does your business, and there go your profits and your customers. That simple.

Let me enumerate the facts on why even SMEs need disaster recovery procedures.

A. Storage failures are not a temporary inconvenience and WILL, I repeat, WILL affect your business in the long run.

Statistics say that 93% of companies that lost their data center in a disaster filed for bankruptcy within one year. What are your chances of landing in the lucky 7% that did not?

B. The odds of an adult seeing a dream come true are about 1 in 2.33.

Actually, that is about the same probability as a company without a DRP re-opening after a catastrophe. According to an article I got from KPMG, 1 in 4 businesses experiences a major crisis every year; that is 25%. Of the companies without a DRP, 43% will never re-open, and only 29% will still be operating two years later. At best, a business loses time, money, and opportunities.
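
To turn those percentages into money, here is a back-of-envelope calculation. The business valuation is a made-up placeholder, so plug in your own figure:

    # Expected annual exposure for a business with no DRP,
    # using the figures cited above.
    p_crisis = 0.25             # 1 in 4 businesses hit by a major crisis per year
    p_never_reopen = 0.43       # share of no-DRP businesses that never re-open
    business_value = 5_000_000  # hypothetical: replace with your own valuation

    exposure = p_crisis * p_never_reopen * business_value
    print(f"Expected annual exposure without a DRP: {exposure:,.0f}")
    # 0.25 * 0.43 * 5,000,000 = 537,500 a year; weigh that against
    # what a disaster recovery plan would actually cost you.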

So, are you prepared?

-viz-

7 Things IT Must Solve

Ever since I got out of the corporate world, the scenario has looked the same: IT infrastructure then still looks like IT infrastructure now. What is the problem with that? Nothing much in general, but if you take a close look, it boils down to one thing… HIGHER COSTS OF MAINTAINING INFORMATION TECHNOLOGY.

Three years have gone by, and three years is a long time in technology, where new trends come out every two to three months. But has the industry adapted well? Hmmmm, not really. All I can say is that other factors may be at play: budgets, resistance to change, or lack of management initiative.

Anyway, this article enumerates the things I think should generally be done within the IT industry, especially here in the Philippines, over the next few years.

A. DESKTOP COMPUTING REPLACEMENT                

Oh my! Enterprises still use fat desktop solutions for their computing needs? It is too costly to maintain individual desktops whose processing power is barely 10% utilized, and each desktop installed in an enterprise remains a fat target for hackers and other malicious users.

B. SOFTWARE LICENSING  

Related to the point above: imagine the savings if those desktops were removed and replaced with thin clients, with far fewer full operating system and application licenses to buy and track.

C. NETWORKING  

Enterprises still connect their sites in expensive ways. How about using a VPN over the public Internet? Internet connectivity gets cheaper every day.

D. PASSWORDS and the END OF IT

As we navigate the Internet, most sites require us to enter a password. Some are lax; some are so complex that users end up writing them down and posting them somewhere all too obvious; and some are… uhhrrrrm, not needed at all.

There is also the significant annoyance of trying to enter strong passwords on mobile devices. With or without a physical keyboard, it can be a real challenge. No matter how you cut it, passwords are just a bad idea.

But what can replace them? Smart cards and USB keys are great for one network or one device, but the problem is bigger than that. In a world of cloud services, iPads, and the Chrome OS, tokens aren’t the answer. It may be that the only “something you have” as convenient and portable as a password — and that could conceivably be applied across many systems and devices — is biometric authentication. But then every client device would need to be fitted with the required fingerprint or iris scanner.

Biometrics are also problematic from a user standpoint. Although I don’t necessarily share this concern, I’ve heard several people mention that they’d rather not lose a thumb to a villain who’s trying to crack into their bank account. Then there’s the possibility that if your biometric code was compromised, you can’t just reset it since it’s, well, attached and reasonably permanent.

Voice recognition, facial recognition, or any other form of recognition will have to supplant the common password eventually — let’s hope it’s sooner rather than later.

E. SPAM  

No, not the canned kind… I am talking about e-mail and even SMS spam. As it stands, we are not much better off than we were five or even three years ago. The volume of spam has stayed fairly consistent, at somewhere between 95 and 98 percent of all e-mail. The number of spam messages that actually reach recipients' mailboxes may have decreased somewhat, thanks to better filtering techniques and an army of humans at various antispam companies flagging common spam, but the problem continues unabated.

F. VIRTUALIZED APPLIANCES

Everything now should be plug-and-play, plug-and-go, or plug-and-use. But there are still lots of appliances on the market that only specialists can install and configure. I myself was involved in a project where we had to put out an application via the web, but the machine installed at the user's site was really too complex; it took a long time to configure, test, and implement.

G. IPV6  

One plain comment… Where is it now? Hurry! IPv4 addresses are fast running out.

-viz-

Is Google Chrome Overtaking Mozilla Firefox in the Browser Popularity Race?

I have been using the Google Chrome browser personally for the past two years. Coming from Mozilla Firefox, I find Chrome faster and lighter. Lately I have also tried Opera and Safari, but again went back to Chrome.

When Chrome was first released to the public, the difference in speed between Firefox and Chrome was as large as the difference between Firefox and Internet Explorer 6 back in the day. It was simply a massive step forward, with a cleaner user interface and much better browsing performance.

In Chrome's early days, there was no question which browser geeks preferred: it was Firefox all the way, if only for its excellent extension framework and the ecosystem around it. Since Chrome had no extensions at all, it lacked what many consider basic features, such as AdBlock and FlashBlock.

Another issue Chrome had to face was that it took Google quite a while to get decent Linux and Mac OS X versions out the door, which did not sit well with many power users, who obviously tend to use Linux and Mac OS X more than regular users do. Google did well by taking its time, though; I can only speak for the Linux version, but it turned out pretty good.

Both issues have been fixed now, and it seems more and more people are moving from Firefox to Chrome. I got many people around me to switch to Google's browser, and they have been pretty unanimous: they love the speed (of browsing and of the application itself), the minimal interface, and the stability. They love the auto-update feature as well, even though most of them probably have no idea it is there; that is the beauty of silent updates.

There are still issues to be dealt with in Chrome, but I am sure they can be resolved in due time. I can think of few reasons to prefer Firefox over Chrome (other than purely ethical ones, like not liking Google). I am sure someone will mention Firebug or the Firefox add-ons, but from what I gather, those gaps are closing fast.

So, did you make the switch? Why (not)?

Managing Cloud Computing Security Risks

Cloud computing is all the rage these days. CIOs seem to be diving into cloud-based solutions with reckless abandon, despite the fact that a mistake in planning or execution can have career-limiting effects. So let us take a moment to balance the benefits against the potential security pitfalls that lie in the clouds.

The really important question is: how safe is your business in the clouds? After all, cloud vendors all aim to put your stuff onto cloud servers, and in most cases those systems sit outside your data center and outside your direct control.

While this may buy you some cost reductions, it carries significant risks. Let’s consider the classic triad of information security: confidentiality, integrity and availability.

There’s no getting around that putting data onto an external server carries confidentiality risks. No matter what your cloud vendor may promise contractually or in its service-level agreement, if its security gets breached, so may yours.

How do you counter that risk? You can encrypt sensitive data, or you can keep the really sensitive stuff off the server entirely. Encryption can be a viable path for some things, like off-site backups; a minimal sketch follows below. Being particularly careful about what goes onto the server helps as well, so long as you maintain some oversight and control over the day-to-day decisions. Give your users free rein to store things on a cloud server, and they are liable to store all sorts of stuff there, blissfully unaware of the security risks.
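
Here is a minimal sketch of the encrypt-before-upload idea, using the third-party cryptography package (pip install cryptography). The filename is made up, and the key handling is deliberately naive; managing that key properly is the part you must get right:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # keep this local, never stored beside the data
    fernet = Fernet(key)

    with open("payroll.xlsx", "rb") as f:      # hypothetical sensitive file
        ciphertext = fernet.encrypt(f.read())

    with open("payroll.xlsx.enc", "wb") as f:
        f.write(ciphertext)       # only this encrypted blob goes to the cloud

    # Later, after pulling the blob back down:
    plaintext = fernet.decrypt(ciphertext)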

As for integrity, the risks in cloud computing are relatively small, unless your cloud service provider's security gets breached. If an attacker penetrates the provider's defenses and tampers with your business data, integrity can suddenly become vitally important, depending on the nature of the data.

And then there is availability. When you put data in the cloud, you are gambling that it will be available when you need it, betting that availability will not be eroded by network outages, data center outages, and other single points of failure. You can hedge your bet a bit by going with an industrial-strength cloud provider, but you will pay more. If availability of data is important to your business, you cannot blithely go with the lowest bidder. You need to do appropriate due diligence and find out everything you can about your vendors' availability, disaster recovery, and business continuity plans. “Trust but verify” should be your mantra.
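
Due diligence starts with reading the service-level agreement, and it helps to translate uptime percentages into plain hours. A quick worked example:

    # What an availability SLA actually allows, per year.
    HOURS_PER_YEAR = 365 * 24  # 8,760

    for sla in (0.99, 0.999, 0.9999):
        downtime = HOURS_PER_YEAR * (1 - sla)
        print(f"{sla:.2%} uptime allows {downtime:.2f} hours of downtime a year")
    # 99% -> 87.60 h, 99.9% -> 8.76 h, 99.99% -> 0.88 h (about 53 minutes)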

Much of this sounds like Information Security 101. To be sure, there’s a lot of plain old common sense that should be applied when considering cloud solutions.

At my company, we do use some cloud services and get gobs of value from them. For example, we are fans of Google Docs; it helps us keep our documents synchronized across our various computing devices. But I am also careful about the data I put there. I keep business-sensitive information on my local hard drives, and generally encrypted.

I have also found great value in using cloud services as part of my disaster recovery plan.

But the bottom line is that it is all about balancing risks and benefits.

That’s how we should view cloud services in general. It’s important to make informed decisions before diving into the latest trend. There is value to be found in cloud computing. But rely too heavily on it, or place your deepest darkest secrets on it, and you’re likely to be disappointed.

-viz-