The Year of the Breach

As the year comes to a close, news headlines have been dominated by reports of high-profile security attacks, some launched by “hacktivists” such as Anonymous and LulzSec.

But something larger was brewing. Amidst hacktivists’ attacks on Sony, HBGary and NATO, highly sophisticated, clandestine attackers—the kind with the rarefied expertise, deep pockets and specialized resources typically only seen in nation-state adversaries—were actively infiltrating a broad range of targets.

These attacks were different: they were patient, stealthy and leveraged a potent combination of technical skill and social savvy. Some used clever social engineering to get a foothold into their target organizations, while others used zero-day vulnerabilities—previously unknown holes in software—to penetrate defenses.

While advanced attacks have happened for years, IT security experts observed that recent attacks had grown bolder and more frequent. They were also highly targeted, customized, well-researched and, in many cases, employed both technical and social engineering techniques.

The term used to describe such complex, sophisticated attacks was “advanced persistent threats” (APTs), but as IT security experts quickly pointed out, APTs were only as advanced as they needed to be to get the job done. A concrete definition is elusive and, as experts cautioned, “Defining it could limit us and lead us to be blindsided. We need to constantly revisit the characteristics because they’re always changing.”

Much of the day’s focus was on the techniques of highly organized attackers. Such advanced threats, which include APTs, span from corporate espionage to hacktivism.

This article distills key insights from those discussions and aspires to advance the industry’s dialog on advanced threats, spur disruptive innovation and disseminate learnings from some of the most seasoned professionals in information security.

From a Cookie-Cutter Approach to Adaptive

In 2000, the ILOVEYOU worm crippled more than 50 million PCs. The delivery mechanism was simple but effective: an e-mail showed up in your in-box with a subject line of “ILOVEYOU.” When people clicked on the e-mail’s attachment, titled “LOVE-LETTER-FOR-YOU,” they were infected with a computer worm. While the damage was significant, a partial solution to this problem came in the form of antivirus software: a signature could be deployed to antivirus agents that would identify the file as malicious and arrest its actions.
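
The core of the signature approach can be sketched in a few lines: fingerprint a file and look the fingerprint up in a database of known-bad values. The hash below is illustrative (it happens to be the SHA-256 of an empty file), not a real malware signature.

```python
import hashlib

# Illustrative "signature database" -- this entry is just the SHA-256 of an
# empty file, standing in for a real published malware hash.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def is_known_malware(data: bytes) -> bool:
    """Flag a file only if its hash exactly matches a known signature."""
    return sha256_of(data) in KNOWN_BAD_SHA256

print(is_known_malware(b""))      # matches the signature
print(is_known_malware(b"\x00"))  # one changed byte, no match
```

The same property that makes this cheap and reliable is what advanced attackers exploit: recompiling or repacking the malware changes the hash, and the signature never fires.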
Today, generic malware is still profuse, but signature-based defenses, at either the network or host layer, can greatly decrease the odds of infection. What makes recent advanced threats different is their defiance of a signature. In the world of advanced threats, malware evolves quickly, and security experts have described several cases of special-purpose malware custom-developed specifically for their targets. Some were compiled within hours of launching the attack.

It became clear that enterprises targeted by highly organized attackers cannot depend on signature-based “bread and butter” security tools as a sole means of defense. While the payloads of some advanced threats were fairly standard, entry strategies were often custom-tailored.

Attackers typically used social networking sites to gather intelligence and identify specific users within an organization. Some of the main infection vectors cited were e-mail, Skype and instant messages with malware payloads in the form of PDFs, compressed HTML, script files, executables and attachments. Customization of attack techniques extends through data exfiltration.

Advanced threats often use sophisticated methods for compressing, encrypting and transmitting data to other compromised organizations, leaving little evidence of the origin of the attack or the destination for stolen information. This move from generic to tailored, from cookie-cutter to adaptive, means that security organizations need to think beyond signatures and re-evaluate how effective their current defenses are.

Remember that people, not technology, are the Achilles heel in most defensive strategies.

People are the Weakest Link 
“People are the weakest link” is perhaps the biggest cliché in information security. Security experts have long understood that users make bad choices, click on links they shouldn’t and install malware through simple ruses.

Corporate IT departments deploy multiple controls to help deal with this threat: e-mail filtering solutions catch many attacks before they make it to users, malicious links are blocked by the network, network scanners look for malicious content, and host-based antivirus (the last line of defense) tries to stop what slips through the cracks.

This process works well for generic, shotgun attacks in that signatures can be updated quickly to immunize users. Advanced attackers, however, are now creating highly credible scenarios in which they convince users to click on dialog boxes warning of fake software updates, retrieve content from quarantined areas and act (unknowingly) on behalf of the attacker.

Attackers have become dangerously adroit at using our weaknesses and behaviors against us, creatively leveraging people inside the company to help accomplish their goals. As one expert put it, “Internet scams are supposed to be sloppy, but they work.”

Advanced threats defy that stereotype. Experts put a fine point on it: “The perimeter is not a firewall; it’s our users. They don’t treat their computer as an adversary; they treat it as a tool—an extension of themselves—so they don’t question what it tells them.”

Addressing the people problem will take more than technology. Organizations need to
drive a sense of personal responsibility for security among employees.

Attackers Aim for Advantage, Not Infamy

Advanced attacks are typically not the product of hobbyists. These attacks often require months of planning, mapping out internal networks by looking at the fringes.

The reconnaissance can go much further: targeting key employees, deconstructing their life by scouring social media, custom-crafting an attack so that it is stealthy, patient, and very effective.

Cybercriminals, the ones who look to steal credit card numbers and other
commoditized and sellable data, have become increasingly sophisticated but advanced
attacks are different. Increasingly, they focus on espionage—stealing specialized data
that may be of high value and strategic importance to the commissioning entity, which
can be foreign governments, rival corporations and organized crime groups. The entities behind advanced attacks literally mean business.

Also, the entities perpetrating many advanced attacks are substantively different from the hacktivist groups that have attracted attention in recent times. Hacktivists want to embarrass their targets and expose their activities, taking pride in publishing their conquests.

Many advanced attackers, in contrast, have the goal of stealth. They do not want to be
discovered or seek publicity.

Some advanced threats are now masquerading as hacktivist attacks, with the goal of confusing forensics and placing blame on groups that are often eager to accept it. This pattern makes it difficult to size the scale of advanced threats: a willing scapegoat makes post-incident attribution particularly problematic.

The New Normal: Act as Though You Are Already Hacked  

The events of the year have shown that determined adversaries can always find exploits through people and in complex IT environments. It’s not realistic to keep
adversaries out. Organizations should plan and act as though they have already been breached.

Three foundational principles of security are compartmentalization, defense in depth and least privilege. In combination, these three tenets dictate that if one system (or person) is compromised, it should not result in a compromise of the entire system.
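
As a hypothetical sketch, least privilege and compartmentalization reduce to an access check that denies by default, so one compromised account exposes only its own compartments:

```python
# Hypothetical deny-by-default access control: each user is granted only
# the compartments they need; everything else is refused.
GRANTS = {
    "alice": {"hr-records"},
    "bob": {"source-code", "build-servers"},
}

def can_access(user: str, compartment: str) -> bool:
    """Least privilege: no recorded grant means no access."""
    return compartment in GRANTS.get(user, set())

# If bob's account is compromised, the attacker reaches bob's compartments
# but not alice's, and unknown accounts reach nothing at all.
print(can_access("bob", "source-code"))     # allowed
print(can_access("bob", "hr-records"))      # denied
print(can_access("mallory", "hr-records"))  # denied: deny by default
```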

While simple in concept, these tenets have proven complicated to implement in practice. Organizations have long relied on the notion of a “perimeter,” where a big thick wall—in the form of firewalls and gateway defenses—guards the organization, with good guys (insiders) on one side of the wall and attackers on the other.

Security perimeters are now considered a construct of the past. Boundaries are nearly
impossible to define in modern organizations. The inclusion of partially trusted users such as customers, suppliers, contractors, service providers, cloud vendors and others has made organizational boundaries very porous. Beyond the eradication of traditional
organizational boundaries, the consumerization of IT has brought a rash of unmanaged
devices into the enterprise and exposed the organization to services (and suppliers) that are opaque.

IT consumerization has also blurred the line between the business lives and
the personal lives of employees. We have moved from the illusion of a perimeter-driven defense to living in a state of compromise.

Accepting that some systems, some people and some services may already be under the control of attackers changes information security strategy. It forces a return to the core principles of compartmentalization, defense in depth and least privilege.

Organizations need to focus on closing the exposure window and limiting damage through efforts to compartmentalize systems, stop sensitive data egress and contain malfeasance. This new model also demands that we rethink old habits of sharing sensitive corporate information—such as source code, product plans and strategic roadmaps—using collaborative processes that presume perimeter defenses can keep attackers out.

Security improves through greater situational awareness: gaining the ability to
understand what’s happening beyond our network boundaries to detect threats on the horizon. Organizations get smarter by looking beyond their infrastructure and observing  the ecosystem. The ecosystem approach to security relies on organizations actively sharing information with other organizations about threats. It also demands greater visibility into the security of suppliers and service providers within one’s supply chain.

The key is to know what digital assets are important to protect, where they reside, who
has access to them and how to lock them down in the event of a breach. This ability to
tighten the net before and during an attack is key, and it requires a mature process for
incident handling. Incident response should not be considered exclusively a security
function. Instead, it is an organizational competency that must be developed and
continually honed well before an attack occurs. If organizations are planning responses
as an attack unfolds, they are too late. A competency approach allows remediation
activities to kick in automatically—like a reflex.

The Road Ahead

The reality of advanced threats demands a disruptive approach to defense—one where
enterprises can be agile and thrive in a contested environment. This approach must be
applied holistically: approaching advanced threat defense not as a discrete function but
as a natural consequence of robust but agile security.

Many of the holes that exist today come from an unmanageably complex IT infrastructure. Given that information security is a “weakest link” problem, only through understanding our assets, processes and endpoints do we have a chance at effective defense. Unraveling complexity and fielding a successful defense means that we also need to think creatively about the range of attacker motivations, which can extend far beyond data theft.

With every new technology, we have the ability to weave security into its fabric, to begin anew. We are at the start of an industry-wide move to cloud-based services and systems. We stand on the precipice of a sea change in technology. A new mantra circulating in the industry says, “If we can’t get it right with cloud, shame on us.”

Today more than ever, security is an ecosystem problem in which every constituent has a responsibility. Attackers are collaborating, sharing information, going after the supply chain, co-opting careless insiders and evading our long relied-upon defenses. We need disruptive collaboration and innovation in defense. Through collaboration, information sharing and increasing our agility, we can successfully fend off APTs and other advanced threats.

Happy Holidays! And a blessed new year to all!



Malicious Software (Malware) and Viruses on Firmware?

Just when we thought viruses and malware were only for operating systems, think again: malicious software can also be found in firmware, the small programs that control various electronic devices, both within and outside the computing world.

We can only begin to enumerate the electronic devices that may use firmware. Among them are calculators, printers, TFT monitors, digital cameras, mobile phones, and musical instruments like electronic keyboards, electronic drum pads and synthesizers. Of course, your CMOS (the code you see running while your computer boots up), your external hard disks, and your LAN and WLAN routers also use firmware. Your favorite MP3 player utilizes firmware. Your car’s computer chip also utilizes firmware.

Recently, IT security experts have found traces of this tiny software embedded within the CMOS, with the purpose of destroying equipment as well as stealing information. Much of this firmware contains “logic bombs” timed to go off, noticeably or unnoticeably, at a specific time. The payloads range from a simple hardware failure to destruction of attached devices, or, if the bomb sits within your computer’s CMOS, it may set off file deletions or even network intrusions.

Now, once malicious firmware has been inserted into electronic components, it can be almost impossible to detect. Because it is in the hardware, the malware will remain in place even when all the software has been upgraded or replaced. The circuits in which the malware would be hidden are microscopically small and enormously complex. What’s more, as with malicious software, it is possible to look directly at malicious firmware and not see anything wrong with it.

Cleverly written malware will perform the kinds of operations that the system or the equipment is routinely supposed to perform. It will just perform those operations at exactly the wrong time: for example, running a payroll process every week, or placing an electronic order to a supplier every day.

What can be done to avoid this problem now? Very little. One thing we can do is check whether your equipment manufacturer employs strict standards for installing firmware on its equipment prior to assembly. If you are in doubt about a manufacturer, do not buy the product. Please note that no antivirus solution can detect this malware, because it is embedded within the circuitry of each device. If an antivirus provider says it can provide protection and a solution, do not listen: they are bluffing. Imagine putting an antivirus program in your car’s computer system…

Since the scope of the problem is so broad, solving it in our current situation may seem impossible at this time.

Now the good news: logic bombs may only work once, but that is also the case for real bombs, and no one complains about their lack of repeatability.

It’s hard to tell if this is a realistic and growing threat that government, corporate agencies, the private sector and individual consumers should worry about, or whether it’s one of those late-night worries about risks with catastrophic consequences but no real chance of happening – like being struck by lightning while waiting for a ride home.

It is one more thing to worry about, though, and one more reason to make sure you have internal security systems designed to detect malicious activity – not just malware signatures – so they can identify and shut down attacks whose source you cannot yet identify.

It’s just a little disturbing to hear that even if you build a rock-solid defense against malware entering from all those other points, an RFID chip or print-toner-monitoring component could seed your network with malware that gives someone else a porthole through which to watch you work.

No reason to panic, though.

Insider Threat

Why does your competitor have your latest research, product that is still to be launched into the market or financial figures? It must be an insider — or is it?

Before the digital revolution, security professionals were kept awake at night worrying about the potential threat posed by an untrustworthy member of their organization. Commonly referred to as the “insider threat,” this person possibly had privileged access to classified, sensitive or proprietary data, providing the insider a unique opportunity, given his or her capabilities, to remove information, predominately in paper form, from the facility and transfer it to whomever they desired.

Over the years, extensive knowledge has been accumulated on ways to identify and counter the insider. Centuries of experience indicates that insiders are mainly motivated to steal information for money, ideology, ego or due to coercion. Through understanding these motivations, personnel security programs were established to help identify employees who may be potential insider threats. For instance, if an employee in serious financial debt is determined to be vulnerable to one of these motivations, then the security professional may deem it best, with their superior’s approval, to temporarily suspend their access to sensitive information.

The insider in previous days could do great harm to an organization. However, research and tools were developed to help mitigate the threat. Primary controls revolved around the previously mentioned personnel security measures, physical security measures such as storing the information in a safe, and procedural mechanisms such as establishing access to information based upon a “need-to-know” basis. These safeguards helped make it more difficult for an insider to steal documents.

While protecting sensitive information in paper form is still a daunting task for security professionals, today is different as the previously one-dimensional insider threat now has three dimensions. Though there are many areas to consider when discussing the insider threat (i.e. mergers, acquisitions, supply chain interaction, globalization), there are three classes of insiders: trusted unwitting insider, trusted witting insider and the untrusted insider.

We now live in the digital world, where information travels at the speed of light. As such, the insider has a greater ability to pass the information we protect to outsiders with a lesser chance of being detected. The trusted unwitting insider threat is predominately a person with legitimate access to a computer system or network, but who errs in judgment. For instance, this insider may find a USB thumb drive in the company’s restroom and, in an effort to be a good employee, plugs it into his or her company computer to determine the owner. Unknown to this user, the drive was strategically placed in the restroom by an outsider with the hope that an employee would find it and use it on a company computer system. Once the drive is accessed, it installs malicious software, which leads to the compromise of that computer system and potentially the overall network. An innocent effort to help a fellow employee, who may have misplaced a USB drive, turns out to be a classic case of the trusted unwitting insider.

Like the previous case, the trusted witting insider threat is a person with legitimate access to a computer system or network. This person, however, makes a conscious decision to provide privileged information to an unauthorized party for either personal gain or malicious intent. An increasingly familiar scenario is the disgruntled employee surreptitiously downloading sensitive files to a thumb drive and selling them to a competitor. Whatever the motivation, the end result is a deliberate violation of security protocols, justifying the designation of trusted witting insider.

The untrusted insider, which was unprecedented before the digital age, is a direct result of the interconnection of disparate elements on the Internet. This person is not authorized to access the computer system or network in question. However, he or she has taken advantage of compromised user credentials or a backdoor in the system to assume the role of a trusted employee.

In the context of the first scenario, the outsider who planted the USB thumb drive in the restroom became the untrusted insider when the malicious files were installed on the company computer giving them access. Essentially he was a wolf in sheep’s clothing inside the network. Once this role is assumed, the outsider is now an insider and has unprecedented access to internal information. This is no longer a simple intrusion, as this untrusted insider can now perform actions reserved for your trusted employees.

Insider threats are hard enough to detect. Network perimeter security is rendered useless once the untrusted insider has used valid credentials to gain access to the computer or network. Most of the components of layered defense strategies, such as policies, procedures and technical controls, can be rendered useless during this type of compromise. Technical controls stand the best chance of stopping the wolf, but differentiating the wolf from the sheep is an extremely difficult problem to solve.

The new breed of insider threats raises the question: Are you looking inward on your networks to protect against insider threats, including those that look like your own employees? User credentials, or usernames and passwords, are compromised on a seemingly regular basis. Hotmail and other web-based e-mail systems were recently the victims of large-scale credential theft, with a vast amount of the stolen information posted on various hacker websites. What if your users use the same password for their personal e-mail that they use for their work computer?

In an effort to memorize the least amount of information, it is only human nature to try to use the same password for multiple systems. This situation occurs more frequently than administrators, CSOs, CEOs and business owners like to admit. With the range of actions that can be taken by intruders with usernames and possible passwords in hand, a whole new class of insider threat is emerging.

Aside from having unrestricted access to your sensitive data, the untrusted insider may now have the capability to use your systems against you. Changing a purchase order quantity from 10 to a 1000, or placing new unwanted orders from vendors are only a very mild form of havoc a creative untrusted insider can create for your enterprise. What if they have access to your systems and create safety issues that can cause physical damage and loss of life?

Does your risk management strategy take the untrusted insider into account? As you audit your computer systems and networks, who do you see: a wolf or a sheep? Chances are you see both.

Security: Never Mind the Products or the Solutions, Just Educate the Users

If there is one thing to improve in IT security, it is not the products; it is the education and literacy of the users.

Given the way threats have been evolving, we have to admit that we are too naive in recognizing them. We need to take IT security seriously. We can do only so much at a technological level; by the time we have to choose our own passwords, we choose weak ones.

Sometimes, we feel that it is better to keep data and information where security products can see it. However, improved user education can only accomplish so much: IT systems developers also need to make their solutions simpler to use safely.

If you want millions to use a product or a service, it needs to be easy, without the need for them to install more software.

But the obligation isn’t only on customers to learn: it’s also on suppliers to inform. Buyers can’t make educated decisions about how to set up and run their IT infrastructures unless vendors supply them with the necessary information.

Nowhere is that more the case than in the market for cloud computing services, where vendors vaunt the fact that their customers don’t need to know how things work.

We need transparency from vendors and providers. We should know how their systems are organized, and we should know about the people they hire.

Experts want to see more transparency in such products and services, and better standards for security practices, so that customers can evaluate service vendors and providers.

If the level of security and transparency is very high, then clients and users may well be willing to pay more. Some may not care about security because they can pay less, but at least it gives them a choice.

There’s still a lot of work to do on standards and certification of security practices, but are we willing to pay for it?


Crack Your Own Passwords For Better Security

Passwords are the primary key to our digital lives–providing the only barrier preventing sensitive data from being compromised in most cases. IT administrators should think and act like a hacker to proactively identify weak passwords, and stay one step ahead of a data breach.

Analysis of exposed passwords highlights yet again how passwords are a weak link in the security chain. Commonly used passwords, like “123456,” are the digital equivalent of placing the key to your house under the door mat with a flashing sign that says “key hidden here.” The trivial security provided by weak passwords is hardly worth the effort of implementing and maintaining them.

Conventional wisdom and established best practices suggest a number of basic password rules:

  • passwords should not be based on personal information or words found in the dictionary;
  • they should be a certain length and have some measure of complexity; and
  • they should be changed on a periodic and frequent basis.
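
The rules above can be encoded in a short policy checker. The minimum length, the character-class requirement and the tiny dictionary here are illustrative choices, not a recommended standard:

```python
import string

# Illustrative values only: real deployments use large wordlists and
# organization-specific policy parameters.
COMMON_WORDS = {"password", "123456", "qwerty", "letmein"}
MIN_LENGTH = 12

def password_violations(password):
    """Return a list of rule violations; an empty list means the password passes."""
    problems = []
    if len(password) < MIN_LENGTH:
        problems.append("shorter than %d characters" % MIN_LENGTH)
    if password.lower() in COMMON_WORDS:
        problems.append("found in common-password dictionary")
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    if sum(any(ch in cls for ch in password) for cls in classes) < 3:
        problems.append("uses fewer than 3 character classes")
    return problems

print(password_violations("123456"))                 # fails all three rules
print(password_violations("C0rrect-Horse-Battery"))  # passes: []
```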

Unfortunately, the more difficult a password is for an attacker to crack or guess, the harder it is for the legitimate user to remember as well. That leads users to write passwords down on notepads on their desks, or to actively work to circumvent password policy to make passwords easier to recall and manage.

Some experts have pointed out that the standard practice of forcing passwords to be changed periodically may be misguided. A cracked password will most likely be used in the immediate future, making a policy to change it once every three months, or even once a month, a bit like shutting the barn door after the horses have escaped.

IT administrators can test the effectiveness of password security by thinking like a hacker and using the tools an attacker might use to try to crack passwords and breach sensitive data. Easily acquired tools like Cain and Abel or John the Ripper can identify the passwords that represent low-hanging fruit and provide easy prey for attackers. The results can be quite enlightening.
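
The core technique behind tools like John the Ripper is simple to sketch: hash each candidate from a wordlist and compare it to the stolen hash. The toy version below uses unsalted MD5 for brevity; real tools add salts, mangling rules and enormous wordlists.

```python
import hashlib

def crack(target_hash, wordlist):
    """Return the wordlist entry whose MD5 matches target_hash, or None."""
    for candidate in wordlist:
        if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None

# Simulate a stolen password hash, then "crack" it with a tiny wordlist.
stolen = hashlib.md5(b"123456").hexdigest()
print(crack(stolen, ["letmein", "password", "123456"]))  # recovers "123456"
```

Running the same idea against hash dumps from your own systems (with authorization) quickly surfaces the weakest passwords.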

Even with password policies that appear to follow conventional wisdom, it is often possible for users to find ways to create weak passwords. Investing the time to crack passwords provides an opportunity to educate users with weak passwords, modify password policies to prevent similar passwords, and collect hard evidence to present to management to justify password policy changes.

No password is invulnerable given enough time. But sufficiently strong passwords will at least take a significant period of time to crack, making it unnecessary to change them as frequently and easier for users to manage.
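
That trade-off can be made concrete with a back-of-the-envelope estimate. The guess rate below is an assumed figure for illustration; real attack speeds vary enormously with hardware and hash algorithm.

```python
def seconds_to_exhaust(charset_size, length, guesses_per_second=1e9):
    """Worst-case time to brute-force the entire keyspace."""
    return charset_size ** length / guesses_per_second

# 8 lowercase letters versus 12 characters drawn from ~94 printable symbols.
print(seconds_to_exhaust(26, 8))                # a few minutes
print(seconds_to_exhaust(94, 12) / 31_557_600)  # millions of years
```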

Open Source Software and IT Security

Open Source Software, or OSS, is computer software whose source code is available to the general public with relaxed or non-existent intellectual property restrictions (or under an arrangement such as the public domain) and is usually developed with the input of many contributors.

This type of development also brings the expertise of multiple persons to bear on the design and architecture of a software program, making it robust and capable of doing the job for which it is being designed.

The openly viewable nature of a program’s source means that if potential problems are found, they can be quickly addressed and the code adapted, under the scrutiny of more than one company or programming team.

When choosing between proprietary and open source security solutions, many organizations are misled by open source myths. As a result, they ask the wrong questions when evaluating their options and unnecessarily limit their IT solutions. Is it risky to trust mission-critical infrastructure to open source software? Why should we pay an open source vendor when open source is supposed to be free? Will a shift to open source add complexity to our IT infrastructure?

These questions all arise from open source myths that this blog article will explain and dispel, allowing IT decision makers to focus on more important organizational issues: return on investment, ease of use, agility, reliability, and control.

First Myth: Open Source Software Is Too Risky for IT Security

Many IT decision makers have a knee-jerk reaction to OSS, especially when it
comes to security. They believe OSS is most appropriate for do-it-yourself
technology geeks working in their basements. It might be fine for a company
with an obsessive technology savant on staff, but for the rest of us, OSS is
unproven, complex, and risky.

This is a myth. The real fact is that OSS is already an integral part of enterprise network infrastructures. A recent Network World magazine article looked at the state of open source adoption within the enterprise and found it widely pervasive. It states, “Most of the packaged security appliances for everything from firewalls to security information management are built on the same BSD Unix and Linux distributions as the application servers you build yourself.”
A recent Forrester Research report further argued that enterprises should
seriously consider open source options for mission-critical infrastructure.
“Although fewer than half of the large enterprises in Europe and North America are actively using or piloting open source software, a majority of those are using it for mission-critical applications and infrastructure,” the report said.

Second Myth: Open Source is Free

Another myth is that open source is free of charge and, as such, generic open source implementations can save thousands of dollars. A common question that open source vendors face every day is, “Why should we pay for something we can download for free?”

Certainly, OSS can be downloaded for free, but that is where “free” begins and ends. There are certainly other advantages to OSS, such as strong community support, continuous upgrades, and the ongoing improvement of projects by those using them. All of these advantages are technically free to any user, but someone must manage, evaluate, and then support whatever open source product your organization adopts.

If your organization would rather concoct its own OSS security suite from scratch, then it is possible to do so; however, be prepared to invest vast amounts of IT capital into such an effort. Not only must a company install and configure individual projects, but actually blending multiple projects together, all working with the correct interoperation and harmony, and being maintainable with regards to security patches and other upgrades, is a vastly complex task.
For example, while installing an intrusion detection component alongside a VPN solution on the same platform is technically possible, ensuring proper processing of the traffic takes a highly detailed understanding of many different factors: VPN tunnel traffic must first be decrypted and then run through the IDS engine, so that encrypted traffic carried by the tunnel cannot slip malicious payloads past inspection. Making components operate together is an essential part of deploying an effective security system.

A final issue is accountability. If homespun open source security fails, who is to blame? Is it the software itself? Perhaps, but what if it was configured improperly? Is it some other product within your infrastructure that has created a conflict? Possibly, but to find out you will need to search bulletin boards or wait for an expert in the community to answer the question you post. Is it the IT staff member who managed the project? After all, that is exactly the person you will have to ask for an answer. As the cliché states, “you get what you pay for.”

Third Myth: Open Source Vendors Add Little Value to OSS Projects

There is sometimes a perception that paying for open source-based products is a waste of money, since the same projects a company bases a product on can be acquired for free (see the second myth), and that companies attempting to commercialize OSS do not add anything substantial enough to justify the prices they charge. Some also question the legality of charging money for products based on the work of others.

This myth is partially based on a common misunderstanding of open source licenses. Under the most common open source license, the GNU General Public License (GPL), vendors are free to distribute and sell OSS provided they follow the terms of the license. In many products, vendors not only harness existing projects and code bases to build their solutions, but also contribute back to the community in the form of features, performance improvements, financial support, and more.
The community thus benefits from the commercialization and can continue to evolve. The many distributions of Linux and the Apache web server are examples of this.

Companies that commercialize open-source software and add value, such as documentation, guides, interfaces, and interoperability, create what are known as “mixed source” or “hybrid” solutions: a blend of open-source and proprietary components. These solutions give customers the best of both worlds; they are built on a solid open source foundation while also offering support, documentation, QA testing, and upgrades. This final level of polish makes the solution stable, manageable, and realistically deployable at more companies than an open-source-only offering.

Fourth Myth: Proprietary Solutions Are More Reliable Than OSS

As mentioned at the start of this blog, closed-source proponents call the reliability and dependability of OSS into question. Yet if today’s security solutions – open source and proprietary alike – start with the same Linux or Apache foundation, then those tasked with securing the world’s networks have already rejected this premise. If security experts trust open source, why shouldn’t you?
Proprietary solutions do present many advantages, such as technical support, training, pushed updates, integration via APIs, and innovative GUIs. Today, however, mixed-source vendors are adding these same advantages to lower-cost OSS alternatives. Moreover, the open source community actively resists much of what customers dislike about proprietary solutions: vendor lock-in, high initial costs, lack of feature upgrades and additions, and escalating maintenance contracts.

Open-source licenses discourage the kind of secrecy that has plagued proprietary software for decades—secrecy that has led to vulnerabilities and the inability to enhance or customize the software.
When something goes wrong in an open source security project, distributors
cannot deny, hide or downplay the issue. The OSS community actively polices itself and discourages anything other than openness.

Fifth Myth: Open Source Security Is Too Complex for SMEs

There is some truth to this myth. Projects like Snort (a popular open-source intrusion detection system) are certainly designed with expert users in mind, and they may work poorly (or not at all) if users are unfamiliar with their approach and implementation options.
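As a hint of the expertise Snort assumes, even a single detection rule requires fluency in its rule language. The rule below is a generic, hypothetical example (the SID, message, and pattern are invented for illustration), appended to a local rules file:

```shell
# Append a hypothetical rule to Snort's local ruleset.
# It alerts on HTTP requests to port 80 whose URI contains "/etc/passwd".
# (The rule, SID, and file path are illustrative assumptions.)
cat >> /etc/snort/rules/local.rules <<'EOF'
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"EXAMPLE possible path traversal"; flow:to_server,established; content:"/etc/passwd"; http_uri; sid:1000001; rev:1;)
EOF
```

Variables like `$HOME_NET`, flow options, content modifiers, and SID numbering conventions must all be understood before the rule behaves as intended – exactly the learning curve this myth refers to.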

Even if implemented correctly, an end-user must then ensure the program remains updated and continues to work correctly with the rest of the network security programs that are deployed. Fortunately, more and more software vendors are adapting open source projects to the demands of the market, making them very flexible and capable of being deployed in diverse network scenarios with ever-increasing ease.
There is also a second myth at work here: proprietary solutions don’t necessarily lack complexity, and open source is not an all-or-nothing commitment. Proprietary vendors who follow the traditional shrink-wrap model work much harder to lock customers into their product lines. They won’t guarantee interoperability with competing products, and for the features they don’t offer, they’ll point you to equally expensive partner solutions. Moreover, many users of proprietary solutions are bemoaning a new datacenter problem: appliance overload. Even proprietary vendors with a broad range of security offerings tend to deliver them as separate, standalone products. These many layers add cost and complexity to the IT infrastructure and present multiple points of failure that can undermine security if even one appliance is misconfigured or out of date.

With mixed-source solutions, your organization can put together a best-in-class security lineup without the associated costs or complexity. You also gain the flexibility to change your security posture as you see fit, without fear of breaking contracts, voiding warranties, worrying about interoperability, or throwing away existing investments by being forced to abandon legacy products that still work perfectly well.


At the root of a myth there usually exists some level of truth, or a situation that caused and then propagated the myth. The basis of this truth is then twisted, diluted, and often lost amongst incorrect opinions and common misconceptions.

Open source is steeped in history and capability, yet it remains daunting to those who have not been educated in this exciting area of development. This massive community has created some truly remarkable tools; however, adoption of its ideas and projects continues to meet mixed reactions, mostly because the community focuses more on creation than on marketing, so end-user awareness suffers.
Mixed-source security solutions give customers the best of both worlds – the
low cost and reliability of open source, as well as the technical support,
training, and user-friendly interfaces of proprietary products. These are no
longer just tools for the gifted.

Philippines State of IT Security – An Overview


The vulnerability of the Philippines’ government web sites was again exposed by hackers last week, prompting renewed calls for the introduction of a Cybercrime Bill which has been on the legislative backburner for a decade.

Ivan Uy, the recently appointed Chairman of the Commission on Information and Communications Technology (CICT), is to hold meetings with the ICT chairs of the House of Congress to discuss how the Cybercrime Bill, which would bring in greater powers to detect, investigate and punish cyber crimes, could be made law.

Uy was a supporter of the Cybercrime Bill when he was CIO of the Supreme Court.

The Cybercrime Bill has been amended 10 times since 2000, mostly because of disagreement over what constitutes a computer crime. Opponents say the proposed law could lead to a clampdown on citizen privacy and freedom of expression.

A hacker attack on the Philippine Information Agency’s portal will put further pressure on legislators to review the bill. The site was down for several hours on August 29th; searches on Google showed the words “Hacked by 7z1”, and an error message was displayed whenever a user tried to enter the PIA website.

In the same week, the local government web site of Bulacan, a major province north of Manila, was infiltrated by a Chinese hacker who demanded an apology for the Manila hostage tragedy that led to the death of eight Hong Kong tourists on 23rd August.

President Benigno Aquino’s personal Facebook account, which is linked to his official website, has also been attacked over the hostage tragedy, prompting the President to censor his Facebook page.

This is not the first time that government web sites have been hacked this year. In January, hackers defaced the home page of the Technical Education and Skills Development Authority (TESDA) web site.

One month before this breach, the web sites of the Department of Health, Department of Social Welfare and Development, Department of Justice, the Philippine National Police’s Criminal Investigation and Detection Group, and the Information Technology and Electronic Commerce Council were also hacked.

Earlier this year, the CICT set up the National Cyber Security Office (NCSO) to formulate and implement national cyber security plans.

The NCSO has plans to set up a national network of sensors to monitor hacking behaviour, and has launched C-Safe, a cyber security awareness campaign.