Microsoft Outlook Support


Tuesday, 31 March 2009

Security Implications Of The Virtualised Datacentre

Posted on 05:03 by Unknown
By Bill Beverley - Security Technology Manager, F5 Networks

Introduction
The concepts behind application and operating system virtualisation are not new. The rate of virtualisation adoption, however, especially software operating system virtualisation, has grown exponentially in the past few years. Virtual machines have finally come into their own, and are quickly moving into the enterprise data centre, becoming a universal tool for people and groups within IT departments everywhere.

So what exactly is a virtual machine? VMware defines virtualisation as “an abstraction layer that decouples the physical hardware from the operating system...”. Today, we commonly think of virtual machines within the scope of one hardware platform running multiple software operating systems. Most often this concept is implemented as one operating system on one hardware box (the host platform) running multiple independent operating systems on virtual hardware platforms in tandem (the guests).

Platform virtualisation usually relies on full hardware segmentation: individual guest platforms use specific portions of the physical host hardware, allowing the host and guest(s) to run in tandem without conflicting with or impacting one another.

There are two primary types of platform virtualisation: transparent and host-aware. Transparent virtualisation is implemented so that the guest is not aware that it’s running in a virtualised state. The guest consumes resources as if it were natively running on the hardware platform, oblivious to the fact that it’s being managed by an additional component, called the VMM (Virtual Machine Monitor), or hypervisor. The more standard forms of virtualisation today, such as those by VMware, implement transparent hypervisor systems. These systems can be thought of as proxies: the hypervisor will transparently proxy all communication between the guest and the host hardware, hiding its existence from the guest so the guest believes it’s the only system running on that hardware.

Host-aware implementations differ in that the guest has some form of virtualisation knowledge built into the kernel. Some portion of the guest operating system kernel knows about the existence of the hypervisor and communicates with it directly. Xen (pronounced ‘zen’), a popular virtualisation implementation for Linux, uses a host-aware architecture, requiring special hypervisor command code actively running in both the host and all running virtualised guests.

One of the driving factors in virtualisation adoption is the open nature of hardware support for VMMs: the hardware platforms that run and manage the primary host operating system and the VMM are not specialised devices or appliances. This flexibility, the move of virtualisation software to everyday hardware, has given everyone direct and inexpensive access to virtualised environments. Virtualisation allows a company to purchase one high-end hardware device to run 20 virtual operating systems instead of purchasing 20 commoditised lower-end devices, one for each operating platform.

Virtualised Threat Vectors
The benefits of virtualisation are obvious: more bang for your buck. But everything has a pro/con list, and virtualisation is no exception. The pro column is a large one, but the con list isn’t so obvious. What could be bad about running 20 servers for the price of one? Although by no means regarded as a large threat today, the security of virtual machines and environments typically goes unexamined, not because the security of these implementations is a technological mystery, but because it is generally an unknown vector for the groups implementing wide-spread virtualisation. In other words, virtualisation is usually implemented with no specific regard for the new security risks it brings.

Virtualisation brings an entire new set of security issues, problems, and risks. Security administrators are familiar with phrases such as “hardened operating system,” “walled garden,” and “network segmentation” in the one-box-for-one-application world, but how do administrators apply these concepts to the uncharted waters of the virtual data centres? How can we protect ourselves in new environments we don’t understand? Today’s system and security administrators need to begin focusing on virtual security, preparing for a new threat arena for distributed and targeted attacks.

There are many security risks and considerations that virtual infrastructure administrators should be aware of and prepared for, more than can be covered in this discussion. And there are many questions that still need to be addressed before moving to a fully virtualised environment, such as:
  • How will our current analysis, debugging, and forensics tools adapt themselves to virtualisation?
  • What new tools will security administrators be required to master between all of the virtualisation platforms?
  • How does patch management impact the virtual infrastructure for guests, hosts, and management subsystems?
  • Will new security tools, such as hardware virtualisation built into CPUs, help protect the hypervisor by moving it out of software?
  • How will known security best practices, such as no-exec stacks, make a difference when fully virtualised? Will hardware virtualisation pave the way to a truly secure VMM?
  • Virtualisation and shared storage: What happens if we virtualise all the way down to the iSCSI transport layer? Are we opening a floodgate that bypasses built-in SAN security?
These are all questions that need to be addressed before the enterprise world moves full-on into virtualisation. More than anything, we should be thinking today about where virtualisation security will take us tomorrow. We all agree that virtualisation is for the better and it’s here to stay, but security administrators need to make sure they keep ahead of the threats and think about virtualised threat vectors before attackers have already coded for them.

F5 Networks is exhibiting at Infosecurity Europe 2009, the No. 1 industry event in Europe held on 28th – 30th April in its new venue Earl’s Court, London. The event provides an unrivalled free education programme, exhibitors showcasing new and emerging technologies and offering practical and professional expertise. For further information please visit www.infosec.co.uk

Source: Infosecurity PR
Posted in F5 Networks, Infosec Europe 2009, Infosecurity Europe 2009 | No comments

Cloud-based security services: Will 2009 be the year this much hyped sector comes of age?

Posted on 04:59 by Unknown
Pravin Mirchandani, CEO of network security specialists Syphan Technologies, argues that the emergence of new high-speed security technologies as we head into a recession is likely to be the catalyst for more widely available cloud-based security services.

The term Security-as-a-Service was first coined by the marketing folks at McAfee in 2001 to describe their vision of an outsourced approach to the provisioning and management, via the Internet, of the full range of anti-X technologies needed to maintain corporate security. From a technical and business perspective, the idea of being able to devolve the responsibility for keeping complex network infrastructures secure and threat-free to third-party specialists had many attractions, particularly as IT security professionals were both thin on the ground and expensive heads to have on the payroll.

Given that this was also a time when the battle between security vendors and the hacker community was really getting into its stride, and new vulnerabilities were being discovered on a seemingly hourly basis, it is surprising that, eight years later, the industry is still struggling with the concept of cloud-based security. In fact, if anything, the fundamental drivers underpinning the argument for a SaaS approach have strengthened in the intervening years: in 2008 there were over 5,000 new vulnerabilities identified in common applications, operating systems and networking components; new PCI regulations and government legislation mean that enterprises now face serious consequences if they fail to maintain stringent security standards; and low-cost, high-speed internet connectivity is virtually universal.

So the logical question is: why is cloud-based security not more widely adopted as mainstream policy? Clearly there is no one simple answer to this and no doubt resistance to some of the changes in thinking and internal processes needed to implement a SaaS strategy is a significant factor. However, as we face the prospect of a lengthy downturn in the global economy, companies are being forced to take a fresh look at their cost base, including the core IT infrastructure fundamental to their business operation. Constrained economic circumstances are traditionally the time when the advantages of outsourcing are more readily accepted by an organisation.

One very obvious reason for the slow uptake of SaaS is that there are few companies that actually offer the full security package that businesses require. Whilst this can be regarded as one of those circular “chicken-and-egg” arguments, there are some real and fundamental technology issues that have delayed the MSSP sector from seizing the opportunity and making the leap from remote network security management to delivering the full range of hosted security services online.

In particular, security vendors have failed to keep pace with the multi-gigabit network speeds needed to power bandwidth-hungry applications such as VoIP and multimedia streaming, which many organisations have been quick to embrace and for which users demand consistent and reliable levels of performance.

One of the other big factors that has occurred in the last few years, and is also contributing to the delayed roll-out of SaaS, is the increased sophistication of the threats facing network infrastructures as the hacker community has found new ways to circumvent the latest security technology to deliver their malware payloads. The response by the security industry has been to try to adapt old technology to operate in a modern high-speed environment and to mitigate complex threats that it was never designed for, usually resulting in increased latency and unacceptable degradation of network performance. The latest multi-staged “low and slow” attacks are a specific case in point. Delivered over time in incremental parts, these attacks are virtually undetectable by existing IPS and firewall systems and require a totally new approach to intrusion detection and prevention.
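A toy sketch makes the "low and slow" problem concrete: if a payload is split across innocuous-looking fragments, a matcher that inspects each packet in isolation misses it, while one that reassembles the stream does not. The signature and packets below are invented purely for illustration; real IPS engines use far richer detection logic.

```python
SIGNATURE = b"/bin/sh"  # hypothetical payload signature

def per_packet_match(packets):
    """Naive inspection: check each packet in isolation, as a simple
    signature-based IPS might."""
    return any(SIGNATURE in packet for packet in packets)

def stream_match(packets):
    """Reassemble the whole stream before matching, so a payload
    delivered in incremental parts is still seen."""
    return SIGNATURE in b"".join(packets)

# The attacker splits the payload across two innocent-looking packets:
attack = [b"GET /bin", b"/sh HTTP/1.0\r\n\r\n"]

print(per_packet_match(attack))  # False: each fragment looks harmless
print(stream_match(attack))      # True: the reassembled stream matches
```

The catch, of course, is that reassembling and inspecting every stream at 10G line rates is exactly the throughput problem discussed below.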

Most of the big global network security vendors have announced products that include the option of 10G connectivity and make claims of high-speed throughput with multiple threat mitigation functionality. In theory they can provide the necessary protection but in practice these ASIC plus CPU based systems are restricted by the limits of their processing architectures and are unable to offer true 10G throughput performance, creating an overall bottleneck in the system and major problems for the users of VoIP and other real-time applications downstream.

As with the threat posed by multi-staged stealth attacks, resolving the issue of throughput performance requires more than just tinkering with existing technology, which in this case has effectively reached the limits of its capability. Syphan is one company that is tackling this problem head on through its innovative use of FPGA-based multi-dimensional parallel processing techniques. Using programmable silicon also means that the technology can be quickly upgraded in situ with new rule sets as and when new threats emerge, and by enabling full packet inspection against multiple rules in parallel, true 10G performance without latency is a practical reality.

With the emergence of these new technologies at a time of economic uncertainty, the roll-out of scalable online security services has become a much more attractive proposition for MSSPs and their customers alike. Whilst not everyone welcomes the prospect of scaling back their internal operations, the option for businesses to eliminate their security management and infrastructure costs, without compromising their security posture or risking impact to day-to-day business operations, is likely to be a strong factor in making 2009 the year that the cloud-based security market envisaged by McAfee starts to take hold.

Syphan Technologies is exhibiting at Infosecurity Europe 2009, held on 28th – 30th April in its new venue Earl’s Court, London. For further information please visit www.infosec.co.uk

Source: Infosecurity PR
Posted in Infosec Europe 2009, Infosecurity Europe 2009, Syphan Technologies | No comments

Saturday, 28 March 2009

Time of Proactive Security is Beginning!

Posted on 11:05 by Unknown
By Ari Takanen, CTO, Codenomicon

The easiest method of conducting a security compromise is to look for a vulnerability in widely used software and exploit that. The problem today is that vulnerabilities rarely become public. There is very little motivation to disclose security findings anymore. Unfortunately this also makes reactive tools such as intrusion detection systems, security scanners and vulnerability scanners useless. They are all based on public vulnerability knowledge, and today they just do not have the data to work on. More and more zero-day attacks emerge with no protection available. It is time to be proactive!

Fortunately a set of proactive security assessment tools has emerged to fill the gap. These tools can be divided into three categories: static code analysis tools, reverse-engineering tools, and fuzzers. But which of these tools are useful for the everyday security engineer trying to defend his or her enterprise network? Maybe to you they all look like quality assurance tools rather than enterprise tools? Code auditing tools require access to source code to be useful. Reverse-engineering is a powerful, but often illegal, means of finding vulnerabilities. That leaves fuzzing as the only proactive means of protecting your system.

Recently, fuzzing tools have been adopted into standard penetration testing practices and certification processes. In SCADA (industrial automation), for example, fuzzing has become a critical part of security testing. Such tests have also been adopted as procurement criteria in telecoms. And if you look at the recent marketing materials for Google Chrome (http://www.google.com/googlebooks/chrome/) you can see that major software companies have taken up fuzzing as part of their quality assurance process.

Without knowing it, you might already be using a product that has been fuzzed during its lifecycle. I definitely hope it has been. The only way to make sure is to fuzz it yourself. This was the beginning of the enterprise fuzzing market, and more and more end-user organizations are adopting fuzzing and integrating it into their standard auditing, acceptance and procurement processes.

What is Fuzzing?

Fuzzing is nothing new. For years already, software testers, developers and auditors have used fuzzing in their proactive security assessments. It is used to easily find defects that can be triggered by malformed inputs via external interfaces. This means that fuzzing is able to cover the most exposed and critical attack surfaces in a system relatively well, and to identify many common errors and potential vulnerabilities quickly and cost-effectively. There are no false positives with fuzz testing: a crash is a crash, and you cannot argue with that.

Although today the most widely used fuzzers are commercial, much of the notoriety of fuzzers has arisen from the success of open source projects. The best-known fuzzing work dates from 1989, when Miller et al. tested Unix command-line tools with fuzzed inputs (see http://www.cs.wisc.edu/~bart/fuzz/). Their research indicated that 20-40% of all tested software failed (crashed) when random inputs were provided. Back then fuzzing was dumb but still powerful. During the last 10-15 years, fuzzing has gradually developed into a full testing discipline with support from both the security research and traditional QA testing communities, although some people still suffer from misconceptions regarding its capabilities, effectiveness and practical implementation. Fuzzing today is extremely intelligent!
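That original "dumb" style is easy to reproduce. The sketch below assumes a POSIX system and Python: it pipes random bytes into a command-line tool of your choosing and counts runs that die on a signal (a negative return code, e.g. -11 for SIGSEGV), the kind of failure the 1989 study counted as a crash.

```python
import random
import subprocess

def random_bytes(rng, max_len=1024):
    """Produce a buffer of random bytes, in the spirit of the 1989 study."""
    return bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))

def fuzz_command(argv, runs=10, seed=0):
    """Pipe random input into a command-line tool and count crashes.

    On POSIX, a negative returncode means the process was killed by a
    signal (e.g. SIGSEGV), which is what we count as a crash here.
    """
    rng = random.Random(seed)
    crashes = 0
    for _ in range(runs):
        proc = subprocess.run(
            argv,
            input=random_bytes(rng),
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if proc.returncode < 0:  # killed by a signal => crash
            crashes += 1
    return crashes
```

For example, `fuzz_command(["cat"], runs=100)` should return 0 on any modern system; the 1989 finding was that a surprising fraction of standard utilities did not fare so well.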

Fuzzing Value

Fuzzing is especially useful in analyzing proprietary and commercial systems, as it does not require any access to source code. The system under test can be viewed as a black-box, with one or more external interfaces available for injecting tests, but without any other information available on the internals of the tested system. A practical example of fuzzing would be to send malformed HTTP requests to a web server, or create malformed Word document files for viewing on a word processing application.
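A black-box test-case generator in that spirit can be sketched in a few lines: start from one well-formed HTTP request and apply simple mutations (byte flips, oversized fields, truncation). The request template and mutation set here are illustrative only; a real fuzzer would use a protocol grammar, and actually delivering each case to a target server over a socket is omitted.

```python
import random

BASE_REQUEST = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

def mutate(data, rng):
    """Apply one random mutation: flip a byte, bloat the message, or truncate it."""
    data = bytearray(data)
    choice = rng.randrange(3)
    if choice == 0 and data:        # flip one byte to a random value
        i = rng.randrange(len(data))
        data[i] = rng.randrange(256)
    elif choice == 1:               # append an absurdly long run of 'A's
        data += b"A" * rng.randrange(1, 5000)
    else:                           # truncate mid-message
        data = data[: rng.randrange(1, max(2, len(data)))]
    return bytes(data)

def generate_cases(n=100, seed=1):
    """Produce n malformed variants of the base request."""
    rng = random.Random(seed)
    return [mutate(BASE_REQUEST, rng) for _ in range(n)]
```

Each generated case would then be written to the server's socket while a monitor watches the target process for crashes or hangs.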

The purpose of fuzzing is to find flaws in software, and it does that extremely efficiently. In tests conducted by Codenomicon Labs (www.gohackyourself.net), the researchers found that none of the available consumer WLAN access points could withstand any fuzzing. Elimination of such flaws with automated black-box tools reduces the cost of software both in R&D and in maintenance by the end-users of the communication products. Potentially, in the world of tomorrow, you will not need any security devices, because the networks themselves will have been thoroughly tested, with fuzzing, to tolerate any surprises coming from the network.

Codenomicon is exhibiting at Infosecurity Europe 2009, Europe's number one dedicated Information security event. Now in its 14th year, the show continues to provide an unrivalled education programme, the most diverse range of new products & services from over 300 exhibitors and 12,000 visitors from every segment of the industry. Held on the 28th - 30th April 2009 in Earls Court, London this is a must attend event for all professionals involved in Information Security. www.infosec.co.uk

Courtesy: Infosecurity PR
Posted in Codenomicon, Infosec Europe 2009, Infosecurity Europe 2009 | No comments

Vulnerability Management - Battling the Unknowns with Intelligence

Posted on 11:02 by Unknown
by Chris Schwartzbauer, Vice President of Development and Customer Operations, Shavlik Technologies, LLC.

Too many companies, today quite savvy about security and compliance requirements, continue to struggle to get to grips with the basics: understanding what is on their network, how it is configured, what its purpose is and what is running on it. Often the decision makers, the CIO and the Security and Risk Managers, assume the basics are resolved because a significant investment has been made in a sophisticated security strategy and technologies. They have not, however, recognised that it is in the mundane processes of policy and configuration management that the vulnerability gaps are left wide open. This leaves them working in the dark, unable to track and therefore effectively enforce IT security policy. Ongoing investments in security compliance for PCI, or to adopt ISO 27002 and other standards, are also compromised as long as this weak link in security strategy persists.

You can’t secure what you don’t know about and unfortunately the unknowns are many:
  • Companies are often unaware of all of the servers live on their network
  • Laptops are offline when vulnerability scans occur, or their agent software is not activated
  • Data governance is poor: data is easily copied and moved around the organisation by employees
  • Virtualisation has multiplied the number of machines that must be protected, while too many people are able to create virtual machines
  • Unknown network connections and account privileges persist
  • Unknown applications persist, whether malicious or loaded inadvertently by employees; for the latter, patches are never applied
  • Oversights in configuration settings
The resolution lies in addressing the problem from the ground up. Attention must be paid to equipping the administrator with the ability to discover and evaluate all of the systems on and connecting to the network. They need access to usable information to ensure they comprehend the entirety of the problem, can set priorities, and instil confidence by communicating progress. The vulnerability gaps, once discovered, will usually require the most basic of security controls – configuration according to current access policy or removal of unauthorised software. The complexity lies in finding the gaps so that they can be filled.
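The discovery step can be sketched with a plain TCP connect sweep, assuming Python; real products layer on agent check-ins, passive monitoring and OS fingerprinting, so treat this as a minimal stand-in for "find what is live on the network":

```python
import socket

def probe(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(hosts, ports):
    """Sweep a list of hosts and ports and report what answered -- a crude
    stand-in for the asset-discovery step described above."""
    found = {}
    for host in hosts:
        open_ports = [p for p in ports if probe(host, p)]
        if open_ports:
            found[host] = open_ports
    return found
```

A run such as `discover(["10.0.0.%d" % i for i in range(1, 255)], [22, 80, 443, 3389])` yields the raw inventory against which configuration and patch policy can then be checked.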

For their part, security administrators tell us that they are recognising the need to develop a meaningful overview of their network assets, largely in response to increasing pressure to report on their security status to executives newly motivated to demonstrate responsibility to customers and board members alike. They are challenged, however, by the complexity of their heterogeneous networks, an overwhelming amount of log data that is too time-consuming to interpret, and a reluctance to automate where manual processes are no longer adequate. The latter point is illustrated in a recent international study released by industry analysts Aberdeen Group, which suggested only 51% of companies have automated basic vulnerability management operations such as patch and configuration management, despite widespread acceptance that many security vulnerabilities can be avoided by fixing this issue.

The struggle to glean good, complete information about the security status of their information systems is most obvious when it comes to audit time. In a 2008 survey Shavlik conducted of over 400 delegates attending trade shows in the US and Europe, they identified over 120 different solutions for managing the audit process, with many trying to develop their own management programs or pull together information from 'a lot of systems'. A significant proportion, nearly 40%, indicated that they were dissatisfied with this situation. Other feedback shared by our customers suggests that they want interoperability, or even integration, across the disparate solutions they have deployed for vulnerability management (application control, configuration management, virtualisation control, patch management, even anti-virus and spam control) so that they can develop a comprehensive view of what is happening.
Some vendors are responding: many of us are committing to standards such as SCAP which, though an initiative of a US government agency, leverages internationally recognised open standards such as the Common Vulnerabilities and Exposures (CVE) identifiers, the Open Vulnerability and Assessment Language (OVAL), and the Common Vulnerability Scoring System (CVSS). Commercial application of these standards promises to deliver the improved interoperability across functions that is being demanded. The opportunity is there for companies and organisations to establish an integrated approach to their security operations.
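To make one of those standards concrete, here is a sketch of the CVSS v2 base-score calculation (v2 being the current version at the time of writing), using the metric weights published in the FIRST specification:

```python
# CVSS v2 base-score metric weights, per the FIRST CVSS v2 specification.
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}
IMPACT_WEIGHT = {"N": 0.0, "P": 0.275, "C": 0.660}  # confidentiality/integrity/availability

def cvss2_base(vector):
    """Compute the CVSS v2 base score from a vector string such as
    'AV:N/AC:L/Au:N/C:P/I:P/A:P'."""
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - IMPACT_WEIGHT[m["C"]])
                        * (1 - IMPACT_WEIGHT[m["I"]])
                        * (1 - IMPACT_WEIGHT[m["A"]]))
    exploitability = (20 * ACCESS_VECTOR[m["AV"]]
                         * ACCESS_COMPLEXITY[m["AC"]]
                         * AUTHENTICATION[m["Au"]])
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

print(cvss2_base("AV:N/AC:L/Au:N/C:C/I:C/A:C"))  # 10.0, a worst-case remote flaw
print(cvss2_base("AV:N/AC:L/Au:N/C:P/I:P/A:P"))  # 7.5, a typical remote partial compromise
```

Having every scanner and patch tool emit the same score for the same vulnerability is precisely the interoperability benefit these open standards are meant to deliver.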

It used to be that hackers wanted to make a big impact: create and distribute malicious programs that could proliferate quickly and cause great disruption. Now most attacks are designed to go undetected, giving the program time to invade a piece of software, search out and steal valuable data that can be sold on the black market. They are also more focused on endpoint machines and PCs, given the comprehensive investment in firewalls and the historic focus on defending the network itself. Such an attack can last for months and avoid detection until a customer realises that a breach has occurred. This phenomenon is catching public attention, with publicised data losses alerting everyone to their vulnerability, while executives are increasingly asking their CIOs if their company could make the next news headline.

It’s time to recognise that organisations must work with a solid understanding of whether a given box is relevant and configured for its task, whether users have downloaded anything, and whether it’s all patched; there can be hundreds of checks that administrators will want to, and should, verify. This will rely on the will to plan, organise and take advantage of their security management information, starting with a query of the potential unknowns. Before systems can be patched and configured according to policy, administrators must proactively scan for what systems exist, and ensure laptops are detected whenever they connect to the network. They must understand what software exists on those systems, and whether the approved configuration is appropriate. The remediation that follows can then be systematic and sustainable, and communicable through a rich resource of reporting information that can be tailored for whoever may be looking for reassurance. Until these basics are effectively managed, there will always be a risk to company security and to any effort at compliance with security policy or external regulation.

Shavlik Technologies is exhibiting at Infosecurity Europe 2009, held on 28th – 30th April in its new venue Earl’s Court, London. For further information please visit www.infosec.co.uk

Shavlik Technologies, LLC delivers businesses robust software solutions that rapidly accelerate and continuously improve security and compliance readiness by simplifying IT operations, and identifying and reliably closing system security gaps.

Courtesy: Infosecurity PR
Posted in Infosec Europe 2009, Infosecurity Europe 2009, LLC, Shavlik Technologies | No comments

Thursday, 26 March 2009

Finjan confirms cybercrime revenues exceeding drug trafficking

Posted on 12:43 by Unknown
Farnborough, United Kingdom, 26th March 2009: Testimony from AT&T's Chief Security Officer Edward Amoroso, in which he told a US Senate Commerce Committee that revenues from cybercrime - at $1 trillion annually - now exceed those of drug crime, has been confirmed by Finjan, the business Internet security expert.

"Our latest research suggests that, whilst the economic downturn is reducing the income of drug traffickers, cybercriminals are becoming ever more innovative in the ways they extract money from companies and individuals," said Yuval Ben-Itzhak, Finjan's Chief Technology Officer.

"In our Q1 2009 report on cybercrime, for example, we revealed that one single rogueware network is raking in $10,800 a day, or $39.42 million a year. If you extrapolate those figures across the many thousands of cybercrime operations that exist on the Internet at any given time, the results easily reach a trillion dollars," he added.

According to Ben-Itzhak, Finjan's Q1 2009 security trends report also revealed that traffic volume to compromised Web sites has increased significantly, so luring masses of potential buyers to rogueware offerings.

As we have reported many times in our quarterly reports, he said, cybercriminals keep on looking for improved methods to distribute their malware and rogueware.

And since they make money by trading stolen data or selling rogue software, they are always looking for new and innovative techniques, he explained.

"It's against this backdrop that we can confirm AT&T CSO Amoroso's testimony that cyber-security threats have increased significantly over the past five years, and have reached the point where they pose a significant threat to all organisations," he said.

“We have seen a trend of unemployed IT personnel finding new and easy income by purchasing and using Crimeware Toolkits that are sold by professional hackers. We believe that this was just the beginning of a wider trend that we will experience in 2009 and 2010. Given the large number of layoffs of IT professionals all around the world, especially in the USA, we expect a rising number of people willing to ‘give it a try’ and to get stolen credit card numbers, online banking accounts and corporate data that they can use to generate income,” he added.

"Because of this, we are urging companies to constantly review their IT security defences and the ways they monitor their IT resources against all forms of incursion and data leakage. It's only with extreme vigilance that IT managers can reduce the risk of a serious cybercrime event causing severe fiscal damage to their firm," he added.

For more on Edward Amoroso's Senate testimony: http://preview.tinyurl.com/cpc2pa

For more on Finjan's Q1 2009 intelligence report: http://www.finjan.com/cybercrime_intelligence

Finjan MCRC specializes in the detection, analysis and research of web threats, including Crimeware, Web 2.0 attacks, Trojans and other forms of malware. Our goal is to be steps ahead of hackers and cybercriminals, who are attempting to exploit flaws in computer platforms and applications for their profit. In order to protect our customers from the next Crimeware wave and emerging malware and attack vectors, Finjan MCRC is a driving force behind the development of Finjan's next generation of security technologies used in our unified Secure Web Gateway solutions. For more information please also visit our info center and blog.

Secure Gateway provides organizations with a unified web security solution combining productivity, liability and bandwidth control via URL categorization, content caching and applications control technologies. Crimeware, malware and data leakage are proactively prevented via patented active real-time content inspection technologies and optional anti-virus modules. Powerful central management enables intuitive task-based policy management, excellent drill-down reporting capabilities and easy directory integration for all network implementation options. By integrating several security engines in a single dedicated appliance, Finjan’s comprehensive and integrated web security solution enables quick deployment, simplified management and reduction of costs. Business benefits include real-time web security (no patches or updates needed), lower total cost of ownership (TCO), cost savings in administration efforts, lower maintenance costs, and reduction in loss of productivity. Finjan's security solutions have received industry awards and recognition from leading analyst houses and publications, including Gartner, IDC, Butler Group, SC Magazine, eWEEK, CRN, ITPro, PCPro, ITWeek, Network Computing, and Information Security. With Finjan’s award-winning and widely used solutions, businesses can focus on implementing web strategies to realize their full organizational and commercial potential. For more information about Finjan, please visit: www.finjan.com.

Neil Stinchcombe, Eskenzi PR

Experts say energy network hacks could be avoided with code auditing

Posted on 12:41 by Unknown
Fortify says energy network hacks can be avoided through the use of code auditing and analysis

26th March 09 - Commenting on the reported vulnerability of the energy and utility networks to external attacks by hackers, Fortify Software, the software security assurance experts, says that the custom code seen in many energy applications means that program code auditing and analysis is now a must for security.

"The problem facing IT managers within energy companies is that a lot of programs they use on their IT resources are either heavily customised or written from scratch, such as SCADA applications," said Rob Rachwald, Fortify's Director of Product Marketing.

"Because of this, the code auditing and review process must involve building security into the software from the ground up. The problem, however, is that this is not a frequently used mantra in the energy industries, many of which use modified Windows 98 and even DOS applications dating back several years," he added.

According to Rachwald, the aim of integrating security into the program code of energy companies is not to build operational standards, but preventative ones.

Rachwald says that Fortify has been working with Cigital, a consulting firm specialising in software security, to develop the 'Building Security In Maturity Model' (BSIMM), a set of benchmarks for developing and growing an enterprise-wide software security programme.

The BSIMM programme, details of which were released in early March, says Rachwald, is highly applicable to the reported security worries surrounding the vulnerability of utility, and in particular energy, networks, since it creates benchmarks where none existed previously.

Under BSIMM, he explained, Fortify and Cigital have developed a structured set of practices based on real-world data, which provides an insight into what successful organisations actually do to build security into their software.

It also, he says, gives developers an understanding of how to mitigate the business risk associated with insecure applications.

"The North American Electric Reliability Corporation - NERC - has also been working on required source code reviews. This is especially relevant given the trend to using open source programs as a baseline for energy company customised software," he said.

"Using the NERC approach to code auditing and reviewing is an excellent starting point on which to build a program audit process and a great step towards engendering a preventative mindset on the software development front," he added.
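The kind of coding flaw such audits and static analysis passes are designed to surface can be sketched in a few lines. The example below is illustrative only: the table, function names and payload are ours, not drawn from Fortify's tooling or any SCADA code base. It contrasts an injectable database query with its parameterised fix, a classic defect class that a source code review would flag.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flawed: string interpolation lets crafted input rewrite the SQL,
    # exactly the class of defect a code audit is meant to catch.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterised query treats the input purely as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"                      # classic injection string
print(len(find_user_unsafe(conn, payload)))   # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))     # 0 -- no such user
```

A human reviewer or static analyser looks for exactly this pattern: untrusted input flowing into a query string without sanitisation or parameterisation.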

For more on the energy network security debate: http://tinyurl.com/cvac8t

For more on Fortify Software: http://www.fortify.com

Yvonne Eskenzi, Eskenzi PR

Wednesday, 25 March 2009

IBM in talks to acquire Sun Microsystems

Posted on 11:43 by Unknown
by Michael Smith

Deal would strengthen the computing giant's open source credentials, but would that be good for open source and the freedom of its code?

IBM is in acquisition talks with hardware and software platform vendor Sun Microsystems, which is also behind OpenOffice.org, according to the Wall Street Journal.

The report has been neither confirmed nor denied by either party.

According to the Wall Street Journal's sources, IBM would pay at least $6.5 billion for Sun Microsystems. That is almost twice its present market capitalisation, but half its total revenues in the 2008 financial year – testament to the fact that investors have little faith in Sun's ability to make money this year.

Sun has had a disastrous financial year so far. The company lost $1.7 billion in the first quarter, announcing shortly after that it plans to lay off 6,000 employees.

Among the many causes of Sun’s woes have been some expensive acquisitions, notably that of storage equipment manufacturer StorageTek in 2005 for $4.1 billion and MySQL for $1 billion. The latter, in particular, formed the basis of a ‘commercial open source’ business model that has yet to prove ‘commercial’, in the traditional sense.

That means that IBM may be picking up a bargain. The IT giant has also built an open source strategy, which would be bolstered by Sun’s credibility (if not profitability) in the field. However, there may also be an overlap in the companies’ hardware portfolios.

This is probably a story of "don't be greedy", and it could serve as a warning to others when it comes to acquisitions.

In my view, while this acquisition of Sun Microsystems may give IBM open source credentials, the question, as I stated at the outset, is whether it is good for open source in itself.

A takeover of Sun Microsystems by IBM, if it comes to it, may not necessarily affect the most famous and most widely used open source office suite, OpenOffice.org, since development is done in the main by the OpenOffice.org team. Nevertheless, OpenOffice.org is part of Sun, and there is always the possibility that the free open source office suite we are used to will suddenly no longer be free or available.

I guess we will have to wait and see as to the outcome.

© M Smith (Veshengro), 2009

Tips on stamping out Data Leakage & Industrial Espionage during a Recession

Posted on 11:41 by Unknown
Cyber-Ark Software explains why the recession is impacting IT security and provides top tips to ring-fence the risk

By Mark Fullbrook, UK Director, Cyber-Ark Software

At a recent monthly gathering of both good and bad hackers in a dingy pub in Leicester Square, I asked them whether the economy was opening up new opportunities for them. The response was an overwhelming yes, with nearly everyone saying that the cutbacks had caused jobs to be outsourced and that, with fewer people in IT looking after security, there would be increased room for vulnerabilities and mistakes to emerge. They were also quick to state that the sentiment amongst redundant employees was one of disgruntlement, and that they were therefore more inclined to exploit loopholes in their previous employers' networks.

The hacker community reinforced findings Cyber-Ark had unearthed in a recent survey it conducted amongst 600 office workers in London's Canary Wharf, New York's Wall Street and Amsterdam. The study explored whether the recession was affecting people's attitudes to work ethics and data security and, shockingly, it revealed that data theft and industrial espionage were on the up – worryingly, not from hackers, but from a workforce concerned about impending job losses.

56% of workers surveyed said they were worried about losing their jobs because of the economic climate and, in anticipation, over half admitted to downloading competitive corporate data which they had identified as a useful negotiating tool in securing their next position. Top of the list of desirable information to steal were customer and contact databases, with plans and proposals, product information, and access/password codes all popular choices with a perceived value.

Memory sticks are the smallest, easiest, cheapest and least traceable method of downloading huge amounts of data, which is why, according to the Cyber-Ark survey, they're the "weapon of choice" for sneaking data out from under the boss's nose. Other methods were photocopying, emailing, CDs, online encrypted storage websites, smartphones, DVDs, cameras, Skype, and iPods. Rather randomly, yet disconcertingly, seven percent in the UK said they'd resort to memorising important data!

It’s not all doom and gloom as the survey also discovered that 70% of companies had implemented restrictions to prevent employees from taking information out of the office but that still leaves a worrying 30% unprepared for the snake in their midst.

Top Tips to Ring Fence The Risk
So what can companies do to stop data leakage and company secrets being exposed during these very uncertain times? My best advice is to …

1. Only allow people access to the information that they need for their everyday activity. Install multiple layers of security within the organisation depending on the value of the information; in this manner only those who are privy to highly sensitive or important data are allowed access to it. The best way to do this is to have a "digital vault", where you can encrypt the company's most critical assets and allow only those with privileged access into the vault.

2. Regularly change passwords on admin accounts or privileged accounts which are accessed by more than one user, as you will often find that these power passwords are being informally shared amongst people who shouldn't be using them. It's once you change these that people suddenly phone in and ask why they can no longer access the data, and you realise just how many unauthorised people were unnecessarily accessing the information. It's these admin accounts and privileged passwords that hackers will always try to access first, as they are often badly managed, leaving gaping holes in the network.

3. Drum into your staff the importance of respecting company data and make sure you instil good IT security housekeeping rules. You can have the best IT security products in the world, but if your staff let you down by stealing the information, then all your best intentions and investments go out the window – along with the data!

4. Make sure you have an audit trail for sensitive and important data. That way you can track who has access to what information and can check at all times who is accessing it.

5. Have a strict password usage policy that requires all users within the company to change passwords regularly, mixing numbers, letters and symbols. Do not allow users to know, or worse share, each other's passwords. As mentioned earlier, manage and audit the highly sensitive administrative passwords to prevent hackers, and increasingly insiders, from exploiting the systems.

6. Ensure that you have a strict protocol for remote users and administer security products onto mobile devices centrally. Deploy the best, most transparent encryption solution that doesn't impede the device or impact the user; otherwise they will do their utmost to bypass it.

7. Have protection in place against data deletion and loss: earlier file versions should be retained, ensuring an easy way to revert to the correct file content or recover from data deletion quickly with minimal disruption.

8. Always use digital signatures so that unauthorised changes to files are detected.

9. Make sure you have end-to-end network protection. Security must be maintained while data is being transported over the network; the process of transferring data has to be, in itself, secure. Users should be authenticated, and access control used to ensure that users only take appropriate action and that only authorised actions are carried out.

10. Maintain process integrity at all times. As data transfer is an essential part of a larger business process, it is critical to be able to validate that this step in the process is executed correctly. This requires the solution to provide auditing features, data integrity verification, and guaranteed delivery options.
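The digital-signature tip above can be illustrated with a keyed hash (HMAC), which detects any unauthorised change to a file's contents. This is a minimal sketch, not Cyber-Ark's implementation; the key and the file contents are hypothetical stand-ins.

```python
import hmac
import hashlib

# Hypothetical secret: in practice this would live in the "digital vault",
# not in source code.
SECRET_KEY = b"example-key-kept-in-the-vault"

def sign(data: bytes) -> str:
    # Keyed hash (HMAC-SHA256) stored alongside the file.
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, signature: str) -> bool:
    # Recompute and compare in constant time to resist timing attacks.
    return hmac.compare_digest(sign(data), signature)

original = b"Q3 customer contact database"
tag = sign(original)

print(verify(original, tag))                  # True  -- file untouched
print(verify(original + b" (edited)", tag))   # False -- tampering detected
```

Anyone without the key cannot forge a valid tag, so any edit to the file is flagged the next time the signature is checked.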

In the current economic climate employers need to be able to trust their staff; however, with everyone jittery about keeping their jobs, the instinct is to look out for number one. The result is that employers need to be stricter about locking down sensitive and competitive information. It would be unthinkable to leave money on a desk, an obvious temptation to anyone passing; instead it is always safely locked away, and the time has come for companies to give sensitive information the same consideration. If times get hard, and they invariably will, companies need to ensure that any cutbacks aren't deeper than expected when stolen data unexpectedly eradicates any chance of survival. Cyber-Ark's advice: allow access to your most critical assets only to those who really need it, and encrypt them.

Yvonne Eskenzi, Eskenzi PR