Friday, March 30, 2012

Security Issues for Cloud Computing

SaaS Security Issues
As interest in software-as-a-service grows, so too do concerns about SaaS security. Total cost of ownership used to be the most frequently cited roadblock among potential SaaS customers. But now, as cloud networks become more frequently used for strategic and mission-critical business applications, security tops the list.

1.  Cloud Identity Management is lacking
Companies that have existing identity services running behind their firewalls may not find SaaS integration an easy proposition; compatibility between on-premises directories and cloud applications is still behind the curve. Some vendors are working on this, developing third-party applications that let IT departments extend authentication into the cloud through a single log-on. Ping Identity and Symplified are two examples. This leads to another problem as well: the whole point of moving to SaaS is to reduce complexity, and buying more applications from more vendors only reintroduces the complexity you’re probably hoping to avoid, not only for your infrastructure but for your users as well.
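To make the single log-on idea concrete, here is a minimal sketch of how a SaaS application might accept a token issued by an on-premises identity provider. The PyJWT library, key file, issuer and audience values are illustrative assumptions, not a description of any particular vendor's product.

```python
# Minimal sketch: a SaaS endpoint accepting a signed token from an on-premises
# identity provider (IdP). Key file, issuer and audience are hypothetical.
import jwt  # pip install PyJWT[crypto]

with open("idp_public_key.pem") as f:       # public key published by the IdP
    IDP_PUBLIC_KEY = f.read()

def validate_sso_token(token: str) -> dict:
    """Verify the IdP's signature and basic claims; return the user's identity."""
    return jwt.decode(
        token,
        IDP_PUBLIC_KEY,
        algorithms=["RS256"],
        audience="https://saas.example.com",      # hypothetical SaaS application
        issuer="https://idp.corp.example.com",    # hypothetical corporate IdP
    )

# claims = validate_sso_token(incoming_token)
# claims might look like {"sub": "alice", "groups": ["finance"], "exp": ...}
```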

2. Industry Secrecy
While vendors for cloud software naturally argue that their systems are far more secure than traditional infrastructures, they are disquietingly secretive about their security procedures. When questioned about this, the common response is that this is – oddly enough – done to protect the security of their systems. This may sound innocent enough, but several analysts claim this is a bad sign.
Specifically, analysts from the Burton Group have criticized Amazon’s Chief Technology Officer (CTO) for not being forthcoming enough about the company’s security practices, stating that when customers don’t know enough, they should assume the worst. Microsoft, on the other hand, has done a reasonable job of documenting its security, according to the analysts.

3. Open Access Increases Convenience but also Risk
One major benefit of software-as-a-service -- that business applications can be accessed wherever there is Internet connectivity -- also poses new risks. Coupled with the proliferation of laptops and smart phones, SaaS makes it even more important for IT shops to secure endpoints. "Because of the nature of SaaS, it's accessible anywhere," Senior Vice President Rowan Trollope of Symantec Hosted Services notes. "If I decide to put my e-mail on Gmail, an employee could log in from a coffee shop on an unsecured computer. It's one of the benefits of software-as-a-service, but it's also one of the downsides. That endpoint isn't necessarily secure. The data is no longer in your walls, in the physical sense and in the virtual sense."

4. Cloud standards are weak
When you’re shopping around for a software vendor, one of the first things you want to see is the vendor’s security qualifications. Passing various standards makes this process easy – if a company boasts certain credentials, you can immediately understand the measures that company has taken to secure your data. Unfortunately, there aren’t any standards built around cloud software just yet.
SAS 70 is an auditing standard designed to show that service providers have sufficient control over data. The standard wasn’t crafted with cloud computing in mind, but it has become a stand-in benchmark in the absence of cloud-specific standards.
ISO 27001 "is not perfect but it's a step in the right direction," MacDonald says. "It's the best one out there, but that doesn't mean it's sufficient."There's no guarantee that your data will be safe with an ISO 27001-compliant vendor, however.

5. Your Data May Move Without Your Knowledge
A big perk of cloud computing is the lack of local storage: all of your files are stored on a remote, centralized server, which means you can access and modify your data from anywhere. However, one technical step that makes this possible is “load balancing.” If you access your files from a geographically distant place – say you’re on a business trip in Europe and trying to access your files from an American server – your cloud network will actually copy those files to a server closer to you to improve performance. This is a great feature, but it can run afoul of certain regulations – such as the Federal Information Security Management Act (FISMA), which requires certain sensitive data to be kept inside the US.
Some vendors offer guarantees that they can lock your data down to a particular country, but this is still a rare feature among SaaS providers. Until vendors can reliably guarantee the geographic location of your data, or until a third party can accurately track the migration of that data, companies with sensitive data will need to make extra preparations before jumping into SaaS.
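For illustration, here is a minimal sketch of what “locking data down to a region” can look like from the customer’s side, assuming an S3-compatible store accessed through the boto3 SDK; the bucket name and regions are hypothetical, and a real deployment would also need contractual guarantees about replication.

```python
# Minimal sketch: create a bucket in an explicit region and verify its location.
# Bucket name and regions are hypothetical.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Create the bucket in a specific region...
s3.create_bucket(
    Bucket="example-sensitive-records",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# ...and check where the provider says it actually lives.
resp = s3.get_bucket_location(Bucket="example-sensitive-records")
print(resp["LocationConstraint"])   # expect "us-west-2"
```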

PaaS Security Issues

SOA related security issues
The PaaS model is based on the Service-Oriented Architecture (SOA) model, so it inherits all the security issues that exist in the SOA domain, such as DoS attacks, man-in-the-middle attacks, XML-related attacks, replay attacks, dictionary attacks, injection attacks and input-validation attacks. Mutual authentication, authorization and the WS-Security standards are important for securing cloud-provided services. This security issue is a shared responsibility among cloud providers, service providers and consumers.
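As a concrete illustration of one of these issues, the sketch below shows a simple defence against replay attacks using an HMAC signature over a timestamp and a one-time nonce. The shared secret and header names are assumptions; a production service would normally rely on WS-Security or TLS-based mechanisms rather than a hand-rolled scheme.

```python
# Minimal sketch of replay-attack protection for a service request.
import hmac, hashlib, os, time

SHARED_KEY = b"consumer-provider-shared-secret"   # hypothetical shared secret
MAX_SKEW_SECONDS = 300
seen_nonces = set()   # in production, an expiring store such as a cache

def sign_request(body: bytes) -> dict:
    nonce = os.urandom(16).hex()
    timestamp = str(int(time.time()))
    mac = hmac.new(SHARED_KEY, nonce.encode() + timestamp.encode() + body,
                   hashlib.sha256).hexdigest()
    return {"nonce": nonce, "timestamp": timestamp, "signature": mac}

def verify_request(body: bytes, headers: dict) -> bool:
    expected = hmac.new(SHARED_KEY,
                        headers["nonce"].encode() + headers["timestamp"].encode() + body,
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, headers["signature"]):
        return False                                   # tampered or wrong key
    if abs(time.time() - int(headers["timestamp"])) > MAX_SKEW_SECONDS:
        return False                                   # too old: possible replay
    if headers["nonce"] in seen_nonces:
        return False                                   # nonce reused: replay
    seen_nonces.add(headers["nonce"])
    return True
```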

Vendor Lock In
Platform as a Service (PaaS) vendors tend to dictate the database, storage and application framework used, so what about those legacy applications? Enterprises will still require the skills and infrastructure to be able to run them.

Business Continuity Planning and Disaster Recovery with a PaaS Vendor
For example, the Windows Azure platform, Microsoft's cloud computing platform, suffered an outage over a weekend in March 2009. Had your enterprise been using the service, how would the outage have affected the organization's ability to conduct business? On the other hand, it would have been Microsoft's responsibility to fix it, not your IT team's.

Mobility
The mobility of an application deployed atop a PaaS is exceedingly limited, Lori says, first because "you are necessarily targeting a specific platform for applications: .NET, Java, PHP, Python, Ruby." The second limiting factor with PaaS "is the use of proprietary platform services such as those offered for data, queuing and e-mail." What happens, then, is that customers are locked into their cloud platform service provider.

IaaS Security Issues

1. Trust and Transparency
One problem associated with cloud computing is that corporations are required to give up transparency into their IT resources. The promise of the cloud is that you pay for services and do not have to worry about the implementation behind the scenes. This “black box” concept creates issues that corporations will have to overcome if cloud computing is going to continue on its present course of growth. An organization's resources and data are essentially at the mercy of the cloud provider's employees and policies. Organizations must adhere to regulations on certain types of data, and with cloud computing it is almost impossible to tell whether those regulations are being satisfied by the cloud service provider. Monitoring the internal operations of the cloud is very difficult and depends on the cloud service provider.

2.  Data Location
Certain types of businesses may need to follow additional regulations with regard to the location of their data. The term “location” can apply in two different ways: not only the logical location on the storage device, but also the geographic location where the storage resides. When data is stored in the cloud, you only know what your CSP has promised contractually; there may be very little information available about how they actually go about meeting those agreements. You might know the size limit on your account or even the fileshare name, but you probably won’t know who else is using that disk in the same storage device. Your data might very well be living next to your biggest competitor’s. This might make a company think twice about migrating to the cloud.
A study of operator-induced failures in large services found that configuration errors far outnumbered other types of errors (e.g., accidental deletion of data, moving users to the wrong fileserver, etc.). The study also found that many operator-induced failures could have been prevented had the operator better understood the system’s configuration, and in some cases how the system evolved to that configuration. This puts a dent in the concept of trust, and it reveals an availability-related weakness of cloud computing: errors made by data center personnel can have serious operational impacts on a customer organization if there are no appropriate safeguards in place.
In addition to operator errors, Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks present a serious threat to system availability. According to Roland Dobbins, solutions architect for network security specialist Arbor Networks, distributed denial of service attacks are one of the most under-rated and ill-guarded-against security threats to corporate IT, and in particular the biggest threat facing cloud computing.
Now as IT organizations consolidate data centers, the problems to be addressed are getting bigger as well. One of the first things that many IT organizations will discover is that once you consolidate data centers or when you transfer these services to large third parties providing cloud services, data centers become bigger security targets.
While one of the cloud’s biggest selling features is reliability, this raises the question of whether the replicated copies of data are given the same care with regard to security as the originals. DevCentral points out that knowing the location of a data center within a country matters as well, since certain locations are more prone to natural disasters. This, combined with international concerns, makes it even more imperative to know the physical locations of all copies of company data.

3.  Availability and Denial of Service Attacks
There are many definitions of system availability; one of them defines availability as “the degree to which a system, subsystem or equipment is in a specified operable and committable state at the start of a mission, when the mission is called for at an unknown, i.e., a random, time.” Simply put, availability is the proportion of time a system is in a functioning condition. In a non-time-critical environment, availability may not be a significant factor for an organization, but what if continuous system availability is critical to the existence of an organization?
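Expressed as a formula, that proportion is commonly written in terms of mean time between failures (MTBF) and mean time to repair (MTTR); the numbers below are only illustrative (roughly a 30-day month with 45 minutes of downtime):

```latex
A \;=\; \frac{\text{uptime}}{\text{uptime} + \text{downtime}}
  \;=\; \frac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}
  \;\approx\; \frac{720\ \text{h}}{720\ \text{h} + 0.75\ \text{h}} \;\approx\; 99.9\%
```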
Companies whose entire existence depends on online sales, and government organizations that depend heavily on IT services for communications and mission-critical applications (e.g., the Department of Defense), are typical examples of entities that can fully appreciate uninterrupted availability of IT services, and that can suffer serious financial and operational consequences from abrupt, long-term unavailability.

4. Insecure APIs
There are many organizations that build upon cloud interfaces to offer new or enhanced services to
Customers. From the cloud provider’s side, it is important to monitor the access to the interfaces as it leverages
And introduces a potential high risk threat to the cloud and its customers. Because the IaaS model contains several
 Components that are shared and sometimes outsourced, the entire system (the cloud) is exposed to great risk because at the end, one component can impact the other and consequently the whole cloud can be threatened.
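As a rough sketch of what “monitoring access to the interfaces” can mean in practice, the following illustrative Python decorator authenticates an API key and writes an audit record for every call; the key store, endpoint name and logging sink are hypothetical, not any provider's actual gateway.

```python
# Minimal sketch: authenticate an API key and audit every call to an endpoint.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("api.audit")

VALID_API_KEYS = {"k-123": "partner-service-a"}   # illustrative key store

def monitored(endpoint_name):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(api_key, *args, **kwargs):
            caller = VALID_API_KEYS.get(api_key)
            audit_log.info("endpoint=%s caller=%s at=%d",
                           endpoint_name, caller or "UNKNOWN", int(time.time()))
            if caller is None:
                raise PermissionError("invalid API key")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@monitored("list_instances")
def list_instances():
    return ["vm-1", "vm-2"]   # placeholder response

# list_instances("k-123") -> ["vm-1", "vm-2"]; list_instances("bad") raises PermissionError
```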

Tiered Security Issues

Client
The biggest issue, in my opinion, with cloud computing is what would happen if we lose the service for any reason. I believe it is important for any company making use of such a service to have an effective disaster recovery plan. Granted, this event is not very likely; however, one has to have a contingency plan nonetheless. Ask yourself: what would it mean if your business lost the cloud service? Could the business carry on? What would the cost per day of downtime be? And most importantly, if the service is gone for good, how long would it take to bring the system back up to an operational level?
It is very important that, should the service close down or be temporarily disabled, the business has an effective disaster recovery plan that can bring it back up. While this can sound complicated and expensive to carry out, it doesn’t necessarily have to be that way.
When storing data in the cloud, one has to be aware that data travels through a number of network points before reaching its destination, and at any one of those points (as well as the links between them) it is possible to eavesdrop on the data passing through. It is very important to protect this data against prying eyes, and this can readily be achieved if the link between your company and the cloud is encrypted.
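A minimal sketch of such an encrypted link, using only the TLS support in Python's standard library; the host name below is hypothetical.

```python
# Minimal sketch: open a certificate-verified TLS connection to a cloud endpoint.
import socket
import ssl

HOST = "storage.example-cloud.com"   # hypothetical cloud endpoint

context = ssl.create_default_context()   # verifies the server certificate

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated protocol:", tls.version())   # e.g. TLSv1.2 or TLSv1.3
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls.recv(200))
```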
Despite all the buzz the cloud is generating, the nasty issue that nimbus advocates are glossing over is what happens when a service provider goes down. It’s happened with Google. It’s happened with NetSuite. And you can be certain that it will happen again. Those failures by service providers may be small potatoes, however, compared to the really frightening prospect of an Internet failure. In that scenario, the service providers’ systems would be humming away, but it won’t do their clients any good because there will be no Internet to access those systems. Any enterprise that is thinking seriously about the cloud should make sure it has a Plan B that will keep it up and running should its cloud disappear behind the horizon.

Server/Mid-Tier
Cloud services are attractive enough to draw more and more users, yet, like any other web service, they are prone to cyber attacks.
In cloud computing, anybody can register and place their data in the cloud for a small fee. Cyber criminals often plant malicious code in their own hosting accounts and launch attacks on other accounts hosted on the same server.
Such attacks can only be avoided by repeated checks for loopholes and flaws in the security. Although each user's cloud data is placed in a separate compartment, black-hats still find their way into other accounts. With techniques such as CAPTCHA-solving farms and automated brute-force attacks, infiltrating user accounts has become very easy. Since a cloud server, much like your laptop, can contain all of your private and business details, losing such vital material is enough to take your sleep away. Most low-cost cloud services also have a weak registration process: one can start using cloud hosting just by providing a name, an e-mail address and payment, so there is no reliable way to trace wrongdoers.
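One common countermeasure to automated brute-force attacks is to throttle or lock an account after repeated failed logins. The sketch below is illustrative only, with arbitrary thresholds and an in-memory store standing in for whatever the provider actually uses.

```python
# Minimal sketch: lock out further login attempts after repeated failures.
import time
from collections import defaultdict

MAX_FAILURES = 5            # illustrative threshold
LOCKOUT_SECONDS = 900       # window during which failures are counted
failures = defaultdict(list)   # account -> timestamps of recent failed logins

def allow_login_attempt(account: str) -> bool:
    """Return True if the account is not currently locked out."""
    now = time.time()
    recent = [t for t in failures[account] if now - t < LOCKOUT_SECONDS]
    failures[account] = recent
    return len(recent) < MAX_FAILURES

def record_failure(account: str) -> None:
    failures[account].append(time.time())
```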
Apart from this, cloud servers are limited in number and often insufficient to meet increasing requirements. Due to overflowing demand, many cloud providers deliver services with less concern over persistent flaws and vulnerabilities. This can turn into a big security issue, making the system more vulnerable to cyber attacks.

Database

Data Loss or Destruction: Infrastructure Admin Deletes Data
A user with administrative privileges to the data storage infrastructure may maliciously or accidentally destroy the encrypted cardholder data.

One or All Data Centers Are Temporarily or Permanently Unavailable
A natural disaster, power failure, or other situation could render one or more data centers unavailable.

"Hacker" Destroys Data
A hacker could exploit a vulnerability within a system or network device that allows access to or control of data (remember, we are intentionally ignoring application-level issues in this particular article). The attacker may choose to delete, destroy, or render that data inaccessible.

Infrastructure Admin Accesses Data
A user with administrative privileges to the data storage infrastructure may attempt to access cardholder data for the purpose of financial gain.
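One mitigation is to encrypt records on the customer's side before they ever reach the provider's storage, so that an infrastructure administrator only sees ciphertext. Here is a minimal sketch, assuming the Python 'cryptography' package and a key held outside the provider's reach (e.g. in an on-premises key manager).

```python
# Minimal sketch: client-side encryption before data is sent to cloud storage.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key out of the cloud provider's hands
cipher = Fernet(key)

record = b"4111 1111 1111 1111"    # illustrative cardholder data
stored_blob = cipher.encrypt(record)       # this ciphertext is what the provider stores
assert cipher.decrypt(stored_blob) == record
```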

Miscellaneous: Data Stored in a Hostile Country
It is important for customers to ensure sensitive data or intellectual property is properly protected. Some countries do not have adequate laws or security standards in place to protect sensitive data or to restrict seizure of data by the government.

Availability Issues
When data or services are unavailable, companies may lose revenue during the downtime and may lose customers concerned about the reliability of their data or applications.


Other issues

Bandwidth and latency issues
Bandwidth and latency issues arise from the need to move data in and out of the cloud. We assume (for now) that the Cloud is located outside any campus (i.e. either "Public Clouds" or a "Private Cloud" located at a shared data center/colocation facility). We assume that researchers would move data in/out of the cloud from/to a campus-resident storage facility, but note that there will likely be cases where this is not true: where researchers will want to transfer data from storage outside the campus directly to the cloud. There are additional bandwidth and latency issues that arise from the possibility that applications may be "pulled apart" and run on different clouds.
The EECS paper reported average bandwidths of 5 to 18 Mb/s writing to Amazon's S3 cloud. Based on those results, they postulated a "best case" scenario in which a researcher writes 10 TB of data to S3 at 20 Mb/s on average: it would take approximately 45 days to complete the data transfer. We consider this scenario unacceptable and identify the issues that need to be addressed in order to resolve this problem.
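The arithmetic behind that figure is easy to reproduce; the following back-of-the-envelope check is illustrative only.

```python
# Minimal sketch: transfer time for 10 TB at an average of 20 Mb/s.
data_bits = 10 * 10**12 * 8          # 10 TB expressed in bits
rate_bps = 20 * 10**6                # 20 Mb/s average write bandwidth
seconds = data_bits / rate_bps
print(seconds / 86400)               # ~46 days; 1 TB at the same rate takes ~4.6 days
```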

Network capacity/capabilities
Campus
At present, most UC campuses have multiple Gigabit Ethernet (GE) connections to the CalREN networks, CalREN-HPR and CalREN-DC. A few campuses have upgraded one (or more) connections to CalREN-HPR to 10Gigabit Ethernet (10GE); no campus (to our knowledge) has plans to upgrade its CalREN-DC connections to 10GE. At the larger campuses, these GE connections are largely consumed with normal day-to-day campus traffic demands; a significant increase in bandwidth due to cloud computing would require funding additional connectivity between the campus and the CalREN backbone(s).
Many campuses operate border firewalls or packet filters that restrict various network protocols (SNMP, SMB, NFS, etc.).
It may not be possible to utilize or manage some cloud computing services, especially storage-based services, if these restrictions are not relaxed or removed. In the case of computing services that demand very high levels of performance, these devices might introduce performance penalties that compromise the cloud computing service.

CENIC
The CENIC backbone is believed to have sufficient capacity to support the initial stages of a cloud computing roll-out, but would likely require augmentation to support large scale use of cloud computing (either internal or external). If increasing capacity can be done within the existing footprint of the [CalREN] networks (e.g. by adding line cards and transponders), this work could proceed relatively quickly given available funding. However, if increasing capacity to meet the needs of cloud computing were to require additional rack space and/or power at the CENIC POPs, significant delays could occur as space and power are not readily available at all colocation facilities.
UC campuses have connections to both the CalREN-DC and CalREN-HPR networks. In general, higher capacity and performance are available via the [CalREN]-HPR network, and it is assumed that most cloud computing connectivity should be provided via that network. In the case of internal clouds, this is relatively easy to ensure. However, in the case of externally provided clouds, it is most likely that the external provider will be connected to the CalREN-DC network and not the [CalREN]-HPR network. Heavy utilization of external cloud computing services might require significant re-engineering to match traffic patterns to the available network topology.

Exchange/peering points
If cloud computing services are provided by institutions not directly connected to [CalREN] networks, the traffic to and from those clouds will need to pass in and out of the [CalREN] networks through established peering and transit points. CENIC maintains a large number of peering relationships; however, these connections are sized to cover existing traffic loads. Assuming large-scale use of external cloud facilities, the bandwidth provided at these peering facilities will need to be increased to avoid negatively impacting existing use of the network. Additionally, the geographic/topological placement of these facilities might need to be reviewed to address latency or other network-performance-related issues. Both increasing bandwidth and changing peering locations involve often significant costs to CENIC, and by extension the CENIC membership. Settlement-free peering (via the [CalREN]-DC network; see above) is in place for connectivity to the largest cloud computing providers (Google, Amazon, Microsoft); the use of providers for which settlement-free peering is not available will require the payment of ISP bandwidth fees.

L2/L3 issues
The preceding sections address capacity on the campus and CENIC Layer 3 (routed IP) networks. Applications requiring very high bandwidth (i.e. approaching 10Gbps) or very low latency/jitter might be better served by a Layer 2 connection: either a dedicated wave on an optical network with Ethernet presented at both ends, or a VLAN configured on a switched Ethernet network running on top of an optical network. CENIC offers both types of connection to campuses connected to the HPR network. Thus, an L2 connection between any UC campus and a cloud that is directly connected to CENIC's optical network will be relatively easy and inexpensive.
Additionally, L2 connection capabilities are present in both national R&E networks, the Internet2 Network and National Lambda Rail (NLR). An L2 connection between any UC campus and a cloud connected to either the Internet2 Network or NLR will be relatively easy to set up, but will involve additional costs that may be significant.
L2 connections to a cloud not directly connected to the CENIC, Internet2 or NLR optical networks will likely be challenging and very expensive.
However, campus security concerns arising from these kinds of connections remain largely unresolved, since in most cases they will bypass existing campus firewalls and intrusion detection/prevention systems. Addressing security concerns on such "bypass networks" will require additional campus resources, both human and machine. These concerns already exist in the larger context of research computing, of course; they are not confined to cloud computing.
It should be noted that we are unaware of any existing cloud provider having been asked whether it would support an L2 connection. The large public clouds have clearly made large investments in their L3 connectivity and might be understandably reluctant to consider alternatives. That seems likely to translate to a requirement that the organization requesting an L2 connection pay the entire cost of building and operating the connection. Even so, use of an L2 connection over the CENIC-operated portion(s) of the path, and possibly over any NLR or Internet2 portions, could provide sufficiently improved performance to make it worthwhile.
