Wednesday, May 30, 2012

Project Managers Need to Think About Cloud Computing

Whether for software development or infrastructure, the way cloud computing projects are delivered is different enough to require a slightly different approach to project management. So while the same essentials exist on cloud computing projects as on any other, the sequence in which they need to be done is shuffled, the focus on some areas is higher than usual, and additional risks present themselves. The immediate availability of computing resources solves some problems, such as the dependencies on procurement and provisioning of assets, but replaces them with far more legal and contractual issues that need to be managed early on, as some sticking points may get in the way.

Dealing with New Risks

With any new set of technologies and concepts, new risks materialize. The leading risks to cloud computing projects may be non-technical, and may result from a senior stakeholder reading a sensationalized story about data loss in the cloud and, fearing litigation or brand damage, wanting to shift the new cloud computing initiative on premise. Other risks include problems that result from vendor platform and tool immaturity, availability of skills, increased engineering cost, inability to satisfy certain requirements (such as latency), compliance issues and a whole host of other problems waiting to ambush the unsuspecting project manager.

Moving Activities Upstream

The on-demand availability of resources presents fascinating opportunities to move activities, which have traditionally been downstream activities, forward. Testing on a 'production platform', security, billing and operational handover, which should all be moved forward, reflect the areas of focus on cloud computing projects that are either glossed over or taken for granted on traditional projects. Security and regulatory concerns that would normally be absorbed into the incumbent datacenter's accreditation will have to be dealt with explicitly, both technically and in terms of managing the perceptions around them. The project manager will need to spend a lot of time allaying suspicions, proving the solution and generally providing assurance and answers where few exist.

Cloud Computing Projects Will Be Different

While vendors will go to great lengths to tell us how familiar cloud computing is compared to their existing offerings, things are different and, in many cases, significantly so. While project managers are equipped with the general skills to manage a cloud computing project, most will not have thought about cloud computing in enough detail to foresee what needs to be done on the project, and will manage it as they would any other project until it all falls apart. There is a need, and an opportunity, in the cloud computing world for project managers who at least have a conceptual understanding of the issues.

Monday, May 28, 2012

Advances in Cloud Computing In the Past Two Years

The dizzying speed at which technology can move often leaves the casual observer baffled. One day something is space-age, think gesture controlled games consoles and TVs, the next it ‘comes as standard’. Cloud computing is now such a well-established technology that when Apple, Microsoft and Amazon talk about “keeping it in the cloud”, most techno-fans know exactly what they are talking about.
While indeed well-established, the bounds of advancement in all aspects of cloud computing and cloud solutions means that the whole concept can feel brand new. Though some are generally comfortable entrusting their media library to the Apple iCloud, there are many who shudder at not having their data on hardware they can hold in their hands, or on servers that they own.
Until recently there was some good reason for this; at the inaugural International Conference on Cloud Computing, held in Lisbon, Portugal in 2010, one concern that was repeatedly raised was that there was no security standard for cloud computing. While this didn’t necessarily leave data at risk, it was incredibly difficult to claim that security was anywhere close to 100% tight.
As well as security advancements, the last two years have seen a push towards cleaner inter-operability, with the Open Cloud Consortium (OCC) taking that very concept as their main remit. To try and keep pace with the evolving technology, both Europe and the United States have sought reassurances that cloud operators adhere to existing laws surrounding personal data with the United States even going so far as to instruct the Federal Risk and Authorization Management Program (FedRAMP) to assess and provide authorisation to cloud providers.
Perhaps the most significant advancement in recent years has been the move towards ‘greener’ cloud solutions. Vastly increasing cloud usage has led to data centres becoming one of the largest electricity consumers in the developed world and so providers are keen to use methods such as ‘free cooling’ (where natural cooling is provided by wind or surroundings) and intelligent power allocation to cut down on costs but also cut down on carbon emissions.
There is little doubt, within the industry or without, that cloud computing provides a model for the way we will store data in the future, and it has been a long-held ambition for many, with one industry insider pointing out that “you may not have to manage your own storage. You may not store much before too long.” Who said that? Steve Jobs. When? 1996.

-by Telehouse

Current Market Trends and Future Opportunities

Since cloud computing seems to be on everyone's lips, it is very difficult to work out what the latest trends are and to evolve in that direction. Sometimes it seems that there is more confusion than clarification. Should you choose a private cloud or a hybrid? Should you choose IaaS, PaaS or SaaS?

In previous articles, I discussed the advantages and disadvantages of each cloud technology. Now let's look at the latest trends in cloud computing. It is expected that both large companies and SMEs will adopt cloud computing technologies at an exponential rate.

I am generalizing here what I believe to be the top 5 cloud computing trends:
  • IT departments will be forever changed. The IT infrastructure will be crucially transformed and new skills will be needed, pushing IT people to adjust to this trend. The need for IT support will be reduced, but people will need to understand how to integrate the newest technologies into their companies and manage the cloud vendors.
  • Cloud security will no longer be an issue. This is related directly to the first point, as IT professionals discover that a managed cloud can be more secure than a physical environment managed by your own IT staff, who are responsible for many other IT projects.
  • Custom cloud computing services: Cloud migrations span from physical environments to SaaS, PaaS and IaaS. This is a lot of ground to cover for an IT firm trying to be your cloud experts. Outsourced IT organizations will concentrate on automating very specific migrations and become the experts in those types of migrations. An example is outsourcing your Exchange environment: this is one of the most painful cloud migrations, and IT companies focus on just this type of migration, offering services and automated software to make sure the migration is smooth and painless.
  • Custom software development will shift towards the cloud. Legacy software applications will need to be refactored to run more efficiently in cloud environments. This will increase software development, and outsourcing will experience a boom.
  • Innovation – probably the most important one. Innovation will drive down cloud computing costs, increase security and help with migration from physical to cloud. As cloud computing innovation continues, it will be difficult to argue that companies should not move to the cloud.

Moreover, I also believe that an alignment of standards is necessary – so far there are organizations such as The Green Grid and the Cloud Security Alliance, but a comprehensive guide/entity to which most cloud providers adhere has yet to be created.

All in all, I believe that more businesses will get over the fear of embracing cloud computing as IT directors start to fully understand how their businesses could benefit from this new technology. I am expecting even wider cloud adoption at an even faster rate.

Wednesday, May 23, 2012

Linux Directory Structure


/bin
Contains the basic shell commands that may be used both by root and by other users. These commands include ls, mkdir, cp, mv, rm, and rmdir. /bin also contains Bash, the default shell.


/boot
Contains data required for booting, such as the boot loader, the kernel, and other data that is used before the kernel begins executing user mode programs.


/dev
Holds device files that represent hardware components.


/etc
Contains local configuration files that control the operation of programs like the X Window System. The /etc/init.d subdirectory contains scripts that are executed during the boot process.


/home
Holds the private data of every user who has an account on the system. The files located here can only be modified by their owner or by the system administrator.


/lib
Contains essential shared libraries needed to boot the system and to run the commands in the root file system. The Windows equivalent of shared libraries is the DLL file.


/media
Contains mount points for removable media, such as CD-ROMs, USB sticks, and digital cameras (if they use USB). /media generally holds any type of drive except the hard drive of your system. As soon as your removable medium has been inserted or connected to the system and has been mounted, you can access it from here.


/mnt
This directory provides a mount point for a temporarily mounted file system. root may mount file systems here.


/opt
Reserved for the installation of additional software. Optional software and larger add-on program packages can be found there.


/root
Home directory for the root user. Personal data of root is located here.


/sbin
As the s indicates, this directory holds utilities for the superuser. /sbin contains binaries essential for booting, restoring, and recovering the system, in addition to the binaries in /bin.


/srv
Holds data for services provided by the system, such as FTP and HTTP.


/tmp
This directory is used by programs that require temporary storage of files.


/usr
/usr has nothing to do with users, but is the acronym for UNIX system resources. The data in /usr is static, read-only data that can be shared among various hosts compliant to the Filesystem Hierarchy Standard (FHS). This directory contains all application programs and establishes a secondary hierarchy in the file system. KDE4 and GNOME are also located here. /usr holds a number of subdirectories, such as /usr/bin, /usr/sbin, /usr/local, and /usr/share/doc.


/usr/bin
Contains generally accessible programs.


/usr/sbin
Contains programs reserved for the system administrator, such as repair functions.


/usr/local
In this directory, the system administrator can install local, distribution-independent extensions.


/usr/share/doc
Holds various documentation files and the release notes for your system. In the manual subdirectory, find an online version of this manual. If more than one language is installed, this directory may contain versions of the manuals for different languages.

Under packages, find the documentation included in the software packages installed on your system. For every package, a subdirectory /usr/share/doc/packages/packagename is created that often holds README files for the package and sometimes examples, configuration files, or additional scripts.


/var
Whereas /usr holds static, read-only data, /var is for data which is written during system operation and is thus variable, such as log files or spooling data. For example, the system log is in /var/log/messages (only accessible for root).
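The layout above can be sanity-checked from a shell. This is a minimal sketch; which directories actually exist will vary slightly between distributions:

```shell
#!/bin/sh
# Walk the standard FHS directories and report which ones exist
# on the running system.
for d in /bin /boot /dev /etc /home /lib /media /mnt /opt \
         /root /sbin /srv /tmp /usr /var; do
    if [ -d "$d" ]; then
        echo "$d: present"
    else
        echo "$d: missing"
    fi
done
```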

Configure High Availability Cluster (Heartbeat) On Linux

Refer the following steps :

Number of nodes : Two (node1 and node2)
Clustering software : Heart Beat
Service to be high available : http
IP : one address to be configured on node1, one to be configured on node2, and a shared virtual IP for the Heartbeat-enabled services (http etc.)

1. Set up the above IP addresses on the two servers and make sure that uname -n returns node1 or node2 respectively.
2. yum install heartbeat
Make sure that the heartbeat packages were installed successfully.


3. Configure heartbeat on the two-node cluster. We will deal with three files. These are:

authkeys
ha.cf
haresources
Now we can move on to our configuration. But there is one more thing to do first: copy the sample files to the /etc/ha.d directory. In our case we copy them as given below:

cp /usr/share/doc/heartbeat-2.1.2/authkeys /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/ha.cf /etc/ha.d/
cp /usr/share/doc/heartbeat-2.1.2/haresources /etc/ha.d/

4. Now let's start configuring heartbeat. First we will deal with the authkeys file, we will use authentication method 2 (sha1). For this we will make changes in the authkeys file as below.

vi /etc/ha.d/authkeys

Then add the following lines:

auth 2
2 sha1 test-ha

Change the permission of the authkeys file:

chmod 600 /etc/ha.d/authkeys

5. Moving on to our second file, ha.cf, which is the most important. Edit the file with vi:

vi /etc/ha.d/ha.cf

Add the following lines in the file:

logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
initdead 120
bcast eth0
udpport 694
auto_failback on
node node1
node node2

Note: node1 and node2 are the output generated by

uname -n
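Before starting the cluster, the node entries can be checked against the local hostname with a small helper. This is a hypothetical aid, not part of heartbeat; the default path assumes the main configuration file is /etc/ha.d/ha.cf as set up above:

```shell
#!/bin/sh
# Check that this host's name (uname -n) appears as a "node" entry
# in the heartbeat configuration; heartbeat will not start otherwise.
check_node_name() {
    conf=${1:-/etc/ha.d/ha.cf}
    name=$(uname -n)
    if grep -q "^node[[:space:]][[:space:]]*$name\$" "$conf"; then
        echo "ok: $name listed in $conf"
    else
        echo "error: $name not listed in $conf"
        return 1
    fi
}
```

Running check_node_name on each node before starting heartbeat catches the most common misconfiguration: a hostname that does not match the node entries.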

6. The final piece of work in our configuration is to edit the haresources file. This file contains information about the resources which we want to make highly available. In our case we want the webserver (httpd) to be highly available:

On node 1 :-

vi /etc/ha.d/haresources
Add the following line:
node1 httpd

On node 2 :-

vi /etc/ha.d/haresources
Add the following line:
node2 httpd

7. Copy the /etc/ha.d/ directory from node1 to node2 :

scp -r /etc/ha.d/ root@node2:/etc/

8. As we want httpd to be highly available, let's configure httpd:

vi /etc/httpd/conf/httpd.conf

Add a Listen directive for the cluster's virtual IP to httpd.conf, for example:

Listen <virtual-ip>:80
9. Copy the /etc/httpd/conf/httpd.conf file to node2:

scp /etc/httpd/conf/httpd.conf root@node2:/etc/httpd/conf/

10. Create the file index.html on both nodes (node1 & node2):

On node1:

echo "node1 apache test server" > /var/www/html/index.html

On node2:

echo "node2 apache test server" > /var/www/html/index.html

11. Now start heartbeat on the primary node1 and slave node2:

/etc/init.d/heartbeat start

12. Open a web browser and enter the cluster's virtual IP as the URL.

It will show node1 apache test server.

13. Now stop the heartbeat daemon on node1:

/etc/init.d/heartbeat stop

In your browser type in the URL and press enter.

It will show node2 apache test server.

14. We don't need to create a virtual network interface and assign the virtual IP address to it manually. Heartbeat will do this for us, and start the service (httpd) itself.
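The failover behaviour in steps 12 and 13 can be checked with a small helper script. This is a sketch, not part of heartbeat itself; the fetch command is passed in as an argument because the cluster's virtual IP is site-specific:

```shell
#!/bin/sh
# Report which node is serving the cluster's shared address by
# inspecting the test page created in step 10. Example use:
#   serving_node "curl -s http://<virtual-ip>/"
serving_node() {
    page=$($1)
    case "$page" in
        *node1*) echo "node1" ;;
        *node2*) echo "node2" ;;
        *)       echo "unknown" ;;
    esac
}
```

Run it before and after stopping heartbeat on node1; the reported node should flip from node1 to node2.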

Monday, May 21, 2012

Cloud Computing Operating Systems

Cloud is not just a natural formation in the sky. It is also the most hyped term in the IT industry right now. Everyone is talking about cloud, and vendors are all cloudifying their products and service offerings. This is also happening in the area of operating systems: a cloud OS is simply a simplified operating system that runs just a web browser (at least that is one definition of it), providing access to a variety of web-based applications that allow the user to perform many simple tasks without booting a full-scale operating system. Because of its simplicity, a cloud OS can boot in just a few seconds. The operating system is designed for netbooks, mobile Internet devices, and PCs that are mainly used to browse the Internet. From a cloud OS the user can quickly boot into the main OS, because it is possible to continue booting the main OS in the background while using the cloud OS (at least that is the goal).

Combining a browser with a basic operating system also allows the use of cloud computing, in which applications and data “live and run” on the Internet instead of on the hard drive. This is also referred to as platform as a service (PaaS) and Software as a service (SaaS). A cloud OS can be installed and used together with other operating systems, or can act as a standalone operating system. When used as a standalone operating system, hardware requirements can be very low.
This technology allows users to access their own virtual desktop from anywhere in the world, without even needing network access to a particular remote PC. In effect, you are using the Internet itself as your desktop. Wikipedia states that: “Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid.”

Glide OS 4.0 is a comprehensive ad-free cloud computing solution. Glide is a free suite of rights-based productivity and collaboration applications with 30 GB of storage. Users who want extra storage or would like to add extra users can upgrade to Glide Premium, now with 250 GB for $50.00 a year, or 20 cents per GB per year. With a Glide Premium account you can set up and administer up to 25 users. The Glide OS provides automatic file and application compatibility across devices and operating systems. With Glide OS you also get the Glide Sync App, which helps you to synchronize your home and work files.


amoebaOS is an advanced Online Operating System. Log in to your free account and join a cloud computing revolution that begins with great apps like Shutterborg, Exstream and Surf.


myGOYA is a free online operating system. Your own personal desktop can be accessed from any Internet PC in the world and includes e-mail, chat, filesharing, calendar and an instant messenger. Manage your contacts from anywhere in the world.


Kohive is an online desktop where you can easily collaborate with others. It’s perfect for freelancers, small businesses, students and groups with similar interests.


ZimdeskOS is your computer on the web – the entire functionality of a PC – online. There is nothing to install. A web browser and internet connection are all you need to access your desktop, files and favourite applications. You can access your data anytime from anywhere, from any PC.

Ghost Cloud Computing is a leading company in the cloud computing industry, specializing in cloud computing for the end user. Ghost offers individuals and businesses file storage and apps in the cloud to enable secure personal computing from any device. Ghost is distributed directly from its web site and through channels. The Ghost web interface is very simple and easy to use, making it quick and easy to manage your files and folders. You can upload data of any type to your cloud storage from any device, and view and edit any of your files in any browser.
You can instantly share files and documents with any friend by sending them a link. Wherever you are, you can edit documents and pictures directly online within the Ghost portal. It also offers full mobile support: you can browse your files and folders from your mobile device, or mount your storage as a Windows drive, just like a USB flash drive. You can move files between your local hard disk and your cloud files.

Joli OS

“Joli OS is a free and easy way to turn any computer up to 10 years old into a cool new cloud device. Get on the Web and instantly connect to all your Web apps, files and services using the computer you already own. You may never need to buy a new computer again. It’s easy. Just download Joli OS. It installs in just 10 minutes.”

Cloudo is a free cloud operating system that lives on the Internet, right in your web browser. This means that you can reach your documents, photos, music and all other files no matter where you are, from any computer or mobile phone. It features an open, powerful, stable and versatile development environment. With the click of a mouse button you can get started with creating applications for yourself, a group of people or even everyone. And if you’re good, you can make money out of this as well. You can easily share a set of files, images or set up a joint account with friends and colleagues.


The CorneliOS Web OS is an easy-to-use, multi-user and cross-browser “Web Desktop Environment”, “Web Operating System” or “Web Office” and comes with a set of cool applications.

Lucid comes with lots of applications. You can browse photos, listen to music, and edit documents. It also comes with an RSS feed reader, some games, a calculator, and a bash-like terminal application. You can install additional third-party applications, which allow you to do even more!


eyeOS is one of the most widely used web OSes. It is released under the AGPLv3 license and only needs Apache + PHP5 + MySQL to run. With eyeOS you can build your private cloud desktop. Using eyeOS Web Runner you can open your eyeOS files from your browser with your local apps and save them automatically to your cloud. In eyeOS 2.0 you can work collaboratively with other users simultaneously in the same document. It is also a safer approach to cloud computing, because you can host it in your own company or organization: you get privacy and cloud computing at its best.

With Startforce, you can run Windows apps such as MS Office, Adobe Acrobat and QuickBooks. You can also stitch in web apps such as Google or your company's intranet web apps.
I hope you like this post.

Friday, May 18, 2012

Cloud ROI

With many pilot cloud projects gathering steam, organizations are evaluating transitioning their IT systems to a cloud-based architecture. However, such a full-scale move must take into account security risks, lock-in risks, and cost-benefit analysis over the lifetime of the investment. An InformationWeek Analytics report outlined a comprehensive survey of 393 individuals within various companies, 28% of whom had more than 10,000 employees. Amongst the many findings within the report, the most interesting ones were:
  • 34% of respondents involved in the cloud used it for SaaS (applications delivered via the cloud), 21% for IaaS (storage or virtual servers delivered via the cloud), 16% for PaaS (web platform delivered via the cloud), and 16% for DaaS (Data-as-a-Service for BI and other data lookup services delivered via the cloud)
  • 29% were not using the cloud at all
  • More than one-third claimed to build in 31% or more excess server and storage capacity for non-cloud computing systems
  • 73% cited "Integration with Enterprise applications" and 69% cited "Cost of Hardware and Software" as factors when choosing a business technology
  • Almost 92% indicated some likelihood of carrying out an extensive ROI analysis over the expected lifespan of a cloud computing project
  • 46% said that their ROI calculation would span 3-5 years
  • 45% stated that "elasticity" is frequently or often required

There was also a feeling amongst respondents that cloud computing works for commodity applications but that complex integration requirements make costs skyrocket. The major sources of cost savings touted by cloud proponents involved three areas: efficiencies as a result of economies of scale, use of commodity gear and elasticity.
This last area of elasticity is worth exploring further, especially in light of the number of respondents claiming to require excess capacity for their non-cloud computing applications, and the assertion that complex integration requirements increase costs in the cloud. Elasticity refers to the ability to scale storage and compute resources up or down on the fly. But the reality of elasticity "on demand" is that most major software vendors don't provide the ability to add CPUs without additional cost, and writing applications that scale up appropriately is difficult. Thus, given the above data about the necessity of elasticity and the large percentage of companies that conduct detailed cloud ROI analysis, it is evident that these two factors are correlated.
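To make the elasticity discussion concrete, here is a toy sizing rule in shell. The capacity figure is an illustrative assumption that would come from your own load testing, and real autoscalers add hysteresis so the pool does not flap:

```shell
#!/bin/sh
# Toy elastic-sizing rule: number of servers needed for a given load,
# rounded up, never below one.
servers_needed() {
    load=$1      # current requests per second
    capacity=$2  # requests per second one server can handle
    n=$(( (load + capacity - 1) / capacity ))
    [ "$n" -lt 1 ] && n=1
    echo "$n"
}

servers_needed 2500 400   # a demand spike
servers_needed 300 400    # quiet period: scale back to one server
```

With a per-server capacity of 400 req/s, a spike to 2500 req/s needs 7 servers, and a quiet period of 300 req/s needs only 1; holding 7 servers around the clock is exactly the 31%+ over-provisioning respondents reported for non-cloud systems.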
Increasing the ROI of a Cloud Deployment

In order for CIOs to see more of an ROI from deploying applications in the cloud, several things must happen:
  • Data center automation software must reach a level of sophistication where it can automatically coordinate tasks between on-premise and cloud applications to optimize elasticity
  • Software vendors must allow for special pricing for cloud providers so that these savings can be passed onto customers - a way to allow this is to ensure a multi-tenant architecture consisting of customers that use the same on-premise software as is used in the cloud-based edition by the respective provider.
  • Ensure that the web platform used for PaaS purposes by the customer is compatible with the SaaS applications that they subscribe to, in order to enable any custom widgets that may need to be written
  • Massive improvements in the "converged fabric" architecture that brings together servers, storage, and networking, so that pools of additional capacity are easily available where elasticity is needed.

Wednesday, May 16, 2012

Cloud disaster Recovery

Each repository and topology is packaged into a virtual Cloud Container, similar to the idea of shipping containers filled with server racks.

The process involves five steps to reach Disaster Readiness, and another three steps to have a working Disaster Recovery facility operational. The steps are:

Step 1: Choose Your Cloud
Select a public cloud(s) to meet your scaling, geographic, technology, and vendor diversification needs.

Step 2: Build a Secure Environment You Control
Create a controllable and secure virtual overlay network on top of the cloud provider's physical network.

Step 3: Test Scaling and Failure Modes
Even before any IP or data is moved to the cloud, you have reduced your application's recovery time objective (RTO).

Step 4: Migrate Your Application Repository
Deploy copies of the digital assets needed for recovery.

Step 5: Commence Data Synchronization
Implement the simplest workable method for synchronizing production data to the repository.

Disaster Readiness Accomplished!
You have narrowed your RTO by staging what you need to bring up the application in the cloud provider's facilities. And you have established a process for moving data to the cloud facility, which means your RPO is a much better known and bounded down-time risk. Hold here until disaster strikes, or continue in order to further tighten the recovery window:

Step 6: Define & Deploy the Application Topology
Decide on an aspirational topology and deploy a scaled-down version of your production systems.

Step 7: Process Live Data
Run select transactions through the Disaster Recovery facility as an extension of production.

Step 8: Conduct Periodic Disaster Drills
At this step the Disaster Recovery facility is fully operational, and your attention can turn to preparedness.

Tuesday, May 15, 2012

Faces Of Cloud Computing

Cloud computing has two well-known faces in the eyes of its users and of IT professionals. In many instances we experience both positive and negative things as we engage in any activity, and if we think about it, everything is really a mixture of the good and the bad that life can offer us. So we must be open to the possibility that every discovery has its drawbacks.

IT professionals are no longer shocked by the various drawbacks of cloud computing, which are a hot topic on blogs and websites today. These negative aspects of cloud computing were predicted and anticipated by a number of IT specialists. Yet even though cloud computing has disadvantages, that has never stopped the drastic increase of cloud supporters in every part of the world. Aside from the increasing number of cloud users, there is also an increasing number of cloud providers. When you browse the internet you can see many cloud vendors trying to persuade prospective clients to subscribe to their service. Many vendors have their own strategies to hook clients and get them interested in joining their network, and some providers will use tricks to sign up any possible cloud user by any means. You therefore need to be vigilant and well informed about the cloud computing service provider of your choice.

As we all know, we should not decide in the blink of an eye or at the snap of our fingers. We need to follow a step-by-step process of investigating and understanding things that may have a huge effect on us; otherwise regrets come in the end, so be wise and sure about your choice. You can start by gathering basic information about cloud computing from different sites on the web, then go deeper by learning the two faces of cloud computing – the advantages and the disadvantages. After such an investigation, weigh your options and make your choice. Do this at the start, because once you subscribe to a cloud service provider some of your choices will be limited, as you will have to follow the set of rules laid down by that provider.

Monday, May 14, 2012

History of Cloud Computing

Extensive research led to the emergence of cloud computing, and many of us may ask when and where it really started. Believe it or not, the fundamental idea and concept of cloud computing can be traced back to the 1960s, when John McCarthy suggested that computation may someday be organized as a public utility. Almost all the modern features of cloud computing, the comparison to the electricity industry and the use of private, public, community, and government forms, were systematically investigated in the book The Challenge of the Computer Utility by Douglas Parkhill in 1966.
The actual word 'cloud' came into use in the 1990s, when one telecommunications company began offering Virtual Private Network (VPN) services with a quality of service comparable to dedicated point-to-point data circuits, but at a much lower cost. By shifting traffic to balance utilization as they saw fit, they were able to use their overall network bandwidth more efficiently. The cloud symbol was used to mark the demarcation point between what the cloud provider was responsible for and what the cloud user was responsible for. Cloud computing extends this boundary to cover servers as well as the network infrastructure. It is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, and autonomic and utility computing. The fine details are abstracted away from potential cloud users, who no longer need expertise in, or direct control over, the underlying technology infrastructure.

Saturday, May 12, 2012

Common Terminologies used in Cloud Computing

With so much buzz around Cloud Computing and SaaS you must be wondering about the common terms revolving around this new paradigm shift of technology. There are multiple definitions and explanations provided over the internet at your disposal. Here are some of the most common terminologies with a simple explanation of each of them for your easy understanding.

API - Application Programming Interface; allows software applications to interact with other software. Data requested from another application is returned in a predefined format and according to specific rules.
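As a toy illustration of the "predefined format" idea, the sketch below fakes an API call in shell. The endpoint and field names are hypothetical, and a real client would fetch the data over HTTP (for example with curl):

```shell
#!/bin/sh
# A stand-in for an API call: request data for a user id and get it
# back in an agreed-upon JSON shape. A real client would use
# something like: curl -s "https://api.example.com/users/$id"
get_user() {
    id=$1
    printf '{"id": %s, "name": "user%s", "active": true}\n' "$id" "$id"
}

get_user 7
```

Whatever id the caller passes, the response always has the same fields in the same shape; that contract is what makes the interface programmable.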
ASP - Application Service Provider; typically associated with a hosted single tenant software solution wherein a business provides computer based services to customers over a network.
Cloud Computing - Cloud computing is Internet-based computing, whereby shared resources, software and information are provided to computers and other devices on demand, like the electricity grid. It describes a new consumption and delivery model for IT services based on the Internet, and it typically involves the provision of dynamically scalable and often virtualized resources as a service over the Internet. Customers do not own or maintain the physical infrastructure; instead, they avoid capital expenditure by renting usage from a third-party provider. They consume resources as a service and pay only for the resources that they use.
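The API entry above says that requested data comes back in a predefined format according to specific rules. A minimal sketch of what that means in practice, using a hypothetical JSON response (the endpoint, field names, and values are invented for illustration):

```python
import json

# Hypothetical response from another application's API: data is returned
# in a predefined format (JSON here) according to agreed rules.
response_body = '{"status": "ok", "data": {"region": "us-east-1", "vms": 3}}'

parsed = json.loads(response_body)   # decode the agreed format
print(parsed["data"]["vms"])         # the caller reads fields by the agreed schema
```

Because both sides agree on the format in advance, the consuming application can rely on fields like `data.vms` being present and machine-readable.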

Cloud operating system - A computer operating system that is specially designed to run in a provider’s datacenter and can be delivered to the user over the Internet or another network. Windows Azure is an example of a cloud operating system or “cloud layer” that runs on Windows Server 2008. The term is also sometimes used to refer to cloud-based client operating systems such as Google’s Chrome OS.
Freemium – A business model, in which a SaaS provider offers basic features of its software to users free of cost and charges a premium for supplemental or advanced features.
Hosted application - An Internet-based or Web-based application software program that runs on a remote server and can be accessed via an Internet-connected PC or thin client.
Hybrid cloud - A networking environment that includes multiple integrated internal and/or external Cloud providers.
IaaS - Infrastructure-as-a-Service refers to a combination of hosting, hardware, provisioning and basic services needed to run a SaaS or Cloud Application that is delivered on a pay-as-you-go basis. It is a virtualized environment delivered as a service over the Internet by the provider. The infrastructure can include servers, network equipment and software.
Mashup - Mashup is a web application that combines data or functionality from two or more external sources to create a new service.
Multi-tenancy - Multi-tenancy refers to software architecture where a single instance of software runs on a server, serving multiple client organizations (tenants).
PaaS - Platform-as-a-Service solutions are development platforms for which the development tool itself is hosted in the Cloud and is accessed through a browser. With PaaS, developers can build web applications without installing any tools and then they can deploy their application and services without any systems administration skills.
Pay as you go - A cost model for Cloud services that includes both subscription-based and consumption-based models, in contrast to traditional IT cost model that requires up-front capital expenditure for hardware and software.
Private Cloud - A private cloud is a proprietary network or a data center that supplies hosted services to a limited number of people.
Public Cloud - A public cloud sells services to anyone on the Internet. It is a cloud computing environment that is open for use to the general public. Currently, Amazon Web Services is the largest public cloud provider.
SaaS - Software-as-a-Service refers to multi-tenant software delivered over the internet; customers consume the product as a subscription service delivered on a pay-as-you-go basis. Applications don’t have to be purchased, installed or run on the customer’s computers.
Subscription-based pricing - A pricing model that lets customers pay a fee to use the service for a particular time period.
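The pay-as-you-go model above can be sketched with a small cost calculation. The hourly rate below is purely illustrative, not any real provider's price:

```python
# Sketch of consumption-based billing; the rate is an assumption for
# illustration, not a real provider's price list.
HOURLY_RATE = 0.10   # assumed dollars per server-hour

def monthly_cost(hours, servers, rate=HOURLY_RATE):
    """Pay only for the server-hours actually consumed."""
    return round(hours * servers * rate, 2)

# Three servers used 8 hours a day for 22 working days:
cost = monthly_cost(8 * 22, 3)
print(cost)   # 52.8 -- versus up-front capital expenditure on idle hardware
```

The contrast with the traditional model is that idle hours cost nothing; scaling the servers down to zero scales the bill down to zero.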

Vendor lock-in - Dependency on the particular cloud vendor and difficulty moving from one cloud vendor to another due to lack of standardized protocols, APIs, data structures (schema), and service models.

Virtualization - Virtualization means to create a virtual version of a device or resource, such as a server, storage device, network or even an operating system where the framework divides the resource into one or more execution environments.

Vertical Cloud - A cloud computing environment optimized for use in a particular vertical i.e., industry or application use case.

Service-Oriented Architecture (SOA) - A service-oriented architecture is essentially a collection of services that communicate with each other. The communication can involve simple data passing, or it can involve two or more services coordinating an activity.
The next time you talk to your cloud service provider, you will be able to understand the jargon they use and decide the best solution for yourself.

Recommended Cloud Computing Management Tools

Designed from the ground-up to provide next generation Cloud management, Abiquo is the most complete and advanced solution available on the market today. Abiquo provides class-leading features like virtual to virtual conversion through a platform that is easy to implement and operate, liberating your IT organization from the drudgery of managing thousands of virtual machines, without relinquishing control of the physical infrastructure.

BitNami Cloud is based on the Amazon Cloud and gives you access to a wide variety of server types that can be configured with custom storage. You can start with a basic server and then scale the server type and disk space up (or down) as your needs change.

mCloud™ On-Demand is an operational management platform that works with Amazon EC2 to provide an integrated cloud stack for deploying, managing and monitoring enterprise class Rails, Java or PHP applications with ease. mCloud On-Demand also lets you focus on building business value instead of managing infrastructure.

Scalr provides you with a high-uptime, fault-tolerant website: Scalr monitors all your servers for crashes and replaces any that fail. To ensure you never lose any data, Scalr backs up your data at regular intervals and uses Amazon EBS for database storage.

With CloudStack as the foundation for infrastructure clouds, data center operators can quickly and easily build cloud services within their existing infrastructure to offer on-demand, elastic cloud services.

CrowdDirector monitors and manages traffic across your mission-critical servers, services, and compute clouds to maximize availability and control of web and Internet services. CrowdDirector enables content providers to configure and manage disparate network resources – without having to build and support the tools to monitor and manage them. CrowdDirector is equivalent to having high-powered load balancers distributed across the Internet, set up to feed you real-time information about your site operations.

The RightScale Cloud Management Environment provides all you need to design, deploy, and manage your cloud deployments across multiple public or private clouds, giving you direct access to your server and storage resources in the cloud as if they were in your own data center.

rPath is a unique system automation platform that accelerates and improves the quality of IT service delivery by automating platform provisioning, managing application release processes, and providing a way to predictably deliver updates and patches up and down the stack.

Symplified is the first of the next generation of identity and access management (IAM) solutions delivered as a fully managed service. It is available either completely hosted or on-premise with an appliance. Let Symplified apply its extensive IAM expertise to deliver customized identity services.

Kaavo’s offerings solve the challenge of deploying and managing distributed applications and workloads in the clouds.  Kaavo is the first and only company to deliver a solution with a top down application focused approach to IT infrastructure management in public, private, and hybrid clouds.

Cloudera offers enterprises a powerful new data platform built on the popular Apache Hadoop open-source software package.

Monitis automates and makes easy monitoring of dynamic cloud resources. Monitis supports monitoring of most popular Cloud computing providers including Amazon EC, Rackspace, GoGrid, Softlayer, and more.

How much is 1 byte, 1 KB, 1 MB, 1 GB, 1 TB, 1 PB, 1 EB, 1 ZB, 1 YB?


The basic unit used in computer data storage is called a bit. Eight bits are equal to one byte.

Bit: a bit is a value of either 0 or 1.

Byte: 1 byte = 8 bits

Kilobyte (KB): 1 KB = 1,024 bytes = 8,192 bits

Megabyte (MB): 1 MB = 1,024 KB

Gigabyte (GB): 1 GB = 1,024 MB

Terabyte (TB): 1 TB = 1,024 GB

Petabyte (PB): 1 PB = 1,024 TB

Exabyte (EB): 1 EB = 1,024 PB

Zettabyte (ZB): 1 ZB = 1,024 EB

Yottabyte (YB): 1 YB = 1,024 ZB
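Since each unit is 1,024 times the previous one, a unit n steps above the byte holds 1024**n bytes. A short conversion sketch:

```python
# Each unit is 1,024 times the previous one, so a unit n steps above
# the byte holds 1024**n bytes.
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def to_bytes(value, unit):
    """Convert a value expressed in any unit above to bytes."""
    return value * 1024 ** UNITS.index(unit)

print(to_bytes(1, "KB"))   # 1024
print(to_bytes(1, "GB"))   # 1073741824
```

So one gigabyte is 1024 × 1024 × 1024 = 1,073,741,824 bytes, and each step up the list multiplies the count by another 1,024.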

Various Layers Of The OSI Model

The Open Systems Interconnection (OSI) Model was developed by the International Organization for Standardization (ISO) to describe the flow of information from one computer to another; it is therefore also called the ISO OSI Model. There are seven layers in the OSI Model, each performing its own distinct function.
Each layer of the OSI Model uses different protocols. A protocol is a set of rules that defines the procedures for transmitting data.
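Before walking through each layer in detail, the seven layers can be summarized in a small sketch. The example protocols listed are common associations drawn from the sections below, not an exhaustive mapping:

```python
# The seven OSI layers, numbered bottom (1) to top (7), with a protocol
# or technology commonly associated with each (examples, not exhaustive).
OSI_LAYERS = {
    1: ("Physical",     "Ethernet cabling, hubs, repeaters"),
    2: ("Data Link",    "PPP, IEEE 802.2 LLC"),
    3: ("Network",      "IP, CLNP"),
    4: ("Transport",    "TCP"),
    5: ("Session",      "remote procedure calls"),
    6: ("Presentation", "ASN.1 encoding"),
    7: ("Application",  "software the end user interacts with"),
}

for number in sorted(OSI_LAYERS):
    name, example = OSI_LAYERS[number]
    print(f"Layer {number}: {name} ({example})")
```

Data moving down the stack is wrapped by each layer in turn, and unwrapped in reverse order on the receiving side.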

Layer 1: Physical Layer

The physical layer defines electrical and physical specifications for devices. In particular, it defines the relationship between a device and a transmission medium, such as a copper or fiber optical cable. This includes the layout of pins, voltages, cable specifications, hubs, repeaters, network adapters, host bus adapters (HBA used in storage area networks) and more.

The major functions and services performed by the physical layer are:

• Establishment and termination of a connection to a communications medium.
• Participation in the process whereby the communication resources are effectively shared among multiple users. For example, contention resolution and flow control.
• Modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These are signals operating over the physical cabling (such as copper and optical fiber) or over a radio link.
• Parallel SCSI buses operate in this layer, although it must be remembered that the logical SCSI protocol is a transport layer protocol that runs over this bus. Various physical-layer Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data link layer. The same applies to other local-area networks, such as token ring, FDDI, ITU-T and IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.

Layer 2: Data Link Layer

The data link layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the physical layer. Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which included broadcast-capable multi-access media, was developed independently of the ISO work, in IEEE Project 802; the IEEE work assumed sublayering and management functions not required for WAN use. In modern practice, only error detection, not flow control using sliding windows, is present in data link protocols such as the Point-to-Point Protocol (PPP). On local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and its flow control and acknowledgment mechanisms are rarely used on other local area networks either. Sliding-window flow control and acknowledgment are instead used at the transport layer by protocols such as TCP, but are still used at the data link layer in niches where X.25 offers performance advantages.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding window protocol.

Both WAN and LAN services arrange bits from the physical layer into logical sequences called frames. Not all physical-layer bits necessarily go into frames, as some of these bits are purely intended for physical-layer functions. For example, every fifth bit of the FDDI bit stream is not used by the data link layer.

WAN protocol architecture

Connection-oriented WAN data link protocols, in addition to framing, detect and may correct errors. They are also capable of controlling the rate of transmission. A WAN data link layer might implement a sliding window flow control and acknowledgment mechanism to provide reliable delivery of frames; that is the case for Synchronous Data Link Control (SDLC) and HDLC, and derivatives of HDLC such as LAPB and LAPD.
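The sliding-window mechanism described above can be illustrated with a toy go-back-N sender. This is a simplified sketch of the general technique, not an implementation of SDLC or HDLC; the window size, loss rate, and frame names are invented for the example:

```python
import random

random.seed(42)  # deterministic losses for the example

def send_go_back_n(frames, window=3, loss_rate=0.3):
    """Toy go-back-N sender: transmit up to `window` frames past the last
    acknowledged one; on a lost frame, resend from that frame onward."""
    base, transmissions = 0, 0
    while base < len(frames):
        end = min(base + window, len(frames))
        for i in range(base, end):
            transmissions += 1
            if random.random() < loss_rate:   # frame (or its ack) was lost
                break                          # receiver discards what follows
            base = i + 1                       # cumulative acknowledgment
    return transmissions

# All five frames are eventually delivered; losses only cost retransmissions.
sent = send_go_back_n(["f%d" % i for i in range(5)])
print(sent >= 5)   # True: at least one transmission per frame
```

The point of the window is that up to `window` frames can be in flight before an acknowledgment arrives, which keeps the link busy; the acknowledgment mechanism is what makes delivery reliable despite losses.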

IEEE 802 LAN architecture

Practical, connectionless LANs began with the pre-IEEE Ethernet specification, which is the ancestor of IEEE 802.3. This layer manages the interaction of devices with a shared medium, which is the function of a media access control (MAC) sublayer. Above this MAC sublayer is the media-independent IEEE 802.2 Logical Link Control (LLC) sublayer, which deals with addressing and multiplexing on multi-access media.

While IEEE 802.3 is the dominant wired LAN protocol and IEEE 802.11 the wireless LAN protocol, obsolescent MAC layers include Token Ring and FDDI. The MAC sublayer detects but does not correct errors.

Layer 3: Network Layer

The network layer provides the functional and procedural means of transferring variable length data sequences from a source host on one network to a destination host on a different network, while maintaining the quality of service requested by the transport layer (in contrast to the data link layer which connects hosts within the same network). The network layer performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. This is a logical addressing scheme – values are chosen by the network engineer. The addressing scheme is not hierarchical.

The network layer may be divided into three sublayers:

Subnetwork access – considers protocols that deal with the interface to networks, such as X.25;

Subnetwork-dependent convergence – used when it is necessary to bring the level of a transit network up to the level of the networks on either side;

Subnetwork-independent convergence – handles transfer across multiple networks.

An example of this latter case is CLNP (the Connectionless-mode Network Protocol, ISO 8473). It manages the connectionless transfer of data one hop at a time: from end system to ingress router, router to router, and from egress router to destination end system. It is not responsible for reliable delivery to a next hop, but only for the detection of erroneous packets so they may be discarded. In this scheme, IPv4 and IPv6 would have to be classed with X.25 as subnetwork access protocols because they carry interface addresses rather than node addresses.

A number of layer-management protocols, a function defined in the Management Annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.

Layer 4: Transport Layer

The transport layer provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers. The transport layer controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control. Some protocols are state and connection oriented. This means that the transport layer can keep track of the segments and retransmit those that fail. The transport layer also provides the acknowledgement of the successful data transmission and sends the next data if no errors occurred.

OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is also known as TP0 and provides the least features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery, and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries.

Layer 5: Session Layer

The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls. Inter-process communication happens at this level (SIGHUP, SIGKILL, End Process, etc.).

Layer 6: Presentation Layer

The presentation layer establishes context between application-layer entities, in which the higher-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation service data units are encapsulated into session protocol data units, and passed down the stack.

This layer provides independence from data representation (e.g., encryption) by translating between application and network formats. The presentation layer transforms data into the form that the application accepts. This layer formats and encrypts data to be sent across a network. It is sometimes called the syntax layer.

The original presentation structure used the basic encoding rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML.
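The EBCDIC-to-ASCII conversion mentioned above can be demonstrated directly: Python's standard library ships an EBCDIC codec under the name "cp500", so the same text can be compared in both representations. This is a sketch of the translation idea, not of any OSI presentation protocol:

```python
# Presentation-layer translation sketch: the same text in two encodings.
# Python's standard library includes an EBCDIC codec ("cp500").
text = "HELLO"

ebcdic_bytes = text.encode("cp500")   # EBCDIC representation
ascii_bytes = text.encode("ascii")    # ASCII representation

print(ebcdic_bytes != ascii_bytes)            # True: the bytes differ
print(ebcdic_bytes.decode("cp500") == text)   # True: the meaning survives
```

The two byte sequences differ entirely, yet decoding each with its own rules recovers the same text; that mapping between representations is exactly the presentation layer's job.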

Layer 7: Application Layer

The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources exist for the requested communication. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.

Your Reviews/Queries Are Welcome