Wednesday, November 4, 2009

Have a safe flight in the Cloud of multi-tenancy..

Multi-tenancy at the Infra level means sharing the resources & computing power of a physical box. Typically the Cloud host is the landlord, providing basic amenities to all the tenants and helping them gracefully co-exist. All is good, till you have a bunch of fair tenants. Think of the one who keeps peeping through the walls at the others, with the neighbors' privacy at its mercy ..

As you see, the virtual host provides a layer of virtualization over the resources being shared by the VMs. We will try to figure out the risks in this paradigm..



Sunday, October 4, 2009

Mix 'N' Match ..




Let's run a small CRM for your employees, one that can be shared between the departments without each impacting the others' model.

TBD..

Friday, October 2, 2009

Shop As You Like ..


Do you believe in application shopping? Gone are the days when you negotiated with a vendor on time & money and bargained for a little more.. Here you go: a packaged solution, tailored & branded in minutes, only for you. And all this happens with a couple of mouse clicks & just a swipe of a card..
Nevertheless, if you need more scale, don't worry, just let us know. We will scale you before you even think strategy, because all of this is a solved problem for the industry; we just leverage it the way you want...

Tuesday, September 22, 2009

Bringing SAAS to Cloud for SMEs ..



Concepts

The backdrop is the cloud and the related promising technologies of the open source world that are venturing across the market. The system is to take care of grossly small & medium enterprises, addressing some of the key areas on their “wish lists” that even large enterprises are still striving for ..

1. Optimization of Computing resources across enterprises
2. Participate in “Go Green”, “Save Earth” kinds of activities by optimizing resource usage
3. Minimize the Infrastructure maintenance Costs
4. Provide true high availability of Software services , platforms & workbench

The system was divided into three major parts:

1. CWB (the Nucleus – cloud provider)
2. RPM Packager (the cook – the cloud deployer)
3. DB Backup & restore management , Distributed Storage System (the Cloud DATA)

Thursday, September 10, 2009

A Midas touch ..

I have approached Amazon to allow this kind of usage model, which will relieve data lock-ins as such.. And you can start enjoying the delta layer formation..

Thursday, August 27, 2009

Protocol..

National Institute of Standards and Technology, Information Technology Laboratory

Note 1: Cloud computing is still an evolving paradigm. Its definitions, use cases, underlying technologies, issues, risks, and benefits will be refined in a spirited debate by the public and private sectors. These definitions, attributes, and characteristics will evolve and change over time.

Note 2: The cloud computing industry represents a large ecosystem of many models, vendors, and market niches. This definition attempts to encompass all of the various cloud approaches.

Definition of Cloud Computing:

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models.

Essential Characteristics:

On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported providing transparency for both the provider and consumer of the utilized service.

Service Models:

Cloud Software as a Service (SaaS). The capability provided to the consumer is to use the provider’s applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

Cloud Platform as a Service (PaaS). The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.

Cloud Infrastructure as a Service (IaaS). The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).

Deployment Models:

Private cloud. The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise.

Community cloud. The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud. The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.

Hybrid cloud. The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).

Note: Cloud software takes full advantage of the cloud paradigm by being service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability.

Friday, August 21, 2009

How to get Xen instance IP from outside..

I prepared a script to get the IP of a Xen instance from outside the guest..

1) Copy detIP.sh to the Xen box

2) Usage: ./detIP.sh myVm 2> /dev/null


It should return the IP of the myVm instance..

detIP.sh
========
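
A minimal sketch of what detIP.sh can look like, assuming the usual trick of reading the guest's MAC from its Xen config and looking it up in Dom0's ARP cache. The config path, sample config line, and the canned `arp -n` output in the demo are assumptions, not the original script:

```shell
#!/bin/sh
# Sketch: resolve a Xen DomU's IP from Dom0 by mapping the MAC address in
# the guest's config file to an entry in the ARP cache (arp -n output).

# Pull the MAC out of a config line like: vif = [ 'mac=00:16:3e:12:34:56' ]
get_mac() {
    sed -n "s/.*mac=\([0-9a-fA-F:]\{17\}\).*/\1/p" "$1"
}

# Given a MAC as the first argument, read 'arp -n' lines on stdin and
# print the matching IP, e.g. from:
#   192.168.1.42  ether  00:16:3e:12:34:56  C  eth0
mac_to_ip() {
    awk -v m="$1" 'tolower($3) == tolower(m) { print $1 }'
}

# Demo with canned data, so the sketch runs outside Dom0 too.
cfg=$(mktemp)
echo "vif = [ 'mac=00:16:3e:12:34:56' ]" > "$cfg"
mac=$(get_mac "$cfg")
printf "192.168.1.42 ether 00:16:3e:12:34:56 C eth0\n" | mac_to_ip "$mac"
rm -f "$cfg"
```

On a real Dom0 the canned lines would be replaced by the live guest config (e.g. /etc/xen/myVm.cfg) and a live `arp -n` pipe.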




Hope this would be helpful ...

Monday, July 27, 2009

How to configure CAS with LDAP

The configuration is quite trivial.. still I wrote it down:
1. Install OpenLDAP and add some users:
ldapadd -x -D cn=admin,dc=demo,dc=cloudtronics,dc=com -W -f test.ldif
where test.ldif:
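
For example, test.ldif could look like this (the example user and attributes are assumptions; adjust to your own base DN):

```ldif
# A minimal user entry under the base DN used in the ldapadd command above.
dn: uid=jsmith,dc=demo,dc=cloudtronics,dc=com
objectClass: inetOrgPerson
uid: jsmith
cn: John Smith
sn: Smith
userPassword: secret
```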


Now, how to search from the application.. In the Tomcat webapp, use deployerConfigContext.xml:
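
A sketch of the relevant part of deployerConfigContext.xml, assuming the CAS 3.x LDAP adaptor; the bean ids, host URL, and credentials are placeholders:

```xml
<!-- LDAP context pointing at the OpenLDAP server installed above -->
<bean id="contextSource"
      class="org.springframework.ldap.core.support.LdapContextSource">
  <property name="url" value="ldap://localhost:389" />
  <property name="userDn" value="cn=admin,dc=demo,dc=cloudtronics,dc=com" />
  <property name="password" value="secret" />
</bean>

<!-- Authenticate users by searching under the base DN and binding as them -->
<bean class="org.jasig.cas.adaptors.ldap.BindLdapAuthenticationHandler">
  <property name="filter" value="uid=%u" />
  <property name="searchBase" value="dc=demo,dc=cloudtronics,dc=com" />
  <property name="contextSource" ref="contextSource" />
</bean>
```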

Friday, June 26, 2009

Morphology of Cloud Apps

Life Cycle of the Cloud Apps:

The figure shows the typical lifecycle of an image running on the Cloud, with a non-transient view. It's evident that persistency of any degree has to be provided out-of-the-box.

This spawns another interesting topic. One must admit that an image undergoes at least a few changes after instantiation. These changes could be as small as a root password, or as big as new application installations..

Just imagine then, who could be the version controller for the Cloud Apps -: ) Yes, nobody; the Cloud doesn't take responsibility to define such requirements, which are real but overlooked !!!

Doesn't it sound strange??


Amazon is said to provide some discrete tool sets, but fails to define a solid process that can be automated or held to an enterprise standard.
You will feel slightly embarrassed when you try to model an object-oriented & loosely coupled application deployable to an Enterprise Cloud. Your frustration will mount when you really have to maintain & manage multiple copies of the applications, along with periodic updates & patches..

Listen, Gurus, you are gone by then - : )

Now, if you survey the Amazon public image libraries, you will see they are quite static and stagnant !!
Most of us don't like to shop around through expired products, will you ??.. Especially in IT, which changes every second..

If you don't want to pay for "what was", but rather for "what is" or "what will be", continue with me..

I will tell you how to crack this classic stagnancy..


Tuesday, June 16, 2009

My first 2 Cents ..

Enough theory, let's do something hands-on today. Let's set up a box for preparing the Cloud Stack.. I carefully picked the box by running the "egrep '(vmx|svm)' /proc/cpuinfo" command, because I would like a box that has hypervisor support and can yield optimized results for the instances running in the Cloud. I took a box with an AMD processor, 8GB RAM and a 320GB HD.. After thinking twice, I chose CentOS 5.3 64-bit as the OS to install. I took 64-bit as this provides the best & most versatile support for creating both 32 & 64 bit instances on the Cloud.
During installation I chose all the default options, except turning SELinux & the firewall off. Since I'm within the Corp-Net, I don't want to be doubly protected!!
Also I created a logical volume group (LVG) of around 50GB, so that I can use it for raw image creation later.. Hope you appreciate that image creation using LVM is pretty flexible..
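
The LVM setup amounts to a few commands like these; the spare device name and sizes are assumptions, substitute your own:

```shell
# Turn a spare partition into a ~50GB volume group for raw Xen images
pvcreate /dev/sdb2
vgcreate vg_cloud /dev/sdb2
# Later, carve one logical volume per guest image, e.g.:
lvcreate -L 10G -n vm01_disk vg_cloud
```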

At this point, let me explain a bit about what I'm looking for:

If you are familiar with conventional Grid Computing, which unifies the processing power of multiple machines and surfaces it for collimated usage, then you should also appreciate that people will be looking for some ROI out of it.. And that is fulfilled by Utility Computing..
Cloud Computing goes one step ahead to wrap them up and present them in a SAAS-enabled model. Beyond grid & cluster, the cloud adds fine-grained control as a middleware in the architecture. This works as the brain for the Cloud's building blocks. Typically, client apps on the Cloud use the middleware to position themselves on the Cloud..

Cloud usage has two facets: the transient Cloud Apps and the persistent Cloud Data (used by the Apps).. And as you know, achieving Cloud Data persistency and failover always includes the cost of redundancy.
There are various interface softwares to provision and govern a Cloud & its resources; most of them recall & acknowledge EC2 as the consumable interface.

Next post I will blog more about the Lifecycle of Cloud Apps and its operational Data..
Till then, stay tuned -: )


Thursday, June 4, 2009

Cloud Workbench Featurology

As I committed yesterday to provide details of the workbench features, here they go:

Feature | Amazon | Cloud Workbench

EC2 Functionality
Create an Amazon Machine Image (AMI) | yes | yes
Use pre-configured, templated images to get up and running immediately | yes | yes
Upload the AMI into Amazon S3 | yes | yes
Choose the instance type(s) and operating system you want | yes | yes
Start, terminate, and monitor as many instances of your AMI | yes | yes
Static IP endpoints | yes | not sure it's a hot requirement, but can be allotted via a fixed MAC
Attach persistent block storage to your instances | yes | yes

Elastic
Increase or decrease instance capacity | yes | yes

Completely Controlled
Root access to the instance | yes | yes
Instances can be rebooted remotely | yes | yes
Access to console output of your instances | yes | yes

Flexible
Choice of multiple instance types, operating systems, and software packages | yes | yes
Select a configuration of memory, CPU, and instance storage | yes | yes

Use with other Amazon Web Services
Supports Amazon Simple Storage Service (Amazon S3) | yes | yes
Amazon SimpleDB | yes | suggestions welcome on how to replicate locally
Amazon Simple Queue Service (Amazon SQS) | yes | suggestions welcome on how to replicate locally

Reliable
Replacement instances can be rapidly and predictably commissioned | yes | yes
Commitment is 99.95% availability | yes | yes

Secure
Interfaces to configure firewall settings that control network access to and between groups of instances | yes | yes

Inexpensive
Pay for the resources consumed, like instance-hours or data transfer | yes | it's free, as you own the complete infrastructure

Amazon Elastic Block Store
Off-instance storage that persists independently from the life of an instance | yes | yes
EBS volumes are highly available, highly reliable volumes that can be attached to a running Amazon EC2 instance and are exposed as standard block devices | yes | yes
Amazon EBS volumes are automatically replicated on the backend | yes | possible using a cron job taking regular backups
Snapshots of your volumes | yes | yes

Multiple Locations
Launch instances in separate Availability Zones to protect your applications from failure of a single location | yes | live migration possible

Elastic IP Addresses
Static IP addresses designed for dynamic cloud computing | yes | possible using a fixed MAC

Amazon CloudWatch
Resource utilization, operational performance, and overall demand patterns, including metrics such as CPU utilization, disk reads and writes, and network traffic | yes | possible thru command-line tools

Auto Scaling
Automatically scale your Amazon EC2 capacity up or down according to conditions you define | yes | suggestions welcome on how to replicate this feature locally

Elastic Load Balancing
Automatically distributes incoming application traffic across multiple Amazon EC2 instances | yes | suggestions welcome on how to replicate locally

Instance Types
Small Instance (default): 1.7 GB memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB instance storage, 32-bit platform | yes | yes
Large Instance: 7.5 GB memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB instance storage, 64-bit platform | yes | yes
Extra Large Instance: 15 GB memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB instance storage, 64-bit platform | yes | yes

Amazon S3 Functionality
Write, read, and delete objects containing from 1 byte to 5 gigabytes of data each; the number of objects you can store is unlimited | yes | yes
Each object is stored in a bucket and retrieved via a unique, developer-assigned key | yes | yes
REST and SOAP interfaces | yes | yes

Features of Amazon EBS volumes
Create storage volumes from 1 GB to 1 TB that can be mounted as devices by Amazon EC2 instances; multiple volumes can be mounted to the same instance | yes | yes
Each storage volume is automatically replicated | yes | yes
Create point-in-time snapshots of volumes | yes | yes
Create new volumes | yes | yes

Amazon SimpleDB Functionality
CREATE a new domain to house your unique set of structured data | yes | yes
Query your data set | yes | yes

Amazon SQS Functionality
Create an unlimited number of Amazon SQS queues with an unlimited number of messages | yes | not yet thought out
Queues can be shared with other AWS accounts and anonymously | yes | not yet thought out
Access to SQS through standards-based SOAP and Query interfaces | yes | not yet thought out

Amazon Elastic MapReduce Functionality
Hadoop implementation of the MapReduce framework on Amazon EC2 instances | yes | envisage the feature to replicate locally

Cross Virtual Machine Portability
Instances portable to VMware, Xen etc. | no | yes

Software Update Service to Instances
Instances receive updates seamlessly | no | envisage the feature to implement

The above table will give you an idea of how far the other Amazon-friendly implementations have come..

This also indicates how I arrive at the 60% gap that exists in the open-source world for dealing with Cloud-ready application development & testing end-to-end.

Next, I will define how I can really achieve this, probably a Development Cloud Stack (just ready to use) for the open-source world..

Wednesday, June 3, 2009

Cloud Workbench (Building a readymade solution for Cloud Developers)

I started thinking of a Workbench for the Cloud developers to ease their efforts and save some bucks while working with commercial Cloud providers like Amazon.

In the open-source world, how seamless is it to develop a Cloud-ready application (rather, an Amazon-ready one) and test it with end-to-end features before actually deploying it on the public Cloud..

Here I want to do some pretty serious business: developing an Amazon-ready application with EBS & S3 support locally, one that can move back & forth between private and public workspaces.. Is any ready-made Developer Stack available now, so I can run instances overnight without the fear of dollars.. Of course, like you guys, I also want a sound sleep at night -: )

There are open-source offerings to make a private cloud quite similar to Amazon, but do those support all the features that Amazon provides? Or can I simply clone my Amazon instance (with EBS & S3 intact) to these open-source offerings? I would say it's 40% yes, 60% no..

Then how to fill the 60% gap that is still to go..

Initial Thoughts


Concepts

The backdrop is the Cloud and the related promising technologies of the open source world, venturing across the market. The system is to take care of grossly bigger enterprises, addressing some of the key areas on their “wish lists” that large enterprises are still striving for ..


1. Optimization of Computing resources across enterprises
2. Participate in “Go Green”, “Save Earth” kinds of activities by optimizing resource usage
3. Minimize the Infrastructure maintenance Costs
4. Provide true high availability of Software services , platforms & workbench

The system was divided into three major parts:

1. The Nucleus – cloud provider
2. RPM Packager (the cook – the cloud deployer)
3. DB Backup & restore management , Distributed Storage System (the Cloud DATA)

I have drafted some notes on the Stack as follows:


Infrastructure
=============

Completely customized, Cloud-controlled Xen VMs that can simply be powered up to go, with user data stored in HDFS..
Custom changes in the (DomU) VMs are seamlessly written to a union filesystem (using unionfs/aufs-like techniques).
This concept can be extended to transparently localize Amazon S3 using the s3fs FUSE filesystem..

There may be 3 possible scenarios for using unionfs with Xen VMs:

1) Have a common filesystem image shared by many DomUs, and use unionfs to allow each DomU to have its own modified version
2) Do this in Dom0 and export the filesystem to the DomUs
3) Export the common filesystem (read-only) and run unionfs in the DomU
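
As an illustration, the per-DomU overlay could be assembled with an aufs mount along these lines (the branch paths are assumptions; unionfs uses a similar dirs= option):

```shell
# Writable per-DomU branch stacked over the shared read-only base image
mount -t aufs -o br=/srv/xen/domU1-rw=rw:/srv/xen/base=ro none /srv/xen/domU1-root
```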

And finally, the user-modified filesystems are exposed to HDFS (using Hadoop/Sector DFS)
A private cloud to be built using Eucalyptus and physical nodes with [Xen VMs + SUSE]
There are ready-made Nagios plug-ins available for Xen on SUSE, which can be used for monitoring the nodes and probably some auto-discovery..


Backup & Recovery
=================

Xen's capability for live migration will be the base for application failover/recovery..
Data recovery to be done for the above user-modified filesystems, using a suitable cloud- & S3-supported tool (Amanda/Zmanda/Bacula)


Packaging the End2End process
=============================

VMs with delta file changes to be deployed live on the Xen-based Cloud, and they should be power-on-to-go..

1) Take the input from the user
2) Prepare the application (VM) to be deployed over the Cloud
3) Deploy the app-VM & power it on
4) Provide the user with the logistic info & access
5) The user's changes are saved as a delta & backed up
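
The five steps above could be wired together in a driver script like this; every function here is a placeholder stub standing in for the real provisioning logic:

```shell
#!/bin/sh
# Placeholder pipeline for the end-to-end packaging flow above.
# Each stub just reports what the real step would do.

take_user_input()  { echo "1) collected request for app: $1"; }
prepare_vm()       { echo "2) prepared app VM image for: $1"; }
deploy_and_power() { echo "3) deployed app-VM & powered on: $1"; }
notify_user()      { echo "4) sent access details & logistics for: $1"; }
backup_delta()     { echo "5) saved user changes as delta & backed up: $1"; }

deploy_pipeline() {
    app="$1"
    take_user_input "$app"
    prepare_vm "$app"
    deploy_and_power "$app"
    notify_user "$app"
    backup_delta "$app"
}

deploy_pipeline "demo-crm"
```

Each stub would be replaced by the corresponding real step (Xen image preparation, delta capture to the union filesystem, backup to HDFS/S3).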



