On Premise vs. Cloud – Pricing Model

Category : Architecture, Cloud, SAAS, web

Everybody is talking about the cloud – every startup I meet, every CTO I work with, and all the software vendors are now cloud-oriented.

But when it comes to implementation it is a different story altogether – most companies realize the value of the cloud (or at least say they do), but very few are ready for the paradigm shift – especially when it comes to the pricing model.

How do we do pricing on premise?

Before the cloud, architects and IT engineers used to perform an activity called a “sizing exercise” or “sizing estimation” before buying the actual servers for production. These exercises are based on past experience, combined with buffers and optimism, and go (very generally) like this:

Sizing exercise –
1) We think we are going to have around 10,000 users per day
2) On the weekends we expect to spike to 50,000 users per day
3) The system should support at most 100,000 users per day
4) A user views 3 pages on average
5) …
6) …
7) We estimate that a server can handle 1,000 users per day
Conclusion – we need 10 servers, a nifty database and 100GB of storage
Production environment cost is $35K
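The arithmetic behind such an exercise fits in a few lines. Here is a back-of-the-envelope sketch using the illustrative numbers from the list above (the user counts and per-server capacity are the hypothetical estimates from the post, not measurements):

```python
import math

# Back-of-the-envelope sizing, using the illustrative figures above
# (all numbers are hypothetical estimates, not measurements).
baseline_users_per_day = 10_000   # step 1: expected daily users
capacity_per_server = 1_000       # step 7: users/day one server handles

# Round up: you cannot buy a fraction of a server.
servers = math.ceil(baseline_users_per_day / capacity_per_server)
print(servers)  # 10 servers for the baseline load
```

The point is not the math – it is that this number gets locked into a budget up front, whether or not the estimates turn out to be right.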

The key here is that on premise (in a non-virtualized environment) it is hard and expensive to provision new servers. That is why we need to size properly and put the cost in the IT budget in advance. These costs are capital expenditures (expenditures creating future benefits).

Cloud advocates would say that these calculations are limited and do not take into account things like backup, IT staff, electricity, and other run-time costs. But in a typical sizing exercise these services are taken for granted.

How do we do pricing in the cloud?

In the cloud we do not need this initial “sizing exercise” – we start with one or two servers, and if we need more we simply provision them as we see fit. But when it comes to pricing we need to start thinking in terms of usage (I call it a “metering exercise”):

Metering exercise –
1) How much bandwidth are we going to consume?
2) How much storage and how many storage transactions are we using?
3) How many CPU hours do we need?
4) Are there periods where we need more/fewer CPUs?
5) …
6) …
7) Our database is going to start with 100MB of data and grow as time passes
Conclusion – we estimate that initially we will spend $150 per month, with a growth rate of 5% per month.
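That conclusion is easy to turn into a small cost projection. The sketch below assumes the $150 starting bill and 5% monthly growth quoted above, and assumes the growth compounds month over month:

```python
# Project the monthly cloud bill, assuming the illustrative figures
# above: $150 in the first month, compounding at 5% per month.
initial_monthly_cost = 150.0
monthly_growth = 0.05

def projected_cost(month):
    """Estimated bill for a given month (month 0 is the first month)."""
    return initial_monthly_cost * (1 + monthly_growth) ** month

first_year_total = sum(projected_cost(m) for m in range(12))
print(round(projected_cost(11), 2))  # bill in month 12: ~256.55
print(round(first_year_total, 2))    # cumulative first-year spend: ~2387.57
```

Unlike the sizing exercise, this projection is cheap to revise – next month you plug in the real bill and re-forecast.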

Pricing in the cloud is based on actual usage – meaning you only provision what you need right now and you only pay for what you use. This cost is an operating expenditure (an ongoing cost of running a product).

The Needed Paradigm Shift

The problem is that most people are used to the “sizing exercise” and capital expenditures. They want to put something in the budget and forget about it – even if it costs much more. Moreover, some IT pros are not experienced in these “metering exercises” and often get them wrong the first couple of times. I think these errors are a learning exercise, and you usually get it right after a few attempts – “metering exercise” errors often cost much less than “sizing exercise” errors and are more easily corrected.

Some IT organizations try to do a “sizing exercise” for the cloud – a common pitfall these days, because you lose the elasticity and the “pay for what you use” model, and in most cases pay much more than you should have. And even if you do this “sizing exercise”, you still do not get the predictability you are used to.

IT managers need to embrace the shift. They need to understand that pricing is changing from “buy” to “lease”, and that they should now treat compute costs the same way they manage their electricity bill or cellular costs – because that is the way of the cloud…

The Role of the Software Architect as a Roadblock

Category : Architecture, Opinion

A software architect has an important role in software development – he explores the business needs, chooses the right technologies, designs and aids in implementation. Amongst other things, an architect serves as a quality gate in the organization, preventing bad technology decisions and mitigating technological risks. But what happens when the quality gate is set too high and risk mitigation becomes more important than the need for change? Then the architect becomes a roadblock to any change and innovation – a person you need to “pass” rather than “consult with”.

Continue Reading

Open source presentation at the Wellington Architect forum

Category : Architecture, Microsoft, Open source, Software development

Just finished my presentation on open source and architecture at the Wellington Software Architect Forum.

We have covered these topics:
1) Definition, Licensing & players
2) Open source based architecture examples
3) Best practices
4) ROI, TCO and other TLA
5) Open source tools for architecture
6) Want to be an open source developer?
7) Future FOSS trends

You can download the presentation here.

Effective Development Environments – Development, Test, Staging/Pre-prod and Production Environments.


Category : Architecture, Best practices, Software development, Tips

The following happens in many software projects -
At the start, it seems you only need one environment for your web application – well, at most two:
One development environment (AKA your PC) and one server.

But as time passes, you find you need additional environments:
The clients might want their own testing environment; sometimes you need a pre-production or staging environment, so business managers can approve the ongoing content as well as the look & feel.

Do you really need these environments? What are these environments good for?

Here is a short description of some of the more popular environments and their purpose.
Continue Reading

10 things every software architect should consider (AKA – 10 key architectural concepts)


Category : Architecture, Best practices, Software development

After a session I gave about scalability in Wellington, NZ, one of the developers asked me what things a software architect should consider. I have gathered and compiled this list:

1. Security

Application security encompasses measures taken throughout the application’s life-cycle to prevent exceptions in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, deployment, upgradation, or maintenance of the application. [1]

2. Reliability / Consistency

Data consistency summarizes the validity, accuracy, usability and integrity of related data between applications and across the IT enterprise. This ensures that each user observes a consistent view of the data, including visible changes made by the user’s own transactions and transactions of other users or processes. Data Consistency problems may arise at any time but are frequently introduced during or following recovery situations when backup copies of the data are used in place of the original data. [2]

3. Scalability

Scalability is a desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner, or to be readily enlarged. [3]

4. High Availability

High availability is a system design protocol and associated implementation that ensures a certain absolute degree of operational continuity during a given measurement period.

Availability refers to the ability of the user community to access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is said to be unavailable. Generally, the term downtime is used to refer to periods when a system is unavailable. [4]

5. Interoperability / integration

Interoperability is a property referring to the ability of diverse systems and organizations to work together (inter-operate). With respect to software, the term interoperability is used to describe the capability of different programs to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same protocols. (The ability to execute the same binary code on different processor platforms is ‘not’ contemplated by the definition of interoperability.) The lack of interoperability can be a consequence of a lack of attention to standardization during the design of a program. Indeed, interoperability is not taken for granted in the non-standards-based portion of the computing world. [5]

6. Maintainability

In software engineering, maintainability is the ease with which a software product can be modified in order to:

* correct defects

* meet new requirements

* make future maintenance easier, or

* cope with a changed environment;


7. Recovery / DR

Disaster recovery planning is a subset of a larger process known as business continuity planning and should include planning for resumption of applications, data, hardware, communications (such as networking) and other IT infrastructure. A business continuity plan (BCP) includes planning for non-IT related aspects such as key personnel, facilities, crisis communication and reputation protection, and should refer to the disaster recovery plan (DRP) for IT related infrastructure recovery / continuity. [7]

8. Performance

Determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing is a subset of Performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of actual coding effort. [8]

9. Standards/ Compliance

A software standard is essentially a set of terms, concepts and techniques agreed upon by software creators so that different pieces of software can understand each other.

For instance, HTML, TCP/IP, SMTP, POP and FTP are software standards that software designers must adhere to if their software is to interface with these standards. For example, for an email sent from Microsoft Outlook to be read in Yahoo! Mail (and vice versa), Outlook needs to send the email using the SMTP (Simple Mail Transfer Protocol) standard, and Yahoo! Mail receives it through an SMTP reader and displays it. Without a standardized technique for sending an email from Outlook to Yahoo! Mail, the two would not be able to accurately display emails sent between them. Specifically, all emails have “from,” “to,” “subject,” and “message” fields, and those are the standard by which all emails should be designed and handled. [9]
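To make the point concrete, here is a small sketch (with made-up addresses) showing how shared standard fields let one library build a message that any compliant parser can read back:

```python
# Illustration of shared standards: both sides agree on the same
# header fields, so a message built here can be parsed by any
# compliant reader. (Addresses are made up for the example.)
import email
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Standards in action"
msg.set_content("Because both ends speak the same standard, this just works.")

# Serialize to the wire format defined by the standard...
raw = msg.as_string()

# ...and an independent parser recovers the same fields.
parsed = email.message_from_string(raw)
print(parsed["Subject"])  # Standards in action
```

The two objects never share code – only the standard – which is exactly what lets Outlook and Yahoo! Mail interoperate.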

10. User experience

A newly added member – user experience design is a subset of the field of experience design which pertains to the creation of the architecture and interaction models that impact a user’s perception of a device or system. The scope of the field is directed at affecting “all aspects of the user’s interaction with the product: how it is perceived, learned, and used.” [10]

Seems about right… What do you think?

I love my MVC …

Category : Architecture, Opinion, Software development

If this was a comics style blog it would start like this:

They both stood there, the clean and virtuous super-hero MVC and his arch-enemy, the dirty and corrupt spaghetti-design-pattern… they were both aware that only one of them would prevail.

The fact is that real life is more complicated than that; writing code is an ongoing process, and sometimes you need to “get things done now” instead of “get things done right”. The key is to keep in mind the notions that lie behind MVC.

So who is this super-hero? From Wikipedia: Model-View-Controller (MVC) is a design pattern used in software engineering. In complex computer applications that present lots of data to the user, one often wishes to separate data (Model) and user interface (View) concerns, so that changes to the user interface do not impact the data handling, and that the data can be reorganized without changing the user interface. The Model-View-Controller design pattern solves this problem by decoupling data access and business logic from data presentation and user interaction, by introducing an intermediate component- the Controller.

While MVC might take a little more time to design, the notion of decoupling the presentation, logic and data layers is imperative. Code that has been written this way can easily be changed without forcing changes on other layers. Because it is a well-known fact of life that things always change, such agile code makes our lives easier and simpler. It also usually makes the code more comprehensible.
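A toy sketch of the pattern, using made-up class and method names, might look like this – note that the View never touches storage and the Model never touches presentation:

```python
class Model:
    """Holds the data; knows nothing about presentation."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


class View:
    """Turns data into output; knows nothing about storage."""
    def render(self, items):
        return ", ".join(items)


class Controller:
    """Mediates: wires user actions to the Model and picks a View."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_item(self, item):
        self.model.add(item)

    def show(self):
        return self.view.render(self.model.items)


controller = Controller(Model(), View())
controller.add_item("rent video")
controller.add_item("return video")
print(controller.show())  # rent video, return video
```

Swapping the View for an HTML renderer, or the Model for a database-backed one, would not force a single change in the other layers – that is the whole point.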

On the other hand you have spaghetti code: it takes no planning at all, but changing it is a nightmare. Moreover, spaghetti code is incomprehensible, and as the saying goes: “Always write your code as if the person who will maintain it is a violent psychotic who knows where you live.”

So keep your layers decoupled and live longer and happier.

BTW: if you don’t know which song was playing while I wrote this blog see the title or click here.

Scale out versus scale up – How to scale your application.


Category : ajaxdo, Architecture, Software development

When designing enterprise application architecture, talking to clients and doing interviews for my group, I sometimes tackle the “scale up vs. scale out” software architectural dilemma.

To set the stage, let’s define what scale means, as well as what scale-up and scale-out mean.

The problem domain:

We have an application that serves about 100 users (an online video store web site, for example). All is dandy, and the application is running fast and smoothly. But then our application becomes more popular and more users want to use it at the same time. Now the application needs to serve thousands or even millions of users. The application becomes sluggish and non-responsive because it runs out of computing power, memory or network bandwidth. The irony is that this (millions of users) is what we hoped for, and this is when we need the application to work at its best – while in reality this is when most applications tend to fail.

The scalability solution:

Scalability is a desirable property of a system, a network, or a process, which indicates its ability to handle growing amounts of work in a graceful manner.

There are two kinds of scalability strategies:

Scale up (Scale vertically) means to run the application on a stronger computer.

Scale out (scale horizontally) means to run the application on many regular computers.

If we think of this in terms of a housing problem, these solutions make more sense.

Let’s say we have a family house with five people in it.


After a few years the family grows and is now composed of ten people and there is no more room. What do we do?

Scale up would mean to put everyone in one big building:

(Figure: scale up – one big building)

Scale out would mean to put these people in several small size houses:

(Figure: scale out – several small houses)

Scale up – pros and cons:


This is a straightforward solution that does not demand a change in the architecture of the software we write – you just run the same application on a stronger computer.


On the other hand, the problem with scaling up is that it is costly and not an infinite solution. Big computers, like big houses, cost a lot, and there is a physical limit to the computing power and memory you can have in a single computer.

Scale out – pros and cons:


If planned right, this solution offers practically unlimited scalability: when you need to support more users, you just add more low-cost computers to your server farm.


On the other hand, this is not a straightforward solution. You need to design, architect and develop your application to be ready to scale out (this is a topic for another post I plan to write).


For small-scale applications, scaling up might be cheaper and faster to develop and implement. Having said that, most large-scale applications, such as those of Google, Amazon and Microsoft, use scale-out solutions to handle their scalability challenges.
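The trade-off can be sketched with a couple of toy functions (the per-machine capacities below are hypothetical figures chosen for illustration):

```python
import math

# Hypothetical figures to illustrate the trade-off: a single machine
# tops out at some capacity, while a farm of commodity servers can
# keep growing by adding boxes.
MAX_USERS_ONE_BIG_BOX = 50_000   # assumed physical ceiling for scale-up
USERS_PER_SMALL_BOX = 1_000      # assumed capacity of one commodity server

def can_scale_up(target_users):
    """Scale-up works only below the single-machine ceiling."""
    return target_users <= MAX_USERS_ONE_BIG_BOX

def scale_out_farm_size(target_users):
    """Scale-out just needs more boxes, however large the load."""
    return math.ceil(target_users / USERS_PER_SMALL_BOX)

print(can_scale_up(1_000_000))         # False: past the ceiling
print(scale_out_farm_size(1_000_000))  # 1000 commodity servers
```

Whatever the real numbers are, the shape of the answer is the same: scale-up hits a hard wall, scale-out turns the problem into “how many boxes”.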

Bottom line – if you plan for success, do some pre-thinking and the additional development needed to make your application scale-out ready.