A software architect has an important role in software development: he explores the business needs, chooses the right technologies, and designs and aids in implementation. Among other things, an architect serves as a quality gate in the organization, preventing bad technology decisions and mitigating technological risks. But what happens when the quality gate is raised too high and risk mitigation becomes more important than the need for change? Then the architect becomes a roadblock to any change and innovation, a person you need to “pass” rather than “consult with”.
Category Archives: Architecture
Open source presentation at the Wellington Architect forum
Just finished my presentation on Open source and Architecture in the Wellington Software Architect Forum.
We have covered these topics:
1) Definition, Licensing & players
2) Open source based architecture examples
3) Best practices
4) ROI, TCO and other TLAs
5) Open source tools for architecture
6) Want to be an open source developer?
7) Future FOSS trends
You can download the presentation here.
Effective Development Environments – Development, Test, Staging/Pre-prod and Production Environments.
The following happens in many software projects –
At the start, it seems you only need one environment for your web application; well, at most two:
One development environment (AKA your PC) and one server.
But as time passes, you find you need additional environments:
The clients might want their own testing environment, and sometimes you need a pre-production or staging environment so business managers can approve the ongoing content as well as the look & feel.
Do you really need these environments? What are they good for?
Here is a short description of some of the more popular environments and their purpose.
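One practical way to keep these environments from drifting apart is to make the differences between them explicit in configuration rather than scattered through the code. A minimal sketch, using made-up hostnames and settings purely for illustration:

```python
# Per-environment configuration kept in one place. The hostnames and
# flags below are hypothetical examples, not real infrastructure.
ENVIRONMENTS = {
    "development": {"db_host": "localhost", "debug": True,  "send_emails": False},
    "test":        {"db_host": "test-db",   "debug": True,  "send_emails": False},
    "staging":     {"db_host": "stage-db",  "debug": False, "send_emails": False},
    "production":  {"db_host": "prod-db",   "debug": False, "send_emails": True},
}

def get_config(env_name):
    """Return the settings for one environment, failing fast on typos."""
    if env_name not in ENVIRONMENTS:
        raise ValueError("Unknown environment: %s" % env_name)
    return ENVIRONMENTS[env_name]

config = get_config("staging")
print(config["db_host"])  # stage-db
```

The point is that moving the application from staging to production changes one name, not a code path, so what the business managers approved is what goes live.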
10 things every software architect should consider (AKA – 10 key architectural concepts)
After a session I gave about Scalability in Wellington NZ, one of the developers asked me what a software architect should consider. I have gathered and compiled this list:
1. Security
Application security encompasses measures taken throughout the application’s life-cycle to prevent exceptions in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, deployment, upgrade, or maintenance of the application. [1]
2. Reliability / Consistency
Data consistency summarizes the validity, accuracy, usability and integrity of related data between applications and across the IT enterprise. This ensures that each user observes a consistent view of the data, including visible changes made by the user’s own transactions and transactions of other users or processes. Data Consistency problems may arise at any time but are frequently introduced during or following recovery situations when backup copies of the data are used in place of the original data. [2]
3. Scalability
Scalability is a desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner, or to be readily enlarged. [3]
4. High Availability
High availability is a system design protocol and associated implementation that ensures a certain absolute degree of operational continuity during a given measurement period.
Availability refers to the ability of the user community to access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is said to be unavailable. Generally, the term downtime is used to refer to periods when a system is unavailable. [4]
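Availability targets are usually quoted in “nines”, and it is worth doing the arithmetic to see what each target actually allows. A small worked example (the targets are illustrative, not recommendations):

```python
# Rough downtime budget implied by an availability target: "three nines"
# (99.9%) allows about 8.76 hours of downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_hours_per_year(availability_percent):
    """Hours of allowed downtime per year for a given availability %."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100.0)

for target in (99.0, 99.9, 99.99):
    print("%.2f%% -> %.2f hours/year" % (target, downtime_hours_per_year(target)))
```

Each extra nine cuts the downtime budget by a factor of ten, which is why the jump from 99.9% to 99.99% is usually an architectural decision, not a tuning exercise.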
5. Interoperability / integration
Interoperability is a property referring to the ability of diverse systems and organizations to work together (inter-operate). With respect to software, the term interoperability is used to describe the capability of different programs to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same protocols. (The ability to execute the same binary code on different processor platforms is ‘not’ contemplated by the definition of interoperability.) The lack of interoperability can be a consequence of a lack of attention to standardization during the design of a program. Indeed, interoperability is not taken for granted in the non-standards-based portion of the computing world. [5]
6. Maintainability
In software engineering, the ease with which a software product can be modified in order to:
* correct defects
* meet new requirements
* make future maintenance easier, or
* cope with a changed environment;
7. Recovery / DR
Disaster recovery planning is a subset of a larger process known as business continuity planning and should include planning for resumption of applications, data, hardware, communications (such as networking) and other IT infrastructure. A business continuity plan (BCP) includes planning for non-IT related aspects such as key personnel, facilities, crisis communication and reputation protection, and should refer to the disaster recovery plan (DRP) for IT related infrastructure recovery / continuity. [7]
8. Performance
Performance testing determines how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of actual coding effort. [8]
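Even a rough measurement beats guessing. A minimal sketch of timing one operation under a workload, using only the standard library; the workload here (sorting random data) is a stand-in for whatever your system actually does:

```python
# Measure how fast one aspect of a system performs under a particular
# workload. The workload below is a placeholder for a real operation.
import random
import timeit

def workload():
    data = [random.random() for _ in range(10000)]
    return sorted(data)

# Run the workload several times and keep the best wall-clock time;
# the best of several runs is less noisy than a single measurement.
best = min(timeit.repeat(workload, repeat=3, number=10))
print("best of 3 runs (10 calls each): %.3f seconds" % best)
```

For anything beyond a sketch you would measure under realistic concurrency and data volumes, but the discipline is the same: measure first, then optimize.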
9. Standards/ Compliance
A software standard is essentially a set of terms, concepts and techniques agreed upon by software creators so that different pieces of software can understand each other.
For instance, HTML, TCP/IP, SMTP, POP and FTP are software standards that all software designers must adhere to if their software interfaces with these standards. For example, for an email sent from Microsoft Outlook to be read in Yahoo! Mail (and vice versa), Outlook needs to send the email using the SMTP (Simple Mail Transfer Protocol) standard, and Yahoo! Mail receives it through SMTP and displays it. Without a standardized way to send an email from Outlook to Yahoo! Mail, they would not be able to accurately display emails sent between the two. Specifically, all emails essentially have “from,” “to,” “subject,” and “message” fields, and that is the standard by which all emails should be designed and handled. [9]
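Those standard fields are concrete enough to show in code. A small sketch using Python’s standard library to build a message in the standard wire format that any compliant mail client can parse; the addresses are made up for illustration:

```python
# The standard "from", "to", "subject" and "message" fields in practice:
# building a standards-compliant message with Python's email module.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"      # hypothetical address
msg["To"] = "recipient@example.com"     # hypothetical address
msg["Subject"] = "Hello"
msg.set_content("Any SMTP-speaking mail system can parse this message.")

# as_string() serializes the message in the standard text format that
# Outlook, Yahoo! Mail, or any compliant client can read.
print(msg.as_string())
```

The sender and receiver never coordinate directly; both simply agree on the standard, which is the whole point of item 9.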
10. User experience
A newly added member – User experience design is a subset of the field of experience design which pertains to the creation of the architecture and interaction models which impact a user’s perception of a device or system. The scope of the field is directed at affecting “all aspects of the user’s interaction with the product: how it is perceived, learned, and used.” [10]
Seems about right… What do you think?
I love my MVC …
If this was a comics style blog it would start like this:
They both stood there, the clean and virtuous super-hero MVC and his arch-enemy, the dirty and corrupt spaghetti-design-pattern… both aware that only one of them would prevail.
The fact is that real life is more complicated than that: writing code is an ongoing process, and sometimes you need to “get things done now” instead of “get things done right”. The important thing is to keep in mind the notions that lie behind MVC.
So who is this super-hero? From Wikipedia: Model-View-Controller (MVC) is a design pattern used in software engineering. In complex computer applications that present lots of data to the user, one often wishes to separate data (Model) and user interface (View) concerns, so that changes to the user interface do not impact the data handling, and that the data can be reorganized without changing the user interface. The Model-View-Controller design pattern solves this problem by decoupling data access and business logic from data presentation and user interaction, by introducing an intermediate component- the Controller.
While MVC might take a little more time to design, the notion of decoupling the presentation, logic and data layers is imperative. Code written this way can easily be changed without forcing changes on other layers. Because it is a well-known fact of life that things always change, agile code makes our lives easier and simpler. It also usually makes the code more comprehensible.
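A toy sketch of that decoupling, sticking with the video store theme; the class names (VideoModel, TextView, VideoController) are hypothetical, purely to illustrate the three roles:

```python
# A toy Model-View-Controller sketch. Each layer knows as little as
# possible about the others, so each can change independently.

class VideoModel:
    """Model: owns the data and knows nothing about presentation."""
    def __init__(self):
        self._videos = []

    def add(self, title):
        self._videos.append(title)

    def all(self):
        return list(self._videos)

class TextView:
    """View: renders data and knows nothing about storage."""
    def render(self, videos):
        return "\n".join("- %s" % title for title in videos)

class VideoController:
    """Controller: mediates between user actions, model and view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_video(self, title):
        self.model.add(title)

    def show_catalogue(self):
        return self.view.render(self.model.all())

controller = VideoController(VideoModel(), TextView())
controller.add_video("Big Buck Bunny")
print(controller.show_catalogue())  # - Big Buck Bunny
```

Swapping TextView for, say, an HTML view touches one class; the model and controller never notice. That is the decoupling the spaghetti version cannot give you.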
On the other hand you have spaghetti code: it takes no planning at all, but changing it is a nightmare. Moreover, spaghetti code is incomprehensible, and as the saying goes: “Always write your code as if the person who will maintain it is a violent psychotic who knows where you live.”
So keep your layers decoupled and live longer and happier.
BTW: if you don’t know which song was playing while I wrote this blog see the title or click here.
Scale out versus scale up – How to scale your application.
When designing enterprise application architecture, talking to clients and doing interviews for my group, I sometimes tackle the “scale up vs. scale out” software architectural dilemma.
To set the stage, let’s define what scaling means, as well as what scale-up and scale-out mean.
The problem domain:
We have an application that serves about 100 users (an online video store web site, for example). Now, all is dandy and the application is running fast and smoothly. But then our application becomes more popular and more users want to use it at the same time. Now the application needs to serve thousands or even millions of users, and it becomes sluggish and non-responsive because it runs out of computing power, memory or network bandwidth. There is an irony in this situation: millions of users is exactly what we hoped for, and this is when we need the application to work at its best, while in reality this is when most applications tend to fail.
The scalability solution:
Scalability is a desirable property of a system, a network, or a process, which indicates its ability to handle growing amounts of work in a graceful manner.
There are two kinds of scalability strategies:
Scale up (Scale vertically) means to run the application on a stronger computer.
Scale out (scale horizontally) means to run the application on many regular computers.
If we think of this in terms of a housing problem, these solutions make more sense.
Let’s say we have a family house with five people in it.
After a few years the family grows and is now composed of ten people and there is no more room. What do we do?
Scale up would mean putting everyone in one big building.
Scale out would mean putting these people in several small houses.
Scale up – pros and cons:
Pros:
This is a straightforward solution that does not demand a change in the architecture of the software we write: you just run the same application on a stronger computer.
Cons:
On the other hand, the problem with scale up is that it is a costly and finite solution. Big computers, like big houses, cost a lot, and there is a physical limit to the computing power and memory you can have in a single computer.
Scale out – pros and cons:
Pros:
If planned right, this solution offers practically unlimited scalability: when you need to support more users, you just add more low-cost computers to your server farm.
Cons:
On the other hand, this is not a straightforward solution: you need to design, architect and develop your application to be ready to scale out (a topic for another blog post I plan to write).
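The core of that scale-out readiness is keeping request handling stateless, so any copy of the application can serve any request and a load balancer can spread the work. A toy sketch, assuming a made-up request shape and server names:

```python
# Why scale-out needs up-front design: requests carry all the state
# they need, so any identical server can handle any request, and
# adding capacity is just adding servers to the pool.
import itertools

def handle_request(request):
    """A stateless handler: everything it needs arrives in the request."""
    return "served %s" % request["path"]

class RoundRobinBalancer:
    """Spreads requests across identical servers, one at a time."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def dispatch(self, request):
        server = next(self._cycle)
        return server, handle_request(request)

balancer = RoundRobinBalancer(["server-1", "server-2", "server-3"])
for i in range(4):
    server, result = balancer.dispatch({"path": "/video/%d" % i})
    print(server, result)
```

Growing from three servers to thirty changes one list; nothing in the handler changes. That is the property you have to design in from the start, because session state hidden inside one server breaks it.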
Conclusion:
For a small-scale application, scaling up might be cheaper and faster to develop and implement. Having said that, most large-scale applications, such as Google, Amazon, and Microsoft, use scale-out solutions to handle their scalability challenges.
Bottom line: if you plan for success, do some pre-thinking and additional development to make your application scale-out ready.