Towards complex software systems


Recent years have seen an exponential increase in software. For almost anything we can think of, there is software that helps with or automates the task. But as the scope for development grows, so do the risks. Software complexity keeps increasing, because there are ever more components in orchestration with each other. The most important question is how to lower the risk and the friction between components or modules.

It’s not enough to create software just to address a problem. Software truly grows when it outlives the purpose for which it was created. We have seen this effect with the Web and with cloud environments. The hyperlink, conceived by Tim Berners-Lee in 1989 as a way to share information, soon became a cornerstone of software creation all around the world. Suddenly software mushroomed everywhere and information could be exchanged effortlessly. The same concept powers the Web: individual nodes process information by themselves and are independent of the others in the group. This is how software has scaled and how it should scale in the future. Software always scales through federation and widespread adoption, and open source is an excellent movement built on exactly that. Software deployments have also become more complex in the last few years, thanks to collaboration fuelled by social media and the Internet. This opens a whole new chapter in developing the new-age software of the future.

There’s a friction component and a risk component involved in software. Friction happens because software modules grow complicated over time, which normally slows things down. Risk is ever present because various components come together to form a massive system; unless you check, check and recheck, you will always have a surprise. Though there are methods that address these problems, collaboration has changed the game all over again: with multiple check-ins, checkouts, builds and forks, you have to be super critical about making releases that lower both friction and risk.

There are two interesting concepts for sorting this problem out: one is micro services, the other is containers. A micro service is created by a small team that builds, deploys and manages the service end-to-end. Since micro services are essentially self-contained modules, they lower friction, but they increase risk because there are more independent parts to coordinate. Containers, on the other hand, reduce risk but can increase friction. Using containers, you build just once and the code can run anywhere any number of times, because the environment is consistent. This improves infrastructure utilization as well.
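To make the micro service idea concrete, here is a minimal sketch using only Python’s standard library. The /health endpoint, the port and the handler name are illustrative choices, not any particular framework’s API: the point is simply that the whole service is one small, self-contained unit that a small team can build, deploy and manage end-to-end.

```python
# A minimal, self-contained "micro service" sketch using only the
# Python standard library. The /health endpoint and port 8080 are
# illustrative assumptions, not part of any specific framework.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # The entire service is one file and one process: low friction
    # to change, but one more moving part in the overall system.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```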

Continuous delivery is the magic wand we need to control both risk and friction when developing software; it kills two birds with one stone. We want not only to lower risk but to decrease friction as well. Code changes so fast that end-to-end testing alone may not keep up with delivery schedules. CD perfects our release mechanism every time we deliver, taking both risk and friction out of the equation. We’ve always seen that large complex systems hold a final moment of surprise when testing for a release, because you can never be sure everything will work as expected from the get-go.
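As an illustration of that idea, here is a hedged sketch of a continuous delivery pipeline in plain Python: every change runs the same automated stages, so each delivery rehearses the release mechanism instead of saving all the surprises for the end. The stage names and commands are hypothetical examples, not any real CI system’s configuration.

```python
# Illustrative CD pipeline sketch: each change runs the same stages,
# and a failing stage stops the release immediately (fail fast).
import subprocess
import sys

# Hypothetical stages; a real pipeline would add a deploy stage,
# e.g. pushing a container image to a production-like environment.
STAGES = [
    ("build", ["python", "-m", "compileall", "."]),
    ("test", ["python", "-m", "unittest", "discover"]),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"stage: {name}")
        if subprocess.run(command).returncode != 0:
            # Stopping here is how CD lowers risk on every change
            # rather than at one big final release moment.
            sys.exit(f"stage '{name}' failed")
    print("pipeline passed; change is releasable")

if __name__ == "__main__":
    run_pipeline()
```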

In short, if stable releases are the order of the day, continuous delivery is your go-to approach.
