- How do you stay relevant in an app-centric world? Here’s how.
- A successful company demands the right culture framework. Here’s the primer.
- Marketing for mature markets: it pays to read the right signs.
- The wearable devices of the future can track your carbon footprint and support green tech. Benjamin Hubert of Layer talks about Worldbeing.
- Revitalise testing completely with continuous deployment
DevOps connects Development, Quality Assurance and technical operations so that they work as a single unit.
Continuous delivery is the platform on which DevOps is enabled. It encourages an all-hands approach and removes the barriers that prevent teams from working together. This is a marked departure from earlier models, in which software development happened in silos.
Continuous delivery is a discipline in its own right. Teams that were earlier separate now work together as a single unit. Much of what DevOps (DO) is today stems from agile development practices, fine-tuned over the years into the DO we know now. It isn’t a hard and fast rule that DO should be adopted for every need, but if your release cycles are frequent and hectic, DO could be the vehicle that steers your development engine the right way.
Developers learn to check in their code multiple times a day. Each check-in is automatically verified before being committed and shared with the team. Since the status of every check-in is visible quickly, errors are corrected and stable releases ship faster. DevOps targets product delivery, continuous testing, quality testing, feature development and maintenance releases, with automation fuelling most of these initiatives. Contrast this with older models, where a development release is rolled out, goes to QA and comes back later; by the time bugs are filed, the developers have already moved on.
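As an illustrative sketch of that gate, a check-in can be verified by running the test suite and only sharing the work when it passes. The function name and the `pytest` command below are assumptions for illustration; in real setups this logic lives in a pre-commit hook or on the CI server, not an ad-hoc script.

```python
import subprocess
import sys

def verify_checkin(test_command):
    """Run the project's verification command; only a passing run
    should be committed and shared with the team. Returns True when
    the command exits cleanly, False otherwise."""
    result = subprocess.run(test_command, capture_output=True)
    return result.returncode == 0

# Example (command is illustrative): gate a check-in on the test suite.
# verify_checkin(["pytest", "-q"])
```

The point is only that verification is automatic and happens before the team sees the change, so a broken check-in never propagates.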
DevOps ensures low failure rates for delivered software. Since DO integrates all the pillars of software development into a single entity, events are tracked at a granular level and development environments become more stable. This gives developers more control, letting them concentrate on creating more application-centric releases. Infra/IT management becomes more efficient and reliable, as regular, consistent and smaller releases weed out known problems early.
DevOps matters to today’s software development because most of the industry rides on one powerful word: collaboration. Customer preferences and perceptions are changing, and software standards change to keep pace, so that releasing software for the masses still delivers value. This dynamic, hard-hitting world demands an equally dynamic platform. Moreover, the Internet has already accelerated collaboration to a new high. Time is of the essence: the faster you release, the better positioned you are. And, as with every other revolution, this is only the beginning…
The cloud revolution and the subsequent ‘As-a-Service’ economy have been wildly successful primarily because of an attitude of continuous progression. You don’t find enterprises stagnating when they reach a status quo; instead you find them continuously exploring ways to automate processes, access meaningful data, and advance self-learning capabilities in a secure, trusted environment.
At Qruize, we’ve been working with continuous delivery, and we’ll take a deep dive into related topics in later posts. For now, we want to help our readers understand continuous delivery and which metrics make sense when deploying often and fast.
Continuous Delivery – What is it?
Continuous Delivery is a set of practices and principles aimed at building, testing and releasing software faster and more frequently. This allows us to do three things: Deploy more often, get feedback, and fix problems, all much faster than before.
Here’s a rule of thumb: you’re probably doing continuous delivery right if your software is deployable throughout its lifecycle. Now that we’ve got the basics out of the way, we can move on to the juicier details.
How do you measure the success of Continuous Delivery?
Everyone relies on data and metrics to measure success. Logically, the software development process can’t be improved upon unless the change implemented is quantifiable. It is no wonder that strong development teams are metrics-driven. However, the trick is in identifying what should be measured. The metric to be monitored for determining success/failure is bound to have a significant effect on team behaviour as well.
For instance, if lines of code written is rewarded as a metric, developers will write many short lines of code. If success is measured by the number of defects fixed, testers will log bugs that can be fixed with minimal effort, and so on.
The bottom line is, there is no point in removing bottlenecks that aren’t actually constraining the delivery process. This is why it is critical to rely on a global metric to determine whether the delivery process as a whole has a problem, and for software delivery, that metric is cycle time.
Cycle time and such
At its barest, cycle time is the time elapsed in moving a unit of work from the beginning to the end of a process. Dave Farley and Jez Humble, who wrote the book ‘Continuous Delivery’, define it as “the time between deciding that a feature needs to be implemented and having that feature released to users”.
How long would it take your organization to deploy a change, and to do so on a repeatable, reliable basis? Cycle time is hard to measure because it covers many parts of the software delivery process, from analysis through development to release.
There are ways around the difficulty: a proper implementation of the deployment pipeline helps calculate the part of the cycle time from check-in to release. It also reveals the lead time from check-in to each stage of the deployment process, thereby exposing bottlenecks.
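For illustration, the core of the metric reduces to simple timestamp arithmetic once the pipeline records when a feature was decided on and when it reached users. This is a minimal sketch; the function name is invented, and the timestamps come from whatever your tracker and deploy log record.

```python
from datetime import datetime

def cycle_time_days(decided_at, released_at):
    """Cycle time per Humble and Farley: the elapsed time between
    deciding that a feature is needed and releasing it to users,
    expressed here in days."""
    return (released_at - decided_at).total_seconds() / 86400.0

# Example: a feature decided on 1 March and released on 11 March
# has a cycle time of ten days.
```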
External Restrictions and Other Parameters
Sometimes, the bottlenecks playing havoc with your cycle time are external. Subordinating every other process to an external constraint may be the only viable option, so even while the CD process runs along smoothly, deployments can stall.
One way around this is to record not just the total cycle time but also the number of deployments into each environment. This yields an efficiency metric that pinpoints where the issues are and shows how our work affected them. Other diagnostics that warn of potential problems include: number of defects, velocity (the rate of delivery of tested, ready-to-use code), number of builds per day, number of build failures per day, and build duration, among others.
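A minimal sketch of how such diagnostics might be computed from a day’s build records; the record schema (a dict with `passed` and `duration_min` fields) is an assumption for illustration, not the format of any particular CI server.

```python
def build_diagnostics(builds):
    """Summarize a day's build records into the diagnostics mentioned
    above: build count, failure count and mean build duration.
    `builds` is a list of dicts with a boolean 'passed' and a
    'duration_min' in minutes (illustrative schema)."""
    total = len(builds)
    failures = sum(1 for b in builds if not b["passed"])
    mean_duration = (sum(b["duration_min"] for b in builds) / total) if total else 0.0
    return {"builds": total, "failures": failures, "mean_duration_min": mean_duration}
```

Trended day over day, numbers like these flag trouble (a rising failure count, lengthening builds) long before cycle time itself visibly degrades.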
All in all, the case for continuous delivery rests on a combination of visibility, risk mitigation and the responsiveness (cycle time) of the development team.
The software revolution, when it happened, brought clarity and standards to how a software product is produced, delivered and maintained. As ideas and strategies mature, standards form. But, like everything else, a shake-up happens once in a while and something totally new gets created. This is the junction at which enterprises and software houses now find themselves, toying with a movement called DO: DevOps.
For starters, DevOps (DO) has taken the market by storm, as these numbers from a recent survey show:
- 63% of respondents have adopted DO, compared to 50% in 2011, which is a 26% rise in the DO adoption rate
- Increased agility and reliability across the software lifecycle
- Delivery routines run with laser-sharp precision – throughput is higher, yet finely tuned
- Demand for DO keeps increasing across job postings
Today developers use CI (continuous integration) and monitoring processes that help them release builds quickly and efficiently. They need production environments that spin up and shut down at the touch of a button, because experiences are instantaneous and real-time. DevOps is the in-thing in the software industry because it facilitates better application development and response, and is cheaper than other methods. Getting on the DevOps bandwagon means you have a super-efficient, dedicated pipeline that turns out code and updates frequently. This pipeline is not limited to your enterprise; it extends to the third parties you partner with as well, and that’s where the limitations start.
The most critical thing about DO is not just adopting a standard, but ensuring that collaboration and co-ordination between teams and processes happen all the time, and knowing why they are done. The elements of change revolve around People, Process and Technology.
People, because this deals with changes in culture, communication and demographics. A simple way to address this at the outset is a common business objective that trickles down into a clear vision and mission across the board. It also gives you time to pick the right people for the job.
Process, because it lets you apply lean thinking to building solutions; and since the methodology itself is lean and nimble, you get instantaneous feedback all the time, so you begin to see the value in each interaction as it happens.
Technology, because adopting commonly used tools and toolsets boosts productivity: learning curves are minimal and onboarding is fast. Moreover, today’s demanding software environments need production boxes spun up and down as fast as possible, which leads towards Infrastructure as Code (IaC) and software-defined environments.
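To make the Infrastructure as Code idea concrete, here is a toy sketch of the core loop such tools perform: diff a desired environment spec against the current state and emit the actions needed to converge. Real tools like Chef, Puppet or Terraform use far richer resource models; the dict-based spec and the `plan_changes` name are invented for illustration.

```python
def plan_changes(desired, current):
    """Minimal infrastructure-as-code sketch: compare a desired
    environment spec (name -> properties) against the current state
    and return the (action, resource) pairs needed to converge."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))      # resource missing entirely
        elif current[name] != spec:
            actions.append(("update", name))      # resource drifted from spec
    for name in current:
        if name not in desired:
            actions.append(("destroy", name))     # resource no longer wanted
    return actions
```

Because the spec is plain data, the same description can spin an environment up, tear it down, or rebuild it identically, which is exactly what makes software-defined environments repeatable.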
Enterprises going the DevOps way need to coax and convince the service providers they work with of the success of this model, as the providers may not really want to switch over; they may still be happy with traditional outsourcing. The next issue with DevOps is that many large organizations have rigid process flows for software development that must be followed religiously. This clashes with the DevOps style, which is nimble, agile and fast, and large organizations may simply not be able to adapt. There is also cultural and regional learning and unlearning to be done at many levels to build a competent, nimble team, all of which may be difficult to implement across a large enterprise.
Barring these few niggles, adopting DO is almost like putting the software industry on steroids. Going by the way things are, it wouldn’t be surprising if DO becomes the new outsourcing. All the factors currently point to it, and if these standards can be embraced and adopted at a global level, they pave the way for great software to be made and enjoyed, while making the process a win-win for everyone.
The one thing that has doubled and tripled over the last few years is data: data generated by multiple transactions, touchpoints and devices. Though this has created smarter customers, enterprises are still losing out because they are unable to make sense of all the data around them. What do customers want? Where are they heading next? What can we do to influence their next purchase? Most importantly, how can we create painless, value-rich experiences that ‘wow’ them?
Read from our selected list to find out how new innovations in technology are scripting customized, personalized experiences.
- Werner Vogels, CTO of Amazon.com, presents their new BI service, which helps derive insights from every existing data source.
- Why do only a few brands stand out while others go unrecognized? By understanding how customers categorize and prioritize their needs, the co-founders of Play Bigger Advisors identify the right mechanisms to steer organizations towards new markets.
- Lately, Docker has become the talk of the town. Craig was astonished by the accelerated uptake of Docker and application containers in production workloads.
- It is evident that this era belongs to mobile; these investment figures from IT giants on mobile apps affirm it further.
- Servers and database management are now seen as wasteful expenditure, supplanted by DBaaS, which handles and optimizes database management seamlessly.
We’ve been working with dozens of customers who come to us with unique problems, and we often help them in unique and effective ways, whether that means building a platform or migrating applications to the cloud. A common misconception among the organizations we meet is that migrating to the cloud makes them agile. While agility is a significant selling point for the cloud, simply being on the cloud won’t make you agile.
Organizations want to bring applications to the cloud and manage them in the cloud
In our previous post, we spoke at length about the DevOps movement which has enabled organizations to shrink months-long procurement cycles and develop the infrastructure they need in a matter of hours, or even minutes. At first glance, we can all agree that some traditional ops activities are starting to fade away:
- With IaaS in the picture, we no longer need to rack servers in-house or swap hard drives.
- PaaS does away with configuring firewalls and installing databases or web server software.
- Configuration management and automation free us from manually installing applications and patches or publishing SSH keys.
While many organizations have benefited from this automation, greater gains can be realized when the application development and hosting platform provides capabilities further up the stack. In other words, more abstraction is essential to derive more value out of cloud computing.
The way to do that, making developers’ lives easier along the way, is PaaS: decoupling applications from the operating system. In essence, DevOps and PaaS represent two different paradigms for delivering applications to the cloud.
- DevOps – DevOps takes an automation approach: the installation, configuration and deployment of the application stack are scripted
- PaaS – PaaS works by abstracting the details of the cloud infrastructure from the developers
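A tiny sketch of the DevOps side of that contrast, where each install/configure/deploy step is an explicit scripted command. The commands are placeholders, not a real playbook, and the `run` hook exists only so the steps can be inspected or swapped for a real executor.

```python
def deploy(run=print):
    """Sketch of the DevOps-style scripted approach: installing,
    configuring and deploying the stack as explicit, ordered
    commands. `run` is called once per step (placeholder commands)."""
    steps = [
        "apt-get install -y nginx",       # install the stack
        "copy nginx.conf /etc/nginx/",    # configure it
        "systemctl restart nginx",        # deploy / activate the change
    ]
    for step in steps:
        run(step)
    return steps
```

The PaaS paradigm, by contrast, hides steps like these entirely: the developer pushes code and the platform decides how the stack is provisioned underneath.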
While this may seem like developers could do away with sysadmins once and for all, typical sysadmins argue that they can build up to 75% of PaaS functionality with DevOps tools like Chef without giving up any systems-architecture flexibility.
The NoOps Movement
In his blog post, Adrian Cockcroft, director of cloud systems architecture at Netflix, described having little need for operations staff, partly because the company’s shift to the cloud automated many former ops functions. In another instance, PaaS provider AppFog has argued that the emergence of PaaS offerings eliminates the need for most operations within the organization, enabling a NoOps culture. AppFog’s infographic on this stance was published on GigaOm.
It is pretty obvious by now that NoOps often gets linked with PaaS. However, we have already established that PaaS cannot and will not “solve all problems” or “enable blissful ignorance”. Several experts, including James Urquhart and John Allspaw, have agreed that the cloud hides certain kinds of problems only to replace them with new and more interesting ones. AWS, Heroku or Azure can have outages. The PaaS platform can’t magically scale applications if there is IaaS congestion. If an app leverages another SaaS service, we’d still be constrained by the operational excellence of that service.
Melding DevOps and PaaS Together
The answer, however, isn’t to choose between these seemingly competing paradigms; we don’t need to. Simply make DevOps the foundation of your PaaS infrastructure. This way, sysadmins can provision services, choose their stack, pick their cloud, and package the entire thing as an environment the developers can treat as a black box. Handing a significant part of the operational responsibility to developers makes it even more imperative that the PaaS is infused with operational tools pre-baked by the sysadmins. The control of DevOps and the productivity of PaaS, rolled into one, is critical to business agility and operational efficiency.
In fact, with cloud computing the Ops role is not going away; it stays in the background, offering an interface that developers can manage themselves. The symbiotic relationship between DevOps and PaaS transforms infrastructure into an abstracted application layer that stands ready to initiate services on demand.
Docker is the new kid on the block, reshaping application development in a big way. Why? Unlike VMs and hypervisors, which consume a lot of memory and file-system space for virtualization, Docker containers share the host kernel’s resources. This means there’s no bloated, unwanted code or libraries in your package. You can focus all your creative energy on getting the application experience right while Docker does the heavy lifting in the background. It has made application development as easy as Build, Ship, Run; Docker containers do everything else.
The architecture of a Docker container differs from that of a virtual machine instance. A VM abstracts the entire operating system from head to toe, so if you run multiple VMs on a single box you have multiple such abstractions, and memory and infrastructure go to waste. Docker containers, by contrast, package all the necessary files, code and resources within the container itself and depend only on the shared kernel for execution. This means a substantial decrease in system overhead: startup is much faster, applications are more responsive and reliability is far more predictable. But all things in life come with fine print, right? Though Docker has accelerated DevOps in a big way, containers are limited to the host’s operating system; you can’t emulate multiple operating systems on the same box. But when the choice is between running more applications on a server and running more server types, the former wins, which is why financial, data-warehousing and hosting companies are making a beeline to reap savings right from the start. Containers are also safe and secure, and easier to maintain. Knowing this, Docker partners with other container players such as Canonical and Red Hat to build safe, secure containers that are easier to maintain and deploy.
In a day and age when enterprises are struggling to make applications and workloads more portable, Docker offers a sure-fire way to make applications run virtually anywhere with little assistance. And the best part: containers love the cloud, so deploying with containers on the cloud is easy too. Docker works with most DevOps tools, such as Puppet, Chef, Vagrant and Ansible. It simplifies tasks like setting up multiple live servers within a local instance, testing projects across different server configurations and deploying to multiple environments, irrespective of the local host.
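The Build, Ship, Run cycle mentioned above can be sketched as three Docker CLI invocations driven from a script. The image name is a placeholder, in practice each step would be a separate pipeline stage, and the `runner` hook is an illustrative seam so the commands can be inspected without actually invoking Docker.

```python
import subprocess

def build_ship_run(image, runner=subprocess.run):
    """The Build, Ship, Run cycle as three docker CLI calls.
    `runner` receives each command as a list of arguments; by default
    it executes them, but tests can capture them instead."""
    commands = [
        ["docker", "build", "-t", image, "."],  # Build the container image
        ["docker", "push", image],              # Ship it to a registry
        ["docker", "run", "--rm", image],       # Run it anywhere Docker runs
    ]
    for cmd in commands:
        runner(cmd)
    return commands
```

Because the image carries its files, code and dependencies with it, the same three commands behave identically on a laptop, a test box or a cloud host, which is the portability argument in miniature.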
In closing, Docker accelerates developing, managing and deploying applications, helping developers create and deploy containerized applications on the fly. That should excite anyone who wants to see cost savings go up.