Business process management is now way easier!



Business Process Management (BPM) provides a workflow framework that helps business analysts and middle management create business workflows that are eventually executed as processes.

Workflow platforms are built from many components and are generally open source. One such product is Activiti. Activiti is a lightweight workflow and Business Process Management (BPM) platform targeted at business people, developers and system admins. Its core is a fast and rock-solid BPMN 2.0 process engine for Java. It is open source and distributed under the Apache license. Activiti runs in any Java application, on a server, on a cluster or in the cloud. One engaging benefit of Activiti is that it lowers the risk of potential failures and reduces manual intervention compared to traditional approaches.

Activiti is an Apache-licensed business process management (BPM) engine. The core goal of such an engine is to take a process definition comprised of human tasks and service calls and execute those in a certain order, while exposing various APIs to start, manage and query data about process instances for that definition. Activiti uses BPMN 2.0, which makes communication and understanding between the business team and developers easier, and that is an added advantage of the Activiti workflow.
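To give a sense of what the engine consumes, here is a minimal sketch of a BPMN 2.0 process definition. The process id, task name and assignee below are invented for illustration:

```xml
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             xmlns:activiti="http://activiti.org/bpmn"
             targetNamespace="http://example.org/demo">
  <!-- A single human task between start and end -->
  <process id="leaveRequest" name="Leave request" isExecutable="true">
    <startEvent id="start"/>
    <sequenceFlow id="flow1" sourceRef="start" targetRef="approveTask"/>
    <userTask id="approveTask" name="Approve request"
              activiti:assignee="manager"/>
    <sequenceFlow id="flow2" sourceRef="approveTask" targetRef="end"/>
    <endEvent id="end"/>
  </process>
</definitions>
```

Once a definition like this is deployed, the engine can start process instances for it and query their state through its APIs.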

Activiti supports BPMN 2.0 (Business Process Model and Notation). BPMN 2.0 processes in Activiti run natively in Java. Activiti is a multi-component system, with each component cut out for a particular role. The components include:

1. Activiti Explorer:

Activiti Explorer is a web application that uses the Activiti APIs and showcases the features of Activiti. Activiti contains a demo setup that will get this web app up and running in a matter of minutes. It usually runs in a Tomcat server: deploy the Activiti WAR into the webapps folder of the Tomcat installation. The demo setup ships with demo users and models, and the application includes task management, process instance inspection, management features and reports based on statistical history data.

2. Activiti Designer:

The Activiti Designer is an Eclipse plugin that lets you model BPMN 2.0 workflows from within your IDE. It also has built-in support for the Activiti-specific extensions, so you can use the full potential of both the processes and the engine.

3. Activiti Modeler

The Activiti Modeler can be used to model BPMN 2.0-compliant processes graphically in a browser. The process files are stored by the server in a database model repository. The Activiti Explorer web app has the Activiti Modeler built in for creating workflows and models.

4. Activiti Engine

It is the heart of Activiti: a Java process engine that runs BPMN 2.0 processes natively. It exposes the Activiti APIs used to deploy and execute BPMN 2.0 processes. The Activiti Engine is simply a JAR that you include when developing workflows with Activiti, and it exposes the engine's functionality to your application.
In the next post in this series, we will see how Activiti is used in a business scenario.


Written by Sandeep.

Sandeep is a Research Associate at Qruize Technologies specializing in Java Development.


Reusable components speed up development time

With the number of mobile devices rising, there is increasing pressure on developers to churn out applications daily. In an already crowded mobile market, these applications get little time and priority from the average user. The rate at which apps are being churned out has made the mobile marketplace a volatile playground where only the fittest survive.

To help developers stay ahead of the race and still create applications of value, here are some components that can be reused in any project, accelerating development time and time to market. These modules are helpful for kick-starting any Android-based project.


This module allows an app to log debug and error messages with details about the line number, the method name, the class name and the package it belongs to. It appends each log entry, with a timestamp, to a user-defined file.
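As a rough plain-Java sketch of the idea (the class and method names here are invented, not the module's actual API), the caller's class, method and line number can be pulled from the current stack trace and appended to a file with a timestamp:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.LocalDateTime;

/** Minimal sketch of a file logger that records caller details. */
public class FileLogger {
    private final Path logFile;

    public FileLogger(Path logFile) {
        this.logFile = logFile;
    }

    /** Appends a timestamped line with the caller's class, method and line number. */
    public void log(String level, String message) {
        // Index 0 is this log() frame; index 1 is the direct caller.
        StackTraceElement caller = new Throwable().getStackTrace()[1];
        String line = String.format("%s %s %s.%s:%d - %s%n",
                LocalDateTime.now(), level,
                caller.getClassName(), caller.getMethodName(),
                caller.getLineNumber(), message);
        try {
            Files.writeString(logFile, line,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

On Android the same entry would typically also be routed through android.util.Log; the file append is what makes logs survive app restarts.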


This module makes network requests such as GET, POST, PUT and DELETE, given a URL, optional parameters and optional headers, and it exposes two listeners: one for the response and one for errors.
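A minimal sketch of such a request helper in plain Java (the class name, method signature and listener shapes are invented for illustration; a real Android module would run this off the main thread):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.function.Consumer;

/** Sketch of a reusable request helper with response and error listeners. */
public class HttpHelper {
    public static void request(String method, String url,
                               Map<String, String> headers, String body,
                               Consumer<String> onResponse, Consumer<Exception> onError) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod(method);           // GET, POST, PUT, DELETE, ...
            if (headers != null) {
                headers.forEach(conn::setRequestProperty);
            }
            if (body != null) {                      // optional request body
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(body.getBytes(StandardCharsets.UTF_8));
                }
            }
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                StringBuilder sb = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) sb.append(line);
                onResponse.accept(sb.toString());    // success listener
            }
        } catch (Exception e) {
            onError.accept(e);                       // error listener
        }
    }
}
```

The two-listener shape is what lets calling code stay simple: pass one callback for the body on success and one for the exception on failure.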


This module allows an app to display a splash screen or image for a few seconds before loading the actual app.
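The pattern behind it can be sketched in plain Java (the names below are illustrative; on Android the delay is usually scheduled with Handler.postDelayed instead):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Plain-Java sketch of the splash pattern: show one screen, switch after a delay. */
public class SplashDemo {

    public static void runSplash(Runnable showSplash, Runnable showMain, long delayMillis) {
        showSplash.run();  // show the splash screen/image immediately

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(1);

        // After the delay, swap in the real app screen.
        scheduler.schedule(() -> {
            showMain.run();
            done.countDown();
        }, delayMillis, TimeUnit.MILLISECONDS);

        try {
            done.await();  // block only so the sketch is easy to run; a UI would not block
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        scheduler.shutdown();
    }
}
```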


This module creates different views dynamically at run time; for example, it can create an ImageView, a VideoView, etc.

Here are some screenshots which detail the component in use:

Login Screen


Splash Screen


Interactive Menu


Top 10 IT predictions for 2016


As we come to the close of yet another year, one that saw rapid strides in technology across every sphere of life, here are ten exciting trends that will rule 2016.

  1. The Device Mesh

This refers to the growing set of endpoints through which customers access applications, interact with businesses, exchange data and store information. The device mesh includes everything from mobile phones to IOT-based devices. Today these devices connect to their back-end networks but often work in isolation from each other. This will change in due course, giving users more freedom and convergence.

  2. Ambient user experience

There is no substitute for exemplary customer experience, and the channel it arrives through is becoming irrelevant. With deep penetration of the device mesh, developers gain fine control over how their customers revel in the experience. Developers can alter the way customers think and feel, and present brands in exciting new ways. With IOT heating up the personalization space, developers can now marry electronics, devices and data into a formidable, consistent platform.

  3. 3D printing materials

3D printing is taking off in a big way. It means lower production costs and unlimited customization, and its scope extends across aerospace, medicine, the military, the energy sector and more. As devices become smaller and scale to provide advanced functionality, composite parts that can be easily manufactured, assembled and integrated are the order of the day. 3D printing is heading exactly that way.

  4. Information of Everything

The device mesh produces data at every touchpoint. This is proliferated across all devices that form the mesh. The goal here is to make ‘sense’ out of the information goldmine. That is what Information of Everything tries to address. It seeks to link data from different sources and produce meaningful information from them.

  5. Advanced Machine Learning

This is an area of great interest. It envisages machines that automatically learn the environments they are in. Deep neural networks (DNNs) enable hardware- and software-based machines to learn their environment. A precursor to this technology is already in use, in the form of self-healing networks.

  6. Autonomous agents and things

Advanced machine learning gives rise to autonomous agents that can function on their own. Typical examples are Google Now, Cortana and Siri, which use these frameworks to interpret information received through the digital mesh and act on it. This matters because it directly impacts customer behaviour, personalization and more.

  7. Adaptive security architecture

The complexity of running a digital ecosystem exposes it to threats and vulnerabilities. Simply relying on perimeter access controls and rule-based security will not help in the future. The focus will shift to making applications safe at their own layer. Enterprises also need to analyze user and entity behaviour to identify acceptable patterns and weed out unacceptable or threatening ones.

  8. Advanced system architecture

Security is on everybody's mind. With the accelerated rate of device adoption, it is a critical layer that cannot be ignored. Using field-programmable gate arrays, it is possible to build security systems that mimic the brain. Their light architecture lets them be integrated into smaller form factors, with lower power consumption and greater efficiency.

  9. Mesh app and service architecture

With technology disruptions taking place on a very large scale, large legacy monolith systems are giving way to smaller, componentized systems. These systems are easier to manage, troubleshoot and maintain. Microservices play a key role in developing agile systems deployed on cloud or mobile platforms, and container technology helps with faster rollout of software in microservices environments.

  10. IOT platforms

IOT platforms complement the digital mesh and its underlying device makeup. IOT platforms are what IT folks need to make IOT a reality. This basically boils down to managing, securing and integrating technologies and standards that power devices and data.

At Qruize, we pride ourselves on being at the helm of innovation. We’ve handled projects that touch upon each of these emerging trends in some way or the other. If you have an idea, we can build it for you – just hit that button already!

5 golden rules for easy IT…



“Press F1 for help”

This is a very well-known statement. But we are in a time where that help needs real help. In today's hybrid environment, there are enterprises still running legacy equipment, third-party vendors who still produce adaptors for integration, virtualization, and the Cloud. Through it all, the IT team still has to manage the load. It's important to understand where the holes are and plug them before things fall apart. So here are some simple ideas that can be implemented quickly…

  1. Backup regularly and monitor them

The function of IT in an enterprise has largely been optimized with the Cloud framework. But are you operating on the right strategies for backup? It’s important not to just back up data, but to also think about metadata. That’s data about the data.

Metadata can help you out of serious trouble, provided it is recorded and kept somewhere safe, so make sure this simple fix is part of your policies the next time you back up. Backups also skip active files, so if colleagues have left for the day, have a policy that closes all active files so that everything is backed up promptly!

  2. Maintain a cohesive team

Whatever the process or goal, it is people who drive it. It becomes paramount that people are comfortable with what they do and how they do it. If IT teams function as mere post boxes, their overall value diminishes. This concept now stretches even further with methodologies like DevOps, where culture barriers need to be broken.

  3. Putting automation to work

IT management, which has become a lot simpler through outsourcing, still has many points that are monitored daily. Questions also arise about what is being monitored, whether it is useful, whether it is needed, and so on. Automation can simplify and restructure this. Identifying ways to automate across every touchpoint will reduce failures and increase productivity, and this has to happen across everything IT stands for.

  4. Security concerns

Security is essential at all times. There is no doubt that enterprises are wary, and find it hard to move quickly to something virtual when they were used to physical boxes all along. Again, careful planning is the key to preventing an attack or avoiding becoming a victim. Enterprises should not leave this perimeter unchecked: investing in the right security partner, one who safeguards your business interests, is the best bet.

  5. Teams need to break culture barriers

The latest thing to join the software bandwagon is DevOps. As the software industry progresses, newer models of software delivery crop up. DevOps is no stranger, but the road for process and engineering teams is a long one, and one that cannot change fast. Most software companies experience this as they grapple with DevOps, CI/CD and the like. One efficient approach is to break all barriers that hinder communication, interactions and meetings. Over time, the team builds up and comes together for everything. Breaking down barriers gives everyone on the team an instant connection with each other, facilitating greater responsiveness and agility…the thing that is needed today.

Whether you are a startup organization or a development house, these steps will help you to craft better software experiences for your users.

Towards complex software systems


Recent years have seen an exponential increase in software. For everything we can think of, there is software that helps with or automates tasks. But as the scope for development increases, so do the risks. Software complexity keeps rising, because too many components are in orchestration with each other. The most important question is how to lower the risk and friction between components or modules.

It is not enough to create software just to address a problem. Software truly grows when it outlives the purpose for which it was created. This effect has been felt with the Web and with cloud environments. The hyperlink, conceived by Tim Berners-Lee in 1989 as a way to share information, soon became the cornerstone for software creation all around the world. Suddenly software mushroomed everywhere and information could be exchanged effortlessly. The same concept powers the Web: individual nodes process information by themselves and are independent from the others in the group. This is how software has scaled, and this is how it should scale in the future. Software always scales through federation and widespread adoption, and open source is an excellent movement built on exactly that. Software deployments have also become more complex in the last few years, thanks to collaboration fuelled by social media and the Internet. This opens a whole new chapter in developing new-age software for the future.

There is a friction component and a risk component involved in software. Friction happens because, over time, software modules get complicated, which slows things down. The risk is ever-present because various components come together to form a massive system; unless you check, check and recheck, you will always get a surprise. Though there are methods that address these problems, collaboration has changed the game all over again: you have multiple check-ins, check-outs, builds and forks. This is where you have to be super critical about releases that lower both friction and risk.

Two interesting concepts help sort this problem out: one is microservices, the other is containers. A microservice is created by a small team that builds, deploys and manages the service end to end. Because microservices are essentially self-contained modules, they lower friction, but they increase risk as the number of moving parts grows. Containers, on the other hand, reduce risk but can increase friction: you build once and the code runs anywhere, any number of times, because the environment is consistent. This also helps with better infrastructure utilization.

Continuous delivery is the magic wand we need to control both risk and friction when developing software; it kills two birds with one stone. We want not only to lower risk but to decrease friction as well. Code changes so fast that end-to-end testing alone may not be the right way to contain delivery schedules. CD refines our release mechanism every time we deliver, taking risk and friction out of the equation. We have all seen that large, complex systems give you a final moment of surprise when testing for releases, because you can never be sure everything will work as expected from the get-go.

So in short, if stable releases are the order of the day, continuous delivery is your go-to platform.

Why DO?


DevOps – connecting Development, Quality Assurance and technical operations together so they work as a unit.

Continuous delivery is the platform on which DevOps is enabled. It ensures an all-hands approach and removes the barriers that prevent teams from working together. This is very different from the earlier models, where software development happened in silos.

Continuous delivery is a developmental exercise in itself. Teams that were previously separate now work together as a single unit. Most of what DO is today evolved from agile development practices, fine-tuned over a period of years. It is not a hard and fast rule that DO should be adopted for every need, but if your release cycles are many and hectic, DO could be the vehicle that steers your development engine the right way.

Developers learn to check in their code multiple times a day. Each check-in is auto-verified before commit and then shared with the team. Since the status of every check-in is visible quickly, errors are corrected sooner and stable releases ship faster. DevOps targets product delivery, continuous testing, quality testing, feature development and maintenance releases, and automation fuels most of these initiatives. This is unlike models where a development release is rolled out, goes to QA, and comes back later; by the time bugs are filed, the developers have moved on.

DevOps ensures low failure rates for delivered software. Since DO integrates all the pillars of software development into a single entity, events are tracked granularly and development environments become more stable. This gives developers more control; they can concentrate on creating more application-centric releases. Infrastructure and IT management become more efficient and reliable as regular, consistent, smaller releases weed out known problems.

DevOps is necessary for today's software development because most of the industry rides on one powerful word: collaboration. Customer preferences and perceptions are changing, and software standards change to accommodate as many of them as possible, so that there is value when you release software for the masses. This dynamic, hard-hitting world demands an equally dynamic platform. Moreover, the Internet has already accelerated collaboration to a new high. Time is of the essence: the faster you release, the better positioned you are. And, as with every other revolution, this is only the beginning…

Is DevOps the new outsourcing?

DevOps or outsourcing


The software revolution, when it happened, brought a great deal of clarity and many standards to how a software product must be produced, delivered and maintained. As these ideas and strategies grow stronger, standards are formed. But, like everything else, a shakedown happens once in a while and something totally new gets created. This is the junction at which enterprises and software houses now find themselves, toying with such a movement: a movement called DO, or DevOps.

For starters, DevOps (DO) has taken the market by storm, as these numbers show from a recent survey:

  1. 63% of respondents have adopted DO, compared to 50% in 2011; that is a 26% increase in the DO adoption rate
  2. Increased agility and reliability across the software lifecycle
  3. Delivery routines run with laser-sharp precision; throughput is higher yet finely tuned
  4. Demand for DO keeps increasing across job postings

Today developers use CI (continuous integration) and monitoring processes that help them release builds fast and efficiently. They need production environments that spring up and shut down at the touch of a button, because experiences are instantaneous and real-time. Running DevOps is the in-thing in the software industry today because it facilitates better application development and response, and is cheaper than other methods. Getting on the DevOps bandwagon means you have a super-efficient, dedicated pipeline that turns out code and updates on a frequent basis. This pipeline is not limited to your enterprise but extends to the third parties you partner with as well, and that is where the limitations start.

The most critical thing about DO is not just adapting to a standard, but ensuring collaboration and co-ordination between teams and processes happens all the time and knowing why it is done. The elements of change would revolve around People, Process and Technology.

People because this deals with a lot of culture/communication/demographic changes. A simple way to fix this at the onset is to have a common business objective trickling down to clear vision and mission across the board. This opportunity also gives you the time to pick the right resources for the job.

Process because it helps you to apply lean thinking to creating solutions, and as the methodology itself is lean and nimble, you get instantaneous feedback all the time. So you begin to see value in each of your interactions as they happen.

Technology because implementing commonly used tools and toolsets boosts productivity, keeps learning curves minimal and makes onboarding fast. Moreover, today's demanding software environments need production boxes to be spun up and down as fast as possible, moving toward IaC (Infrastructure as Code) or software-defined environments.

Enterprises going the DevOps way need to coax and convince the service providers they work with of the success of this model, as those providers may not really want to switch over; they may still be happy with traditional outsourcing. The next issue with DevOps is that many large organizations have rigid process flows for software development that must be followed religiously. This mode of operation clashes with a DevOps style, since DO is nimble, agile and fast, and large organizations may simply not be able to adapt to the needs of a DevOps environment. There is also cultural and regional learning and unlearning to be done at many levels to build a competent, nimble team, all of which may be difficult to implement across a large enterprise.

Barring these few niggles and issues, it’s almost like adopting DO means that the software industry is on steroids. And going by the way things are, it wouldn’t be surprising if DO becomes the new outsourcing. Currently all factors lead to it, and if these standards can be embraced and adopted at a global level, they pave the way for great software to be made and enjoyed. And, at the same time making that process a win-win for everyone.