In addition to the Server and Cloud versions of Confluence, Bitbucket, Jira Service Desk, and Jira Software, Atlassian also offers Data Center deployments. Data Center is a self-hosted deployment option. The main technical difference is that Data Center runs multiple application servers in parallel, whereas Server runs only one. Customers often ask us about Atlassian Data Center vs Server – and, more specifically, which is the better option.
In this post, we explore the differences between Server and Data Center and help you determine which is best for you.
The answer depends on the size of the deployment and its business importance.
The benefits of Atlassian Data Center
Atlassian Data Center is designed for large-scale, mission-critical deployments. It costs more than Server, but in return it provides high availability, performance at scale, deployment flexibility, and zero-downtime upgrades.
What do these benefits mean?
Performance at scale: Data Center permits multiple application servers, so for a given server specification, a Data Center instance can handle more users than a Server instance. Data Center instances can also be built to spin up or shut down application servers as demand fluctuates, a capability known as autoscaling.
High availability: Because Data Center runs multiple application servers, if one fails, the others keep the service running. Server has only one application server; if it fails, the whole application goes down.
Deployment flexibility: Atlassian has developed templates for building complete autoscaling instances of Jira Data Center and Confluence Data Center in Amazon Web Services (AWS) and Microsoft Azure. These templates (called ‘Quick Starts’ in AWS) let you create, within minutes, all the required application and database servers, together with their configurations and ancillary components (such as load balancers), for a complete Data Center instance.
Zero-downtime upgrades: Data Center allows you to upgrade your Atlassian application without taking the system offline. Servers are upgraded one at a time; while one server is being upgraded, the others keep the service (e.g. Jira) running. A self-hosted Atlassian application normally needs at least one version upgrade per year to stay current, so the more expensive downtime is for your organisation, the more valuable zero-downtime upgrades become.
Factors to consider
In choosing between Atlassian Data Center and Server, the main factor is often the additional cost of Data Center versus the cost of the outages it would prevent. A secondary factor is whether the required performance can be delivered by a single server. Downtime becomes more of a problem the larger an organisation grows. A Server outage can be caused by something as simple as a lack of processing power, storage or memory, yet an hour of Jira downtime, for example, could have significant financial and operational consequences.
Atlassian Data Center costs more than Server, but when the average annual cost of downtime approaches that additional cost, or the volume of users begins to affect performance, it is time to consider moving to Data Center.
The point at which organisations choose to move to Data Center varies. According to Atlassian, 45% of current Data Center customers have moved from Server to Data Center at the 500 or 1,000 user tiers – but this can vary from product to product. For Jira Service Desk, for example, Atlassian found that 50% of Data Center customers upgraded when they reached 50 users.
The cost of downtime
Do you know what an hour of downtime costs you?
You can normally work it out by counting how many of your employees rely on Atlassian products to get their jobs done, estimating how much their productivity would drop during an outage, and multiplying by their hourly cost.
The cost of your employees’ lost time sets a lower bound on the cost of an outage. The true cost may well be higher, because of factors such as:
- Loss of business
- Reputational damage
- Regulatory fines in certain industries
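As a rough illustration, the lower-bound estimate described above can be sketched in a few lines. All figures here (headcount, hourly cost, productivity loss) are hypothetical assumptions for illustration, not data from this article.

```python
# Hypothetical back-of-envelope estimate of the cost of one hour of
# downtime. Every figure below is an illustrative assumption.

def downtime_cost_per_hour(affected_employees: int,
                           loaded_hourly_cost: float,
                           productivity_loss: float) -> float:
    """Lower-bound cost of one hour of downtime, counting only lost
    employee time (not lost business, reputation or fines)."""
    return affected_employees * loaded_hourly_cost * productivity_loss

# Example: 800 employees rely on Jira, at a loaded cost of £45/hour,
# and an outage cuts their productivity by 60%.
cost = downtime_cost_per_hour(800, 45.0, 0.60)
print(f"Estimated cost of one hour of downtime: £{cost:,.0f}")
```

Comparing a year's expected downtime at this rate against the additional licence cost of Data Center gives a simple break-even check for the decision discussed above.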
The cost of poor performance
When an application runs slowly, users become less productive. Studies show that a user will switch to another activity if a system takes more than about twenty seconds to respond, although frustration sets in much sooner. That frustration can erode employees’ confidence in the organisation. So, when it comes to the Atlassian Data Center vs Server debate, performance is a key consideration.
The rapid growth of many teams using Atlassian products can cause a loss of performance that creeps up on IT administrators: at one point the system is working fine; six months later it is barely usable because the additional volume of users has slowed it down so much.
Atlassian Data Center not only provides better absolute performance; its autoscaling architecture also helps to handle growth, so that a transition from, say, 2,000 users to 4,000 does not present a problem. At peak times the system simply spins up as many servers as required.
The cost of poor performance is harder to quantify than the cost of outages, but it is clearly greater in larger organisations, and has the potential to be almost as serious as an outage.
Struggling to decide between Atlassian Data Center vs Server?
Automation Consultants is a Platinum Atlassian Solution Partner with a wealth of enterprise deployment experience, and we will gladly help you evaluate which Atlassian deployment model is right for you.
Or, for further information, download our free Guide to Atlassian Data Center Apps.