Author Archive

What is IaaS – Infrastructure as a Service

Posted by Adrien Tibi

Infrastructure as a Service is one of three levels in the cloud computing stack model commonly used to describe the different types of service that hosting companies can provide.

IaaS is the bottom, or most basic, layer of the cloud computing stack model and describes a situation where a provider supplies a customer with just the infrastructure required to run their application(s). This differs from Platform as a Service (PaaS), which includes things like development tools, runtime environments and ready-made databases, and Software as a Service (SaaS), in which users are given access to a fully functional application.

The IaaS model lets users purchase the building blocks of IT infrastructure, such as servers, storage and networking, without investing in the hardware, and the environment in which to operate it, themselves. As with all ‘as a Service’ cloud models, customers benefit from on-demand, monthly, utility-style pricing and the flexibility to increase or decrease the size of their systems.

The definition of ‘infrastructure’ in IaaS is, however, open to interpretation.

‘I’ is for infrastructure, or is it?

Frequently, IaaS is defined as the provision of virtual servers exclusively, in either public or private cloud configurations. Some sources include both virtual and dedicated, physical servers in their definition.

Our view is that the true interpretation of ‘infrastructure’ is the underlying hardware, the ‘tin’ if you like, that powers the cloud and your applications.

While the common denominator in all definitions is access to core resources, such as servers, with a virtual IaaS you have neither access to nor knowledge of the underlying infrastructure: an example of PaaS, not IaaS, in our book. Industry observers and media have noted that the distinction between IaaS and PaaS has blurred in recent years with the introduction of new services and models, but this fundamental misappropriation of the term IaaS has been around since the very beginning.

True IaaS is bare metal cloud

True Infrastructure as a Service has become available to a far wider range of businesses in recent years thanks to advances in the area of bare metal cloud, where the provisioning and management of dedicated infrastructure has become highly automated. Ease of use, scalability and management of dedicated (bare metal) resources has now reached cloud levels of convenience. Hence the name.

The advantage of using bare metal cloud over virtual versions of IaaS is that you have complete control over the system architecture. This means you are free to choose how servers are used, as dedicated or running hypervisors, the application or VM density and every aspect of how they are clustered and networked. You are also free to change this at any time, adapting your bare metal cloud resources to perfectly meet the shifting needs of your business.

Ready to deploy on bare metal? Create your free account and start configuring your bare metal servers here.

IaaS, PaaS, does it really matter?

No, not really. What matters is that you get the right solution for your needs – the right combination of power, performance, cost and reliability. Virtual IaaS (PaaS) infrastructure solutions typically come with access to a portfolio of optional extras that can be used to build a complete solution on virtual machines. Bare metal cloud meanwhile gives you the option to control every aspect of your stack and squeeze every drop of value from your IaaS investment.

Increase SaaS profitability with IaaS

Posted by Adrien Tibi

Adopting Infrastructure as a Service (IaaS) for your SaaS business could dramatically reduce costs, increasing customer lifetime value and helping you achieve or grow profitability.

SaaS profitability is commonly assessed in terms of the ratio between Customer Acquisition Cost (CAC) and customer Lifetime Value (CLV), the rule of thumb being that a CLV > 3x CAC is the sign of a healthy SaaS business.

Clearly then, increasing CLV and reducing CAC leads to greater profitability per customer and a more profitable business.

Stuck for ideas on how to make a profit in SaaS? If so, download our SaaS profitability guide for hints and tips.

If you now go and look for insight into how to increase CLV, the emphasis will nearly always be on increasing the revenue per customer through service expansion or clever marketing tricks. But buried in the calculation of CLV is the cost of servicing each customer and, within this, the cost of hosting. This is potentially far easier to control and could mean the difference between positive and negative profitability per customer.
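To make the point concrete, here is a minimal sketch (in Python, with entirely hypothetical figures) of how the hosting cost per customer feeds into CLV and the CLV > 3x CAC rule of thumb:

```python
# Hypothetical, illustrative figures only.
monthly_revenue_per_customer = 100.0    # subscription price
gross_margin_before_hosting = 0.85      # margin before hosting costs
avg_customer_lifetime_months = 36
cac = 700.0                             # customer acquisition cost

def clv(monthly_hosting_cost: float) -> float:
    """Lifetime value: monthly profit after hosting, over the customer lifetime."""
    monthly_profit = (monthly_revenue_per_customer * gross_margin_before_hosting
                      - monthly_hosting_cost)
    return monthly_profit * avg_customer_lifetime_months

for hosting in (20.0, 10.0):            # e.g. before and after moving to cheaper IaaS
    value = clv(hosting)
    print(f"hosting {hosting:.0f}/mo -> CLV {value:,.0f}, CLV/CAC = {value / cac:.1f}")
```

In this toy model, halving the hosting cost per customer lifts the CLV/CAC ratio from roughly 3.3 to 3.9 without touching revenue or CAC at all.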

As Dave Key writes in The Imperative to Reduce the Cost of SaaS Service, the opportunity to improve profitability by reducing the cost of servicing customers is often overlooked, while much attention is paid to customer acquisition and the product’s ability to generate revenue – not that these should be forgotten, of course.

Key goes on to say that a core component in service cost reduction is hosting efficiency and that this can be maximised by optimising your choice of platform and vendor to closely match your business’ requirements and capabilities.

The hosting stack and hosting efficiency

Where you decide to operate on the hosting stack directly affects the costs you incur. Higher levels of the stack reflect greater value added by the vendor, in terms of management and/or development environments, and thus higher unit costs. For example, Platform as a Service (PaaS), with its added development tools, will cost you more than the simpler Infrastructure as a Service (IaaS).

Maximising hosting efficiency isn’t just a case of choosing the platform with the lowest cost per unit of compute, however. While stepping down the stack invariably bestows greater control and lower costs, it also comes with a corresponding administrative burden that must be managed by your devops teams.

If your team is comfortable with, or even desires, greater control over the infrastructure powering your application, moving down the stack is a sound choice. Taking on hosting administration responsibilities that your teams can’t handle, however, could lead to problems with availability or worse, security, and should be avoided.

IaaS – the bottom of the stack

With IaaS, your vendor provisions the resources required to run your application, including compute, storage, networking, etc. Meanwhile, you have the freedom to configure these resources as you wish. Your developers can fine tune IaaS to the needs of your application and its users, all while keeping platform costs to a minimum.

If your devops teams are capable of architecting and orchestrating the underlying resources you need to deliver your application, IaaS will offer you the lowest cost point and make a greater contribution to profitability than other platforms.

Virtual vs. dedicated – the rise of bare metal cloud

The term IaaS frequently describes virtualised service offerings. While assuring you a lower cost point than PaaS solutions, virtual IaaS still incorporates a layer of abstraction, which comes at a cost and leaves some control in the hands of the vendor – machine density for example.

Bare metal cloud, which has been made possible through advances in automated provisioning, is, in a sense, a true IaaS wherein you are provided with the physical, dedicated servers required to build your desired environment.

On a bare metal cloud you have absolute control over every aspect of your hosting efficiency, right down to machine density, and can therefore be sure of optimising cost and profitability.

Conclusion

If you want to maximise the profitability of your SaaS business and your devops teams are up to the challenge, adopt IaaS (or, even better, bare metal cloud) to reduce your hosting costs and the costs of servicing each customer.

By managing your hosting and other costs well, any effort you put into increasing revenue per customer and reducing CAC will be doubly effective.

How to Get Started as a Successful Hosting Reseller

Posted by Adrien Tibi

Becoming a hosting reseller can be a lucrative means of starting a new business or cross-selling to your current customer-base, without the complexities of setting up and fully managing your own infrastructure. By employing your technical hosting knowledge, you can benefit from a recurring income and a long-term relationship with customers.

Give your hosting reseller business all the support it needs by downloading our WHMCS guide here

But like any business, it requires strategic planning and investment before you can offer a compelling and reliable service. Here are five steps to getting started as a hosting reseller that will set you up for success.

1. Select a host

This is possibly the most important step, as the service you offer to your customers will be directly linked with the service your host offers you. You should have a number of questions in mind when searching for a host, for example: What uptime guarantees and support arrangements do they offer? Can the hardware be customised and packages scaled as your customer base grows? Do they provide reseller tooling, such as a WHMCS plugin, for billing and support? And what do their acceptable usage policies require of you and your customers?

2. Choose your hosting package

Traditional reseller hosting involves selling provisioned space on a dedicated server, but cloud environments, where your reseller package and your customers’ packages can be easily scaled, are popular today. Whatever your preference, you should choose and customise your hosting package based on your projection of customer growth, your expectations of customer requirements and your budget. By completely tailoring your server environment through, for instance, the number of cores or the type of hard drive fitted, you can deliver enhanced performance to your customers and gain a competitive advantage.

3. Create your own hosting packages

The hosting packages you offer will again be determined by your expectations of who your customers will be. A key consideration is the pricing strategy you employ. Typically you will not be able to compete solely on price against big hosting companies, so factors such as resource allocation (disk space, bandwidth, databases etc.), the type of hardware and software instances available and quality of support should dictate the prices of your plans.

4. Plan your support system

You need a control panel, plus ticketing and billing software, to manage and support your customers. WHMCS, the leading web hosting automation platform, is often the go-to application for handling billing and support, and you may want to choose a hosting provider that has a WHMCS plugin. Whatever you choose, you need to get to grips with your system, as it will act as the backbone of your communication with customers.

You also need to plan the logistics of the support you offer, for example, the hours that you provide support (which should be reflected in your pricing) and the arrangement between you and your host regarding how you handle physical hardware issues.

5. Define your terms of service

Every host, including resellers, needs to define terms and conditions of service to prevent illegal, abusive or disruptive activity by customers. For resellers, these need to be in accordance with the host’s acceptable usage policies, which can usually be found on the host’s website. If you need further guidance or support, contact your host to establish your documentation and ensure it is legally sound.

6. Get your hosting business online

Just like any other business, you need to bring your hosting services to market. Of course, this involves creating an eCommerce website or, if web hosting is a natural extension of your current services, adding landing pages and support system functionality to your existing business’ site.

From here, it’s up to you how you approach your marketing strategy, whether you utilise display or search ads, develop an organic content marketing plan or something else.

The key to reseller hosting success is treating it like a business. Whether you provide managed infrastructures or applications services, your customers expect reliability and professionalism. The host you rely on for your reseller service lays down the foundation of this, and that’s why in many respects the initial stage of choosing a host is the most important. Find out about the Redstation Reseller Programme today.

To ensure your hosting reseller business has the support it needs, you might want to consider the world’s leading billing software. If this is something you’re interested in, don’t hesitate to download our guide for hosting resellers, ‘How to Get the Best from WHMCS’.

Beyond London Colo: 4 Benefits of Local Colocation

Posted by Adrien Tibi

Organisations choosing to co-locate their servers outside of London have the benefit of accessibility, reduced latency, affordability and increased security.

The fact is that the Greater London area has the highest concentration of data centres in the UK, and for many organisations, it will be the only location they consider. But with new hubs popping up in places such as Manchester and Birmingham, more and more regional businesses are looking at the benefits that colocation outside of the capital can offer them.

1. Accessibility

If your organisation is based outside the capital and your team need access to your server hosting environment, it’s kind of a no-brainer to pick a data centre near to your office location.

Even if you are based in London, concerns about data security in the event of a disaster or emergency may make it better for you to choose a location outside of the M25. A data centre close to the M4 corridor, such as Maidenhead or Slough, will still offer convenience for travelling sysadmins and IT administrators, but will also keep Business Continuity and Risk Assessment happy.

Of course, many datacentres offer use of remote hands for basic requests, so you don’t even need to visit.

2. Reduced Latency

If your office is outside of London, choosing a data centre that’s closer to home will reduce latency to a minimum. In fact, even if your offices are in the capital, latency will be minimal if you have a P2P (point to point) line installed between your datacentre and your office.

There’s also the fact that some London datacentres have worse latency than better connected ones outside of the capital. For example, we have a direct line from our London cores (i.e. the public internet) to our Maidenhead data centre, which performs better than being in London and having to make three “hops” across intermediate networks to reach the public internet.

3. Affordability

London is one of the most expensive cities in the world to live and work in, so it’s really not surprising that it’s also one of the most expensive places to host your data.

Many businesses waste valuable office space by keeping their servers on site, or host within the M25, largely because they believe this will reduce latency. The truth, however, is that latency will be negligible if you install a P2P line.

4. Security

If data security and recovery in the event of a major terrorist attack is a priority for your organisation, then colocation outside of London may be a sensible hosting choice. Of course, with a city the sheer size of London, it’s still possible to choose central locations that are outside of high-risk areas.

Overall, there are some great reasons to choose a data centre outside of London to colocate your servers: you’ll most likely save money, your equipment will remain accessible, and it will be more secure.

 

What is colocation?

Posted by Adrien Tibi

Moving your existing servers out of your own premises and in to a data centre improves reliability and security while granting access to sophisticated connectivity and hybrid cloud computing options.

IT and applications are increasingly moving to the cloud, but there are still a great many servers sitting in offices around the world. These servers can be a drain on their owners, who have to manage and maintain them, and they still remain vulnerable to a wide range of business risks including fire, theft and connectivity outage.

When the investment in equipment has already been made and you want to keep using it, but you also want a more secure environment, better access to connectivity and higher levels of resilience, colocation is the solution.

What is colocation?

Colocation is the act of housing your physical servers in someone else’s data centre.

By colocating your servers within a managed data centre, you can obtain many of the benefits of the data centre environment and business model without the need to invest in new dedicated servers or migrate your applications to virtual ones.

Key benefits of colocating your servers include improved business continuity capability, enhanced connectivity options and access to hybrid cloud options.

Business continuity

No business can affordably create an IT environment as safe and secure as that of a Tier 3 data centre – the tier of data centre that most respectable hosting providers operate.

A Tier 3 data centre offers, amongst other things: redundant (N+1) power and cooling backed by UPS and generators; multiple independent network connections; concurrent maintainability, so infrastructure can be serviced without downtime; and round-the-clock physical security and monitoring.

All things that you, probably, cannot replicate on your own site and which can, in the event of a system outage or service interruption, keep your servers online and available.

Connectivity

With increasing globalisation, more mobile workers and BYOD trends, servers connected to your building’s internet pipe are not going to meet the connectivity needs of your organisation.

In a colocation facility, your servers can be given state of the art connectivity. This includes, but is not limited to, multiple transit providers and peering options, high-bandwidth low-latency links, point-to-point (P2P) lines back to your own offices, and direct connections to cloud platforms.

Hybrid cloud

Data centres are, naturally, the place where public, private and bare metal clouds live. Colocating your own servers alongside providers of these other platforms will make it easier to integrate them into a hybrid cloud solution – both technically and financially.

By colocating within a data centre, your servers can easily be connected with a wide range of complementary technologies that address specific requirements for your business. These could include solutions like public cloud for burstable capacity, private cloud for sensitive workloads, bare metal cloud and dedicated servers for performance-critical systems, and managed backup and disaster recovery.

A solution for now, and the future

From your first steps into cloud computing through to a fully mature cloud strategy, colocation plays a valuable role at every stage of cloud adoption. For some services, running anything but your own hardware may not be an option, but by colocating you still get access to all the benefits of the data centre environment.

Apache Spark vs Hadoop: What’s best for managing Big Data?

Posted by Adrien Tibi

Apache Spark and Hadoop are both frameworks of platforms, systems and tools used for real-time Big Data and BI analytics, but which one is best for your data management?

According to Bernard Marr at Forbes, Spark has overtaken Hadoop as the most active open source Big Data project. While Hadoop has dominated the field since the late 2000s, Spark has more recently come to prominence as a big hitter.

However, a quick look at Google Trends shows us that while interest in Spark has been on the rise since around November 2013 it’s still completely dwarfed by Hadoop.

Google suggests that in March 2016 interest in Hadoop equalled its all-time peak, but Spark has only ever achieved around 44% of Hadoop’s peak interest level. Incidentally, Spark’s own March 2016 peak is only up 3% from its previous high point in June 2015, so growth in interest does seem to have slowed.

Ready to deploy on bare metal? Create your free account and start configuring your bare metal servers here.

So what is Spark, and how is it competing with the Hadoop elephant?

What is Apache Spark?

At its simplest, Apache Spark is a data processor. Like Hadoop, it is open source, and provides a range of connected tools for managing Big Data. It’s often considered a more advanced product than Hadoop, and is proving popular with companies that need to analyse and store vast quantities of data.

The Spark team are clear on who they view as their competition, suggesting that their engine can run programs “up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk.” If that’s true, then why has interest in Hadoop continued to rise?

Comparing Spark and Hadoop

The answer is that both products have their strengths and weaknesses, and in many cases their use is not mutually exclusive.

1) Performance

By processing data in-memory, Spark reduces latency almost to zero, but it can be extremely demanding in terms of memory because it caches processes. This means that if it is running on top of Hadoop YARN, or alongside other systems with high memory demands, it might be deprived of the resources it needs to perform efficiently.

By contrast, Hadoop MapReduce kills each process once a task is completed, which makes it leaner and more effective to run alongside other resource-demanding services. Spark is a classic only child: it works best in dedicated clusters, whilst Hadoop plays well with others.
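As a minimal sketch of the in-memory behaviour described above (assuming a local PySpark installation; the derived “bucket” column is invented for the example), Spark lets you pin a dataset in RAM so that repeated queries avoid re-reading it from disk:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark import StorageLevel

spark = SparkSession.builder.appName("cache-demo").master("local[*]").getOrCreate()

# Toy dataset: 1,000,000 rows with a derived "bucket" column
df = spark.range(1_000_000).withColumn("bucket", F.col("id") % 10)
df.persist(StorageLevel.MEMORY_ONLY)   # keep the dataset in memory

df.count()                             # first action materialises and caches the data
df.groupBy("bucket").count().show()    # later actions reuse the in-memory copy

spark.stop()
```

The memory held by that persisted dataset is exactly the trade-off described here: it buys low latency at the price of RAM that other services on the same cluster might need.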

2) Costs

Although both software products are open-source and thus free to use, Spark requires a lot of RAM to run in-memory, and thus the individual systems required to run it cost more. However, this is balanced out by the fact that it requires far fewer machines to process large volumes of data, with one test successfully using it to sort 100 TB of data three times faster than Hadoop MapReduce on 10% of the machines.

3) Ease of Use

Spark is generally regarded as easier to use than MapReduce, as it comes packaged with APIs for Java, Python and Spark SQL. This helps users to code in their most familiar languages, and Spark’s interactive mode can help developers and users get immediate feedback for queries.
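For a flavour of that ease of use, here is a minimal PySpark sketch (again assuming a local installation; the view name “numbers” is just for the example) showing the same aggregation expressed through both the Python DataFrame API and Spark SQL:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-demo").master("local[*]").getOrCreate()

# Toy DataFrame: 1,000 rows with an id and a derived "bucket" column
df = spark.range(1000).withColumn("bucket", F.col("id") % 10)

# DataFrame API
df.groupBy("bucket").count().orderBy("bucket").show()

# The same query via Spark SQL
df.createOrReplaceTempView("numbers")
spark.sql("SELECT bucket, COUNT(*) AS n FROM numbers GROUP BY bucket ORDER BY bucket").show()

spark.stop()
```

Typed into the interactive `pyspark` shell, where a SparkSession is already available, each statement returns results immediately, which is the kind of feedback loop mentioned above.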

4) Scalability

Both systems are scalable using the Java-based file system HDFS. Hadoop’s age means that it has been used for high profile large infrastructures: Yahoo has over 100,000 CPUs in over 40,000 servers running Hadoop, with 4500 nodes in its largest cluster. According to the Spark team the largest known cluster has 8000 nodes.

5) Security

Hadoop’s Kerberos authentication support can make security difficult to manage. While Spark lacks secure authentication, it benefits from sharing Hadoop’s HDFS support for access control lists and file level permissions.

Overall, Hadoop comes out on top for security, but Spark can inherit some of those protections when deployed on top of it.

Conclusion

The good news is that the two systems are compatible. Spark benefits from many of Hadoop’s strengths via HDFS, while adding speed and ease of use that the older project lacks.

If you need to process huge quantities of data, and you can dedicate systems to process it, then Spark is likely to be better, easier to use and more cost-effective for your project. However, if you need scalability and for your solution to run alongside other resource-demanding services, Hadoop MapReduce will probably be a safer bet.

Bare Metal Cloud vs Dedicated Servers

Posted by Adrien Tibi

Simply put, a bare metal cloud is made up of dedicated servers but is automated for near-instant provisioning.

  1. What is a bare metal cloud and what is a dedicated server?

There’s a lot of confusion around the bare metal cloud concept, largely because it’s still a relatively new term, only really coming into use since 2014. Even though the popularity of dedicated servers is in decline, it’s a well-recognised term, so it’s easy to see how people start to get confused when they hear “bare metal cloud” and “dedicated server” used interchangeably.

Ready to deploy on bare metal? Create your free account and start configuring your bare metal servers here.

 

The reality is that bare metal cloud and dedicated servers are related, because one is part of the other. A bare metal cloud environment is made up of one or more single-tenant, dedicated servers. A dedicated server is a standalone server, with a given specification, to which the customer has complete administrative access.

  2. So why not just call it dedicated or single-tenant cloud?

Probably because of the negative connotations. Dedicated servers have historically had the drawback of being manual or time-consuming to provision. This meant that if you needed an instance spun up quickly, you would want to avoid a dedicated environment as a matter of principle.

A bare metal cloud environment avoids this issue, as it automates provisioning. You can buy your kit online and get it up and running in a matter of minutes, as easily as you would with a public cloud provider like AWS. Unlike virtual machines, a bare metal cloud lets you control everything from the infrastructure upwards, but without having to own or operate the data centre or wider network yourself.

What can a bare metal cloud include?

Typically, any combination of dedicated servers, used either standalone or running hypervisors for your own virtual machines, plus the storage, networking and clustering needed to connect them, all under your administrative control.

So who is bare metal cloud right for?

You’ll always need to analyse your hosting requirements to determine which environment will best support your workload. But for a great number of workloads, a bare metal cloud represents the most flexible method of running a hosted application as it is customisable, cost-effective, and scalable.

  3. Ok, but is bare metal cloud right for me?

Maybe. Try asking yourself these questions: Does your team have the devops capability to architect and manage infrastructure from the operating system up? Are your workloads steady or always-on, rather than highly burstable? Do you need full control over system architecture, machine density and networking? Would dedicated, predictable performance benefit your application?

 

If your answers are generally positive, then you may want to look into controlling a bare metal environment. If there are negatives you’ll need to weigh up your technical considerations against each other or against your budget objectives.

When is Bare Metal the Right Choice for ECommerce?

Posted by Adrien Tibi

For the majority of eCommerce businesses, workloads are quite predictable, making bare metal cloud a much more cost effective infrastructure choice than public cloud in many cases.

Swathes of eCommerce founders are understandably drawn to public cloud vendors, such as AWS – enticed by low entry point pricing and offers, as well as the success stories of prominent names. But there are two sides to the public cloud pricing equation, and low entry point prices are invariably offset by premiums elsewhere in the portfolio. Typically, this impacts businesses that do not make use of the platform’s scalability.

Ready to deploy on bare metal? Create your free account and start configuring your bare metal servers here.

 

Don’t Mistake Yourself for Amazon

If an eCommerce store experiences massive peaks and troughs in its visitor numbers, the ability to access burstable capacity, and to avoid the need to over-provision to cope with those peaks, represents a very real financial benefit. The reality is, however, that most eCommerce businesses, even the most successful, are quite unlike Amazon, which built AWS to meet its own need for scalability first.

Amazon’s eCommerce business faces unique challenges that arise from its size and success – massive fluctuations in visitor numbers are only the beginning. Amazon also contends with a truly global audience, calling for infrastructure in every region in order to cater for demand while delivering a satisfactory user experience.

On top of this, Amazon’s Big Data collection and storage, analytical power and back office systems all need to scale in line with customer demand too. Their need for burstable capacity is unprecedented.

Add to this, its marketplace offering, which accommodates millions of independent sellers, and the magnitude of these infrastructure challenges becomes even more impressive.

But Amazon is a one-off. The vast majority of eCommerce businesses have very different infrastructure requirements.

 

The Right Infrastructure for Your ECommerce Business

Fluctuations in visitor numbers to eCommerce websites rarely reach the point where the difference between the average and peak demand for computation power makes paying a premium for public cloud scalability worthwhile.

And while visitor numbers fluctuate, the volume of data being stored for transactions and other processes grows linearly – predictably and slowly in relative terms. The demand for analytical power and business intelligence, meanwhile, though driven by visitor numbers, is in many cases outsourced to third parties and so does not impact the core infrastructure requirements at all.

This predictability in demand, and therefore infrastructure requirements, is further supported by the tendency for successful eCommerce businesses to serve specific markets and geographies well, rather than taking the multi-national broad-line approach.

It rarely makes sense for an eCommerce business to host all of its infrastructure, if any, on the public cloud.

 

The Business Case for Bare Metal

Unless your workloads are like Amazon’s and you can benefit from access to premium, burstable, pay-as-you-go capacity, bare metal cloud will give you better ROI. It’s well known that, for most examples of always-on instances, a dedicated server within a bare metal cloud environment will cost a fraction of what the public cloud alternative will – once attractive new customer deals have expired, of course.
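As a back-of-the-envelope sketch of that comparison (all rates below are hypothetical placeholders, not real vendor pricing), the decision largely comes down to how far your peak demand exceeds your steady baseline:

```python
HOURS_PER_MONTH = 730

def public_cloud_monthly(avg_instances: float, hourly_rate: float) -> float:
    # Pay-as-you-go: you only pay for average usage, but at a premium hourly rate.
    return avg_instances * hourly_rate * HOURS_PER_MONTH

def bare_metal_monthly(peak_instances: int, monthly_rate: float) -> float:
    # Dedicated servers: provisioned for peak demand at a lower fixed monthly rate.
    return peak_instances * monthly_rate

# A steady eCommerce workload: peak only slightly above average
avg, peak = 8, 10
print(f"public cloud: {public_cloud_monthly(avg, hourly_rate=0.50):,.0f} / month")
print(f"bare metal:   {bare_metal_monthly(peak, monthly_rate=200):,.0f} / month")
```

With a small peak-to-average gap, paying a premium for elasticity you rarely use is hard to justify; the arithmetic only swings towards public cloud when that gap becomes very large.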

In addition to the cost advantages, bare metal instances consistently deliver better performance, thanks to being single tenant and not sharing resources with any other users. Given how vitally important page load and server response times are to the customer experience in eCommerce, this is a distinct advantage in bare metal’s favour.

 

In conclusion, the low-cost ticket price of public cloud is not what you will end up paying if your workloads are steady or your instances always on. In every case, the best way to ensure you get maximum ROI from your infrastructure, both in terms of cost and performance, is to match it closely to your workloads. Capacity planning is an essential step in any cloud deployment for an eCommerce business and should not be overlooked.

Amazon AWS is a Bare Metal Cloud – for Amazon

Posted by Adrien Tibi

For its users, AWS is a public cloud; for Amazon it’s their own cost-effective bare metal cloud.

Understanding how Amazon’s AWS supports the needs of its Amazon.com business is useful when considering the economics of building a cloud environment for your eCommerce business.

Undeniably the most popular and successful eCommerce business of our time, Amazon is unique. And with this success and scale have come challenges that no other eCommerce company has faced previously. Overcoming these challenges is what led to the birth of AWS and is why Amazon.com is now built fully upon it.

Ready to deploy on bare metal? Create your free account and start configuring your bare metal servers here.

Amazon’s Unique eCommerce Challenges

Perhaps the easiest component of the eCommerce business is providing an online store on which customers can browse and purchase products. Pages are highly templated, while images and content are driven by a database of products. The real challenges for an e-tailer with Amazon.com’s success all revolve around scaling this online store while meeting the demands of customers.

Delivering the best possible user experience, no matter where or when customers are using the site, requires infrastructure with enormous scalability. In order to maximise sales and minimise abandonment, product images and content need to load lightning fast, categories must be searchable and filterable and the site responsive when customers add products to baskets and move through the checkout process. Slow sites leak orders.

Amazon.com’s visitor numbers also vary massively over time. For instance, the number of site users on an average Wednesday will be dwarfed by the Black Friday rush. Permanently provisioning an infrastructure large enough to cater for those peaks in demand, while simultaneously meeting customer expectations, would simply be economically unfeasible. The only option for Amazon was to build on a shared platform that could afford the kind of scalability required.

The issue is further complicated by Amazon.com’s global reach. The same kind of scalability and performance is needed in every region – images can’t be transferred across the globe as customers try to browse products. Instead, images and content need to reside near the user in order to provide a satisfactory customer experience. The only way to achieve this is with infrastructure close to the end-user, meaning huge estates at multiple points across the globe.

 

AWS = Bare Metal Cloud

Amazon will have very quickly realised that, while third party public clouds offer the scalability it needs, the economics are not in its favour.

In order to be able to offer massive scalability, a public cloud vendor needs to be able to sell unutilised capacity quickly and easily. This means charging low entry points to bring customers on-board for short-term or low-level usage but then charging a premium in other areas to offset the potential cost of under-utilisation. The result is that larger businesses end up paying more for their resources.

For Amazon, building its eCommerce business on someone else’s public cloud would be too expensive. So it built its own.

But when you build your own cloud you build a bare metal cloud, a collection of physical machines, networked and at your disposal, whether it’s for use in dedicated format for databases and containerised apps, or in virtual form for webservers etc. In AWS’s case, this bare metal cloud consists of more than two million Linux servers.

Public cloud is simply an economic model applied to such bare metal infrastructures. By building its own bare metal cloud, Amazon was able to remove the cost premium of building Amazon.com on someone else’s public cloud, and create an additional revenue stream selling the public cloud model to others.

The lesson for eCommerce businesses is to never make assumptions about which type of infrastructure is right for your needs and to always look beyond attractive new-user pricing. The right infrastructure for you always depends upon the type of workloads you will be running, the patterns in demand, locations of users and the utilisation of resources.

The right solution may well be a combination of public cloud and other platforms, like private or bare metal clouds and dedicated servers. The only way to discover this is to thoroughly understand your requirements or consult with experts who can advise you on all the available options.

Is the Size of Your Dev Team Harming Your Productivity?

Posted by Adrien Tibi

Jeff Bezos, Amazon CEO, famously claimed that if a team couldn’t be fed with two pizzas it was too big. Of course, he also declared that “communication is terrible!” – so, was he making a valid point or just grandstanding?

It can seem counterintuitive to suggest that two heads are worse than one, but there’s actually a good body of evidence to suggest that Bezos knows what he is talking about.

A number of studies show a variety of effects that create sluggishness and lower productivity in large teams, regardless of who’s in them.

 

Sluggishness in Large Scrums – Jeff Sutherland

The Scrum Guide has evolved its “7 ± 2” people rule for team size into “3-9 people” over the years, showing a growing recognition of the value of even the smallest teams.

But Jeff Sutherland, one of the inventors of Scrum and writer of The Scrum Guide, is unequivocal on the matter – keep it at 7 people or fewer.

In an experience report he produced for Craig Larman’s book Agile and Iterative Development: A Manager’s Guide, Sutherland described a situation he observed within a 500 person dev group:

A few teams within the group were generating production code at five times the industry average rate, but most only managed double the average, despite good Scrum execution. All of the hyper-productive teams consisted of 7 people or fewer, and Sutherland surmises that the larger team sizes (usually around 15) were the reason behind the others’ relative sluggishness.

He also points out that (at the time of writing) Rubin’s Worldwide Benchmark database gives the average cost per function point, across over 1000 projects, as $2970, but for teams of 7 people, the average was just $566.

“Any team over 7 in size should be split up into multiple Scrums.” – Jeff Sutherland

 

 

Social Loafing – Ringelmann, Latané et al

A much earlier case for small teams was made by Maximilien Ringelmann, with his 1913 findings now referred to as the Ringelmann Effect.

Essentially, the Ringelmann Effect suggests that the size of a team is inversely proportional to its productivity. The more bodies in a group, the more difficult coordination becomes, with teamwork and cooperation suffering. Ringelmann highlighted this with his renowned “rope pulling experiment” – he found that if he asked a group of men to pull on a rope together, each made less effort when doing so as part of the group than when tugging alone.

Ringelmann’s findings are backed up by the experiments of Bibb Latané et al, who studied the phenomenon known as social loafing.

 

Social psychologist Latané demonstrated the social loafing effect in a number of ways. A key experiment showed that, when tasked with creating the loudest noise possible, people in a group would only shout at a third of the capacity they demonstrated alone. Even just (mistakenly) believing they were in a group was enough to make a significant impact on the subjects’ performance.

“When groups get larger, you experience less social pressure and feel less responsibility because your performance becomes difficult, or even impossible, to correctly assess amidst a crowd. It’s no wonder that when credit and blame get harder to assign, you start to feel disconnected from your job.” – Bibb Latané

The Pain of Link Management – J. Richard Hackman

It was published a number of years ago now, but Diane Coutu’s interview with J. Richard Hackman on Why Teams Don’t Work in the Harvard Business Review is worth a read for a further look at the many reasons that teams can hinder more than help.

To Hackman, one of the key stumbling blocks for teams is link management. As groups grow, the number of links between everyone within the team rises steeply. The number of links created by a team can easily be calculated with the equation:

# of links = n(n-1)/2
(where n = number of people in the team)

So a team of six will create 15 links, but a team of twelve racks up 66.
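A tiny Python sketch makes that growth easy to see:

```python
def team_links(n: int) -> int:
    """Pairwise communication links in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

for size in (2, 6, 7, 12, 15):
    print(f"{size:>2} people -> {team_links(size):>3} links")
# 2 -> 1, 6 -> 15, 7 -> 21, 12 -> 66, 15 -> 105
```

Doubling the team from six to twelve more than quadruples the number of links that need maintaining.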

The more links needing maintenance, the higher the potential for mismanagement and miscommunication. Keeping everyone in the loop and coordinated can eat into productive time. Or, as Hackman bluntly puts it, “big teams usually wind up just wasting everybody’s time”.

 

Relational Loss – Jennifer Mueller

Racking up links can also incur a more personal toll. Psychologist and Professor of Management, Jennifer Mueller proposed “relational loss” – the feeling that you are receiving diminishing support the larger your team grows – as another issue created by a larger team.

Mueller studied 212 knowledge workers across a number of companies, in teams ranging in size from three to nineteen. Across data derived from performance evaluations and questionnaires on motivation, connectedness and coordination, she found “compelling evidence for relational loss.” The larger the team, the less supported people felt and the more their performance suffered.

Software Project Teams – Brooks’s Law

“Adding human-power to a late software project just makes it later.” – Fred Brooks

Most developers will be familiar with Brooks’s Law of software project management and the idea that there comes a point at which each additional person added to a project makes it take longer, along with the arguments that refute it.

Whether the law is gospel or “an outrageous simplification” as Brooks himself claimed, three sound factors underpin his point:

Firstly, new team members are rarely immediately productive – they need what Brooks refers to as “ramp up” time. During ramp up time, existing members of the group may lose focus as they dedicate time and resources to training the newcomer. Far from creating an immediate improvement, the new worker may even make a negative contribution, for instance introducing bugs.

Secondly, personnel additions increase communication overheads – everyone needs to keep track of progress, so the more people in the team the longer it takes to find out where everyone else is up to.

Thirdly, there’s the potential issue of limited task divisibility. Some tasks are easily divided but others are not, as illustrated by Brooks’s charming example that, while one woman needs nine months to make one baby, “nine women can’t make a baby in one month”.

 

Larger Teams Breed Overconfidence and Under-performance – Staats, Milkman and Fox

Not only does larger team size seemingly make people more complacent and less productive, but it also breeds overconfidence. There’s a tendency “to increasingly underestimate task completion time as team size grows,” say researchers Bradley Staats, Katherine Milkman, and Craig Fox. One of their experiments showed that, in a task to build uniform Lego figures, teams of four people were almost twice as optimistic about how quickly they could construct it as teams of two, but they actually took over 44% longer.

If four people struggle to work together to build some Lego, then the outlook doesn’t exactly look great for a complex development project.

 

Back to Bezos (and Pizza)

In practice, the two-pizza rule translates to splitting your personnel into autonomous task forces of five to seven people, which sits comfortably alongside the advice of Sutherland and Hackman and should minimise social loafing and the sense of relational loss.

And when Bezos said “communication is terrible”, he was really saying that cross-team exchange gets in the way of team independence and creates group-think. Not everywhere has a culture that relies on creative conflict like Amazon, but limiting dysfunctional decision-making and being discerning about how and where communication is needed would be beneficial to all.

 

 

If you’re mystified by a lack of productivity from a highly skilled and talented dev team, it may be that you’ve just got too many talented people trying to work together. Splitting your group into two or more two-pizza sub-teams might be all it takes to radically change your output.

 

Build your bare metal cloud

Speak to an advisor for a completely free consultation or create a free account and start configuring servers now