How to set up Development, Staging, Production, and QA environments

I am in the process of setting up new servers for an organization. What are the standards or best practices for setting up a new environment with Development, Testing, Staging, and Production (or I’m open to other levels I’m not familiar with)? Additionally, I’ve heard of organizations breaking out servers into SQL, Application, Web Server, etc. Where can I find good examples of possible solutions for server setup?

Is virtualizing these environments among a few physical boxes a good practice?

I’ve searched online for some ideas of how other organizations have their environments set up, but I’m not finding anything specifically helpful. I welcome any links that discuss building an entire enterprise solution for a small to medium company.

I just found this link: http://dltj.org/article/software-development-practice/ I’d like to find more articles like this if anyone knows of any good ones they can point me to.

Before you down-vote my question, please post comments to let me try to explain more. I may just not know enough to ask the right questions.

Answer

This is a pretty loaded question. My general advice is to focus your attention on managing complexity and allow the system to grow organically.

Virtualization

You really want to avoid server sprawl, and these days everything is virtualized, so pick a platform that lets you add virtual servers quickly and manage them efficiently. One trend I’ve seen is having two clusters (AIX or VMware, for example), one for prod and one for non-prod; the non-prod cluster hosts all of the dev, testing, and staging environments. VMs are a great fit for web servers and application servers, but I’d avoid putting large, growing production databases in a VM (at least on Windows).
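To make the two-cluster idea a bit more concrete, here is a minimal Python sketch of routing new VMs to a prod or non-prod cluster. The cluster and environment names are made up for illustration; substitute whatever your platform actually calls them.

```python
# Hypothetical sketch: map environments onto a prod and a non-prod cluster.
CLUSTERS = {
    "prod-cluster": {"prod"},
    "nonprod-cluster": {"dev", "test", "staging", "qa"},
}


def cluster_for(environment: str) -> str:
    """Return the cluster a new VM for this environment should land on."""
    for cluster, environments in CLUSTERS.items():
        if environment in environments:
            return cluster
    raise ValueError(f"Unknown environment: {environment!r}")


if __name__ == "__main__":
    # A new QA web server goes to the non-prod cluster; prod stays isolated.
    print(cluster_for("qa"))    # nonprod-cluster
    print(cluster_for("prod"))  # prod-cluster
```

The point of keeping the mapping this explicit is that nothing destined for prod ever lands on hardware shared with dev or test by accident.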

Databases

These can easily get out of hand whenever they need to share resources with other servers. Always have databases running on a dedicated OS, never shared with an application or web server unless there’s a really good reason for it. Whether you use a VM or hardware is the only question.

You want a scalable infrastructure that won’t cap you if you ever need to, for example, move to a clustered solution. Many databases are going to be fine in a VM, but for the few that will eventually need more horsepower than is convenient to provide in a VM environment, you’ll find yourself wishing you’d put them on raw hardware instead.

If you’re not talking about Windows, then some of these guidelines won’t apply. It’s commonly accepted practice to run large, growing databases as LPARs on an AIX hypervisor, for example.
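If it helps to make that VM-versus-hardware judgment concrete, the following Python sketch captures the kind of rule of thumb involved. The size and IOPS thresholds are invented purely for illustration; replace them with limits that match your own hypervisor and storage.

```python
# Hypothetical rule of thumb for deciding where a database should live.
def place_database(size_gb: float, annual_growth_gb: float,
                   peak_iops: int) -> str:
    """Suggest a placement for a database instance."""
    projected_gb = size_gb + 3 * annual_growth_gb  # look ~3 years ahead
    if projected_gb > 2000 or peak_iops > 20000:   # made-up limits
        return "dedicated hardware (or LPAR) - likely to outgrow a VM"
    return "VM on the database tier - revisit as it grows"


if __name__ == "__main__":
    print(place_database(size_gb=150, annual_growth_gb=50, peak_iops=3000))
    print(place_database(size_gb=1200, annual_growth_gb=600, peak_iops=9000))
```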

Storage

You can’t have real virtualization (with VM mobility and host clustering) without shared storage. Prod, dev, testing, and QA servers all look the same to your storage; however, it’s worth investing some time in finding a way to prioritize prod. It is a very bad idea, for example, to have a heavily taxed prod database sharing disks (RAID sets, pools, whatever) with a dev server. Dev can sometimes hit the disks just as hard as prod, and the last thing you need is to be working out whether some test run is what’s slowing production down.

Have someone who knows your storage sit down and analyse all the potential bottlenecks (ports, cache, controllers, disk, etc) and do your best to prevent contention for as many of these as possible between prod and non-prod.
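As a starting point for that review, a simple inventory check like the Python sketch below can flag pools where prod and non-prod workloads share the same disks. The server and pool names are hypothetical; in practice the inventory would come from your storage array’s own tooling.

```python
# Hypothetical sketch: flag storage pools shared between prod and non-prod.
from collections import defaultdict

# server -> (environment, storage pool); example data only
INVENTORY = {
    "db-prod-01":  ("prod", "pool-a"),
    "web-prod-01": ("prod", "pool-b"),
    "db-dev-01":   ("dev",  "pool-a"),   # shares pool-a with prod - flag it
    "web-qa-01":   ("qa",   "pool-c"),
}


def shared_pools(inventory: dict) -> dict:
    """Return pools that hold both prod and non-prod servers."""
    pools = defaultdict(set)
    for server, (env, pool) in inventory.items():
        pools[pool].add("prod" if env == "prod" else "non-prod")
    return {p: envs for p, envs in pools.items() if len(envs) > 1}


if __name__ == "__main__":
    for pool in shared_pools(INVENTORY):
        print(f"Review {pool}: prod and non-prod share the same disks")
```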

That said, sometimes the application people need to run dev benchmarks to help quantify the effects of a new patch or something. In this situation, you might need to be able to offer them similar (or at least quantifiably different) amounts of storage horsepower.
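If the non-prod storage is deliberately slower, one crude way to keep the comparison quantifiable is to scale the dev benchmark result by the measured difference in storage throughput, as in this hypothetical Python sketch. All of the numbers are invented; real figures would come from benchmarking both tiers.

```python
# Hypothetical sketch: project a dev benchmark onto prod storage horsepower.
def scale_benchmark(dev_result_tps: float,
                    dev_storage_iops: float,
                    prod_storage_iops: float) -> float:
    """Very rough projection of a dev result given faster prod storage."""
    return dev_result_tps * (prod_storage_iops / dev_storage_iops)


if __name__ == "__main__":
    # Dev ran the patched workload at 850 tps on storage measured at
    # 5,000 IOPS; prod storage measures 20,000 IOPS.
    print(f"{scale_benchmark(850, 5000, 20000):.0f} tps (rough estimate)")
```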

Attribution
Source: Link, Question Author: TreK, Answer Author: Community