October 15, 2012
In 1998 I briefly worked for FiringSquad, a gaming website founded by Doom and Quake champion Thresh aka Dennis Fong and his brother Kyle. I can trace my long-standing interest in chairs and keyboards to some of the early, groundbreaking articles they wrote. Dennis and Kyle were great guys to work with, and we'd occasionally chat on the phone about geeky hardware hotrodding stuff, like the one time they got so embroiled in PC build one-upmanship that they were actually building rack-mount PCs … for their home.
So I suppose it is inevitable that I'd eventually get around to writing an article about building rack-mount PCs. But not the kind that go in your home. No, that'd be as nuts as the now-discontinued Windows Home Server product.
Servers belong in their native habitat, the datacenter. Which can be kind of amazing places in their own right.
The above photo is from Facebook's Open Compute Project, which is about building extremely energy efficient datacenters. And that starts with minimalistic, no-frills 1U server designs, where 1U is the smallest unit of vertical space in a standard server rack.
I doubt many companies are big enough to even consider building their own datacenter, but if Facebook is building their own custom servers out of commodity x86 parts, couldn't we do it too? In a world of inexpensive, rentable virtual machines, like Amazon EC2, Google Compute Engine, and Azure Cloud, does it really make sense to build your own server and colocate it in a datacenter?
It's kind of tough to tell exactly how much an Amazon EC2 instance will cost you, since it varies a lot by usage. But if I use the Amazon Web Services simple monthly calculator and select the Web Application "common customer sample", that provides a figure of $1,414 per month, or about $17k per year. If you want to run a typical web app on EC2, that's what you should expect to pay. So let's use that as a baseline.
The instance types included in the Web Application customer sample are 24 small (for the front end) and 12 large (for the database). Here are the current specs on the large instance:
- 7.5 GB memory
- 2 virtual cores with 2 EC2 Compute Units each
- 850 GB instance storage
- 64-bit platform
- I/O Performance: High
You might be wondering what the heck an EC2 Compute Unit is; it's Amazon's way of normalizing CPU performance. By their definition, what we get in the large instance is akin to an old 2008-era dual-core 2.4 GHz Xeon CPU. Yes, you can pay more and get faster instances, but switching the small instances to high-CPU and the large instances to high-memory more than doubles the bill to $3,302 per month, or $40k/year.
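As a quick sanity check on those calculator figures (everything below comes straight from the quoted sample; nothing is derived from per-hour rates):

```python
# Sanity-checking the AWS calculator figures quoted above.
baseline_monthly = 1_414   # Web Application sample, on-demand
upgraded_monthly = 3_302   # after moving to high-CPU / high-memory

print(baseline_monthly * 12)                # 16968  -> "~$17k/year"
print(upgraded_monthly * 12)                # 39624  -> "~$40k/year"
print(upgraded_monthly / baseline_monthly)  # ~2.34  -> "more than doubles"
```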
Assuming you subscribe to the theory of scaling out versus scaling up, building a bunch of decent bang-for-the-buck commodity servers is what you're supposed to be doing. I avoided directly building servers when we were scaling up Stack Overflow, electing to buy pre-assembled hardware from Lenovo instead. But this time, I decided the state of hardware has advanced sufficiently since 2009 that I'm comfortable cutting out the middleman in 2012 and building the servers myself, from scratch. That's why I just built four servers exactly like this:
(If you are using this as a shopping list, you will also need 4-pin power extensions for the case, and the SuperMicro 1U passive heatsink. The killer feature of SuperMicro motherboards, the thing that makes them all server-y in the first place, is the built-in hardware KVM-over-IP. That's right: unless the server is literally unplugged, you can remote in and install an operating system, tweak the BIOS, power it on and off, and so on. It works. I use it daily.)
Based on the above specs, this server has comparable memory to the High-Memory Double Extra Large Instance, comparable CPU power to the High-CPU Extra Large Instance, and comparable disk performance to the High I/O Quadruple Extra Large Instance. This is a very, very high end server by EC2 standards. It would be prohibitively expensive to run this hardware in the Amazon cloud. But how much will it cost us to build? Just $2,452. Adding 10% for taxes, shipping, and so on, let's call it $2,750 per server. One brand new top-of-the-line server costs about as much as two months of EC2 web application hosting.
Of course, that figure doesn't include the cost in time to build and rack the server, the cost of colocating the server, or the ongoing cost of managing and maintaining the server. But I humbly submit that the one-time cost of paying for three of these servers, plus the cost of colocation, plus a bunch of extra money on top to cover provisioning and maintenance and support, will still be significantly less than $17,000 for a single year of EC2 web application hosting. Every year after the first will be gravy, until the servers are obsolete, which even conservatively has to be at least three years out. Perhaps most importantly, these servers offer performance you simply can't get from EC2 for your web application, at least not without paying astronomical amounts of money for the privilege.
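To make that arithmetic concrete, here's a rough three-year sketch. The server cost and the EC2 baseline are from this post; the colo rate is the ~$125/server/month figure mentioned later in the comments, and the maintenance padding is an arbitrary placeholder:

```python
# Three-year cost sketch: build-and-colo vs. the EC2 baseline.
SERVER_COST   = 2_750   # per server, parts plus ~10% tax and shipping
SERVERS       = 3
COLO_MONTHLY  = 125     # per server per month (figure from the comments)
MAINT_PADDING = 3_000   # arbitrary placeholder for provisioning/support
YEARS         = 3

diy = SERVER_COST * SERVERS + COLO_MONTHLY * SERVERS * 12 * YEARS + MAINT_PADDING
ec2 = 1_414 * 12 * YEARS   # on-demand Web Application sample

print(f"DIY, 3 years: ${diy:,}")   # $24,750
print(f"EC2, 3 years: ${ec2:,}")   # $50,904
```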
(If you are concerned about power consumption, don't be. I just measured the power use of the server using my trusty Kill-a-Watt device: 31 watts (0.28 amps) at idle, 87 watts (0.75 amps) under never-gonna-happen artificial 100% CPU load. The three front fans in the SuperMicro case are plugged into the motherboard and only spin up at boot and under extreme load. It's shockingly quiet in typical use for a 1U server.)
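And translating those Kill-a-Watt readings into dollars, a back-of-envelope sketch, assuming a hypothetical $0.12/kWh electricity rate (not a figure from this post; substitute your own):

```python
# Rough annual electricity cost for one of these 1U servers.
IDLE_WATTS     = 31
LOAD_WATTS     = 87
RATE_PER_KWH   = 0.12        # assumed $/kWh, not from the article
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts: float) -> float:
    kwh = watts / 1000 * HOURS_PER_YEAR
    return kwh * RATE_PER_KWH

print(f"idle: ${annual_cost(IDLE_WATTS):.2f}/year")   # ~$32.59
print(f"load: ${annual_cost(LOAD_WATTS):.2f}/year")   # ~$91.45
```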
I realize that to some extent we're comparing apples and oranges. Either you have a perverse desire to mess around with hardware, or you're more than willing to pay exorbitant amounts of money to have someone else worry about all that stuff (and, to be fair, give you levels of flexibility, bandwidth, and availability that would be impossible to achieve even if you colocate servers at multiple facilities). $51,000 over three years is enough to pay for a lot of colocation and very high end hardware. But maybe the truly precious resource at your organization is people's time, not money, and that $51k is barely a rounding error in your budget.
Anyway, I want to make it clear that building and colocating your own servers isn't (always) crazy, it isn't scary, heck, it isn't even particularly hard. In some situations it can make sense to build and rack your own servers, provided …
- you want absolute top of the line server performance without paying thousands of dollars per month for the privilege
- you are willing to invest the time in building, racking, and configuring your servers
- you have the capital to invest up front
- you desire total control over the hardware
- you aren't worried about the flexibility of quickly provisioning new servers to handle unanticipated load
- you don't need the redundancy, geographical backup, and flexibility that comes with cloud virtualization
Why do I choose to build and colocate servers? Primarily to achieve maximum performance. That's the one thing you consistently just do not get from cloud hosting solutions unless you are willing to pay a massive premium, per month, forever: raw, unbridled performance. I'm happy to spend money on nice dedicated hardware because I know that hardware is cheap, and programmers are expensive.
But to be totally honest with you, mostly I build servers because it's fun.
Posted by Jeff Atwood
I take it that if you are building servers to be colocated, you are working on a new project? What are you working on now?
I'm not sure the AWS numbers you are quoting match up with the four servers you are building. Using the AWS Web Application "common customer sample" it quotes 6 servers, 4 storage volumes, IPs, bandwidth and load balancing.
Backing out everything but the 2 Web and 2 Database servers drops the bill to $670 / month.
I've been building my home PCs since I was in elementary school, and have built servers for work before, so I almost entirely agree with you. The prices of cloud hosting seem to be outrageous (even just cloud storage: Google Drive, which is the cheapest option, costs about 6x per year what just buying the drives would, comparing the 1 TB option).
However, while I've always been able to find the time for maintenance, and always had fun doing the labour, I've always ended up with issues with network bandwidth. In the case of data storage, there is also an argument for better accessibility since those services usually come with mobile access, os/browser plugins, and all sorts of web api's etc.
Anyway, it's really hard to evaluate options sometimes, but your point that building locally isn't impossible is spot on. Advice I've been given is to run servers locally for the everyday workload, but make sure you have a scalable cloud backup plan in case usage jumps through the roof, or disaster strikes, and pay the cloud prices only for those cases.
I recently built a single server for our company and we found that the Mac Mini colo services offered the best price/performance for a server or two.
Compared to hosting with AWS, over several years it's cheaper to buy a Mac Mini and colo it somewhere, with the Mac Mini simultaneously offering better performance. (Note: we're upgrading the Mini to 16GB ram and SSDs with 3rd-party gear)
Compared with colo'ing a 1U server... well, actually, most companies aren't interested in selling 1U of colo space. I found most companies want to lease you at least a quarter of a cabinet. Obviously, a build-your-own 1U server is going to offer more compute power, but finding hosting may be difficult. A quad-core 2.2 GHz i7, 16GB RAM, and a 256GB SSD is overkill, actually, for a lot of server needs.
I think the economics work because the Mac Mini colos basically rent colo space by the cabinet, and can fit 96+ Minis in a single cabinet, thus driving down the cost for end customers.
Just a thought that seems to be working for us.
I agree. Even after you throw in a rack, cabling, a switch, a backup power supply, and assorted paraphernalia, and include your monthly electricity bill, I'm sure you will still be ahead. And especially so if you consider it on relative performance. I run my own servers, and every time I look at the relative value proposition of switching to EC2, I can't justify it on cost alone.
That said, the time spent building and maintaining them along with an inability to easily and quickly scale or to deal with server outages etc. means I probably wouldn't make the same decision again. But it was a lot of fun.
To be clear,
1. I'm sure the economics start tipping in the favor of standard rackmount servers fairly quickly once you're talking about more than a few servers, or if raw CPU performance is your limiting factor.
2. Apple only officially supports 8GB of RAM on the Minis; third-party sites sell 16GB upgrade kits that have been pretty extensively validated. If that makes you nervous when talking about server hardware, I don't blame you because it should, but according to our own experiences and those of others it's not an issue.
Yes, BYO puts a lot of labor and risk-management on your shoulders in exchange for the reduced ongoing cost.
OTOH, this past year, I've been busy migrating an app to The Cloud, so it no longer shares a non-redundant 8 Mbps connection with the rest of The Office. We get better bandwidth, network latency, system software (2012 is long past Fedora 9's sell-by date), MTTR for hardware problems, and scalability (up and/or out is trivial vs. having someone physically plugging things locally). On-box performance is similar because our aging server fleet was already 1 ECU and the memory footprint is laughably small.
Even with some waste due to cutting some corners in the rush to get it out there, our bill is around $250/month for 3-4 servers and the rest of the AWS cloud they live on.
The other question to factor into your cloud calculation is whether you *reserve* any of those resources. That allows you to convert part of the per-hour usage charge into up-front payment, which we lean on heavily.
Forgot to mention. Our DB also gained from the transition because now it's a multi-AZ RDS instance. There would have been a lot of manual fiddling and loading to recover the DB if that server went down in flames when it was local.
Also a clarification, the local servers were 1 ECU each, not total.
You are not just comparing apples to oranges; with that default aws setup, you are comparing apples to donkeys.
The correct comparison is your server vs a single EC2 High-Memory Double Extra Large instance with a 3 year heavy-utilization reservation. This instance costs $3100 upfront plus $0.14/hour. The total 3 year cost for this server on AWS would be 3100 + (.14 * 24 * 365 * 3) = 6779.2, or about $188.31 per month.
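Spelling that reservation math out (same numbers as the comment above):

```python
# 3-year heavy-utilization reservation, High-Memory Double Extra Large.
upfront = 3_100   # one-time reservation fee, $
hourly  = 0.14    # $ per instance-hour while running
years   = 3

total = upfront + hourly * 24 * 365 * years
print(f"{total:.1f}")                 # 6779.2 over three years
print(f"{total / (years * 12):.2f}")  # 188.31 per month
```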
Sure, it's more expensive. But AWS provides an insane amount of value on top of the server, like instantly being able to provision additional capacity. I wouldn't be at all surprised if, on a fully-loaded cost basis, it is extremely competitive with building your own server. Heck, the employee salary expense of building your own server will easily drive its cost well beyond the $3,100 up-front Amazon fee.
I love building hardware too (never had a computer I didn't build except for laptops). But my mind boggles at AWS's value proposition.
First off, Jeff thanks for the link :-)
Second, the site http://www.servethehome.com has grown from a micro instance a year ago, to a small instance and now requires a medium instance just for the main site.
Still looking to colo a machine in the near future. The big impacts are much higher storage I/O and significantly more memory available. The other main impact is that the EC2 compute units get destroyed.
Just by way of comparison, the AMD Opteron 6128 is a low cost (~$100) server CPU that has 8 physical cores. An Amazon EC2 micro instance is about as powerful as a single Opteron 6128 core under ESXi.
The one thing that you do get with AWS is a suite of tools that makes management easy. Also, scaling is made much easier with AWS and you get things such as the ability to quickly add components such as load balancers to the system.
Still, if you need things like SSD performance, high speed CPUs and lots of memory, it is usually less expensive to build a 1U similar to yours. A really good application example is a Minecraft server where you need all three. AWS cannot cost-effectively handle that workload.
Please include your hourly rate and the number of hours you spent researching, building, installing the OS, etc. Please also include an estimate for the monthly hours you'll spend on maintenance and updates.
Without this data there is no comparison to the cloud services.
OK Sam Thomas, but only if you factor in the collective learning value of this article to the world, in hours, by each person's hourly rate. :)
Nice dodge, but I'm being serious. I work for a small software company that does its own hosting, and server maintenance is killing us. I don't think your comparison is useful without some estimation of these costs. It's easy to assume these things are free while burdening your developers with support tasks and wondering why all the software projects are late.
$1500 a month? That's a lot.
You can rent a Linode from $20 to $150, I didn't think Amazon was that much more expensive!
I want my servers up 100% of the time on a fast link, so I'm not going to risk it building my own kit, even though I do enjoy it. I've been running a VM, now a cloud VM, for 6 years and increased it over time as the system became popular. Currently serving upwards of 13,000 unique users per day with over 40K transactions globally. And still, it costs only $300/year. PER YEAR. I'd never pay as much as what is being discussed here. On the other hand, it's all running in Java. Had I used PHP, Ruby or some other slow junk then I might need that $500/month cloud VM.
I love this article! For my hobby/entrepreneurial websites https://clubcompy.com and http://cardmeeting.com, I self-host at a local colocation facility. It has been a fun learning experience for me to build enterprise apps without big iron. I don't like having the upfront cost of the server and network hardware, but I do know I'm spending less per year than I would if I went to the cloud. My colo was nice enough to give me 4U at a flat monthly rate, and I don't have to worry about blowing through any bandwidth limits and having costs go haywire if I have a spike in visitors. Having that dependable monthly expense enables me to pay for it out of my mad money and not anger the CFO (my wife ;)
I'm surprised the cost of energy hasn't been mentioned yet. I built datacenter monitoring tools for a few years and I remember energy working out to 2/3 of the total cost of ownership.
I feel like colocation and bandwidth charges will be the lion's share of the total cost, and they are completely neglected in your comparison. It will easily be several thousand a month for colocating several servers plus a good connection.
"But to be totally honest with you, mostly I build servers because it's fun."
I would hope so, because there's certainly no financial reason to do so. Economy of scale is a Thing That Exists. So is TCO. You're ignoring both in this article.
The main problem with your estimate is that you're using 100% utilized instances, but billing them as On-Demand. Granted, that's how Amazon sets up the sample, but that's not what you'd do in practice.
You'd use reserved instances, not on-demand, which are a good deal cheaper even taking into account the reservation fee.
When I switch the 6 servers to reserved instances, the bill comes down to $825/mo.
That brings the calculation to $30k for 6 servers for 3 years, nearly half off your estimate...
Taking this a little further: if your colo bill is ~$500/mo, then add $3k/server * 4... That brings the 3 year cost of building your own to.... $30k. Exactly what Amazon charges, strangely enough.
And that's assuming none of your hardware breaks. And not factoring in the cost of build time, tweaking, shipping to colo, installing OS's and configuring, and everything else you have to do when you build your own.
I'd call that pretty much break even on the cloud hosting, assuming you consider the two builds equivalent. If not, what would you consider equivalent?
Hardly "prohibitively expensive".
Does anyone else find it comical that the links that Jeff provides to the server hardware all go to Amazon and he is quoting AWS? Amazon wins no matter what one does since they sell everything!
I've tried colocation, but the server you own becomes out of date faster than it depreciates. Also, I've found the colo providers to be very stingy with bandwidth. AWS can get very pricey, unless you're using them as a PaaS provider. On the other hand, some of the dedicated server offerings are extremely attractive. For example, BurstNet is offering unmetered 1 Gigabit or dedicated/guaranteed 100 Mbit bandwidth for only $179/month, including this high-end dedicated server configuration:
- Quad Core Xeon X3220 (2.4 GHz)
- 8 GB RAM
- (2) 500 GB SATA w/hardware RAID
- 5 IPv4 addresses included, plus IPv6
- Pretty much any Linux distribution, or Windows Server for a little extra per month
Good point about reserved instances. If I change the (Amazon provided!) default Web Application settings from on-demand to three-year reservations, I get $825.17 per month with a $3,600 up-front payment.
That breaks down to $925.17 per month. So instead of buying one of these servers (faster than almost anything you can get at any price on EC2) every two months, you can "only" buy one every three months.
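Amortizing that up-front payment, for anyone checking the math (only figures already quoted above):

```python
monthly_fee = 825.17
upfront     = 3_600
effective   = monthly_fee + upfront / 36   # spread over the 3-year term
print(f"{effective:.2f}")                  # 925.17 per month

server_cost = 2_750
print(f"{server_cost / effective:.2f}")    # ~2.97 -> one server every ~3 months
```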
Also note that we're paying a little bit exorbitantly for the 512 GB SSDs and the top-of-the-line Xeon quad-core CPUs. If I bump that down a bit, I could easily build these servers for around $2k each with almost no meaningful loss in overall performance. (but some loss in storage capacity, yes.)
If I do that, then we're back to buying one of these faster-than-anything-EC2-sells servers every two months again. :)
Also, for the record, our colocation costs, including bandwidth and power, are more like $125 per server per month, nowhere remotely near $500 per server! Even that is a little bit expensive; I could probably have done better if I shopped around a bit. Actually, now that I'm comparing notes here:
"They offered me 8U at 2 amps for $200/month with a 100 Mbps unmetered data plan"
"For $206, I got 10U, 4 amps, and 100 Mbps connectivity in a San Jose data center about 14 miles from Fremont"
I need to re-negotiate that deal! Our servers are all 1U so we only need 5U, and perhaps 4 amps.
Any change of heart in the aftermath of Sandy? Trello is down, and I've seen Stack Exchange and Fog Creek roll over to their backup data centers. Granted, AWS and other cloud providers have their outages, but it seems like they are shorter than when your primary DC goes down.
While reading all the comments I could not get my thoughts together, but I now think I see what is going on.
Some people are spending a lot of time learning how to get good value out of xxx hosting company, choosing the correct type of instances to rent, etc., while other people would rather spend their time understanding how servers work and building their own. (Then you have the time to find a good colocation setup if you build your own servers.)
I now think that EC2 offers a few main benefits.
a) You can scale down without having to take a big loss by un-capitalizing your hardware investment.
b) A small monthly bill you are not committed to is a lot easier to get approved than buying lots of machines.
c) If it all goes wrong you can blame someone else! I think this is the best reason to use EC2...
d) If you only need very low powered servers for a short time, then EC2 cannot be beaten.
e) You can believe you can scale up next week if needed, without having the costs this week.
Forgive me for butting in the middle of your EC2 cost discussion, but I'm building something very similar to this and I was curious what you are using for your RAID configuration. Did you go with onboard RAID, software, or hardware, and why? I've been reading lots of people saying to avoid onboard RAID, but I just recently had a hardware RAID fail likely due to the card (3ware 9650SE-2LP). I'm also doing a Windows build, so there doesn't appear to be a solid software RAID option like Linux has.
Jeff, please also note that when you use the Amazon option, you get their ability to host in multiple datacenters, and load-balance across all your purchased instances.
These aren't small things.
If you roll-your-own setup, you will have to maintain contracts with multiple data centers, and you will have to buy (or implement) your own basic network security and load-balancing devices.
That's not chump change. It'll easily add the cost of another server plus open-source routing/loadbalancing/security packages (in your roll-your-own strategy) or the cost of dedicated routers, firewalls, and load-balancers, if you decide to purchase them off the shelf--a thousand dollars to several thousand dollars more in upfront charges.
Plus you'll have to self-admin servers (with their multiple points of failure) and network security and routing devices, which aren't necessarily the skillsets that software developers want to spend precious focus on.
Sorry Jeff, I completely failed to read the point that you made at the very end of your post in which you specifically call out that your approach is for folks who "don't need the redundancy, geographical backup, and flexibility that comes with cloud virtualization."
In light of that exception, my objection is misplaced.
Our company recently investigated the cost/benefit of AWS. We went with our own colocation facility.
Some salient points:
1. While AWS is more expensive compared to rolling your own physical servers, the talent required to DIY is expensive. This makes AWS very attractive to startups without someone in-house who can perform the operations-related stuff.
2. The only way to make your environment cost-competitive with AWS reserved instance costs over a 3yr basis is to virtualize your own datacenter. You can virtualize using Amazon and have them be the beneficiary of your spare CPU/RAM/Disk cycles or you can virtualize yourself and save over the long haul.
3. For AWS costs to be even marginally competitive over a 3-year basis you must sign up for reserved instances to lower costs. This means paying a large sum of money up-front. If you have 50-100 servers in AWS, good luck going to the bank and telling them you want a loan for something that is a "soft cost". The bank wants evidence of hard assets that can be recoverable in the event you can't pay that money back. This makes taking out a loan for reserved instances difficult but makes it much easier to get a loan for your own hardware.
4. We spend 35K a month currently in AWS. We calculated that even taking into account the fastest possible hardware, an insane amount of RAM and an SSD SAN, hosted in one of the best datacenters in the country, paying for bandwidth, rackspace, and salaries, hosting on our own had an ROI of ~14 months if we doubled in size every 6-8 months (see the payback sketch after this list). AWS becomes very expensive as you host more machines there.
5. Good luck getting the kind of IOPS necessary for big data from AWS. Their SSD tier does not provide the kind of I/O you need for scale, and it is very expensive.
6. AWS is perfect if you need to scale up and down all the time or if you are a startup and have more money than operational technical talent.
7. We will continue to leverage AWS for some of their ancillary services. AWS is a good tool and there is always room in the toolbox for exotic instruments that can be used at the right time.
8. The more services AWS hosts, the more complex it becomes. That complexity has a serious operational cost. Have you taken a look at the descriptions for their downtime events? It sounds like a giant Rube Goldberg contraption.
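Regarding point 4 above, a generic payback sketch. Only the $35k/month AWS spend comes from the comment; the capital outlay and self-hosted run rate are hypothetical placeholders, since the comment doesn't give them:

```python
# Months to recoup a self-hosting buildout vs. staying on AWS.
aws_monthly = 35_000    # from the comment
capex       = 300_000   # hypothetical: servers, SSD SAN, install
own_monthly = 12_000    # hypothetical: rackspace, bandwidth, salaries

payback_months = capex / (aws_monthly - own_monthly)
print(f"{payback_months:.1f}")   # ~13 months with these placeholder inputs
```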
You should include the price of Microsoft Windows Server in your calculation for a fairer comparison.
Hi Jeff, long time fan, first time commenter... I love building servers too and I've managed a small group of servers; I personally use Linode, and my current company uses AWS and some internal servers...
You would agree that in coding you pick the right tool for the job (scientific computing would use a different technology stack than standard ecommerce startup website)...
1. AWS is elastic (you pay a premium for being able to scale up or down, and there's value to the agility with which you can change or add new services)
2. AWS RDS is a huge improvement over managing MySQL replication, and they have ELB and lots of other addons that take serious Ops chops to create and maintain
3. Server operations cost is not just the raw hardware:
a. The biggest cost in Ops is people (same as coding), so leveraging Amazon saves on how many people you need to pay to manage your server farm (yes, SysAdmins take holidays and change jobs, so cost = N+1)... you can outsource halfway by colocating, but setting up the redundancy, monitoring, auto scaling, etc. becomes a physical pain (you want West Coast and East Coast servers, right?).
b. The infrastructure of cooling, UPS, network (bandwidth!), backups, etc. is also a big factor in Operations (does your server room have building security? backup generator?)
My point is that for a stealth mode startup or any internal lab testing buying servers is a no brainer - do it with ESXi or OpenStack and hack away!
BUT for Production you'll need some Cloud strategy (AWS competitors: RedHat OpenShift, RackSpace Cloud, IBM, AT&T Compute, Google AppEngine, etc. mean lower prices and improved services)
As you've already mentioned, if you happen to have a pile of cash and underutilized tech expertise hanging around...
Omg, loved the picture book cover. Nyce!!!!!
ditto to Brandon0, and back to the topic of the actual server you built: what RAID controller are you using?
It appears you are using the onboard controller on the motherboard, which seems risky since the general consensus, including discussions on Server Fault, is that onboard RAID is 'RAID in name only', i.e. the controller, not the drives, tends to be the source of catastrophic data loss.
Are you not concerned that you'll have a lot more to worry about than just the rather poor SSD reliability you've written about elsewhere?
Are you getting enough upstream at home for a reasonable price to run a server there?
Apologies in advance - I am that lowest of orders: an end-user *and* RentaServer renter.
I have been getting screwed by Hosts ever since the Web was invented. (literally)
Numerous SERIOUS marketing efforts have been started over the years and all have ended with crashed Sites.
To explain: I am a serious marketer and learned to drive traffic the hard way in the hard world with pay-in-advance ads.
The Web should have been a paradise for me as a small operator with miniscule costs compared to the "real" world.
Almost EVERY promotion I've run (and they are still expensive, even out in the Cyberbog) has worked, thus resulting in Server crashes from even minor peak traffic.
We aren't talking Markus Frind figures here, just a few thousand hits.
After all these years, I have never got a straight answer and never had a Server service stay up for one month without downtime.
I don't need PeerOne, I keep hearing of guys running Servers from home, like Plentyoffish.com did and all these years later, with traffic that would make ME a billionaire on 10% of it, he still runs everything as almost a one-man band on literally, 1/300th of the number of Servers his competition uses.
I had high hopes for the "Cloud" with its lies of distributed loading, and from 1&1, who promoted it heavily, to others, they all fall over, so where is this redundancy?
My three current "trials" have all fallen over in the last 3 months - all "cloud-based"...
WHAT DO I NEED TO DO TO HAVE A LITTLE SYSTEM THAT CAN:
1. Run a few Forums.
2. No high-demand music/flash/video downloads, mainly text.
3. Have 5000 concurrent users.
4. Handle spikes of visitors of 10,000 per hour NOT second or minute, per hour.
plentyoffish.com was handling 100 times that with the colossal demands of a dating service system, on a home PC. Running Windows as the final insult! :-}
I don't even need big data pipes. No videos, no music.
With all the tech expertise I've seen on this Board here, someone must be able to tell me the secret.
Or, at least how Markus did it.
Why do others need 600 Servers and 500 staff and he needs a couple of renta-boxes and his girlfriend?
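For perspective on the load described above, a quick back-of-envelope conversion, assuming each hit is a single page view:

```python
peak_hits_per_hour = 10_000
print(f"{peak_hits_per_hour / 3600:.1f}")   # ~2.8 page views/second at peak
```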
Thanks a lot for this blog. I bookmarked it last year knowing that this year I would be building a new server. Now I am ready to build it.
Would you still use the same specs? Or would you move to an Intel E5-2620 (6 core) platform?
My server will be mainly file storage, but I would like to leave the opportunity to expand into Virtualization in the future.
Very interesting discussion. Over the years I've rented servers and colocated my own servers. More recently, I thought I'd try out cloud hosting for my new sites which book hotels online and book excursions online.
At first, I was plagued with problems with the cloud hosting. The server crashed and wouldn't come back up after a restart, and as for speed, it was a snail's pace, to put it mildly! I'm sure the hosting company got thoroughly sick of me in the first few weeks.
I've been working with Windows and Linux servers for years, so I knew that I'd set it all up correctly. I just couldn't understand what was wrong. The sites were offline for 5 days and I decided that any more problems afterwards, I would go back to colocated dedicated servers.
As it happens, that was the last time it crashed. Something hadn't been set up right with the virtual server itself and after then it's been a dream, very reliable, very stable. One of the sites has over 6 million pages and as traffic volumes increased, I've been able to increase memory and disc space without affecting the sites at all.
All in all, I think virtual servers are the future. What happens when I need more than the maximum my host can provide, I'm not sure yet, but I will cross that bridge when I come to it.