Performance of Thin Clients on GNU/Linux

Intel wants you to buy powerful processors on thick clients. That’s where it gets the big bucks. When I saw a report of a test of thin clients sponsored by Intel, I paid attention. Here’s the crux of it:
“In our tests, all the clients performed the same tasks at the same time, though each had its own copies of the data files. Though typically people are not doing exactly the same thing at the same time, most networks of a similarly capable server would be supporting a lot more than a mere 5 simultaneous users. Further during normal work hours a great many of those users would be working on different tasks at the same time. Our test cases are thus probably less demanding on the server than real user networks.”

What’s more, these tests were done with that other OS on the server. GNU/Linux scales a lot better, and in the real world the whole office does not push Enter at the same time. Intel’s test showed tasks taking five times longer with just five clients. My tests in the real world, with real users, show tasks taking no more time with 20 users than with one, and on top of that, tasks run faster on the server than the same tasks run on a thick client. That’s because people are not robots, and real servers have more and faster drives than the usual desktop from Dell.
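To see how much the everyone-clicks-at-once assumption matters, here is a minimal sketch of mine (not from either test) of a server that serializes work on a single resource. When five users submit at once, the last one waits five task-times, just as Intel measured; when arrivals are spread out, nobody waits:

    # A toy FIFO queue: one server resource, tasks of fixed length.
    def finish_times(arrivals, service_time):
        """Return how long each user waits from pressing Enter to done."""
        free_at = 0.0
        waits = []
        for a in sorted(arrivals):
            start = max(a, free_at)          # wait for the resource if busy
            free_at = start + service_time
            waits.append(free_at - a)
        return waits

    t = 1.0                                  # seconds of server work per task (assumed)
    burst = [0.0] * 5                        # all five clients press Enter together
    spread = [i * 10.0 for i in range(5)]    # real users, ten seconds apart

    print(max(finish_times(burst, t)))       # 5.0 -- five times longer, as Intel saw
    print(max(finish_times(spread, t)))      # 1.0 -- no one notices the others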

[Chart: 21 Grade 1 students working a 2 GB server hard]

I don’t think many non-teachers appreciate Grade 1 students. They have no patience. They are twitchy and go from one task to the next in the blink of an eye. They click on things because they can, not because someone is paying them.

Notice that the CPU is not maxed out, which was the assumption behind Intel’s tests. On CPU load alone, that machine could run 100 students. On file access, I had four hard drives in the system, each with about 8 millisecond access times. That means I could seek 125 times per second on each of them and 500 times per second on the whole array, so neither CPU nor I/O was a bottleneck on the server. The network traffic is a bit over 12 MB/s, much less than the bandwidth of a gigabit/s NIC: no bottleneck there either, and only about 0.6 MB/s per user. That network, a cheap affair in the bush, could handle many times more users. If you look at the chart, you will see the bottleneck is RAM. The server has used all its RAM for users and caching. There is nothing left and it is beginning to swap: 21 users in 2 GB works out to about 100 MB per user.
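For the record, here is the arithmetic behind those bottleneck checks, using the figures from the chart above (a back-of-envelope sketch of mine; nothing here ran on the server):

    users = 21

    # Disk: four drives, ~8 ms average access time each
    seeks_per_drive = 1000 / 8               # 125 seeks per second per drive
    seeks_total = 4 * seeks_per_drive        # 500 seeks per second on the array

    # Network: ~12 MB/s observed, on a gigabit NIC (~125 MB/s capacity)
    per_user_mb_s = 12 / users               # ~0.6 MB/s per user

    # RAM: 2 GB (2048 MB) shared by everyone -- the real bottleneck
    per_user_mb = 2048 / users               # ~100 MB per user

    print(f"{seeks_total:.0f} seeks/s, {per_user_mb_s:.1f} MB/s per user, "
          f"{per_user_mb:.0f} MB RAM per user")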

Here’s a real-world example from Largo, FL, where Dave Richards is running 200 e-mail users on one server. His uptime output shows the CPU idling at 3% usage. Sorry, Intel. You sold him those CPUs. His chart shows only 7 users because users actually log in to another server while the applications run on different servers. He says he could run 400 users on that one server. The “wait” queue is only 5 or 6 long. Performance will be snappy at that rate because the server can do thousands of things per second.
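Anyone can check the same numbers on a GNU/Linux box. The load averages and run-queue figures that uptime reports come straight from /proc/loadavg; a few lines of Python (a sketch of mine, assuming a Linux system) pull them apart:

    # /proc/loadavg holds five fields: 1/5/15-minute load averages,
    # runnable/total task counts, and the last PID used.
    with open("/proc/loadavg") as f:
        one, five, fifteen, tasks, last_pid = f.read().split()

    running, total = tasks.split("/")
    print(f"load averages: {one} (1 min), {five} (5 min), {fifteen} (15 min)")
    print(f"runnable tasks: {running} of {total}")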

[Chart: Dave Richards’ uptime for 200 users doing e-mail]


So Intel is wrong, or at least misleading. With thin clients and GNU/Linux on a terminal server, you can do much better than the performance of a thick client.

See “Principled” Technologies.

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I taught in the subject areas where I have worked: maths, physics, chemistry and computers. I love hunting, fishing, and picking berries and mushrooms, too.

2 Responses to Performance of Thin Clients on GNU/Linux

  1. High school students use Firefox more. It does use a lot of RAM, but I cache stuff at the firewall, so it is quite snappy if they go for the same pages (which they do), and the RAM goes surprisingly far. Typical web pages fit in 1 MB, so you can cache a lot of pages in 1 GB. As well, Firefox and other browsers cache to disk, so having four drives can keep up with it. My best server had 4 SCSI drives. They seeked in about 2 milliseconds, almost a “click”. SSD is the way to go, though, seeking in less than 1 ms. (The arithmetic is sketched below the comments.)

    I updated the post to show the uptime Dave Richards sees in Largo with hundreds of users on one server. The wait queue is only 6 long. How long does it take to seek six times? That’s responsiveness. You won’t do any better on a thick client with one hard drive and that other OS.

  2. Bender says:

    I guess the amount of RAM is the weakness of this server; let some students fire up Firefox or Chrome and they will bring it to its knees 😉 Still, 2 GB for 21 users is mighty impressive! The latest OSes from Redmond require at least 1 GB, and here we have 21 users working in 2 GB of memory!
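For those who want the cache arithmetic from the first response spelled out, here is a quick sketch (round numbers from the comment; the 0.5 ms SSD figure is my assumption for “less than 1 ms”):

    # How many typical 1 MB pages fit in a 1 GB firewall cache?
    pages_cached = 1024 // 1                 # ~1000 pages held in RAM

    # How long do the six queued seeks from the wait queue take?
    for name, seek_ms in [("stock 8 ms drive", 8.0),
                          ("SCSI array", 2.0),
                          ("SSD (assumed)", 0.5)]:
        print(f"{name}: 6 seeks take about {6 * seek_ms:.0f} ms")

    print(f"cache holds roughly {pages_cached} pages")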
