Ars Technica Reaches New Low In Journalism: why thin client desktops must die, based on experience from 1990…

I hate biased reportage like this. The author writes that thin clients must die, and he starts by discussing 386s used as terminal servers in the 1990s, on 10 megabit/s networks, running software that was never designed to be multi-user: WordPerfect.

“Admittedly, the problem here wasn’t so much the terminals (at least the ones that weren’t dead on arrival) as it was WordPerfect and SCO. We had gotten WordPerfect for SCO immediately after its release (the box was practically still warm when it arrived in our office, and there were two software licenses stuck together in the box). Apparently, at that point the code hadn’t been tested in multi-user mode very much. Whenever someone ran spellcheck and added a new word to the custom dictionary, it changed the ownership of the dictionary file. When the next person ran spellcheck, it would cause SCO Unix 386 to kernel dump.”

Uhhh, let’s see. The messenger arrived on a Monday and you hate Mondays; therefore, you must hate the messenger… Does that make any sense? Not in the least. Modern thin clients are more powerful than the terminal servers he remembers. A poorly designed system that happened to use thin clients does not mean there is anything inherently wrong with thin-client systems.

Systems that I have designed had superior performance to thick clients (presumably the author’s favourite). Why? Because I had adequate RAM on the terminal server to cache all of the users’ applications, so a click on an icon brought an immediate response. Program files in GNU/Linux are shareable and re-entrant: one file read in for Joe is immediately usable by Bob, Billy, Heather and Jeanne. The drives only have to seek for their data files, which are usually tiny compared to the applications.

Then there is the storage. A good server will have multiple drives, hence multiple sets of read/write heads, so that N files can be read at once and there is almost never any waiting. I have seen 30 users sharing a single PC blown away by the experience. When their familiar thick clients were turned into thin clients, opening the largest application, OpenOffice.org, took less than 2 s compared to 10-15 s on a thick client.

A good terminal server these days may have multiple gigabit/s NICs, while the junk in TFA may well have had a single 10 megabit/s NIC. Talk about comparing apples to oranges.

Thin clients and terminal servers today are a great solution for almost all computing except multimedia editing and the like, where the output must be rendered close to the user to give immediate feedback. That covers, what, 5% of PC usage? Lots of organizations find they can use GNU/Linux and thin clients for 80-90% of tasks with few hassles, and with a bit of work even that limitation can be compensated for now that gigabit/s NICs are so cheap. The proof is obvious to any thoughtful person: consider how many people use a browser and little else to do all their work. They might as well be using a thin client. Run the browser on the server and get superior performance by sharing cached web content.

Oh, and another thing. I remember the quality of PCs in those days… It was not great, and components were quite variable in reliability. Comparing a system of similar architecture from then with what we have today is invalid; they just were not the same. We had clock speeds of a few tens of MHz, remember? Now most thin clients, even the cheap ones, start at ~1 GHz, which basically means nothing on the thin client is a bottleneck. It all comes down to the network, and we can buy cheap switches that will run several gigabit/s full-duplex all day long with no problems. For text, cartoons and still images, the network is not a bottleneck until one gets ~50 users per NIC (a rough estimate is sketched below). Most terminal servers can even do a reasonable job of full-screen video if a few frames per second is sufficiently useful. YouTube is no problem at all unless everyone is doing it…
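Here is the arithmetic behind that ~50 users per NIC figure, as a minimal Python sketch. The 20 megabit/s average per office user is my assumption, not a measurement; adjust it for your own workload.

    # Back-of-the-envelope estimate; the per-user figure is assumed.
    nic_bits_per_s = 1_000_000_000      # one gigabit/s NIC
    per_user_bits_per_s = 20_000_000    # assumed average for text/still-image work

    users_per_nic = nic_bits_per_s // per_user_bits_per_s
    print(f"~{users_per_nic} users per gigabit NIC")   # prints ~50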

My recommendation is that GNU/Linux terminal servers and thin clients should be the default solution unless there is a demonstrated need for more local power. It is just not economical to have everyone in the organization drive a Cadillac, and the performance of a Cadillac may actually be less than that of a smaller, less expensive vehicle. Some Cadillacs are good cars, but they are over-priced if all you need is a wheeled vehicle. All most of us need is a good thin client, and they start at about $50 these days. The most expensive thin clients are better than the terminal server the guy was dissing.

For a really poor example of journalism (or excellent trolling), see “Passport to hell: why thin client desktops must die” at Ars Technica.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology.

77 Responses to Ars Technica Reaches New Low In Journalism: why thin client desktops must die, based on experience from 1990…

  1. Ted says:

    “After Effects uses clustering between desktop machines because it eats itself out of house and home.”

    Can you point me at a decent HOWTO on how I cluster AE, or should I just carry on using AERender and/or Watch Folders?

    http://help.adobe.com/en_US/aftereffects/cs/using/WS3878526689cb91655866c1103a4f2dff7-79a3a.html#WS3878526689cb91655866c1103a4f2dff7-79a2a

  2. oiaohm wrote, “Your cheap consumer PC sitting on your desk is basically a monster compared to what most people need. The only thing it lacks is a network port fast enough to send the screen elsewhere perfectly. Everything else needed to support at least 2 users is there.”

    2 is way too modest. Having been system administrator in a building full of GNU/Linux PCs, I can send around an “uptime” or “top” command and see what real users actually do. If it weren’t for Flash, most users could pass for their OS’s idle loop: ~1% utilization on a powerful CPU, and that’s with a single core… Humans have very low input bandwidth. Since they read at a rate of kilobits/s and glance at still images at a few kilobits/s, they just do not place any load on a terminal server. A few users watching YouTube can use more bandwidth than all the rest; in a workplace, few people are likely to waste time watching YouTube. They read, write, think and do.

    Here’s a seriously under-resourced terminal server with 6 users logged in:
    No swapping, and 6% CPU utilization in 512 MB of RAM (a quick way to spot-check this is sketched below). There were only brief periods when they were anywhere near maxed out on CPU. That was my old Beast: AMD64, 2.5 GHz, single core. Given more RAM, that machine could have run 25 users, a whole lab, as it did when new. Beast did have a gigabit NIC to a 100 megabit/s switch; I have run a whole lab on 100 megabit/s if nothing graphically intensive is going on.
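    For anyone who wants to do the same spot-check, here is a minimal Python sketch that reads the load average and swap usage straight from /proc on a GNU/Linux terminal server; the thresholds you alarm on are up to you.

        #!/usr/bin/env python3
        # Spot-check a GNU/Linux terminal server: load averages and swap in use,
        # read from the standard /proc files.

        def loadavg():
            with open("/proc/loadavg") as f:
                one, five, fifteen = f.read().split()[:3]
            return float(one), float(five), float(fifteen)

        def swap_used_kb():
            values = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, rest = line.split(":", 1)
                    values[key] = int(rest.split()[0])   # values are in kB
            return values["SwapTotal"] - values["SwapFree"]

        print("load averages (1/5/15 min):", loadavg())
        print("swap in use (kB):", swap_used_kb())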

  3. oiaohm says:

    Linux Apostate: Cat 6a cable and ports carry 10G Ethernet, and they also carry 1G networking.

    The wiring to do 10G Ethernet is already being deployed in places that are currently using 1G networking.

    “And the biggest question is always “why”. Why replace your cheap consumer PCs with some big iron monster?”

    Your cheap consumer PC sitting on your desk is basically a monster compared to what most people need. The only thing it lacks is a network port fast enough to send the screen elsewhere perfectly. Everything else needed to support at least 2 users is there.

    It’s more about the means to use your PC’s power where you want it: basically a 100 meter cable radius around your PC instead of a second PC. Inside that radius, for just a few people, having more than one PC makes no sense in a lot of cases.

    10G Ethernet is also cheaper to license than HDMI, DVI or DisplayPort.

    A common error:
    “If you need performance in an application (let’s say After Effects or Photoshop, for examples), the last thing you want to be doing is sharing the CPU and memory resources with others.”
    If you are really worried about memory, you will be running applications in virtual machines. KSM and UKSM on Linux, and VMware’s equivalent, can merge duplicate pages and so reduce their memory usage.

    It’s not sharing the CPU that is the big problem; it’s sharing the GPU. After Effects and Photoshop are both GPU-heavy.

    After Effects uses clustering between desktop machines because it eats itself out of house and home. Desktop machines are not really big enough to run it, so you end up with a crapload of network traffic trying to make up for it.

    Really, After Effects is a poor example; it shows the problem clearly. If all the processing power were in one machine you would not have to waste network traffic routing stuff around. In fact that routing can fill 10GbE network cards. Yes, welcome to ouch: three monitors’ worth of 10GbE, or one application consuming the lot.

    Ted, for some of these video-processing applications you just build one huge mother of a machine; anything else will kill you. Hacking around the issues by clustering is stupid; sharing the huge machine between many users is the only way to go that makes sense for the cost of the machine. There is a place for the server and thin-client combination, and it is big, horrid things like After Effects where you need huge amounts of processing power in a small space.

  4. oiaohm says:

    oldman, really, I do think 10Gig will become common, because 10 megabit through 10GigE all use the same port. OK, one minor difference: good quality 1G has a shielded connection on the plug, but that is the only difference in the plug across 10M, 100M, 1G and 10G networks on Cat 6a cable, and the socket is a little better designed for 10GigE. So yes, businesses using 1G now are rolling out the cabling for 10G. The cable roll-out always comes before the device becomes common.

    WAN pipe? Where does a WAN pipe come into this? I am not talking about thin clients going over the Internet, or even, in a lot of cases, building to building. 100 meters is about your performance limit; any more than that from the server, forget it. As Linux Apostate said: latency. There is a limit to how long a cable from server to thin client can be before the speed of light catches up with you and adds more latency than hardware variation introduces. You could do 500 meters if there were no switches, but that is the absolute hard limit using fibre to keep latencies low enough. At 100 meters you are limited in the number of switches you can use, and increasing bandwidth does not fix that problem.

    I am talking about thin clients seeing the local desktop machines go bye-bye. The main reason one desktop machine and 2 users don’t work today is that the extra monitor and keyboard have to sit practically on top of the other user, because there is no long-range port fast enough to send screen, keyboard and mouse remotely at full speed. A 10GbE port gives you a long network monitor port. Current desktop machines are more than able to support 4 to 5 people who are not all playing some heavy game at the same time; the problem is that 5 people in one small room equals 5 unhappy people.

    Internet clients are most likely going to be serviced by HTML5, with applications designed to work with that latency, exploiting JavaScript and other techniques to do local processing, since that allows lag-hiding through user-end scripting.

    Basically, oldman, we are at a fork in the road. The current thin-client stuff will disappear because it is based on compression, trying to squeeze something too big down something too small without redesigning the applications. The new thin-client stuff will be restricted to local networks; as a LAN solution, going past that is pointless, because you might as well use the old horrid tech and save bandwidth, since performance will not be much better.

    The better way to service remote users will be HTML5+.

    Yes, the bandwidth of local thin clients that perform well will be too huge for the Internet for a very long time, let alone the physical latency problem. This is where the cloud idea completely collapses: there is no way the cloud can provide high-performance thin clients. Local servers can.

    The 100 meter limit is the same limit the old text terminals on mainframes had.

    The thing that you did not get, oldman, is that it is precisely the WAN pipe being small that makes a central server such an advantage: a perfectly synced cache, and the ability to have the cache see through SSL traffic without having to install root certificates in browsers.

  5. Linux Apostate wrote, nonsensically, “The case for doing this is not exactly compelling.”

    If the single point of failure is the most robust part of your system, let it roll. Imagine a tank commander who refused to advance because a single tread failure could put him out of commission… or a bus driver who told his boss he remained parked with a load of passengers because he was afraid his 20-ply tires would fail at a critical juncture… or someone afraid to let the processing happen over the network because his copper, which lasts ~25 years, and his switch, with a mean time between failures of 10 years, might die…

    There are few networks or servers as fragile as the typical COTS PC. Thin clients are actually more robust than the typical PC because there are fewer moving parts and less heat… So, Linux Apostate is wrong, seriously wrong.

  6. Ted wrote, “If you need performance in an application (let’s say After Effects or Photoshop, for examples), the last thing you want to be doing is sharing the CPU and memory resources with others.”

    Wrong. My kids, using 8-year-old thin clients, could open the largest app, OpenOffice.org, in less than 2 s because most of the files were cached and the server had 4 SCSI drives, hence 4 sets of read heads. Using the same hardware as thick clients took ~7 s to open the window. Sharing in some cases increases performance. Also, for CPU-heavy processes, you are much better off shoving the work onto a server/HPC instead of leaving your own PC running its idle loop. Watching real desktop loads on a terminal server, I rarely saw more than 40% usage even with 20 users pointing, clicking and gawking.

  7. Ted says:

    “Yes, I am. The point in question is what to do to produce the ultimate thin client performance.”

    Easy. You don’t use a thin client.

    If you need performance in an application (let’s say After Effects or Photoshop, for examples), the last thing you want to be doing is sharing the CPU and memory resources with others.

  8. Linux Apostate says:

    And the biggest question is always “why”. Why replace your cheap consumer PCs with some big iron monster?

    Who does this serve? What business becomes more profitable as a result of the transition? Is the supposed inefficiency of a DVI cable really so serious that we all need to switch to 10G Ethernet?

    Hey, any fans of Google here? Remember when Google built a data centre cheaply, by using mass-produced economy-of-scale PC parts? And then other businesses copied the idea, because it worked so well? The suggestion here is that the trend should go in the other direction: away from cheap hardware and back to big iron, away from the robust distributed implementations and back to single-point-of-failure. The case for doing this is not exactly compelling.

  9. oldman says:

    “Linux Apostate and oldman the reality is the cost of doing 10GigE network has been dropping as the volume of supply of parts increase.”

    The point of contention is when 10Gig becomes so commoditised that it becomes viable to put it on a low-end throwaway computer. IMHO that day, if it ever comes, is 5+ years out.

    My “if ever” comes from knowing that you would also need to have the network infrastructure in place to support it. If you are using a core-edge design, how much over-commit (2:1? 4:1?) are you going to tolerate at the edge and still get your performance? If you are using a collapsed design, when can you change out your chassis for the new generation where backplane capacity is in the terabit range (à la Cisco Nexus 7000 – the cost for a large network chassis can easily go above $1 million US)? Then there is the little matter of having a WAN pipe big enough – got $100K+ US a month for a 10Gig WAN pipe, sir?

    All to support your little theoretical flight of fancy.

    The simple fact of the matter is that it is probably going to take something on the order of a decade or more before even a local segment of the Internet can support this kind of traffic.

  10. oiaohm says:

    Linux Apostate
    “network cards that can somehow assemble TCP/IP packets without touching the OS. This is crazy.”

    At 10GigE and above, an OS trying to manage TCP/IP in software could completely lock up the system, so at 10GigE+ you have network accelerators. That makes sending block devices over the network fast and simple as well: just DMA the hard-drive data to the accelerator and tell it how to packetize it and where to send it. You can do the same with display data. The accelerators also take care of some of the TCP/IP noise.

    “super-specialised server hardware” — the stuff I am talking about on the network side is general 10GigE+ hardware. As for the GPU, your normal Nvidia and ATI cards are more than able to DMA their output buffers automatically. We are not talking super-specialised.

    Linux Apostate, the reality is that 10GigE+ is a new breed of hardware, different from the network cards we have been using; they have some brains. The GPUs in your computer have had DMA-to-DMA transfer for a while — this is what Nvidia Optimus does. So instead of sending the output to an Intel video card for display, you send it to a network card to be sent to a remote display. The interesting point is that this is not a specialty chip either: you can make normal Nvidia and ATI desktop cards do DMA-to-DMA.

    No part you need is really a specialty item. A new generation of network ports is required, yes, but not that special, and the required GPU you most likely already have. It is the network cards you are using that are basically the weak link.

    oldman, 10GigE has come down in price because the top end at Dell is now 40GigE:
    http://www.pcworld.com/businesscenter/article/254372/dell_brings_40gigabit_ethernet_into_its_poweredge_blade_system.html

    Dell still has to move up to 100GigE switches yet.

    Cisco has some 100GigE/40GigE/10GigE switches that are really nice.

    Linux Apostate and oldman, the reality is that the cost of a 10GigE network has been dropping as the volume of parts increases.

    “*almost* the *same*” — to be correct, on 10GigE you would say it is the same if not better on the thin client.

    10GigE is the line in the sand.

    Think back, oldman: remember when 1GigE switches were quite a few thousand dollars? It does not take long for that price to drop.

    Linux Apostate, basically you now know the magic number: 10GigE. It might take a few years before it is generally affordable.

    “So, provided you can assemble and disassemble network packets entirely in hardware, the network latency for the thin client is similar to a DVI cable. Right.”
    Slightly faster on the network cable, actually. DVI is not an efficient protocol, so 10GigE is better than DVI or HDMI over 100 meters of cable compared to a normal-length DVI/HDMI cable, slightly worse than DisplayPort at 100 meters and about equal on the same length of cable. That is using 10GigE, of course; 40GigE and 100GigE are another matter.

  11. oldman says:

    “I have specced have dual 10GbE ports so they may well be intended to connect to a mesh of devices rather than a switch. A thin client could connect to the odd port.”

    Sorry Pog, but that is not the way that it works. The card that you linked is designed to be connected to an upstream 10GigE port in a Network switch that is provisioned with 10Gig Ports.

    BTW The cheapest 10gig switch I have been able to find is a 24 port managed switch ca $9K US (from Dell).

    http://www.dell.com/us/business/p/powerconnect-8024f/pd

    That is indeed somewhat less per port than I quoted previously ($375.00 per port), but it is far more expensive than the equivalent GigE switch.

    No matter how you slice it, 10GigE connections are not for either small cheap computers or thin clients for a long, long time to come, if ever.

  12. Linux Apostate says:

    So, provided you can assemble and disassemble network packets entirely in hardware, the network latency for the thin client is similar to a DVI cable. Right.

    Wasn’t the whole point that the thin clients were supposed to be cheaper than the equivalent PCs? And yet, in order to get *almost* the *same* performance available from a desktop PC, you are buying specialised clients that cost (let’s be generous) as much as a desktop PC, plus top-end network gear, plus super-specialised server hardware including GPUs and network cards that can somehow assemble TCP/IP packets without touching the OS. This is crazy.

  13. Chris Weig, do the maths. Even 200 Euros is cheaper than 250 or 300, and I can buy container-loads of thin clients for $50 from China. Because processes run on the server, and the server may have huge resources, performance can be better using a thin client. That was the case where I last worked: the Xeon server with 4× RAID 1 and ECC RAM easily trounced the thick clients with their 40 GB hard drives. A thin client may wait a few milliseconds on the network, but a thick client can wait seconds while a hard drive seeks to open a window.

  14. oldman wrote, “NOBODY at this point in time is going to put a 10Gig ports on a thin client, let alone put together a network to support them. We have a long long way to go for that, if ever.”

    Some servers are run as thin clients or in clusters and 10GbE makes sense there. Powerful servers may cost ~$40K so the cost of a switch may be insignificant, especially if there is but a single thin client controlling the cluster. Several servers I have specced have dual 10GbE ports so they may well be intended to connect to a mesh of devices rather than a switch. A thin client could connect to the odd port.

  15. oldman says:

    “Actually, it’s a few $hundred per port, very affordable for those who depend on a network to make a living.”

    Last time I checked, Pog, there were two parts to a network connection. You forgot the cost of the 10GigE port on the switch, which at this point in time is still in the $500-$1200 per port range. That brings the total cost for a 10GigE connection to ca. $1000-$1700 per computer. This does not even begin to take into account the cost of the uplinks to support it (40Gbps is still $10,000+ per port).

    Regardless of costs, the fact remains that NOBODY at this point in time is going to put 10Gig ports on a thin client, let alone put together a network to support them. We have a long, long way to go for that, if ever.

  16. Chris Weig says:

    A souped up thin client is still cheaper than a normal thick client.

    Ahem, thin clients are not cheap. For the most basic ARM thingy from a quality manufacturer you pay at least 200 Euros, and most are much more expensive. You can get a real power-saving mini-PC for 250 Euros, a real business PC of the non-sucking variety for 300 Euros. Why would anyone willingly castrate himself by buying a thin client?

  17. Ted wrote, “Aren’t you the one who always evangelises “small, cheap” devices?”

    Yes, I am. The point in question is what to do to produce the ultimate thin client performance. I think the vast majority of thin clients can perform quite well at 100 mbits/s for peanuts. A souped up thin client is still cheaper than a normal thick client.

  18. oiaohm says:

    Linux Apostate
    “You are now ignoring latency in the software stack.

    Perceptible or not, there will always be more latency with a thin client. And for some applications it certainly will be noticeable.”
    In fact I am not ignoring the latency of the software stack; I have had the fun of using the high-end stuff.
    oiaohm
    “dma-buf mmap and vmap for transfering buffers off video cards on host machine to send to remote machine.”

    Do you understand what this means? I guess not. dma-buf uses DMA to transfer the data off the video card, and it can go straight to an accelerated network card that wraps it in packets. This hardware link keeps feeding data to the accelerator as the video card produces it. There is no software-stack interaction other than configuring the accelerator to send and pointing the video card at it. We are talking nanoseconds between when the video card finishes the data and when it turns up at the thin client, and that delay can be constant.

    So exactly which software stack, Linux Apostate? Sending a screen to a remote location does not involve software-stack issues if you can send the data stream from the video card raw. It is truly no different from putting a long monitor cable on it. In fact, if the monitor can process 10GbE directly, TCP/IP has less overhead than the digital protocol in HDMI or DVI, so a monitor connected by 10GbE would be able to out-draw a monitor connected by HDMI or DVI.

    The problem case is 10GbE to a thin-client box, then DVI or HDMI to the monitor; that would be a little slower.

    A thin client built into the monitor with a 10GbE port, able to bypass the DVI and HDMI crap thanks to dma-buf on the server, would be faster than a DVI/HDMI monitor directly connected to the server.

    “Perceptible or not, there will always be more latency with a thin client. And for some applications it certainly will be noticeable.”

    In this case the latency is no greater than general hardware variation and how bad DVI and HDMI suck. Some monitors are faster than others; if you are buying random monitors there is no way you could tell. In fact we are looking at a case where the network cable has lower latency than what we currently use to connect displays, even with 100 metres of it.

    Yes, there would be a very good case for dropping HDMI/DVI ports and replacing them with network ports dedicated to video output, since that will mean lower latency.

    10GbE is the turning point, basically. DisplayPort has a slightly better protocol, so if your monitor is not connected by DisplayPort you might as well not be arguing, Linux Apostate, because what you are seeing over HDMI/DVI, even analog, is what 10GbE can deliver to a monitor that supports 10GbE directly.

    Ted
    “Do you have any idea how expensive 10GbE actually is, or how rare it is outside of medium to large businesses?”

    Think: I use 40GbE gear in business; 10GbE is old and has come down in price. When the 100GbE stuff comes in, the 10GbE stuff will move into the price range of 1G gear.

    I am talking about the return of the mainframe. There are some current-generation thin clients sitting on 10GbE, currently only used in VFX and military applications.

    Basically, the tech to give perfect thin clients is out there. 10GbE, with other alterations like dma-buf, is when it happens.

    Also, keyboards, mice and joysticks have a habit of not being fast either.

  19. Ted says:

    “Who cares about gamers”

    How about Activision? Or Electronic Arts? Or Valve?

  20. Ted says:

    “Actually, it’s a few $hundred per port, very affordable for those who depend on a network to make a living.”

    Aren’t you the one who always evangelises “small, cheap” devices?

    You’d be happy doubling the price of the device for its NIC?

  21. Chris Weig says:

    Who cares about gamers, people wasting time using IT rather than making the world a better place?

    Yeah, who cares?

    Perhaps somebody like you who raves about Humble Bundles and Valve’s Steam supposedly coming to Linux and games being Linux’s next big thing and Microsoft collapsing even faster because one can play games on Linux now?

    Right, who cares, except for Mr. Pogson.

  22. Phenom wrote, “But gamers will definitely notice if the screen is a couple of frames behind.”

    Who cares about gamers, people wasting time using IT rather than making the world a better place? Or video for that matter when TVs and projectors are so cheap. Thin clients are the line-men of IT doing the heavy lifting of finding, creating, modifying and presenting information in the least expensive manner. They don’t need to be fancy or expensive to do the job of showing the pix and sending the clicks.

  23. Ted wrote, “Do you have any idea how expensive 10GbE actually is”

    Actually, it’s a few $hundred per port, very affordable for those who depend on a network to make a living.

  24. Ted says:

    “Linux Apostate really I don’t think you had a clue how fast a 10g network really is.”

    Do you have any idea how expensive 10GbE actually is, or how rare it is outside of medium to large businesses?

  25. Phenom says:

    “At the same time, they may not notice the screen is 1/2 a character behind while typing”

    But gamers will definitely notice if the screen is a couple of frames behind. Or when the sound breaks. Or when video looks like ass.

  26. Linux Apostate says:

    Your earlier posts didn’t mention latency at all, just bandwidth. You are now ignoring latency in the software stack.

    Perceptible or not, there will always be more latency with a thin client. And for some applications it certainly will be noticeable.

  27. oiaohm says:

    Linux Apostate
    “oiaohm seems to be confusing latency and bandwidth.”

    No, I did not. The reason thin terminals ran into major latency problems was running out of bandwidth. Without enough bandwidth in the first place you cannot get low latency over a network; you have to run heavy compression, which adds more latency at each end, to cover up the fact that there is not enough bandwidth to do the job.

    The speed of light down 100 meters of network cable does not take long.

    Screen data can only be delivered to the screen so fast. A 200 meter round trip by network cable, even with handling at each end, still adds bugger-all to the lag, particularly at 10G+.

    A poor-quality monitor may in fact add more lag than 200 meters of network cable — yes, 100 meters from the server and back.

    1080p over 1G does not happen without depending on compression.

    Linux Apostate, 10G+ is where you get the bandwidth. Then it is down to how well the server manages the traffic it sends and receives. Managed right, without compression, you are seriously talking about 3 monitors working perfectly.

    Also http://www.intilop.com/shownews.php?newsid=10G-Ultra-Low-Latency-Ethernet-MAC

    Sharemarket floor trading is timed to the nanosecond. The fastest of these is 5 to 10 nanoseconds, the general case is 100 nanoseconds, and crap 10G hardware is 1 millisecond. You would expect about 1 millisecond if going through a few switches.

    Linux Apostate, milliseconds are like forever on good 10G gear.

    So for the keyboard and mouse data going back to the server: by gamer standards, the network link to the Internet is going to be a bigger problem.

    So the difference between being 100 meters from the server and being on the server, at worst with crap network gear at 10G+, is +2 milliseconds. Good gear is somewhere under 10 nanoseconds; middle of the range is 100 nanoseconds.

    The speeds we are talking about here are that small. As you said, good players are on 50 ms lag in Internet play; the Internet lag is many, many times what a thin client sees on the right size of network cable.

    500 ms is what you find on 1G networks, the more common network type. That is still practical for word processing and general stuff.

    Why the massive increase in speed between 1G and 10G? No need to compress: 10G is big enough, so there is no extra processing at each end adding up to a super-big lag. So 10G+ can deliver the equal of a real desktop, somewhere between +2 ms and +10 nanoseconds. Seriously, given that the server can have a bigger spec than a normal machine, a larger, faster GPU might hide a chunk of that.

    Bandwidth and lag are directly related with thin clients. The common mistake with thin clients is to install networks that are too small. Thin clients mean you need monster cables from hell if you want to game or send raw video to the screen as-is.

    Remember, 40G and 100G cables are even faster.

    Robert Pogson, when I say perfect I mean perfect: +2 ms at worst, plus all the other normal thin-terminal gains of shared resources. You are talking about something whose cost can disappear into the gains in power.

    Something else, just to hurt Linux Apostate: where do you think the vsync data comes from that games use to calibrate console lag? The monitor.

    So games can know exactly how much lag a user on a thin terminal has and auto-correct for it. The reason vsync data has to be used is that monitors are different grades of crap; +5 to 10 ms is not impossible for a modern digital monitor.

    This is why your argument has no legs. At 10G+ your added lag is no different from what some monitors will add.

    Linux Apostate, really, I don’t think you have a clue how fast a 10G network really is.

    Bandwidth is what killed thin terminals’ ability to be low-latency. Now that that is fixed, the speed is there for the taking.

  28. oiaohm wrote, “Those thin clients simple did not have the bandwidth to provide a perfect experience.”

    The reaction time of a thin client is not only screen refreshes. When a user clicks on something, say to start an application, the file system may need to provide ~100 files. On the terminal server those are likely to be in RAM, ready instantly. On a thick client, 100 seeks is ~500 milliseconds, so the thin client appears faster because it takes less time to open the new window (the arithmetic is sketched below). Users really notice that. At the same time, they may not notice the screen is half a character behind while typing. As much of a factor as bandwidth is screen resolution: for word-processing and such, 1024×768 refreshed a few times per second may be good enough, while for watching a movie people will prefer 1080p. Then there are graphics protocols smart enough to realize that only the changes to the screen need to be sent; there is no point in redrawing a static area of the screen.
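    Here is that seek arithmetic as a minimal Python sketch. The ~5 ms average seek time and the 100-file figure are assumptions of mine (typical for desktop drives of that era and large desktop apps), not measurements from the post.

        # Launch-time estimate: thick client with a cold cache vs. terminal
        # server with the application files already in RAM.
        seek_ms = 5           # assumed average seek time for a desktop hard drive
        files_needed = 100    # assumed files touched when opening a big application

        cold_open_ms = seek_ms * files_needed   # thick client: one seek per file
        print(f"thick client, cold cache: ~{cold_open_ms} ms spent seeking")  # ~500 ms
        print("terminal server, files cached in RAM: ~0 ms spent seeking")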

  29. Linux Apostate says:

    Well, exactly. Lag (latency) is a big problem for gamers. I don’t know of any FPS or RTS game where 500ms would be tolerable. Turn-based games, perhaps.

    Do an image search for “tf2 scoreboard” and have a look for the Ping column. This is the round-trip latency in milliseconds. Notice that the high-scoring players usually have a ping below 50. This is not a coincidence.

    oiaohm seems to be confusing latency and bandwidth.

  30. oiaohm says:

    Prong Reboots, it does not change the reality: once you get to 10 Gbit/s networked thin clients, there is basically zero difference between them and a desktop machine in interaction performance.

    Most people are comparing the thin-client performance of the 1 Mbit/s, 10 Mbit/s, 100 Mbit/s and 1 Gbit/s ranges to a full machine. Of course that cannot work: there is not enough bandwidth to send data to the screen fast enough, so you depend on compression.

    You cannot compare the current generation of thin clients to thin clients from the 1990s; heck, you cannot really compare them to 2000-2011. Those thin clients simply did not have the bandwidth to provide a perfect experience.

    OK, the imperfect clients are in fact good enough for particular usages, and most likely still have a place to save some bandwidth here and there.

    Remember, we have 10, 40 and 100 Gbit/s network cables today. All of them can provide highly decent performance with room left over.

    A Thunderbolt interface, which is meant for up to 7 monitors, is only 20 Gbit/s.

    The reality is that networking is now faster than the video monitor interface. In fact networking is fast enough to talk to a lot of PCI-e cards through a network bridge, so a thin client with a PCI-e card sticking out of it would be possible.

    So the question is how many monitors you want: 40 Gbit/s to your desk can drive 14 of them.

    10 Gbit/s can drive at least 2-3, most likely 4, monitors just as if they were on desktop machines — more if they are not all changing rapidly.

    A dedicated 100 Gbit/s to the desk — I don’t know who exactly would need that, since it is basically 30+ monitors at their desk. That is enough to cable a studio control room with one cable to a switch and the server room 100 metres away. Since the clients are PoE, it would be a total of 31 power points in the studio: one for each monitor and one for the switch.

    100gbs cable is not that big.

    This is the reality: we have true monster cables. The bandwidth required to run thin clients is no longer a problem. Heck, there is research into 1000 Gbit/s network cable; at this point I still don’t know exactly what office usage that will have.

    1 Gbit/s to your desk is a little small. 100 Mbit/s to the desk is also a little small.

    The return of the mainframe is happening because thin clients can now work perfectly again: there is no advantage to a thick client, because the bandwidth required to run a thin client as if it were a thick client now exists again. When we were still using text-mode graphics we did not need this much bandwidth; it was the introduction of graphics that did in the mainframe, because networks could not move enough data.

    Yes, video out used to be the fastest interface on the computer. The fastest interface is now the network port.

    The machine to drive these monitors still will not be small.

    Yes, it is possible to get more than 4 monitors cleanly and quickly down a 10 Gbit/s network cable — with compression. The reality is you can send 3 down it basically uncompressed.

    So yes, a 100 Gbit/s cable supports way more than 30 with compression.

    The reality is that desktop computers are getting to the point of being able to drive more screens than people want on their desks. So splitting a machine between users is not a problem, particularly once you start seeing machines with 10 and 40 Gbit/s network cards.

    Note that in my example of 10 gamers on a 100 Gbit/s network cable I gave them 3 screens each; it’s not as if a pro gamer is going to be running just 1 screen. Mind you, 5 gamers on a 100 Gbit/s network cable would have enough for 6 projectors each, for total immersion.

  31. Prong Reboots says:

    The issue isn’t 50 ms. It’s 50 ms added onto the minimum reaction time, which, by the way, is under 200 ms. If your reaction time is 500 ms you are essentially handicapped (or well into your 80s).

  32. Ted says:

    “Those are names of products from a dozen organisations.”

    But they all manage to be “Linux” when comparing features with Windows, don’t they?

    ““Windows” does what, let in light?”

    “Linux” – what the hell is that supposed to be?

  33. Ted wrote, “You were saying? Something about inane names?

    At least in Microsoft’s case you can often tell what the software actually DOES.”

    Those are names of products from a dozen organisations.

    “Windows” does what, let in light?

  34. Linux Apostate wrote, “When I last played FPS games, even 50ms lag was too much.”

    So, why aren’t you in the Olympics where such reaction times would give you the Gold?

  35. oiaohm says:

    Linux Apostate “By the way, LOL at the idea of 500ms lag being acceptable for gaming. When I last played FPS games, even 50ms lag was too much.”

    Linux Apostate, how fast is an HDMI cable? This is important.

    The latest HDMI 1.4 tops out at 10.2 Gbit/s, with a maximum of 3.40 Gbit/s per screen. If you have a 10 Gbit/s network cable, that is fast enough even for the most extreme gamer.

    If you are not pushing maximum resolutions, 1 Gbit/s can be fast enough. A 500 ms delay only happens if you are insane enough not to use a fast enough network cable.

    This is the thing: 40 Gbit/s network cables exceed all monitor connection cables. Yes, 40 Gbit/s copper exists, with 100 Gbit/s in the next few years — the spec for 100 Gbit/s is approved; the volume of hardware is just not out there yet.

    This is why I say we only recently got the tech to do thin clients properly. 100 Gbit/s network links were only approved in 2010. Both work on copper or fibre.

    We have true monster cables that are bigger than video output cables. 10 gamers on a 100 Gbit/s network link to a server are most likely not going to be able to spot the difference between a local machine and the server.

  36. oldman says:

    “That’s a good little boy @ldman, you’re learning not to put words in my mouth. Keep it up and some day you may earn an ‘o’ in place of the ‘@’.”

    ROFLMAO!

    B@lls, sheer B@lls…

  37. kozmcrae says:

    “Your moral superiority knows no bounds….”

    That’s a good little boy @ldman, you’re learning not to put words in my mouth. Keep it up and some day you may earn an ‘o’ in place of the ‘@’.

  38. Ted says:

    “They have so many inanely named products it’s hard to keep up.”

    Debian
    Ubuntu
    GIMP
    Amarok
    Ogle
    Totem
    Kate
    Gaim
    Kopete
    XMMS
    Noatun
    Xine
    Grip
    K3b
    Iceweasel

    You were saying? Something about inane names?

    At least in Microsoft’s case you can often tell what the software actually DOES.

    Anyone can _claim_ to have used any version of Windows. However, all your criticisms of Windows seem to be based on versions no later than Windows 2000 and Windows XP.

  39. Linux Apostate says:

    The problem is *not* stating an opinion, but rather pretending that your opinion is factual.

    He was not wrong on facts or logic. If he was, you’d be able to quote something that he said which is also factually incorrect.

    By the way, LOL at the idea of 500ms lag being acceptable for gaming. When I last played FPS games, even 50ms lag was too much.

  40. Clarence Moon says:

    as are the web stats…

    You cheer when the web stats bend your way and you say they are invalid when they show what you do not want to see. You are great at fooling yourself, eh?

  41. oldman says:

    “At least their SEC-filings are current as are the web stats. I can still enjoy M$’s decline into the tar-pits.”

    A waste of time and energy IMHO. Better to garden and let things take care of themselves.

  42. oldman wrote, “There is no such thing as 2010 server”

    Correct. I should have written 2008. They have so many inanely named products it’s hard to keep up. I read lately they have renamed their file-manager, again.

    oldman wrote, “The further away from that you get time-wise from the versions of windows that you worked with, the more you will get tagged as “some old crank” who is out of touch with reality”

    At least their SEC-filings are current as are the web stats. I can still enjoy M$’s decline into the tar-pits.

  43. Linux Apostate wrote, “You write op-eds (blog posts), and then you attack the Ars Technica writer for doing the same thing.”

    The only performance the author mentioned was performance on a 386 from 1990. He wrote that his opinion was formed back then. He refused to consider that every factor in the performance of thin clients has improved by two orders of magnitude since:
    servers then: one 32-bit core at less than 100 MHz; servers now: as many 64-bit cores as you can afford at 2-4 GHz. Since the processes run on the server, should he not consider that the apparent performance of thin clients is like a rocket these days?

    clients then: one 32-bit core, a few MB of RAM and very low clock speeds; clients now: still mostly 32-bit cores, but with much higher clock speeds and much more, faster RAM. Since the task is the same, showing the pix and sending the clicks, don’t you think the performance of thin clients is like a rocket these days?

    Folks were using 10 megabit/s networks then. Now we have 1 gigabit/s at the server and 100 megabit/s or faster at the client. The bottlenecks are all gone except for full-screen video, where the whole screen has to be redrawn pixel by pixel over the network (the arithmetic is sketched at the end of this comment). Everything else is greatly accelerated. Most users cannot tell they are using a thin client rather than a thick one except by the size of the box. With file caching and improved storage, servers actually give superior performance to most thick clients.

    So, my opinion is based on facts. His was based on his own recycled biases. I can put numbers to the performance of my thin clients; he had nothing but hate.

    The author even wrote, “Admittedly, the problem here wasn’t so much the terminals (at least the ones that weren’t dead on arrival) as it was WordPerfect and SCO”. He admitted he had no basis in fact and no measurement of anything relevant with which to put down thin clients. He was wrong on facts and logic. Yet you guys defend him.
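    The full-screen-video caveat above is easy to put numbers on. A minimal Python sketch, assuming 24-bit colour, 60 frames per second and no compression (my assumptions for a worst case):

        # Bandwidth needed to redraw a full HD screen pixel by pixel, uncompressed.
        width, height = 1920, 1080
        bits_per_pixel = 24      # assumed 24-bit colour
        frames_per_s = 60        # assumed full-motion refresh

        bits_per_s = width * height * bits_per_pixel * frames_per_s
        print(f"uncompressed 1080p60: ~{bits_per_s / 1e9:.2f} Gbit/s")  # ~2.99 Gbit/s
        # Far too much for a 100 megabit/s client link; comfortable on 10GbE.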

  44. oldman says:

    “I have used all versions of that other OS except “8″, 2010 “server” and, perhaps “CE”:

    There is no such thing as 2010 server, Pog. There were three versions of Windows Server released after Windows 2003: Windows 2003 R2, Windows 2008 and Windows 2008 R2.

    Making statements like this makes you look more ignorant than you actually are.

    You are entitled to your beliefs, Pog. The consequence of holding to these convictions is that you cannot expect to have your technical criticisms taken seriously. The further away you get, time-wise, from the versions of Windows that you worked with, the more you will get tagged as “some old crank” who is out of touch with reality.

  45. Linux Apostate says:

    I don’t think it was suggested that you should support Microsoft or buy their products.

    Rather, the point made here by ch (and earlier by myself, though my reply disappeared) is just that your own opinions are not necessarily based on fact. You write op-eds (blog posts), and then you attack the Ars Technica writer for doing the same thing.

    I’ve read his article carefully and I don’t see any instance of an incorrect factual claim. Do you? Looks like opinion to me.

    I’m sorry to say that I think you have trouble distinguishing opinion from fact, because you state opinions as if they were fact (e.g. Intel lies, Windows is still DOS at heart). And when other people give their opinions, you assume they are stating facts (e.g. in Ars Technica). There is a difference. Facts are falsifiable, opinions are not.

  46. ch wrote, “basing your opinion of Windows on 1990ies versions of it, for that matter.”

    History is not my only basis for judging M$. There is no evidence of enlightenment on the road to Damascus; if anything, M$ has just become more subtle in its abuse of the market. I have used all versions of that other OS except “8”, 2010 “server” and, perhaps, “CE”. It is not logical to suggest I should support the enemy by buying their products. I also try to avoid products from Intel: they are technically good, but tied with M$ in anti-competitive lock-ins and monopoly pricing. Intel is diversifying, and I would not be surprised if they were to return to ARM when the cash-cow dries up. ARM with Intel’s best tricks and FLOSS would be pretty good IT.

  47. oiaohm says:

    ch, SSD does not solve everything. http://bcache.evilpiepirate.org/
    bcache rocks, particularly with RAM drives and solid-state drives. And there is a reason to be careful: if a solid-state drive fails, data recovery is hopeless, so you might as well kiss anything that was not backed up goodbye.

    A really expensive SSD filled with data you are not using regularly is a waste, ch. Using SSDs for block caching makes way more sense: you still get the huge performance gain out of them, along with the huge capacity of an old-school hard drive for the less frequently accessed stuff — backed up on old-school hard drives, of course.

    ch, a DisplayLink box (or an equivalent zero client) hanging off one side of the machine gives you an extra seat.

    So with zero clients hanging off that one machine you can start quickly, which means fewer traffic jams if people do need to do more things. systemd will support hot-plugging them like a USB stick: while you are at the machine using it, I plug the cable into the zero-client box, turn the other screen on and I am away.

    Note that the USB ports of the first machine can provide enough power to run about 4 of these zero clients; only the screens need an extra power point.

    So wherever you have one PC now supporting one user, if there is enough physical space you could have up to 5 users. And the extra seats don’t take any power when they are not in use, either.

    For me, my home usage of zero clients is mostly to forward videos and applications from my server machine to wherever I need them. My server machine has wake-on-LAN, so it is basically out cold unless I need it; I don’t need to walk over to it to turn it on.

    I guess your system is still running a SysV init start-up, ch — that is a slow boat to China.

    This is the difference now: we have thin clients that connect by USB and by network. The network ones are PoE, so with a powered switch they don’t need a power point; the USB ones don’t need a power point either, and some do both. So they are plug-and-go.

    You know that USB dongle that plugs into the side of an Android phone to provide an HDMI port? Guess what it is forwarding over USB. Something coming from Android to Linux provides the means to do a really nice thin client with GPU access.

  48. ch says:

    “One would expect current opinions to be based on facts not distortions of history.”

    Coming from you, that’s really bad: you yourself obviously have no problem basing your opinions on distortions of history – or basing your opinion of Windows on 1990s versions of it, for that matter.

    As regards thin clients: when I – or my special someone – want to do something on the computer, we fire up our one PC and eventually shut it down again. (Thanks to an SSD, booting is so fast we only really have to wait for the modem to come up – probably the poor thing runs on Linux 😉)

    You have one big PC (“Beast”) running at all times and start up a second machine if you want to do stuff – how is that more economical?

  49. oiaohm says:

    Clarence Moon, the difference in performance between a desktop machine and a modern thin-client setup, where GPUs in the server render the 3D and 2D from the application into a 2D image stream that is sent to the thin client, is almost nothing. Please note that “modern” means less than 6 months old; the tech only recently got to the point of doing this well.

    The difference affects pro gamers only; no other user pushes the system hard enough to notice. We are talking less than half a second of lag difference at worst. With modern setups using display forwarding from GPUs in the server, you can do basically anything short of being a pro gamer, or maybe a picky 3D-model editor. Mind you, a casual gamer is normally such a bad shot that half a second of lag might increase their hit rate. It is only the top 10 percent of users, if that, who will find an issue with thin-client setups based on modern tech. That still lets 90 percent of seats go bye-bye.

    For movie processing you have racks and racks of video cards in the server. The server’s 3D performance makes your desktop machine look like a complete joke: something that would take days to years on the desktop could be done in a few seconds server-side, just due to the massive GPU power in some of those things. The desktop becomes very questionable for movie production. The main reason for working on the desktop was the lack of a means to send the interface from the server to a nicer location: those processing servers are room-sized things and can be noisy as hell, not a place you want to be trying to think. 10 metres down the hall would be a nice location.

    Most modern machines, given the role they need to fill for business (and a lot of home use, unless you are gaming), are big enough to support 4 users without question. You just don’t yet have the means, with Windows or Linux, to simply plug them in easily. That is the thing about an OS that supports easy-connect thin clients: you might have a thin client at your TV playing back the movies on the hard drive of your PC, and because it is a proper thin client, someone else can sit down at the computer while you are using it.

    Thin clients working properly with the OS will give flexibility.

    Also, when it comes to thick clients: you do have the option of hard-drive-less thick clients if you are not in a Windows environment.

    Chris Weig
    “If you today load some average Web 2.0 website with the usual AJAX stuff, your browser sucks CPU like there’s no tomorrow. And this demand will increase. A thin client — both hardware-wise and software-wise — is the worst possible solution.”

    Funnily enough, this is where the server solution comes into its own. All the clock cycles that the individual machines would have wasted alone are in one machine, so the load can be better balanced and the total hardware gets better usage.

    So yes, the thin solution might cost exactly the same as the thick. The difference is that one has unified CPU power and one does not, and the one with unified power runs rings around the one without.

    Really, a big issue has been memory sharing for virtual machines, and that has recently been solved.

    On a server it is really critical not to run out of memory. UKSM, which is brand new, prevents starting lots of virtual machines from causing a massive spike in memory usage.

  50. Clarence Moon wrote, “The incremental cost of thick vs thin is pretty much the cost of a hard drive and that is a rather small cost element”.

    No, it’s not. The hard drive may be ~$50, but there is also the PSU, the case, the motherboard, the CPU… Every component except perhaps the NIC costs less in a thin client. Consider a whole system: are you better off with 4 GB on each client or 400 GB on a server? A GB on the server also does much more for everyone by caching the stuff everyone uses. A modest user may only need ~100 MB of RAM for his personal data instead of a copy of the OS, every application and the file caches; that’s a huge saving, and for many tasks it results in a huge increase in performance (the totals are sketched below). The fact that some tasks may not benefit is irrelevant if you have a bunch of users who don’t do those tasks. I know from teaching that a browser and a word-processor take care of almost everything high-school teachers need, and special applications can also run on the server. It’s the same sort of consolidation widely accepted on servers, applied to desktops. It works.
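    To make that concrete, here is a minimal Python sketch of the totals. The seat count and per-seat sizes are assumptions of mine chosen to match the 4 GB-per-client figure above, not numbers from any particular deployment:

        # RAM totals: every seat carrying its own OS/apps/caches vs. one shared server.
        seats = 100
        thick_ram_gb_each = 4        # assumed: OS + apps + caches duplicated per seat

        shared_gb = 8                # assumed: one shared copy of OS, apps, file cache
        per_user_gb = 0.1            # assumed: ~100 MB of working data per user

        thick_total = seats * thick_ram_gb_each
        thin_total = shared_gb + seats * per_user_gb
        print(f"thick clients: {thick_total} GB of RAM spread over {seats} boxes")
        print(f"terminal server: ~{thin_total:.0f} GB in one box, cache shared by all")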

  51. Linux Apostate says:

    “the Cult of Microsoft”

    Projection much?

  52. oldman says:

    “If there is anything I can do to instigate you some more into revealing your shallow, ignorant, arrogant self just throw some more insults in my direction. You are just a pitiful example of the Cult of Microsoft.”

    Your moral superiority knows no bounds….

  53. kozmcrae says:

    Chris Weig wrote:

    “When have you ever added something substantial to any discussion? I could imagine that someone installed Linux against your will on your computer.”

    You are straining to prove my point. There’s really no need. If there is anything I can do to instigate you some more into revealing your shallow, ignorant, arrogant self just throw some more insults in my direction. You are just a pitiful example of the Cult of Microsoft.

  54. Clarence Moon says:

    No matter how cheap other hardware is, thin clients remain a great way to get away from that other OS

    If that is your mission, then have at it, Mr. Pogson! OTOH, if you want a personal computer, then a thin client is a pretty clumsy way to go about it. The incremental cost of thick vs thin is pretty much the cost of a hard drive and that is a rather small cost element, perhaps a dinner at Red Lobster for a family of 5 or a small bag of marijuana. When the smoke or the dinner is long forgotten, though, you still have the computer with Angry Birds and MS Office to keep you entertained, even when out of 4G range.

  55. Chris Weig wrote, “Nobody needs thin clients”.

    Wrong. No matter how cheap other hardware is, thin clients remain a great way to get away from that other OS, improve performance, lower power consumption, and reduce maintenance, space, heat, noise, etc. That’s a lot of need that Wintel doesn’t meet.

  56. Chris Weig says:

    A mix is needed.

    Nobody needs thin clients, Mr. Pogson. Because real computers are dirt cheap. And every computer that is networked is already a thin client if need be. The real thin client concept where your thin client is nothing more than a glorified quasi-interactive TV set and merely outputs pictures is moronic and useless. Gaikai and Onlive may have some use for it, but who else?

    On a more general note you can easily see that the thin client concept in a very general sense is dying. Look no further than to one of your favorite toys: the web browser. You often claim that many people need nothing more than a web browser, and therefore “small cheap computers” are enough. The question is: how long will this remain so? Javascript engines are optimized all the time to enable them to do more, and ever more stuff is offloaded from servers to the clients. If you today load some average Web 2.0 website with the usual AJAX stuff, your browser sucks CPU like there’s no tomorrow. And this demand will increase. A thin client — both hardware-wise and software-wise — is the worst possible solution.

    People want to run their stuff on their computers.

  57. Chris Weig says:

    You hear that Chris? You have nothing to add to the discussion. Anything you say is just bad air.

    Says the master of this game. When have you ever added something substantial to any discussion? I could imagine that someone installed Linux against your will on your computer. And because you couldn’t get it off, you became an unwilling, unwitting Linux user and pseudo-evangelist.

  58. kozmcrae says:

    Chris Weig wrote:

    “…as the OS most thin clients are running on (Linux) has also been stuck in the 1990s for more than 20 years now.”

    Chris Weig has shown his willful ignorance. You can ignore anything he says from now on.

    You hear that Chris? You have nothing to add to the discussion. Anything you say is just bad air.

  59. Linux Apostate wrote, “an opinion piece (“Op-ed”), not reporting”.

    One would expect current opinions to be based on facts, not distortions of history. History was improperly presented as justification for killing thin clients. That’s absurd/irrational.

  60. Clarence Moon wrote, “It is an obvious waste of money to supply thin clients when a conventional PC is required anyway for other uses on other occasions”.

    Like when every man, woman and child has their own Cadillac? That would be silly, just as it would be silly for everyone to have a thick client or for everyone to have a thin client. A mix is needed. Typically, schools would have a thick client as the teacher’s PC/terminal server and students would have thin clients, so the teacher can do his/her job of managing students’ activities. I like to hang a printer, a scanner and a projector on the teacher’s PC to cover multimedia as well. So, 6 thin clients and one thick client per classroom works. More thin clients in the lab are fine unless multimedia creation is on the agenda.

  61. Linux Apostate says:

    Point of order – this is an opinion piece (“Op-ed”), not reporting, and bias is expected. I know you do not object to bloggers posting their opinions because you post your own opinions all the time.

    I am not a fan of thin clients because they are so completely dependent on the server and the network. My preferred situation is one where I can do all of my work locally if necessary. I don’t think this is uncommon and a great advantage of PCs (and certain tablets) is that they remain useful even with no network.

    Like “ARMed devices”, thin clients have their uses, but I don’t want to replace my desktop with a remote desktop any more than I want to replace it with an Android tablet. The thin client will be marginally better – at least I can keep a keyboard and a desktop monitor – but due to network lag, it will necessarily be less responsive than a local PC, and of course also dependent on the terminal service.

  62. dougman says:

    Speaking of games, Valve is working on a Steam client that runs on Linux.

    http://www.phoronix.com/scan.php?page=article&item=valve_linux_dampfnudeln&num=1

  63. Clarence Moon says:

    high school kids certainly do not need games in school

    But what about out of school, Mr. Pogson? It is an obvious waste of money to supply thin clients when a conventional PC is required anyway for other uses on other occasions. If you have a real PC, you have both, so you do not need a separate thin client.

    KHangman is a game with very low bandwidth requirements

    Your using Hangman in the context of computer games sort of says it all, Mr. Pogson.

  64. “A contest, physical or mental, according to certain rules, for amusement, recreation, or for winning a stake; as, a game of chance; games of skill; field games, etc.
    [1913 Webster]”

    Not limited to the 1980s, but classical and in plain English.

  65. Phenom says:

    Pogson, your idea of gaming is stuck back in the 80s.

  66. Mats Hagglund correctly observes, “There’s no chance for the old Western desktop PC.”

    Exactly. People in the emerging markets don’t need/want to be slaves to Wintel and many cannot even afford to pay Wintel its monopoly prices. We can see this clearly in Kenya, where the nation is rapidly getting on-line without copper. Instead of investing $billions in cabling and clunky/massive PCs, they are setting up access points and mobile PCs for $millions. They get more “seats” for far less money. That’s what we did in schools. Companies would install gigabit/s lines for us at $400 per jack, or we could add a $20 wireless NIC to an old PC. That was good enough for browsing or for running most applications on the terminal server. We could get old PCs for $0 plus freight, so potentially we could get 20 times as many seats for the same investment. It was a no-brainer.

  67. oiaohm wrote of thin clients, “of course its not for gamers.”

    Some games play very well with thin clients. It all depends on the graphics. If the bandwidth required to the screen is small enough, they work. I have seen that in schools with elementary students who play games. KHangman is a game with very low bandwidth requirements because the screen is changed and then left static for long periods while humans read, think and do. TuxMath, on the other hand, is a killer with those “flaming” meteorites. The twitches of the flames each involve a full-screen refresh. While 30 students could easily play KHangman from Beast, 5 grade 1s playing TuxMath would severely impact performance with a single 100 megabits/s NIC. That’s purely a network bottleneck; Beast could handle many more on a gigabit NIC. Of the 100 or so games I had with 24 elementary students, only TuxMath had any problem with thin clients, because most of them were really cartoons rather than full-screen video.
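
    A crude estimate shows why the two games behave so differently on the wire. The resolution, colour depth and compression ratio below are assumptions for illustration, not measurements from Beast:

        # Crude bandwidth estimate for thin-client screen updates (assumed numbers).

        nic_bits_per_s = 100e6                  # one 100 megabits/s NIC on the server

        width, height, bytes_per_pixel = 1024, 768, 2       # assumed screen mode
        frame_bits = width * height * bytes_per_pixel * 8   # raw size of one full-screen update
        compression = 10                        # assume ~10:1 from the remote-display protocol

        def users_supported(updates_per_second):
            """How many clients fit on the NIC at a given full-screen update rate."""
            per_user = frame_bits / compression * updates_per_second
            return nic_bits_per_s / per_user

        print("KHangman-like (1 update every 10 s):", round(users_supported(0.1)))
        print("TuxMath-like  (10 updates per second):", round(users_supported(10)))

    With these assumptions a mostly-static game could serve hundreds of clients per NIC, while a game redrawing the whole screen ten times a second saturates the same NIC with fewer than ten, which is consistent with what I saw in the classroom.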

  68. Clarence Moon wrote, “Tell your subjects that your thin client can do anything but multimedia, games, and Skype.”

    Multimedia are not a problem for thin clients except for full-screen video. I have no problems with audio because of its lower bandwidth. Games are for kids, so elementary students may not be best served with thin clients, but high school kids certainly do not need games in school. Skype will run on a thin client just fine, but M$ recently broke it. There is nothing in principle to prevent Skype being used on thin clients. After all, it is a network application, and the LAN is far faster than the Internet, so the Internet remains the bottleneck.

    If necessary, a thin client can run a particular application locally.

  69. Chris Weig wrote, “(Linux) has also been stuck in the 1990s for more than 20 years now.”

    Uhhh, */Linux is at the forefront of many technological innovations of the 21st century, so Chris is full of it.

  70. “ntpq -p
          remote           refid      st t when poll reach   delay   offset  jitter
     ==============================================================================
     *router          38.229.71.1      3 u  181  512  377    0.328    5.261   2.049”

     No stumbling here.

  71. dougman says:

    When you start messing with time servers, all platforms are affected.

    It’s akin to messing with the timing on your automobile while it’s driving down the road.

    My customers and servers had no problem, no sporadic down time.

    Speaking of time bugs, what about this one?

    http://www.wired.com/wiredenterprise/2012/03/azure-leap-year-bug/

    What is interesting is that it was not just Azure; a neighboring office informed me that their system was down for whatever reason. Seems their M$ servers had an invalid security certificate.

  72. dougman says:

    Thin clients stuck with Linux since 1990, eh?

    HP just released an updated Ubuntu build for their newest thin client model 5745.

    http://davelargo.blogspot.com/2012/06/thin-clients-groupwise-2012-alfresco.html

  73. Chris Weig says:

    http://www.wired.com/wiredenterprise/2012/07/leap-second-bug-wreaks-havoc-with-java-linux/

    Look at that, Mr. Pogson. Linux stumbles over leap second bug. You want to write an article about that, too?

  74. Chris Weig says:

    That he rebukes the thin client with arguments from the 1990s (which he really doesn’t, except in Mr. Pogson’s reading) is just fitting, as the OS most thin clients are running on (Linux) has also been stuck in the 1990s for more than 20 years now.

  75. Clarence Moon says:

    All most of us need is a good thin client.

    Well, speak for yourself, Mr. Pogson, I am sure that you will!

    Tell your subjects that your thin client can do anything but multimedia, games, and Skype. They will doubtless want one in order to save the oppressive costs of traditional personal computers where laptops can run into the hundreds of dollars, just for starters. Almost everyone will be willing to give up rich displays and media access to save a few hundred bucks. They could go out to dinner every night for a week on that money.

  76. oiaohm says:

    The list of technologies needed to make thin client systems work well has only really recently started coming into its own.

    UKSM (Ultra KSM) http://www.phoronix.com/scan.php?page=news_item&px=MTEzMTI reduces virtual machine memory usage and general application memory usage.

    dma-buf mmap and vmap allow transferring buffers off video cards on the host machine to send to a remote machine.

    http://www.displaylink.com/technology/technology_overview.php Note that DisplayLink can support 10 screens on USB 2.0. With USB 3.0 it could support more.

    So, in locations with 2 users near each other, there is really no need for 2 machines. The only reason there are two machines there now is that standard versions of Windows do not support DisplayLink tech without installing extra software. Neither does Linux yet.

    There is another limitation: of course, it’s not for gamers.

    Multi-seat work in systemd will mean that when you need an extra seat near a Linux machine, you just plug a DisplayLink device in. Magic: you now have a box supporting two users. (A rough sketch of the idea follows below.)

    This is not the thin client solution of old.
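
    As a rough illustration of the multi-seat idea oiaohm describes: with systemd’s logind, a plugged-in USB graphics adapter can be assigned to a new seat via loginctl. This is only a minimal sketch; the sysfs device path and seat name are hypothetical placeholders, not taken from any machine discussed here.

        # Minimal sketch: give a second user their own seat on one box (assumed device path).
        import subprocess

        DEVICE = "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1"  # hypothetical USB display adapter
        SEAT = "seat1"                                            # new seat; seat0 is the built-in one

        # "loginctl attach" tags the device for the named seat, creating the seat if needed,
        # so the display manager can put a separate login screen on it.
        subprocess.run(["loginctl", "attach", SEAT, DEVICE], check=True)

        # Inspect what logind now knows about the new seat.
        subprocess.run(["loginctl", "seat-status", SEAT], check=True)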

  77. Mats Hagglund says:

    Perhaps this is a generational issue and a north vs. south issue. I bought my first PC in 1991 and used the Internet for the first time in 1994. To my generation, a so-called “computer” is the same as a desktop (or a laptop, which I have never liked much).

    However, the younger generation will see things differently. My son is using a Samsung Galaxy SII and now very seldom uses the family PC. I’ve thought about buying a tablet for my elderly parents because a PC and a mouse are rather difficult for them.

    In developing countries there will be no traditional desktop PC stage. These people, 6 billion of them, will jump over that stage to a society of ARM devices, tablets, smartphones, etc. It’s a solar-panel-and-ARM-device society. There’s no chance for the old Western desktop PC.

    These false ideas are based on too Western a perspective. The traditional desktop won’t totally die in the West, but it will be a marginal device. In developing countries the traditional desktop has no chance at all.
