Acer Enters Thin Client Market

Thin clients are nothing new, but Acer producing them is. Another interesting point is that some of these are ARM-based ("ARMed"), so we could be seeing even more widespread adoption of */Linux on both ARM and x86. DevonIT's deTOS OS is one of the options. Acer is one of the top PC OEMs on Earth. They know people love small, cheap computers.

“The new Acer Veriton N2110G Series comprise robust x86 thin clients that provide top-rate performance for power users. The Acer Veriton N2620G Series models are compact and flexible rich clients offering a TPM 1.2 compliant design and mainstream performance. The Acer Veriton N2010G Series are ultra compact ARM-based thin clients capable of delivering a rich multimedia experience at a significant value. All three series provide multi-tasking processing power, essential manageability and security tools as well as industrial compliance. Acer’s thin clients will be available through Acer resellers with prices starting at $239.”

See Acer enters thin client market.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

54 Responses to Acer Enters Thin Client Market

  1. oldman says:

    No answer…. Not unexpected.

    Coward.

  2. oldman says:

    “Using FLOSS is not about abusing others’ work. The creators of FLOSS use GPL and other FLOSS licences because they want to share that work. It’s not abuse to accept the offer.”

    That is irrelevant. What is relevant is your contention that I and others are not freedom loving because I FREELY choose to use software that works for me and happens not to be FOSS.

    Quit changing the subject, Pog, and address what I am saying!

    “Of course, M$ and “partners” are the real parasites, charging far above the cost of production for information, software. ”

    Utter baloney. Commercial software vendors (of whom Microsoft and its ISVs are only one part) license products for anyone's use. That license is for a fee and comes with terms. Terms that one is FREE to accept or reject as one sees fit.

    Of course if you reject the terms and are unable to negotiate terms more to your liking, you don’t get to use the product. If you do agree to the terms, you only get to use what you licensed under the terms you agreed to.

    But in either case it is your choice. No one is twisting your arm or mine.

    “The world does not owe the parasite, M$, a living and can make its own software and share it freely.”

    The world does not owe Robert Pogson free software either. In a freedom loving society anyone who creates software has the right to set the terms of use for their software. And if those terms require a fee and/or come with restrictions, that is the right of the creator. You as the potential user have only two options: accept the terms and use the software, or reject the terms and forgo use of the software.

    Period.

    Robert Pogson, you have been blessed by a community of like-minded people with a license for free use of their collective work. But just because you have been blessed with that gift does not mean that you are entitled to all software under the same terms.

    “M$ could make a healthy living selling its licences for $20 instead of $50+ and they could make the world a better place by distributing for $0 Debian GNU/Linux to people who cannot afford M$’s usual fees.”

    If you wish to donate your resources, that is your right. You have ZERO right to insist that others do the same.

    IMHO That is not freedom loving at all.

  3. ch says:

    “Why should anyone have to buy a server and pay for a server OS just to connect 21 machines together?”

    Because a computer serving 20 clients is, uhm, a server? And no, with 21 machines I wouldn’t want to go peer-to-peer, thank you.

  4. ch wrote of connecting machines running that other OS: “No problem: Install Windows Server on one machine and add the 20 clients.”

    Why should anyone have to buy a server and pay for a server OS just to connect 21 machines together? That is asinine and shows a willingness to be a slave to M$.

  5. ch says:

    “M$ could also make much less complex software”

    So they should have stayed with Win9x? After all, it was much less complex than NT (or Linux, for that matter). BTW:
    http://www.joelonsoftware.com/articles/fog0000000020.html

  6. ch says:

    “can you connect 21 machines together to share files in a lab without violating the EULA?”

    No problem: Install Windows Server on one machine and add the 20 clients.

    “Give a copy to a friend?”

    No problem, and then he could try it for 30 days – if Windows wasn’t already on his PC, which is very unlikely.

    “Publish a benchmark of it?”

    Benchmarks are published all the time, and AFAIK only Oracle seems to have a problem with that.

    “Change any of the code?”

    You can’t change the code of the OS you’re using, either, so what’s the point?

  7. oldman ranted and finished with “Only parasites get upset because they can’t use a personal desktop OS as a server class OS.”

    Nope. FLOSS is a cooperative project of the world and we can do what we want with our hardware up to its limitations, not up to some EULA's limitations designed to prop up a monopoly.

    Using FLOSS is not about abusing others’ work. The creators of FLOSS use GPL and other FLOSS licences because they want to share that work. It’s not abuse to accept the offer.

    Of course, M$ and “partners” are the real parasites, charging far above the cost of production for information, software. M$ could make a healthy living selling its licences for $20 instead of $50+ and they could make the world a better place by distributing for $0 Debian GNU/Linux to people who cannot afford M$’s usual fees. M$ could also make much less complex software which would meet the needs of users rather than M$’s self-interest. Instead, they coerce the world into using bloatware full of vulnerabilities and restrictive EULAs, while excluding competitive OS options like GNU/Linux. The world does not owe the parasite, M$, a living and can make its own software and share it freely.

  8. oldman says:

    “Sure you can run applications but can you connect 21 machines together to share files in a lab without violating the EULA?”

    It’s a desktop OS that I am licensing, Pog. What part of this don’t you get?

    “Publish a benchmark of it?”

    Why should I want to?

    “Change any of the code?”

    We have been down this road time and time again, Robert Pogson. I have 10x the experience that you have with writing and maintaining code, and I wouldn’t dream of touching anything as complicated as Windows even if I did have the source code. And trading the proven function and features of closed-source software that already works for me, just so that I can get access to source code when all I want to do is actually USE the software to get my work done, is the ultimate in stupidity.

    But I’ll tell you what, Pog. When you actually decide to learn C or C++, actually sit down and start hacking at that wondrous source code you think is so precious, AND actually begin to get experience in maintaining your custom mods, then IMHO you will have a right to talk about source code access!

    “Nope. EULA forbids stuff freedom loving people take for granted.”

    No Pog, freedom loving people cherish the freedoms outlined in the real four freedoms. Freedom loving people don’t get their knickers in a knot because the terms of a license for the use of SOMEONE ELSE’S PROPERTY don’t allow them to misuse it.

    Freedom loving people exercise their freedom not to agree to the terms of the license and not use the product, and respect the rights of other freedom loving people to exercise THEIR right to agree to the license terms and then get on with the business at hand.

    IMHO that is what freedom loving people do.

    Only parasites think that they can do what they please with someone else’s property. Only parasites get upset because they can’t use a personal desktop OS as a server class OS.

    As I said, a total load of crap!

  9. oldman wrote, “I would love to sit you down in front of my personal desktop, demonstrate what I can do with it, and then have you tell me to my face that you think I hate freedom.”

    Sure you can run applications but can you connect 21 machines together to share files in a lab without violating the EULA? Give a copy to a friend? Publish a benchmark of it? Change any of the code? Nope. EULA forbids stuff freedom loving people take for granted.

  10. oldman says:

    “Fortunately haters of Freedom are just a minority of users…”

    Haters of freedom?

    What a total unwashed load of crap!!! I wish to be free to use what works for me. I don’t give a rat’s patootie what you think freedom is. IMHO you would not know what true freedom of IT is!

    I would love to sit you down in front of my personal desktop, demonstrate what I can do with it, and then have you tell me to my face that you think I hate freedom.

    Jerk!

  11. iLia wrote, “do not expect us to migrate to Linux”.

    Fortunately haters of Freedom are just a minority of users of that other OS and millions have migrated to GNU/Linux and millions more will. We also have millions who have not used any OS who will sooner or later find GNU/Linux works for them.

  12. iLia says:

    “When it comes to the performance shape of the desktop you simply cannot make everyone happy, iLia. It just happens that what makes Linux users happy makes you Windows users highly unhappy.”

    So have some fun, but do not expect us to migrate to Linux!

  13. oiaohm says:

    iLia
    –Translation: the most popular GNU/Linux is not suitable for desktop use.–

    People like me don’t dispute that for Windows users moving to Linux. Popularity has nothing to do with whether a distribution’s desktop is compatible with a given end user.

    iLia
    –So what? People usually work with the active window and expect it to be as responsive as possible.–
    This is the problem: long-term Linux users don’t believe this idea. It is the dominant idea among Windows users, and OS X users to a point, but more of a so-so idea in the Linux world.

    Ubuntu is created by people who follow the historic idea, so it is only really suitable for Linux users. They are a breakaway group from Debian, and the historical hard-core ideas on system performance still exist in Ubuntu.

    You are applying what you expect to a community of people who don’t believe the same things.

    Long-term Linux users are a different breed, and Ubuntu is a very good choice for a lot of them. For example, I have code currently building in the background while I am surfing the net. I don’t want that background build to be slowed down. It’s boring and I don’t want to look at it, but I still want it to complete sooner rather than later.

    Basically, make sure the people advising you on what distribution to use consider the fact that you are a Windows user, and are not recommending stuff meant for long-term Linux users who need background-task performance, which makes your foreground performance suck.

    iLia
    –Please give at least one reason why your doctor should learn about ulatencyd instead of reading some medical journal?–
    If ulatencyd is in a Linux system with systemd, it automatically balances the system like Windows does.

    Really, something like Ubuntu should not have been recommended to Windows users in the first place, or at least they should have been informed that it is not going to perform like Windows: it runs background tasks better but the foreground worse.

    If you had been told this truth, iLia, you would have avoided Ubuntu, would you not? The reason I laugh is that people like you always say that because Ubuntu performs a certain way, all of Linux does; in reality it does not.

    iLia, it is natural for humans to laugh at some people. You know the ones walking down the street dressed in sports gear, yet you can tell they are not sporting people in any way, shape or form. The excuse is “I am wearing it because it is currently popular”, not because it is suitable for the job.

    iLia
    –but these distributions have their own problems, so not a big difference.–
    Not a big difference, right. Just because it turns out that the point you pulled is limited to a particular group of distributions, there have to be other problems.

    Really, instead of saying Linux sucks, get on reviewers’ backs to review Linux distributions from a Windows user’s point of view, so you can find the ones that are more friendly to you and see how hard they are to set up to be friendly.

    Also, I have to laugh at you. Your definition of what desktop usage is, is so funny compared to real Linux users’ definition. The Ubuntu desktop is suitable for particular people; you are not one of them, iLia.

    It is just like a person like me being driven nuts on Windows because background processes are slow. The wrong desktop performance shape upsets the user; that is the way it is. I am upset by Windows because I like the Linux default shape even without ulatencyd. ulatencyd still makes it better for me, of course, since I can now choose to give selected background processes equal time with the foreground instead of all of them. In fact I have to tweak ulatencyd to be happy; I cannot fully tweak Windows to be happy.

    Debian, Red Hat and SUSE can all do both modes, and there are others. Now that you know the issue, maybe we will get a distribution truly optimised for new Windows users, if enough Windows users gang up.

    When it comes to the performance shape of the desktop you simply cannot make everyone happy, iLia. It just happens that what makes Linux users happy makes you Windows users highly unhappy.

    This is something TM and others get wrong. The Linux community is different; there are different values and beliefs. A lot of the conflict between Windows users and Linux users is about beliefs. What Windows users believe is basically laughable to Linux users, and some of what Linux users believe is laughable to Windows users.

    So we have a nice failure to understand, leading to stuff you just have to laugh at or go nuts over.

    Ubuntu is the classic case. Long-term Linux users say “it is suitable for me”. Right. First question: do you believe the same things about how a Linux desktop should operate as a long-term user does?

    More often than not the answer is no.

    Debian, which Robert Pogson uses, falls into the general camp, but it might take some tweaking to fit you right.

    As a Windows user you don’t want tweaking. The problem is I don’t go shopping for distributions to fit a Windows user.

    So all you have really been saying, iLia, is that you are incompatible with Ubuntu, because you don’t believe and expect the right things to be compatible with it.

    I might not know the distribution you are looking for, but I do know the software packages it must contain to make you happy on performance: systemd and ulatencyd. The issue is that Ubuntu contains neither package, so it cannot be converted to be friendly to Windows users.

  14. iLia says:

    Fabulous!

    “Ubuntu’s defaults are designed more for a server running something like 100 desktop users, so something big. So, predictably, it sucks on small hardware.”

    Translation: the most popular GNU/Linux is not suitable for desktop use.

    “iLia, if you don’t want to be laughed at by those of us who are experienced, don’t use Ubuntu on small hardware.”

    OK, very common situation in the Linux world: laugh at the common loser users.

    “Yes, the window in front of you is sluggish compared to Windows, but the windows you have in the background are going like a bat out of hell compared to Windows.”

    So what? People usually work with the active window and expect it to be as responsive as possible.

    “there are other distributions that are way better”

    but these distributions have their own problems, so not a big difference.

    “iLia, this is the problem with people like you not knowing why. Since you don’t know why, you don’t know that for that hardware you want something with systemd and ulatencyd, or else you put up with the fact that all tasks will get equal CPU time.”

    People like me? You mean people who don’t want to spend years learning Linux internals in order to find a good distribution? We are the majority; there are plenty of very smart people who have no idea how an operating system works, but who need it to perform their tasks. And why should doctors, engineers and artists care about it? Please give at least one reason why your doctor should learn about ulatencyd instead of reading some medical journal?

  15. oiaohm says:

    iLia, so far you have not said which Sound Blaster it is.

    It could be a pre-plug-and-play card and you may have missed a setup step.

    Really, I love ulatencyd. A lot of the performance problems with LXDE come from the Linux kernel being too fair with CPU access; ulatencyd addresses that issue. The problem is that to use ulatencyd you need a distribution that supports systemd, and that is not Ubuntu.

    People who complain about poor Linux performance and mention Ubuntu cannot work out why I walk away laughing. Ubuntu’s defaults are designed more for a server running something like 100 desktop users, so something big. So, predictably, it sucks on small hardware.

    So your issue is an Ubuntu fault, iLia. Some of the fault also comes from the prototype AppArmor crap that lands in the Ubuntu kernel.

    iLia, if you don’t want to be laughed at by those of us who are experienced, don’t use Ubuntu on small hardware; there are other distributions that are way better.

    ulatencyd implements something MS does: active-window bias. If you load up two CPU benchmark programs on Windows, one in the active window and one not, the active window gets about three times the CPU time, so it performs way faster. Windows slows down background tasks automatically.

    On Ubuntu-like OSs, non-active tasks get equal access to CPU time, leading to sucky foreground performance. So a benchmark doing the same thing gives both windows the same result.

    So the problem you have is that your distribution is crap for what you are attempting to use it on. Ubuntu is well marketed but it kinda lacks a key feature for a brilliant desktop on old hardware.

    iLia, this is the problem with people like you not knowing why. Since you don’t know why, you don’t know that for that hardware you want something with systemd and ulatencyd, or else you put up with the fact that all tasks will get equal CPU time. Yes, the window in front of you is sluggish compared to Windows, but the windows you have in the background are going like a bat out of hell compared to Windows. With the shortfall of CPU time you don’t notice that the stuff in the background is completing faster on Linux than on Windows.

    Yes, everyone thinks they want all tasks getting equal CPU time until they think deeper. You want what you are currently using to get more CPU time, and the window management never to get starved, in most cases. But you would also like particular background tasks not to be starved. ulatencyd pulls that off (a crude sketch of the idea is at the end of this comment).

    Historically, Linux process management is configured for servers.

    Horses for courses.

    People who have used Linux for a long time get used to flicking between windows. This causes their frustration under Windows: when they start something in the background they expect it to progress at a decent rate, but because Windows automatically slows down background tasks, it does not. So they come to believe Windows is not responsive.

    This is the stupid reality: Linux users think Windows is unresponsive when they sit in front of it, and Windows users think Linux is unresponsive when they sit in front of it. The cause is all in how they are used to CPU time being allocated to processes, so each is looking at different things when judging responsiveness.

    Yes, the Windows vs Linux responsiveness cat fight, and most of the time neither user wakes up to the fact that they are using different metrics.
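
    For what it is worth, here is a crude sketch of the foreground/background idea in Python. This is not ulatencyd itself (ulatencyd uses cgroups and rules to do this automatically); it just hand-demotes a long-running background job with a POSIX nice value so the interactive foreground stays snappy. The “make -j4” build is only a stand-in for whatever background task you have.

        # Crude illustration of the "foreground bias" idea discussed above.
        # Not ulatencyd: it simply raises the nice value of a background job
        # so anything interactive gets the CPU first. Linux/Unix, Python 3.3+.
        import os
        import subprocess

        # Stand-in background job: a compile we do not need to watch.
        background = subprocess.Popen(["make", "-j4"])

        # Nice 10: the scheduler favours lower-niced (foreground) processes
        # whenever they want CPU time.
        os.setpriority(os.PRIO_PROCESS, background.pid, 10)

        print("build running at nice 10 in PID", background.pid)
        background.wait()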

  16. iLia says:

    “My parents’ laptop, my old Dell Inspiron, is still running XP with 1GB of RAM and it’s totally responsive.”

    So it is!

    “XP under automatic updates has evolved from a lightweight OS that would run in 64MB to a hog that thrashes in 512MB.”

    I used XP with 256MB for more than 7 years, and you know what? It took more than 40 tabs open in Opera to make Opera unresponsive, but Windows worked well: just minimize Opera’s window, and Windows writes the memory used by Opera to swap and frees space for other applications. Now that I have 1GB it is not a problem anymore.

    Yes, you can use a Celeron 1700, 1GB and Windows XP in 2012!

    And as for Ubuntu, I can tell you that Unity is absolutely useless on my Celeron 1700, 1GB box. LXDE still works slower with 1GB than Windows XP’s explorer.exe does with 256MB, and on my Linux the sound is broken; my Sound Blaster sound card simply stopped working under Linux.

  17. oiaohm wrote, “Really, you don’t need a super-fancy computer to word process or read a PowerPoint presentation or do lots of other things.”

    Amen. One of the most eye-opening things I do to stimulate the imaginations of students is to take a really old PC from storage and test the throughput of the CPU in some purely number-crunching task. They are amazed. Then we compare specs with the terminal server or some new PC and look at how “busy” the idle-loop is. The mind boggles at how little the usual COTS PC is actually used. A terminal server plus cheap thin clients is really smart in most cases. Lower costs, higher performance, better storage, management and upgrading… It’s all good stuff.

  18. oiaohm says:

    –I think that should be Linux Terminal Server Project (LTSP).–
    It should have been; I still type one of the early names for the project from time to time. I know I should not.

    Working video and anything close to 3D gaming is where you need the 10Gb/s clients.

    LTSP clients doing some interesting tricks on 1Gb/s networking can get very close to the same result.

    Hardware acceleration of video and audio decode could be a godsend to thin clients.

    Remove gaming, video and major audio work, and LTSP with everything running on the server works at 100Mb/s.

    At 1Gb/s audio work is possible; at 10Gb/s everything is possible.

    For classes doing major video or audio work you can set up specialist areas.

    This is very much a round-peg, square-hole question. For usages where thin clients suit, the cost savings and results are highly impressive. Where they don’t suit, they are the wrong thing.

    A lot of the bad-performance claims are bogus; some cases are simply using them the wrong way.

    Let’s say I would have only 20 seats with a normal network. Thin clients could allow a place to keep 18 of those seats and bring something like 82 seats out of the graveyard. OK, the 82 seats are not as functional, but they are 82 seats where people who don’t need the extra capability of the 18 full seats can do their work.

    More seats means more chance that people can all be working at once.

    It’s like the 80/20 splits that are possible between Microsoft Windows and Linux. If this results in more staff being able to get the job done, the result is positive.

    Schools normally try to avoid having lots of complex games played. If you want to forget something, studies show Tetris is great. A lot of games work against the learning process. Most of the games that are positive for learning don’t require rapid screen updates, so they work perfectly. Fancy animation might draw a student in, but fancy animation can also cause the student to forget what they are learning.

    Really, you don’t need a super-fancy computer to word process or read a PowerPoint presentation or do lots of other things.

  19. Phenom says:

    “XP under automatic updates has evolved from a lightweight OS that would run in 64MB to a hog that thrashes in 512MB.”

    It is statements like this that demonstrate your complete lack of skills and experience, Mr. Pogson. Have you ever tried XP on 64MB of RAM? That was the official minimum requirement, but anything less than 192MB made it difficult to carry out any intensive task.
    A common Firefox browsing session these days takes a working set of hundreds of MB, so please spare us the nonsense about NT being lightweight. But of course, it is the OS’s fault that a browser can consume so much memory simply for rendering web sites.

  20. TM wrote, quoting “XP was swapping with 1gB RAM just normal browsing/word-processing”:

    “That’s total BS and you know it. This is generally what anyone who still has an XP machine is using it for. My parents’ laptop, my old Dell Inspiron, is still running XP with 1GB of RAM and it’s totally responsive.”

    Nonsense. I have seen a lot of XP used in schools and some of those machines take 2 minutes to get a usable desktop with all RAM in use and swapping. XP under automatic updates has evolved from a lightweight OS that would run in 64MB to a hog that thrashes in 512MB. It still swaps in 1gB because it puts all RAM to use. That’s not a bad choice but it does reduce the snappiness of the system depending on the load on the hard drive. My Beast currently has 448 MB free even though it has some stale stuff in swap.

  21. TM wrote, “You’d have to go to 64-bit and that would quadruple the memory footprint, killing all the overhead you thought you’d have.”

    Where did you learn maths? Going 64-bit does fluff up software a bit, but not all data is 64-bit; bytes still exist in a 64-bit system. 24 × 1gB is a lot more demand for RAM than 1 × 1gB for the OS plus 100MB/user. I have run systems with 50MB/user and survived with a little swapping (the arithmetic is roughed out below). Really, users don’t have a lot of data, and that is their major cost for RAM on a GNU/Linux terminal server. On a UNIX-type system with shared memory you only need one copy of the executables for all users. It’s not the best for security but it is the most efficient system you can have. The point is the world is full of working PCs that can serve quite nicely as thin clients. IBM often converts whole businesses to thin clients that way, so that users have the feel of their old PCs with the new software.
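
    A back-of-the-envelope version of that arithmetic, using only the figures quoted in this thread (1 gB per thick client versus a 1 gB base plus 50 or 100 MB per user on the terminal server); these are talking-point numbers, not measurements:

        # RAM comparison for a 24-seat lab, using the figures quoted above.
        SEATS = 24

        thick_total = SEATS * 1024              # MB: 1 gB in every thick client

        for per_user in (50, 100):              # MB per user, as quoted
            thin_total = 1024 + SEATS * per_user    # shared OS image + user data
            print(f"terminal server @ {per_user} MB/user: {thin_total} MB "
                  f"vs {thick_total} MB spread across {SEATS} thick clients")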

  22. ch wrote, “we are talking about rural schools’ shoestring-budget infrastructure of years ago.”

    Yep. The cabling I have seen in schools has been frightening. Several places even skipped proper termination and had coupling jacks just lying on the floor and unlabeled/no-name cabling. Around 2000 in northern Canada, decent LANs began to appear, but that doesn’t mean schools made good use of them. Where I last worked they were still running 10mbits/s from hubs… The ISP donated 24-port 100baseT switches when I contacted them about our infrastructure. They used our switches to supply the whole community, so it helped them do their job, too. Essentially the school was using the LAN for Internet connection only. Each PC had an IP address from the ISP… without a router even… but compare that with a few years earlier, with nothing but isolated thick clients, and IT has come a long way in a decade. The first LAN I ever saw in a school was in 2000 and we made it ourselves, sending a volunteer into the crawl-space. He was a spelunker and survived the experience.

  23. oiaohm wrote, ” Linux Thin Server Project”.

    I think that should be Linux Terminal Server Project (LTSP).

    On network speeds, I can say that 100mbits/s is perfectly adequate for text and images in a lab of 30 users. I have been doing that since 2004 with LTSP and plain X. OTOH, five users in that lab doing full-screen video is a killer. 200 mbits/s, say by bonding two 100mbits/s NICs on the server, helps a lot. Gigabit/s is fairly standard on servers and permits 50+ users on a single gigabit/s NIC if they are not doing video. There are many applications of PCs that don’t require video. In a lab, if a teacher wants to present video, it is much more effective in many cases to use a projector from the teacher’s computer. That does not work for students making their own videos or multiple videos being watched in one room, but usually a teacher has “a lesson” which may not involve multiple simultaneous videos. It’s a matter of managing resources. If LTSP works for an organisation it is nearly always the lowest cost of computing you can get: one or more busy servers and cheap, small thin clients idling.

    In a typical computer lab, I like the 24+2 unmanaged switches. Two gigabit/s ports allow some flexibility with servers, and 100 mbits/s is usually adequate for what any individual is doing. For a whole school, a gigabit/s backbone costs $100+ and spreads that joy all over the building. I have had some success running gigabit/s over ancient Cat-5 cabling in schools. Copper is a wonderful resource, well worth the investment for stationary clients.

  24. oiaohm says:

    ch, even 100Mb/s is mostly responsive, other than when people are doing things that cause a full redraw very often.

    It’s also likely that Robert would be using a lower resolution than what I described.

    1024×768 requires a lot less bandwidth: 18,874,368 bits per full 24-bit frame, or about 5 frames per second on a 100Mb/s connection if you cannot compress. This is approaching the human tolerance limit.

    You would not be doing 1280×1024 on a 100Mb/s network: 31,457,280 bits per frame is basically 3 frames per second raw, which is way too slow, on the wrong side of the human tolerance limit.

    800×600 is even better: 11,520,000 bits for a full uncompressed frame. Notice something: the required speed does not change that much. (The arithmetic is sketched at the end of this comment.)

    ch, the ones that are unworkable for lots of things today are the 10Mb/s thin clients; I have used X11 that way. 100Mb/s is where thin starts working, with a few limitations like no video playback. 1Gb/s is where it gets decent with current-day screen resolutions. 10Gb/s is where it gets perfect.

    ch, each network speed level of thin client has different limitations on what it can and cannot do.

    Now, the Linux Thin Server Project allows running a percentage of applications on the local machine using a process called diskless remote boot, allowing video playback and other things that the terminal server cannot stream to be processed client-side.

    On a 100Mb/s network you need more client-side processing than with a 1Gb/s-network thin client, and a 1Gb/s network needs more than a 10Gb/s network. 10Gb/s means zero client-side processing.

    The ballpark has changed as Linux has evolved.

    ch, this is the problem: what Robert describes is workable using LTSP with the option of running a percentage of applications, like Firefox, on the local machine because it will be displaying videos in Flash. This still does not require maintenance on the machine, as the machine does not have a hard drive.

    Of course this means that on a 100Mb/s network you normally cannot move all memory to the server. On 1Gb/s networks you can possibly, in most cases, get away with moving almost all memory to the server. With 10Gb/s networks to the thin clients you can basically strip those machines of memory 100 percent.

    Basically there is a balance between thick and thin. As tech has improved, the balance has moved in thin clients’ favour. It’s all about bandwidth: can you land the screen or not.

    http://drbl.sourceforge.net/
    There is more than one way under Linux to skin the maintenance cat. DRBL is friendlier on your 100Mb/s networks.

    The improvement tends to get glossed over.

    ch, consider this: even with slight performance problems, a 100Mb/s network lets a very small computer provide what you need, if what you are running are compression-friendly applications. A 166MHz Pentium with 128MB of RAM will do the job. Basically a bit of complete crap, worthless for running anything modern. So this frees up the more modern machines to be used for what they are good for. It’s all about seats.

    For perfect you do need a CPU with a few GHz of processing power to handle the 10Gb/s connection, but on RAM size, still bugger all: less than 128MB is still decent.

    That is the very warped thing about thin clients: you need a faster CPU and network port to get to perfect, but the RAM requirement in the thin client has not changed since the stone age. Most likely in 50 years thin clients will still only require a maximum of 128MB of RAM.
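
    The frame arithmetic above, worked out explicitly; it assumes uncompressed frames at 24 bits per pixel (which is what the quoted figures imply) and ignores protocol overhead and compression:

        # Raw frame sizes and the frame rates each link speed allows,
        # assuming 24 bits per pixel and no compression or overhead.
        resolutions = [(800, 600), (1024, 768), (1280, 1024)]
        links_mbps = [10, 100, 1000, 10000]

        for w, h in resolutions:
            bits_per_frame = w * h * 24
            rates = ", ".join(
                f"{mbps} Mb/s: {mbps * 1_000_000 / bits_per_frame:.1f} fps"
                for mbps in links_mbps)
            print(f"{w}x{h}: {bits_per_frame:,} bits/frame -> {rates}")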

  25. ch says:

    “Even so, a server with a few 10Gb network cards and 1Gb/s-connected thin clients gives a very responsive desktop.”

    Please remember that we are talking about rural schools’ shoestring-budget infrastructure of years ago.

  26. oiaohm says:

    TM Repository, interestingly enough, even enabling real-time scanning under Linux does not affect performance as badly as it does under Windows.

    Also, the Linux world is highly intolerant of rootkit-like behaviour of any form, so Adobe Acrobat under Linux does not start background services, where the Windows one does.

    “Also, a 32-bit environment wouldn’t support 24GB of RAM.”
    I should expect these kinds of lies from people from TM Repository. They say they are against spreading lies, and they are the best lie-spreaders going. They need to audit themselves.

    A 32-bit Linux environment supports up to 64GB of RAM with Physical Address Extension (PAE). 24GB of RAM is well inside the operational limits of a 32-bit Linux. The 64-bit Linux memory limit is insane:
    depending a bit on the hardware, the maximum memory for Linux in 64-bit mode is either 1024GB or 8,589,934,592GB; these are hardware limits. Windows in 64-bit mode does 128GB, so about 1/10 of what the hardware can do.

    Windows XP/2000 and 2003 32-bit can have PAE mode enabled like Linux. The problem is most Windows drivers are not PAE-aware and bite the big one when you have more than 4GB of RAM, giving a blue screen of death. In Vista and later Microsoft enabled a software hard lock, disabling what your hardware can do.

    TM Repository
    –No you wouldn’t, you’d just wind up sucking up all the network bandwidth and the screen performance would be considerably worse.–

    How much bandwidth do you think it takes to send 1280×1024 at 60 frames per second?

    1,887,436,800 bits per second, and that is if you get no compression at all, basically if every frame were a complete new frame. Now if you halve that to 30 frames per second it simply fits in a 1Gb/s cable basically raw.

    When you do compress, you can get away with a 100Mb/s connection doing 1280×1024 without any major performance effect.

    Even so, a server with a few 10Gb network cards and 1Gb/s-connected thin clients gives a very responsive desktop.

    –Not to mention resource allocation becomes problematic since anyone running a heavy task will bottleneck the rest of the users in that situation.–

    This is what cgroups are for: to make sure no user can be greedy (a minimal sketch of the mechanism is at the end of this comment).

    There are other performance-control options as well.

    Really, today with systemd, where each user is automatically placed in their own cgroup and resource access is cgroup-controlled, the problem of one greedy user causing everyone else to miss out is basically gone.

    Most of the arguments against thin clients have been addressed because network links are now so big. The second reason is that resource management in Linux and hypervisors is getting really, really good.

    10Gb/s is the magic line in the sand. 1Gb/s clients, if not gaming, are more than good enough for a lot of operations. The reason: a full-screen redraw does not need to happen every frame. Humans are slow.

    A lot of people still think back to the times of the 10Mb/s thin clients. Bandwidth is required. 100Mb/s is where it starts becoming functional for a thin client, though for a full-screen redraw 100Mb/s is a problem. 1Gb/s to a thin client will not always be used, but it gives headroom so the thin client stays responsive.

    A lot of people have had bad experiences with thin clients, and most of that is down to too small a network connection to deal with thin-client events. Yes, 90 percent of the time with normal usage, with just the altered section of the screen updated, a 100Mb/s connection is fast enough. It’s the spike loads where the problems come from.

    Raw streaming becomes possible with 10Gb/s lines, and not just one screen either.
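
    A minimal hand-rolled sketch of the cgroup mechanism described above. systemd does the equivalent automatically per login session; this only shows the knobs. It assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup with the cpu controller enabled, and root privileges; the group name is made up for illustration.

        # Hand-rolled cgroup v2 sketch: set one user's relative CPU share.
        # systemd does this per-session automatically; shown here only to
        # illustrate the mechanism. Requires root and a cgroup v2 mount.
        import os
        from pathlib import Path

        CGROUP = Path("/sys/fs/cgroup/demo-user")   # illustrative name

        CGROUP.mkdir(exist_ok=True)
        # cpu.weight (1-10000, default 100): relative CPU share under contention.
        (CGROUP / "cpu.weight").write_text("100\n")
        # Move the current process (and its future children) into the group.
        (CGROUP / "cgroup.procs").write_text(f"{os.getpid()}\n")

        print("confined to", CGROUP, "with cpu.weight=100")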

  27. “By putting all that RAM on a terminal server and avoiding running that other OS and 24 copies of software in it, there would have been plenty for excellent performance.”

    Also, a 32-bit environment wouldn’t support 24GB of RAM. You’d have to go to 64-bit and that would quadruple the memory footprint, killing all the overhead you thought you’d have.

  28. Besides, if you’re such a guru, why didn’t you simply lean out the number of processes being run on said XP machines? Don’t let the Firefox agent run in the background, don’t install Acrobat, use MSE instead of Norton, etc.

    You’d do it if it was a Linux machine. They can be made to run slowly with improper configuration too.

  29. “XP was swapping with 1gB RAM just normal browsing/word-processing.”

    That’s total BS and you know it. This is generally what anyone who still has an XP machine is using it for. My parents’ laptop, my old Dell Inspiron, is still running XP with 1GB of RAM and it’s totally responsive. Ubuntu requires more resources than XP does in the same environment.

    If you’d said working with Photoshop or doing video editing, I might have believed you.

    “By putting all that RAM on a terminal server and avoiding running that other OS and 24 copies of software in it, there would have been plenty for excellent performance.”

    No you wouldn’t, you’d just wind up sucking up all the network bandwidth and the screen performance would be considerably worse. Not only is this going to be a worse user experience, you’re also going to bring your network to its knees in some vain attempt at performance pooling. Not to mention resource allocation becomes problematic since anyone running a heavy task will bottleneck the rest of the users in that situation. So instead of one computer running slowly, they all run slowly.

    And 24 copies? Sounds like someone doesn’t understand how a shared installation works.

  30. oiaohm says:

    Ted
    –How did you manage to use Office 2007 on TS, considering the OEM and retail versions of Office 2007 do not work on a Terminal Server?–
    It’s a funny one. One update causes it to work; a later update corrects that, so it fails again. Apply one update and not the other, and it works. But the terms of the EULA don’t forbid running it on TS, and in fact, due to the wording, you could by law count the TS machine as one machine.

    The old saying knowledge is power.

    Ted
    –You _could_ get away with it with one copy of Office with 2003, but MS would stamp you _flat_ if they found out.–
    Read the 2003 OEM EULA carefully. You find out MS doesn’t have a clause to stamp you flat. In the volume-license version of the 2003 EULA they do have a clause to stamp you flat. MS enforcement on that one amounts to “go read the license”. 2003 OEM is licensed per installed machine, not per number of users or number of remote accesses.

    Those are the slight differences between volume licensing and OEM.

    2003 OEM was lazy in the EULA.
    Let’s just put it this way: with 2007 they got lazy with the EULA and then allowed it while bug fixing.
    With 2010 they have eventually got the EULA bulletproof. I suspect it has now gone the other way: users will now be breaking the EULA doing basic things.

    Ted, all training from Microsoft said it was not allowed. The EULA does not agree with what Microsoft was teaching for 2003 OEM and 2007 OEM. It’s the EULA that is the bible on what you are and are not allowed to do. A software lock to prevent something only works while it is not broken. 2007 OEM was only software-locked from running on TS, not license-locked; it became unlocked when they borked the lock.

    2010 specifically mentions pooling and other methods of reducing the number of users.

  31. Ted wrote, “24GB RAM? And swapping constantly? At least try to be believable.”

    1 gB per PC was not enough for XP to avoid swapping.

  32. TM wrote, “What task was causing this to swap? “

    XP was swapping with 1gB RAM just normal browsing/word-processing. By putting all that RAM on a terminal server and avoiding running that other OS and 24 copies of software in it, there would have been plenty for excellent performance.

  33. “I was in one school where I looked around the lab at a super computer with 24gB of RAM mostly wasted struggling to keep that other OS from swapping.”

    What task was causing this to swap? I do 3D rendering which pegs all my computer’s resources (by design) and I haven’t been able to get my 16GB of RAM to swap.

    This is such an exceptional situation, I’m sure you can remember exactly what process was causing this to happen. It isn’t like you just made this up or anything.

  34. Ted says:

    @Pogson

    “I was in one school where I looked around the lab at a super computer with 24gB of RAM mostly wasted struggling to keep that other OS from swapping. ”

    Mr Pogson, there’s anecdotes, and then there’s stories. 24GB RAM? And swapping constantly? At least try to be believable.

    This tale could not have been any more fictional even if you prefixed it with “Once upon a time…”

    @oiaohm;

    “With 2007 and 2003 it was permissible to get away with paying less by using OEM; a bug in the 2003-2007 licenses meant they did not cover remote access properly. In Australia we are allowed to just buy OEM licenses.”

    “Phenom, by the way, MS spews some garbage that MS Office 2007 OEM does not have a license for Network Storage and Use rights, so it cannot be used on a thin-client terminal server. In reality the MS Office 2007 OEM EULA does not state it either way.”

    How did you manage to use Office 2007 on TS, considering the OEM and retail versions of Office 2007 do not work on a Terminal Server?

    http://support.microsoft.com/kb/924622/en-us

    And with 2003, you’d have been in violation of the EULA to use one Office license to serve multiple clients. To my knowledge, you have *never* been allowed to use OEM or single boxed Office licenses to run multiple users on a Terminal Server, unless you bought multiple OEM licenses with a server (unlikely, and stupid) or lots of boxes (more expensive than a Volume License.) You _could_ get away with it with one copy of Office with 2003, but MS would stamp you _flat_ if they found out.

  35. oiaohm says:

    ch, I know a few companies who have paid Novell for LibreOffice, as part of outsourcing support.

    Phenom, by the way, MS spews some garbage that MS Office 2007 OEM does not have a license for Network Storage and Use rights, so it cannot be used on a thin-client terminal server. In reality the MS Office 2007 OEM EULA does not state it either way. The result under Australian law is that you have those rights. I think most other countries would be the same.

    I see the MS Office 2010 license has corrected those defects, but it has added a defect resulting in people committing offences without knowing it.

    Phenom, most of the time I do check the finer points of licenses. Sometimes I am caught out. Heck, if the rules of MS licenses did not change so much, I would not get caught out.

  36. ch says:

    “Novell has no problem selling Libreoffice.”

    And how many do they actually sell? Do you know anyone who has paid for his LO/OOo?

  37. oiaohm says:

    Phenom
    “You want me to believe you do sell support on LO when you didn’t know how licensing for MS Office works?”

    Over time MS has changed points of their licensing.

    It is interesting to read the 2010 licensing in full. The terms are truly nasty. I suspect most people have not read them fully.

    Phenom, do I have to handle MS licenses all the time? No, I don’t. Of course, before a full deployment I would have rechecked the licenses I am using. Yes, I presumed something that was valid for Office 2007 and earlier under OEM licenses, not Volume.

    The change in 2010 clearly states only one running instance, where the 2007 OEM version does not in fact state this. There was a reason not to: the switch-between-users feature in Windows XP, Vista and 7. The result could be two copies of Office from the same install location running at the same time.

    Welcome to an offence under the new MS Office 2010 license. This is why I did not think they would have changed it this way. More users are now using MS Office illegally from time to time and would not even know it.

    Phenom, really, I think you had better go read the license yourself and find out that what I said is real.

  38. Phenom says:

    You want me to believe you do sell support on LO when you didn’t know how licensing for MS Office works?

    Go back to your toys, little one.

  39. oiaohm says:

    Phenom
    “How can you sell something which is free? Ohio, you sink deeper and deeper, man.”
    Selling solution support services, of course. I do sell solutions, Phenom.

    So for each solution I need sales points for why each piece is included.

  40. Phenom says:

    “Thanks Phenom I now have another sales point for Libreoffice.”

    How can you sell something which is free? Ohio, you sink deeper and deeper, man.

  41. oiaohm says:

    Phenom, OK, I see the updated EULA for 2010. With 2007 and 2003 it was permissible to get away with paying less by using OEM; a bug in the 2003-2007 licenses meant they did not cover remote access properly. In Australia we are allowed to just buy OEM licenses.

    Now this explains the massive interest in making LibreOffice work right.

    Phenom, I missed the change mostly because the company is skipping MS Office 2010.

    I also notice 2010 forbids pooling.

    I really wonder now how many home users break the rules. MS Office 2010 is kinda insane: basically, if you switch users on the same machine and run two copies of MS Office, one in each user’s session, you are committing an offence.

    Thanks Phenom I now have another sales point for Libreoffice.

  42. Phenom says:

    “So it’s only one instance cost of MS Office per terminal server.”
    Ha-ha-ha!

    Ohio, and you dare call yourself a Microsoft VAR? My boy, you can only be a dingo’s balls VAR.

  43. oiaohm says:

    Wayland is not as bad as it sounds. X11 to VNC and so on is a real bugger on the server and cannot do 3D effectively.

    Zero-client systems are more Wayland-compatible than X11. Why? Because a zero client is only a framebuffer device: no X11, RDP or any other protocol handling. So the poor server has to handle everything.

    When you start using zero clients, X11 becomes a complete prick because all its load is now focused on the server. Even so, a Linux box running zero clients runs more users than a Windows box running RDP. Also, if you are using clients that are VNC- or RDP-only, then yes, X11 likewise becomes a prick.

    The Wayland compositor does not forbid having a remote protocol; this is planned for down the track. The good part is that most Wayland compositors can sit on top of the X11 protocol if they have to for now.

    The end result is better compatibility with zero clients, VNC clients, RDP clients and SPICE clients. The network cross-linking will have to be redone. Security flaws of X11 will be fixed, because a Wayland application cannot see other applications by default unless given special permission, so you don’t have to use the SELinux fix on X11 to keep things secure any more.

    The ride is going to get a little rough around the changeover, before all the key features of X11 are replaced. But at least the changeover will allow X11 to stay in place until that is done.

    The cheap zero clients basically don’t have an xserver. Hardware has changed and Linux has to change with it. Zero clients have the big advantage of not needing configuration: they find their own way to the server.

    The problem with an xserver on the client as well is that you cannot do OpenGL without doing a framebuffer on the server anyhow. X11 was good while everything was just 2D graphics; today people want 3D graphics as well. So we have to change to RDP/VNC-style protocols where framebuffers are sent around.

    We most likely need an extra framebuffer network protocol that supports getting framebuffers from many servers and gluing them into a unified interface.

  44. oiaohm wrote, “X11 sucks on memory usage on the server”.

    It surprised me, too, when I figured X11 out, but you don’t need an xserver on the terminal server. Each thin client has a minimal xserver for its video card and the server can be headless. Each user application connects to the xserver on its user’s client… To the application, it’s a remote display. All the terminal server needs are the GTK libraries etc. to connect to an X11 display somewhere; it does not have to be on the terminal server. One can have an xserver on the terminal server to allow one or more users to have displays plugged in directly, but it is not a requirement. (A tiny illustration follows at the end of this comment.)

    The way Largo/Richards does it, the user from his thin client logs in on some terminal server and starts a session. The actual applications can be on other application servers, each optimized and caching files needed by the application. This gives maximum flexibility and frees the most RAM for user-data, hence permitting a lot more users. My first terminal server needed only 50MB per user and I did not realize this at all but I only had one server, so it did not matter. If you have a cluster of servers, one can have them specialize on one or a few applications. It’s an incredibly flexible and powerful tool, that X. That’s why I am so concerned that Canonical is messing with it. Those who use terminal servers and application servers need some replacement or the new Ubuntu will be a step backwards.
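
    To make “application on the server, display on the client” concrete, here is a tiny sketch. The client hostname is made up, and it assumes the client’s X server will accept the connection; in practice LTSP or “ssh -X” sets DISPLAY and handles the authentication plumbing for you.

        # The app runs on the terminal server but is told, via DISPLAY, to
        # draw on the X server running on the thin client. Hostname is
        # hypothetical; LTSP or ssh -X normally sets this up for you.
        import os
        import subprocess

        env = dict(os.environ, DISPLAY="client42:0")   # X server on the client
        subprocess.Popen(["xclock"], env=env)          # runs here, draws there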

  45. oiaohm says:

    Robert Pogson, also don’t forget Linux has features like KSM (kernel samepage merging) that are able to deduplicate memory. Windows does not.

    On a Windows terminal server you need more RAM to handle the same number of users than on a Linux terminal server. It’s a technical difference, even though X11 sucks on memory usage on the server.

    The reality is that once you go thin, Linux systems come into their own, since you are now doing what Linux is designed to do.

    Of course, I am not saying the Linux desktop will not get better. The Wayland changes, with framebuffer-over-network thin clients, will bring effectiveness improvements: 3D on thin clients and reduced memory usage per client on a Linux server. So the number of users a Linux server will support will increase.

  46. dougman wrote, “run 220 users concurrently on one server”

    There’s no special trick to it. Having fewer applications running on a server means resources are not wasted and caching is optimized. Imagine the waste of having 220 copies of that other OS and its office suite and browser in RAM on 220 PCs… I was in one school where I looked around the lab at a super computer with 24gB of RAM mostly wasted struggling to keep that other OS from swapping. The whole thing on a terminal server could run in 4gB. Now, RAM is not that expensive, but the ATX boxes, PSUs and hard drives to support it add up…

  47. Clarence Moon wrote, “Certainly you are not living in corporate America.”

    Thank Goodness! I would likely be unemployed and without medical insurance instead of retired.

    Corporate USA has big problems. The world does not owe them a living and they have to maximize efficiency. FLOSS does that and competitors on a global scale are using FLOSS in business. Take a hint and forget reinventing the wheel. It’s the 21st century and everyone who wants one has wheels now. Much of the world is skipping Wintel for better price/performance in IT.

  48. oiaohm says:

    Phenom, the thin-client idea is not mine; it was IBM’s first.

    Phenom
    “Nevertheless, users of thin clients end up paying for MS Office licenses.”
    Really? You miss a few facts. The key one: moving software around the office by thin client is insanely fast, so software can be directly tied to the user’s login.

    Number 2: an MS Office license does not require CALs if you don’t use the MS Server products (yes, some places are using OpenChange instead of Exchange already). So it’s only one instance cost of MS Office per terminal server.

    Yes, MS Terminal Services CALs are cheaper per machine than MS Office for an enterprise.

    So 100 licenses of MS Office get replaced with 2 licenses of MS Office and most likely 20 Terminal Services licenses, and that works out cheaper. The reason for 2 is two terminal servers. It also brings a performance boost: users are not sitting around waiting for their desktop machines to boot up.

    The reason for changing over to LibreOffice for internal use is that in most businesses less than 20 percent of staff deal directly with external files, so the rest don’t need MS Office to open them, and the instances are fast to access.

    Phenom, the numbers on this have been done repeatedly. It’s about time you went and studied MS licenses and woke up: running thick clients is highly expensive due to the MS licensing model.

  49. Phenom says:

    Neither are you, Mr. Pogson, living in corporate EU. The thin client I mentioned runs exactly that: certain proprietary software and MS Office, backed by Exchange. It works pretty well, except for the drag & drop glitch I described.

    The number mumbo-jumbo you spilled is nothing but a wretched attempt to dodge the deficiencies of thin clients and paint a picture of a world you wish for, but a world that is simply not there.

    I am afraid you are catching the disease from Ohio – you type too much while saying nothing, trying to disguise your hollow ideas behind unverifiable numbers.

  50. Clarence Moon says:

    “I know a lot of users of thin clients who don’t use that other OS nor M$’s office suite…”

    I don’t know any. Perhaps that phenomenon is local to the Indian schools in rural Canada?

    Everywhere that I have worked, just about everyone in the organization had at least one PC and that PC had MS Office installed. Just about everything we did was integrated into Outlook for scheduling and in the past few years used SharePoint for information exchange and collaboration purposes.

    I think you are living in a dream world, Mr. Pogson. Certainly you are not living in corporate America.

  51. dougman says:

    I have to say, terminal apps and thin clients rock.

    Dave from Largo is the expert in this subject, especially when you can run 220 users concurrently on one server.

    http://mrpogson.com/2012/04/01/daves-top/

  52. Phenom makes my day by holding forth, “Nevertheless, users of thin clients end up paying for MS Office licenses. Therefore, thin clients are no threat to MS whatsoever.”

    I know a lot of users of thin clients who don’t use that other OS nor M$’s office suite. Remember all those roadblocks M$ set up to protect the “applications barrier”? There’s a reason for that. When the necessity of running that other OS disappears, the whole thing collapses like a house of cards. You can bet thin clients are a threat, about $50 per seat. There are 1000 million seats at stake… Do the maths (roughed out at the end of this comment). LibreOffice/OpenOffice.org already have 100 million or so of those seats. GNU/Linux has ~100 million of those seats. There could be ~200 million seats gone more or less instantly. Then there are 700 million smart thingies… The train is getting up to speed and M$ has yet to catch the last car. Within a couple of years the number of “seats” in IT could double, and M$ is not in line to get many of the new seats while being in line to lose many of the old seats.

    Does anyone on Earth really believe anyone should shell out $hundreds per seat in licensing fees for a thin client costing $200 or less for the box? Is the world buying software or hardware? The real cost of software is a bit for development and almost nothing for copying. The monopoly prices are ridiculous and everyone knows it.
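
    Taking the figures in this comment at face value, the maths looks roughly like this:

        # "Do the maths", using only the figures quoted in the comment above.
        per_seat = 50                  # $ of licensing at stake per seat
        total_seats = 1_000_000_000    # seats worldwide
        gone_now = 200_000_000         # LO/OOo + GNU/Linux seats already gone

        print(f"whole market: ${per_seat * total_seats / 1e9:.0f} billion")
        print(f"already gone: ${per_seat * gone_now / 1e9:.0f} billion")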

  53. Phenom says:

    Nevertheless, users of thin clients end up paying for MS Office licenses. Therefore, thin clients are no threat to MS whatsoever. They are simply an approach to build IT infrastructure within a company, which has its pros and cons.

    Btw, the last thin client I had to use (HP-made) had limited drag & drop support, due to its inability to bring focus to target applications from the taskbar. A minor annoyance, but it starts playing big in everyday life.
