In Defence Of Menus

Menus apparently annoy some people. I rather like them. Bruce Byfield writes: "The modern desktop long ago outgrew the classic menu with its sub-menus cascading across the screen. Today, the average computer simply has too many applications to fit comfortably into such a format." With a few clicks one gets where one wants to be, just like a file-system. If your menus are inappropriately long or don't fit on the screen, it's because you have not sorted the entries properly, not because there's anything wrong with menus per se.

I have 3K Debian packages on my PC. Here's my top-level applications menu. It's long but lists a few "hot" items and a bunch of categories. The boys and girls at Debian sorted them out for me. (I have one game, FlightGear; I managed to take off once but crashed… 😉) I think my PC has an above-average number of applications and they fit quite comfortably in my menus. The menus might not be optimal one way or another but they are close enough that they are never an annoyance to me.

Here are my long menus, Accessories and “other”. They don’t look too long to me. They are long but they are alphabetical and only “other” requires scrolling.

This entire article didn't require any of these menus, because I have icons for GIMP and "screenshot" on a bar at the bottom of my screen, AutoKey runs in the background, and my browser runs all the time.

So, menus are a backup plan on a real desktop OS like GNU/Linux, and there's no need to tweak them if you use Debian GNU/Linux and know the alphabet. Menus work. Use them. I feel sorry for those developers constantly tweaking or replacing menus. I think they are wasting their time, at least for legacy PCs. For a tiny screen, I can see they have a point, but my current monitor is 20 inches and I could switch to a huge one as I age. So, the energy spent on replacing menus for legacy PCs is really misplaced. There is no reason for a "one size fits all" solution to the "menu problem". The problem does not exist here.

See 7 Improvements The Linux Desktop Needs

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

151 Responses to In Defence Of Menus

  1. oiaohm says:

    That Exploit Guy, sorry, but really you have been caught out lying way more than me. Yet you still show your head.

    Sorry, That Exploit Guy, building stuff to standards at times means accepting items that you might want to reject. Accepting UTF-8 out to 6 bytes is just being standard-conforming so you are future-safe.

    On the VMware point, there is VMware documentation on what they do to the internals of Windows so that their products perform. VMware has in fact had alterations made to OS X and Linux so that their products perform well there too.

  2. That Exploit Guy says:

    DrLoser one thing that pisses me off more than anything else is accusing me of making stuff up when I have not.

    Save your feigned indignation for elsewhere.
    Again, you have been proven full of crap, going so far as to claim that aliases of the same thing are different "standards". I can't speak for Dr. Loser, but I myself have neither the time nor the interest to read your laughable little fibs about how VMware has completely rewritten Windows or why software should accept illegal UTF-8 sequences. If there is one thing that upsets me more than anything else, it's lying little scumbags who think they can get away with telling what is untrue.

  3. oiaohm says:

    DrLoser, one thing that pisses me off more than anything else is accusing me of making stuff up when I have not. There was absolutely no need to call 100GE some weird made-up thing. Had you simply said that you had not seen it before, I would have been polite and given you a link.

    DrLoser, my politeness level is pretty much linked to yours.

    "Max TCP/IP" is a very hard set of numbers to pin down.

    The maximum number of TCP connections per IP-address pair to a single port on a server is 64K, simply because there are only 64K source ports. What happens when you give servers more than one IP address? Please note that is 64K streams, not the number of TCP/IP packets your system has to handle.
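
    A back-of-envelope sketch of that ceiling (assumed figures only, Python used purely as a calculator; the 64K comes from the 16-bit TCP port field):

        # Theoretical concurrent-stream ceiling to one server port: each
        # client/server address pair can use at most ~65,535 source ports.
        PORTS_PER_ADDRESS_PAIR = 2 ** 16 - 1

        def max_streams(server_addresses, client_addresses):
            """Upper bound on concurrent TCP streams to a single server port."""
            return PORTS_PER_ADDRESS_PAIR * server_addresses * client_addresses

        print(max_streams(1, 1))   # 65535  -- the "64K" figure above
        print(max_streams(4, 1))   # 262140 -- four server addresses, one client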

    DrLoser, you said
    "max out on tcp/ip communications". This is where I got "max TCP/IP" for 10 million. I can fill a 30TbE connection for 10 million; 100TbE would be pushing things. Really, it would not matter much what you were sending, as long as you were not wanting 30TbE to a small number of clients.

    Of a 30TbE build out of 10 million, about half is servers, about a quarter is switching and optical multiplexing (of course this is not going to be your stock normal retail gear) and a quarter is installation and profit. A 30TbE data-centre setup fits in a single shipping container, so installation can be really simple.

    A 30TbE set-up will give its internal DNS a workout far worse than the Internet traffic does.

    You have asked a how-long-is-a-piece-of-string question, DrLoser. The top end of that request is all non-Windows hardware. In fact, stopping at 30TbE was staying inside the 10-million budget. Mind you, 10 million is nowhere near the largest build I have put into a 40-foot shipping container. A 40-foot shipping container is a space you buy at one Australian data centre: you provide the container filled with your gear and they connect it up in their X mine, perfectly temperature controlled.

    DrLoser, I will give you a little puzzle: try designing a data centre in a 40-foot shipping container with a 300TbE connection to the outside world (multiple fibre connections allowed). I can tell you now it is doable, and was doable at the end of 2013, but the fielded design was rejected due to running out of space in the container for the UPS. I really expect someone to work out how to do 1PbE in a shipping container sometime in the next few years. That is the fun of modular data centres. Even 10 percent less processor power required to perform tasks would be a godsend in these fixed-physical-size setups.

  4. oiaohm says:

    http://www.ncbi.nlm.nih.gov/pubmed/23938676h
    http://www.ncbi.nlm.nih.gov/pubmed/23787613
    http://www.howtoforge.com/distributed-storage-across-four-storage-nodes-with-glusterfs-3.2.x-on-ubuntu-11.10

    $120,634.85 for the housing switch, and the module pair for 30TbE is half a million. Yes, they are custom order. 30TbE is no longer large either; 100TbE+ per fibre-optic cable is deployed. It is very expensive to put new fibre in the ground and simple to multiplex more and more into it. Sorry DrLoser, I am in Australia, remember, where you can travel 500km+ and still be on the same block of land. So fibre-optic is quite common hardware here, including very high-speed stuff. Think of the bandwidth required for high-resolution security cameras around the fence line of a property 500km across. Cattle and camel duffing in some of those areas is a huge problem. Huge setups are not always data centres in the normal sense.

    Sorry DrLoser, you are wrong: I did not say there was no value to a 40 percent gain in BIND. I said there is no value where you are looking. For Internet DNS usage, "no major gain" is correct. I said the saving for BIND is on the local network.

    How do cluster file systems locate each other? Guess what, DrLoser: by DNS. How do SMB machines under Windows locate each other these days? DNS. How do SAN components find each other in most cases today? By DNS, not directly by IP address. A 40 percent gain in the performance of BIND can directly result in cluster file-systems running faster. Slow or non-responding DNS in the local network can cause a nasty stack of collapses. It's not the Internet DNS servers where you want the 40 percent gain; it's the DNS servers you are using with your clustered file-systems and databases.

    Even using a SAN, the performance of your DNS server can be a factor.

    Sorry DrLoser, you are out of your depth. As I said to oldman, I used cluster file-systems rather than a SAN. So why should it be odd now that I know where the advantage of a BIND DNS performance gain is? It also explains why I know what generates and destroys DNS traffic. Yes, part of operating cluster filesystems is living with DNS servers.

  5. DrLoser says:

    Great steaming heaps of gibberish as usual. I apologise for the 100GE thing, oiaohm: I don’t really have the time to check every one of your malapropisms for accuracy. I do you the favour of correcting your spelling where I feel it helps your point. May I suggest that you do me the favour of keeping a civil tongue in your head?

    Ah well.

    "DrLoser, you said you wanted max TCP/IP. Max for a single server is about 400GbE; that is about $50,000."

    No I didn’t, oiaohm. I proposed four interview questions. Not a one mentioned “max TCP/IP,” which is in any case a completely unquantifiable measure. Max under what circumstances?

    I see you mentioned the forbidden “$50,000” number as well. You do realise that I was not seriously proposing $50,000 as a fixed price, don’t you? To anybody willing and capable of reading, I was obviously offering an estimate at the top end of the scale for the non-existent use case suggested.

    "100GbE will handle general Internet DNS loads with room to spare."

    Thank you for proving my point.

    There is absolutely no value in talking about a “40% reduction in syscalls through exception-less mechanisms” when it comes to BIND.

    How many times do I have to repeat myself?

    100GE doesn’t come cheap, though. You actually have to have a serious need before you splash out that much on one or more NICs, plus the data-centre switching equipment, plus the network control environment, plus the switches out to the backbone, plus paying for the rather heavy bandwidth usage on that backbone.

    But of course, oiaohm, as you are a self-proclaimed business server and network manager of some considerable repute, although zero visibility, you are well-versed in how to cost out a data centre.

    (Feel free to google for “data centre” before you launch your next tidal wave of gibberish. Oh, and while you’re at it, have another go at googling for IBM SAN configuration.

    (That one didn’t go so well the last three times you made a pathetic stab at the subject, did it?)

    "High-end network speeds are insane. Yes, data centres can have multi-port 30TbE-class switches with multiple 30TbE incoming Ethernet links."

    And once again, somebody is certainly insane.

    You don’t just go buy a 30TbE switch “off the shelf,” oiaohm. Can you quote me one? No, because there is no such thing in current production. And there won’t be any time soon.

    You can buy an (expensive) switch with 30Tbps Fabric, oiaohm, sure. And naturally you are well aware what telcos mean when they talk about “Fabric.”

    But you’re not going to see any more than 100GE coming out of a NIC on that switch any time soon.

  6. oiaohm says:

    DrLoser has never managed servers. It becomes very clear. DNS server issues are normally at the end points.

    25 to 30 DNS lookups per second per machine can in fact be a normal background spike without even a user sitting at the machine: software checking for updates, network resource sharing like printers and file-shares, and so on, all just happening to line up. The most common cause of a router collapsing in a heap is some local machine gone stupid. Advanced network cards have TCP/IP processing on them so they don't need to bother the OS with TCP/IP packet resends, and if that breaks, the only limit on the number they can send is packet size and network transfer speed. If your router is big enough to handle the maximum request rate, it does not collapse when a NIC in a single computer goes nuts; it just junks a huge number of duplicate messages. Also, when a router's DNS runs out of outgoing bandwidth it returns "unable to resolve" for anything bar local names.

    Solaris is another OS for which putting out huge volumes is in fact impossible; this is not due to Solaris design, it is simply a case of Solaris not having the drivers or the hardware at this stage.

    Something to handle a million+ requests per second: APM X-Gene data-centre processors are 100GE native. Native makes a big difference when the NIC multiplexer can write straight into the CPU caches, skipping over normal slow RAM and so giving more processing time. Using 100GE/GbE connections, every slow point adds up very quickly.

    You are correct that it would be a Linux box. $50,000 is far too much: a $5000 server of the right CPU class with enough RAM, running Linux, will handle your example of Internet DNS with tons of room to spare. In fact the same $5000 box can handle 100GE internal-network DNS requests, including the worst case.

    Data centres running internal cluster file servers can in fact have periods of millions of DNS requests per second. These are not Internet DNS requests; these are internal DNS requests.

    DrLoser, if you had ever read a business's DNS server request log you would have realised very quickly that the majority of DNS processing is not Internet traffic. Experience with a few NIC malfunctions teaches you that maximum DNS request rates do happen. Maximum HTTP rates can happen due to malfunction as well.

    Here is the killer, DrLoser, for those who have not really worked in business networks. You have a proper managed switch. How long do you think it takes to tell a port on a managed switch to turn off, or for the managed switch to work it out and turn the port off itself in the case of a client-side malfunction? Worst case is 60 seconds, so your server/router needs to be able to handle maximum traffic for 60 seconds and not do anything stupid to be part of a highly durable class of network.

    100GE/100GbE stuff is not new. In 2013, 30Tbps Ethernet was demoed; you find 30TbE-class capacity on switches like the Arista 7500 Series. There is no server that can output 30TbE directly. In other words you need 300 100GbE ports, and particular Linux machines take up to 4 100GbE ports each, so you need a minimum of 75 servers to reach the true maximum of what we can do today.
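
    The arithmetic, spelled out with the same assumed figures (illustrative only):

        TARGET_GBPS = 30_000        # 30 Tb/s aggregate, expressed in Gb/s
        PORT_GBPS = 100             # one 100GbE port
        PORTS_PER_SERVER = 4        # assumed maximum 100GbE ports per Linux box

        ports = TARGET_GBPS // PORT_GBPS                              # 300 ports
        servers = (ports + PORTS_PER_SERVER - 1) // PORTS_PER_SERVER  # 75 servers
        print(ports, servers)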

    30TbE goes down a single fibre-optic cable.

    High-end network speeds are insane. Yes, data centres can have multi-port 30TbE-class switches with multiple 30TbE incoming Ethernet links.

    DrLoser, you said you wanted max TCP/IP. Max for a single server is about 400GbE; that is about $50,000. 100GbE will handle general Internet DNS loads with room to spare.

    This is the current problem: network transfer speeds are truly huge. 30TbE is like transferring a 3TB hard-drive every second. The system you have to design to fill a 30TbE connection is massive. Then you have to remember that some data centres have hundreds of 30TbE lines incoming.

    To support 1TbE on a single server we need CPUs to be able to go faster. Recent tech advances may in fact see 1TbE or higher on servers directly. So yes, the future is just faster and faster Ethernet with less and less tolerance for slow operations like context switches.

  7. oiaohm says:

    http://en.wikipedia.org/wiki/100_Gigabit_Ethernet
    DrLoser, I did not invent 100GE; it is an official term, you moron. It can be written either 100GbE or 100GE.

    That you call it an invented word means you are out of your depth on high-end hardware; the first 100GE gear shipped in 2011.

  8. A Pesky Interloper says:

    @ JoeMonco
    “Again, what does this have to do with running (Tight)VNC on an Vista Home edition? […]”
    http://mrpogson.com/2014/06/22/in-defence-of-menus/#comment-161523

    Again, what does your proposed use case have to do with what Pogson wants (free software with zero price licensing)? Nothing? Why do you bring it up, then? Oh, you’re trolling by semantics and hoping that Pogson et al will fall for it.

    Well, truth be told, they did fall for it. Successful trolling is successful. Bravo, Joe.

  9. DrLoser says:

    Incidentally, those are very, very low level interview questions. If you can manage a decent answer to them, you might just scrape through the first half hour of Interview Day at Bing, which is the point where the leggy blonde receptionist concedes that there’s a 10% probability of you being more knowledgeable about IT than she is, and passes you on to the intern.

    I recommend this path to Dougie. Dougie would enjoy the prospect of flexing his “University of Life” intellectual muscles.

    If Robert wants to try it, I’d recommend emphasising his college background in Pascal.

    Yes, seriously, I would. Pascal is a Big Deal round Microsoft circles. Pascal begat Modula-2 (also a big Pogson thing, although I understand that Robert no longer feels the need for its more advanced features). Modula-2 begat Delphi, which is a nice little RAD package still in use here and there.

    And Delphi, via Anders Hejlsberg, begat Linq.

    Linq is an interesting exercise in “fluent programming.” Essentially, it’s a way of bringing all the goodies of Functional Programming, including Haskell-style Monads, into the Procedural Programming world.

    Given your background in language classes, Robert, I think you’d enjoy and appreciate it. Of course, it’s a Microsoft technology, which might put you off.

    Never mind, Scala (which isn’t a Microsoft technology: it’s JVM based) is almost as fluent. As a plus, it has built-in covariance and contravariance, amongst other things.

    Master any one of these, and you’ll probably sail through the interviews at Bing.

    Good luck!

  10. DrLoser says:

    OK, enough with the helium balloons and Martians. Apparently neither one was sufficiently convincing, even though nobody has refuted my point either time. Let’s try an interview-style “thought experiment” based upon the following gibberish:

    "Linux setups are in fact able to directly fill 100GE, in fact several of them at once. Yes, millions of DNS requests per second can come in on a 100GE connection. There are open-source routers in the 100GE class."

    I'm going to ignore this weirdo "100GE" thing. oiaohm is welcome to invent any word he wants, including supposedly technical ones.

    Let’s focus on the idea of a DNS server that can handle “millions of DNS requests per second,” shall we? And here are the interview questions:

    1) How many DNS requests per day would that be?
    2) How many DNS requests per day would you expect from an online population of, say, 3 billion?
    3) How many organizations need this sort of awesome power to service DNS requests?
    4) How many commodity machines … even a mildly specialised machine, such as the eminently scalable Solaris boxen … would be required to service this need?

    I’m going to leave aside the asinine contention that the way forward is to reduce context switches by 40%. I’m even going to try very hard to forget the extraordinary (and false) belief that all DNS servers rely on massive flat files.

    Let’s do the interview thing and consider those questions, shall we?

    1) There are 86,400 seconds in a day, so that one is easy. I'll compensate for fluctuations by time zone and reduce "millions" to "one million." That would be some eighty-six billion requests to our notional DNS server per day. (A quick arithmetic check of all four answers follows this list.)

    This “advanced maths” of yours is getting a little hairy, isn’t it, oiaohm?

    2) This one is a little more difficult to estimate, so let’s turn the question upside down. Given the awesome response capability calculated in (1), we can assume around 25 – 30 requests per user for this single wondrous machine. That’s rather a lot, really. It ignores local caching and proxy servers, for example. I’m a fairly heavy browser, and I don’t think I do much more than approach this DNS workload.

    3) This one is relatively easy to answer. Basically, anybody who runs a TLD (or similar). According to the figures estimated in (2), however, these people are wasting their time. A single off-the-shelf Linux box, say a meaty little number costing $50,000, would solve all their problems.

    4) Consequent to (1), (2) and (3), the answer to this one is simple. Just one box. That’s all. Maybe two for site redundancy, or four to eight distributed around the world in case of attack by Martians or Giant Super-Intelligent Helium Balloons.

    $400,000 and we’re done, then.
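
    For the record, the back-of-envelope check of those four answers, using the same assumed numbers (nothing here is measured):

        SECONDS_PER_DAY = 86_400
        req_per_day = 1_000_000 * SECONDS_PER_DAY      # ~8.64e10, i.e. ~86 billion/day
        per_user = req_per_day / 3_000_000_000          # ~29 requests per user per day
        boxes = 8                                       # worst case: eight redundant sites
        total_cost = boxes * 50_000                     # at the notional $50,000 each
        print(f"{req_per_day:,} requests/day, ~{per_user:.0f} per user, ${total_cost:,}")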

    I'm not the first to suggest this, oiaohm, but your quite serious maths is quite seriously deranged, isn't it?

    Oh, and speaking of advanced mathematical techniques (you know, the sort you pick up when you’re in your late teens, if you happen to be mathematically competent), what was that brilliant idea you had that disproves all known Queue Theory maths?

    You know, the one that forces you to sit by a Linux router for hours, even days, before the poor thing finally gets overloaded by incessant BIND requests?

    Have you tried sheep-shearing, oiaohm? Because mathematics is a foreign country to you, isn’t it?

  11. oiaohm says:

    http://en.wikipedia.org/wiki/Context_switch
    "Incidentally, oiaohm, you are confusing "mode switches" — user space to kernel, kernel to user space — with "context switches" — between one application and another."
    In fact DrLoser has screwed up here completely. Mode switches from user space to kernel space and back under Windows, Linux and other protected-mode OSs are wrapped in context switches. There are OSs where mode switching is not wrapped, for example FreeDOS. A context switch does not even have to be from one application to another; in fact some OSs allow you to perform a context switch inside an application without changing threads or applications, again FreeDOS (and other rule-less OS designs).

    Robert Pogson's comment is based on the idea of protected-mode OS design. In those, you context switch on a mode switch to prevent user-mode state contaminating kernel mode and the reverse.

    DrLoser, if you want to push TCP/IP to the max you will choose Linux. In fact you will go custom-programmed. Linux is able to do real-time share trading over the network from user space. This is due to the circular buffers exposed to user space that allow your application to put a packet up for immediate send as well as immediate receive. This tech enables you to handle 100G+ network connections. You need hardware multiplexing that is not supported by Windows. There is no such thing as a functional 100G network interface for Windows directly; the Netronome 100GE card for Linux is in fact a dual 100G interface. "Linux setups are in fact able to directly fill 100GE, in fact several of them at once. Yes, millions of DNS requests per second can come in on a 100GE connection. There are open-source routers in the 100GE class."

    The sad reality here is that the only 100GE cards for Windows in fact run Linux on the card to filter and reduce the traffic so Windows doesn't break. So your only options are a Linux-only data centre, or a Linux-and-Windows data centre, if your goal is to be able to do the maximum.

    This is the problem, DrLoser: you did not understand what a context switch was and how it is used. The problem here is that MSDN documentation suggests it is only used application-to-application and fails to mention that Windows uses context switching internally on mode switches. Sometimes it does pay to look under the hood.

    Due to recent breakthroughs, 1Tb networking will become possible. You can already run stuff like BIND on some of the existing high-end cards if you just happen to be a little low on CPU.

    10 to 30 microseconds doesn't sound like a problem at 100M or 1G networking, but at 10G, 40G and 100G those are quite concerning numbers. Even on 1G a 10-microsecond context switch can become a problem. Let's say I need to do 10 syscalls (this can be quite conservative) per processing loop; that is 200 microseconds consumed doing nothing other than context switching, which caps a single core at roughly 5,000 processing loops per second. Not a worry, right? Divide 1Gbps by 5,000 and you get about 25KB per loop. What size is a DNS packet? Yes, far smaller than 25KB, so a 1G network card has no trouble delivering many more DNS requests per second than one core can process. A single-core 1Gbps router is screwed using BIND. This is the problem; I am very used to doing theoretical network-usage maths. You never hit the theoretical numbers, but they tell you when you are in hell. Yes, there are cheap single-core 1G-rated routers out there.
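
    Spelled out with the same assumed figures (illustrative only, not a measurement):

        SWITCH_US = 10            # assumed cost of one user<->kernel transition
        SYSCALLS_PER_LOOP = 10    # assumed syscalls per packet-processing loop
        LINK_BPS = 1_000_000_000  # 1 Gbit/s

        overhead_us = SYSCALLS_PER_LOOP * 2 * SWITCH_US   # 200 us of switching per loop
        loops_per_sec = 1_000_000 // overhead_us          # ~5,000 loops per core per second
        bytes_per_loop = LINK_BPS / 8 / loops_per_sec     # ~25 KB of line rate per loop
        print(loops_per_sec, int(bytes_per_loop))
        # A DNS query is on the order of 100 bytes, so the link can deliver packets
        # far faster than a core burning 200 us of switch overhead per packet can keep up.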

    Setting up 100GE networking, you start running some quite serious maths to make sure you have systems large enough to handle the tasks.

  12. That Exploit Guy says:

    "Why are you shitting on my blog over semantics?"

    Why? Because you were the one who invented this bit of inane semantics and made the wholly unfounded claim that it was in the EULA.

  13. DrLoser wrote, “It just never happens, does it, Robert?”

    It did back in 2003 on Lose ’98.

  14. DrLoser says:

    Dear God, all right, let’s pretend I’m a Martian.

    As a Martian, I have no clue about operating systems. As a Martian, I just want to buy a data centre for, say, $10 million, that I’m going to max out on TCP/IP communications.

    Do I buy Linux? Do I buy Windows?

    As a Martian, I am obviously biased and I really don’t care about that naughty, naughty M$ tax per license.

    But, you know what?

    Even as a Martian, I might just notice a 90% degradation in comms performance.

    It just never happens, does it, Robert?

    Look around yourself. The real world tells you that simple and obvious fact.

  15. DrLoser says:

    By the way, I’m prepared to follow your links (a la oiaohm) and check that they’re relevant or indeed accurate.

    Sadly, because this site is so remarkably broken in a way that I thought was beyond the dreams of even the most inept PHP programmer, I cannot do so without resorting to “View source.”

    So, a question for you.

    Would you prefer some nasty grotty only 10% efficient system that chugs along and delivers a reliable GUI front end?

    Or would you prefer a Screaming Fast 22nd Century Quantum Leap comms stack that, er, delivers barely intelligible crap?

    I mean, I don’t believe the 10% thing for a moment.

    But on the other hand I find it hard to believe quite how dire this site’s adherence to “Open Standards,” in this case HTML and CSS, can be, as well.

  16. DrLoser says:

    "TCP/IP is a relatively large-sized protocol stack, which can cause problems in Microsoft MS-DOS-based client computers."

    What a very interesting and compelling cite that is, Robert. Isn’t it a shame that nobody has used an “MS-DOS based client computer” for fifteen years or more?

    But what’s this? Do we see your own cite contradicting your ludicrous premise?

    "However, on graphical user interface (GUI)-based operating systems, such as, Windows 95 or Windows 98, the size is not an issue and the speed is about the same as Internetwork Packet Exchange (IPX)."

    Indeed we do, Robert.

    Indeed we do.

    Honestly, how do you expect anybody at all to take this gibberish seriously?

    Oh, and those measurements of yours?

    Clearly you had one of those rare “I’m not usually quite this numerically inept” moments, didn’t you?

    Let’s see you repeat those measurements one more time. You know. The scientific method, and all.

  17. TEG, being a troll, wrote, ““Diagnostic/teaching purposes” and other wholly-invented, non-existent rubbish, on the other hand, aren’t.”

    Diagnosis/teaching both involve remote assistance, even if the teacher is in the room. Why are you shitting on my blog over semantics?

  18. DrLoser wrote, “I’m absolutely sure you have proof for that insane and undocumented assertion, Robert.”

    I measured the throughput of Lose ’98 on the LAN at one school and got 100KB/s at 10 mbits/s. Same with MacOS in those days. They both switched to the BSD stack and got up to speed. It’s still a problem for M$ and users from time to time…

    e.g. http://boards.straightdope.com/sdmb/showthread.php?t=274314
    Quoting M$, itself, “Historically, the size and speed of TCP/IP had been its two primary disadvantages. TCP/IP is a relatively large-sized protocol stack, which can cause problems in Microsoft MS-DOS-based client computers. However, on graphical user interface (GUI)-based operating systems, such as, Windows 95 or Windows 98, the size is not an issue and the speed is about the same as Internetwork Packet Exchange (IPX).”

    Even against XP, GNU/Linux does much better throughput with TCP, 50% better with 5% bidirectional packet-loss.

    In fact, M$ still hasn’t got it right, with this bug affecting just about every M$ OS in use today… That’s right, under certain conditions, M$ sits around for 0.2s doing nothing just transferring a few KB of network data. That could mean a loss of throughput of up to 2.4MB/s at 100mbits/s. They have a workaround. Mine is to use GNU/Linux.
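
    For the curious, the arithmetic behind that figure looks roughly like this (assuming a single 0.2 s stall on a link that would otherwise be running flat out):

        LINK_BITS_PER_S = 100_000_000      # 100 Mbit/s link
        STALL_S = 0.2                      # the ~200 ms the stack sits idle

        bytes_per_s = LINK_BITS_PER_S / 8          # 12.5 MB/s of raw line rate
        lost = bytes_per_s * STALL_S               # bytes that could have moved but didn't
        print(f"~{lost / (1024 * 1024):.1f} MB forfeited per stall")   # ~2.4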

  19. That Exploit Guy says:

    @RP

    "Absolute nonsense."

    From you.
    Again, it's "Remote Assistance and similar technologies". That's what's in the EULA.
    “Diagnostic/teaching purposes” and other wholly-invented, non-existent rubbish, on the other hand, aren’t.

  20. DrLoser wrote, “By the way, Robert, did you even consider doing a course in Computer Science (any module would do as far as this question goes)?
    Did you even consider reading a book?”

    Yes, I took several Computer Science courses at university: introductory programming (Fortran) and numerical analysis to name a couple. I also read voraciously about assembler, Pascal, Fortran, PL/1, Algol, Praxis, Focal, Modula-2, various operating systems, various journals like ACM and IEEE etc. I’ve programmed in a dozen different languages on a dozen different architectures over the years. I like Pascal on GNU/Linux on AMD64 best lately, but I’m thinking ARM is in my future.

  21. DrLoser says:

    By the way, Robert, did you even consider doing a course in Computer Science (any module would do as far as this question goes)?

    Did you even consider reading a book?

    Going on a course to learn, say, one of those basic items that I mentioned earlier, such as algorithmic complexity?

    (I'll help you out: Big O, T, Theta and Omega. In order: Theta, T, O and Omega.)
    (You're welcome.)

    You know where “rubbing shoulders” gets you, Robert?

    Rubbing shoulders gets you a nasty rash and several unspeakable skin diseases.

    It doesn’t actually teach you anything worthwhile about Computer Science, as oiaohm has conclusively and repeatedly proved so far in this thread.

    Interestingly enough, although oiaohm has posted about four or five entirely unjustifiable claims …

    … you, Robert, as a man who has "rubbed shoulders" with various un-named individuals carrying a deck of punched cards towards an IBM/360, and have therefore imbibed IT wisdom purely through dermal contact, have yet to tell the little chap that he's full of it.

    It’s obvious that oiaohm is full of it.

    Now, here’s the question, Robert: do you agree that oiaohm is full of it, or do you disagree?

  22. DrLoser says:

    "I guess that explains why M$'s networking speed was 10% of what the hardware could do for decades."

    I’m absolutely sure you have proof for that insane and undocumented assertion, Robert.

    But, just in case you don’t have proof of that insane and undocumented assertion, may I suggest that you leave insane and undocumented assertions to oiaohm?

    oiaohm is the undisputed master of insane and undocumented assertions.

    I’m not going to dispute oiaohm’s mastery of insane and undocumented assertions.

    TEG isn’t going to dispute oiaohm’s mastery of insane and undocumented assertions.

    Forgive me for my apparently unwarranted optimism, Robert, but until now I was blissfully unaware that you were into this “insane and undocumented assertion” business.

    After all, you close windows. You like to think for yourself.

    It would be regrettable if such thinking led you … even occasionally … to insane and undocumented conclusions.

    Leave that stuff to oiaohm. It is oiaohm’s sole purpose in life.

  23. oiaohm wrote, "Counting users means you have to herd your users into doing the right things or you find yourself in breach of license."

    The worst example I ever saw of this silliness was a school that had a non-FREE "programmed instruction" system with N licences. In the middle of a class, the licence-limit could be reached and some authorized person had to "reset" the count. Apparently, some students did not "log off" in the approved manner and their licences were considered still in use after class. The school therefore had to buy extra licences just to use the software. It was also a pretty useless system in that the elementary students learned how to trick it into giving them a pass on the work… No security. Heavy burdens of administration. Ineffectiveness. Wasteful/expensive licensing. It was just the wrong way to do IT. That software cost ~$100 per student per course per annum. So, our medium-sized school was spending one teacher's salary per annum for no benefit whatsoever.

  24. TEG wrote, ” VNC is simply not a way to share a single machine over two users. Not ever. “

    Absolute nonsense. VNC allows a local user to point and click and a remote user to point and click. Those could be the same person or two people. M$ sees them as one user but it forbids such usage except for “remote assistance”.

    Further, “Under any platform, and providing your license entitles you to do so, you can run more than one instance of VNC Server on a host computer.
    This powerful feature means you can set up the host computer so users can connect to it in different ways.

    Under Windows, a host computer user with administrative privileges can start VNC Server in Service Mode. This means VNC Server runs, and users can connect, irrespective of whether or not a host computer user is logged on. By default, in order to connect to:
    • VNC Server (Enterprise) or VNC Server (Personal), users must know the user name and password of a member of the Administrators group.
    • VNC Server (Free) , users must know the VNC password.”

    M$’s licence forbids this for many client operating systems without buying extra licences.

  25. DrLoser wrote, like a true M$oftie, “Because real Computer Scientists understand that, if you can get an O(1/log n) improvement, using a tree structure, or even better a hash map, all this faffing around with syscalls is completely worthless.”

    I guess that explains why M$'s networking speed was 10% of what the hardware could do for decades… and why they shipped several OS in the 1995-2005 timeframe with 50K bugs, many of them whoppers… M$ was just going for the low-hanging fruit.

    One of the first things I noticed about GNU/Linux when I switched from that other OS was that it just never crashed no matter the load put on it. I guess M$ is full of people who think like DrLoser and care nothing for the boring details underneath. Thank goodness Linus and friends care about such details: "Those nymbers look very convincing to me, a cool 25.4% speedup!
    mmap-perf is very MM intense – including vma lookup."

    You see, a real operating system wants to get its work done and get the Hell out of the way of users rather than enslaving them, making them “wait, please wait…”. I guess M$’s sycophants can’t get their heads around that concept.

    I can tell you that in the 1970s and 1980s, I went to great lengths to optimize code, mostly at the innermost loop, because my bosses wanted to be in the “A” queue at the computer-centre and they wanted their answers faster. I actually coded some stuff in integer arithmetic using Assembler just to make the innermost loops faster and to get more precision in 32 bits. According to DrLoser, I was just wasting my time, but people loved my work. He doesn’t get “customer service” at all.

  26. DrLoser wrote, of switches from kernel to user space and one application to another, “The two are separate and distinct concepts.”

    True, but they both require the OS to do a little work during the process of switching. They are both context switches, giving juice to different processes.

  27. DrLoser says:

    Incidentally, oiaohm, you are confusing “mode switches” — user space to kernel, kernel to user space — with “context switches” — between one application and another.

    The two are separate and distinct concepts.

  28. DrLoser says:

    "Yes, a context switch might be microseconds. The thing you are not allowing for is that a syscall is two context switches: one from user space to kernel space and one from kernel space back to user space, minimum. This still seems small.
    Problem: this is an example of the straw that broke the camel's back."

    Actually, no, oiaohm. A synchronous syscall is a single context switch. That is the point of a synchronous syscall.

    But let’s assume otherwise. Let’s go hog-wild and assume many, many, unfeasibly many context switches to a single syscall. I’ll go with your figure of 30µs per context switch.

    Let’s give the browser a single second before it times out on a DNS lookup.

    Can you divide one million by thirty, oiaohm? That’s a whole bale of hay you get there, my agricultural friend …

    "I did not say the router crashes straight away. It can take many hours."

    Another innovative bit of genius when it comes to Queue Theory, oiaohm. Can a Nobel Prize be far behind?

    Do you have any idea how ignorant that claim makes you sound?

    "DrLoser, how BIND collapses falls under the name of buffer bloat. Internal buffers of BIND bloat to the point that getting from one end of the buffer to the other takes multiple seconds; at this point matters get worse: not all applications are like web browsers that give up; some just keep on retrying."

    “Name buffer bloat?” Descrying a single molecule of relevance from the mole of gibberish is tough, but I’ll give it a go. You are of course referring to zoneinfo files (and the like) laid out as a flat file, as per the good old days.

    Now, actually, Mockapetris’ original scheme (which featured domain-specific compression and a rudimentary tree structure) did a reasonable job of getting around this problem. Each query started at one end of the file and hopped through a series of fseeks to reach the appropriate point.

    Which would constitute a large number of syscalls, I grant you. Possibly of the order of twenty or thirty.

    Still not of the order of seconds, though.

    And besides, as of March 2007 and BIND 9.4, we no longer need flat files. We can basically choose any database we like.

    The issue is therefore moot. Dead. Buried. Gone. And you know why, oiaohm?

    Because only idiots like you believe that the way to improve performance is by an O(1) improvement of, say, 40%.

    Because real Computer Scientists understand that, if you can get an O(1/log n) improvement, using a tree structure, or even better a hash map, all this faffing around with syscalls is completely worthless.
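
    A toy sketch of that complexity argument (nothing to do with BIND's actual data structures; hypothetical names throughout): scanning a flat list is O(n) per lookup, while a hash map is O(1) on average, which dwarfs any constant-factor saving on syscalls.

        # Build a fake "zone" of 100,000 name -> address records.
        records = [(f"host{i}.example", f"192.0.2.{i % 254 + 1}") for i in range(100_000)]

        def lookup_flat(name):
            """Scan the flat list line by line: O(n) per query."""
            for host, addr in records:
                if host == name:
                    return addr
            return None

        by_name = dict(records)        # build the hash map once

        def lookup_hashed(name):
            """One average-O(1) probe per query."""
            return by_name.get(name)

        assert lookup_flat("host99999.example") == lookup_hashed("host99999.example")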

  29. That Exploit Guy says:

    "Or, and I am loathe to bring this possibility up, it might conceivably be that you are completely clueless on the subject."

    That’s a likely possibility, may I add?
    http://mrpogson.com/2014/04/25/gnulinux-gives-real-savings-in-education/#comment-146129

  30. oiaohm says:

    Link1: http://mrpogson.com/2014/06/22/in-defence-of-menus/#comment-161446
    link3: https://kb.isc.org/article/AA-00629/0/Performance%3A-Multi-threaded-I-O.html
    I picked BIND as an example here. Then you lock on and say I have some kind of fetish, without saying that you are going back to the first document.

    Sorry, the performance guys and I both used BIND as the example in this disagreement.

    DrLoser, it's not TCP/UDP that is the problem. It's the seconds to time out, correct.

    Go back and look at the BIND diagram I provided, yes, the old link 3. If you are a computer-science person you should have noticed the flaw. Yes, a context switch might be microseconds. The thing you are not allowing for is that a syscall is two context switches: one from user space to kernel space and one from kernel space back to user space, minimum. This still seems small.

    Problem: this is an example of the straw that broke the camel's back.

    DrLoser, the scheduler gives applications limited time slices to get stuff done. If the worker thread is failing to keep up with the listener threads queuing work up, you are in trouble. It does not matter if it is 1 nanosecond short or 10 microseconds short; short is short. I did not say the router crashes straight away. It can take many hours.

    DrLoser, you have never watched a BIND server collapse. You need a microprocessor that can barely do the job, or one so heavily loaded that only limited CPU time is being given to BIND, so that BIND is getting just under what it needs. On those, a 40 percent gain makes BIND work where it would otherwise progressively fail. If the overload period is short, BIND will recover; if it is long, you start seeing timeouts not exactly caused by network traffic.

    You can also see the same thing happen on servers running BIND at close to 100 percent load.

    DrLoser, how BIND collapses falls under the name of buffer bloat. Internal buffers of BIND bloat to the point that getting from one end of the buffer to the other takes multiple seconds; at this point matters get worse: not all applications are like web browsers that give up; some just keep on retrying. All of this is caused by running out of CPU time on the worker threads to keep up with the listening threads.

    It is a lot simpler to cause on the small CPUs in some routers, and this is why BIND is not used in those locations.

    BIND does not suffer that much from end-to-end latency, as timeouts of seconds are normally enough to deal with all network travel latency. BIND suffers from in-the-middle latency. I will give you that BIND seems like an odd test case, but BIND is one of the more common things to cause strange issues.

    Sorry DrLoser, network latency has very little to do with BIND and DNS servers failing to reply. Network balancing can be an issue where DNS packets are not getting enough priority, but that does not cause a router to lock up. Running out of RAM on a router is death. In fact, increased network latency would save the BIND server on the router and allow it to catch back up.

    This is the problem, DrLoser: 3 percent of your CPU time can kill you just by making your application repeatedly late. These complex multi-threaded beasts really don't take very much to bring down. There are reasons why you want them to complete their processing as much as possible in every allocated slot.

    BIND is many times more feature-complete as a DNS server than the lightweight alternatives. There is interest in reducing BIND's footprint.

  31. DrLoser says:

    "tinydns (also called djbdns) or MaraDNS: once running, these are syscall-zero and store their logs and the like in RAM."

    For all I know they also do hilarious party tricks with helium balloons.

    No, wait, I have the source code in front of me as we speak.

    Sadly, no helium balloons, but, equally sadly, rather a lot of syscalls. recvfrom, for instance, to take a random example.

    It may be that the reason you see no syscalls on your router is that you have bricked it, and it doesn't actually communicate with the outside world.

    Or, and I am loathe to bring this possibility up, it might conceivably be that you are completely clueless on the subject.

    I now look forward to you contorting yourself on the subject of the OS involved.

  32. DrLoser says:

    "DrLoser, I picked the BIND page because it had the cleanest picture to describe how it is hooked up."

    Ah, but you didn’t “pick BIND,” did you, oiaohm? Soares and Stumm picked it, along with Apache and MySQL. Do try to read the things you cite before stealing the credit.

    All I’m saying is that BIND is a peculiar choice of example for the supposed benefits of speeding up a multi-threaded daemon/application. Anyone with half a brain, and even a rudimentary command of the English language, would be able to understand my point.

    "Apache, MySQL, PostgreSQL… are all hooked up to networking in very similar ways to BIND."

    Indeed so. Isn’t the TCP/UDP interface a wonderful thing? And so what?

    "What happens if the DNS server is too slow to respond? The web browser displays that the web site does not exist."

    Browser timeouts are measured in seconds, oiaohm. That’s seconds. Not microseconds. Not milliseconds. Not even tenths of a second. Seconds.

    Not only is a neat trick with “exception-less” multi-threading not going to improve the end-to-end latency of a BIND request/response — an obvious fact that you are too dense to see for yourself: parallelism is increased, but end-to-end there is no difference — but, assuming that it did, we’d be talking about microseconds.

    In your scenario, the DNS lookup would fail because of network latency, which can be measured in seconds when things go topsy. Network latency has bupkis to do with “exception-less” algorithms.

    "Again you prove you don't know this topic."

    It’s certainly beyond a doubt that one of us has proven that, oiaohm.

  33. That Exploit Guy says:

    @ Blithering Idiot

    "So in cases where you do have one person sitting in front of the computer and you are in by TightVNC showing them how to do something, you are now in breach of the one-user limit."

    Even putting aside the fact there is no "them", the EULA clearly states the opposite of what you claim (on Remote Assistance and "similar technologies").

    "That Exploit Guy, funnily enough one of the ways to stabilise Windows is to install VMware."

    This is quite a fascinating piece of creative writing, I must say. If not for the fact that your posts are usually below acceptable readability, I would strongly recommend you pursue a career in science-fiction writing.

  34. oiaohm says:

    That Exploit Guy, funnily enough one of the ways to stabilise Windows is to install VMware. VMware's drivers mess around majorly with the Windows memory-management system. This becomes fairly clear when you run RAMMap, the Sysinternals tool from Microsoft, which allows you to watch the difference between a system with VMware installed and a system without. A lot of issues happen with Windows because memory gets fragmented; since VMware is always after contiguous blocks, its driver does operations that change that result.

    That Exploit Guy, yes there are ways of fixing up Windows to run for years on end. This is why you have some people claiming Windows runs fine and others claiming it sucks. The mixture of software plus quality of hardware is the magic figure that decides whether your Windows computer will run for months or years on end or crash. RAMMap is only one of the tools you need to run to find out whether you have a working combination or one doomed to failure (there are other tools you must run, of course, before you can give a final verdict). RAMMap is in fact really handy on Windows Server instances for finding which driver is causing a system to behave randomly at times. Just as VMware drivers can improve your memory layout, there are other drivers that can make it worse. The worst one I had was a wifi driver setting up a new DMA block in a new memory location every time the wifi connection dropped out. Yes, RAM thrashing in the background caused by stupid drivers does happen. It is possible for a background driver to lock off Windows' means of using its swapfile: the default Windows IDE controller driver (normally replaced when you install your motherboard drivers) getting a strange message back from the motherboard. For some reason you can still write to NTFS, but the swapfile, which may be completely unused, is now not accessible.

    That Exploit Guy, yes there is a list of reasons why Windows crashes, most solvable, but some only in very strange ways, i.e. install X application with Y driver and everything is good.

  35. oiaohm says:

    How come the remote-access clause is unenforceable here? Microsoft provided software and instructions that lead people to break the license. Entrapment by this means equals a defective clause, so it is not enforceable.

  36. oiaohm says:

    That Exploit Guy, the issue is support cases. Yes, Microsoft provides an RDP support interface even though the EULA does not allow it to be used by a non-owner.

    This is Vista Home Premium/Basic:
    REMOTE ACCESS TECHNOLOGIES. You may remotely access and use the software installed on the licensed device from another device to share a session using Remote Assistance or similar technologies. A “session” means the experience of interacting with the software, directly or
    indirectly, through any combination of input, output and display peripherals.

    So in cases where you do have one person sitting in front of the computer and you are in by TightVNC showing them how to do something, you are now in breach of the one-user limit. That Exploit Guy, when you use RDP a dialog appears on screen blocking the view in default mode, so preventing the case of two users looking at the same session. Using third-party VNC, you have to remember: oops.

    But that is not the only issue. Do you own the machine? "You" is directed at the license owner of the machine. So your child remoting by VNC into your computer when you are not home is a breach, because they don't own the computer. A tech worker wants to use remote desktop on your computer to fix something? Guess what, they are in breach again.

    Australian fair trading decided that the remote clause was so far broken it was not enforceable, so Australians only have to obey section 2(b). As long as we don't break that, we are fine.

    Basically the Windows Vista license sucks for remote desktop. Windows 7 Home sucks as well. The only licenses in Windows Vista, 7 and 8 with decent remote-desktop licensing are the enterprise volume-license forms.

    That Exploit Guy, really, what would have been so hard about Microsoft writing "you are allowed one remote desktop connection to the existing session only", not binding it to owning the license and so allowing support people to use this feature? They managed to write that into the EULAs of the enterprise forms.

  37. That Exploit Guy says:

    @RP

    "How about the OS demanding re-re-reboots or crashing?"

    About this…
    You know that Vista install that you claim has totally violated the EULA? It’s been running for 1065 hours (~44 days) since the last time I ran Windows Update (and I have decided to skip this round).
    Also, due to the presence of two VMWare VMs, the memory usage is always near maximum. Hmmm… I wonder when all these fabled “re-re-reboot” and “crashing” will start happening.
    Well, probably never.

  38. That Exploit Guy says:

    @Dougman

    "Bill Gates, Steve Jobs nor did Mark Zuckerberg have degrees in CS, but their collective wealth far exceeds your meager sum by a factor of a billion coding doesn't it?"

    The last guy winning the lottery didn’t even have to work to get his money, but can you in honesty say that his experience also applies to everyone?
    Out of countless college kids making their own websites in their dorm rooms, how many of them do you think have turned out to be Zuckerberg?
    Of course, no one will stop you from daydreaming about accidental success. However, your lack of knowledge will pretty much guarantee that your ability to innovate and earn success will always be zero.

  39. That Exploit Guy says:

    @ A Pesky Interloper

    "MS _does_ restrict the number of users any given PC could simultaneously service and they do require every user to be licensed"

    Again, what does this have to do with running (Tight)VNC on a Vista Home edition? Are you and RP suggesting that the use of VNC will suddenly split a user into two separate persons? Furthermore, unless you are perfectly content with two users wrestling for control over one desktop session, VNC is simply not a way to share a single machine over two users. Not ever. The aforementioned binary hack (Google "concurrent rdp sessions hex"), however, is, and will allow two or more users to connect to and use a single machine under separate desktop sessions, although, of course, this is where the EULA will come into play.
    tl;dr: VNC won't make you two separate persons, idiot.

  40. oiaohm says:

    A Pesky Interloper not all commercial software is created equal on licensing. Not all commercial licensing is per user either.

    Take the Lightworks Pro NLE running on OS X. I can remote-desktop share that to as many users as I like. Lightworks Pro is an installed-instance license and the OS X license does not care how many remote users I have hooked up. All commercial, just not Microsoft.

    Installed-instance licenses are nicer. Counting users means you have to herd your users into doing the right things or you find yourself in breach of license.

    Lightworks is also fun because it has a free version to which you can offload particular workloads from the Pro version. Note: offload particular tasks only. This is how the Lightworks model works: you want more processing power, you pay more.

    There are a lot of commercial licenses like that. Even some of Microsoft's server products are licensed per number of CPU cores/chips with no care about the number of users.

  41. oiaohm says:

    DrLoser, question: did you flunk out of a Computer Science degree after the first year??? In fact, everything you asked for is covered in Certificate IV Information Technology here in Australia, not even worth a BA. Your general web developer is meant to know those here.

    Why do I ask? Everything you just asked for is first-year computer science. Of course, if you only know first year, that would explain why you did not understand the existence of multi-queuing logic.

    Simple fact here: you have run into the brick wall of being out of your depth. Heck, you claimed circular buffers are not used from user space to avoid syscalls, and it is simple to prove otherwise.

  42. oiaohm says:

    DrLoser, I picked the BIND page because it had the cleanest picture to describe how it is hooked up. Apache, MySQL, PostgreSQL… are all hooked up to networking in very similar ways to BIND. What happens if the DNS server is too slow to respond? The web browser displays that the web site does not exist.

    Again you prove you don't know this topic. Most Linux routers don't use BIND; it is too syscall-heavy. tinydns (also called djbdns) or MaraDNS: once running, these are syscall-zero and store their logs and the like in RAM. The change to syscalls would be required so BIND could be used in routers.

    Yes, you can put BIND in an OpenWrt router; at times you will learn very quickly that you should not have. Yes, I can measure the difference; watching the complete router lock up because of BIND was kinda fun. DrLoser, not all machines in your network are big ones; routers can in fact be quite darn small. I was not in any case saying that BIND was unique. Of course, troll, you run away. I said I could bring in the maps of the other ones as well if you wanted them. They are not as well done as BIND's; the one thing about ISC is they have good professional document producers.

  43. DrLoser says:

    "Anyway, I rubbed shoulders with a lot of Computer Science people while waiting on the IBM 360 to pass our jobs through the queues."

    And I rubbed shoulders with Sir Stephen Hawking when I was at Cambridge, Robert. Not literally, I suppose — I just waved at him as we passed on the lawns of Kings College.

    I like to think that this gave me crucial insights into the bleeding edge of particle physics, but then again I may just be fooling myself.

    Just as you are fooling yourself that you have the remotest understanding of Computer Science.

    Quick, quick: what’s the difference between syntax and semantics in a compiler?

    Put the following in order: Big-O, Big-T, Big-Theta and Big-Omega.

    Describe the No Halt problem.

    What’s the difference between Functional Programming, Declarative Programming, and Procedural Programming?

    How would you provide an approximate solution to the Travelling Salesman problem? And how approximate would that solution be?

    And what is Third Normal Form, anyway? And why would anybody care? What are the trade-offs? In what cases are those trade-offs important?

    These are all very, very simple questions in Computer Science, Robert. And you don’t have a clue about any of them, do you?

    I’ll give you this: you have a far better understanding of the mathematical underpinnings than do the rest of your cohorts here.

  44. A Pesky Interloper says:

    “That’s what the EULA requires if you share desktop software over the network. Every client has to pay the licence for the software visited, again.”
    There is no "again"; for the extra user, the license hasn't been paid yet.
    If you have more than one user you're going to need more than one license, Robert; that's the name of the game called "commercial software". None of this has anything to do with a EULA; all commercial licensing requires it.

  45. A Pesky Interloper says:

    “A user needs more CPU power and wants to copy the software to a second PC for parallel processing.”

    This is actually permitted, Robert; you are often expected to remove the software from PC 1 before deploying it to PC 2, but if you’re the only one using it, and if you only use one at a time, you can have such a deployment.

    As far as home use is concerned — nobody cares what you do.

    “Same for trying to figure out what the OS is doing.”

    Huh? Open the freaking event viewer and see for yourself; besides, if you don’t trust your software — don’t use it.

    “How about the OS demanding re-re-reboots or crashing?”

    What does this have to do with licensing? Nothing? You cannot use (imagined) quality defects in an argument about licensing, surely you realize this?

    “Where are the consumers able to choose M$’s OS or GNU/Linux at my local Walmart?”

    Where are they demanding this? Remember, Robert, Linux was available for free in the 90’s, when installing an OS wasn’t considered a total no-no; it was available in stores, and finally it was available as a pre-installed OS. People have been trying Linux out and rejecting it for nearly 20 years now. There is no demand; despite the prolonged propaganda barrage, Linux simply isn’t an answer to anyone’s need.

    “There is no free market in OS in many places.”

    Patently false claim. Right now, despite everything MS has tried, a full 20% (and more) of the market uses Windows XP. If there were any real market control preventing free choice, these users would be forced to migrate to W8. This is now the second time it has been revealed, to everyone who isn’t blind, that MS holds no control over anything and never has.

    “In many cases, consumers have no idea what the OS costs because of bundling, so they aren’t even aware they are buying it.”

    It’s the other way around Robert – users have no idea that PC’s are anything but Windows and aren’t even aware that the actual hardware is separate to the software they are buying. This idea, that it’s the PC, the computer, that is being purchased, is one of the most ridiculous myths around. People buy Windows boxes and Windows screens, that’s what they want, the underlying hardware is insignificant and, without Windows, worthless.

    “The EULA forbids sharing of desktops over the network, same with file-serving, or connecting more than N machines together on the network.”

    That’s not networking Robert, it’s a set of enterprise features and MS wants companies to pay for them; home users don’t matter here, it’s entirely up to them to either get an additional solution, upgrade their SKU, or even pirate it. It’s irrelevant to MS.

    The hysteria about “poor users” totally misses the point — if they have any additional needs, then they have additional options available.

  46. DrLoser says:

    I don’t disagree that caching DNS name translations is a good thing, Robert. I have, after all, written a full implementation of RFC 1035 and Mr Mockapetris’ following schemata.

    But, actually, this has nothing at all to do with some spurious tiny little efficiency gain in BIND, does it? Even a round-trip to the local router is going to splatter all over the microscopic gains that oiaohm, who wouldn’t know how to measure them even if he admitted the possibility, could show.

    Now, just guessing, how many DNS IP translations would the average server need? A hundred thousand?

    That rather large number could comfortably be contained within a hundred megs, and paged in from your beloved disk cache as and when required.

    Additional benefit from a 40% speed-up in BIND?

    Absolutely zero.

  47. Pesky wrote, “Under no condition would you have 5 licenses for the exact same type of software per PC”.

    Yep. That’s what the EULA requires if you share desktop software over the network. Every client has to pay the licence for the software visited, again. That’s why the EULA is the wrong way to do IT. GPL works for people, not against them.

  48. DrLoser wrote, “I guess oiaohm believes that a server (indeed, any computer at all, including smartphones) should spend such a significant amount of time doing a DNS lookup that a 40% performance gain is important.”

    That, of course, depends on the traffic. Some servers have a bunch of gigabit/s NICs working hard. Any of that traffic that is DNS matters. Even for client machines, local caching of DNS is very important for the responsiveness of browsing. A round-trip to the router is much less expensive than a fishing expedition “out there”.
    $ dig www.cbc.ca

    ; <<>> DiG 9.9.5-4-Debian <<>> www.cbc.ca
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1237
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1280
    ;; QUESTION SECTION:
    ;www.cbc.ca.                    IN      A

    ;; ANSWER SECTION:
    www.cbc.ca.                 75121   IN  CNAME  www.cbc.ca.edgesuite.net.
    www.cbc.ca.edgesuite.net.   10308   IN  CNAME  a1849.gc.akamai.net.
    a1849.gc.akamai.net.        20      IN  A      184.50.238.64
    a1849.gc.akamai.net.        20      IN  A      184.50.238.89

    ;; Query time: 81 msec
    ;; SERVER: 192.168.0.1#53(192.168.0.1)
    ;; WHEN: Fri Jun 27 17:07:27 CDT 2014
    ;; MSG SIZE  rcvd: 139

    pogson@beast:~/Documents/linux_share/WW$ dig www.cbc.ca

    ; <<>> DiG 9.9.5-4-Debian <<>> www.cbc.ca
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23771
    ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;www.cbc.ca.                    IN      A

    ;; ANSWER SECTION:
    www.cbc.ca.                 75112   IN  CNAME  www.cbc.ca.edgesuite.net.
    www.cbc.ca.edgesuite.net.   10299   IN  CNAME  a1849.gc.akamai.net.
    a1849.gc.akamai.net.        11      IN  A      184.50.238.64
    a1849.gc.akamai.net.        11      IN  A      184.50.238.89

    ;; Query time: 0 msec
    ;; SERVER: 192.168.0.1#53(192.168.0.1)
    ;; WHEN: Fri Jun 27 17:07:36 CDT 2014
    ;; MSG SIZE  rcvd: 131

    On a busy system with, say, 1000 users, 80 milliseconds becomes 80 seconds, real time, real money. Visiting a hundred sites a day could affect the bottom line. Since there’s no need for this important function of the web to be slow, it should be accelerated as much as possible.
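    As a rough, minimal sketch of the effect (Python; the name is the one from the dig output above, and the timings depend entirely on your resolver and on whatever is already cached at the router, ISP or OS):

    import socket
    import time
    from functools import lru_cache

    # Resolve the same name a few times, once through the system resolver
    # every time and once through a trivial in-process cache. The absolute
    # numbers mean nothing; the gap between the two is the point.
    @lru_cache(maxsize=None)
    def cached_lookup(name):
        return socket.gethostbyname(name)

    for label, lookup in (("system resolver", socket.gethostbyname),
                          ("local cache", cached_lookup)):
        start = time.perf_counter()
        for _ in range(3):
            lookup("www.cbc.ca")
        elapsed_ms = (time.perf_counter() - start) * 1000
        print("%s: 3 lookups in %.1f ms" % (label, elapsed_ms))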

  49. A Pesky Interloper blathered on about stuff, “They put certain restrictions on what can be done, but they do not interfere with users going about their work.”

    Huh? A user needs more CPU power and wants to copy the software to a second PC for parallel processing. You bet the EULA forbids that. Same for trying to figure out what the OS is doing. That’s a no-no. How about the OS demanding re-re-reboots or crashing? That’s a biggy, why I switched to GNU/Linux.

    Also, “the market values Windows at or above the price MS charges for it (otherwise they wouldn’t keep buying it)”.

    Where are the consumers able to choose M$’s OS or GNU/Linux at my local Walmart? There is no free market in OS in many places. In many cases, consumers have no idea what the OS costs because of bundling, so they aren’t even aware they are buying it.

    And, beating a dead horse, “networking works just fine on Home Premium and lower SKUs”.

    Nope. The EULA forbids sharing of desktops over the network, same with file-serving, or connecting more than N machines together on the network.

  50. oiaohm says:

    http://www.apm.com/products/embedded/catalina/
    DrLoser, APM undercut Calxeda across the board. The problem here is that APM has its own fabs. APM happens to be a very large ARM-chip-producing company, and Calxeda had to license some of APM’s tech just to build their chip. APM of course did a joint patent-licensing agreement. From that point on Calxeda was stuffed: APM was going to keep on stealing their lunch. The only way out of this patent death-trap for Calxeda was to fold.

    APM has been up against Intel in the embedded market for a very long time. It’s not as if APM is going to stand still either.

    Cavium is a big enough company to go toe to toe with APM. Calxeda was basically screwed off the start line, DrLoser. Trolls only hold Calxeda up as some kind of grand failure because they don’t understand that Calxeda never stood a chance of surviving; the other ARM vendors were going to kill it. The death of Calxeda has nothing to do with what Intel did.

  51. dougman wrote, “Desktop? Perhaps 80% and declining each year.”

    I doubt that. A lot of thin clients run GNU/Linux even in businesses. They don’t count in the webstats at all because it is the terminal server that gets the page-views. That’s probably 5% M$ doesn’t own right there. MacOS gets about 5% and GNU/Linux etc. get about 10% by my reckoning. That leaves 80% for M$ but that’s eroding rapidly and many countries are ramping up usage of GNU/Linux: India, Russia, Brazil, Uruguay, Ethiopia,… There isn’t any “upside” for M$, just a question of how low and how fast they go. It sure doesn’t help M$ that their latest product is not selling. “7” is on death-watch (2015?). Does anyone have any hope that “9” will sell any better than “8”? Will “7” become the new XP and plug up M$’s market share for years to come? Yes. “7” is getting about 55% of “desktop” according to StatCounter. Each release since XP has peaked at a lower share. It took them, what, 10 years to debug XP sufficiently? Now they are pushing alpha-software everywhere.

  52. A Pesky Interloper says:

    Having a BA, MS or PhD in anything does not imply you are in any way smarter than the rest of the world, nor any more talented. Experience begets and trumps book knowledge any day.

    You know, Dougman, formal education doesn’t just “imply” greater intellect and knowledge, it actually demands it.

    Feel free to seek out excuses for yourself in any way you can though, heaven forfend that we should stand between you and your honor.

  53. DrLoser wrote, “does a single person on this site have a degree in Computer Science”.

    My career spans a lot of the development of Computer Science. ie. The Computer Science department was still young when I was in First Year. I studied Physics and Maths, both of which were the main users of IT in those days. I think Engineering was getting into it as well. The first real computer I touched was in the Engineering department. I saw guys working on what was to become “stealth technology” many years later. They and I were both members of the IEEE. Anyway, I rubbed shoulders with a lot of Computer Science people while waiting on the IBM 360 to pass our jobs through the queues. Those were the days I could run up the stairs to the sixth floor faster than the elevator…

  54. dougman wrote, “Having a BA, MS or PhD in anything does not imply you are in any way smarter than the rest of the world, nor any more talented. Experience begets and trumps book knowledge any day.”

    Experience is serial learning. An MS is parallel learning, standing on the shoulders of giants etc. One needs both.

  55. oiaohm says:

    link1: https://lttng.org/urcu
    link2: http://www.cs.fsu.edu/~baker/devices/lxr/http/source/linux/Documentation/networking/packet_mmap.txt?v=2.6.11.8
    DrLoser, RCU is not purely kernel-space.

    Circular buffers also operate in userspace, as link2 shows. Sorry, but your claim that they sit only between the bottom end of the IP stack and the NIC is wrong for Linux, though it may be true for Windows. Under Linux the circular buffers sit between the NIC, the kernel and userspace, and how much the kernel is in the middle depends on the NIC: advanced NICs can fill, and pick up from, multiple circular buffers on different CPUs directly, with no kernel intervention required. Note that you said Microsoft asynchronous comms require kernel code.

    The Linux equivalent is designed to use hardware acceleration when it can, in some cases completely avoiding kernel code in the packet-processing path. Yes, userspace and the NIC can be the only things touching outgoing packets on Linux.

    You missed the “s” on buffers, DrLoser. Your starvation scenario only happens if there is a single circular ring buffer; in the networking case there are multiple circular ring buffers in parallel.

    Asynchronous I/O on Linux is normally implemented using something like a circular buffer or an RCU structure as the transport, so only the syscalls to set it up have to be performed; after that it is syscall-less. Both are in the read-modify-write class of structures.

    RCU, with its nice distribution-tree structure, becomes important when you are multi-core. RCU and circular buffers don’t require a context switch to take or release a lock on the structures.

    Note I said scheduler context switches could not be removed without going multi-core.

    Each application listener on the Linux network stack gets its own circular ring buffer.

    The networking side in the kernel, which can itself be multi-CPU because the network card is multi-queue, stacks what is required into the applications’ buffers.

    Reducing or removing context switches means an application runs for its full allocated time slice without interruptions.

    A heap-sort-based queue means sorting, and sorting means excess load, particularly when an advanced network card will already sort into multiple circular buffers. Traffic for TCP port 80 can be stacked automatically by the network card into a circular buffer on each CPU with no CPU usage, and likewise in reverse, with the network card picking up from multiple circular buffers. Linux operates more by the mailbox method, following how network hardware is actually implemented.

    High-end network cards are multiplexing units. Asynchronous communication comes from serial transports, and network cards are not serial. 1G+ network cards don’t send just one packet at a time.

    Why is Microsoft asynchronous communication a joke? Because that design only suits serial ports. Multiple circular buffers running on different CPUs match how current-day network cards actually work.

    DrLoser, sorry, but you are making a laughing stock of yourself; you don’t have a clue how the Linux kernel does things.

    DrLoser, I have a very good idea what the name of the structure you are hiding is. But if I mention it, I will have to rip you to bits for being an idiot, referring to a structure that no longer matches real-world hardware and so cannot be effectively accelerated.
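    For anyone who has not met one, here is a minimal single-producer/single-consumer ring buffer sketched in Python. It is a teaching model only, with made-up names: the real NIC/kernel/userspace rings are lock-free shared memory written in C, not Python objects.

    class RingBuffer:
        # Head is written only by the producer and tail only by the
        # consumer, which is what lets each side proceed without taking
        # a lock or context-switching to the other.
        def __init__(self, size=8):
            self.buf = [None] * size
            self.size = size
            self.head = 0   # next slot the producer will fill
            self.tail = 0   # next slot the consumer will drain

        def push(self, item):
            nxt = (self.head + 1) % self.size
            if nxt == self.tail:
                return False          # full: producer backs off, no blocking
            self.buf[self.head] = item
            self.head = nxt
            return True

        def pop(self):
            if self.tail == self.head:
                return None           # empty
            item = self.buf[self.tail]
            self.tail = (self.tail + 1) % self.size
            return item

    ring = RingBuffer()
    for packet in ("pkt-1", "pkt-2", "pkt-3"):
        ring.push(packet)
    item = ring.pop()
    while item is not None:
        print("consumed", item)
        item = ring.pop()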

  56. dougman says:

    BING-A-LING… and what, exactly, are you implying?

    Neither Bill Gates, Steve Jobs nor Mark Zuckerberg had a degree in CS, but their collective wealth far exceeds your meager sum by a factor of a billion, doesn’t it?

    Your preemptive argument does not function as a rational statement; perhaps instead of wasting your time arguing against the merits of Linux, you should get back to coding something that would make you the next millionaire.

    Having a BA, MS or PhD in anything does not imply you are in any way smarter than the rest of the world, nor any more talented. Experience begets and trumps book knowledge any day.

  57. DrLoser says:

    Just out of (very) mild interest, does a single person on this site have a degree in Computer Science?

  58. dougman says:

    Clearly, you’re an idiot. And Win-dohs controls which market?

    Servers? NOPE
    Tablets? NOPE
    Smartphones? NOPE
    Desktop? Perhaps 80% and declining each year.

    When one looks at the entire computer market from up high, Linux is in every segment kicking arse.

    The market is saturated with Linux devices and is growing at an exponential rate; just the other day M$ jumped on the Android bandwagon, as no one wants a Windows phone.

  59. DrLoser says:

    (That would be a “circular queue” and not a “priority queue,” btw, and it sort of gives the game away about my XXXXXX data structure. Which was a heap-sort-based queue.)
    (Even so, I have a hard time visualising how a priority queue, within the kernel, relying on “quiescent” states in the kernel state machine, i.e. wait and sleep and block and so on, could make enough difference here to be worth implementing.)
    (Might be worth a research paper, though, oiaohm. Go to it, my merry little fellow!)

  60. DrLoser says:

    DrLoser, the lock-contention bit is why Hierarchical RCU exists. I did mention a set of structures; a single structure cannot do the job.

    Finally, a data structure (or set of such). I appreciate your belated honesty, oiaohm. It gives me something to go on.

    But not much at all, in this case, since RCU (classic or hierarchical) is purely kernel level. It might help distribute context switches across a set of CPUs, but it doesn’t have a prayer of eliminating the need for a single one of them.

    Circular buffers are already used in the network stack of Linux between userspace and kernel space, so the so-called implied contract is already broken.

    Is it? I was under the impression that, on Linux, circular buffers are used as the queueing discipline between the bottom end of the IP stack, and the NIC. This particular bit being very definitely within the purview of the kernel and not accessible from user space.

    But, let’s assume there is a circular buffer somewhere between user space and the kernel. What a boffo idea!

    Such an implementation would mean that Application A (being, say, a video-streaming app) at priority -10, is starved of network resources by Application B (being, say, a thin client set up by Robert to serve the Pascal compiler for a tic tac toe game over the network) at priority 10.

    Now, the interesting and bloody obvious thing here is that a priority queue in what is effectively user space does not know or care about kernel priorities.

    The only possible consequence of this insane arrangement is that a random collection of applications has been given the opportunity to subvert the kernel scheduler.

    That you bring up Microsoft asynchronous communications means you know nothing about Linux.

    I guess oiaohm doesn’t quite understand what Microsoft asynchronous communications means, does oiaohm? Oiaohm has been given two or three years to think about it since the last time oiaohm made an ill-advised sally into the subject. One of so very, very, many with which he is extraordinarily ill-equipped to deal.

    Clue-bat, oiaohm: Microsoft asynchronous communications are not, in fact, based on a circular queue.

    Also, they are implemented purely in the kernel (with a syscall-based API for the benefit of applications who use them).

  61. A Pesky Interloper says:

    “The EULA and the GPL lay out what can and can’t be done with the software at hand.”

    Er, no. They put certain restrictions on what can be done, but they do not interfere with users going about their work.

    “The EULA restricts usage every which way will give M$ more power to sell licenses, in particular forbidding remote access so that server OS or “pro” OS needs to be bought at higher prices and/or with CALs for each client.”

    Again, no. Microsoft considers their cheaper SKUs as “sold at a discount,” to users who simply don’t need enterprise features; they want to make sure, however, that companies do not abuse this discount by deploying under-priced SKUs — so they restrict certain features from being used in subsidized versions.

    This practice does not impact home users, and MS does not interfere if they pirate a higher tier SKU; it only impacts companies, where MS actually expects that they pay the full price and don’t try to mooch off on subsidized software.

    “The EULA requires users to pay for the privilege of using the full capabilities of hardware, e.g. networking.”

    No, networking works just fine on Home Premium and lower SKUs; don’t play word games Robert.

    “Any way you look at it the price/performance of that other OS stinks in comparison to GNU/Linux.”

    This isn’t true, if it were, Linux would be deployed by both users and companies wanting to get more bang for their buck, people (and companies) keep trying Linux and they keep on using Windows, despite the price.

    Clearly, the market values Windows at or above the price MS charges for it (otherwise they wouldn’t keep buying it).

    You don’t, but that’s your own particular choice.

  62. DrLoser says:

    One thing we can depend on: just because the ex-Calxeda guys announce they are going to make a more powerful chip, so will APM. The X-Gene chip has had many forms.

    Do you have any idea how completely irrelevant and uninteresting this crap is, oiaohm?

    I guess oiaohm doesn’t.

    Really, I personally expect APM to steal the Calxeda guys’ lunch again.

    Considering that Calxeda has all but closed their doors, this lunch-stealing business is going to offer poor returns.

    I guess oiaohm steals his lunch every day from the nearest bankrupt hardware start-up.

    Calxeda spent $96 million on R&D and APM is making all the profit from it. The core of the APM chip is Calxeda tech. DrLoser, it’s not that the ARM tech failed to find a market; it’s that the company that did all the R&D got beaten by a company that did not pay for most of the R&D.

    This is, for once, an interesting point re. ARM in the server domain. ARM is a great company (I’m a Cambridge grad — I appreciate these guys). But its business model is predicated on offering quasi-open source (though licensed and not free) designs: not on expensive fabs.

    As a supplier in the server market, this is a good thing in one way: somebody else does half your R&D. The difficult half, in fact.

    But, as you correctly point out, there is no added value in doing the other half of the R&D … possibly not even the marketing and sales … because somebody else will come along and, er, “steal your lunch.”

    As business models go, this one is distinctly unappealing — unless you invest massively and aim for a monopoly. See Samsung for details on that one.

    A link that doesn’t prove anything but is relevant to this discussion is here.

    It’s heavy on interesting projections such as…

    While competition will likely increase, we believe APM’s time-to-market advantage should yield strong X-Gene sales near term, and even niche share

    … and light on actual financials. Or indeed the size of the current order-book.

    I distrust consultancy projections in general, but I don’t see the following as unreasonable:

    We estimate server processor market revenue growth of roughly 60% from $12B to $19B by 2018 (a 9% CAGR), with ARM unit share approaching 20%.

    Optimistic, but not unreasonable. So, starting from a base of 0% (unreasonably low), we might be seeing $3.8B ARM server sales by 2018. Which is roughly 50% of the entire growth in the segment.

    Nice, but hardly world-shattering. And it assumes that Intel will sit on their hands for four years. Unlikely, I think.

  63. DrLoser says:

    The third link is a nice basic overview of how BIND works these days. I guess Dr Loser still thinks it’s single-threaded.

    I guess oiaohm is the unfortunate victim of a bizarre fetish over “multi-threaded BIND.”

    I guess oiaohm believes that a server (indeed, any computer at all, including smartphones) should spend such a significant amount of time doing a DNS lookup that a 40% performance gain is important.

    I guess oiaohm hasn’t really thought this one through, has oiaohm?

  64. A Pesky Interloper says:

    “Say, I had 5 PCs, each with a different OS and four additional licences, each costing ~$100. The total comes to $2500 just for the OS.”

    Under no condition would you have 5 licenses for the exact same type of software per PC, you wouldn’t have 5 picture editors, you wouldn’t have 5 video editors, you wouldn’t have 5 text editors, and so on and on — and you certainly wouldn’t have 5 operating systems on a PC.

    The proposed problem is entirely fictional, the actual OS costs would be between $150 and $500 depending on the Windows SKU installed by the PC vendor, and this cost would be paid when purchasing the computers, it wouldn’t represent “an additional cost.”

    Just like Joe, you’re mixing different issues in order to falsely present your case.

  65. A Pesky Interloper says:

    Now, now, Joe, MS _does_ restrict the number of users any given PC could simultaneously service and they do require every user to be licensed.

    Exactly how this is actually dealt with may or may not be flexible depending on the client or rather the size of said client, but Pogson remains correct — having twenty users served by the same PC will require 20 licenses.

    The mere fact that RP can’t make a coherent argument does not permit you to twist words around.

  66. That Exploit Guy says:

    @RP

    Very, unless you have fingerprint scanners on all keys and mouse-buttons.

    There you have it, folks! Thanks to Robert Pogson, it is now known that unless you have fingerprint scanners or some such, you, as one person, can also be identified as two persons or three persons or even four persons.
    Eye-opening, huh?

  67. TEG wrote, “I supposed that’s something really hard to prove”.

    Very, unless you have fingerprint scanners on all keys and mouse-buttons. Anyway that’s irrelevant. M$ licences the devices to run the OS and charges people real money to use devices they own.

  68. That Exploit Guy says:

    @RP

    M$ is explicit

    About Microsoft Office. That’s what the paragraph was talking about, as evidenced in the rest of that subsection.

    Care to tell M$ how you are not?

    That I am not two persons? I supposed that’s something really hard to prove… In some bizarro universe.

  69. TEG wrote, “care to show me how I am somehow “two users””.

    Care to tell M$ how you are not? M$ always puts the benefit of a doubt in their favour. That’s why they love software audits so much.

  70. For the information of TEG and others, M$ has a list of OS that can share desktops and Vista Home is not among them. One needs “business” or “ultimate” or “enterprise”. Heck. One needs a lawyer just to monitor each PC with that other OS…

    Further, M$ is explicit:
    “Microsoft licenses its desktop PC applications on a per-device basis, which means that customers must obtain a license for each desktop PC on or from which the product is used or accessed. When Microsoft desktop PC programs are used in a shared environment, a license must be acquired for every device (desktop PC, thin client, etc.) that remotely accesses the desktop PC program installed on the multiuser system. This license must match the suite/edition, components, language, and the version of the copy of the program being accessed.”

    The magnitude of M$’s “tax” is mind-boggling. Assuming I had XP HOME and a few other versions, Vista and a few other versions, “7”, “8” and “8.1”, I would be expected to pay for N licences on M computers just to use them all transparently as I do with GNU/Linux. Consider also that some of the licences may not even be legal for some PCs and the sheer complexity and cost of running a diverse IT system with M$ is astounding. Say, I had 5 PCs, each with a different OS and four additional licences, each costing ~$100. The total comes to $2500 just for the OS. Imagine if I had a bunch of different office suites from them… Clearly, this is not intended to “get value” from their OS or to provide a service. It is to force customers to upgrade many PCs at once. It certainly has nothing to do with copyright or “intellectual property rights”. Their interpretation of the EULA agrees with mine. Pay up or switch to GNU/Linux.

  71. That Exploit Guy says:

    @RP

    Two users using the same PC constitutes two sessions

    Interesting, care to show me how I am somehow “two users”?

  72. TEG wrote, “Irrelevant unless VNC somehow constitutes an extra session, which it doesn’t.”

    Two users using the same PC constitutes two sessions, one more than M$ thinks you paid.

  73. That Exploit Guy says:

    A “session” implies the singular, does it not?

    Irrelevant unless VNC somehow constitutes an extra session, which it doesn’t.

    The former is for diagnostic/teaching purposes, while the latter is general-purpose.

    This is all well and good except it is nowhere to be found in the EULA. I must say it’s quite an impressive piece of creative writing, though.

  74. A Pesky Interloper wrote, “it’s clear that the Genius of Manitoba had _his_ use case in mind, not yours”

    Not really. The EULA and the GPL lay out what can and can’t be done with the software at hand. The GPL allows running the software any way you want. The EULA restricts usage in every way that will give M$ more power to sell licences, in particular forbidding remote access so that a server OS or “pro” OS needs to be bought at higher prices and/or with CALs for each client. The EULA requires users to pay for the privilege of using the full capabilities of hardware, e.g. networking. Folks who don’t know any better either violate the EULA or pay up or don’t network fully. Any way you look at it, the price/performance of that other OS stinks in comparison to GNU/Linux. I, for instance, can network multiple types of sessions multiple ways with GNU/Linux and it’s all permitted. The GPL is a copyright licence. The EULA is an agreement of perpetual bondage to M$.

  75. TEG is becoming a bore: “WHERE in the EULA does it say my use of VNC to remotely access Vista violates it?”

    On M$’s site we find for “Vista Starter” (OEM):
    “a. Licensed Device. You may install one copy of the software on the licensed device. You may use the software on up to two processors on that device at one time. You may not use the software on any other device.
    b. Number of Users. Except as provided in the Remote Access Technologies section, only one user may use the software at a time.”

    Remote Access Technologies: “You may remotely access and use the software installed on the licensed device from another device to share a session using Remote Assistance or similar technologies. A “session” means the experience of interacting with the software, directly or indirectly, through any combination of input, output and display peripherals.”

    “A session” implies the singular, does it not? Does the EULA say “any number of sessions”? Nope. Is “Remote Assistance” similar to VNC? Nope. The former is for diagnostic/teaching purposes, while the latter is general-purpose; i.e. the EULA permits remote assistance but not remote use generally.

  76. A Pesky Interloper says:

    “The question here is simple, RP: WHERE in the EULA does it say my use of VNC to remotely access Vista violates it?”

    Oh, for XXXX’s sake Joe, it’s clear that the Genius of Manitoba had _his_ use case in mind, not yours.

    As it is clear that you presented your own use case as a counter to RP’s use case, despite the obvious fact that the two cases were not comparable — one involved free-to-use software, with no restrictions, which is what RP wants; the other involves heavily restricted software which requires users to pay for intended use, which is what RP wants to _avoid_ like the plague.

    It’s sad that you have to pretend not to understand the issue at hand, just so you could win the argument by semantics (even more so when we consider who you’re arguing with).

  77. That Exploit Guy says:

    That Exploit Guy link1 shows that Exploit guy is out of date.

    The first link concerns running multiple RDP sessions on Windows 7 Ultimate/Professional, which is something you can’t do without a binary hack, and I don’t see what that has to do at all with running VNC on a Home edition of Vista.

    Link2: Point 2 section b Vista EULA strictly states 1 user.

    And I am somehow not one user because… ?

  78. oiaohm says:

    Link1 http://blog.lan-tech.ca/2013/10/31/multiple-rdp-sessions-on-a-pc-legal-or-not/
    Link2 http://download.microsoft.com/documents/useterms/Windows%20Vista_Ultimate_English_36d0fe99-75e4-4875-8153-889cf5105718.pdf

    That Exploit Guy, link1 shows that Exploit Guy is out of date.
    Link2: point 2, section b of the Vista EULA strictly states one user. So if you use VNC to look over the shoulder of someone else using your computer, you are in breach. In fact most remote-desktop-assistance software breaches point 2, section b.

    As Robert said, “possible violation” is correct. Are you sure that when you are using your computer over VNC no one else is looking at the screen? Did you remember to unplug the monitor, or switch the KVM to nothing? Yes, those are the stupid Windows rules.

  79. That Exploit Guy says:

    @RP

    Why have I wasted my bits? Woe is me, my misspent youth!

    So your “misspent youth” involves somehow using the “start” command for printing (which is bizarre not only in that it has absolutely nothing to do with printing but also that it is strictly a Windows 95+ thing).
    It seems my “misspent youth” is going to involve “getting precise details from Robert Pogson”.

    @Dr Loser

    And no, “I saw it with my own eyes” does not qualify as anything remotely peer-reviewed. Or even, to be frank, numerate.

    To be honest, all I am asking for is some good ol’ fashioned hard data, though it seems that RP has been rather short on the delivery. Instead, all I am getting is some bizarre statements that imply the disk cache could somehow read RP’s mind and retain what exactly he wanted to retain.
    Such is the way of this strange, old man.

  80. That Exploit Guy says:

    @RP

    The EULA forbids using the software remotely except by N users maximum

    Again, I asked you for the specifics and you fell short on the delivery. You claim my use of VNC violates the EULA. I ask you for the part of the EULA in question. Then you give me bushwah.
    The question here is simple, RP: WHERE in the EULA does it say my use of VNC to remotely access Vista violates it?
    You may answer the question when you feel less like being a politician or a used car salesman.

  81. oiaohm says:

    http://www.phoronix.com/scan.php?page=news_item&px=MTczMDY
    DrLoser, Apache is very context-switch-sensitive in its current form. The major performance change between 3.15 and 3.16 is the protection for 16-bit mode on a 64-bit kernel. Apache is the program that will gain or lose the most from any alteration to what happens in a syscall. For something people would call only 3 percent of CPU time, it affects about 90 percent of Apache performance.

  82. oiaohm says:

    The other thing about Linux is that, already, not every syscall causes a context switch or even enters kernel mode. The developers of Docker got a very rude shock on this one. Simple task: get the process ID. We will syscall for it. Hang on, we are in a cgroup PID container, yet we are seeing the PID as it appears outside. What had happened was that the libc cache of that syscall’s result was not cgroup-aware and, because of how the memory was used, was shared across the complete system.

    This is the major thing: every point where Linux might do a context switch is being whittled down. 20,000 PID lookups can be insanely fast if the results are already cached in userspace.

    DrLoser, Linux performance behaviour is a lot harder to reason about. It’s not just how many syscalls an application performs; it’s how many of them really require entering kernel mode. Linux behaviour is not like most other operating systems right now. Given that it took under six months for most Linux networking applications to pick up multi-queue support, you could expect lockless syscalls to be picked up just as fast if they reach mainline. A 40 percent increase on a database is a hell of a lot. Open-source integration of new features is way faster than closed source.
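    A toy illustration of that userspace-caching idea and its pitfall, in Python (Unix-only because of os.fork; this is not how libc implements its cache, just the shape of the problem):

    import os

    # Cache the PID once in userspace so later calls avoid the syscall.
    _cached_pid = os.getpid()

    def fast_getpid():
        return _cached_pid        # no syscall on later calls

    # The pitfall: nothing invalidates the cache when the process context
    # changes (fork here; PID namespaces in the container case).
    child = os.fork()
    if child == 0:
        print("child: cached =", fast_getpid(), "real =", os.getpid())
        os._exit(0)
    os.waitpid(child, 0)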

  83. oiaohm says:

    link1: https://www.kernel.org/doc/Documentation/circular-buffers.txt
    link2: http://www.apm.com/news/isc14-x-gene-powers-the-next-generation-of-hpc-servers/
    link3: https://kb.isc.org/article/AA-00629/0/Performance%3A-Multi-threaded-I-O.html

    DrLoser, the lock-contention bit is why Hierarchical RCU exists. I did mention a set of structures; a single structure cannot do the job.

    Circular buffers are already used in the network stack of Linux between userspace and kernel space, so the so-called implied contract is already broken. That you bring up Microsoft asynchronous communications means you know nothing about Linux. You are talking about something that has been in Linux network applications for years, so programs like MySQL, Apache and BIND already exploit it.

    The biggest issue for networking applications is syscalls. Next: how can the scheduler balance the two application types? Easily. You are forgetting that Linux has cgroups, so a simple cgroup flag can inform the scheduler. Linux has more 64-bit applications than Windows. The reality here is that new features deploy faster in open-source code than in closed source because patches can be submitted. There are a lot of applications that can benefit from context-switch-less syscalls.

    Interestingly enough, in other experiments on FlexSC-related ideas there are benefits to pinning particular syscalls to a core. So for non-supporting applications, context-switching straight to the next application and passing the syscall off to a queue on another CPU can in fact be faster than making the syscall and processing it locally. Remember what you said about lock contention: a syscall that needs to take out many locks takes ages to acquire them all from many CPUs, and of course only one instance of that syscall can run at a time anyway because of the locking.

    For some syscalls it may turn out that queueing is in fact the right thing to do to them.

    Link2: DrLoser, Calxeda might have fallen over, but APM, the chip maker, has buyers and is in production and on sale. It’s very much like the old AMD vs Intel, except with ARM chips. One thing we can depend on: just because the ex-Calxeda guys announce they are going to make a more powerful chip, so will APM. The X-Gene chip has had many forms.

    Really, I personally expect APM to steal the Calxeda guys’ lunch again. Calxeda spent $96 million on R&D and APM is making all the profit from it. The core of the APM chip is Calxeda tech. DrLoser, it’s not that the ARM tech failed to find a market; it’s that the company that did all the R&D got beaten by a company that did not pay for most of the R&D.

    DrLoser, with Calxeda you failed to step back and see the big picture. Calxeda failed not because there was no market. APM focused straight away on providing a 64-bit version, and the 64-bit version sells; Calxeda focused on tweaking the heck out of the 32-bit version. So 64-bit ARM is out there in server rooms. So much for your zero-percent claim. My third link was an ARM system in production.

    In all tech development there are winners and there are losers. The last round in server ARM chips goes to APM, with Calxeda the loser. Of course this battle is not over; AMD and other companies are joining the mix.

    The third link is a nice basic overview of how BIND works these days. I guess Dr Loser still thinks it’s single-threaded. In 2012 it went fully multi-threaded, around the same time as most of the other open-source server applications, which lines up with the Linux kernel adding multi-queue support to the network stack. Two threads per CPU core; in other words, “I want to thrash the heck out of your system.” Yes, it uses multiple listeners on a single port, so on the network side there are no locks; the only locks it runs into are around syscalls. I can bring in diagrams for the other two if you like. It does not take much modification to make BIND not require the scheduler and let it run flat out on a single core: the listener is connected to a ring/circular buffer, so that buffer can be filled from a different core. Join worker and listener and you are in business. The way BIND works, you could use pure per-core allocations.
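    A minimal sketch of the “multiple listeners on a single port” arrangement, assuming the SO_REUSEPORT mechanism (Linux 3.9+; not available everywhere). The port number and names are arbitrary; which listener receives a given packet is decided by the kernel’s flow hash, not by anything in userspace:

    import socket

    def make_listener():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("127.0.0.1", 15353))
        s.settimeout(0.2)
        return s

    # Two sockets bound to the same UDP port; the kernel spreads flows
    # across them with no userspace locking.
    listeners = [make_listener(), make_listener()]

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(4):
        tx.sendto(("probe %d" % i).encode(), ("127.0.0.1", 15353))

    for n, s in enumerate(listeners):
        try:
            while True:
                data, _ = s.recvfrom(64)
                print("listener", n, "got", data.decode())
        except socket.timeout:
            pass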

  84. DrLoser wrote, “Not a single one of you has any experience in a commercial environment on a server machine, do you?”, as if it mattered or even was true.

  85. DrLoser says:

    Does anybody here have a deep desire to “improve the performance” of BIND by “up to 105%,” btw?

    Do you people truly grok how important it is for a DNS lookup to take half as much time as previously?

    Well, you all seem to obsess about boot-up times, even though you simultaneously claim that boot-up times don’t matter, because Linux systems don’t need rebooting.

    Just asking, though. Do you know how “BIND” works? How much multi-threading does it require? How many syscalls?

    And the same goes for Apache. And for MySQL.

    Not a single one of you has any experience in a commercial environment on a server machine, do you?

  86. DrLoser says:

    From oiaohm’s cite:

    ThunderX isn’t yet in fab – it’ll be sampling in Q4 2014 – which means Cavium is light on performance specs, but it says its SoCs should compete with, or beat, the power dissipation of Intel-based systems.

    Sounds like an instant world-beater to me. Do these guys have more VC than the $96 million that Calxeda burned through?

    It’s not enough to have a titanium-strength mousetrap with pointy little jaws, oiaohm.

    Somebody has to actually buy the thing.

  87. DrLoser says:

    Computer Science 101 answers, btw:
    1) Under almost all practical use cases, never. That is, after all, what the OS kernel is there for.
    2) This is an inherent problem with the Exceptionless scheme. The scheduler relies on thread/process/whatever priorities that have an inherently comparable measure. One or more T/P/W farting around with a bit of shared memory on the kernel/user-space boundary (which btw will suffer from lock contention and all the other associated issues with shared memory) completely break this implied contract.
    3) Apparently the “pros” are that you can “[improve performance of] Apache by up to 116%, MySQL by to 40%, and BIND by up to 105%.” If carefully crafted. The basic cons are that, as well as my answer (2) above, any other application that wants to take advantage of this particular Magic Pixie Dust has to be a) fully aware of the otherwise unnecessary complexities of the new pseudo-syscall mechanism, and willing to manage them and b) willing to go full-bore asynchronous.

    Point 3 is a bit of a killer for almost anybody. And particularly for Linux applications.

    Remind me again, what’s the scheduled timescale for Linux to acquire the equivalent to Microsoft’s asynchronous comms? Is it there yet?

    And can you remember the name of a single data structure, let alone the obvious one to which I referred?

    Search hard, oiaohm. Search well. And then admit that you are totally clueless.

  88. DrLoser says:

    Ah yes, oiaohm, FlexSC. A paper written by two guys at Toronto University (it has nothing to do with Linux per se: Soares and Stumm, in 2010, used the Linux kernel as a plaything for their theories. And I will admit, the Linux kernel is good for this purpose).

    A couple of things.

    1) We’re four years on, and there’s no evidence of any sort of adoption of “Exception-Less System Calls.” (An inelegant phrase that might even have been coined by you, oiaohm.) Why? Well, here’s the second thing, quoted from the cite:

    With traditional synchronous system calls, invocation occurs by populating predefined registers with system call information and issuing a specific machine instruction that immediately raises an exception. In contrast, to issue an exception-less system call, the user-space threads must find a free entry in the syscall page and populate the entry with the appropriate values using regular store instructions. The user-space thread can then continue executing without interruption. It is the responsibility of the userspace thread to later verify the completion of the system call by reading the status information in the entry.

    I have helpfully highlighted the two important bits of this, oiaohm, to save you the problem of doing so yourself.

    Now, a Computer Science 101 question or two.

    1) When is it appropriate to offload kernel work to each and every user-space application?
    2) Given that most (almost all, in practice) user-space applications are going to be unaware of this mechanism, how does the scheduler balance the set of apps that are aware and the set of apps that are not aware?
    3) What are the pros and cons of suddenly requiring a user-space application, hitherto reliant on simple synchronous system calls, to figure out a way of “later verifying the result of the system call?”
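    To be clear about what the quoted paragraph is describing, here is a toy model of the shared “syscall page” idea in Python, with threads standing in for the user-space and kernel-side parties. It has nothing to do with FlexSC’s actual layout or ABI; every name in it is invented for illustration.

    import threading
    import time

    # A pretend syscall page: fixed entries shared by both sides.
    SYSCALL_PAGE = [{"status": "free", "args": None, "result": None}
                    for _ in range(8)]

    def kernel_side():
        # Polls the page, services submitted entries, marks them done.
        while True:
            for entry in SYSCALL_PAGE:
                if entry["status"] == "submitted":
                    entry["result"] = sum(entry["args"])  # stand-in for real work
                    entry["status"] = "done"
            time.sleep(0.001)

    threading.Thread(target=kernel_side, daemon=True).start()

    # User side: claim a free entry with ordinary stores, keep running,
    # and only later check the status field for completion.
    slot = next(e for e in SYSCALL_PAGE if e["status"] == "free")
    slot["args"] = (2, 3)
    slot["status"] = "submitted"

    while slot["status"] != "done":
        time.sleep(0)            # other useful work would go here
    print("result:", slot["result"])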

    You see, I read these papers with an interested but sceptical eye. I don’t mean to say I disbelieve them — I’m not misusing the term “sceptical” here. I just mean that I need to think about them first.

    You, oiaohm, on the other hand, have consistently proven yourself incapable of any rational thought whatsoever. If Google comes up with Magic Pixie Dust, then, woo-hoo! You believe in Magic Pixie Dust.

    And that data structure thing, coupled with the state machine of a vanilla OS kernel? It exists, oiaohm, it exists. Despite your complete ignorance of OS internals and data structures.

    Nice call on not defining the “data structure” you’re thinking of, btw. No fair of me not to disclose mine, except I had hoped your shining brilliance would have said, No, Dr Loser, data structure XXXXXX is not the droid you are looking for!

    So, then, I’ve just put a stake in the ground. When I tell you my data structure, I guarantee you it will have the properties of XXXXXX. This will help verify that, unlike you, I do not just make garbage up on the spot after other people have led me there.

    Given that, oiaohm, what’s your proposed data structure?

    You don’t have one, do you?

  89. DrLoser says:

    Caches allow folks to access the same stuff at different times or the same time at their convenience, not the convenience of some hard drive.

    I’m not entirely sure how anthropomorphizing a hard drive advances this proposition, Robert. And as TEG was at pains to point out, this is an entirely theoretical and unquantified statement. It doesn’t even feature a single official study — I’m sure there are several out there. And no, “I saw it with my own eyes” does not qualify as anything remotely peer-reviewed. Or even, to be frank, numerate.

    I still remember a computer science teacher advising students to wait to use the desktop after the “clattering” of the hard drive stopped.

    And I still remember my Geography teacher turning red during sex education lessons. Some things stick in your mind. Not all of them have any particular meaning.

    Across the hall, I was telling students as soon as the desktop appeared it was usable.

    You must have had a particularly dim set of students, then. Most kids learn this by trial and error.

    Because, as soon as the desktop appears, it is usable. No, scratch that “error.” Just “trial.” One single trial.

    What a difference an OS makes.

    None at all, in the present case. Do you have any numbers to back this statement up?

    Are you going to continue to ignore polite requests for these numbers?

    I’m willing to provide numbers if you’re not. Specifically, I can look down this thread and count the number of times you have been asked for hard evidence, and the number of times you have evaded that request. I won’t bother right now, because the two cardinalities are equivalent. But I’d be happy to do so if you need “numbers.”

    Can those folks get their life back now?

    It’s just a guess, but most of your students have left school by now. I would imagine that they have consigned your intriguing experiments in the benefits of thin-clients to the historical dustbin of their young minds.

    You’ll be pleased to know that almost all of them will, indeed, have “got their lives back,” and are now using a Windows desktop not mangled by a part-time administrator with a grudge.

  90. TEG wrote, “was your argument that people don’t access the same thing at the same time or that they access the same thing at the same time?”

    Caches allow folks to access the same stuff at different times or the same time at their convenience, not the convenience of some hard drive. I still remember a computer science teacher advising students to wait to use the desktop after the “clattering” of the hard drive stopped. Across the hall, I was telling students as soon as the desktop appeared it was usable. What a difference an OS makes. Over the years I have seen folks wait for XP to stop clattering for up to 5 minutes (most likely 30s) while, on the same hardware using GNU/Linux they were good to go in a few seconds. Can those folks get their life back now?

  91. TEG wrote, “I absolutely do not remember any long pauses when anything was done “in the background” using DOS. In fact, I do not remember there is supposed to be anything “in the background”, either.”

    Printing. See “Running a process in background from Batch file or DOS”.

    Gosh. I feel dirty just having dredged that stuff up from my memory. Why have I wasted my bits? Woe is me, my misspent youth!

  92. TEG wrote, “Vista is apparently part of the Server series now.”

    That’s right. It’s not. The EULA forbids using the software remotely except by N users maximum. That’s why entrepreneurs who wanted to do that were forced to buy licences.

  93. That Exploit Guy says:

    @RP

    Joachim Kempin

    This is all well and good except I have no interest or obligation to give a toss about what an (ex-)Microsoft SVP thinks of “office tasks” (whatever that means) or anything else. It’s hard data or shut the hell up.

    saving at least N-1 seeks per file cached

    That’s true only provided the caching algorithm hasn’t removed the relevant pages before the Nth user.
    Excuse my short memory, but was your argument that people don’t access the same thing at the same time or that they access the same thing at the same time?

    In use, I often keep the terminal server up 24×7 so the login stuff and LibreOffice and FireFox are cached indefinitely

    That’s totally how a disk cache works. Totally.

    I still remember the long pauses when anything was done in the background

    Strange, I absolutely do not remember any long pauses when anything was done “in the background” using DOS. In fact, I do not remember there is supposed to be anything “in the background”, either.

    Yes, and a potential violation of the EULA.

    Sure, which part?

    Got CALs?

    CAL? So Vista is apparently part of the Server series now.

  94. That Exploit Guy says:

    @Gibbering Fruit Cake

    That Exploit Guy check my links not one is in fact irrelevant.

    But not as relevant as this one.

    DrLoser has put up no links that back his statements at all.

    Funny. Since when does one need a link to mock someone’s inability to take a hint and shut it?
    Also, since when have you cared about links, Mr. I-Have-No-Link-and-I-Must-Scream?

    Sorry DrLoser has been commenting on topics he has absolutely no clue on.

    I am sorry that you are still somehow under the impression that you are being taken seriously. I am sorry that RP has perhaps consciously reinforced that impression and used you to bury this comment section in piles of your gibberish in order to save himself from a potentially embarrassing situation (of declaring himself the inventor of the Start menu). I am sorry there is not much I can do to make you realise that, no matter how much techno-babble you spew, you still won’t be the One True Expert that you always desperately want to convince everyone you are.
    I am sorry, but I am only human.

  95. TEG wrote, “I can VNC to a Vista Home install that I have running 2 VMs in less than 1s. See? Magic!”

    Yes, and a potential violation of the EULA. Got CALs?

  96. TEG wrote, ” if you had any hard data to support your argument, you wouldn’t need to resort to weasel words such as “well ahead”.”

    Joachim Kempin (1997), you know, the ex-SVP: “current PC technology is totally sufficient for most office tasks and consumer desires and that any performance bottleneck is not in today’s PCs but in today’s COM pipes.”

    TEG also wrote, “No, it’s called a “disk cache”. Even DOS has one.”

    X allows N simultaneous users to all use the same disc cache, saving at least N-1 seeks per file cached. In use, I often keep the terminal server up 24×7 so the login stuff and LibreOffice and FireFox are cached indefinitely, long before the first user gets into the room, so even the first guy gets his socks knocked off. DOS was single-user in the extreme. I still remember the long pauses when anything was done in the background and yes, I have run DOS on the same hardware as GNU/Linux. No comparison in performance.

  97. oiaohm says:

    https://github.com/hoytech/vmtouch/blob/master/vmtouch.pod

    That Exploit Guy, in a classroom a terminal server performs better than in a business environment. In a classroom, students are more likely to be running exactly the same applications at the same time, so they gain more from the disc cache.

    Of course, in a business environment you see terminal servers dedicated to particular applications when licensing cost is not an issue. Same reason: cache alignment.

    There is a question here, That Exploit Guy: can you control how Linux (and a lot of other POSIX systems) handles its disc cache? The answer is yes. This is different from Windows. So for terminal servers you can prevent items from being flushed.

    Basically, vmtouch allows you to choose what is locked in RAM.

    The disc cache is a controllable item. Do you have to run an application to have its files cached in RAM? The answer is no; tools like vmtouch will pull the application’s files into RAM. This is just terminal-services optimisation: dedicating RAM to particular applications so they will always start fast.

    If the files for the applications you have set up a Linux terminal-server solution for are being dropped out of the disc cache, that is a configuration error. Yes, allocate a respectable block of RAM to a terminal server, because you expect a portion of that RAM to be permanently allocated.

    That Exploit Guy, apparently you have never set up Linux terminal services properly, if at all.
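    A crude stand-in for vmtouch’s warm-up behaviour, in Python: read the files so their pages land in the kernel’s page cache before the users arrive. (Real vmtouch can also lock pages with mlock, which plain reads cannot; the script below only warms.)

    import os
    import sys

    def warm(path, block=1 << 20):
        # Read the whole file in 1 MiB chunks purely for the side effect
        # of pulling its pages into the page cache.
        try:
            with open(path, "rb") as f:
                while f.read(block):
                    pass
        except OSError:
            pass

    for target in sys.argv[1:]:
        if os.path.isdir(target):
            for root, _dirs, files in os.walk(target):
                for name in files:
                    warm(os.path.join(root, name))
        else:
            warm(target)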

  98. oiaohm says:

    The FlexSC PDF has a very strange name: Soares.pdf is the PDF describing FlexSC.

    That Exploit Guy, check my links: not one is in fact irrelevant. Yes, one is very strangely named. DrLoser has put up no links at all that back his statements. DrLoser just makes things up. The idea about the Start menu was wrong. The idea about the reduction of context switches is wrong. His figure for how long a context switch takes is wrong. His figure for the volume a system will be producing is wrong.

    Sorry DrLoser has been commenting on topics he has absolutely no clue on.

  99. That Exploit Guy says:

    @ That Guy from the Looney Tunes

    That Exploit Guy there are a long list of minor faults around context switching.

    Well, then, can you explain this?
    Or this.
    See, I, too, am perfectly capable of coming up with a list of completely bizarre, irrelevant links to support my argument!

  100. That Exploit Guy says:

    @RP

    In the days of PII or PIII, the CPU of a COTS PC could keep well ahead of most users’ needs

    I had been there long before Pentium was even a thing. Again, if you had any hard data to support your argument, you wouldn’t need to resort to weasel words such as “well ahead”.

    Today we have ordinary PCs with 4 or 6 cores running at more than double the clockspeed and buses 50X faster

    We now also have desktop applications implemented to take advantage of such hardware. So what?

    On most GNU/Linux systems I have seen, a user takes up to 10s to login from booting.

    DOS can go from POST to a prompt in less than three seconds with a minimal Autoexec.bat and Config.sys. So what?
    By the way, all of the Windows systems I own and maintain (currently including one ~4-year-old Windows 7 install and one ~1-year-old Windows 8.1 install upgraded all the way from Windows 7) go from having the power button pressed to showing the login screen in less than 30s. Could any of the Windows machines under your care do the same?

    With a terminal server having the necessary files cached that time is less than 2s.
    I can VNC to a Vista Home install that I have running 2 VMs in less than 1s. See? Magic!

    We could have 5 users login at the same time and they would scarcely notice the slowdown.

    See, I have a configuration meant for 25 users. I put only 5 there and no one notices any slowdown.
    I wonder if RP should consider a career selling bridges.

    I’ve often set things up so that the login is done before the students even arrive.

    Bridges it is!

    It’s nothing about the CPU but the fact that the files are cached on the server and a ton of seeks/reads are skipped.

    Impressive, you have discovered… No, “invented” the disk cache. Now, of course, it follows that every student would also produce the same work and save them in the exact same file.
    It also follows that everyone adopting a thin-client configuration will use it exactly the way the students did in that same class at that same school that hired RP the physics teacher.
    Totally logical, right?

    Students and teachers really love a usable desktop appearing in a few seconds when they are used to waiting 30s or so on an XP-machine.

    Let’s ignore what’s actually on the XP machines for the sake of the argument (because we all know how evasive RP is whenever the subject is raised).
    Disk cache contains only what has already been read from the disk and what the caching algorithm considers worth keeping. This means that if you are the first to access the portion of the disk in question, or you access it after it has been evicted from the cache, you will reap absolutely none of the benefit RP claims the disk cache will give you. After all, it's just a disk cache – not unicorn magic.
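
    A quick way to see the disk cache at work, for anyone curious (the path is a placeholder, and the numbers will vary with hardware and with whatever is already cached):

        # Time two reads of the same file: the second is normally served from
        # the page cache and is much faster than a genuinely cold read.
        import time

        PATH = "/usr/bin/python3"   # placeholder: any sizeable installed file

        def timed_read(path):
            start = time.perf_counter()
            with open(path, "rb") as f:
                f.read()
            return time.perf_counter() - start

        first = timed_read(PATH)    # may hit the disk, unless already cached
        second = timed_read(PATH)   # almost certainly served from the cache
        print(f"first read: {first:.4f}s, second read: {second:.4f}s")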

    All of this increased performance is due to the much-maligned but effective X window system.

    No, it’s called a “disk cache”. Even DOS has one.
    Ever heard of SmartDrv.exe?

  101. oiaohm says:

    Link1: http://www.theregister.co.uk/2014/06/04/cavium_sets_sights_on_intel_with_48core_soc/
    Link2: http://www.cavium.com/newsevents_Cavium_Introduces_ThunderX_A_2.5_GHz_48_Core_Family_of_Workload_Optimized_Processors_for_Next_Generation_Data_Center_and_Cloud_Applications.html
    Link3: http://www.mitac.com/Business/7-Star.html
    DrLoser Calxeda might be gone but the tech behind it is not.

    Sorry, DrLoser is engaged in wishful thinking that the collapse of Calxeda meant the end of ARM chips for the server room. 32-bit did not work against 64-bit. The old Calxeda parts had only four 32-bit cores per chip; the new ones have 8 to 16 64-bit cores per chip.

    Link 3 is the really interesting one: the X-Gene chip design forked off the Calxeda design in 2012. Yes, there are viable 64-bit ARM servers out there in production. They got to 64-bit first by recycling the old design.

    Applied Micro Circuits is what really drove Calxeda under. Yes, another competing ARM chip maker.

    Again, another baseless claim of zero percent market penetration. OK, it is most likely a fraction of a percent at this stage, but it is not zero.

  102. oiaohm says:

    espfix fixes a context-switching error.

    espfix and espfix64 specifically fix 16-to-32-bit and 16-to-64-bit switching.

    Let's take the Microsoft solution to the espfix and espfix64 issue: deprecate the 32-bit versions and remove support for 16-bit code from the 64-bit version. Basically, don't fix the fault at all. Linux does fix it, but it now takes a hit per context-switch operation, checking whether the context switch involved 16-bit code and applying corrections as required.

    There is another issue affecting 32-bit to 64-bit context switches as well. If Microsoft follows its past process, the next step should be deprecating 32-bit binaries; or maybe they will fix this one and take the speed hit. Switching across segment types on x86 is not nice. The problem is that the kernel has to check manually whether a context switch was 16-to-64, 32-to-64 or 64-to-64, because on x86, unlike ARM or POWER, these are not distinct crossing operations as far as the kernel is concerned. On ARM and POWER there are independent code paths for those crossings, so only the defective case takes a performance hit instead of everything, as on x86.

    DrLoser, at this point you want to choke an x86 processor. OS context switching has got slower in recent years on x86 Linux due to bugs that were found. Microsoft has just been deprecating stuff left, right and centre.

    The responses to context flaw issues have been different.

    DrLoser, without recent events your 10 μs figure was about correct. Recent events have come at some serious cost.

  103. oiaohm says:

    https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Soares.pdf
    DrLoser, sorry, I said context-switch-requiring operations queue up. I did not say queue up generic context switches; read my English much more closely.

    One kind of context-switch-requiring operation is the syscall, so performing multiple syscalls in a single syscall reduces the number of context-switch operations required. The CPU overhead of writing a data structure like this is negligible. And it is not the structure you are thinking of either, DrLoser.

    There are particular actions of an application that can be queued up, resulting in reduced context switching. If you want an example of this, look up FlexSC. You are an idiot: queuing up context-switch-requiring operations like syscalls results in huge savings. FlexSC gets stranger still in that the userspace code can run on one CPU core while the syscalls are serviced on a different core, avoiding the context switch completely. Once you remove the blocking effect from the operation that causes a context switch, how it can be implemented changes, and that implementation change has another set of huge performance effects.
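
    FlexSC itself batches syscalls onto another core; a much simpler and generally available illustration of the same principle (group the work so fewer user/kernel crossings are needed) is vectored I/O, sketched here with Python's os.writev:

        # Three buffers, one write syscall instead of three separate write()
        # calls: the work is grouped so fewer kernel crossings are required.
        import os

        fd = os.open("/tmp/batched-demo.txt",
                     os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        buffers = [b"first record\n", b"second record\n", b"third record\n"]

        written = os.writev(fd, buffers)   # one syscall for all three buffers
        os.close(fd)
        print(f"wrote {written} bytes with a single writev() call")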

    There are other context-switch trigger points that have been reduced.

    Sorry, there are ways to queue up context-switch-requiring operations that (1) remove the context switch completely and (2) don't use any more CPU time than setting up to do the context switch in the first place. Context-switch-triggering code in most OSs is designed around the idea of a single core, not around exploiting multiple cores to avoid the switch.

    If you go back a few years, 10 μs was the figure for context-switch speed, yet 30 μs is current.

    You would have been thinking of the wrong structure completely. DrLoser, a state machine is not the solution for reducing context switches. The structure that reduces context switches, in places to non-existent, is hierarchical RCU, and it is not implemented in every OS.

    DrLoser, what is special about Linux is that when you attempt to exploit a context-switch flaw, it fails. On Windows, on the other hand, such exploits work, the same as with many other OSs.

    Sorry, DrLoser, you are just making a bigger and bigger goose of yourself. Syscalls are not the only place in Linux where queuing is being researched to reduce context switches. It is in fact possible that one day an OS will exist that, once running, doesn't context switch at all and simply dedicates cores to particular operations.

    Really, the only context switches you cannot get rid of are the scheduler's, and even those can be avoided when possible.

  104. DrLoser says:

    I’m even going to make it ridiculously simple for you, oiaohm. The data structure in question (and the mechanism I outlined, in block caps, in my previous post) depends upon the state machine of the kernel.

    Any kernel. The Linux kernel, the NT kernel, the Solaris kernel, even an Atari kernel.

    Any kernel.

  105. DrLoser says:

    Actually, with my “Degree in Computer Science” hat on (it’s a Beany Hat with a “thirty years of real-life experience in Very Large IT Organisations” propeller on the top), I can, in fact, think of one good way for a kernel to “queue up” requirements for context switches. There’s no possible way to “queue up context switches” that doesn’t waste CPU cycles.

    But there is a blatantly obvious way to queue up requirements.

    There’s even an obvious and simple and very well-known data structure that supports such a mechanism.

    Sadly, according to the non-existent evidence that oiaohm has failed to reproduce (what with his hat being made out of purest outback tin-foil), we cannot conclude, one way or another, that the Linux kernel “researchers” have, or have not, noticed this fairly obvious feature.

    But, hey, I’m a reasonable guy, oiaohm.

    Name that data structure!

  106. DrLoser says:

    DrLoser, it's more that the average Linux context switch is about 30 μs on x86.

    Which is not far from what I said, oiaohm. I took a two year old figure (God only knows where you dragged yours up from) and extrapolated to an optimistic 10 μs.

    Why are you complaining?

    Unfortunately this is not exactly all Linux fault.

    1) I never claimed it was.
    2) I have never believed that it is.
    3) The Linux kernel was irrelevant to the rough calculation at issue, made for Robert’s benefit and produced to counter his rather shaky observation.
    4) So what, oiaohm, so what?

    Due to CPU design errors in x86 you have to perform particular operations to protect against context-switch security flaws; those are not free and pretty much lock in the speed of a context switch.

    SECURITY FLAWS? Are you insane, oiaohm? No, scratch that. You are insane.

    This is why Linux kernel developers have been researching queuing up context-switch-requiring operations.

    With zero evidence, as usual. And even if true, you are aware that there are quite a lot of other OS and hardware researchers around the world, aren’t you? What’s so special about the Linux ones, except that they care and share, although obviously in this case they haven’t cared and shared enough?

    I hesitate to point this out, oiaohm, but there is an unavoidable consequence of “queuing up context switches.”

    Go on. Guess. You’ll never guess. Mostly because you’re clueless.

    Yes, an ARM CPU can do a short context-switch time.

    No, ARM was never mentioned. It’s irrelevant in this, ahem, “context.”

    However, should we wish to broaden the topic out, this might be relevant (although not at the insane level of one hundred thousand context switches per second), given that there is a viable multi-core ARM commercial server solution with more than 0% market penetration.

    Sadly, in many ways, post the demise of Calxeda

    … there ain’t.

    So let’s revisit that niche corner subject when there is one, shall we? Because at that point we will, as TEG points out, be able to measure the numbers.

    Not something we will be able to do before.

  107. DrLoser wrote, ” You’d be lethal with something like a commercial server in your hands.”

    Thank you for your kind words. I've often used a COTS PC as a terminal server or built my own, but I have also used some IBM x-series servers with Xeons and SCSI RAID. I loved the seek-times on those servers. They actually clicked during a seek, and I could hear how few seeks 20 students needed with all kinds of files cached in RAM. That system was maxed-out in RAM, however, with only 1 GB for 20 students and Debian GNU/Linux. Students loved it because their thin clients were the snappiest seats in the whole school. We used old P4-ish machines and some 400MHz Via machines. That server was about 8 years old and still kicked butt.

  108. DrLoser wrote, “all you’re doing is to thrash one set of registers and associated data (including cache lines) out, and another one in. You’re not actually performing any real work.”

    Nope. I have watched GNU/Linux terminal servers ramp up context-switches sharply with big I/O going on while users were scarcely affected. One has to watch the wait-queue when considering the context-switch rate. Typically users are scarcely affected if the depth of the wait queue changes from ~3 to ~20 while context-switches are under 100K. The reason is that users can get millions of instructions executed during their time-slice, and humans scarcely notice an increase in response-time under 0.1s. One can get into trouble, of course, by increasing the load to the point where network, I/O or interrupts max out. Modern hardware with a good OS can do a lot. One of the demonstrations I used to do with my students was to take the oldest PCs we could find, typically P1/2/3, and test throughput for maths or I/O. Once, I set a P3 up as a terminal server with just 500MB RAM. It had no problem pleasing 7 students at once, each getting the performance of the whole PC as far as they could tell. Of course, it was pathetic for storage because of the old hard drive and interface, but just interacting with an application by typing or point/click/gawking was fine. Many schools have enjoyed the new life old PCs get with this trick, using a fine new machine as the terminal server.
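
    For anyone who wants to watch the same numbers, a rough sketch that samples the kernel's context-switch counter and the run queue on Linux (field layout as documented for /proc/stat and /proc/loadavg):

        # Sample the system-wide context-switch counter over one second and
        # report the current number of runnable tasks (the "wait queue").
        import time

        def context_switches():
            with open("/proc/stat") as f:
                for line in f:
                    if line.startswith("ctxt "):
                        return int(line.split()[1])
            return 0

        def runnable_tasks():
            # the 4th field of /proc/loadavg looks like "3/1200": runnable/total
            with open("/proc/loadavg") as f:
                return int(f.read().split()[3].split("/")[0])

        before = context_switches()
        time.sleep(1)
        after = context_switches()
        print(f"context switches/s: {after - before}, runnable tasks: {runnable_tasks()}")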

    PS DrLoser wrote, “let’s take a realistic estimate of a server context switch at 10μs”.

    That's plain silly. Even a regular PC has a clockspeed of 2.5 GHz with 4 to 6 cores and memory bandwidths of many GB/s. 2 × 2.5 GHz × 10⁻⁵ s is 5 × 10⁴ operations, far more than is necessary to swap registers. You should also consider that only a few users/processes will be giving a heavy load in most cases. These just replace the idle-loop. Unless some path is plugged, CPU load is rarely a problem. If it were, one could just add processors. I once ran a lab from a single x-series server with 2 CPU sockets. I had another but chose to move all the RAM and storage to one machine to maximize performance (less OS overhead, more RAM per user, more read/write heads per user). I did not bother moving the second CPU. There was no need.

  109. DrLoser wrote, “Oh, yes. I forgot network latency.
    And disk latency.
    Neither of these matter at all when you’re using a server, do they?”

    Of course latency matters, but you have to take care about which latencies matter. A GNU/Linux terminal server may actually use less network traffic if the storage is on the server instead of on a storage server, so network latency depends on the path. I usually set up school labs with 100 Mbit/s between the student and the switch and gigabit/s between the server and the switch. From time to time I've bonded connections to double those speeds. That allows a whole monitor's screen to be redrawn a few times a second, fast enough to keep up with typing but not good enough for smooth full-screen video. Flash on YouTube is OK for a few students, though. For all the point/click/gawk operations, this speed is just fine.

    I usually deal with disk latency by making sure I have enough RAM to cache the common files. That means it’s only stuff like file/save/open that matters to the users and the disks are quiet so they usually have the disk to themselves when they seek. For more users I use RAID 1 which allows multiple simultaneous seeks but is faster for reading than writing. Students typically use small files so there’s no latency problem at all. I’ve often distributed gigabyte files to terminal servers at full speed with no impact on users at all.

    So, latency is a figment of DrLoser’s imagination. There is one exception, booting. If the clients are diskless and a few MB have to be transferred to the clients all at once, it does increase booting time. I’ve dealt with that in the past by having a button for the teacher to boot all the PCs or do it on schedule with Wake-on-LAN. Thus, the first person in the room starts the process and by the time the last student enters all the PCs will be booted and ready. An organization with many users could easily implement booting a few minutes before the start of a shift. I even shut down labs with SSH “shutdown -h 1” delivered to the clients by a script either on schedule or with a click. Of course, “suspend to RAM” would likely do the job nicely. I never used that in schools but it’s working well here on several types of PCs.
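
    A minimal sketch of the "boot the lab before class" trick, assuming hypothetical client MAC addresses; the shutdown half is just ssh running "shutdown -h 1" on each client, as described above:

        # Send a Wake-on-LAN magic packet (6 bytes of 0xFF followed by the MAC
        # repeated 16 times) to each client so the whole lab boots on schedule.
        import socket

        CLIENT_MACS = ["00:11:22:33:44:55", "00:11:22:33:44:56"]   # hypothetical

        def wake(mac, broadcast="255.255.255.255", port=9):
            payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                s.sendto(payload, (broadcast, port))

        for mac in CLIENT_MACS:
            wake(mac)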

  110. oiaohm wrote, "The memory a context switch consumes is the store of all the CPU registers, less than 4 KB. So your wild guesses are, as usual, insanely wild, DrLoser. 1000 threads equals at worst 4 MB of RAM."

    For hardware interrupts and such, that is about right. Unfortunately, the context switches between/among users and their applications require a lot of the caches to be reloaded. There are MB of data that need to be sloshing around. Terminal servers benefit from having many cores with large caches, but the performance of GNU/Linux terminal servers is still great for a dozen or more users per core. Largo, FL was running 400 users of an office suite or browsers on sixteen cores. Their servers cost ~$40K. Mine cost as little as $2K and could easily run 30 users on 2 cores.

  111. TEG wrote, “it’s not like there will ever be users all trying to log in or execute a program or doing something CPU-demanding at the same time. No siree!”

    In the days of PII or PIII, the CPU of a COTS PC could keep well ahead of most users’ needs. Today we have ordinary PCs with 4 or 6 cores running at more than double the clockspeed and buses 50X faster (wider and with higher clockspeeds). On most GNU/Linux systems I have seen, a user takes up to 10s to login from booting. With a terminal server having the necessary files cached that time is less than 2s. We could have 5 users login at the same time and they would scarcely notice the slowdown. That doesn’t happen in school labs because students wander in at human speed through a single door. I’ve often set things up so that the login is done before the students even arrive. They sit down and get to work with a userid according to the machine at which they sit. Then there are applications. I often see windows opening five times faster on a terminal server than the usual thick client. It’s nothing about the CPU but the fact that the files are cached on the server and a ton of seeks/reads are skipped. Students and teachers really love a usable desktop appearing in a few seconds when they are used to waiting 30s or so on an XP-machine. All of this increased performance is due to the much-maligned but effective X window system.

  112. oiaohm says:

    Link1: https://lkml.org/lkml/2014/4/24/10
    Link2: http://standards.freedesktop.org/menu-spec/latest/ar01s02.html
    Link3: https://wiki.archlinux.org/index.php/Xdg-menu

    That Exploit Guy, there is a long list of minor faults around context switching. A recent addition is espfix. It's not just me, sorry. Each extra validation instruction costs time. The problem here for Intel and AMD is that fixing these errors at the silicon level could in fact break other items, like Windows drivers that expect the malfunction.

    By the way, That Exploit Guy, installing CDE and trying it out is not replicating how VUE was used. Menu-generation scripts were common in the X11 world before the 1990s. The reason X11 applications started with x, KDE applications with k and GNOME applications with g all relates to how a menu-generation script could detect which applications were graphical. Yes, the CDE source contains examples of generation scripts that the final installed CDE can be missing.

    With CDE or VUE plus a menu-generation script you will see something like the Start menu; Xfce includes that built in. HP made a menu-generation script for VUE that other CDE suppliers, like the Sun that became Oracle, did not install by default. Sorry, when it comes to history I think you should stay well clear, That Exploit Guy. Understanding when something appeared in history is not as simple as installing application X and running it. It's installing application X and using it the way it was used then; only then do you see whether it existed back then.

    Even though the Linux desktop menu might look like the Microsoft Start menu, underneath it is something vastly different. Notice that Link 2 sounds strange: the folders listed there don't have subfolders. Everything that appears to be a subfolder in a Linux desktop menu is in fact a Categories entry in the .desktop file (see the sketch below). This is the heritage from the configuration files of X11 window managers like VUE. Before the release of Windows XP there were X11 window managers supporting multiple files for menus.
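
    A rough sketch of that point: the "folders" a user sees are just Categories values pulled out of .desktop files, not a directory tree. The parsing here is deliberately simplified and ignores the full menu-spec merge rules:

        # Group installed applications by the Categories key of their .desktop
        # files, the way a freedesktop.org-style menu does.
        import configparser
        from collections import defaultdict
        from pathlib import Path

        menu = defaultdict(list)
        for desktop_file in Path("/usr/share/applications").glob("*.desktop"):
            parser = configparser.ConfigParser(interpolation=None, strict=False)
            try:
                parser.read(desktop_file, encoding="utf-8")
                entry = parser["Desktop Entry"]
            except (configparser.Error, KeyError):
                continue
            for category in entry.get("Categories", "Other").rstrip(";").split(";"):
                menu[category].append(entry.get("Name", desktop_file.stem))

        for category in sorted(menu):
            print(category, "->", ", ".join(sorted(menu[category])[:5]))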

    Yes, the Start-menu-like thing in VUE magically disappears from the default install around 1993 with the release of CDE; the XFCE menu is a historic throwback. Why did it disappear? Because the Unix world at that time failed to agree on how the generation script should work. It takes until the year 2000 for agreement to be achieved, mostly by Red Hat saying, forget the POSIX body, this is how it will work.

    The fingerprints of the early script-generated start menus of the X11 world are all over how the current Linux Freedesktop Desktop Menu functions today, even its highly categorised look. Really, if the agreement to unify menu generation had happened at the CDE release, Unix/Linux on the desktop would have been more competitive.

    Yes, a lot of problems in the Unix world came from infighting, and those fights have affected Linux.

    DrLoser, yes, both look similar, but the unique differences in the Freedesktop Desktop Menu should have been clear warnings that it was not based on what Microsoft did. The Freedesktop Desktop Menu is based on older abominations.

    Link 3: yes, some current-day X11 window managers still don't use the freedesktop desktop menu directly but depend on generators to spit out their menu data. The historic way is still here today.

  113. That Exploit Guy says:

    Due to CPU design errors in x86 you have to perform particular operations to protect against context-switch security flaws

    Yes, that’s true. Believe me – I am oiaohm and my teeth are very, very shiny.

  114. That Exploit Guy says:

    @RP

    TEG wrote on and on about nothing in particular

    Particular questions have been raised. Particular flaws in your reasoning have also been pointed out. If your conclusion is somehow “nothing in particular”, then that’s your choice, but you aren’t fooling anyone except yourself.

    Yes, indeed. That might measure swap or something

    I have said before that you have no idea what to measure or how to measure it, and this above statement just proves my point.

    There were 7 users actually browsing for that image.
    Again, having browser windows opened showing a static image is no way to test the system, regardless of whether there are 7 of them or 25 of them.

    Still users are random agents not all demanding resources from the server at the same time.

    Of course not, it’s not like there will ever be users all trying to log in or execute a program or doing something CPU-demanding at the same time. No siree!

    My biggest installation of terminal servers would get about 1% utilization per user.

    This makes me wonder if you understand what “CPU utilisation” is at all.

  115. oiaohm says:

    DrLoser, it's more that the average Linux context switch is about 30 μs on x86. Unfortunately this is not entirely Linux's fault. Due to CPU design errors in x86 you have to perform particular operations to protect against context-switch security flaws; those are not free and pretty much lock in the speed of a context switch. This is why Linux kernel developers have been researching queuing up context-switch-requiring operations. Yes, an ARM CPU can do a short context-switch time.

    Also, just to be weird, the 30 μs on x86 is not exactly that: with hyper-threading, an x86 CPU core does two independent context switches in 30 μs. A lot about x86 CPUs is like that.

    Please note that each time you add a CPU core you increase the number of context switches you can do; a context switch is a per-core/virtual-core thing. So a quad core will do over 100,000 context switches per second; in fact, with hyper-threading, 200,000 is more likely. A 32-core machine is well and truly over the million, at 1.6 million context switches per second, and 3 percent of that is 48,000. Allow for the odd load spike to about 9 percent spent on context switching (this does happen), and some of the time 100,000 context switches per second would be exceeded on a 32-core machine. (See the rough arithmetic below.)
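
    A back-of-envelope check of those figures, using the 30 μs per switch quoted above; the quoted numbers fall between the plain and hyper-threaded results here:

        # Theoretical context-switch throughput if every core did nothing else,
        # at ~30 microseconds per switch, optionally doubled for SMT siblings.
        SWITCH_COST_S = 30e-6

        def switches_per_second(cores, smt=False):
            per_core = 1.0 / SWITCH_COST_S          # roughly 33,000 per core
            return int(per_core * cores * (2 if smt else 1))

        for cores in (4, 32):
            print(cores, "cores:", switches_per_second(cores),
                  "with SMT:", switches_per_second(cores, smt=True))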

    Context-switch memory has a fixed size; it does not increase with the number of context switches. Each thread has one context-switch memory block, allocated when the thread is created. This block only has to be written out if the kernel scheduler changes the process it is handling, so a large percentage of context-switch data stays inside the CPU core handling it.

    The memory a context switch consumes is the store of all the CPU registers, less than 4 KB. So your wild guesses are, as usual, insanely wild, DrLoser. 1000 threads equals at worst 4 MB of RAM. A million threads equals 4 GB of RAM, and a million threads is a hell of a lot of processes. So if you had 25 GB in context-switch data you would be running a hell of a lot.

    You have got your scaling very wrong. Context-switch memory scales with processes/threads.

    Cache lines are only at risk when the process changes. Context switching from a thread to kernel mode and back does not invalidate cache lines. Memory-transfer burn, again, is linked to the number of running processes, not to context switches directly.

  116. DrLoser says:

    Oh, yes. I forgot network latency.
    And disk latency.
    Neither of these matter at all when you’re using a server, do they?

  117. DrLoser says:

    That Exploit Guy’s arguments stand rather well by themselves. But, he will permit me a small calculation based on the following comment:

    A fairly modern server can do ~100K context-switches per second…

    … now, there is a non-negligible cost to a context switch. Not only that, but all you’re doing is to thrash one set of registers and associated data (including cache lines) out, and another one in. You’re not actually performing any real work.

    But let’s take a realistic estimate of a server context switch at 10μs more or less. This number is open to correction but, please, oiaohm, assume that I am using it as a fair representative number on a modern server CPU.

    In other words (100,000 switches × 10 μs = 1 s), this hypothesised server is spending a second per second doing nothing else but value-free context switching. In the real world, this is considered to be sub-optimal.

    Now, let’s take a representative 32-core server. Hoorah! Only 3% of the time spent context switching!

    Except for the nuclear destruction of the cache lines.

    Except for the insane amount of paging required to support 100,000 context switches for applications with a working set of, say, 0.25 MB each (that’s 25 GB of RAM, and it’s conservative, and now we’re talking about page granularity and even more busted cache lines and we haven’t even begun to consider locking or the specific multi-processor memory architecture).

    I’d stick to cyclotrons and cloud chambers if I were you, Robert. You’d be lethal with something like a commercial server in your hands.

  118. TEG wrote on and on about nothing in particular, but I will respond to this, "testing CPU utilisation by merely having 25 browser windows opened is a fundamentally worthless exercise".

    Yes, indeed. That might measure swap or something… There were 25 users actually browsing for that image.

    CORRECTION: That chart was done with just 7 users active… I didn't read the full previous page… Still users are random agents not all demanding resources from the server at the same time. Users are the bottleneck, not the network or the thin client. My biggest installation of terminal servers would get about 1% utilization per user. A big server can easily handle about 50 users pointing, clicking and gawking over gigabit networks. Largo, FL has hundreds on a server.

  119. That Exploit Guy says:

    @RP

    Twit! The utilization of the usual PC is less than 10%. Shifting N×10% to a powerful server reduces the cost of PCs mightily and lets the server actually pay for itself.

    Again, 10% under what scenario? Demand for CPU time from an application is not constant, and this is why testing CPU utilisation by merely having 25 browser windows opened is a fundamentally worthless exercise. This is not to mention that disk I/O is also a consideration when you have swap space, browser cache, etc. Memory is not free, either.

    4 GB on the server and 128 MB on the clients would accomplish the same work with file-caching.

    No, your methodology is faulty and your conclusion is nothing more than a mistake.

    That is an imaginary scenario.

    It’s not imaginary. Again, you are assuming that the CPU time demanded by an application is constant. It’s not.

    It doesn’t happen because files are cached and they only need to be fetched once, say on the weekend long before students arrive.

    Cached files have exactly zero to do with CPU time. What on earth are you babbling about?

    The human student has some sort of click-rate.

    Again, more arm-waving.

    Typing is just a few hundred clicks per minute.
    It appears you don’t have any idea what you were actually measuring, let alone how to measure it.
    Typing is not "clicks". Besides, everything your browser does, be it rendering an image, running Javascript, or showing an embedded Flash object, takes CPU time, sometimes continually. Your "test" did not factor any of that into the picture. Rather, you simply assumed that having page elements "cached" would somehow make these concerns magically disappear, and that, RP, is what makes your argument all the more laughable.
    And I am generous enough to assume that the only thing your students would ever use is the web browser.

    A fairly modern server can do ~100K context-switches per second
    Such feeble grasp of operating systems principles!
    Unless you are expecting the CPU to perform nothing more than context switching, the “100k” number is a fundamentally worthless value for just about anything.

    The CPU normally only gets to 100% utilization for a tiny fraction of a second to satisfy a user’s mouse-click.

    That’s a rather bizarre statement. Mind if you tell me where on earth you have got that idea from?

    The last terminal server I installed in the Arctic used an ancient AMD64 mobo with single core and it ran ~90% utilization with 25 junior high school kids hammering it but they all agreed the performance was much superior to using their PCs as thick clients.

    “Hammering” is no more a piece of hard data than one’s sense of euphoria. Neither is an anecdote such as “they all agreed”.
    C'mon, are crap responses all you could come up with?

  120. TEG wrote, “What does this prove except my point that, with thin clients, you are merely shifting the need for computational resources from workstations to the server room?”

    Twit! The utilization of the usual PC is less than 10%. Shifting N×10% to a powerful server reduces the cost of PCs mightily and lets the server actually pay for itself. The same can be said for RAM/storage. I've been in a lab with 24 machines, each with 500 MB RAM. 4 GB on the server and 128 MB on the clients would accomplish the same work with file-caching. Same with hard drives: 24 × 40 GB of storage can be replaced by 1 TB of storage for a lot less money (fewer drives). In fact, we can afford a good RAID on the server while that would be very expensive on the clients. This is about putting the resources where they will do the most good. It's much more expensive to scatter resources far and wide.

    TEG also wrote, “where in your report have you shown the scenario for peak usage, i.e. where all browser instances demand a reasonably large amount of CPU time simultaneously”

    That is an imaginary scenario. It doesn’t happen because files are cached and they only need to be fetched once, say on the weekend long before students arrive. It doesn’t happen because each student is doing his own thing unless the teacher shouts, “On 3 click on such-and-such, 1, 2, …”. The human student has some sort of click-rate. Typing is just a few hundred clicks per minute. Clicking with mouse is much less. A fairly modern server can do ~100K context-switches per second… The CPU normally only gets to 100% utilization for a tiny fraction of a second to satisfy a user’s mouse-click. The networking lags to the Internet, the time taken to move an image from server to client all happen in the blink of an eye. There’s no need to have more resources sitting idle. It is much more efficient to have resources where they can actually produce useful results. The last terminal server I installed in the Arctic used an ancient AMD64 mobo with single core and it ran ~90% utilization with 25 junior high school kids hammering it but they all agreed the performance was much superior to using their PCs as thick clients. Teachers loved it. Pages loaded in ~1s when they were used to ~10s. That’s mostly due to file-caching. Why clatter all over a hard drive when the files are already in RAM somewhere in the room?

    All the rest of TEG’s comment is crap.

  121. That Exploit Guy says:

    @RP

    see, for example, page 10 of this report.

    What does this prove except my point that, with thin clients, you are merely shifting the need for computational resources from workstations to the server room?
    On top of that, where in your report have you shown the scenario for peak usage, i.e. where all browser instances demand a reasonably large amount of CPU time simultaneously (and in case you don’t understand, that’s different from merely having a browser window opened on each client)?

    I mean the application producing the video runs on the thin client so the video does not plug up the network

    I appear to have misunderstood what you mean by “video”.
    What you suggest is all well and good except one teensy-weensy problem – that it is practically infeasible to implement. Think about this: what is your system supposed to do when it encounters a Flash-based media player?

    A legacy PC, btw, is the typical ATX PC sold to consumers

    C’mon now. Am I supposed to take this seriously?

  122. TEG wrote, ““I have measured” means absolutely nothing in the absence of hard data.”

    see, for example, page 10 of this report.

    What do you think users should do with PCs? Click on them or read/write/think?

    TEG also wrote, “The keyword you are looking for here is “XDMCP””

    No, it’s not. When I say “render video on the thin client”, I mean the application producing the video runs on the thin client so the video does not plug up the network. XDMCP is a protocol for running sessions over thin clients, nothing to do with video except in the sense of X.

    A legacy PC, btw, is the typical ATX PC sold to consumers, businesses, governments and schools for the last 15 years or more. It's typically a big empty box with fans, hard drives, motherboard, PSU, expansion slots … big bulky things largely unnecessary when you consider small, cheap, fanless thin clients. All those noisy moving parts tend to deteriorate, and M$'s bloat finished them off in 3-4 years when Wintel was in its prime. The typical thin client is limited mostly by screen-resolution and network speed. Many are still going strong with 1024×768 and 100 Mbit/s, but newer thin clients use gigabit/s and much higher resolution.

  123. That Exploit Guy says:

    @RP

    The few this second who actually point and click get to use all that power in the server room. It’s not a fantasy.

    Certainly not. Don’t you know every server room has a magical tree, on which computational resources are grown?

    I've measured the CPU utilization of clients and servers with 20-30 simultaneous users.

    “I have measured” means absolutely nothing in the absence of hard data. Of course, given whom I am talking to here, it’s kind of unrealistic to expect any more than hand-waving.

    That’s easily accomplished by having the video rendering done on the clients.

    The keyword you are looking for here is “XDMCP”, Mr. I-have-measured.

    it keeps increasing year after year

    While the total number of all clients remains static. Don't ask how or why – RP just knows.

    Because the lifespan of thin clients exceeds that of legacy PCs

    What on earth is a “legacy” PC?

  124. That Exploit Guy says:

    DrLoser, pardon, the first Start-menu-like thing appears in CDE on Unix.

    Rather than taking oiaohm's incoherent gibbering as fact, I advise readers here to try out the actual CDE interface and judge for yourselves whether it contains anything resembling a start menu at all.

  125. TEG wrote, “I forgot you are a firm believer of the thin-client fantasy, in which moving the computational resources of twenty workstations to the server room will make the need of them all magically disappear!”

    It’s not a fantasy. On average, most users are gawking. The few this second who actually point and click get to use all that power in the server room. It’s not a fantasy. I’ve measured the CPU utilization of clients and servers with 20-30 simultaneous users. The only issue with using X this way is if a significant number of users run video. That’s easily accomplished by having the video rendering done on the clients. You should know that while the world’s consumption of thin clients is still modest, it keeps increasing year after year. Because the lifespan of thin clients exceeds that of legacy PCs, the installed base of thin clients is increasing even more rapidly. This is not because folks are pursuing fantasy but better IT.

  126. dougman says:

    LOL… M$ may have trademarked the term "Start Menu" and then paid the Rolling Stones money to use their song "Start Me Up!". However, when you read patents #5920316 and #5757371, you will see a listing of citations predating the filing date; these were references to where the idea came from. When you go back to the earlier days of WIMP GUIs, developed by Xerox in 1973 and popularized by Apple in 1984, you start to see where the notion came from. I would go so far as to include #4772882 as well.

    But this does not remove the fact that Billy was a dumpster-diving thief. Paul and Billy, two miscreants, rummaged through dumpsters at the nearby Computer Center Corporation to find notes written by programmers.

    Billy, in every sense of the word, is actually an (in)famous thief, and he has stolen many things. Gates himself once said, "just because you broke into Xerox's store before I did and took the TV doesn't mean I can't go in later and steal the stereo."

  127. That Exploit Guy says:

    I am tired of Dr. Loser's inability to keep himself from bothering people on other websites with the incoherent babbling of Robert Pogson and Co., so I guess someone has got to show him how to leave comments here without getting other places involved.

    @RP

    One can run any software on Ubuntu GNU/Linux, given sufficient need/effort.

    Given sufficient need/effort, one can also start the first human settlement in the Andromeda galaxy.
    Seriously, what kind of idiotic argument is this?

    If I wanted to run the latest version of FireFox on some ancient GNU/Linux distro-release, I can run it on another PC and slip the screen over the network to the old PC with

    Then you are not actually running Firefox on “some ancient GNU/Linux distro-release” but rather on “another PC” using whatever OS there and, more importantly, its CPU, memory and other system resources, are you?
    Oh, my bad, I forgot you are a firm believer of the thin-client fantasy, in which moving the computational resources of twenty workstations to the server room will make the need of them all magically disappear!

    Nonsense. Applications can start processes.

    The nonsense here is your claim that applications can start processes on their own (given a "properly constituted OS", as Dr. Loser puts it). Do you not know that even fork/exec in POSIX are meant to be syscalls?

    Nonsense. I invented it in 1983 when I wrote the control system for the King Faisal Specialist Hospital cyclotron.

    And Grampa Simpson invented the toilet when he fought in WWII.
    Seriously, could you at least try and make yourself not sound like a stereotypical, crazy, old fool?

    It was a VT100 terminal from DEC.

    In that case, it's rather clear you have "invented" something similar to the Norton Utilities menu. In other words, it's safe to say that you didn't actually invent anything at all, regardless of the difference between a click from a mouse and a "click" from a keyboard.

  128. oiaohm says:

    Robert Pogson, you can go back another step before VUE, because we have the HP developers' notes on the matter in the source code. According to the notes left in the code, the VUE interface is based on library-catalogue operations, so the next stop is prior to electronics. Even the "recently used" idea appears in library-catalogue operations.

    HP deserves the credit for implementing it on computers, with a few changes. The basic idea of a menu, including a start menu, is centuries old, pre-electronics.

    It's surprising how much of menus and the current-day interface goes back to library-catalogue operations.

  129. DrLoser wrote, “So then. What hardware are we talking about?
    And please don’t say “it was a keyboard, dummy!””

    It was a VT100 terminal from DEC. It might have been a VT10x. Don’t know for sure but that’s what it looked like. One indeed did click things with those keys.

  130. oiaohm wrote, "For a start button there is prior art. For a start button on a taskbar, now that is new in 95. The CDE task panel is independent of the panel with application shortcuts and virtual-desktop controls."

    You guys seem to forget that before PCs/IC controllers were affordable and reliable, guys with drills and punches mounted switches and relays on control panels on racks and such decades before all this software stuff came around to reinvent them. There’s nothing new here, not in 1983 and not in 1995. It goes back to the 1930s or so when electronics was new and stuff could be made small enough to fit in tight places. Before that, strong men with leverage moved huge switches… I’ve seen a few of them… “When the #$%! hits the fan, pull that.” Relays allowed mere push-buttons to trigger big/remote switches. Reed/micro-roller switches gave crude analogue inputs to trigger things automatically. Putting this concept on a computer-screen does not make an invention.

  131. oiaohm says:

    DrLoser, yes, you have a valid bone to pick over my use of the name CDE, but it's also invalid to say my date of 1982 is random. It's a truly badly picked bone to pick with me. The CDE source is older than 1993: the name dates from 1993, but the earliest copyright in the CDE source is HP, 1981.

    Domain/OS and VUE start in 1981. According to the source code, the Start-menu-like thing was introduced in 1982. VUE became CDE in 1993, and the CDE source code contains the VUE source code.

    Yes, VUE and CDE were both proprietary. In fact, the only VUE source code released is in the CDE source-code base released in 2012. The evidence that Microsoft did not completely invent the Start-menu idea is sitting in the CDE source base; yes, the CDE source base is older than CDE. The VUE source in CDE is the oldest known reference to anything like a Start menu. There are a few other X11 window managers from 1983-1995 that had Start-menu-like things, some of them much more pleasant than VUE or CDE.

    VUE is also where the Linux/Unix-world idea of breaking applications up into categories starts. There are a lot of things about the current Linux desktop that we have to thank VUE for. Of course, the formal naming of the categories comes later.

    I used VUE on an HP workstation in 1992, before it became CDE.

    Just because something is the worst GUI ever does not mean it did not invent things. Remember, the first light bulb barely glowed; it was Edison who worked out how to make one with a decent amount of light.

    Your sources have deceived you once again. I bet you did a quick google, looked at the release date and did not read the full information, where it is mentioned that CDE is directly based on VUE. HP deserves the credit for the Start-menu-like things, not Microsoft.

    Microsoft and the start menu are like Edison and the light bulb: neither invented them, both refined them. The problem here is that other parties also refined the start-menu idea.

    XFCE started to clone CDE because, as you said, the source code of CDE was not available until 2012, and the same goes for its lack of cross-platform support. The XFCE Start-menu-like thing has nothing to do with Microsoft's. There could be a common ancestor, but Microsoft has never told us where they took the start menu from.

    I know you call CDE the worst ever, but some people truly like it. Liked it to the point that they cloned it.

  132. DrLoser says:

    Just out of interest, oiaohm, this docked-menu CDE thing?

    We can leave to one side the well-known fact that the CDE was proprietary until 2012.

    Remind me again, what was that year? 1982? Are you sure you’re not just coming up with a random year that just happens to be one year before Robert single-handedly invented the Start Menu for the King Faisal Specialist Heart hospital?

    My sources tell me that SunSoft, HP, IBM and USL released the Common Desktop Environment in 1993. A full decade later.

    Don’t you be goin’ showing Robert up on this one, oiaohm. You can’t even claim “dyslexia.” The only two digits you got right were those representing the century.

  133. DrLoser says:

    Absolutely true, oiaohm. Well remembered!

    Now, tell me, did you ever use the CDE “start menu” before 1995? Or even now, in 2014?

    No? Well done again!

    The bloody thing is an abortion. Anybody who has ever been forced to use a desktop featuring it still bears the scars. It was pointless and offensive and completely undiscoverable. It was the worst GUI design ever.

    No normal human being should be exposed to it.

    Naturally I will make an exception in your case.

  134. oiaohm says:

    DrLoser, the CDE workspaces menu with its button is way before 95. You are talking 1982 for the CDE docked workspace menu. It's the first findable Start-menu-like thing.

    DrLoser, Microsoft gets more credit than it should for the Start menu because of how little the Unix desktop was used and because people failed to pay attention to the patents filed.

    Note the Microsoft patent: Start menu with taskbar. It never patents the start button, which is odd considering how much Microsoft loves patenting stuff. The panel used in CDE and XFCE is not a taskbar in either of them. For a start button there is prior art. For a start button on a taskbar, now that is new in 95. The CDE task panel is independent of the panel with application shortcuts and virtual-desktop controls.

  135. DrLoser says:

    Nonsense. I invented it in 1983 when I wrote the control system for the King Faisal Specialist Hospital cyclotron. Users were given a status line and a menu from the get-go.

    Back in those days, Robert, we called those things “shells.” Not, obviously, as powerful as today’s shells. Normally, we programmers used to rip off a PDP/11 or IBM 3270 equivalent, and give users a choice like:

    1) Frooble?
    2) Wart the thangle, nuckie noo!
    3) Oh Crap, raise the rods, the thing is about to blow!
    4) Quit

    You could have made millions if you’d copyrighted it. But, sadly, it was already there. Also, halfway down the page, users typically forgot whether it was Q or X or even ctrl-C to exit.

    You wouldn’t believe how infrequently this happens on an actual start menu.

    Training-time for new operators dropped from six months to one month because the system had great two-way communication compared to the old system.

    This would be very, very impressive, if only you had compared it to the amount of time required to train a new operator in the use of a start menu.

    I’m guessing, what, a day? And I’m assuming a class full of Neanderthals with fat fingers.

    Not really impressive at all. I don’t believe you. Because I’m pretty sure you’re selling your system short.

    The user clicked on an item which started a process to do something. It was a true multi-user system with up to five operators on stations although we rarely used more than two.

    Now, remember, Robert, I was around at the time. (Not in Saudi, just in tech.) I am not going to scoff at you for this “click on an item” thing, because I was using hardware that made that possible myself.

    So then. What hardware are we talking about?

    And please don’t say “it was a keyboard, dummy!”

    Because a keyboard is no more a start menu than a start menu is a transistor. And now you’re forcing me to repeat myself. I would prefer people to pay attention in the first place.

  136. DrLoser wrote of M$, “Turns out they invented it with Windows 95.”

    Nonsense. I invented it in 1983 when I wrote the control system for the King Faisal Specialist Hospital cyclotron. Users were given a status line and a menu from the get-go. Training-time for new operators dropped from six months to one month because the system had great two-way communication compared to the old system. The user clicked on an item which started a process to do something. It was a true multi-user system with up to five operators on stations although we rarely used more than two.

  137. DrLoser wrote, “In a properly constituted OS, that “something” is mediated by the kernel for start menus, and by the individual app for app menus.”

    Nonsense. Applications can start processes. That has nothing to do with the OS, except that it manages resources. A proper operating system is a multiuser/multitasking system. We are not in the days of DOS. I was writing multitasking applications 45 years ago. That was done by the application servicing interrupts. Now the OS should do that, but the application can damned well request the OS to start the process. "File/save" could certainly work by starting a process that puts a lock on the file to keep writes straight. In the meantime the user and the application can get on with life. Except for very large files there's no need whatsoever for the user to wait for file-transfers to finish. Indeed, the OS may just queue them up anyway.
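
    A minimal sketch of that pattern (the file names are placeholders, and the lock-handling is omitted): the application asks the OS to start a separate process for the save, then carries on serving the user.

        # The parent hands the long-running copy to a child process started via
        # the OS (fork/exec underneath subprocess) and returns immediately.
        import subprocess
        import sys

        def save_in_background(src, dst):
            return subprocess.Popen(
                [sys.executable, "-c",
                 "import shutil, sys; shutil.copyfile(sys.argv[1], sys.argv[2])",
                 src, dst])

        child = save_in_background("/tmp/big-document.odt", "/tmp/big-document.bak")
        print("save started in process", child.pid, "- user keeps working")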

  138. DrLoser says:

    The examples I gave are of the “start-menu” kind but there isn’t a real difference between those and the application-window kind. They both work.

    That is a needlessly reductionist argument, Robert. Following on from that position, it would be trivial to argue that a transistor has no real difference from a start menu.

    I mean, they both work.

    The one mostly for starting applications/processes and the other for controlling how an application works, perhaps conceptually different but more or less identical in methods. e.g. “File/save” might actually start some process.

    Not on anything but a toy operating system, it wouldn’t. Not even on Gnu/Linux.

    The only things that start menus and app menus have in common are:
    a) They are both GUI elements
    b) The user clicks on an entry and “something” happens. For some version of “something.”

    In a properly constituted OS, that “something” is mediated by the kernel for start menus, and by the individual app for app menus.

    Once again, reductionism is your enemy here.

    An alternative answer, btw, would have been “I’m talking about start menus.” That was really all I was asking for. It’s easier to follow a discussion once the underpinnings are defined.

  139. DrLoser says:

    Yes, you all did a very good job of catching me out on the Microsoft invented the Start Menu round about Vista thing.

    Turns out they invented it with Windows 95.

    I appreciate your corrections.

  140. DrLoser says:

    No one forced you to upgrade?? LOL…. say that after Windows 7 expires idiot; people surely learned about forced upgrades when it came time to expire Windows XP.

    Remind me again, Dougie. What was your favourite Gnu/Linux distro in 2002?

    And what was the version?

  141. dougman says:

    No one forced you to upgrade?? LOL…. say that after Windows 7 expires idiot; people surely learned about forced upgrades when it came time to expire Windows XP.

  142. kurkosdr wrote, “Windows 7 runs ALL modern desktop apps perfectly, unlike a 5-year old version of Ubuntu, which doesn’t run many modern apps.”

    Absolute nonsense. One can run any software on Ubuntu GNU/Linux, given sufficient need/effort. e.g. If I wanted to run the latest version of FireFox on some ancient GNU/Linux distro-release, I can run it on another PC and slip the screen over the network to the old PC with X, build the source-code of FF and all its dependencies on the old system and run it, etc. With source-code being available anything is possible. Of course it’s rarely necessary as one can just as easily run the new version of the OS in a virtual machine or on a terminal server or in a chroot or…
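
    A minimal sketch of the "run it on another PC and slip the screen over the network" option, assuming a hypothetical remote host with sshd and X11 forwarding enabled:

        # -X asks ssh to forward X11, so firefox draws on the old PC's display
        # while executing (and using CPU/RAM) on the remote machine.
        import subprocess

        REMOTE = "user@newer-pc"    # hypothetical host running a current distro

        subprocess.run(["ssh", "-X", REMOTE, "firefox"])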

    I know many DOSish programmes will run on newer versions of that other OS but I’ve also seen non-Free software that would not run on a newer OS because M$ or someone did not licence some non-Free software for that OS. That’s a barrier FLOSS does not have.

  143. DrLoser, thinking M$ actually originated any of this, wrote, “either one might have its critics. But it would be good to know which one you are talking about.”

    The examples I gave are of the “start-menu” kind but there isn’t a real difference between those and the application-window kind. They both work. The one mostly for starting applications/processes and the other for controlling how an application works, perhaps conceptually different but more or less identical in methods. e.g. “File/save” might actually start some process.

  144. oiaohm says:

    DrLoser, pardon, the first Start-menu-like thing appears in CDE on Unix. It's the dockable workspaces menu. XFCE is an open-source clone of the closed-source CDE; nothing about XFCE is copied from Windows. The mouse icon replaces the CDE logo for the workspaces menu. What are the two unique features of the Windows Start menu? One, it was named Start; two, it very quickly gained a search dialog. Historically none of the X11 window managers had search in their Start-menu equivalent; search appeared in some after Microsoft did it.

    It is really wrong to claim the Start menu was invented by Microsoft; refined by Microsoft would be a correct claim. The Vista start menu follows KDE design documents that were released first, so it's a question mark who designed that.

    kurkosdr, 5-year-old Ubuntu still runs modern applications. You just have to install them using 0install or the like. It's not that 5-year-old Ubuntu cannot; it's that application makers don't package their applications so that you can.

  145. AdmFubar says:

    Odd… I've not noticed any difference in "menu-ing" styles out there,
    other than that on tablets the "start" menu is already open on the desktop.

  146. dougman says:

    M$ never came up with the idea of a "Start Menu" alone; they just repackaged what were merely inspirations and aggregations of existing ideas.

    Seriously, Bill Gates used to moonlight and dumpster-dive in his early days.

  147. dougman says:

    Regarding Start8, why should someone have to download and pay for extra software to make their additional software work?

    It would be analogous to leasing a car, then having to spend extra money for it to function properly.

    People would be outraged, but even if they were in this instance they cannot even sue or have a class-action suit, as the EULA denies them that action.

    Eh.

  148. DrLoser says:

    Just to be clear here, Robert, are we talking about “classic” menus a la Apple Macintosh — the File/View/Edit/Options/Help thing that everybody who has ever used a Windows machine would recognise?

    Or are we talking about the “Start Menu,” innovated and popularised by Microsoft round about Vista and ripped off by various Linux distros in a spectacularly ugly and typically uninformative way?

    I mean, either one might have its critics. But it would be good to know which one you are talking about.

  149. kurkosdr says:

    “Windows 8 = Fisher Price Toybox”

    Only part of it. Most of the traditional desktop codebase is there, and with Start8 you can have the start menu too. Compare and contrast with Gnome, where every new iteration (2 and 3) completely strips away the old experience.

    Win8's problem is the ugly theme. But I prefer looking at the ugly theme of Windows 8 and having PowerDirector, PowerDVD Ultra (Bluray, yes) and MS Office.

    And most importantly, nobody forces me to upgrade, because Windows 7 runs ALL modern desktop apps perfectly, unlike a 5-year old version of Ubuntu, which doesn’t run many modern apps.

    And it's not like Linux desktop UIs aren't embracing the toyfication insanity. The notable exception is Cinnamon, which however is just three guys hacking on weekends.

  150. dougman says:

    Windows 8 = Fisher Price Toybox

  151. kurkosdr says:

    Dumbification. That's the reason. Just look at how app UIs (open-source and proprietary) are slowly evolving into a toybox GUI, where menus are removed and replaced with always-visible buttons which, however, offer only a subset of the options.

    Geeks are not the target audience anymore. Now it’s about giving people the easiest access to Facebook and games like Angry Birds. That’s the truth.

Leave a Reply