2015 – Crippling Wintel

Wintel is at a huge disadvantage in 2015. All the things that locked the world into Wintel in decades past are now locking Wintel out: mobility, touch, Android/Linux as the cool OS. OEMs ship the stuff, retailers stock it and consumers are lapping it up…
The only thing Wintel still has going for it is the lock-in of businesses that think they can’t do IT without Wintel. When the bulk of them figure that out, Wintel will be just a shadow of its former self. Could happen in 2015. Why not? The cost of IT is going down for consumers and up for businesses. They are bound to notice sooner or later.

See Gartner Says Tablet Sales Continue to Be Slow in 2015. Gartner has built their business on Wintel and now they see 8% growth for the competition as something hopeful… Meanwhile, smartphones have explosive growth and thin clients are doing well too.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

309 Responses to 2015 – Crippling Wintel

  1. Dr Loser says:

    I had to think about this one quite hard, Robert, which I suppose is a tribute to your intuition:

    Of course, the web data is cached. Indexes may be cached too, but they can’t cache all possible searches and 5K would not come close to reaching the speeds I’ve demonstrated.

    Leaving aside the obvious fact that you haven’t demonstrated any “speeds” whatsoever — you haven’t even guessed at them — your comment is quite illuminating, for two reasons. I’m going to take the more offensive one first, but, don’t worry, the second one shows you in a good light.

    1) You get a “story of my life” result from a Google search and for some reason assume that this is the result of a MegaCorp devoting insane amounts of processing power to your particular needs. Let’s try this one again, shall we? You go to a public performance by an “expert mentalist,” and you’re given information like “I see a young lady in your family or amongst your friends … her name begins with M, I think … is it Michelle? It may be to do with struggles, perhaps a serious illness of some kind. Perhaps early symptoms of cancer?”

    Now, the interesting thing here is that mentalists work on very similar lines to search engines. They don’t actually know you: they just take various cues and generalise. You’ve got dozens of friends. Half of them are female. At least one has a medical problem. It’s up to you to fit the “answer” to the “facts.”

    But only the extremely credulous would believe that the Oracle is talking Specifically to You. It isn’t. Now, to the less disreputable half of my response:

    I eventually worked out that you thought, when I described the top 50K responses as “canned,” that I meant that every single query for {parboiled frogs} hits exactly the same canned 100+ responses.

    That’s not what happens.

    See that number attached to parboiled frogs? As of right now, it’s “about” 806,000, and the query was served in less than half a second.

    Do you want to know why? Simple. The responses are cached.

    But, even on the SERP (that’s the first page to the likes of you and me), the 20+ answers will vary from person to person. Why? Because there are other signals in play, beyond the simple query {parboiled frogs}.

    Assuming that there is a medical research establishment near you, Robert, that specialises in the investigation of foolish 19th century zoological heating experiments, then I would imagine that it features in that cache of 806,000 suggestions. And it will be presented to you, precisely because you live nearby.

    Or, alternatively, have a distinctly unhealthy interest in parboiled frogs. That would be the cookies, but let’s not go there.

  2. antifanboy says:

    http://www.techrepublic.com/article/the-most-obvious-user-for-linux-isnt-who-you-think/

    That article was written by a hardcore Linux fanboy named Jack Wallen.

  3. DrLoser says:

    Not only have I been there and seen how this stuff works, Robert.

    I have fixed several bugs in the code that makes this stuff work.

    When was the last time you fixed a bug in code that serves millions of people daily?

  4. DrLoser says:

    Can we please lose the 5K? The 5K was merely an example — and yet you’d be surprised how much of the bandwidth the top 5K queries take. It’s around 40-50%. Need I remind you? I have spent three years looking at the numbers (discovered through data mining).

    Pick any damn number you want. I have been there. I have seen how this stuff works.

    You’re just flailing around, aren’t you, Robert? You actually have no direct experience whatsoever.

  5. DrLoser says:

    Sigh. “{garter snake frog poplar deer}”.

    To illuminate my previous post, what happens here is that various sub-sets are permuted (using an “interesting tuple” method which would associate garter with snake and poplar with deer — some variety of TF/IDF, I would think — which takes all of a hundred microseconds and has no need of any serious parallelisation).

    “Frog” is discarded because it has no specificity to speak of. We are now left with {{garter snake}{poplar deer}}, which is two lookups into separate caches.

    There’s this process called an Aggregator (in practice, with a more complicated query, two or more may be used) that merges the result set from each tuple in the query.

    Then there’s this process called a Ranker. The Ranker works off signals. One very important signal is “locality.”

    After the Ranker, there’s nothing much but output formatting. And that’s it. All of it. No real-time search required. No massive parallelization (other than the data mining, which is actually distributed parallelization, and therefore not relevant to oiaohm’s claims).

    Genuinely, Robert. That is the way that Google works. It’s a tight little community, is the Search Engine business. I actually know people who work for Google.

    Why is this such a hard concept to grasp? Just because, in your imaginary world, it wouldn’t work that way?

    In the real world, that is precisely how it works.
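
    If a sketch helps: here is the lookup-aggregate-rank flow in miniature, in Python. The cache contents, the locality signal, and every weight below are invented stand-ins, not anything Google actually uses.

        # Toy query pipeline: cached tuple lookups, an Aggregator, a Ranker.
        tuple_cache = {
            ("garter", "snake"): [("gov.mb.ca/wildlife", 0.9), ("herpsoc.org", 0.6)],
            ("poplar", "deer"): [("gov.mb.ca/wildlife", 0.8), ("forestry.ca", 0.5)],
        }

        def aggregate(tuples):
            # Merge the cached result set for each tuple, summing scores.
            merged = {}
            for t in tuples:
                for url, score in tuple_cache.get(t, []):
                    merged[url] = merged.get(url, 0.0) + score
            return merged

        def rank(merged, user_region):
            # One "signal": boost results local to the user.
            def boosted(item):
                url, score = item
                return score + (0.5 if user_region in url else 0.0)
            return sorted(merged.items(), key=boosted, reverse=True)

        print(rank(aggregate([("garter", "snake"), ("poplar", "deer")]), "mb.ca"))

    No crawling, no real-time search: two dictionary hits, a merge, and a sort.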

  6. DrLoser wrote, ” Do you seriously believe that it’s possible to use anything other than caches for this sort of information? Even if it were possible (it isn’t), the cost would be prohibitive.”

    Of course, the web data is cached. Indexes may be cached too, but they can’t cache all possible searches and 5K would not come close to reaching the speeds I’ve demonstrated.

  7. DrLoser says:

    Not surprisingly many of the hits were wordlists/dictionaries because few other kinds of sites would have such a combination. That’s not from some cache.

    Oh yes it is, Robert.

    Apparently it hasn’t occurred to you that Google and Bing cache wordlists and dictionaries. Allow me to let you in on a little secret. They do.

    The first hit was from my local province and their lands set aside for wildlife and people. That’s not from some cache. That’s my life.

    Oh yes it is, Robert. The only extra tweak you’ve hit upon here is that Google and Bing use locality information as a “signal.” (Sometimes it goes wrong.)

    I’m going to ask you once again. Do you seriously believe that it’s possible to use anything other than caches for this sort of information? Even if it were possible (it isn’t), the cost would be prohibitive.

    I know you like getting things for free, but even “free” has a cost. Look at it this way. For some fuzzy query term set {A B C}, the results of which, dragged out of several caches and synthesised, satisfy 5,000 consumers, do you seriously believe that it is worth Google’s time spending 5,000 times more money satisfying each and every precise requirement individually?

    Multiplied by 10 hits per day for more than 10 million people?

    It is not, Robert. It seriously is not.

    And I’m going to have to repeat myself here. You’ve never once been near the damn stuff. I have, for three years.

    Is it out of the question that anybody on this site should be prepared to believe the honest witness of somebody who has actually spent time on the technology in question?

    Seems to be the prevalent ethic around here, if you ask me. And it’s deplorable.

  8. DrLoser wrote, “They cache all of them.* That’s what the millions (actually, tens of thousands) of servers are doing — data-mining, not “real time stuff.””

    So, I search for something Google has not seen before. How long does it take Google to find it in their cache? OMG! What do they do? Wait 5h to search through all that stuff? Nope. They dig right in and give me what I want. Here, I’ll give you an example. I’ll pick five words at random from my English wordlist. The probability of this being in their “5K” cache is nearly zero, yet they yield good results in a second or two. QED

    Here’s the search list: overpower filmstrip underhandedly apportionment pleasanter
    “About 506 results (0.46 seconds)”

    Not surprisingly many of the hits were wordlists/dictionaries because few other kinds of sites would have such a combination. That’s not from some cache.

    Another search, for things that matter to me since I was a boy, “garter snake frog poplar deer”

    “About 72,500 results (0.53 seconds)”
    The first hit was from my local province and their lands set aside for wildlife and people. That’s not from some cache. That’s my life.
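
    For anyone who wants to repeat the experiment, drawing the random words takes a few lines of Python; the wordlist path is the usual Debian location, so adjust to taste.

        import random

        # Draw five random words from the system wordlist, as in the test above.
        with open("/usr/share/dict/words") as f:
            words = [w.strip() for w in f if w.strip().isalpha()]
        print(" ".join(random.sample(words, 5)))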

  9. DrLoser says:

    My memory was off; it’s 94 GHz, as that is the atmospheric window for radar. 100 GHz is a nice round figure. But you need really fast silicon to cope with this.

    Not in the CPU, you don’t.

    Nor have I seen evidence of Australian police using anything other than K-band radar. It seems unlikely, really, given the cost of buying military-grade kit.

    Me, I’d rather just apologise to the tree in question for wasting its time and wish it a hearty “G’Day, Mate!”

  10. DrLoser says:

    DrLoser, you might have two sorting implementations that are both mathematically O(n log n), but one will be massively faster on real hardware than the other.

    Let’s quantify “massive.” My observations are that the differences for sane data sets (N <~ 2^20) are essentially linear in N and amount to, at worst, 20% lossage.

    I imagine you have different ideas.

    Why? Because O(n log n) does not tell you how they will interact with the threading and cache systems.

    Deaf Spy and I have tried repeatedly to get this into your skull, oiaohm, which is why we mentioned cache-breaking in re heapsort. Amongst other things. Quick sort, in fact, has fairly bad “cache interaction.”

    Bad interaction can result in slowdowns of a scale that is insane.

    No it cannot, unless your definition of “insane” is completely different from the definition used by normal people. Which it very well might be.

    On x86 processors you can be stuffed, getting only 10 percent of the processing power per chip, if the method is completely incompatible.

    And here, once again, I reference the invaluable Dr Dobbs.

    Parallel non-in-place merge sort on an Intel i7 3630QM quad-core CPU with hyperthreading running at 3.2 GHz, using 16 GB of system memory.

    Resulting in 100% CPU usage.

    Do you have a point, or are you just going to keep pulling random percentages, adjectives, and unsustainable attacks on the Intel architecture out of whatever dimension you pull them from?

  11. DrLoser says:

    Of course Google could cache popular searches but they still have millions of servers doing the real time stuff in a massively parallel way.

    No they don’t, Robert. They cache all of them.* That’s what the millions (actually, tens of thousands) of servers are doing — data-mining, not “real time stuff.”

    I have worked in the Search Industry. You have not. But let’s assume I am lying through my teeth (which would have been fairly pointless, since all it’s done is to attract gutter-sniping from Dougie). This would put me, basically, in the same position as you: that is, utterly ignorant (in the nicest possible way) of how a large Search Engine works.

    (And incidentally, when I was as lacking in relevant information as you now are, I believed the same thing. So it’s a fair enough myth to believe in.)

    The vast majority of search queries, 99% of them, run to 3-7 words. There are some twos, but most people intuitively understand that “dishwasher” is not as likely to get a desired result as “purple monkey dishwasher.” Eight words and beyond is usually a matter of cut and paste, which would hit a different cache for obvious reasons.

    Assume the vocabulary at the tip of an average person’s tongue is 20,000 words (which is rather high). At 20K^7 you’re looking at rather a lot of cache, I imagine.

    But Search Engine caches don’t work that way. They reduce “words” to “terms” by omitting common things like definite articles. They reduce “terms” to “normalised stemmed terms” by coalescing cases, etc. And once you’ve done this, you find that 99+% of 3 and 4 word queries boil down to the same 100,000 result sets.

    If you’ve got a longer query (5+ words), you just permutate out groups of three or four words and use those. And no, you don’t use every permutation: that would add far too much search latency. You throw away the ones that look silly and you limit the rest to 32 permutations (in Bing’s case, as of 2013).
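
    To make that concrete, here is a back-of-the-envelope sketch in Python. The stopword list and the toy stemmer are invented for illustration; the three-term groups and the 32-subquery cap are as described above.

        from itertools import combinations

        STOPWORDS = {"the", "a", "an", "of", "to", "in"}  # tiny illustrative list

        def normalise(query):
            # words -> terms -> crudely stemmed, lower-cased terms
            terms = [w.lower().strip(".,") for w in query.split()]
            terms = [t for t in terms if t not in STOPWORDS]
            return [t[:-1] if t.endswith("s") else t for t in terms]  # toy stemmer

        def subqueries(terms, k=3, cap=32):
            # permutate out groups of k terms, capped per the limit above
            groups = [tuple(sorted(c)) for c in combinations(terms, min(k, len(terms)))]
            return groups[:cap]

        print(subqueries(normalise("the garter snakes of poplar river deer country")))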

    Other tricks, like Related Search (my speciality), allow you wiggle room: you might not hit precisely the right response set, but you can use the Power of People Who Have Hit This Permutation to see what they looked for next. This, again, is data-mining.

    Well, that’s just scratching the surface, Robert. But if you genuinely believe that Google uses massive real-time parallelisation to fulfil your search query, then you need to sit back and think a little harder.

    Here’s an example query that might help you see what’s going on: “introspective doodlebug finagle.”

    Only three words, and the results are predictably rubbish. But do you seriously believe that Google searched the entire web in real time, before coming up with this rubbish?

    What possible motive could they have to do so?

  12. DrLoser, fooling himself but not the rest of us, wrote, ” Let’s further presume that I have 10,000+ networked servers in the background that can somehow boil all of this down to “the most popular 5,000 queries.” Which, more or less, and allowing for Bayesian priors, is what these server farms do.”

    Of course Google could cache popular searches but they still have millions of servers doing the real time stuff in a massively parallel way. There are 100K words in the English language at least. How many ways can you ask Google to search 5 of them chosen at random by a diverse human population in all corners of the globe at any time of day? It’s roughly 100K^5 = 10^25, more than most hard drives can store, for sure. Certainly a 5K cache wouldn’t do it, not even close. That’s just proof #1. Then we get to see it find searches that it knows pogson or any particular user will like… Does every PC on the planet have its own 5K cache? That cache is getting pretty large… Then we can search in any language we want, or for any time period we want,… I can see the caches becoming too large to fit on the planet. No. Google does real-time search. It’s the caching and slicing of web-data that’s their huge storage job, not searches.
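
    The arithmetic is easy to check at any Python prompt:

        # Five words drawn from a 100,000-word vocabulary:
        print(f"{100_000 ** 5:.3e} possible queries")  # 1.000e+25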

  13. oiaohm says:

    DrLoser, you might have two sorting implementations that are both mathematically O(n log n), but one will be massively faster on real hardware than the other.

    Why? Because O(n log n) does not tell you how they will interact with the threading and cache systems. Bad interaction can result in slowdowns of a scale that is insane. On x86 processors you can be stuffed, getting only 10 percent of the processing power per chip, if the method is completely incompatible.

    DrLoser, sorting turns out to be something people using databases request a lot: alphabetic lists of names and so on. Improving sort in a database can in fact improve the performance of a lot of business software.

    DrLoser:
    X band (8-12 GHz) or K band (18-24 GHz, possibly extended to 40 GHz).
    It is W-band radar, 75-110 GHz. W-band is commonly used in aircraft or military radars, which Australian police have in some areas.

    X band and K band, if pointed at a particular type of tree in Australia, will tell you it is moving at 100-300 km per hour (yes, a stationary tree that grows next to some roads). Australian police did not spend the money on W-band radar guns for particular areas for no good reason: W-band gets above the frequencies where that tree makes fake radar responses. Even so, lidar units are cheaper, so those areas are becoming fairly much lidar-only. How it was found was when an old VW Beetle got booked for doing 200 km per hour. 200 km per hour is kind of impossible for a stock-standard Beetle; it was the tree behind it.

    There is another downside to W-band radar, and another reason it is being removed from service: fighter jets can confuse W-band radar with a SAM site. The result has been one police car hit with a targeting round due to an Australian jet’s auto-defence. DrLoser is right that W-band police radar is not common, but the beast does exist. W-band police radar is an Australian oddity.

    My memory was off; it’s 94 GHz, as that is the atmospheric window for radar. 100 GHz is a nice round figure. But you need really fast silicon to cope with this.

  14. DrLoser says:

    IBM managed to get 350 GHz silicon-germanium transistors in 2002 *shrug*

    oiaohm has already mentioned this *shrug*.

    How does it feel to come second from last in the sack race, Dougie?

  15. DrLoser says:

    I may just have mis-spelt that cite. Don’t get too excited, Dougie: it requires physical features that you don’t possess.

  16. DrLoser says:

    Courtesy of Red Hot Porn, incidentally, I’ve stumbled across the human equivalent of ram’s extraordinary claim. Turns out that humans can do even better, and for reasons I will explain shortly. But first, the position:

    White: Pb2 Kb4 Pc5 Pe4 Bf6 Ph4

    Black: Pa4 Pb3 Bc6 Pc7 Kg4 Pg6 Ph6

    Supposedly there is a forced mate here in 83 ply (or forty two moves, for commoners). Let’s imagine how this works.

    An immediate observation, if you know chess at all: at some point, White is going to have to be put in Zugzwang. That’s a way off right now.

    One more. White cannot move his king to a4; otherwise Black counters with a5-6, which is Game Over (and easily countable, for those obsessives who care).

    One more. Black’s bishop can indefinitely shuffle along the diagonal, and still protect a4. This may be important, if Black needs to advance the pawn from c7-6. It’s not particularly important and it doesn’t broaden the decision tree much, but it’s still worth bearing in mind.

    One more. Simplify. I can confidently predict that the first move is b6-5, followed by two more moves that leave the Black king on b5.

    One more. Clearly, the White pawn on e4 gets wiped out.

    One more. At that point, the Black king can wander all over the board at will. The only aim here is to achieve a Zugzwang. I haven’t really analysed this, but it looks like the Black king cannot move the White king off the b3, c3, c4 triangle without moving all the way around to either the a or b column.

    And that’s about it. I may be wrong, but at some point a5-6 is inevitably going to lead to checkmate.

    Whether or not it really is 83 ply or not, I couldn’t guarantee. But my point is, the U-18 junior chess champion of (say) Winnipeg (a city famed for its coterie of above-average intellects) would.

    Oh, and also a Commodore Pet. And if you stopped parboiling the frog and chucking the pig off a cliff … possibly those, too.

    Now, about them bumblebees …

  17. DrLoser says:

    350 GHz silicon is used in radar. Yes, it’s in some police hand-held radar guns.

    Dubious, considering that most of them operate on X band (8-12 GHz) or K band (18-24 GHz, possibly extended to 40 GHz). One of them is susceptible to atmospheric moisture, I forget which. Consult Robert in order to have your feeble beliefs plastered all over the ceiling, as, for instance with the earlier MH15 discussion.

    But let’s stipulate 350 GHz. Let’s even stipulate the element Silicon. (I really neither know nor care.)

    You’re talking about a modulated transmitter that basically works like an oscilloscope, aren’t you, Fifi?

    Exercise for somebody who spends most of his economically productive life leaning on a lamp-post in high heels, fishnet stockings, and a rather spiffy red leather miniskirt:

    Describe the similarities between the minimal circuit (the “transmitter,” although you may also wish to include the “receiver” in your submission) required for a 350 GHz radar gun, and any sort of CPU that is reasonably achievable, scaled up to more than ten units, in the next fifty years.

    For a smart little girlie of your leanings, Fifi, that should be trivial.

  18. DrLoser says:

    No consumer would be happy with Google if they did bubble-sorts on the web.

    Normally I just leave idiotic statements by oiaohm dangling.

    In this case, I will make an exception, Robert. Just for you, I will leave this one dangling.

  19. DrLoser says:

    Parallel processing is parallel processing whether it’s distributed across a SoC or the planet.

    That’s not remotely “moving the goalposts,” Robert, and if you insist on accusing me of that vice, I swear I will pinpoint the many times you do so yourself. So far, I have left your evident tendency open to inference.

    Massively distributed off-line processing (which is what Google and Bing do) is entirely different from massively parallelisable sorts (the type that oiaohm is feebly trying to comprehend, sans knowledge of Dr Dobbs), or even massively parallel algorithms (the type that ram is infinitely better equipped to explain to you than I am).

    Look. Let’s presume that I have a Machine that crawls the entire Web on Monday by 12:00. Let’s further presume that I have 10,000+ networked servers in the background that can somehow boil all of this down to “the most popular 5,000 queries.” Which, more or less, and allowing for Bayesian priors, is what these server farms do.

    It’s not “parallel processing” (although it has its own problems with numerical analysis, most particularly skew and in extreme cases kurtosis). It’s distributed processing over a network, with latency and duplication and resolution and all sorts of problems relevant only to distributed processing over a network.

    So, no, there is a blatantly obvious difference. And no, I am not moving the goalposts.

    You want to talk about distributed processing? We can do that too.

  20. DrLoser, again moving the goalposts, wrote, “It’s not “parallel processing” in the sense that we have been discussing.”

    Parallel processing is parallel processing whether it’s distributed across a SoC or the planet. No consumer would be happy with Google if they did bubble-sorts on the web. Consumers demand parallel processing and they get it because folks can afford to provide it cheaply with GNU/Linux.

  21. DrLoser says:

    Well, why do you think they use Hadoop and all those servers if it’s not parallel processing?

    See my previous post, Robert, and engage brain.

    It’s not “parallel processing” in the sense that we have been discussing. If it were online, it might be. But it’s not. It’s offline.

    I did warn you. I have no interest in you making yourself sound completely ignorant.

    Google, Bing, Hadoop and so on are completely off-line parallelizations. Each and every one of them just cans up the “best five thousand” results for serialized consumption. (I can go further into the details beyond that, if you want. But it’s good enough for now, and for 80% of queries.)

    You’re thinking about massively distributed computation. Which, as a moment’s thought will tell you, is a completely different problem from massively parallel computations.
    By an accident of history, I have some knowledge of both. By some sort of nasty accident, oiaohm has no clue about either.

    You, Robert? I have high hopes that you can learn from the likes of Deaf Spy and me. And, indeed, thus-wise progress to that Pascal library you have always wanted to give back to FLOSS, but have never quite found the time for.

    We can show you the way, Robert!

  22. DrLoser, denying reality, wrote, ““massive parallelism”; it doesn’t even exist in any meaningful sense.”

    Well, why do you think they use Hadoop and all those servers if it’s not parallel processing? See also, Hadoop.

  23. DrLoser says:

    DrLoser, the problem was it was obsolete by the time Dr Dobbs published it.

    Just for once, Fifi, do me the courtesy of actually reading my post before responding to it. As I pointed out:

    Nothing at all about the article in question had anything to do with “beating the fastest sort available.”

    And nothing I have said ever implied that it did.

    Since it didn’t try to answer the spurious question you pose, it can hardly be called “out of date” by the spurious measures you insist upon, can it?

    Face it, Fifi, you’re not interested in algorithms at all. You’re interested only in convincing yourself (for probably specious reasons) that you’ve scored a debating point, aren’t you?

    DrLoser, the issue is that Dr Dobbs is good for those learning to code, not very good for those wanting performance; by the time something appeared in Dr Dobbs it was normally 12 to 18 months out of date.

    I’m sure the contributors will appreciate your lofty ivory tower aspirations, Fifi. I’m equally sure they could wipe the floor with them. Come back when you, yourself, have “learned to code,” won’t you?

    It’ll be an achievement that will almost certainly enhance that successful IT curriculum vitae that I am sure you are just bursting to tell us all about.

    In the mean time: sorry, but the rest of us read things like Dr Dobbs because it stimulates our enthusiasm and curiosity.

    Which is better than just plonking an arm into the Internet bran-tub and coming up with ludicrously inept propositions, I think.

  24. DrLoser says:

    Exactly. Try using Google, for instance. Google uses huge clusters of commodity servers to get that performance with lots of parallelism.

    Have you considered how Google and Bing leverage that massive parallelism, Robert? Clue: it’s not qsort, or anything like it. It isn’t even really leveraged for serving an HTTP request (a certain amount of parallelism is, based upon subdividing the SERP into maybe twenty separate channels, but we’re really not talking about massive parallel web-crawling here).

    I’ll leave you to consider the implications of that. And don’t forget, I’ve worked on the things.

    Consumers appreciate that and they can get that performance from their smartphones or desktops from any browser.

    Once you’ve finished considering the first half of my response, you will appreciate quite how silly this assertion sounds. Because not only do consumers neither notice nor appreciate this “massive parallelism”; it doesn’t even exist in any meaningful sense.

  25. DrLoser says:

    In reality it is most likely a good idea to make the sorting method settable based on the system you are running on.

    “In reality it is most likely” a completely bonkers idea, unless you have a specialist requirement like ram’s 2000-core supercomputer. At base, all decent not-in-place sorting algorithms are O(n log n) or thereabouts. Quicksort can often perform better, but is not guaranteed to do so, and the difference is of no consequence whatsoever in everyday cases.
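
    If anyone cares to quantify “of no consequence,” here is a crude pure-Python benchmark comparing two O(n log n) sorts on an equal footing. heapq is standard library; the input size and the exact ratio are illustrative only.

        import heapq, random, time

        def heapsort(xs):
            h = list(xs)
            heapq.heapify(h)
            return [heapq.heappop(h) for _ in range(len(h))]

        def mergesort(xs):
            if len(xs) <= 1:
                return list(xs)
            mid = len(xs) // 2
            # heapq.merge lazily merges two sorted sequences
            return list(heapq.merge(mergesort(xs[:mid]), mergesort(xs[mid:])))

        data = [random.random() for _ in range(200_000)]
        for name, fn in (("heapsort", heapsort), ("mergesort", mergesort)):
            t0 = time.perf_counter()
            out = fn(data)
            dt = time.perf_counter() - t0
            assert out == sorted(data)
            print(name, round(dt, 2), "s")

    Same Big-O, same data; whatever gap shows up is a constant factor, which is the point.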

    And I notice you’ve blithely skipped over cache-busting.

    And I notice you’ve blithely skipped over the obvious fact that, if your Theta and your Omega are (for all practical measurements) identical, you have a very reliable algorithm.

    And that’s without me having to point out to the seriously obtuse that there are precious few applications out there that spend a significant amount of their time sorting things. And that any minimal advantage that quicksort might gain is generally swept away by the time-cost of the rest of the system.

  26. Deaf Spy wrote, “Anyone can start many processes on a PC since, hm, the 90s. Following your logic, every PC user has tremendous experience with multicore servers.”

    Exactly. Try using Google, for instance. Google uses huge clusters of commodity servers to get that performance with lots of parallelism. Consumers appreciate that and they can get that performance from their smartphones or desktops from any browser. Here’s the description of how Google started. Here’s how they work today, with millions of servers.

  27. oiaohm says:

    Heapsort vs parallel quicksort: if you get the research papers, you will find it depends on how good your memory interconnect is. Intel-based papers normally say heapsort is faster; AMD-system papers normally say parallel quicksort is faster.

    In reality it is most likely a good idea to make the sorting method settable based on the system you are running on.

  28. oiaohm says:

    It went out of date rather quickly, didn’t it, considering that it was published in September 2014?

    DrLoser, the problem was it was obsolete by the time Dr Dobbs published it. Sorting is one of the most common areas that PhD people like messing with.

    http://www.iman1.jo/iman1/images/IMAN1-Success-Stories/Parallel%20Sorting%20Algorithms%20using%20IMAN1.pdf

    This one is done on Intel-based hardware, before Dr Dobbs published. There is also an AMD one.

    A bog-standard parallel quicksort is in fact very hard to beat.

    DrLoser, the issue is that Dr Dobbs is good for those learning to code, not very good for those wanting performance; by the time something appeared in Dr Dobbs it was normally 12 to 18 months out of date.

  29. DrLoser says:

    (Oops — that’s Omega and Theta for merge sort, not for heapsort.)

  30. DrLoser says:

    Oh, and as Deaf Spy points out, there are cases with large data sets where heapsort is more cache-friendly than qsort. Without having studied the issue in great detail, I think this implies that (for the same cases) merge sort will exhibit the same relative cache-friendliness.

    Furthermore, Omega and Theta for heap sort are (within a tiny tolerance) identical. Which has the useful property that, with heap sort, you can actually predict how long your sort is going to take.

    Which means that an efficiently parallelisable non-in-place merge sort is, indeed, what I would characterise as “interesting.” Of course, “interesting” in this case requires a modicum of thought, oiaohm.

  31. DrLoser says:

    The Dr Dobbs Archive! is out of date. Merge sort is slower than a qsort merge hybrid.

    It went out of date rather quickly, didn’t it, considering that it was published in September 2014? And it doesn’t even mention qsort. Though it does mention an introsort technique, equivalent to the qsort merge hybrid (which I and Deaf Spy have already described).

    And I didn’t offer it as the Sort to Beat All Sorts.

    I offered it as an example of an interesting article available only recently for free on Dr Dobbs. For my purposes, oiaohm, “interesting” means “a stimulating read.” For your purposes, evidently, “interesting” means “something I can grab a poorly-understood paragraph out of and pretend that it has something to do with whatever fantastic claim I am currently making.”

    Be careful with that Gish Gun, oiaohm: it’s apt to go off in your face.

  32. DrLoser says:

    With 2000 64-bit cores, you might make a move or two and then it tells you something like “forced mate in 33 moves”, make another move and the message turns to something along the lines of “forced mate in 12 moves”. Chess to it is like tic-tac-toe is to us!

    With a single human brain, you might make a move or two and then it tells you something like “forced mate in 33 moves.” This isn’t much of an example of anything.

    In a number of chess games played between masters, you’ll find “forced mate in ten” or more. I seem to recall seeing a victory by Paul Morphy that was a “forced mate in twenty,” but I can’t dredge it up from memory. Ten is a good enough number.

    “Forced mate in ten” simply tells you that the decision tree is very constrained: in fact, at any given point, it’s going to be even more constrained than your tic-tac-toe example. It might be ten (or thirty three) deep, but at each node along the way there’s unlikely to be more than two plausible decisions. In fact, for the plurality of nodes, there’s only going to be one.

    You don’t need a massive parallel 64-bit architecture running Linux for this. A Commodore Pet would probably do.

  33. oiaohm says:

    In fact a transfer on an AMD processor from L1 to L2 to L3, then by HyperTransport into a different CPU’s L3, L2 and L1, can also take effectively zero time, because the CPU is running other threads while these transfers are going on.

    The 63 ns that going out to the MMU costs you is a killer. All those steps above on an AMD processor cost at worst 15 ns, and a thread might be running longer than 15 ns. This is why it is so important to remain inside the caches and the very high speed transports.

    Transfers between CPUs can effectively cost zero if your solution is designed to have enough worker threads and each worker thread runs long enough.

  34. oiaohm says:

    This is hilarious, oh, Imminent Ignoramus. Your own source clearly speaks of shared caches, and, consequently, discusses L2 / L3. You do have serious reading-comprehension problems, don’t you?

    Only shared caches can be sliced by threads. Period. L1 on x86 is not shared. L1 and L2 in UltraSparc are not shared.

    You fail hard. Again.
    Really, you have failed again.

    Deaf Spy, how many idiot mistakes in a row are you going for? How many active threads per core? x86 has 2 threads per core and UltraSparc has 8 threads per core. The active threads of a core are required to be in the L1 cache.

    So L1 in an x86 is shared between at least two threads, and you need at least 2 threads for an x86 L1 to be fully allocated. For the UltraSparc L1 and L2 caches to be fully allocated you need 8 threads. A single thread on either x86 or UltraSparc is not going to allow full allocation of the cache, so it underperforms.

    Sorry, the L1 of x86 and the L1 and L2 of UltraSparc are sliced.

    It gets even more interesting: since CPU cores can only be running one thread at a time, a data transfer made while a core runs a thread that does not require that data can effectively cost zero time. So a transfer from L1 to L2 to L3, then back up through L2 and L1 on a different core, can cost zero processing time.

    Deaf Spy, you don’t understand the basics of modern CPU design: all caches in modern CPUs are shared caches. The difference is whether a cache is shared between different cores or just between threads in a single core. Caches shared between threads in a single core and caches shared between cores both implement fair-cache formulas.

    Deaf Spy, so far you have a zero success rate on all your challenges.

    In fact you managed to completely misread the paper I provided.

  35. Deaf Spy says:

    I don’t need your help with that. I’ve been parallel-computing since 1968.
    Wonderful, Mr. Pogson. I am eager to see your implementation, and the benchmark results.

    I started with I/O overlapping computation and graduated to multicore server clusters in this century. Beast routinely runs ~200 processes and more when I want to build a kernel.
    Anyone can start many processes on a PC since, hm, the 90s. Following your logic, every PC user has tremendous experience with multicore servers.

  36. Deaf Spy says:

    Introduce hyper-threading and you introduce a stack of horrible optimizations to the caches.
    ftp://ftp.deas.harvard.edu/techreports/tr-17-06.pdf

    So yes, L1, L2 and L3 are sliced by threads.

    This is hilarious, oh, Imminent Ignoramus. Your own source clearly speaks of shared caches, and, consequently, discusses L2 / L3. You do have serious reading-comprehension problems, don’t you?

    Only shared caches can be sliced by threads. Period. L1 on x86 is not shared. L1 and L2 in UltraSparc are not shared.

    You fail hard. Again.

  37. ram says:

    With 2000 64-bit cores, you might make a move or two and then it tells you something like “forced mate in 33 moves”, make another move and the message turns to something along the lines of “forced mate in 12 moves”. Chess to it is like tic-tac-toe is to us!

    P.S. It is not particularly set up for chess; I just happen to have the Stockfish chess engine in there.

  38. ram wrote, “don’t try playing chess against it ;-D”

    Chuckle. That comment brought back a memory. Shortly after the first microprocessors, some primitive PCs and chess-playing devices emerged. I bought one. It was terrifying to play because it made no mistakes… until the board opened up. The human mind is no better at thinking one or two moves ahead, but when the board opens up an 8-bit processor with tiny RAM is out of luck, while the human mind can just ignore empty space and soar above the field of battle.

  39. dougman wrote of M$, “Seems to me that they ran out of ideas, so they are copying everyone else’s ideas.”

    M$ hasn’t had a real idea about software since BASIC. They bought DOS for $50K and IBM let them lock in the world. Lose ‘9x was DOS+. NT was hired help. 2K+ was designed by salesmen who combined ideas from NT and DOS and got it wrong, really wrong. The great rewrite brought the joke that was Vista. Now we have 8.* and the Missing “9”. Any time they tried to reinvent the wheel they got square wheels.

    Meanwhile GNU/Linux evolved steadily with a constant rate of improvement from the kernel to the apps. Android/Linux got right what SUN failed to do in the 1990s. LibreOffice is a better office suite than M$ ever produced. FireFox is a proper browser unlike that bastard, Internet Exploder. Even M$ is ditching it, finally. M$ is at least a decade behind the rest of the world. They’ve been holding us back for decades. I’m glad that’s over.

  40. dougman says:

    “The average user is ideal for Linux, because this user:

    – Doesn’t want to upgrade to the latest-greatest
    – Doesn’t game
    – Spends the majority of their time within a browser
    – Is prone to installing toolbars, screensavers, and apps to “speed up their PCs”
    – Complains every time they have to “spend money to remove junk”

    These users no longer depend on a platform, but on Software as Service (SaaS). This is an arena in which Linux has been superior since inception — working with and on the internet.”

    http://www.techrepublic.com/article/the-most-obvious-user-for-linux-isnt-who-you-think/

  41. DrLoser wrote, “help oiaohm and Robert out with this tricky “parallel computing” business.”

    I don’t need your help with that. I’ve been parallel-computing since 1968. I started with I/O overlapping computation and graduated to multicore server clusters in this century. Beast routinely runs ~200 processes and more when I want to build a kernel. Probably the busiest system I developed ran ~40 simultaneous users on a single dual-core terminal server back about 2006. Performance was somewhat better with ~100 users on 4 servers distributed over two server-rooms. My main number-crunching career did a lot of Monte Carlo calculations and data-correlations but that was before the PC-era and we mostly waited. Most of those processes would have been amenable to parallel computing except we did not have many processors. I think when I started the University of Manitoba Physics department had just two mini-computers and a good share of a mainframe with primitive networking. The PC-era took off about the time I retired from that career but I certainly know how to do things like that. My ballistics calculator is too small a problem to need that technology. Even the kernel takes only a few minutes to build on the Beast.

  42. ram says:

    Speaking of multicore speedup, I have a Linux cluster with around 2000 cores. Bootup is slow, but parallel applications (e.g. rendering, transcoding, simulations) run like “greased lightning”, like around 1 TeraFLOP. Also, don’t try playing chess against it ;-D

  43. oiaohm says:

    http://www.hpcwire.com/2015/01/22/compilers-amdahls-law-still-relevant/
    Deaf Spy’s and DrLoser’s arguments against multi-core speedup are also disproven by many HPC white papers.

  44. oiaohm says:

    I don’t see why Plod can’t stagger by with the bog-standard US national issue 24.15 GHz.
    Funnily enough, there is a reason why USA radar detectors don’t work all the time in Australia: some of the Australian-issue guns transmit at 100 GHz. The frequency alterations off of that require the higher-end chips. Of course optical is used more these days.

    The Dr Dobbs Archive! is out of date. Merge sort is slower than a qsort merge hybrid.

    As Deaf Spy points out, much of this resource contention happens specifically in the L1 cache. (Contention for L2 and L3 is, in principle, the same for SMT-enabled hardware as it is for simple multi-core.)
    Note the words here: simple SMT and multi-core.

    Introduce hyper-threading and you introduce a stack of horrible optimizations to the caches.
    ftp://ftp.deas.harvard.edu/techreports/tr-17-06.pdf

    Yes, there is such a thing as cache-fair.

    Each thread gets a slice of L1, L2 and L3.
    BWAHAHAHAHAHA!

    No, it doesn’t, oh, Imminent Ignoramus. Each core has its own L1. L2 and 3 can be shared among cores. That is true for x86, ARM and even UltraSparc.

    A cache-fair system means the amounts of L1, L2 and L3 you can allocate are directly linked to the number of threads you are running. So yes, L1, L2 and L3 are sliced by threads. Apparently you don’t have any clue how modern CPUs work.

    Deaf Spy, threads are known to the modern-day CPU and do affect how much cache each gets. The implementation of cache-fair expects that cores will be running multiple threads. Shove a single thread down this and you end up unable to use large sections of the cache, so you pay a high price in extra transfers from the MMU.

    CPUs before cache-fair don’t have the strange behavior of a greater-than-linear increase in speed. Basically, performance is crippled by putting single-threaded code into a CPU that implements cache-fair.

    http://konfist.fl.kpi.ua/en/node/269
    There is paper after paper showing SUPERLINEAR SPEEDUP IN PARALLEL COMPUTING ON THE EXAMPLE OF QUICKSORT ALGORITHM

    It took a long time to work out where the evil CPU bug is. This paper covers everything I have been saying; basically DrLoser and Deaf Spy are idiots on this topic.

    Yes, asking someone to implement a qsort to demonstrate that something is not linear is a huge mistake.

    Yes, insanity, right? You need to multi-thread or cache-fair will kick your performance in the nuts.

    So it’s a balancing act between doing what cache-fair requires and Amdahl’s law.

    Stupid as it sounds, cache-fair is stupid enough that a thread which straight-up stalls but gets key data into the L1/L2/L3 caches can magically increase performance.

  45. DrLoser says:

    Oh, and while you’re at it, help oiaohm and Robert out with this tricky “parallel computing” business. Neither one of them seems to grasp the concept.

    Time for a small-town salesman with no appreciable education or expertise to step in and take the lead!

    Your starter for ten, Dougie: parallel qsort.

    On any non bit-rot platform of your choice. If you wish, you can hum along to your favourite “tunes” by Schoenberg or Carter. (That was hilarious! I didn’t know you had it in you, Dougie!)

    Now, is oiaohm a genius savant, or is he an ignorant buffoon?

    Let’s stick to the subject matter at hand, shall we?

  46. DrLoser says:

    Seems to me that they ran out of ideas, so they are copying everyone else’s ideas.

    Splendid! And fresh from your joint success with oiaohm in teaching Intel how to not suck eggs, as documented by Robert, you, Dougie, would obviously be the best person to offer up these ideas about bit-rot, as you call it.

    I don’t want to stress your limited cognitive abilities, so here’s a simple question:

    How does the Linux desktop differ in this regard?

    (You can take as much time as you like. You can pick a distro of your own choice. You can explain how to get from version X to version X+1.)

    All hail the Conquering Snake-Oil Salesman!

  47. dougman says:

    I wonder if M$ Win-Dohs (skipped over 9, so as to distance itself from version 8) 10 will solve the notorious bitrot problem.

    http://www.networkworld.com/article/2690354/microsoft-subnet/will-windows-10-address-the-operating-systems-biggest-weakness.html

    Isn’t it funny that M$ has seen fit to offer snapping apps and multiple desktops, way after they have been available in Linux? Seems to me that they ran out of ideas, so they are copying everyone else’s ideas.

  48. DrLoser says:

    OK, here we go with the benefit of a Dr Dobbs article I haven’t even bothered to read (not even oiaohm’s customary first paragraph), but which appears to twitch oiaohm’s synapses:

    Parallel in-place merge sort

    Welcome back, Dr Dobbs! I’ve missed you!

  49. DrLoser says:

    (It seems to be a bit messed up, but they only went to “free archive” on Jan 15th. Give them time.)

  50. DrLoser says:

    Now, this has to be a genuine first. Microsoft Troll Contributes to Teh Communiteh!

    I’ve had some difficulty in finding, say, Herb Sutter links for oiaohm’s entertainment and instruction. For a while, they’ve been behind a pay-wall. (Rather pointlessly, as far as I can see.)

    Ladies and Gentlemen, I present: The Dr Dobbs Archive!

    Haven’t looked into it yet, but if it works at all, there’s ample scope for Fifi to tie his own Louboutin shoe-laces into a knot …

    Oh, and Robert? They managed the odd article or two on Pascal, as I remember.

    We’re both SOOL on 1970s minicomputers with card-reader technology, I suspect.

  51. DrLoser says:

    Incidentally, accessing the MMU has nothing to do with anything.

    HCF with the Gish Galloping already, oiaohm.

  52. DrLoser says:

    350 GHz silicon is used in radar. Yes, it’s in some police hand-held radar guns.

    I don’t see why Plod can’t stagger by with the bog-standard US national issue 24.15 GHz. Radar seems to work reasonably well at that frequency, given possible calibration issues. (I’m no expert, but I don’t think the choice of frequency makes an appreciable difference to the effort required to calibrate the device.)

    Oh, wait oiaohm, you were confidently asserting that the CPUs in a police radar gun operate at 350 GHz, weren’t you?

    BWAHAHAHAHAHA!

  53. DrLoser says:

    Modern-day CPUs are highly biased to performing with multi-threaded code over single-threaded code.

    No they’re not. For the most part, they don’t even notice the difference. They rumble on, blissfully unaware of anything that the OS considers to be “a thread.”

    Obviously a multi-core cpu has the ability to “perform with multi-threaded code,” but that’s such a trivial observation that I’d be surprised if even you, oiaohm, could be bothered to make it. So, we’re left with various flavours of SMT.

    A few observations here:
    1) You don’t “magically” get N*perf for N SMTs. In fact, for highly-optimised code (ie work-stealing), you may well notice almost no difference at all … perhaps 10% on a good day. (That’s 110% of the core for each single core.)
    2) Simply by the nature of how it works, SMT is subject to more resource contention than a non-SMT core, and resource contention is the enemy of massive parallelisation.
    3) As Deaf Spy points out, much of this resource contention happens specifically in the L1 cache. (Contention for L2 and L3 is, in principle, the same for SMT-enabled hardware as it is for simple multi-core.)

    I suppose I could construct an extremely peculiar (and rather specialist) software platform in which threads are evenly divided between those that do almost nothing but integer arithmetic, and those that do almost nothing but floating-point arithmetic. Because of the way resources in a core are typically designed, I reckon this would give me maximum bang for the buck out of an SMT core, or at least one with two symmetric threads. It’d still be subject to resource contention, though.

    None of which is relevant to the basic original observation:

    You don’t get a linear increase in performance just by adding new cores (or SMTs). As a rule of thumb, without careful cache management and software optimisation, you won’t see better than 2.5x for a 4-core.

    Like the qsort: we know that 16 cores of qsort implemented correctly will scale linearly or exceed linear scaling.

    Still less will you exceed linear scale. This is just outright fantasy on your part, oiaohm. We know nothing of the sort, because your contention is preposterous.

    The problem here is that there is an inverse to Amdahl’s law caused by CPU caches, as more threads result in less and less need to access the MMU.

    An inverse to Amdahl’s law? Amdahl’s law states that you cannot do better than S + P/N, where S is the unavoidable serial work, P is the parallelizable work, and N is the number of processors. (This is simplified, but it will do.)
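
    For concreteness, here is the law as stated, turned into numbers; the 10% serial fraction is an arbitrary example.

        # Amdahl's law: normalised runtime on N processors is S + P/N,
        # so the best possible speedup is 1 / (S + P/N).
        def amdahl_speedup(serial_fraction, n_cores):
            s = serial_fraction
            p = 1.0 - s
            return 1.0 / (s + p / n_cores)

        for n in (2, 4, 16, 1000):
            print(n, "cores:", round(amdahl_speedup(0.10, n), 2), "x")
        # 2 cores: 1.82x, 4: 3.08x, 16: 6.4x, 1000: 9.91x; never 10x or better.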

    What sort of inverse law are you implying, oiaohm? “-S + P/N” perhaps? Things get faster, the more unavoidably serial work you have to do?

    You’re off your rocker on this one.

  54. Deaf Spy says:

    Each thread gets a slice of L1, L2 and L3.
    BWAHAHAHAHAHA!

    No, it doesn’t, oh, Imminent Ignoramus. Each core has its own L1. L2 and 3 can be shared among cores. That is true for x86, ARM and even UltraSparc.

  55. Deaf Spy says:

    1. Number of elements Insert Sort.
    I meant:

    1. Number of elements less than N – use Insert Sort.

    Time to brew some nice black tea.

  56. Deaf Spy says:

    Multi-threading can exceed linear scaling over a single thread if the result is more use of the faster RAM in the CPU itself.

    BWAHAHAHAHAHA!

  57. Deaf Spy says:

    One minor quibble with myself re the cost of {N ~= 10} qsort vs bubble-sort. The implication here is that the cost of recursion (in qsort), as in the cost of manipulating the stack, is more expensive than the cost of a simple single stack-frame bubble-sort.

    Correct, dear Doctor. That is why most sensible generic implementations (both .NET, and Java) implement sorting like that:
    1. Number of elements Insert Sort.
    2. Number of elements > N => qsort until a section becomes less than 32, then employ Insert Sort.

    .NET actually uses heapsort for very large Ns. Turns out it is more cache-friendly.

    As for your good advice to our dear Mr. Pogson, I would say he can get tangible results with something much simpler. He can pre-create four or eight TThread objects and use them as workers using events. A poor man’s threadpool, you would say, but it will do the job for the test. Then, on every recursion, he can split the workload across the total number of threads, and then go recursive. Let’s keep it simple and refrain from work-stealing. He can use a semaphore as a barrier to wait for all threads to finish, and then merge the results.
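
    For the cutover recipe itself, a minimal single-threaded sketch (in Python rather than Pascal, purely for brevity; the 32-element cutoff is from above, and a production introsort would also add the heapsort fallback noted earlier):

        import random

        CUTOFF = 32  # below this, insertion sort wins on constant factors

        def insertion_sort(a, lo, hi):
            for i in range(lo + 1, hi + 1):
                key, j = a[i], i - 1
                while j >= lo and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key

        def hybrid_sort(a, lo=0, hi=None):
            if hi is None:
                hi = len(a) - 1
            while lo < hi:
                if hi - lo + 1 <= CUTOFF:
                    insertion_sort(a, lo, hi)
                    return
                pivot = a[(lo + hi) // 2]
                i, j = lo, hi
                while i <= j:  # Hoare-style partition
                    while a[i] < pivot:
                        i += 1
                    while a[j] > pivot:
                        j -= 1
                    if i <= j:
                        a[i], a[j] = a[j], a[i]
                        i, j = i + 1, j - 1
                hybrid_sort(a, lo, j)  # recurse into the left part
                lo = i  # iterate into the right part

        data = [random.random() for _ in range(100_000)]
        hybrid_sort(data)
        assert data == sorted(data)

    Splitting the first level or two of recursion across the pre-created workers, as described above, is then a small change.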

    Now, Mr. Pogson, armed with so much good advice, would you try and publish the results? After that, we will sit and discuss linear speed vs. concurrent speed again.

  58. oiaohm says:

    Note: if you have to go through the MMU to transfer data between independent CPU caches, as Intel requires, you are so performance-screwed it’s not funny. The RAM connected to the MMU in fact cannot match the transport speeds.

    HyperTransport on AMD, for example, moves data at the same speed the L3 cache can.

    If your program can remain mostly in the caches it is going to run like a bat out of hell.
    https://gist.github.com/spion/3049314 Yes, it explains oddities like this as well, where a JIT is kicking the butt of native code.

  59. oiaohm says:

    Multi-threading can exceed linear scaling over a single thread if the result is more use of the faster RAM in the CPU itself.
    OK, you don’t get it, right?
    … Wait a minute, that’s senseless gibberish, isn’t it?
    DrLoser, it’s not. This is the problem I have with you: over and over again you call my stuff senseless gibberish when it’s not.

    The cache RAM inside the CPU is a fixed size. How it is segmented is important. What is in L1, L2 and L3 can all be different. Each thread gets a slice of L1, L2 and L3. Threads sharing data get a combined cache area in the CPU.

    So the sharing of results between independent threads may not go out to the MMU controller; it will be just requests inside the CPU caches. Yet a single-threaded version will only have one thread’s worth of cache memory, so it will have to read more often from the slower MMU instead of out of the CPU cache.

    Modern-day CPUs are highly biased to performing with multi-threaded code over single-threaded code.

    So exceeding linear scaling happens quite a bit. Resource access is something Amdahl’s law does touch on. The problem here is that there is an inverse to Amdahl’s law caused by CPU caches, as more threads result in less and less need to access the MMU.

  60. oiaohm says:

    http://www.nytimes.com/2006/06/20/technology/20chip.html?_r=0
    At room temperature, the chips operate at 350 gigahertz, far faster than other chips in commercial use today.

    If you had read my link properly, DrLoser: 350 gigahertz needs no fancy cooling and does not run as hot as the sun; general air cooling will do. 350 GHz silicon is used in radar. Yes, it’s in some police hand-held radar guns. Air cooling is truly good enough. The problem is how leakage is prevented in 350 GHz silicon: make the insulation areas in the silicon design larger.

    If you took an ARM 64-bit chip and remade it using current 350 GHz tech, the chip would be somewhere in the 10 cm x 10 cm range. Too huge to be practical.

    If this stuff is even remotely achievable on the scale of a modern CPU, why don’t we see IBM (or Georgia Tech) crowing about, say, a 20GHz single core chip? You could scale down that non-supercooled result by twenty times and achieve that. How difficult could it be?

    To go up by a factor of 5 from current chips at 4 GHz, the silicon size would have to increase by a factor of 4 while remaining at the same nm. So where you currently produce 16 chips, you produce only 1. This pushes the defect rate through the roof. You could expect a 20 GHz CPU to cost like half a million bucks each due to the high failure rate. If we get to 8 to 10 GHz, making a 20 GHz chip by upscaling the insulation might be tempting, but until then forget it.

    Like the qsort: we know that 16 cores of qsort implemented correctly will scale linearly or exceed linear scaling. So 16 cores could be giving you the equal of 64 GHz+. Even if what you are doing is not that good by Amdahl’s law, it has to be fairly bad to make 16 cores at 4 GHz slower than 1 core at 20 GHz. You can throw away 2/3 of your processing power and the 16 cores at 4 GHz still win. Worse, the power usage would be about equal between the 1 core running at 20 GHz and the 16 cores running at 4 GHz flat out.

    DrLoser, making a 20 GHz CPU chip is not hard; the problem is cost justification for how crappy its performance is going to be. Making a 350 GHz CPU that runs at room temperature is hard but not impossible, but it would be the CPU designer’s equivalent of a practical-joke item that no one could really justify making for anything other than showing off.

    In reality we are nowhere near the clock limit. Why we cannot go faster is nothing more than insulation problems. Every new method that fixes some of the insulation problems allows more and more speed.

    The reason we got stuck under 4 GHz was not that we could not go faster; it was that going faster was not sane when multiple cores were better bang for the buck.

  61. DrLoser says:

    Multi-threading can exceed linear scaling over a single thread if the result is more use of the faster RAM in the CPU itself.

    BWAHA …

    … Wait a minute, that’s senseless gibberish, isn’t it?

  62. DrLoser says:

    Not only is single-threaded qsort faster on Linux and OS X due to lower OS overhead; you get these bonkers effects when you go multi.

    BWAHAHAHAHAHA!

  63. DrLoser says:

    There are many reasons to multi thread sometimes just to make sure some OS”s don’t cut cpu clockspeed.

    BWAHAHAHAHAHA!

  64. DrLoser says:

    DrLoser If I can find the blog entry he did a qsort like on a 4 core processor with hyperthreading it was 10 times faster than running as a single thread on OS X.

    BWAHAHAHAHAHA!

  65. DrLoser says:

    I may be wrong about my guess of 10^5, btw.

    I wouldn’t classify that as “a simple circuit.”

  66. DrLoser says:

    DrLoser without massive cooling the speed was 350 GHz.in that paper I pulled. Still way above what we can currently use in more complex chips.

    Oh, goody, you do actually occasionally read into the second paragraph of your cites, oiaohm. Sadly not the third, as I have pointed out. Now, as to that 350 GHz bit:

    Without the helium cooling, IBM and Georgia Tech have been able to operate the simple circuits at around 350 GHz.

    Let us take that on trust. What do we learn from it? One fact immediately springs out: you can get 500 GHz with supercooled Helium — as an aside, Helium has very interesting properties around 1K, as Richard Feynman pointed out — and 350 GHz without. I’m taking this “fact” on trust, pending either peer review (you, oiaohm, are not a peer) or experimental replication.

    And that’s it. That’s all we are told.

    Before posing the obvious questions about the experiment in particular, let me pose this blatantly obvious question:

    If this stuff is even remotely achievable on the scale of a modern CPU, why don’t we see IBM (or Georgia Tech) crowing about, say, a 20GHz single core chip? You could scale down that non-supercooled result by twenty times and achieve that. How difficult could it be?

    How difficult? I’ll let you figure that one out for yourself, oiaohm. You know everything about everything.

    Now, let’s get real. Take your tin-foil hat off. What else can we assume about this set of experiments?

    1) It was based on doped chips. I believe I have mentioned this. Unless we know what constituted the doping (and indeed how expensive the doping process was), we know nothing.
    2) Fer sure, it wasn’t based on a common-or-garden foundry chip (tens of billions of transistors). My guess is that it’s based on a small die with 10^5 or so transistors, purpose built in order to gain accurate measurements. Once again, we know nothing.
    3) Naturally, the experiment was conducted in lab conditions. Unrealistically steady power supply, unrealistically dirt-free, etc. Again, as far as the real world goes, we know nothing.
    4) We don’t even know the conditions under which this special snowflake chip reached 350 GHz. Room temperature? The core temperature of the sun? My guess would be something slightly more reasonable, such as a liquid nitrogen coolant. But my guess would be as good as anybody’s. Because we know nothing.

    And a few other caveats, but that’s enough. If you see yourself with a career writing Science Fiction, oiaohm, I’d suggest that you at least make what you write sound faintly credible.

    Obviously it would help if you could write intelligible English in the first place.

  67. oiaohm says:

    Robert Pogson
    There’s also not much point in having a CPU clock way faster than RAM.
    This is true. RRAM is coming, and it is a lot faster than the current capacitor-based RAM. The biggest roadblock to going faster is not the silicon but the RAM design we were using, so lower clock speeds were chosen to save on heat and cooling costs.

    DrLoser, without massive cooling the speed was 350 GHz in that paper I pulled. Still way above what we can currently use in more complex chips.
    Not quite ready for laptop prime time in any case, is it?
    I agree 100 percent. But that is the limit. All the other high-speed tests I can find were not run air-cooled the way that old IBM test was.

    Please note I am only saying we will get to 8 GHz; we have really only fixed the roadblock caused by RAM up to about 12 GHz. Yes, 12 GHz is still way short of the maximum speed. Even when we cannot fabricate any smaller, clock speed will still be able to increase if people keep thinking up designs that reduce leakage.

    We were stuck at one speed because sections of the designs we were using could not go any faster. The materials the CPU/RAM designs sit on are able to go at much higher speeds, but we have to work out the designs that will use them. We are currently in another period with no major roadblocks to increasing speed.

    http://wiki.freepascal.org/Parallel_procedures
    DrLoser, if I can find the blog entry: he ran a qsort-like benchmark on a 4-core processor with hyperthreading and it was 10 times faster than running as a single thread on OS X. It turns out a lightly loaded OS X machine has a habit of turning the clock speed down.

    There are many reasons to multi-thread, sometimes just to make sure some OSes don’t cut the CPU clock speed. DrLoser, OpenMP is designed to get you the smallest serial fraction with the least amount of coder work.

    http://stackoverflow.com/questions/11183155/speedup-superlinear-of-the-quicksort-algorithm

    Strange things happen with qsort when you multi-thread it on real-world OSes like Linux and OS X. Not only is single-threaded qsort faster on Linux and OS X due to lower OS overhead, you also get these bonkers super-linear effects when you go multi-threaded. Remember: CPU cache is the fastest RAM in your complete system, and some of it is segregated per CPU core. The fastest qsort has to be multi-threaded to use the most CPU cache.

    Yes, it does kind of look as if this shatters Amdahl’s law, until you wake up to the fact that Amdahl’s law has to be applied to the complete system.

    Multi-threading can exceed linear scaling over a single thread if the result is more use of the faster RAM inside the CPU itself. Of course this requires the OS not to get in the way.

  68. DrLoser says:

    One minor quibble with myself re the cost of {N ~= 10} qsort vs bubble-sort. The implication here is that the cost of recursion (in qsort), as in the cost of manipulating the stack, is more expensive than the cost of a simple single stack-frame bubble-sort.

    Now, thanks to this blog, I have put a little thought into it. (Not much. Just more than the cumulative thought that, say, Dougie has ever put into anything.)

    I think, if you can figure out a way to get Open Pascal to do tail recursion, you might be able to save on the cost of the stack.

    Yet one more reason to choose Pascal as your primary future-proof language!
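
    For the record, a minimal sketch of that trick, in C rather than Pascal and purely illustrative: recurse only into the smaller partition and turn the larger one into the next loop iteration, so the stack depth stays O(log n) even on unlucky input.

    ```c
    #include <stddef.h>

    static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    /* Lomuto partition of a[lo..hi]; returns the pivot's final index. */
    static size_t partition(int *a, size_t lo, size_t hi)
    {
        int pivot = a[hi];
        size_t i = lo;
        for (size_t j = lo; j < hi; j++)
            if (a[j] < pivot) swap(&a[i++], &a[j]);
        swap(&a[i], &a[hi]);
        return i;
    }

    /* Sorts a[lo..hi] inclusive; call as qsort_tail(a, 0, n - 1), n > 0. */
    void qsort_tail(int *a, size_t lo, size_t hi)
    {
        while (lo < hi) {
            size_t p = partition(a, lo, hi);
            if (p - lo < hi - p) {          /* left side is smaller  */
                if (p > lo) qsort_tail(a, lo, p - 1);
                lo = p + 1;                 /* "tail call" on the right */
            } else {                        /* right side is smaller */
                qsort_tail(a, p + 1, hi);
                hi = p - 1;                 /* p >= 1 here, no underflow */
            }
        }
    }
    ```

    Whether Open Pascal’s compiler would do the equivalent tail-call elimination on its own I wouldn’t swear to; writing the loop by hand guarantees it.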

  69. DrLoser says:

    Deaf Spy wrote, “Intel need to speak with Dougie and Ohio how to update their product line.”

    Intel did that years ago when they came out with the Atom and hedged their bets on ARM.

    THEY DID???? I’ve never known either Dougie or oiaohm to be this bashful before. I devoutly hope that one or both of them were paid squillions for their advice.

    Not that oiaohm’s advice is anything to write home about.

    It amounts to some form of chip (who knows what? Presumably a test rig, say a few tens of thousands of transistors) doped with something or other but certainly Germanium, running under lab conditions using some form of coolant operating at close to absolute zero.

    Righty-ho! That’s an easy solution to problems with inadvertent electron tunnelling, isn’t it? Just slow the bastards down!

    Warning: when this technology reaches your laptop, circa 2020 or so, do not place the thing on your lap. Only heads are intended to be frozen cryogenically — I’m sure oiaohm has a contingency plan for his. Testicles do not take kindly to the procedure.

    And oiaohm didn’t bother to read his own cite, as per usual:

    The earliest applications of such knowledge will probably not be for processors, according to Cressler, but rather for high speed circuits used in wireline and wireless communications. Such circuits already crank faster than server processors, and are made using silicon doped with germanium.

    Not quite ready for laptop prime time in any case, is it?

    Doping chips with rare earths is yet another subject that I (who have very little knowledge) am in a better position than oiaohm to judge. (This is not an impressive claim, I know: pick a random Maisie Wong off the street, and she’ll be in a similar position.)

    Quick question, oiaohm: do you know why the Telco industry abandoned the third generation of specially-doped high-speed switches in the fabric of DSLAMs, and decided to rely on a combination of second-gen switches and a slight cheat on the network layers of the routers instead?

    Clue: it’s something to do with the price. Happy googling!

  70. DrLoser says:

    Now, remind me again how Pascal reference variables work, under the hood.

    It’s been a while.

  71. DrLoser says:

    Oh, I forgot to mention. Best to aim for an i386 target first. The ARM memory model is a little more “free and easy” than the Intel memory model, which means you run the risk of inadvertent memory cache flushes, lock convoys, and other rather unpleasant performance-killing beasties under the hood.

    These can of course be controlled with due care and diligence. But let’s not overreach ourselves on the first implementation.

    A solid implementation of parallelized qsort in Open Pascal on an i386 platform is a fine thing to aim for. I, for one, would applaud.

  72. DrLoser says:

    Btw, did you give concurrent qsort a shot? It is really easy, just give a try in pascal, and benchmark the results.

    Let’s not be unfair — it’s possible on Lazarus/Open Pascal. Incidentally, don’t try debugging it on Linux, Robert: it stuffs up X. Better get it right first time!

    Some friendly advice might be in order. First, I recommend you don’t use TThreads. It’s the easy option, but it’s going to kill large-scale parallelisation. There’s the cost of starting the thread up and shutting it down; plus the cost of issuing a message on the main message queue, and the cost at the other end of waiting for it.

    I’m assuming that Lazarus genuinely uses the facilities of the host OS to multi-thread across cores, of course. If it doesn’t, you’re clearly stuffed to begin with. Best to benchmark that first.

    Now, to address the messaging problem. You could use a thread pool, I suppose, and communicate via SendMessage/PostMessage. That’s probably too high-level an abstraction; it’ll cost.

    Naturally, don’t use forks. (I need hardly point that out, but still.) Also, I’m not sure that Open Pascal supports the Yield statement (I seem to recall it was the main tasking primitive in Delphi), but if it does, don’t use that either.

    I don’t think there’s access to native PThreads, so you can’t use condition variables either. (They wouldn’t be ideal for truly massively parallel algorithms, but if you’re only dealing with four cores, they’ll probably get you in the order of a 2.5 speed-up.)

    One of the more promising avenues for research is the MTProcs package. I’m not sure how this works, under the hood. More benchmarking required, Robert!

    For “embarrassingly parallelizable” algorithms like qsort, however, it probably still isn’t good enough. Ideally you want a work-stealing library, which may or may not exist. I’d suggest just -stealing- borrowing a convenient C or C++ work-stealing library, and putting an Open Pascal wrapper around it. In fact, this alone would be “giving back to the community” in a big way. Not least because it would be both very difficult and a pain in the arse.

    Alternatively you could implement your own lock-free queues and build a work-stealing system above that in Native Open Pascal. That surely shouldn’t be too difficult. The good news is that it would generalise to all other “embarrassingly parallelizable” algorithms. Nothing like reusability, is there, Robert?

    Last, but not least, don’t forget that there’s still a significant overhead in shuffling between threads, even with work-stealing. A good rule of thumb is to switch to a bubble-sort on each thread as soon as the number of elements to be sorted is ~10. Turns out that bubble-sort is actually faster in these cases, even on a non-parallelized sort.
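
    To give the shape of the thing, a sketch in C using OpenMP tasks (whose runtimes typically provide the pool-plus-work-distribution described above for you); psort and parallel_qsort are made-up names, and CUTOFF = 10 is the rule of thumb just mentioned, not a tuned value:

    ```c
    #include <omp.h>

    #define CUTOFF 10

    static void swap2(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    /* O(n^2) sort of a[lo..hi]; wins below the cutoff thanks to its
     * tiny constant factor and lack of recursion overhead. */
    static void small_sort(int *a, long lo, long hi)
    {
        for (long end = hi; end > lo; end--)
            for (long j = lo; j < end; j++)
                if (a[j] > a[j + 1]) swap2(&a[j], &a[j + 1]);
    }

    static void psort(int *a, long lo, long hi)
    {
        if (hi - lo < CUTOFF) {
            if (lo < hi) small_sort(a, lo, hi);
            return;
        }
        int pivot = a[hi];                  /* Lomuto partition */
        long i = lo;
        for (long j = lo; j < hi; j++)
            if (a[j] < pivot) swap2(&a[i++], &a[j]);
        swap2(&a[i], &a[hi]);

        /* One child task per half; idle pool threads pick up queued tasks. */
        #pragma omp task
        psort(a, lo, i - 1);
        psort(a, i + 1, hi);
        #pragma omp taskwait
    }

    void parallel_qsort(int *a, long n)
    {
        #pragma omp parallel    /* start the worker pool      */
        #pragma omp single      /* one thread seeds the tasks */
        psort(a, 0, n - 1);
    }
    ```

    Whether MTProcs or a hand-rolled Pascal thread pool gets you the same effect is precisely the benchmarking question above.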

    With luck, Robert, that is enough to send you on your way to a successful implementation.

    Not that I know anything about the state of the art of lock-free parallelized algorithms, of course. Ask oiaohm … he’s always good for a laugh.

    Oops, I mean “he’s always good for authoritative information.”

  73. Deaf Spy says:

    Pogson, if you have paid more attention, you’d have noticed we speak about high frequency here.

    You have a very valid point that Intel is heavily influenced by ARM. Atom and Core M are attempts to answer ARM. What I am not convinced of is that people do not need the sheer power of Intel CPUs. Trust me – they do. If you don’t trust me, trust much more enlightened sources. We have proven here, with academic sources, that parallelism does not come for free, and that linear speed is sometimes a must.

    Btw, did you give concurrent qsort a shot? It is really easy; just give it a try in Pascal, and benchmark the results.

  74. Deaf Spy wrote, “Intel need to speak with Dougie and Ohio how to update their product line.”

    Intel did that years ago when they came out with the Atom and hedged their bets on ARM. Intel does exactly what it needs to do to maximize profits in the short term and the long. While Atom was envisaged as a placeholder on mobile devices, it’s now one of the best-selling notebook/desktop CPUs because it is good enough and costs less, covering a huge slice of the market. For the foreseeable future, Intel chips will cost more than ARM to produce, so Intel will have to shift production to adapt. They are already doing that, with consumer chips going to low power and server chips going to many cores and higher power. ARM already outstrips Intel in units sold per annum but Intel makes far more money thanks to mindshare. Intel will do whatever it takes to prevent a decline in mindshare.

  75. Deaf Spy says:

    Good, Ohio, very good! You fell into the trap I set, with a smile on your face. With your post, you proved that you are (what a surprise!) totally ignorant. Now, I feel particularly sadistic today, and I will not tell you why. I leave it to your unspeakable intellect to find out where you went wrong.

  76. oiaohm wrote, “Each 3 steps in major nm numbers doubles clock speeds at least.”

    Clockspeed is limited by more than heat dissipation. One can reduce the total heat produced by a chip by running more cores at lower speeds. That also reduces peak temperatures and increases throughput. There’s also not much point in having a CPU clock way faster than RAM. That just increases the waiting cycles unless the power-wasting cache becomes huge. Eventually, entire systems will be on one chip and communication with the outside world will be the only limitation, probably over some laser beams. It is doubtful that CPUs will increase performance indefinitely since a network of computers is always more powerful than any single node. The issue will be price/performance of the whole system. Powerful servers and thin clients make sense. Powerful CPUs are not the only way to make powerful servers. There will be particular applications where powerful CPUs are useful but properly designed software can use a network more efficiently. I’ve seen that repeatedly in my teaching. A powerful CPU idling on a desktop is often a waste. A powerful CPU nearly maxed out on a server is a wonderful thing. One can do a lot with one powerful CPU and a hundred weak CPUs. The economies of energy consumption, ease of maintenance, central storage etc. easily improve the price/performance by a large factor over multiple powerful CPUs.

  77. oiaohm says:

    http://en.wikipedia.org/wiki/Pentium_4
    Deaf Spy, yes, I do remember the Pentium 4; over 3 different nm generations its clock speed did double as well.
    Even the recent Core i7 doubled its clock speed going from 45 nm to 22 nm. The redesign for Core cost Intel some performance. Intel has no currently published plans to do a major CPU redesign between now and 5 nm.

  78. Deaf Spy says:

    By current projection 5nm will be 8ghz speeds with the processors using less power than current. Each 3 steps in major nm numbers doubles clock speeds at least.

    Yeah, yeah, ri-i-ight. Remember Pentium 4?

  79. oiaohm says:

    Deaf Spy, I did not say the issues Intel and other silicon designers are dealing with are simple. The reality is that silicon supports much higher speeds than we can safely use, mostly because we don’t yet know how.

    http://www.anandtech.com/show/8367/intels-14nm-technology-in-detai

    Like here: make the transistor fins a little taller, and as if by magic leakage is reduced, so the heat produced drops, so now you can go faster.

    Since the start of the recent nm race, speeds will keep increasing for a while yet. 10, 7 and 5 nm are on the Intel road-maps.

    There will be a point where there is no smaller nm to go to; at that point leakage will have to become the pure focus.

    Deaf Spy, I would not say I can solve Intel’s problems, but I can say that the limits are not 100 percent hit. 10 and 7 nm prototypes again show reduced leakage.

    5 nm is the expected end of Moore’s law, in 2020, unless something changes. Until then CPU clock speeds will keep on increasing.

    By current projections, 5 nm will mean 8 GHz speeds with the processors using less power than current ones. Each 3 steps in major nm numbers doubles clock speeds at least.

    14 nm restarted speed increases in quite a decent way.

    The idea that clock-speed increases are over is wrong. 2020 is when they could really be over. There has been a lag due to nm production problems.

    Knowing the power savings each nm step brings also shows problems in Intel’s designs: Intel chips are not as far ahead on power usage as they should be given how much nm advantage they have.

  80. Deaf Spy says:

    Intel need to speak with Dougie and Ohio how to update their product line. 🙂

    Guys, don’t you feel how ridiculous you are?

  81. dougman says:

    Actually, the current fMAX record is ~1 THz: http://www.darpa.mil/newsevents/releases/2014/10/28.aspx

    ..and that’s just the ones they let you know about.

  82. oiaohm says:

    DrLoser
    I believe that all Intel-benchmark chips (I use that term loosely: benchmarked against an Intel chip) max out at 3.8GHz or so. As a physicist, you will appreciate the limitations beyond that.
    That is wrong; there are 4 GHz chips from Intel these days, e.g. the Core i7-4790K.

    Its base clock is 4 GHz and its turbo is 4.4 GHz. “Moore’s Law” stalled around 2012 but started moving again in 2014, and there is still room in the tech for it to catch back up.

    Overclocked chips run at 6 GHz; physicists understand that the limit is way past 4 GHz. http://www.itjungle.com/tfh/tfh062606-story10.html
    Silicon speeds are 500 GHz supercooled and possibly 350 GHz air-cooled. Around 7 GHz or so we hit sync issues, but those could be overcome. When you hit 350 GHz you are at the air-cooled limit.

    The reason we stopped under 4 GHz was a tech issue: cooling problems caused leakage, generating more heat. Intel has made a few breakthroughs in leakage reduction, such as leaving intentional holes to create non-conductive areas. Reduce leakage and, overnight, 4 GHz chips were able to enter production.

    Really, ARM chips, coming second, can avoid all the bugs that Intel has run into.

  83. dougman says:

    Re: Ring any bells, Dougie?

    No, but I think you need more cowbell.

  84. oiaohm says:

    The reality here: if you cannot build a microkernel that performs, you cannot build a monolithic kernel that performs either, because it is the same tech.

    So seeing NT developers rag on microkernels pretty much shows incompetence.

  85. oiaohm says:

    Inside Windows NT (Microsoft Press, 1993), written by Mark Russinovich and Bryce Cogswell, calls NT a modified microkernel. The interesting point is how the NT 4.0 documentation got so lost in pushing points that attempt to say it is not a microkernel.

    Deaf Spy, in reality, out of all the developers of NT, Cutler is the only one who pushed the idea that NT was not a microkernel.

    As I said, how many books do I need to drop on top of you?

    http://windowsitpro.com/systems-management/windows-nt-architecture-part-1
    Here Mark Russinovich uses the term Modified Microkernel. Russinovich also says microkernels start in the mid-1980s; that is incorrect, the correct date is the start of the 1980s. In fact, 1980 exactly saw the first usage of the term microkernel, with a student-produced OS that became QNX.

    A disadvantage to pure microkernel design is slow performance. Every interaction between operating system components in microkernel design requires an interprocess message. For example, if the Process Manager requires the Virtual Memory Manager to create an address map for a new process, it must send a message to the Virtual Memory Manager. In addition to the overhead costs of creating and sending messages, the interprocess message requirement results in two context switches: the first from the Process Manager to the Virtual Memory Manager, and the second back to the Process Manager after the Virtual Memory Manager carries out the request.
    True for first-generation microkernels; not true for second generation. The 1995 paper explains how this is avoided. The microkernel diagram he uses is a generation-two design that has user-space-to-user-space messaging without triggering a context switch. Second-generation microkernels perform better with more CPU cores: sending and processing messages only has to cause context switches if you don’t have multiple CPU cores to perform the task. The second generation’s IPC cost in context switches is in setting up the user-space messaging between the services, which removes the requirement for the services to send messages through the kernel.

    Interestingly enough, there is something related to user-space-to-user-space messaging used in monolithic kernels too: circular buffers, also known as ring buffers. You use ring buffers to reduce the number of context switches in monolithic kernels, and circular buffers between processes are also used in a lot of second-generation microkernel designs to kill off the need to context switch.

    This is why the microkernel’s lack of performance is put down to implementation issues. A monolithic kernel made without ring buffers underperforms as well.
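
    A minimal sketch in C11 of that ring-buffer idea, for a single producer and a single consumer; the size and payload type are illustrative assumptions. Two sides sharing this structure exchange messages with no kernel call on the fast path, which is exactly how the context switches get avoided:

    ```c
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define RING_SIZE 1024                 /* illustrative capacity */

    struct ring {
        _Atomic size_t head;               /* next slot to write (producer) */
        _Atomic size_t tail;               /* next slot to read  (consumer) */
        int slots[RING_SIZE];
    };

    static bool ring_push(struct ring *r, int msg)
    {
        size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head - tail == RING_SIZE)      /* full */
            return false;
        r->slots[head % RING_SIZE] = msg;
        /* release: consumer must see the slot before the new head */
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return true;
    }

    static bool ring_pop(struct ring *r, int *msg)
    {
        size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        size_t head = atomic_load_explicit(&r->head, memory_order_acquire);
        if (head == tail)                  /* empty */
            return false;
        *msg = r->slots[tail % RING_SIZE];
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return true;
    }
    ```

    The producer only writes head and the consumer only writes tail, so no locks, and no syscalls, are needed until the ring fills or empties.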

    Mark Russinovich gets a lot of things wrong about current-day and very old microkernels, due to not being up to date or to missing information, but Mark Russinovich is not foolish enough to attempt to say that NT is not a microkernel.

    Deaf Spy, will you obey the author as you said you would? Cutler’s position that NT was not a microkernel was a rarity; even among the other developers who made NT, most of them call it a microkernel of some form, a hybrid microkernel or a modified microkernel. Or what everyone else would normally call an insecure microkernel.

    DrLoser
    You may have noticed this thing that dictionaries do, oiaohm? This thing where they list a number of alternative meanings for a word?
    Yes, but an existing definition is not destroyed by new meanings. For NT not to be a microkernel, the existing definition of microkernel would have to be overturned.

  86. DrLoser says:

    That’s true today, but Moore’s Law will make just about anything fit on ARM in a year or two.

    Whilst we’re on the subject of computer architectures, Robert, may I gently point out that “Moore’s Law” arguably died a death round about 2012 or so? Perhaps you didn’t notice.

    I believe that all Intel-benchmark chips (I use that term loosely: benchmarked against an Intel chip) max out at 3.8GHz or so. As a physicist, you will appreciate the limitations beyond that.

    It’s certainly possible to cram more computing power onto the same area, but as you know we are now dealing with multiple cores. (You may have noticed recent ARM designs, at the very least.)

    Moore’s Law doesn’t apply to this scenario. Amdahl’s Law does.

    Which means that if your “just about anything” didn’t fit inside a consumer chip in 2012, you can guesstimate as many years as you like. It won’t fit then, either.

  87. DrLoser says:

    Chromecast’s are awesome, but I gave those away. Now, I just use Plex installed on my NAS and use the Samsung Plex app on the Samsung Smart TV to access all my movies.

    “I never believed you could earn $7,523 in a month working from home. When Sweet Nanny Giblets showed me the Check, I was blown away! Here check out —> {some stupidly dangerous URL ending with the “biz” or “ca” TLD}!”

    Ring any bells, Dougie?

  88. DrLoser says:

    Having said that, I’ve certainly met two or three professional gardeners who, by dint of their profession, have occasion either to weld the equipment or to ask a mate to do so.

    Which is hardly the point. It was a rhetorical point. Let me try another one:

    I have probably met 70K people (averaged by our respective ages, and merely a guess).

    I can confidently assert that I never met a single one who had any need whatsoever for a software suite that did Structural Engineering until January 2014.

    I’m now working with fifty of them. (And correspond with at least as many more.)

    The point is, Robert, and I know you are not as obtuse as either Dougie or oiaohm and that you therefore intuitively grasp this …

    Just because you’ve never met a composer (professional or amateur) in your life, doesn’t mean that you have any idea how many of them there are out there — in fact, your experience completely invalidates any estimate you might make.

    And it certainly gives you no sort of privileged debating platform upon which to prejudge their computation requirements.

    Weld away, my friend! The counter swings from zero to one!

  89. DrLoser says:

    In my neck of the woods, any farmer worth his salt will have a welding setup and may even have taken a course in welding.

    A farmer is not a gardener, Robert. It’s a question of scale.

  90. DrLoser wrote, “I have never met anybody who welds their own garden equipment”.

    In my neck of the woods, any farmer worth his salt will have a welding setup and may even have taken a course in welding. It really does save a lot of time/money to rejoin things of steel that have come asunder. e.g. retailers here sell welding equipment and supplies to consumers. That machine would not be bought by a professional welder because it doesn’t do DC nor electrodes heavier than 3/16″. I have one similar but mine does DC. Consumers would not consider DC an advantage so would not be willing to pay the extra price. A professional would not likely use anything less than this machine which has many more features, a higher duty-cycle, DC and a higher output. Of course the price is several times higher but worth it in time saved. DC, in particular, is something a pro would not do without because it gives much better performance with digging electrodes and out-of-position welding, as in the real world. A farmer mostly wants to get the equipment working again and doesn’t care if the repair is state of the art as long as it works. Typically, a farmer will use E6013, an “easy” electrode, while a professional will want to use every kind of electrode for various purposes. These days I weld with E6011 because I weld painted/greasy steel a lot. A farmer would likely ignore that or grind off the paint. When I worked on tractors, I used to use 450A DC welders supplied by 3 phase power rated for continuous welding with flux-core wire. We welded 5 tractor-frames per day using that. The farmer’s welding machine would not likely finish one machine in two or three days.

  91. DrLoser says:

    I’ll save you the trouble, Dougie.

    “Dr Loser has just admitted to kiddy-fiddling!”

    Be honest. That’s the cretinous “I win” conclusion you were about to post, wasn’t it?

  92. DrLoser says:

    “ever met anybody who welds their own garden equipment..”, obviously you never met any farmers.

    There are rhetorical points, Dougie.

    There are people who make rhetorical points.

    And then there are clueless types who completely miss the rhetorical point.

    I walk amongst exalted people. I live a privileged and exalted life.

    And yet, and even so, I have apparently met somebody who makes a habit of missing rhetorical points.

    Which rather reinforces my point to Robert, Dougie, and I’m indebted for your contribution. No matter how Robert or I try to categorise people according to our own beliefs, there’ll always be a surprisingly large number of people who don’t fit those beliefs.

    In Robert’s case, it’s composers (amateur or otherwise, as I recall) of music.

    In my case, it’s the educated generality of people in the Western World who can dimly apprehend the concept of a rhetorical point.

    I should get out and about a bit more, I suppose, Dougie. But if I descended to your intellectual level for casual conversation, I’d probably have to hang around the kindergarten gates. Which poses certain problems of its own, really.

  93. DrLoser says:

    Well, to be fair, there is that thing about obtaining spurious qualifications off the back of bubble-gum wrappers (as in the “Microsoft VAR” claim).

    But that doesn’t really count, Fifi. Let’s narrow it down to a job relevant to this site.

    Which particular fool of an employer has ever entrusted you to spend six months or more developing a system on Linux? And what did you develop?

  94. dougman says:

    “ever met anybody who welds their own garden equipment..”, obviously you never met any farmers.

  95. DrLoser says:

    It’s interesting that oiaohm never deigns to concede a single point, even when he realises that he really shouldn’t have claimed to know anything about the subject in the first place. Funny how he goes completely silent when he’s proven to be comprehensively wrong.

    To recap, Fifi, I have gone to the trouble of putting my genuine, long-term, professional IT experience in a particular area on record. To the best of my knowledge, you have never done any such thing.

    Now would be a good time to start, wouldn’t it?

    I’m curious. My special area of expertise is VOS, as you may dimly have ascertained.

    If you had to pick one — what’s yours?

    Obviously it isn’t either NT or microkernels.

    I’m sure you can dredge six months or so out of your copious curriculum vitae, Fifi.

    And no, writing ineffectual memos to the Austin Group doesn’t really count. I’m talking about work experience here.

    I highly doubt you have any relevant work experience at all. Prove me wrong, Fifi.

  96. DrLoser says:

    I think you might need another 9 or two in there. I think oldfart is one in a million. I’ve probably met ~100K people in my life and never met a composer of any kind let alone the digital variety. Perhaps they are shy.

    An interesting observation, and I think it goes straight to the nub of the matter. Now, for instance, I have never met anybody who welds their own garden equipment, although on the anecdotal evidence in front of me I don’t imagine that they are shy.

    I do, however, have two (possibly three) composers in my immediate family — first cousins plus spouses.

    My cousin Joe is a professional (ie college trained and it’s his sole job) jazz guitarist. Pretty much by definition, Joe is a composer.

    My cousin Richard is a contract professional trumpeter for London orchestras and beyond. I’ve never asked, but I’d be surprised if he hasn’t composed the odd Voluntary, at the very least. His wife, Rosie, is a professional violinist in the same area. She may or may not compose music either professionally or as a hobby.

    And my mother, before she died, took her school recorder orchestra onto local Birmingham radio and they played a number of her own compositions. I’ll leave out her more musically talented brother, Eric, father of Joe. I’ll also leave out my cousin David, brother of Richard.

    So that’s, what, at least three known composers in my immediate family? Out of … thinks … something like 30-40? I’m also leaving myself out. I have a yen to compose, but I might have to leave it until I retire. Welding garden appliances, fine though it is, doesn’t really appeal to me.

    Well, obviously I have an unusual family. How about college? Several music scholars at Magdalen, at least one of whom composed music. The only time I was Best Man was for a horn-player from Trinity — I imagine he composes music. King’s College Cambridge is not entirely bereft of composers, either.

    Point is, Robert, you can pick any “world” you like — musical composition, fashion designers, atmospheric physicists (yup, got one of those, too), anything you like — and just pointing out that “I have never met any of these people. They exist, but they are statistically insignificant to me” is a complete cop-out.

    For the record, I have no clue whether any of the composers in my family use Linux, Windows, or simply goose pens and parchment. I have never asked.

    But until you ask, you have no solid basis for your customary sweeping judgement of “nobody needs this bloated crap,” do you?

    Which, to return to my beginnings, is why it’s an interesting observation. I’d suggest you ponder further upon it.

  97. DrLoser says:

    I may have used some. In the 1970s, before microprocessors, the mini-computer was what scientists used in the labs. Total RAM was just a few KB. Essentially, folks loaded standalone programmes.

    Ah yes, fun times, Robert. I remember staring at an OS manual for (I think) a Perkin Elmer — it was quite advanced for the time — and thinking “is that all it does?”

    I don’t suppose either of us will ever convince oiaohm, who has some sort of sui generis definition of a “micro-kernel” all of his own, but I’d argue that these early OSes weren’t really “kernels” of any kind at all. Mostly they were just batch systems with primitive I/O, so in general they didn’t need any sort of a scheduler, any sort of DACL or other permission system, or any “security” beyond “man in bunny suit loading cards into a reader.”

    Actually, I hit the scene late enough that I was no longer required to wear a bunny suit.

    At the very least (and prior to Multics, indeed), I think you need a time-sharing system before you get to a real “kernel.” IBM’s OS/360 with a TSO personality (circa 1970), perhaps?

    But in such cases I’d have to think that the OS was so primitive (by today’s standards) that, as you say, it doesn’t really matter whether you call it a microkernel or a monolithic kernel — the two terms are basically the same for 1970s systems.

    Arguably, not for Multics, however.

  98. oldfart says:

    “That blows away everything Wintel shipped on up to about 2000. ”

    The world’s expectations have long gone beyond the tech of the year 2000.

    “Face it. Moore’s Law has met most people’s needs in tiny cheap packages these days. ”

    Perhaps as long as you do nothing but consume content, you will be fine. Of course you will be at the mercy of those who stream you the content.

    I believe they call that slavery in some parts eh?

  99. DrLoser says:

    Btw, Cutler, having designed and implemented a real OS, and a very successful one, has all the authority to define terms.
    So by this logic I invent something I have the right to rewrite the dictionary

    No, by this logic, if you invent something, you have the authority (but not the right: that is a stronger prescription) to define a possibly pre-existing term (“windows” comes to mind) in such a way that it makes sense when referring to the invention you have made.

    It isn’t difficult to work this conclusion out, oiaohm. You are being particularly obtuse in failing to do so.

    You may have noticed this thing that dictionaries do, oiaohm? This thing where they list a number of alternative meanings for a word?

    That’s precisely what would happen in the case of your imagined “invention.” All the old, prior, meanings would still remain valid, while a new one, specific to a particular domain, would emerge — assuming that it gains enough popularity, which is why Deaf Spy disambiguates “authority” from “right.”

    Once again, you are being needlessly prescriptivist.

  100. oldfart says:

    “Farting Old Man…I shall leave you with this; a lion does not worry about the opinion of sheep.”

    Thank you, Dougie, you gave me exactly the answer that I was expecting. And BTW…

    Who said I worried about you, Dougie?

    My only regret is that I will never get to be on the tech team that evaluates your service offerings.

  101. oiaohm wrote, “There are Micro-kernels before QNX. But neither by their authors were labeled as Micro-kernels.”

    I may have used some. In the 1970s, before microprocessors, the mini-computer was what scientists used in the labs. Total RAM was just a few KB. Essentially, folks loaded standalone programmes. Some decided it was silly/redundant/repetitive to do that so they made tiny “monitors” that loaded below some address and the application always loaded at some fixed address above. It was crude but it worked. The monitor knew about a few I/O devices and either serviced interrupts or polled devices periodically. The ones I used were tiny even compared to QNX and the like which are much more general-purpose. The first paying job I had in IT was translating a monitor and applications from the assembly language of one Digital Equipment minicomputer to another. This was done in a room with 80dB ambient noise using paper tape and TeleType machines. We’ve come a long way whatever you call the kernel.

  102. dougman says:

    Chromecasts are awesome, but I gave those away. Now I just use Plex installed on my NAS and the Samsung Plex app on the Samsung Smart TV to access all my movies.

  103. dougman says:

    “did you even bother looking….Did you even notice…Did it even occur to you…I assure you…A funny thing…Adobe is dealing with…Oh and if you wish to be taken more seriously”

    Farting Old Man…I shall leave you with this; a lion does not worry about the opinion of sheep.

    Eh.

  104. oldfart says:

    “Agree, as the consensus seems to be based on Roberts numbers, that OldMan’s opinions are of little value.”

    As are your puerile interjections, little man.

  105. oldfart says:

    “I think you might need another 9 or two in there. I think oldfart is one in a million. I’ve probably met ~100K people in my life and never met a composer of any kind let alone the digital variety. Perhaps they are shy.”

    You may wish to follow those URLs and take a look around. I would also suggest googling “Electronic Musician” and perusing there. While the type of music I like to write is distinctly niche, the music world itself is almost entirely digital. In fact there are many cases of full soundtracks where not a single live musician performed.

  106. dougman says:

    Agreed; the consensus, based on Robert’s numbers, seems to be that OldMan’s opinions are of little value.

  107. oldfart wrote, “Anyone who can make do with a Crapbook has minimal needs IMHO”.

    What contempt for consumers. It’s mostly producers who need much IT-power locally and even many of them keep the power on the server. So, IMHO, a ChromeBook is all any but a few need. Businesses and schools and governments are finding them quite reliable and useful. Acer is even cranking one out with a 15.6″ screen. I’d bet a lot of producers could use that just fine although I’ve never seen a keyboard on a notebook that I liked. These ChromeBooks can play video all day long. That’s the most many consumers will ever need of them that’s compute-intensive. Face it. Moore’s Law has met most people’s needs in tiny cheap packages these days. Some consumers get all they need from a smartphone. I know several who only occasionally turn on a legacy PC. At Christmas, many in my family received Google’s ChromeCast dongles and these smartphones are running huge screen TVs as output devices with no sweat. That’s not “minimal”. That blows away everything Wintel shipped on up to about 2000. I remember when M$ told the world “best seen at 800×600″…

  108. oldfart says:

    “Lol…”(I work with the Personal Orchestra, Concert and Marching Band and Harps Collections)”, and so what if you do, 99.999% of the rest of society does not.”

    Tell me, Dougie, did you even bother looking at the URLs? Did you even notice how many musicians use product lines like this? There are lots of musicians in the world, Dougie, eh?

    Did it even occur to you that this might be only one example of an application that does not fit in your world view, eh?

    I assure you that if one of your customers required that whatever solution you were hawking also be able to run this program, and you wanted that customer’s business, you would be ill-advised to try to sell him a crapbook!

    A funny thing about the transition from applications to services, Dougie: Microsoft offers licenses for their full desktop suite with Office 365. Why do you suppose that is, eh?

    And Adobe is dealing with a full-scale palace revolt as a result of their decision to forcibly transition their existing user base to online-only versions of their products. It may work, but then again Adobe could be in trouble.

    Oh, and if you wish to be taken more seriously, you might consider dropping all of the childish asides. All they do is make you look like some brain-dead 16-year-old.

  109. dougman wrote, “99.999% of the rest of society does not”.

    I think you might need another 9 or two in there. I think oldfart is one in a million. I’ve probably met ~100K people in my life and never met a composer of any kind let alone the digital variety. Perhaps they are shy.

  110. dougman says:

    Crapbook? The top seller on Amazon, pretty bad that people avoid M$ so badly that they buy crap. *rolls-eyes*

    When ‘Crapbooks’ merge with Android and allow you to run Linux at the same time, you will just move the line in the sand again to explain your reasoning.

  111. dougman says:

    Lol…”(I work with the Personal Orchestra, Concert and Marching Band and Harps Collections)”, and so what if you do, 99.999% of the rest of society does not.

    Is this your reasoning, as to why someone would not use Linux to begin with? If it is, you surely are delusional.

  112. oldfart says:

    “… it’s all about the apps and services now. ”

    Ok Dougie, show me an app or service that does what these programs do now…

    http://www.finalemusic.com/?_ga=1.252903369.1159380988.1417728612
    http://www.garritan.com/?_ga=1.252903369.1159380988.1417728612

    (I work with the Personal Orchestra, Concert and Marching Band and Harps Collections)

    http://www.soundsonline.com/Symphonic-Choirs

    I await your feedback.

  113. oldfart says:

    “Again, no such thing as “windows based applications”… it’s all about the apps and services now. ”

    Perhaps you are correct, but from my experience as a member of teams that implement and manage transitions between technologies, I am pretty sure that it’s going to take YEARS for that transition to take place. I am also sure that for certain applications it is not clear whether it will take place at all.

    Do not make the mistake of assuming, from your apparently minimal software needs (Anyone who can make do with a Crapbook has minimal needs IMHO), that you represent anything more than a corner of a market whose size is as yet unknown.

    “In time, M$ will not be about Windows but just services and apps.”

    Microsoft is not the only company out there selling software that people buy.

  114. dougman says:

    Again, no such thing as “windows based applications”… it’s all about the apps and services now.

    This is why M$ bought Skype, which is available for every platform there is, same thing with Office which is available for Android now. M$ knows that focusing solely on Windows will be a death sentence.

    In time, M$ will not be about Windows but just services and apps.

  115. oldfart says:

    “How many books do I have to quote before it gets through your thick head.”

    It seems to me that it is you who does not understand, sir. You can quote until the cows come home – it means nothing, because in the end your conclusion is one big non sequitur. People with requirements and deliverables are not going to abandon their use of Windows-based applications wholesale just because of the technical esoterica you quote, any more than they are going to abandon Linux based on someone else’s recitation of its technical esoterica.

    And that is reality.

  116. Deaf Spy says:

    I absolutely could not care less what people outside Microsoft think about the architecture of Windows NT. I care about what David Cutler thinks. I care about what Mark Russinovich thinks.

    Anyway, it is interesting to point out the next paragon of Ohiologic:
    Cutler doesn’t know his own invention, therefore it is inherently insecure.

  117. oiaohm says:

    Deaf Spy
    https://books.google.com/books?id=t2yA8vtfxDsC&pg=PT742&lpg=PT742&dq=%22David+Cutler%22+nt+microkernel&source=bl&ots=4i2yuq3Id8&sig=xMa5rmPhlKmc6KpGrvUHDohzYM8&hl=en&sa=X&ei=CKC_VMydGIvl8AWx3oC4Cw&ved=0CCsQ6AEwAg#v=onepage&q=%22David%20Cutler%22%20nt%20microkernel&f=false

    How many books do I have to quote before it gets through your thick head?

    The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities.
    Note: published and written after 1997.

    It is impossible to perform security assessments correctly if you cannot identify the OS type you are dealing with.

    Also notice this one, from 1992:
    https://books.google.com.au/books?id=F1DQ5qoGN5IC&pg=PA406&lpg=PA406&dq=%22David+Cutler%22+nt+microkernel&source=bl&ots=d-9-t9YTc4&sig=pmSDYV9B8-zBOTOfm4sSqkSazxs&hl=en&sa=X&ei=CKC_VMydGIvl8AWx3oC4Cw&ved=0CFIQ6AEwCQ#v=onepage&q=%22David%20Cutler%22%20nt%20microkernel&f=false

    I love Deaf Spy’s logic.
    Btw, Cutler, having designed and implemented a real OS, and a very successful one, has all the authority to define terms.
    So by this logic, if I invent something I have the right to rewrite the dictionary.

    micro-kernel based systems have been built long before the term itself was introduced, e.g. by Brinch Hansen [1970] and Wulf et al. [1974]. Traditionally, the word ‘kernel’ is used to denote the part of the operating system that is mandatory and common to all other software. The basic idea of the micro-kernel approach is to minimize this part, i.e. to implement outside the kernel whatever possible.

    There were microkernels before QNX, but none of them were labeled microkernels by their authors.

    Deaf Spy, sorry, only an idiot says an OS design label is the right of the OS designer. OS classification is a peer-agreed thing: it assigns the class the OS belongs to for security assessments. So you are effectively saying to use the wrong OS type name, and thereby not perform the correct assessment of Windows for flaws.

    QNX was the first OS whose authors used the term microkernel. But even if the QNX authors had not, after the fact it would have been assigned to the microkernel group. If Linus called Linux a microkernel, peer review would still call it monolithic. That is the way it is.

    The problem here is that if Deaf Spy accepts that NT is a microkernel, he also has to accept that it is screwed from the core up.

  118. Deaf Spy says:

    Cutler idea that NT was not a Micro-kernel never got peer acceptance.

    There is not a single piece of evidence for that. As usual.

    Btw, Cutler, having designed and implemented a real OS, and a very successful one, has all the authority to define terms. Paper rats, on the other hand, had better listen. People like you, who have done neither OS work nor academic work, should not speak at all, unless they want to make a jester of themselves.

    Good work, Jester, I wish I could throw you a penny.

  119. Deaf Spy says:

    That’s true today, but Moore’s Law will make just about anything fit on ARM in a year or two.

    I think I already pointed you to a paper which explains quite well why this is not going to work. But of course you may choose to prevent science from breaking your belief.

  120. oldfart wrote, “The samsung is just too anemic for applications of the size running on windows”.

    That’s true today, but Moore’s Law will make just about anything fit on ARM in a year or two. Smartphones are coming out today with 8 cores, great graphics even for very large screens, and 2 GB RAM. With 4 GB they would be competitive with my Beast today, so expect just about everything to run on ARM in a year or so. The makers of PhotoShop and other expensive software for the legacy PC and that other OS are not going to want to be stuck with a “replacement-only” market. They want growth because that’s where the easy money lies. Small screens are still incompatible with many applications but ARM is moving to larger screens on smartphones and it can handle normal monitors just fine.

    Until recently, ARM aimed mostly at controllers/embedded thingies and mobile applications. Today, their site mentions general-purpose computing as a target for their high-end stuff: “The ARM® Cortex®-A57 processor is ARM’s highest performing processor, designed to further extend the capabilities of future mobile and enterprise computing applications including compute intensive 64-bit applications such as high end computer, tablet and server products.” ARM will appear on more desktop PCs. It’s already there for settops/thin clients/tiny PCs. There’s certainly no problem with many web-applications.

  121. oiaohm says:

    It is also quite doubtful that it the vendors who sell the windows applications that I wish to run will EVER support a smartphone version. The samsung is just too anemic for applications of the size running on windows, And frankly, if can’t run my apps it is of no use to me personally and IMHO of no use to anyone who wished to run the windows apps.
    oldfart, there are a few mistakes.
    1) http://wiki.winehq.org/ARM
    The idea that Windows applications will have to be ported to run on smartphones is wrong. OK, it will not be all applications, but some will run via Wine on Android right now.
    2) http://rootprompt.apatsch.net/2012/03/07/windows-xp-on-android/
    The Samsung you have is not as anemic as you are making out.

    I don’t know if the phone you have will ever get a Tizen ROM; Tizen supports Tizen, Android and Wine (so some Windows) applications.

  122. oiaohm says:

    DrLoser
    No, it’s even better than that. David Cutler designed a micro-kernel before micro-kernels were even specified.
    Incorrect; there was an understanding of what a microkernel is before 1995-1997. Formalization was a matter of sorting out the 8 different definitions floating around. At OS conference after OS conference there was catfight after catfight over what a microkernel was. This started ending in 1995 because a definition that everyone could agree on appeared, and that definition became formalized as of 1997.

    He then made a speech, insisting that whatever his OS was, it wasn’t a micro-kernel, even though nobody knew what he was talking about, because nobody had ever heard of a micro-kernel.
    No, everyone had heard of microkernels. The problem is that most people at the time did not know what they were. Please note that at the conference where Cutler gave his speech, some of the parties in attendance argued against Cutler’s idea. Yes, he made a speech in 1992, but it did not get universal acceptance.

    Notice that in the 1992 paper Cutler is making an argument for why NT should not be called a microkernel like everything else designed that way. If NT did not have that commonality, he would not have had to make the argument at all.

    Cutler’s idea that NT was not a microkernel never got peer acceptance.

  123. oldfart says:

    “I’m retired and don’t have to come anywhere near the stuff I am content to read others’ woes and draw my own conclusions.”

    Fair enough, but if you continue to make such judgements based on increasingly obsolete experiences, don’t be surprised if you just get increasingly dismissed as a crank with a blog and an ax to grind.

  124. oiaohm says:

    You can think of what happened with the term microkernel as being like someone screwing up the meaning of the word wheel.

    The first wheel is like a solid disc. That would be QNX.
    First generation is like a wheel with spokes and a rim. The spokes are the security system.
    Then someone takes a wheel that looks a lot like a solid disc except it has a rim: this is Windows NT. Of course it’s “not a wheel”, because it does not have spokes, and the person making this mistake has never come into contact with a wheel without spokes. David Cutler had never come into contact with QNX.

    Now, does that not look stupid? That is pretty much what happened, minus all the complex terms. It does not matter how smart a human is; they can make mistakes with terms. David Cutler and the other microkernel makers are no exception to this.

  125. oiaohm says:

    DrLoser
    Did he? How very interesting. What I would give to be able to read other people’s minds via microwave radiation, oiaohm.
    DrLoser, I don’t need to; it is in David Cutler’s list of reasons why NT is not a microkernel.

    Deaf Spy
    David Cutler designed a micro-kernel OS, but didn’t know it.
    Not exactly. Before the 1995-1997 formalization of what a microkernel is, there were 8 different definitions floating around. One was the original QNX definition, which is fairly clean apart from mandating multi-server. Guess what we returned to in 1995-1997: the QNX authors’ definition.

    David Cutler knew he was following microkernel principles when he made NT; the problem is that a stack of security principles had become mixed into what a microkernel is, and NT does not follow those security principles. For many years QNX, even though it is the first microkernel OS, was treated as a black sheep and never talked about.

    Better yet, it is the authors of QNX who wrote the first microkernel and laid out the basic design. Those who made the two microkernels referred to as first generation had in fact read documentation on QNX.

    DrLoser, thank you for attacking me over Stratus VOS, because exactly what you said I should not do is what the authors of the first-generation microkernels did: they just read the documents and used guesswork to fill in the gaps, and in the process of filling in the gaps they added requirements to what a microkernel is. I basically did the reverse intentionally, which is why the proper QNX users were complaining. Interesting that you claim QNX experience yet have not told Deaf Spy off for being an idiot.

  126. oldfart wrote, “What is being challenged is your attempts to UN-categorically trash the current supported versions of windows based on experiences with sometimes long obsolete versions.”

    I don’t need to use the stuff to know that it is more costly, more vulnerable and bears a EULA from Hell. I also don’t need to sing the praises of a company that sold me crap in the old days before I discovered GNU/Linux and FLOSS. Now that I’m retired and don’t have to come anywhere near the stuff, I am content to read others’ woes and draw my own conclusions.

  127. DrLoser says:

    Do you actually know anything about anything, oiaohm, or is it all just guesswork backed up by googling based on three seemingly important terms, followed by ignoring everything on the site past the fold on the first page?

    I’m curious. My special area of expertise is VOS, as you may dimly have ascertained.

    If you had to pick one — what’s yours?

  128. DrLoser says:

    Having said that I didn’t cause a “driver crash splat” in Production, I suppose I should qualify that.

    Whilst working on-site at the Mexican Government Travel Agency, whose name I forget but it began with an S, and for reasons I prefer to forget, I was forced to converse in broken Spanglish with the various dignitaries who might, or might not, have chosen to cancel a seven-phase, multi-million dollar contract (funded, naturally, by the IMF), based on a particular requirement of the ALC driver. Which didn’t exist.

    One of my proudest moments, in fact. I rewrote the Z80 code in front of them (waving at the screen and screaming “Hola!” at appropriate moments … or maybe just “gimme a beer if this works,” whatever), and I did so in full knowledge that, when I downloaded it to the comms card, it might very well take the damn machine down. And the network with it.

    As is the way with life, I was triumphantly successful … and didn’t even get a beer for my trouble.

    But, trust me, oiaohm (and please god stop this awful fantasising), VOS looks like a duck, swims like a duck, and quacks like a duck. Go anywhere near I/O on VOS, and you’ll realise this simple fact of OS life:

    It’s a monolithic kernel.

  129. DrLoser says:

    “OK I see a few papers?”

    I guess that’s all right, then.

  130. DrLoser says:

    DrLoser, I never claimed Stratus VOS experience, did I?

    No, but as usual you implied it:

    Stratus VOS design is a completely different design from either Monolithic or Microkernel.

    Stratus was a wholly private company in a niche business. If you didn’t work with it, you stand no chance whatsoever of understanding the design of the OS. (Maybe someone like Dave Cutler might. Check out the microwaves tonite!)

    Feel free to prove me wrong with a couple of specifics.

    OK, I see a few papers that have Stratus VOS marked as a Monolithic kernel, but even those note this: VOS has memory compartmentalization and duplex operation in kernel mode. This makes it not a normal Monolithic kernel in the normal way.

    Bull-cookies, oiaohm, bull-cookies. Neither memory compartmentalization (whatever the heck that is supposed to mean) nor duplex operation has any relevance.

    Stratus VOS was (and I suppose still is, because it’s used sporadically) a monolithic kernel. It isn’t even a hybrid kernel. There’s nothing at all “micro-kernelish” about it.

    The stuff you’re talking about? Achieved entirely through lock-step hardware duplex fault tolerance and a hardware abstraction layer. And here’s an interesting fact: you could pull a CPU board or a memory board or a disk board out of the back of a Stratus, and it wouldn’t care. It would just keep rolling merrily along, with a red light on the chassis and a “phone home” for a replacement board. You could even pull out both duplexed boards.

    With one small exception. If the duplexed CPU board that was running the kernel hit a red light and was consequently simplexed, and if some utter buffoon …

    (say, the salesman at Novus, UK, where I used to work)

    … pulled out the now simplexed board … the damn thing would crash. (The slots were numbered 1a and 1b, as I recall. None of the other slots had an alphabetical qualifier.)

    Why? Because it was a monolithic kernel. It had no ability to “re-site” the other half of the duplexed pair.

    What with your excusable total ignorance on the subject of Stratus VOS, you didn’t know that, oiaohm, did you? And I have fifteen years’ more experience to share, so please stop embarrassing yourself.

    If there is an OS that needs some name other than Microkernel or monolithic, I would say it’s Stratus VOS, because it truly does have some unique behaviours.

    No it doesn’t.

    DrLoser, think about it: you would at some point have seen a Stratus VOS take a driver crash; did that ever result in a failure? A common trait normally attributed to a monolithic kernel is the driver crash splat.

    You’re asking the wrong person, buddy.

    Not only have I seen a “driver crash splat,” I have caused several of them. (In testing, not in production, I should say.)

    I wrote the original ALC driver on a dual-ported Z80/Z8530 card for VOS at or about OS release 6.1. If and when I got the API calls to the I/O stack inside the monolithic kernel wrong, and I did so at least three times, I could bring the entire machine crashing down.

    In short, Stratus VOS is a monolithic kernel, you are an ignorant fool in this respect, and I strongly suggest that you stop fantasising about a subject in which you have no experience whatsoever, and I have fifteen years of practical experience.

    But I guess that won’t stop you bleating on, will it, oiaohm?

  131. DrLoser says:

    Oh, and … “LOL,” I think it is … right back at you, Dougie.

    I’m given to understand that, along with wearing their baseball caps backwards, it’s what all middle-aged fogies do when they want to “get down” and, what’s the word, “boogie?” — no, that’s passé — “jam” maybe? with the hipsters these days.

    Maybe I should join you and use it more often. There’s no shame in repeatedly looking like a pathetic has-been, I suppose.

  132. DrLoser says:

    LOL…but I have contributed a boat-load, refuting, dismissing and citing examples…

    From memory, Dougie, you have never refuted a single claim. (Do feel free to refresh my memory.)
    “Dismissing” other people’s opinions is generally considered arrogance, bordering on ignorance. At the very least, it’s nugatory as far as “contributions” go.
    And “citing examples” is only useful when those “examples” lead somewhere. Robert is quite capable of finding his own pro-Linux cites, and to be perfectly frank he is far more capable than you of backing them up with a cogent argument.
    I might very well disagree with that argument, but I’m prepared to take it seriously.
    Yours? Not so much, really. Hope I haven’t hurt your feelings.

  133. DrLoser says:

    David Cutler had the belief that kernel mode and user mode, enforced by the CPU, were part of being a Micro-kernel.

    Did he? How very interesting. What I would give to be able to read other people’s minds via microwave radiation, oiaohm.

  134. dougman says:

    LOL…but I have contributed a boat-load, refuting, dismissing and citing examples showing and disproving all the verbal diarrhea you and others of your ilk spew on this blog.

    Not my fault it upsets you; perhaps you should just turn off the computer, go outside, and play in traffic for a great cardio workout.

  135. oldfart says:

    “The package manager for Windows is google.com”

    I look forward to when you grow up, Dougie. You may actually contribute something to this debate someday.

  136. oldfart says:

    “oldfart, I am sorry to say it’s not that simple. Poor performance equals shorter operational life out of the same-size battery in portable devices.”

    Of course it is exactly that simple, especially when you consider the applications that I wish to run. I have a Samsung Galaxy 5 smartphone running Android 4.4.4. It runs smartphone apps very well. It is useful as a phone and a media player; it is somewhat cramped as a content reader, but usable on the go.

    It is also quite doubtful that the vendors who sell the windows applications I wish to run will EVER support a smartphone version. The Samsung is just too anemic for applications of the size running on windows. And frankly, if it can’t run my apps it is of no use to me personally, and IMHO of no use to anyone who wishes to run windows apps.

  137. dougman says:

    Someone asked me the other day why I used Linux. I said, “Package Management”.

    The package manager for Windows is google.com

  138. dougman says:

    On your “disrespect for those who choose to use windows based hardware because they use windows based applications”: actually, it’s just computer hardware and software; to call it Windows-based is really a disservice.

    On the note of “UN-categorically trash the current supported versions of windows based on experiences”… what? The truth hurts, so he must shut up about it? Wow.

    LOL… “stop referring to those whose choices are different than yours as ‘slaves’.”… is that a whine I detect? Seems to me you M$ idiots have refused to “make” a choice, as Windows is all you know.

  139. DrLoser says:

    David Cutler designed a micro-kernel OS, but didn’t know it.

    No, it’s even better than that. David Cutler designed a micro-kernel before micro-kernels were even specified. He then made a speech, insisting that whatever his OS was, it wasn’t a micro-kernel, even though nobody knew what he was talking about, because nobody had ever heard of a micro-kernel. Three or four years later, Dr Liedtke formalised the term, and gave Cutler the opportunity to admit his mistake. But by now Cutler was too mired in his own sordid dangerous insecure “monolithic userspace” except it’s not in userspace it’s inside kernelspace which is Option Number Three the really horrible one that he refused to acknowledge what was staring him in the face and kept calling his OS by the wrong name, even though it doesn’t matter at all and only a single person on the entire face of the planet cares.

    It’s an elegant and succinct argument, I think, and I hope I’ve summarized it correctly.

    My only question to oiaohm: Where does the time machine come in?

  140. oldfart says:

    “Ahhh! At last oldfart respects my choice of GNU/Linux after that other OS failed to work for me from 1986 until 1999.”

    Your personal choices are not being challenged here, and they never have been, IMHO.

    What IS being challenged is your disrespect for those who choose to use windows based hardware because they use windows based applications.

    What is being challenged is your attempts to UN-categorically trash the current supported versions of windows based on experiences with sometimes long obsolete versions.

    Respect is given to those who give respect in turn. If you wish respect, Robert Pogson, stop referring to those whose choices are different from yours as “slaves”.

  141. Deaf Spy says:

    David Cutler designed a micro-kernel OS, but didn’t know it.

    It’s only getting better and better. 🙂

  142. oiaohm says:

    Deaf Spy, if David Cutler had said that NT was not a First Generation Microkernel, his statement would have been correct.

    The problem is the existence of what are classed as Second Generation Micro-kernels. The first Second Generation Micro-Kernel is QNX. Yes, completely wrong by timeline. Also, what is horrible is that the first form of QNX has no ring separation at all. Everything runs in the one memory space.

    https://www.winehq.org/pipermail/wine-devel/2010-July/084742.html
    MSDN containing incorrect information is nothing uncommon.

    http://www.acorn-kernel.net/ Here is a Second Generation Micro-Kernel whose source code you can download. Note it runs on an AVR processor. The AVR processor has no idea of kernel mode or user mode.

    Of course this is a newer micro-kernel, but it’s like the early QNX.

    David Cutler had the belief that kernel mode and user mode, enforced by the CPU, were part of being a Micro-kernel. So of course NT, with all the executive parts in kernel mode, breaches the belief David Cutler had. The First Generation Microkernel documents claimed stuff that was bogus as well. It becomes clear, when you look at early QNX and see an acorn-style Micro-kernel, that NT is a Micro-kernel.

  143. Deaf Spy says:

    This is completely wrong, and it is caused by David having very little understanding of QNX.
    But of course. After you said that MSDN contained incorrect technical documentation, it was to be expected that Cutler would also be incorrect about NT. 🙂

    Well, now we can only laugh at you. You made my day, little one.

  144. oiaohm says:

    DrLoser, I never claimed Stratus VOS experience, did I?

    OK, I see a few papers that have Stratus VOS marked as a Monolithic kernel, but even those note this: VOS has memory compartmentalization and duplex operation in kernel mode. This makes it not a normal Monolithic kernel in the normal way. If there is an OS that needs some name other than Microkernel or monolithic, I would say it’s Stratus VOS, because it truly does have some unique behaviours.

    DrLoser, think about it: you would at some point have seen a Stratus VOS take a driver crash; did that ever result in a failure? A common trait normally attributed to a monolithic kernel is the driver crash splat.

    DrLoser, looking at OSes from a design point of view doesn’t mean you ever have to use them. OS behavior information is collectable without personally using the OS.

  145. DrLoser says:

    Stratus VOS design is a completely different design from either Monolithic or Microkernel.

    Even by your standards, oiaohm, this is pushing the boat out a bit, isn’t it? Let’s compare notes:

    1) I have roughly 15 years’ experience with Stratus VOS, starting at version 6.0 and eventually ending with version 13.x (I forget the x).
    2) You have absolutely no experience with Stratus VOS at all. None whatsoever.

    I feel fairly confident in stating this, because VOS was always a niche market. Furthermore, I worked in Australia for a year on VOS. At one point I knew every single Australian with VOS experience.

    You were not one of them.

  146. oiaohm says:

    Deaf Spy, the answer is simple. Read the date: April 27−28, 1992.

    The formal definition of what a Microkernel is dates from 1995-1997.

    Next problem: read that paper more closely. Where is QNX? Windows NT is not compared to the first true Microkernel OS, QNX, in that 1992 paper. It’s only compared against what are referred to as First Generation Micro-kernels.
    David made the observation that NT is hardly a microkernel and must be the ‘‘other kernel architecture’’ mentioned in the workshop title.”
    This is completely wrong, and it is caused by David having very little understanding of QNX.

  147. Deaf Spy says:

    Ohio, perhaps it is nice for you that there are other ignorant souls out there, though I don’t see how other people’s stupidity helps your cause.

    Here is something for you:
    Report on the Workshop on Micro-kernels and Other Kernel Architectures, Seattle, WA, April 27−28, 1992.

    Monday, 27th April:
    “David Cutler (Microsoft Corporation) spoke on ‘‘Microsoft Windows NT’’ giving a broad overview ranging from Microsoft’s systems strategy and market perspective through architectural issues to time and space measurements. David made the observation that NT is hardly a microkernel and must be the ‘‘other kernel architecture’’ mentioned in the workshop title.” (emphasis mine).

    Now, if you want to explain how Dave Cutler doesn’t know his own creation, please go ahead. We will laugh.

  148. oiaohm says:

    https://www.reactos.org/wiki/Kernel
    Deaf Spy, these are the guys who have dissected Windows operating systems. Notice something: they also call it a Microkernel.

    It is really not my idea alone to call Windows NT Design a Microkernel design.

    If it looks like a Duck, walks like a Duck and quacks like a Duck, it’s a Duck.

  149. oiaohm says:

    Your problem, however, is that NT was never, ever designed to be a micro-kernel. It was never, ever intended to be. Consequently, all your text, whether accurate (tiny pieces) or inaccurate (most of it), is irrelevant.
    http://en.wikipedia.org/wiki/Windows_NT “modified microkernel”

    Deaf Spy, like it or not, Windows NT is a microkernel OS. It’s not designed to be a secure Microkernel. You see the words “pure microkernel” thrown about by some; in fact “pure” is not a formal term. Secure and insecure are the two formal term choices.

    http://arstechnica.com/civis/viewtopic.php?f=17&t=971841

    And I am really not alone in my point of view, DeafSpy.

    If you follow it back you will find Dave Cutler calling Windows NT a modified microkernel. The terms hybrid kernel, macrokernel and so on all come from other parties to avoid the design being compared against a microkernel.

    Dave Cutler in fact wrote a lot of papers on why a Microkernel could not perform, and they were discredited by the 1995 paper I was pointing at.

    The big thing is that the definition of what a Microkernel is does not mandate security.

  150. Deaf Spy says:

    The Singularity OS has failed.

    Singularity was a research project, Ohio. It was never, ever intended to replace NT. It was an experiment, just to test the road of fully-managed OSes. Trust me, MS are very good at keeping their mouth shut about their internal projects and roadmaps.

  151. Deaf Spy says:

    At last oldfart respects my choice of GNU/Linux after that other OS failed to work for me from 1986 until 1999.

    Pogson, then why don’t you admit that “other OS” works for other people, for example, me? We never tell you that Linux doesn’t work for you. It obviously does. But, you should assume that it may not work for other people.

  152. Deaf Spy says:

    Hm, I think I am starting to see where Ohio goes wrong.

    Ohio, you are very persistent that NT should be a micro-kernel based OS. Then, you point out all architectural decisions in NT (both real and imaginary) that break the micro-kernel design.

    Your problem, however, is that NT was never, ever designed to be a micro-kernel. It was never, ever intended to be. Consequently, all your text, whether accurate (tiny pieces) or inaccurate (most of it), is irrelevant.

  153. dougman says:

    Get her a HP Chromebook 14, you won’t have to touch it for a few years.

  154. oldfart wrote, “As long as what I am doing on windows performs to my satisfaction, I am fine with it.”

    Ahhh! At last oldfart respects my choice of GNU/Linux after that other OS failed to work for me from 1986 until 1999. I was way too patient. No kidding. My first “IBM-compatible” PC was a PC-XT with HP’s first inkjet printer. I did ballistics calculations, designed the first house we built with it, but I crossed my fingers every time I tried to save a file or print something. The odds were poor that the operation would succeed. I lived with that 13 years. I think that was giving M$ a fair shake. I’ve not had one iota as much trouble with GNU/Linux. (truth: The Little Woman is getting annoyed that the OpenChrome Xserver dies about once a week on her thin client. It’s rather marginal anyway, so we may just change her hardware again. This is a regression. It was flawless where I last worked.)

  155. oiaohm says:

    So what. As long as what I am doing on windows performs to my satisfaction, I am fine with it.
    oldfart, I am sorry to say it’s not that simple. Poor performance equals shorter operational life out of the same-size battery in portable devices.

    oldfart, the satisfaction idea will not last forever. You want lighter-weight devices, don’t you? I think you have not taken into account the things you are truly unsatisfied with, and how many of them are related to the Operating System performing poorly.

    By the way, oldfart, you have just reworded the classic Works-for-Me argument that the TMR guys always say Linux people should not use. Since you are a TMR guy, you should know better than to say this.

  156. oldfart says:

    “oldfart, what OS design NT is does come into play in explaining why its performance on particular things is so bad, since it’s based on a completely invalid idea.”

    So what. As long as what I am doing on windows performs to my satisfaction, I am fine with it.

  157. oiaohm says:

    Please note the date of the document I am using is 1995. It’s almost 20 years old.

    It’s not like Microsoft does not know this. Singularity OS and Hyper-V are both based around the newer white papers. The idea was that Singularity would just replace NT, and NT would not have to be fixed as it would die out. The Singularity OS has failed. Microsoft needs to bite the bullet and fix the Windows NT implementation issues.

    Until Microsoft does, you can expect it to keep on losing server-room market share. As the price of power goes up, the cost of running inefficient OS solutions gets worse.

  158. oiaohm says:

    DrLoser, look at what you listed. NT Design might be based on a lot of things.
    “VMS, Mica and Prism”: is any one of these a peer-reviewed paper on how to get performance? Not one of them is. Every part the NT Design is linked to has no validation.

    OpenVMS, or VMS for short, is a Monolithic kernel in the same style as the Linux kernel. So the shared API/ABI from VMS to NT is like QNX and Linux both sharing POSIX. Please note QNX and Linux implement the POSIX API/ABI very differently.

    Prism is in fact hardware. As is stated clearly in the micro-kernel performance item I quoted, the design of hardware has a major effect on Microkernel performance. So a Microkernel designed for arch X will not perform on arch Y without fairly major alterations.
    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.4581
    See section 5, Non-Portability, which you have not read.

    One of the first sections of the Windows NT design that needs to disappear is the HAL. Only first-generation Microkernels contain a HAL. There is no HAL section in second- or third-generation Microkernels, and yes, this includes the very first Microkernel: QNX does not have a HAL section. Linux used to contain HAL as a userspace daemon, and all it caused was performance nightmares. Everywhere you do a HAL-style solution for hardware abstraction, the result is a performance cost, as the sketch below suggests.
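
    To make that cost concrete, here is a minimal C sketch (all names hypothetical, not taken from any real kernel) of the kind of indirection a HAL introduces: every hardware access goes through a table of function pointers instead of a direct, inlinable call.

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical HAL: hardware access goes through a table of
         * function pointers so one kernel binary can drive different
         * boards. The price is an indirect call, and a lost inlining
         * opportunity, on every single register access. */
        struct hal_ops {
            uint32_t (*read_reg)(uintptr_t addr);
            void     (*write_reg)(uintptr_t addr, uint32_t val);
        };

        /* One concrete implementation, standing in for a real board. */
        static uint32_t sim_read_reg(uintptr_t addr) { (void)addr; return 0; }
        static void sim_write_reg(uintptr_t addr, uint32_t val) { (void)addr; (void)val; }

        static const struct hal_ops sim_hal = { sim_read_reg, sim_write_reg };

        /* A driver written against the HAL never touches hardware
         * directly; every access pays for the indirection. */
        static void driver_poke(const struct hal_ops *hal)
        {
            uint32_t status = hal->read_reg(0x1000);  /* indirect call */
            hal->write_reg(0x1004, status | 1u);      /* indirect call */
        }

        int main(void)
        {
            driver_poke(&sim_hal);
            puts("driver ran through the HAL indirection");
            return 0;
        }

    Multiply that per-access overhead across every interrupt and every I/O operation, and you get the kind of performance cost being complained about here.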

    NT Design is based on the completely invalid idea that a first-generation Micro-kernel design works, apart from a context-switching problem, which they fixed by ruining security. A first-generation Micro-kernel design is bug central. With no user-space to user-space messaging, and with a HAL in existence, your chance of ever having great performance has left the building.

    Everything using a first-generation Microkernel design needs to die, or be fixed by being converted to second or third generation.

    Basically, you can convert a first-generation design to second generation; Minix 2 to 3 shows this.

    To find peer-reviewed papers on OS design to compare the Windows NT design against, to see its problems, you have to call it the correct OS type: Microkernel.

    Yes, it is also annoying that when you get to the end of a proper comparison, you also work out that a high-performing OS with binary drivers is not possible. Yes, this is why calls for closed-source drivers fall on deaf ears in the Linux world so much.

    Stratus VOS design is a completely different design from either Monolithic or Microkernel.

  159. oiaohm wrote, “GNU/Linux on Chrome-thingies is doing well in Australia.”

    According to StatCounter, Lose “8.1” and Mac OS/X are doing really well compared to just about any other place on the planet. Aussies are different… Chrome OS has reached 0.22% on weekends pretty rapidly compared to other places, but GNU/Linux is way back at ~1% still.

  160. dougman says:

    Welding?? Just get a hoist on cheap: http://www.amazon.com/dp/B007AMM4KK or a cherry picker: http://amzn.com/B0013XLIMW

  161. DrLoser wrote, “wait a while until Robert has dragged it out of the bit-bucket.”

    I was outside in the cold freezing my fingers and toes off. My equipment is not warm enough for the pressure-points while welding. I’m welding up a hoisting frame to finish the assembly of the tractor. It’s a slow process because I don’t have a good selection of steel “shapes”. I’m welding up a bunch of T-bars… The main beam is interesting. It’s pre-stressed… The engine will straighten it out. I’m about half done, I think.

  162. DrLoser says:

    And before you blow a fuse, oiaohm, my earlier comment (very complimentary to your eventual explanation of your theory) is apparently “awaiting moderation.” Probably something to do with the formatting. Maybe I added more than two links. Whatever.

    A friendly word of advice, however: wait a while until Robert has dragged it out of the bit-bucket.

  163. DrLoser says:

    This is why I said that a map of which OSes governments currently support, and which OSes they are planning to support, would tell us a lot.

    I don’t think it would tell us anything at all about the underlying system architecture, would it, oiaohm?

    Oh, I see what you did there, little Galloping Gishie. You completely changed the subject for no reason at all, didn’t you?

  164. DrLoser says:

    oldfart, what OS design NT is does come into play in explaining why its performance on particular things is so bad, since it’s based on a completely invalid idea.

    That’s rather a sweeping statement, isn’t it? NT is based on a lot of things, not least (as you have insisted upon, yourself) VMS, Mica and Prism. Not to mention that it’s the second-best implementation of Multics I have ever seen. (Stratus VOS was the best. *nix is a distant third.)

    Would you care to specify this “completely invalid idea?” Is it #3 on your list of user-space microkernel extensions, or is it something else?

    Hard to criticise something so evanescent, really.

  165. oiaohm says:

    oldfart, what OS design NT is does come into play in explaining why its performance on particular things is so bad, since it’s based on a completely invalid idea.

    But it’s not oiaohm’s case, is it?
    DrLoser, the major issues here in Australia preventing Linux Desktop usage are forecast to disappear over the next 3 years, as government departments here take on a new system that is truly OS-neutral instead of Windows .exe. Banks here changed to OS-neutral 6 years ago.

    Even for the Australian government, the idea of a Windows-only desktop is over. It’s just that things don’t change quickly.

    This is why I said that a map of which OSes governments currently support, and which OSes they are planning to support, would tell us a lot.

    Everywhere there is a case against the Linux Desktop, it is mostly something that can change.

    GNU/Linux on Chrome-thingies is doing well in Australia.

  166. DrLoser says:

    This is going to come as something of a surprise.

    I was actually impressed by the detail in oiaohm’s last argument. (Yes, I know I’m of no consequence, but if you can impress a Microsoft Troll, you’re doing a damn fine job.) It leads me to wonder why he can’t manage this level of discourse on a more regular basis.

    It’s still wrong, though. But probably no more wrong than I am, considering that neither one of us has written an OS in our lives.

    First, why would putting anything completely, in its whole, in userspace be “horrible”? And “horrible” for what?
    It comes down to the resulting security. There are two ways to move subsystems completely to user-space.

    Method 1: Implement the subsystem as Multiserver Architecture Microkernel userspace, adding userspace-to-userspace messaging for performance…
    Method 2: Implement the subsystem as a Monolithic Userspace…

    That seems to boil down to an admission that there’s nothing “horrible” about it at all: just two different “proper” ways of doing it, each with its own security implications.

    Having got past that admission, however, I’m still left wondering what a “monolithic userspace” (case 2) might be, as compared to a “Multiserver Architecture,” in practical terms. I’m only aware of three multiserver microkernels out there: Hurd, Minix 3 and HelenOS. Of these three, Hurd is to all intents and purposes still-born; Minix 3 is specifically targeted at embedded systems; and HelenOS is a research project. (Not that that’s a bad thing.)

    The question I have here is, once you’ve moved services out of kernel space and into user space, what’s the difference between implementing those services via some fancy IPC mechanism (a la HelenOS) and simply implementing them as daemons? It’s not clear to me that such a thing as a “pure monolithic userspace” exists. (It clearly doesn’t exist in the case of NT.)
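
    For the sake of argument, here is a minimal C sketch of the “daemon” version (queue names and the build line are hypothetical): a user-space service looping on a POSIX message queue. A multiserver microkernel service is structurally identical; the practical difference is that the rendezvous is the kernel’s synchronous IPC primitive rather than a queued message. On Linux, something like “cc svc.c -lrt -o svc” should build it.

        #include <fcntl.h>
        #include <mqueue.h>
        #include <stdio.h>
        #include <sys/types.h>

        int main(void)
        {
            /* Two hypothetical queues: requests in, replies out. */
            struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
            mqd_t req = mq_open("/svc.req", O_CREAT | O_RDONLY, 0600, &attr);
            mqd_t rep = mq_open("/svc.rep", O_CREAT | O_WRONLY, 0600, &attr);
            if (req == (mqd_t)-1 || rep == (mqd_t)-1) {
                perror("mq_open");
                return 1;
            }

            char buf[64];  /* must be at least mq_msgsize bytes */
            for (;;) {     /* the classic service loop: receive, handle, reply */
                ssize_t n = mq_receive(req, buf, sizeof buf, NULL);
                if (n < 0) { perror("mq_receive"); break; }
                /* "handle" the request: here we just echo it back */
                if (mq_send(rep, buf, (size_t)n, 0) < 0) { perror("mq_send"); break; }
            }
            mq_close(req);
            mq_close(rep);
            return 0;
        }

    Whether you call that a daemon or a server seems to be mostly a matter of which kernel primitive delivered the message.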

    And now to your rather strange theory of how Windows NT behaves:

    What may be the surprise is that being a Microkernel does not require you to use ring protection.

    Windows NT is not a microkernel. This bears repeating until you recognise the fact. And that issue (in fact, as with absolutely any other issue you bring up) has nothing to do with ring protection, which is a special case of CPU mode protection: it’s architecture-dependent. I’m going with Maurice Wilkes here … well, I would do, what with graduating from Cambridge. No Operating System needs more than two “rings.” And frankly at that point you’re pretty much quibbling about what a “protection ring” might be.

    The worst of the worst choices is Method 3, which the Windows NT developers thought was a good idea, along with other OSes that attempt to say they are not Microkernels but some other strange name.

    Windows NT incorporates a microkernel. It is not a microkernel OS. Nobody but you, oiaohm, claims otherwise.

    Hell, having all the Multiserver Architecture Microkernel in the user-space ring? Let’s stick it all in the kernel-space ring and give up ring protection completely around those parts.

    Except that Windows NT doesn’t bear any resemblance to, nor have any interest in, a “multiserver architecture microkernel.”

    I’m not actually sure what you’re getting at here, and further information would be welcome. In the mean time, can we perhaps base the discussion on the following diagram of the Windows NT internals?

    If you have issues with that diagram, feel free to offer up a better one.

  167. DrLoser says:

    Munich, Extremadura, France, India, China, Russia, Brazil, … all migrated away from that other OS to GNU/Linux because there was a real case against it.

    That may very well be true, Robert. But it’s not oiaohm’s case, is it?

  168. dougman says:

    FLOSS flaws or Linux flaws: to which do you refer?

    Show us your cite!

  169. oldfart says:

    “The flaws in using M$’s OS are in your face, not hidden much.”

    And the flaws of using current FLOSS on a current Linux desktop are in MY face for me to see every day, and no amount of posting on the forced changes brought about by government fiat is going to impress me one bit.

  170. oldfart says:

    “Fine, oldfart, go back to the canned response, since you will not believe the truth anyhow and are just going to get into pointless off-topic attacks; canned responses are better.”

    As far as your “truth” is concerned, I can google and read too. As far as being off topic, I suggest that you re-read Robert Pogson’s post here. No mention of NT kernels at all from what I can see, eh? That means that your entire stream of “fact” on the NT kernel is itself nothing more than one big pointless off-topic screed here.

    So spare me the bullshit.

  171. oiaohm says:

    Yes, the Multiserver Architecture Microkernel userspace comes straight from QNX:
    It is based on the original idea of running most of the operating system from a number of servers

    Yes, funny, right? And QNX is not first generation.

  172. oiaohm says:

    oldfart
    “Yes, calling MACH first generation really allows some of those developers to save face and not have to publicly admit they got it badly wrong.”
    1) It occurs to me to ask, sir: who are you to make such statements? Where are YOUR credentials showing that you have done real OS design beyond hanging around the ReactOS site?
    Really, that has nothing to do with the topic, oldfart.

    Fact is fact.
    http://www.cs.uiuc.edu/class/sp05/cs523/prev_exams/2004sprin/midterm/shabib/index.html Not even a new book.
    Mach, if you look into it, starts after the 4.2BSD kernel was released; in other words, 1983 at the absolute earliest for the first line of code.

    The other first-generation Microkernel, Chorus, starts in 1982.

    QNX starts in 1980 and, yes, right from the start it includes user-space to user-space messaging.

    “Microkernels have been around since the 1980’s.” Yes, the very first Microkernel is QNX; it is the only Microkernel started in the year 1980. There is no Microkernel OS with an older history.

    Yes, that QNX is called second generation does not line up with the timeline at all, and that’s a fact.

    What makes you think that any of this esoterica about windows internals is going to make a difference in the use of windows on a desktop? Anyone who uses windows-based applications is not going to move platforms just because you present some esoteric case “against” windows.
    Really, I don’t care; the truth of what Windows is happens to be the truth.

    Fine, oldfart, go back to the canned response, since you will not believe the truth anyhow and are just going to get into pointless off-topic attacks; canned responses are better.

  173. oldfart wrote, “Anyone who uses windows-based applications is not going to move platforms just because you present some esoteric case “against” windows.”

    Munich, Extremadura, France, India, China, Russia, Brazil, … all migrated away from that other OS to GNU/Linux because there was a real case against it. Esoterica is not required. The flaws in using M$’s OS are in your face, not hidden much.

  174. oldfart says:

    My comment was related to your “confession” re: your knowledge of the workings of the IBM SAN Volume Controller. Frankly, sir, I find it quite a stretch, to say the least, that you failed to notice that you had forgotten all your knowledge about the IBM SAN Volume Controller before you began your exchange with me.

    As far as I am concerned, your admission is tantamount to an admission of fraudulent representation of your expertise on this device. And that is the ONLY reason that I am responding to you in detail directly.

    If you make an attempt to take your confession back, then I will return to answering any attempts to address me with the canned response that so bothered you.

    Leave things as they are, and I will resume responding to you…

    As I see fit, of course.

  175. DrLoser says:

    I asked whether you had 3000 dollars to spend. If you do, I can lead you to other documents using the term. That document is referenced in one of the papers behind a paywall on third-generation micro-kernels.

    I fail to see where the $3000 comes in, oiaohm. Show us your cite, and we’ll determine whether or not it’s worth paying for.

    Just because it’s behind a pay-wall doesn’t mean you can’t supply a URL and (obviously unusable without paying) instructions on how to access the relevant paper.

  176. oldfart says:

    “Yes, calling MACH first generation really allows some of those developers to save face and not have to publicly admit they got it badly wrong.”

    Two questions and a comment:
    1) It occurs to me to ask, sir: who are you to make such statements? Where are YOUR credentials showing that you have done real OS design beyond hanging around the ReactOS site?

    2) What makes you think that any of this esoterica about windows internals is going to make a difference in the use of windows on a desktop? Anyone who uses windows-based applications is not going to move platforms just because you present some esoteric case “against” windows.

  177. ram says:

    DrLoser says: “I take it you never shopped in department stores in the Soviet Union, then, ram? Because, if you had, you would be “reminded” of an almost complete lack of advertising, and row upon row of singularly empty shelves.”

    Yes I was, and you might be surprised by who I was working for! I stand by my original comment about centrally controlled “economies” dictated by an hereditary elite.

    As for your insults, you can go perform an “impossible erotic act”!

  178. oiaohm says:

    Deaf Spy
    Google Academic Search and Bing Academic Search
    Sorry, OS Design using those is pointless most of the time. More than 90 percent of the documentation about OS Design is behind paywalls, and documents behind a paywall do not show in either search engine. I asked whether you had 3000 dollars to spend. If you do, I can lead you to other documents using the term. That document is referenced in one of the papers behind a paywall on third-generation micro-kernels.

    First, why would putting anything completely, in its whole, in userspace be “horrible”? And “horrible” for what?
    It comes down to the resulting security. There are two ways to move subsystems completely to user-space.

    Method 1: Implement the subsystem as Multiserver Architecture Microkernel userspace, adding userspace-to-userspace messaging for performance. Yes, each part is in fact isolated from the others in its own memory allocation, limiting exploit damage. Benchmarked performance shows no real cost in doing this as long as you have userspace-to-userspace messaging implemented correctly.

    Method 2: Implement the subsystem as a Monolithic Userspace. You end up with many of the same kinds of security problems as when it was in monolithic kernel space, where an exploit in fact has the means to compromise the complete monolithic user-space part. So all you have gained is the means to restart it; you have not gained as much security as you could have by choosing Method 1.

    The key word is “whole”. As a whole piece is Method 2; as many pieces is Method 1. Both can be used to move a subsystem completely from kernel-space to user-space.

    Both of these designs up until this point have in fact used rings to maintain some form of security. From a security point of view, using a monolithic userspace is worse than using user-space to user-space messaging to accelerate a Multiserver Architecture Microkernel userspace. Both run at basically the same speed; there is no performance overhead in doing either as long as you code it correctly. A monolithic userspace is still miles ahead of what NT does.

    The worst of the worst choices is Method 3, which the Windows NT developers thought was a good idea, along with other OSes that attempt to say they are not Microkernels but some other strange name. Hell, having all the Multiserver Architecture Microkernel in the user-space ring? Let’s stick it all in the kernel-space ring and give up ring protection completely around those parts. What may be the surprise is that being a Microkernel does not require you to use ring protection.
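
    To illustrate the difference between Method 1 and Method 2, here is a minimal C sketch (server names hypothetical, not any real OS’s code): with one process per server, a wild pointer in one server cannot scribble over the other, and a supervisor can restart just the casualty. In a single monolithic-userspace process, both subsystems would share one address space and die together.

        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* Spawn a "server" as its own process, Method 1 style. */
        static pid_t spawn_server(const char *name, int should_crash)
        {
            pid_t pid = fork();
            if (pid == 0) {                /* child: the server itself */
                if (should_crash) {
                    volatile int *p = NULL;
                    *p = 42;               /* the fault stays in this process */
                }
                printf("%s: running fine\n", name);
                _exit(0);
            }
            return pid;
        }

        int main(void)
        {
            pid_t fs  = spawn_server("fs-server", 0);
            pid_t net = spawn_server("net-server", 1);  /* this one faults */

            int status;
            waitpid(net, &status, 0);
            if (WIFSIGNALED(status))
                puts("supervisor: net-server crashed; fs-server unaffected, restart it");
            waitpid(fs, &status, 0);
            return 0;
        }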

    Liedtke’s minimality principle is the test of whether something is a Microkernel or not:
    A concept is tolerated inside the microkernel only if moving it outside the kernel, i.e., permitting competing implementations, would prevent the implementation of the system’s required functionality.

    The key test is “permitting competing implementations”. Does the NT Design allow changing the parts inside the executive? Yes, it does, so it allows competing implementations.

    Yes, the NT kernel core that the NT developers called a Microkernel does in fact pass Liedtke’s minimality principle. The principle was created in 1995 and accepted as the test of whether something is a Microkernel in 1997.

    The separation of mechanism and policy also does not mandate any usage of rings. Please remember many Microkernels operate on CPUs without any protected memory modes. So protection rings are in fact not part of Microkernel design, but optional extras to enhance security. If you have protection rings and you are attempting to make a secure Microkernel OS, you should be using protection rings correctly. If you are making an insecure Microkernel OS, you don’t have to give a rat’s about protection rings at all.

    Does it sound familiar? It should, as the history of NT has seen a few components moving between the one and the other.
    Yes, but you missed why this is possible. The reason is the fact that NT is a malformed Microkernel. The NT design is so insecure that you kind of have to be insane to want to keep it around, when there is no valid performance benchmark to prove any advantage at all in moving the Microkernel userspace services into kernel space.

    NT design starts with a Microkernel design, and is in fact still a Microkernel design, but it has a huge stack of modifications that:
    1. Make absolutely no sense because the modifications extremely undermine security.
    2. Make absolutely no sense because the modifications don’t improve performance one bit compared to Micro-kernels properly implemented for performance and security.

    Let’s lose the lie about what the NT design really is. With the NT design’s implementation, security-wise you might as well have built a monolithic kernel and been done with it.

    Please note Minix 1 and 2 moved the MMU code into the kernel ring with the Microkernel because it was believed to be required to get performance. The NT developers were not the only idiots moving stuff into the kernel ring because they thought it was the way to fix the performance problems, and that the loss of security was required to fix the performance problems of a Microkernel. Yes, in 1995 those ideas were proven completely bogus. The performance fix was to create user-space to user-space messaging, and as soon as that was proven to work in 1995, Minix moved the MMU code back to userspace. The problem is QNX was doing user-space to user-space messaging from the start.

    Yes, QNX being second generation is so funny. By order of timeline it should be first generation, but that would be just extra embarrassment for those who implemented all the stuff off MACH. Yes, calling MACH first generation really allows some of those developers to save face and not have to publicly admit they got it badly wrong.

  179. Deaf Spy says:

    Ohio, I am glad that you finally saw some light, and labored hard to find a paper which says “monolith in userspace” to add some meaning to your ridiculous post. And Google Academic Search and Bing Academic Search could not list a second paper using this phrase. I guess you were lucky to find proof that you are not speaking utter nonsense. You also deserve some credit for abandoning the hopeless statement that “NT was designed as microkernel”.

    Now, that we have a definition for “monolith in user space”, let’s take a closer look at the paper. It says:
    “It seems as if we are just exchanging a monolithic kernelspace component with a monolithic userspace component…”
    This basically means: “we moved a whole subsystem from kernel space to user space”.

    Does it sound familiar? It should, as the history of NT has seen a few components moving between the one and the other.

    Ah, surprise, the paper actually argues against your statement:
    No one has made a Micro-kernel that can perform at monolithic speeds without messaging or horribly sticking a monolithic in userspace(that also completely ruins security).

    This “horribly sticking a monolithic in userspace” deserves some attention. First, why would putting anything completely, in its whole, in userspace be “horrible”? And “horrible” for what?

    Then, your notion that putting anything completely in userspace “ruins security” is the pinnacle of your misunderstanding of the whole purpose of having protection rings, hence kernel and user space.

    Unless by “Monolithic in Userspace” you mean something completely different, but then, hm, you will need to find proof again… Hard life, I tell you.

  180. oiaohm says:

    http://os.inf.tu-dresden.de/papers_ps/2014-vee-switch.pdf

    See section 5.3, “A Monolith in Userspace”. So either my including “in” was a mistake, or I forgot to drop the “ic”. Other authors have used “Monolithic in Userspace”.

    Basically there are four valid ways of writing it:
    A Monolith in Userspace
    A Monolithic in Userspace
    A Monolithic Userspace
    A Monolith Userspace
    All have the same meaning.

    Btw, your latest post is beyond any hope. Not only do you second the absurd compound “monolithic userspace”; it reveals clearly that you do not have the slightest knowledge of OS theory and practice.
    Deaf Spy really shows here that he is not up on the last 2 years of OS design theory. The term in its many forms appears in the last 2 to 3 years of documents.

    Pretty much, DrLoser: don’t follow an idiot.

  181. oiaohm says:

    DrLoser, the fact you are pulling out “monolithic in user-space” says that you have not watched videos on OS design, because it is in fact said a lot.

    Each field has its own unique terminology. Attacking someone in a field you don’t understand only makes a goose out of yourself.

  182. oiaohm says:

    A monolithic in user-space?

    DrLoser, the only kind of mistake there is that I inserted one word, “in”.

    “Monolithic user-space” is the term you will find in many OS design papers. So your so-called stupid statement is not a stupid statement. When you speak aloud about “monolithic user-space”, for ease of saying you normally say “monolithic in user-space”. So I typed it how you speak it.

    So how is this stupid? What is stupid is in fact that you are an idiot who does not know OS design, so you don’t immediately see that what I typed is a term used in OS design.

    A memory controller can feed up false data?
    Yes, this is written very differently. You will find this question worded differently, with exactly the same meaning, when talking about building defect-resistant operating systems. It is also part of the practicalities-of-security question. This complete concept covers half an A4 page.

    DrLoser, in reality there is not a single stupid statement there. The only stupid people here are you and Deaf Spy.

  183. DrLoser says:

    The American economy today reminds me of department stores in the Soviet Union during the “peak” of the Soviet era — full of heavily advertised goods very few people wanted regardless of how low the price.

    I take it you never shopped in department stores in the Soviet Union, then, ram? Because, if you had, you would be “reminded” of an almost complete lack of advertising, and row upon row of singularly empty shelves.

    Off-topic from the completely obvious fact that the sole goal of the US Federal Reserve Bank is to pump out enough paper to keep Microsoft afloat …

    The problem with a fully-centralised Command Economy such as the Soviet Union circa Brezhnev isn’t that it doesn’t “produce” goods. It’s that, absent a market mechanism, it mass-produces the wrong sort of goods.

    Now, you could reasonably argue that “Microsoft Windows” is the wrong sort of goods, but you couldn’t really argue that it’s wrong in the Soviet way. In the days of GUM et al, people would queue around the block, and for hours, to buy whatever “wrong sort of goods” they could … simply because they’d then have something to barter on the black market.

    That’s a slightly different proposition from “Microsoft Windows is a technically inferior product to Linux, and fails on a price/comparison basis,” I feel. Because in the Soviet Union, neither of the two would have been “on the shelves.”

  184. DrLoser says:

    Also, you did not read the Gish Gallop definition; it’s never absolutely incorrect.

    If you insist on proving yourself a Gish Galloper, oiaohm, then I must say you’re doing a far better job of it than proving some form of ability in IT.

    In this case, you are absolutely correct. To quote the Urban Dictionary on the point:

    To make matters worse a Gish Gallop will often have one or more ‘talking points’ that has a tiny core of truth to it, making the person rebutting it spend even more time debunking it in order to explain that, yes, it’s not totally false but the Galloper is distorting/misusing/misstating the actual situation.

    This is eerily reminiscent of Deaf Spy’s claim that he could, with a tremendous amount of basically wasted effort, find a single true statement in that particular oiaohm wall of gibberish.

  185. DrLoser says:

    DrLoser, but that is not the bit Deaf Spy called a lie.

    He didn’t call it a lie, oiaohm, nor did he call you a liar. If you’re going to be prescriptivist about such piddling matters as the difference between “innovator” and “innovation,” you could at least be consistently prescriptivist.

    The phrase Deaf Spy actually used was:

    Ohio, I tried to find a single true statement in your post. I tried hard. And there is only this single one:

    Note the following:
    1) A post can be entirely devoid of “truth” without the poster being a liar. Incompetence, delusion, fantasy, a simple misunderstanding of what one reads … there are plenty of other possibilities.
    2) Deaf Spy admitted that there is at least one element of truth in your post.
    3) Deaf Spy implicitly accepted that there may be more — he just couldn’t find them.
    4) English is not Deaf Spy’s first language. It is mine, so I would have phrased this as “a single accurate statement …” Whoops, there goes your accusation of “deformation.”

    But I’m glad you brought the subject back up again. Which of the following do you, personally, regard as your stupidest statement?

    A memory controller can feed up false data?

    … or …

    A monolithic in userspace?

    Both of them are high-quality stupid, I agree.

  186. ram says:

    dougman quoted “The Wintel marriage is now threatened, oddly enough, by technological progress. Processors grow ever smaller and more powerful; internet and wireless connections keep speeding up. This has created both centripetal and centrifugal forces, which are pushing computing into data centres (huge warehouses full of servers) and onto mobile devices—businesses that Microsoft and Intel do not dominate.”

    Well, now Intel does dominate the huge data centers (almost exclusively running Linux); it still earns money from supplying parts for mobile devices (mostly running Linux), but Microsoft only has some government offices in which it maintains a foothold via “aggressive marketing tactics” that many call bribery and kickbacks.

    Microsoft may have the support of the US Federal Reserve Bank which will print money until the dollar or their clients collapse — perhaps both simultaneously, but that will not save an “economy” that “produces” items almost nobody wants. The American economy today reminds me of department stores in the Soviet Union during the “peak” of the Soviet era — full of heavily advertised goods very few people wanted regardless of how low the price.

  187. oiaohm says:

    If you’re going to make an outrageous and specious claim such as “the original version of NT was a microkernel”, you need nothing more than a paragraph or two and a relevant cite.
    DrLoser, but that is not the bit Deaf Spy called a lie.

    http://mrpogson.com/2015/01/06/2015-crippling-wintel/#comment-235498

    I want his cites that prove every bit of this is a lie.

    The problem is, it’s not. There is only one line about NT. The rest is about monolithic QNX and other things. Of course Deaf Spy does not have cites. His idea was “I will call it a lie”, but then he has to present cites. Sorry, nothing in there is a lie.

    You have been doing this repeatedly.

  188. DrLoser says:

    DrLoser, it is not all the time that I do a Gish Gallop. Sometimes it’s just super complex stuff.

    Once again I will leave both of these hilarious comments dangling.

  189. DrLoser says:

    I have in fact compressed about 100 to 200 pages of information down.

    Not nearly far enough, you idiot.

    If you’re going to make an outrageous and specious claim such as “the original version of NT was a microkernel”, you need nothing more than a paragraph or two and a relevant cite.

    You can leave the other 100 to 200 pages (I’m sooooo impressed!) to your own broken imagination.

  190. DrLoser says:

    At least oldfart always had a cite to back up any of his arguments and did not go around blindly hitting in the dark.

    Feeling nostalgic for the old days, Fifi, when you were merely called out for claiming IBM SAN Controller expertise when you had no such thing. What is your excuse again? “I sorta forget. There are so very many platforms out there.”

    Very convincing, Fifi.

    Oh, and this “always has a cite” thing? I just checked this thread. Olderman has precisely two posts, and neither one of them features a cite.

    Try again, Fifi.

  191. oiaohm says:

    DrLoser, you might call what I typed a wall of text, but in the case of OS design I have in fact compressed about 100 to 200 pages of information down. Some of it could be confusing, or I may have left something critical out by mistake. But that does not mean it’s incorrect information.

    DrLoser, it is not all the time that I do a Gish Gallop. Sometimes it’s just super complex stuff.

    This is a problem: if you presume it’s always a Gish Gallop so you can call it false, sooner or later you will be called out.

    Deaf Spy, I still want to see your cites proving “absolutely incorrect”, because I want to know where you got your incorrect information from.

    DeafSpy was thinking that I had to defend my points. The problem here is that if you challenge, you have to be able to defend that challenge as well.

    At least oldfart always had a cite to back up any of his arguments and did not go around blindly hitting in the dark.

  192. DrLoser says:

    Also you did not read Gish Gallop define its never absolutely incorrect.

    Oh, I read it. Anybody who reads it would recognise you in an instant.

    So, which bits in general did I get wrong? And which bits don’t apply to you, Fifi?

  193. DrLoser says:

    DrLoser, the word lier does not have to be used.

    Calling something absolutely incorrect means you had better be able to back it up.

    I think we’ve already covered the issue of legally actionable “defamation,” oiaohm, although you personally still have a way to go when spelling various forms of the word “liar.”

    Please don’t descend into a pitiable whimpering hell of uncontrollable loss of bodily control. Stop snivelling! Hold your tin-foil head up high!

    Not because I admire your gumption in any way. I don’t. But I really don’t want to waste my time pitying you.

  194. oiaohm says:

    The rest is completely, absolutely incorrect
    DrLoser, the word lier does not have to be used.

    Calling something absolutely incorrect means you had better be able to back it up.

    If it was, as Deaf Spy claims, absolutely incorrect, he should be able to present at least one cite proving it. Also, you did not read the Gish Gallop definition; it’s never absolutely incorrect.

  195. DrLoser says:

    Business schools are churning them out by the million these days, aren’t they, Robert? I remember, as you remember, a blessed time when they were practically ghettos attached to places like Harvard, UPenn and a handful of others. Now they’re in London, Oxford, Cambridge … all over the UK, let alone the rest of the world. I do not see this as a good thing.

    Nor would I expend much thought on the musings of a recent graduate with no industry experience. Isn’t “industry experience” the supposed point of these dismal institutions?

    Anyway, to your interesting conclusions.

    I expect we will see a slowdown in their office suite with the amazing growth of Libreoffice sooner rather than later.

    No evidence so far, but you hedge your bets well with “sooner rather than later.”

    In a week we should see another damaging quarter to M$’s client business while other segments grow.

    The interesting thing about this is that you seem to have accepted that Microsoft still has a lot of growth available to it … just not in the “client business.” This is a step-change in your views, Robert. I applaud your honesty. Incidentally “other segments” includes the vibrant market in Windows Servers…

    Here’s a provable claim, one way or the other, though. I’m even prepared to water down “damaging” to a simple question: will the Client division in the quarterly report show a decline in sales/revenue/profitability/pick your measure (not stock price, Dougie! Stop obsessing!), or will it show an uptick?

    I’m expecting an uptick. I’ll go further. I think the uptick will be around 4-5%.

    I look forward to your forthcoming report on the figures.

  196. oldfart wrote, of the success of */Linux, “Classical Linux as a desktop – not so much.”

    GNU/Linux on thin clients is doing very well and it has real salesmen
    GNU/Linux on Chrome-thingies is doing very well and it has real salesmen
    GNU/Linux on legacy PCs is doing very well and it has real salesmen (e.g. Dell in India, China, Acer and others in Brazil). It’s just not doing real well everywhere.
    GNU/Linux on legacy PCs is doing very well as a replacement for that other OS and millions are choosing it. e.g. Munich, France, Spain, Norway, India, China, Brazil…

    That may not qualify as “so much”, but if you look at history, there are more GNU/Linux PCs being produced/converted each year than M$ had shipped from OEMs annually before about 1998. Quoting Wikipedia: “In 2001, 125 million personal computers were shipped in comparison to 48,000 in 1977. More than 500 million PCs were in use in 2002 and one billion personal computers had been sold worldwide since mid-1970s till this time.” So now */Linux has that kind of growth and M$ is stagnant in units shipped. M$ on the client could be overtaken in a year or two because */Linux costs less and is more flexible. In this intense competition those advantages will win out.

    BTW, I was talking with a recent graduate of a business school last night and she completely understood my arguments about M$, Apple and such. She eats, sleeps and breathes competition, supply/demand, leverage, etc. She sees huge opportunities for new ways of delivering what consumers want as knocking these biggies off their thrones. M$ does as well, which is why they now embrace */Linux and have diversified to servers/clouds rather than relying on the client OS to hold everything together. In a week we should see another damaging quarter to M$’s client business while other segments grow. I expect we will see a slowdown in their office suite with the amazing growth of Libreoffice sooner rather than later.

  197. dougman says:

luvr perfectly describes what the M$ trolls’ main goal is…

    “The signal-to-noise ratio is rapidly approaching zero. Reading them has just become a waste of time.”

  198. dougman says:

    “Classical Linux as a desktop”….hmmmmm

It’s funny you mention that, as that is something M$ is trying to do away with, driving everyone bat-shit crazy with ‘Charms Bars’ and ‘MetroFail Tiles’.

  199. oldfart says:

    “Well, I would side with you on the arrogant side a bit, but considering you use Linux as well, does this mean you are flogging yourself for pity?”

Actually, Dougie, it could be that we are IT professionals who are paid to support environments in which multiple enterprise LOB applications are run on Linux in its server role. And as you know quite well, it is in its role as a replacement platform that Linux has had its largest success.

    Classical Linux as a desktop – not so much.

  200. luvr says:

As much as I hate to say it, Mr. Pogson, some moderation of this site’s posts may be required

    +1. I’ve given up following the comments. The signal-to-noise ratio is rapidly approaching zero. Reading them has just become a waste of time.

  201. dougman says:

    Linux users are pompous arrogant asses?

    Well, I would side with you on the arrogant side a bit, but considering you use Linux as well, does this mean you are flogging yourself for pity?

  202. satrain18 says:

    At least I’m not a pompous arrogant a$$ like you and the other linux fanboys.

  203. DrLoser says:

I wish to point out that “So unless you can prove someone is lying you cannot accuse them of lying” is the core defence of the Gish Galloper, a species of debater of which oiaohm seems hell-bent upon becoming a charter member.

  204. dougman says:

So you do agree that you’re a dimwitted limey, yes? Now that we have that settled, maybe, just maybe, you will realize that Wintel is not the force it once was.

For example, here is a piece written a few years ago which drives this home: “Both firms have often co-operated, despite occasional crockery-throwing. Microsoft has been pushier: in the mid-1990s, for instance, Mr Gates leaned heavily on Andy Grove, Intel’s boss, to stop the development of software that trod on Windows’ turf. Intel backed down.

    The Wintel marriage is now threatened, oddly enough, by technological progress. Processors grow ever smaller and more powerful; internet and wireless connections keep speeding up. This has created both centripetal and centrifugal forces, which are pushing computing into data centres (huge warehouses full of servers) and onto mobile devices—businesses that Microsoft and Intel do not dominate.”

  205. DrLoser says:

    Let’s clear this “defamation” foolishness out of the way first, and perhaps return to microkernels (sigh) later.

You straight up call me lying when you are the one in the wrong.

    Neither Deaf Spy nor I have called you a liar one single time in this thread. In fact, if you search for “lie,” you will find that the word is practically monopolised by … oiaohm. Possibly this is evidence of a guilty conscience?

    This is another case of you attacking documents because you don’t understand them.

    Which “document” did I attack? Not a single one.

    http://www.mondaq.com/australia/x/295580/Corporate+Commercial+Law/Defamation+online+legal+perils+of+social+media+postings

    Another puff-piece from an Australian lawyers’ office: not the finest quality evidence I can imagine. Although it does raise some useful points (see below) — none that support your thesis, oiaohm.

You guys are in the USA, right, DrLoser and Deaf Spy?

    No, we’re not. I’ve pointed this out to you before.

Making a false claim on social media, including blogs, is an offense. This changed with the 2014 ruling.

    No, “making a false claim” is not. No ruling in 2014 changed that.

Remember, innocent until proven guilty applies online too. So unless you can prove someone is lying you cannot accuse them of lying.

    Making the suggestion that somebody is lying (as noted, we didn’t) is not the same as making a defamatory accusation, oiaohm. If it was, it’s hard to see how you would escape multiple charges. No, defamation is aptly described as follows from your cite:

    In Australia, a person may bring defamation proceedings if he or she considers that the publication of a statement caused damage to their reputation.

    You don’t have a “reputation” to damage, do you?

    Furthermore:

    [It is a defence if] the statement was a “fair comment” or “honest opinion” – that is, the statement is on a matter of public interest, it is comment or expression of opinion rather than a statement of fact and is based on proper material

As it happens, it is my honest opinion that you are a “compulsive fantasist,” oiaohm, and I believe this to be “fair comment” in context.

I do not, however, believe you to be a liar. In fact, my honest opinion is that you are, for whatever reason, completely incapable of distinguishing between truth and falsity.

    But, go ahead. Bring legal action against me (and Robert: he’s the publisher) for “deformation.” If this silliness ever reaches court, you’re going to need an interpreter, I think.

  206. dougman says:

I think a liberal application of “Troll Be Gone” would do nicely.

Meanwhile, for the intellectually curious, check out Satrain’s post history at MaximumPC: http://www.maximumpc.com/users/satrain18/track

The best one is “By definition, a Personal DESKTOP Computer is one that is x86 based that can run a desktop OS.”

  207. DrLoser says:

    Re: The URL sort of gives it away, doesn’t it? It’s a “record Q4 and full year revenue,” for what that’s worth.

    Uhhh… and what’s the very first graphic shown? … You sure are a dimwitted Limey.

    Not sure quite what that makes you, Dougie, considering that I made precisely the same point in my following line.

    Not only can you not scroll down a page, but it appears that you can’t even read more than one consecutive sentence at a time …

  208. ram says:

As much as I hate to say it, Mr. Pogson, some moderation of this site’s posts may be required.

  209. oiaohm says:

In fact, oldfart calling me a liar/fraud without true evidence is now also illegal; before the 2014 rulings, what he was doing was acceptable. The rules of engagement have changed.

  210. oiaohm says:

    1. It’s “defamation,” not “deformation,” you … should it be “defamed,” or “deformed”? … numbskull.
I give you that one; it’s the limitation of a spell checker and human error. In fact, that the first two letters are correct tells you that natural human error can come into play.

Number two: I have not said I am wrong at all.

Deaf Spy’s summary at http://mrpogson.com/2015/01/06/2015-crippling-wintel/#comment-235771 amounts to saying I am right.

DrLoser: at the core design of Windows NT is a Microkernel.

That is the protected statement.

It’s in fact Jochen Liedtke who, in 1997, formally created the definition of what a Microkernel is. DrLoser, sorry, that is the book you need. There is no formal definition of Microkernel before 1997. What year is Windows NT 4.0 from? 1996. So guess what: the Windows NT 4.0 documentation is incorrect because it is too old.

A lot of people have not read the formal definition of Microkernel, so they have not woken up to the fact that the NT design is officially a mangled first-generation microkernel.

    Naturally (what with not understanding a subject, as usual, you have claimed to have full understanding of, as usual), you’re not going to favour us with a cite on this accusation, oiaohm.
You straight up call me lying when you are the one in the wrong. This is another case of you attacking documents because you don’t understand them.

    http://www.mondaq.com/australia/x/295580/Corporate+Commercial+Law/Defamation+online+legal+perils+of+social+media+postings

You guys are in the USA, right, DrLoser and Deaf Spy? Making a false claim on social media, including blogs, is an offense. This changed with the 2014 ruling.

Remember, innocent until proven guilty applies online too. So unless you can prove someone is lying you cannot accuse them of lying. You can question and request cites or rewording of sections and remain legal. This is the result of the 2014 rulings.

So every time you claim that I or anyone else is lying, you had better have the documentation to back it up if you are living in the USA. Due to the ruling in the USA it might also apply in other countries. Do you really feel like being stupid?

  211. satrain18 says:

    You’re bullsh1t, Douglas Smith.

  212. satrain18 says:

And, Doug, why do you act like an a$$h0le?

  213. dougman says:

Lo and behold!!! Satrain, the porn-stroking, video-game-poking dimwit from Alabama has returned! http://mrpogson.com/2014/12/23/come-on-the-year-of-gnulinux-on-the-desktop-was-ages-ago-now-were-mopping-up/#comment-230305

    …and yes, I did accept Bitcoins as donations on my website before it was sold.

Question: do you always run around acting like a little 5-year-old snitch twit? Did you even look at the links I provided for you? You could be making a killing spewing your M$ bullsh1t all over the web! http://mrpogson.com/2014/12/23/come-on-the-year-of-gnulinux-on-the-desktop-was-ages-ago-now-were-mopping-up/#comment-230265

  214. satrain18 says:

    Bitcoins?

Please!! Give us your wisdom on Bitcoins!
DrLoser, before it went bankrupt or whatever, Doug’s ‘company’ (Jet Computing) accepted donations in Bitcoins.

  215. dougman says:

    Bitcoins?

Please!! Give us your wisdom on Bitcoins!

  216. dougman says:

    Re: The URL sort of gives it away, doesn’t it? It’s a “record Q4 and full year revenue,” for what that’s worth.

    Uhhh… and what’s the very first graphic shown? https://dl.dropboxusercontent.com/u/12600821/Intel%20Stock%20Price.png

    You sure are a dimwitted Limey.

  217. DrLoser says:

    I repeat, Dougie. Nobody but you is interested in “stock pricing.”

    Not Robert. Not oiaohm. Not Deaf Spy. Not me. Nobody but you.

    Has the bottom fallen out of your bitcoin wallet? Because otherwise I can’t really see why you are obsessing about this completely irrelevant issue.

  218. DrLoser says:

You interjected yourself into a conversation regarding stock pricing, which was brought up by Deaf Spy, so yes you did…

    Here’s the original post. For whatever reason, you are averse to reading it again, so I’ll save you a click:

    In the other news, Intel are actually doing fine:
    http://anandtech.com/show/8901/intel-reports-record-q4-and-full-year-revenue

    The URL sort of gives it away, doesn’t it? It’s a “record Q4 and full year revenue,” for what that’s worth.

    But the only bit about it that refers to the stock price is the graph at the top of the link.

What’s the matter, Dougie? You can’t read anything unless it’s presented to you in a graph? Or is it simply that you can’t be arsed to scroll down to the “Q4 2014 Financial Results (GAAP)”?

    Presumably you have some other physical or even cognitive problem, since the GAAP results are also presented in graphical format, and very definitely make no mention of “stock price.”

    Is your mouse broken? Do you have crippling rheumatoid arthritis? I sympathise in either case.

    No matter. Having established to our mutual satisfaction that stock price is of little or no consequence, perhaps you would care to sally forth on the 2011-2013 baseline asset/liability/equity performances of Intel, Google, Red Hat and Microsoft, as presented earlier by me, your humble and obedient servant?

    They’re just numbers, Dougie. I mean, it’s not like they would impinge on your happy little world in any way.

  219. dougman says:

    Idiot,

You interjected yourself into a conversation regarding stock pricing, which was brought up by Deaf Spy, so yes you did, but nonetheless don’t quit your day job please!

    Go crawl back to the server room closet and blow-out some dust bunnies.

  220. DrLoser says:

Mr Pogson is probably getting bored watching the children play, however. (I can’t say for certain: he certainly seemed to derive a weird sort of fun from watching deprived kids in the Frozen North cleaning fluff out of fans with a Q-Tip.)

    So, as usual, it falls to a Troll like me to bring us all back to the basic point of the OP.

    No, Dougie, it isn’t the stock price of Intel. Nobody but you has shown the slightest interest in that.

    No, Fifi, it’s nothing to do with microkernels.

    Wintel is at a huge disadvantage in 2015. All the things that locked the world into Wintel in decades past now are locking Wintel out: mobility, touch, Android/Linux the cool OS, OEMs ship the stuff, retailers stock it and consumers are lapping them up…

    I don’t really see any of these propositions as proven, although every one of them deserves examination.

    But just to take this “huge disadvantage” thing, Robert. Why in 2015? Why Lollipop, rather than (say) Jellybean in 2012?

    Because there was no obvious evidence of a “huge disadvantage” in 2012. And any “huge disadvantage” in 2015 is pretty much by definition going to be speculative.

    Which is fine. What disadvantages are you speculating upon?

    * The Linux kernel
    * The hardware performance (for the sake of argument, ARM vs Intel)
    * The feature set of mobile phones
    * The application stores

    Or anything else your fertile mind can come up with.

    Because, quite honestly, I don’t really see anything knocking either Microsoft or Intel off their financially secure perch in 2015. Maybe some time soon, but not in 2015.

  221. DrLoser says:

    Dougie, as well as being a dicey sort of salesman, you apparently share with oiaohm the unfortunate and possibly insurmountable quality of being an ignorant buffoon.

Look at any disclaimer on any website that hosts stock pricing, and you will find this tidbit: “current stock price performance data is not necessarily indicative of future performance”

    Did I mention stock price? I did not. Not once.

    I purposefully used headline numbers from filings as a simple way to show people who are (like me) not professional accountants or stock-brokers what the basic figures for four companies of interest are.

    Feel free to pick the nits out of the numbers presented, Dougie. But please don’t misrepresent me as somebody with any interest, published or otherwise, in stock prices.

  222. DrLoser says:

Basically, present your cites now, Deaf Spy, to disprove the case of deformation against you. You now have to prove you are not a criminal.

    There is a difference between questioning and stepping over the legal line.

    What you do here, Fifi, is you politely ask for a specific cite on a specific subject. A scattergun “discovery” request might work in some of the more disreputable court-rooms in the Western World, but it makes absolutely no sense on Mr Pogson’s site.

    And now to the Celebrated Process Of Formal Debate Method!

    1. It’s “defamation,” not “deformation,” you … should it be “defamed,” or “deformed”? … numbskull.
    2. Nobody, even in court (and as mentioned Robert neither possesses nor claims jurisdictional responsibility), has to “prove that they are not a criminal.”

    In passing, Criminal Law is yet one more thing that you comprehensively fail to understand, isn’t it, oiaohm? Oh well. Back to Formal Debate Method.

    3. “There is a difference between questioning and stepping over the legal line.” A cite, perhaps?

    Now, in a court, there is indeed a difference — it’s called “Contempt of Court.” In fact, that’s what the legal line is, when you’re presenting an argument in court.

    On an Internet blog? There’s no difference at all, oiaohm. In fact, it’s a divide-by-zero error, because in almost all cases there aren’t even any legal lines.

    The only legal lines I can think of are incitement, of some kind; stalking, to some degree; or … OK, I’ve run out of relevant legal lines.

    Naturally (what with not understanding a subject, as usual, you have claimed to have full understanding of, as usual), you’re not going to favour us with a cite on this accusation, oiaohm.

    But the very least you could do is to narrow the accusation down to the specific “legal lines” that Deaf Spy has crossed, in your considered opinion.

    Formal Debate Method over to you!

  223. dougman says:

    Idiot,

    Please crawl back into your cube and stay there…seriously.

Look at any disclaimer on any website that hosts stock pricing, and you will find this tidbit: “current stock price performance data is not necessarily indicative of future performance”

    For others that are so inclined, please read: Market Sense and Nonsense: How the Markets Really Work – Jack Schwager

    D.

  224. DrLoser says:

    DrLoser
    Publish, and be Damned.
About time you obeyed this rule yourself. You just claim stuff is crap without a single bit of evidence to back yourself up.
    http://mrpogson.com/2015/01/04/2015-could-well-be-the-year-of-the-linux-thin-client/

    I don’t think that citing one of Robert’s OPs is going to help much, is it, oiaohm? For somebody who spends his copious spare time googling at random and claiming that the results are somehow authoritative (I particularly enjoyed the URL you linked to on “Ludicrously Over-Priced Android Phones For the Aspirational Indian Middle Class”), you barely seem to grasp either the concept of a descriptive summary (clue: don’t just repeat the URL) or indeed an accurate and definitive URL.

    I am a patient man, however. Let me know what you are fulminating about, and I promise to answer.

  225. DrLoser says:

    And, not to bang on about this, but the equivalent figure for Microsoft is 35%.

    A while to go before the Rapture, my friends.

  226. DrLoser says:

    As an interesting comparison, btw, here’s the equivalent headline information for Google. The two companies seem to be roughly comparable in terms of size, although obviously they’re in different markets, because Intel actually makes things.

    Over those three years, Google’s Stockholder Equity has increased by an excellent 43%, whereas Intel’s has only increased by 26%.

    But then again, neither set of figures is any too shabby, is it? Red Hat’s equivalent is 11%, btw.

  227. DrLoser says:

Re: Deaf Spy’s assertion that Intel is doing fine, I will point out that stock prices are actually based on what other investors are willing to pay for your shares — not necessarily on the financial health of the company.

    God help you if you ever try to run a public company, Dougie. Anyhow, as usual the wannabe stock-brokers around here apparently need a little update on their base figures.

    Which headline number between 2011 and 2013 do you want to focus on, Dougie?

    Total Assets? $71 billion to $84 billion to $92 billion.

    Total Liabilities? (No, not oiaohm) $25 billion to $33 billion to $34 billion.

    Total Stockholder Equity? $46 billion to $51 billion to $58 billion.

    No word yet from the end of the 2014 fiscal year (which I believe is in December for Intel), but honestly.

    This looks like a pretty robust (and massive) corporation to me.
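For anyone who wants to check the arithmetic, those three headline series hang together (equity = assets - liabilities for each year), and the 26% equity-growth figure falls straight out of them. A quick Python check using only the figures quoted above:

# Sanity check on the headline figures above (all in $ billions).
# Accounting identity: equity = assets - liabilities.
years       = [2011, 2012, 2013]
assets      = [71, 84, 92]
liabilities = [25, 33, 34]
equity      = [46, 51, 58]

for y, a, l, e in zip(years, assets, liabilities, equity):
    assert a - l == e, f"{y}: identity violated"
    print(f"{y}: {a} - {l} = {e}")

print(f"Equity growth 2011-2013: {(equity[-1] / equity[0] - 1):.0%}")  # ~26%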

  228. DrLoser says:

    I asked a question that required research and effort.

    Then perhaps you should have put some research and effort into it, Fifi. As Deaf Spy points out, your massive screed can be summed up in three simple words:

    “I was wrong.”

  229. Deaf Spy says:

Wow, Ohio, what a long way to say “I was wrong”.

    See? I managed to summarize your wall of text into three simple English words. You may try it yourself next time.

  230. oiaohm says:

Jochen Liedtke is a master of microkernel design. Anything designed around the idea of a Microkernel should follow his papers if you expect it to perform.

NT contains Microkernel ideas, so the implementation of those ideas should match up to Jochen Liedtke’s if it is not to have performance problems. Jochen Liedtke basically benchmarked everything and built the fastest micro-kernel containing parts of the MACH design.

If you want to understand whether NT is written correctly, it must be compared to the work of those who can get an OS design right.

I asked for NT compared to Micro-kernel design, not NT design compared to someone’s random crap guess at what Microkernel design is.

The shocking thing is that NT is a Microkernel with violations. This was the big trap. The violation list:
1) Excess parts in the same memory space as the Microkernel. Remember, early QNX ran on 8088 processors that did not have protected memory, and not all current Micro-kernel OSes require protected memory either.
2) The Microkernel is not an independent part relative to all Services and Drivers. (This is true of some OSes classed as Microkernel OSes as well: LynxOS, the original Minix and many others.)

So NT is not a pure Microkernel. This is why most other terms are now dead: something is either a pure monolithic kernel, a pure micro-kernel of some generation, or a micro-kernel design with violations.

Whether NT is a Microkernel is a point of view. Some points of view say yes; some say no.

Deaf Spy, most of the Microsoft papers’ claims that NT is not a Micro-kernel are bogus when you compare them to real-world Micro-kernels and allow that a Micro-kernel design can have violations of the Micro-kernel rules.

Really, you foolishly trusted an untrustworthy source. The terminology for defining a Micro-kernel has been refined over the last 20 years.

Yes, big mistake: you need at least 2 cites, one that defines Microkernel completely and one that defines the NT design; then you had to do some work comparing the two.

    I asked a question that required research and effort.

  231. oiaohm says:

OK, oops: the one thing I mention about NT in that post is First Generation Micro-kernel.
https://www.gnu.org/software/hurd/microkernel/mach.html

Please, that is the Mach kernel, and yes, the NT link mentions it. So nothing I say about NT in that post is wrong.

Yes, the NT design has glued-in ideas from the First Generation Microkernel.

So this bit at least is absolutely true, and you owe me a sorry for it. Guess how much more is absolutely true.

  232. oiaohm says:

    Deaf Spy
    The rest is completely, absolutely incorrect
Look up the post you call absolutely incorrect. Nothing in that post is about NT.

Now I want your cites proving that everything in that post is absolutely incorrect, as you claimed. Put up now.

  233. dougman says:

Re: Deaf Spy’s assertion that Intel is doing fine, I will point out that stock prices are actually based on what other investors are willing to pay for your shares — not necessarily on the financial health of the company.

  234. Deaf Spy says:

Deaf Spy, unfortunately I did include proof.
    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.4581

And since when does Jochen Liedtke qualify as a specialist and trustworthy source for the architecture of Windows NT? Btw, the abstract doesn’t even mention any OS in particular.

    So, my dear Fifi, what exactly are you trying to prove with your “proof”? That you can use Google to find the abstract of a paper you have never read yourself?

By law making a statement that someone is lying with evidence is deformation.
    In which Universe?

  235. oiaohm says:

Basically, present your cites now, Deaf Spy, to disprove the case of deformation against you. You now have to prove you are not a criminal.

    There is a difference between questioning and stepping over the legal line.

  236. oiaohm says:

Oops, typo.
This is wrong. By law making a statement that someone is lying without evidence is deformation. So I really don’t need to bring evidence that I am right at all. You need to bring evidence that I am wrong.

  237. oiaohm says:

Actually, it is you who must prove you are correct. This is how things go in the normal world. I can prove that your proof is wrong, but you must bring a proof to prove your statements.
This is wrong. By law making a statement that someone is lying with evidence is deformation. So I really don’t need to bring evidence that I am right at all. You need to bring evidence that I am wrong.

  238. oiaohm says:

Deaf Spy, unfortunately I did include proof.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.4581

It’s not free. To cover all the proof documents of OS design, I hope you have 3000 dollars to spend. When you’ve got the money, argue.

  239. Deaf Spy says:

Deaf Spy, if what I said is incorrect, where are your cites proving it?

Actually, it is you who must prove you are correct. This is how things go in the normal world. I can prove that your proof is wrong, but you must bring a proof to prove your statements.

For example, in my other post I say that Intel are doing fine, and I bring a proof. See? Not that difficult. If you can find a proof, of course. But finding a proof that NT has ever been designed as a microkernel is mostly difficult.

Btw, your latest post is beyond any hope. Not only do you second the absurd compound “monolithic userspace”; it reveals clearly that you do not have the slightest knowledge of OS theory and practice.

  240. oiaohm says:

    http://www.cs.huji.ac.il/~feit/papers/ClustBaseTR.pdf
There are no doubt general criticisms to be made about QNX (it really is slow, practically by design, and I’m not entirely sure I like the idea of using it as a Distributed OS — the built-in features are there, but in this case I really do worry about security).
Did you even look up one benchmark? Apparently not. The QNX micro-kernel runs at pretty much the same speed as any other Monolithic OS. The only design cause of some slowness is opening applications, because QNX, being an RTOS, allocates all the memory straight away. This is RTOS behaviour, not because QNX is a micro-kernel. Monolithic RTOSes do the same horrible thing.

Most of the QNX design is in fact fast, in a lot of places faster than Windows or Linux. Basically, if you could switch QNX to a GPOS instead of an RTOS it would run rings around Windows or Linux in performance on every metric.

“general criticisms to be made about QNX”
The problem here is, yes, there are, but most are TROLL arguments without a single benchmarking document backing them up. Every benchmark leads you to an extremely limited set of issues on QNX. There are more issues using Windows or Linux.

Most QNX issues are drivers, plus the RTOS fact that it allocates all application memory straight away. QNX does not have the problem of applications dying or running slow due to lack of RAM: Linux dies under low RAM due to the OOM killer, and Windows runs slow by thrashing the hell out of the swap system.

DrLoser, you love making crap up, don’t you?

  241. oiaohm says:

    DrLoser
    Publish, and be Damned.
About time you obeyed this rule yourself. You just claim stuff is crap without a single bit of evidence to back yourself up.
    http://mrpogson.com/2015/01/04/2015-could-well-be-the-year-of-the-linux-thin-client/
And you still have not responded here with a sorry for challenging a cite without grounds. Why should I give you a proper cite when you will fight it and claim it’s a lie?

  242. oiaohm says:

Deaf Spy, if what I said is incorrect, where are your cites proving it?

  243. oiaohm says:

Deaf Spy/DrLoser: a Micro-kernel with a monolithic in userspace is like the L4 microkernel with the Linux kernel running under it to provide driver support.

The fact that you don’t know what monolithic in userspace means for a Micro-kernel means you don’t understand the definition of Micro-kernel.

A Micro-kernel does not demand that you have individual services in user-space.

    Actually, I can’t even work out what “userspace” means in context. Which bit of a microkernel architecture is properly “kernel,” and which bit is “userspace?”
Not that simple. A monolithic in userspace is normally a double-kernel setup: one Microkernel-designed kernel with direct hardware control, and one monolithic kernel in user-space providing all the driver functions the Microkernel needs. But it does not have to be another OS kernel. It can be that everything which would normally be the Multiserver Architecture Microkernel userspace is fused into one binary, then LTO-optimised, with drivers being libraries rather than independent executables. This is still a Micro-kernel OS.

Second-generation microkernel architecture, formalised as of 1997, allows for “monolithic in user-space”. Yes, “monolithic in user-space” is the tech term for a set of solutions. Third-generation microkernel architecture, which started being defined as of 2009, includes hardware instructions for virtualisation and an attempt to design a universal API/ABI for all core Micro-kernel parts.

    http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.4581
    This cite in fact covers everything I said on performance.

I think the problem here is that this is really above your level. It’s not uncommon for those who don’t understand Microkernels and the different generations to challenge this, because second generation means you start talking about stuff that does not look anything like Mach or the Tanenbaum books.

The funny part: Tanenbaum changed Minix to operate as second generation instead of first generation in 1995, but this is not reflected in his books. Tanenbaum’s books after 1995 are basically “do as I say, not as I do”.

Yes, a common problem here: people base their ideas of what a Micro-kernel is off individual implementations.

Tanenbaum’s focus is armoured operating systems.

I really like how Tanenbaum weasels.
http://www.cs.vu.nl/~ast/publications/computer-2006a.pdf
However, since all servers and drivers in MINIX 3 run as physically isolated processes, they cannot directly call each other’s functions or share data structures. Instead, IPC in MINIX 3 is done by passing fixed-length messages using the rendezvous principle: when both the sender and the receiver are ready, the message is copied directly from the sender to the receiver.
Basically, MINIX has implemented user-space-to-user-space messaging, just without the straight-up execute options. The result is fairly high overhead in MINIX on single-core systems. On multi-core systems it is not as critical to be able to jump execution between the Multiserver Architecture Microkernel parts, because the messaging system also works across CPUs: the MMU service could be running on one CPU and a driver requesting memory on another, so the request can be processed in real time. But you still get overhead with particular kinds of drivers, because services are not running at the right time. Yes, the MINIX overhead has a cause. It’s the classic security vs performance trade-off.

The reason MINIX performs today is that it still replicates the monolithic pattern of execution, with code flow moving all over the place without having to request a process change. But do note the messages in MINIX are still user-space-to-user-space memory copies: you do not have to go down to the kernel to send or receive an IPC message, only to set up the IPC link. Note this breaks the first-generation model in which all IPC had to go through the kernel.
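To make the rendezvous principle in that MINIX quote concrete (both sides block until the fixed-length message can be copied directly from sender to receiver), here is a minimal Python sketch. It is purely illustrative: threads stand in for isolated processes, and nothing below is MINIX code.

import threading

class RendezvousChannel:
    """Toy rendezvous IPC: send() blocks until a receiver is waiting,
    then the message is handed over directly. Illustration only."""
    def __init__(self):
        self._cond = threading.Condition()
        self._msg = None
        self._receiver_waiting = False
        self._msg_ready = False

    def send(self, msg):
        with self._cond:
            while not self._receiver_waiting or self._msg_ready:
                self._cond.wait()            # block until rendezvous
            self._msg, self._msg_ready = msg, True
            self._cond.notify_all()

    def recv(self):
        with self._cond:
            self._receiver_waiting = True
            self._cond.notify_all()          # tell sender we are ready
            while not self._msg_ready:
                self._cond.wait()
            msg, self._msg = self._msg, None
            self._msg_ready = self._receiver_waiting = False
            return msg

chan = RendezvousChannel()
t = threading.Thread(target=lambda: print("driver got:", chan.recv()))
t.start()
chan.send("READ block 42")                   # completes only at the handover
t.join()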

Tanenbaum’s focus is security, not high performance. This has led people who follow Tanenbaum to believe a stack of things about Micro-kernels that are not exactly true. Tanenbaum is not who you read if you are looking for high-performing Microkernels.

I guess DrLoser did not read the paper he picked carefully enough.

Yes, Tanenbaum also likes using the term Paravirtual Machines and then getting confused. The L4 developers tried Tanenbaum’s idea of having many OS instances running drivers; in reality it mostly does not work. The time-slice/deadlock problem raises its ugly head.

Driver X is waiting on driver Y to do something; if they are independent virtual machines, how does driver X give its CPU time to driver Y? Please do not say “go via the Microkernel”, because that is what causes context-switch overload, killing you dead. The reality is you end up with pretty much only two options.

Monolithic in userspace, or messaging in userspace: for performance those are pretty much your only two options when making a micro-kernel.

By the way, understanding the requirement for great messaging/execution flow kind of explains why the Vista idea of moving a few parts out of kernel space can be a very bad idea.

The reason a particular set of terms like macro-kernel has been dropped is that those old ideas lead you down a path from hell: there is no performance advantage to them, only disadvantages. Kernel overload/context-switch overload is a huge nightmare for performance. Linux FUSE drivers have suffered from lots of problems; being halfway between userspace drivers and kernelspace drivers is not a good place to be.

Yes, the argument that “we are getting closer to a Microkernel” is more a failure to understand the problem.

  244. oldfart says:

    “How dare a microkernel architecture try to automate this fine lifetime pursuit of a craftsman dedicated to the only thing he knows how to do?”

    But he does it sooo well my dear Doctor….

But this is not about this blog’s resident troll. The contention is that smartphones are “crushing” Wintel (actually Win+MacTel, IMHO). Frankly, as the owner of a really nifty Samsung Galaxy S 5 as well as a Samsung Galaxy Pro 10.5 tablet, I fail to see how even these fairly state-of-the-art systems are going to hold a candle to even a cheap standard computer.

Neither of these form factors (smartphone, tablet) is, as delivered, capable of being used as a full substitute for a standard PC. If you equip them with keyboard and mouse, the result is a) as expensive if not more expensive than a basic standard desktop, b) still inadequate software-wise and c) cumbersome as hell to carry and use.

But if believing that Wintel is doomed floats your boat, Robert Pogson, knock your socks off.

  245. DrLoser says:

    Deaf Spy so will you agree this I will give you PDF link to formal book. In agreement you never as either Deaf Spy or DrLoser or any other post on this site again.

    If you’re going to be a pathetic little bully, oiaohm, I suggest that you couch your requests in language that does not make you appear to be totally swivel-eyed.

    Formal Debate Method (your invention, I should say), implies the following logical progression:

    1) You post “PDF link to formal book” (me Tarzan, you Jane: where do you come up with this phraseology?)
    2) Both halves of my brain, the half in the fourth dimension (Deaf Spy) and the half that sits in the interstices between the second and third dimensions (Dr Loser), are forced to accept the wisdom of your words and the relevance of your cite. Perhaps we are reduced to simple gibbering Hero Worship.
3) Consequently we abjure from posting, ever again.

Now, a moment’s logical consideration, coupled with the Awesome Power of Formal Debate Method, will demonstrate the significant weakness in this otherwise fine logical progression. To wit:

    What happens if you just link to a YouTube video of Kim Jong-Il reading his favourite Noddy book by Enid Blyton?

    After all, there’s nothing to stop you doing that, is there?

    In the words of the Duke of Wellington:

    Publish, and be Damned.

  246. DrLoser says:

    I think QNX’s messaging may be just peachy for small systems but it probably doesn’t scale in places where schedulers actually are needed.

    You are welcome to think what you like, Robert, but I’d suggest you should avoid making wild assumptions about the characteristics of microkernels in general and QNX in particular.

    I politely offer you the following conundrum, based upon your stated assumption:

    How exactly does QNX function as a hard real-time OS, if it doesn’t feature a scheduler?

    (The answer, btw, is that the very core of QNX is built around a hard real-time scheduler. That and the messaging system are basically all there is in the microkernel itself.)
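(To make “a scheduler plus messaging and little else” concrete, here is a toy fixed-priority scheduler in Python. It is a sketch of the concept only; real QNX scheduling is of course far more involved, and nothing below comes from QNX sources.)

import heapq

# Toy fixed-priority scheduler: always run the highest-priority ready
# task for one step; lower number = higher priority. Illustration only.
def schedule(tasks):
    ready = list(tasks)          # (priority, name, remaining_steps)
    heapq.heapify(ready)
    while ready:
        prio, name, steps = heapq.heappop(ready)
        print(f"running {name} (priority {prio})")
        if steps > 1:
            heapq.heappush(ready, (prio, name, steps - 1))

# The priority-1 task monopolises the CPU until it finishes:
schedule([(10, "logger", 2), (1, "motor-control", 3), (5, "network", 1)])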

    Additionally, messaging is a huge vulnerability with all the hopping back and forth in user-space.

    And the difference between this and the APIs found in standard hybrid OSes like Windows and Linux is, what?

    Nothing very much at all, actually. You could probably construct a theoretical model whereby the QNX network stack, for example, is suborned by a “rootkit” that perfectly mimics the real stack, but incorporates a few convenient attack vectors. But at that point you’d probably be better off just suborning one of the many Linux or Windows daemons or services with proven vulnerabilities. Much simpler and in fact (for Black Hats) better supported. Also less likely to crash the entire platform.

    Bottom line: QNX and other microkernels adhere to the “separation of concerns” principle of computer design. Or, in terms more familiar to you, “Do One Thing And Do It Well.”

There are no doubt general criticisms to be made about QNX (it really is slow, practically by design, and I’m not entirely sure I like the idea of using it as a Distributed OS — the built-in features are there, but in this case I really do worry about security).

    Mumbling something about messages pinging back and forth, however, merely suggests that you can’t come up with a coherent criticism at all.

  247. DrLoser says:

    No, wait, on second thoughts I think I’ve worked out what oiaohm is trying to say there.

    Messages are bad.

    Give the lad props for total consistency. That belief is certainly consonant with his longstanding total inability to deliver a message of any meaning whatsoever.

    “Feeding up false data” is the job of a minimum-wage itinerant fantasist in northern NSW. How dare a microkernel architecture try to automate this fine lifetime pursuit of a craftsman dedicated to the only thing he knows how to do?

  248. DrLoser says:

    I can’t even pinpoint the most stupid statement. I am divided between:

    “A memory controller can feed up false data”
    and
    “a monolithic in userspace”.

    Tricky choice, but you’ve picked the right two envelopes out of the Bran Tub of Rubble, I think.

    Minimalists will go for “a monolithic in userspace,” which is probably the essence of oiaohm’s ignorance boiled down into a mere four words. What “monolithic” means in this context defeats me. Actually, I can’t even work out what “userspace” means in context. Which bit of a microkernel architecture is properly “kernel,” and which bit is “userspace?”

    Personally I prefer the baroque magnificence of the first envelope, which is, I think, improved in its utter awfulness by quoting the follow-on sentences:

    A memory controller can feed up false data. Driver can feed up false data. A file system driver can feed up false data. That is the problem if you look at messaging the wrong way you see it as a huge vulnerability run way it will harm me. The problem is a vulnerability in particular OS parts you are screwed anyhow.

    Five whole, classic, sentences of gibberish, adding up to … what? The Signal to Noise ratio is extraordinarily low in this one, Luke.

  249. Deaf Spy says:

    Ohio, I tried to find a single true statement in your post. I tried hard. And there is only this single one:

3) Applications. Might sound horrible, but there are fewer applications for QNX than for Linux.

    The rest is completely, absolutely incorrect.

    I can’t even pinpoint the most stupid statement. I am divided between:

    “A memory controller can feed up false data”
    and
    “a monolithic in userspace”.

  250. oiaohm says:

    If that was true, why isn’t QNX in general use?
1) Price: QNX has never been cheap. To release a product using it costs some serious coin.
2) Drivers: remember you can get QNX for x86.
3) Applications. Might sound horrible, but there are fewer applications for QNX than for Linux.
Robert Pogson, QNX-style messaging appears in hypervisor microkernels, so the QNX design scales. NOVA and L4 are examples of this.
http://www.qnx.com/products/neutrino-rtos/neutrino-rtos.html#multicore
Turns out that QNX scales quite well. The QNX design avoids huge amounts of locking problems.

BlackBerry is kind of giving QNX a black eye. That is not really the fault of QNX or of BlackBerry the company: it does not matter how good the OS design is if the drivers are an absolute train wreck.

If you are able to get into the debugging logs of a BlackBerry you can find that drivers are crashing. If QNX were not as tough as it is, you would in most cases not just be worried about it running slow.

BlackBerry and Windows Phone are running into the same problem: hardware makers are not putting the effort into making quality, performing drivers for those platforms.

    At one point, the source code of QNX was widely available so if someone found it useful on general systems it would be out there.
Not exactly. Yes, the source code was readable, but it was look-but-don’t-use unless you paid for a license.

    Additionally, messaging is a huge vulnerability with all the hopping back and forth in user-space.
Seems that way. A memory controller can feed up false data. Driver can feed up false data. A file system driver can feed up false data. That is the problem if you look at messaging the wrong way you see it as a huge vulnerability run way it will harm me. The problem is a vulnerability in particular OS parts you are screwed anyhow.

Monolithic kernels do all that huge hopping around inside kernel space, between their drivers and core kernel parts, without suitable security. This turns out to be the secret of why Monolithic kernels performed so well.

But consider a Second Generation Micro-kernel (a Micro-kernel with messaging). Unlike a monolithic kernel, where a driver can write almost anywhere, memory mapping in a Second Generation Micro-kernel is used to assign what each driver has access to. Security-wise, a Second Generation Micro-kernel driver has access to less than a monolithic driver. Yet there is no major overhead.

Why do First Generation Micro-kernels (like MACH) underperform? A timeslice is given out and not fully used, combined with memory-management costs.

This is security vs performance. A Second Generation Micro-kernel is a little less secure than a First Generation Micro-kernel. Yet even a Second Generation Micro-kernel is technically more secure than a Monolithic kernel, with the same performance.

Robert Pogson, there is a balance between security and performance. If implementing security completely destroys performance, people will not use it.

No one has made a Micro-kernel that can perform at monolithic speeds without messaging, short of horribly sticking a monolithic in userspace (which also completely ruins security). The Second Generation Micro-kernel, the modern form, is about the sweet spot between security and performance. You cannot get higher combined security and performance from any other OS design, other than possibly a third-generation micro-kernel, and exactly what that is has not been locked down yet.

Robert Pogson, the argument about the possible risks is why for so long no one wanted to do it, but the QNX developers had nuts like no one else and just went for it. They were finally formally proved right in 1995:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.26.4581
It turns out there is not very much wriggle room in the implementation of a Micro-kernel if it is to result in a performing OS.

The Second Generation Micro-kernel category in fact includes QNX: even though it predates most First Generation Micro-kernels, its design is really Second Generation.

Third-generation microkernels are now under construction.

Yes, the NT design has glued-in ideas from the First Generation Microkernel. This is very much oops. Is it fixable? Most likely yes. Will it be painful? Most likely yes.

oiaohm wrote, “QNX matches the performance of a monolithic kernel in 1983 and it’s the first commercial microkernel.”

If that was true, why isn’t QNX in general use? I think it’s mostly in embedded systems, where performance is less of an issue than reliability/simplicity/size. I think QNX’s messaging may be just peachy for small systems but it probably doesn’t scale in places where schedulers actually are needed. Additionally, messaging is a huge vulnerability with all the hopping back and forth in user-space. At one point, the source code of QNX was widely available so if someone found it useful on general systems it would be out there. It’s not. I’ve only heard of QNX in relation to embedded/BlackBerry systems. If The Little Woman’s BlackBerry is any indication, QNX is a dog. You’d think there would be a Debian port of it…

  252. oiaohm says:

    DrLoser
    http://technet.microsoft.com/library/cc750820.aspx
I will take this cite apart: it contains a lie.
No commercial operating system is based on a pure microkernel design. The reason is simple: the pure microkernel design is commercially impractical because it is too computationally expensive; that is, it is too slow.
This is completely incorrect, a 100%, no-question lie. Unfortunately, a majority-believed lie. QNX matches the performance of a monolithic kernel in 1983 and it’s the first commercial microkernel. Remember, 1985 is when the PRISM/Mica OS starts. So QNX proves both parts of that statement false. That statement was false before a single word of the NT/Mica design was put on a bit of paper.

Basically, a huge stack of modifications to the Windows NT microkernel design is based on the idea that a Microkernel that performs is impossible. Yes, a stack of masking modifications was done to NT; none of the real problem was fixed.

What is the difference between the QNX microkernel that performs and the old MACH microkernel that sucks? It’s a thing called Synchronous message passing; all modern Micro-kernel OS designs include some form of it. The Windows NT design does not. And how the Synchronous message passing is implemented is absolutely critical for performance: messages must not trigger returning to the kernel.
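Whatever one makes of that claim, the cost gap it points at is easy to feel for yourself. Here is a rough Python micro-benchmark comparing an in-process call with a cross-process message round-trip over an OS pipe; the absolute numbers vary by machine, and a pipe is only a stand-in for real kernel IPC primitives:

import time
from multiprocessing import Process, Pipe

# Rough micro-benchmark: in-process call vs. cross-process message
# round-trip. Only the ratio between the two numbers is interesting.

def plain_call(x):
    return x + 1

def echo_server(conn, n):
    for _ in range(n):
        conn.send(conn.recv() + 1)       # receive, increment, reply

if __name__ == "__main__":
    N = 10_000

    t0 = time.perf_counter()
    for i in range(N):
        plain_call(i)
    call_us = (time.perf_counter() - t0) / N * 1e6

    parent, child = Pipe()
    p = Process(target=echo_server, args=(child, N))
    p.start()
    t0 = time.perf_counter()
    for i in range(N):
        parent.send(i)
        parent.recv()
    ipc_us = (time.perf_counter() - t0) / N * 1e6
    p.join()

    print(f"function call  : {call_us:.2f} us/round-trip")
    print(f"pipe round-trip: {ipc_us:.2f} us/round-trip")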

Next, the cite you use makes out that it is comparing to commercial microkernels. Really, it’s only comparing to the failed academic experiment MACH. MACH goes from 1985 to 1994. Please note MACH itself was never fixed. Some descendant OSes of MACH were fixed, but not all of them; some have dug themselves into too deep a hole to get out of, with incorrect fixes to the problem.

The MACH Microkernel being the only thing people used as an example of a Microkernel for many years meant that the fact that QNX worked, and had the Microkernel design correct, got completely missed. L4, Minix, even the never-really-well-released Hurd that is meant to be based off MACH, contain the QNX feature required for performance.

Please note Tanenbaum makes the same mistake. Reading his books gives you more ideas, not the solution to the microkernel design problems. The problem is that the NT core has not been redesigned.

Almost every OS referred to as a Hybrid kernel is based off people reading papers on the MACH OS. Almost all of them contain the same screw-up.

The direct descendants of the MACH OS, the BSD systems, converted their kernel to a true monolithic design because, like everyone else at the time, they failed to find the one working example. Mind you, by converting to a true monolithic design you also fix the problem blocking good performance. Going halfway between a Microkernel and a monolithic kernel is how to screw it up completely: you get all the bad issues of a Microkernel with all the bad issues of a monolithic.

Windows developers need to make a choice: either go full monolithic or go full micro-kernel. Being halfway in the middle is not helping.

    And a link to the relevant one of your own posts wouldn’t really qualify as a cite, would it?
It would. This is why insulting me enough that I will not give cites any more is something you should never have done.

Are you going to formally apologise, and promise never again to modify handles to play jokes with them, not just my handle but everyone’s??

    Deaf Spy so will you agree this I will give you PDF link to formal book. In agreement you never as either Deaf Spy or DrLoser or any other post on this site again.

The BOOK you need is linked off the wikipedia. I have given you every clue to find it. Maybe you are too stupid or lazy to find it.
That is the bet: the book exists and you will find it after you take the bet.

I told you that there is a newer definition of Microkernel. I don’t know how much more of a clue I can give an idiot; the book is a cite on the wikipedia page. You cannot be so dumb that you cannot find something like this. If you are this dumb you really should not be posting. I really would be doing everyone a favour if you did not come back.

  253. DrLoser says:

    Already have and I am not repeating self in the prior.

    No you haven’t. And a link to the relevant one of your own posts wouldn’t really qualify as a cite, would it? Your pathetic self-defined sense of ethics would hardly be infringed by at least highlighting the specific instance of your own brilliance, would it?

  254. DrLoser says:

DrLoser, if you believe I am lying, accept the bet.

    Explain the bet and I am up for it, oiaohm.

    Just, please, no more of this “I have given you enough clues” drivel.

    You’re not Willy Wonka. I don’t care about cites. Spell it out in the closest you can come to plain English, please.

  255. oiaohm says:

DrLoser, if you believe I am lying, accept the bet.

  256. oiaohm says:

    Explain in clear and simple terms, oiaohm, what hardware bug this might be and what range of Intel chips (years would do, unless you feel like being more specific about architectures and so on) are affected.
    Already have and I am not repeating self in the prior.

  257. DrLoser says:

    In fact there is a huge clue in my posts to the correct book.

    So what? Who cares?

    ‘Fess up or admit that you are just lying through your teeth.

  258. oiaohm says:

Look up the formal definition of a cite. Did I link you to the exact point? Nope. Did I give you a page number? Nope. If I give you a book to read I am still not giving you a cite.

Call me a weasel.

  259. DrLoser says:

    Whilst waiting for a usable cite (not, in this case, oiaohm’s misfeasance), I’ll carry on.

Yes, part of the reason Microsoft cannot find it is their dominance of usage of Intel chips that contain a hardware bug.

    Explain in clear and simple terms, oiaohm, what hardware bug this might be and what range of Intel chips (years would do, unless you feel like being more specific about architectures and so on) are affected.

    Because it’s utter imaginary tosh, isn’t it?

  260. oiaohm says:

DrLoser, do you take the bet?
    This “book” does not exist, oiaohm. The nearest I can come to guessing it is the Tanenbaum book, which clearly refutes your ludicrous claim.
It’s not the Tanenbaum book.

“Further reading” is not where the book I am referring to is hiding. The Tanenbaum book is listed as Further reading and is not cited because in many areas it’s discredited. DrLoser, so you like using discredited sources to try to make your point. In fact there is a huge clue in my posts to the correct book.

  261. DrLoser says:

    (I take that snark back on (a). For some reason the linking system here is apparently temporarily broken.)

  262. DrLoser says:

    a) No cites, remember, Fifi?
    b) If you’re going to cite, at least cite the specific comment.

    Nobody would even trust you with a Rolodex, would they?

  263. DrLoser says:

The evidence was found by Deaf Spy on AMD Opteron. Yes, part of the reason Microsoft cannot find it is their dominance of usage of Intel chips that contain a hardware bug. Yes, the benchmarks that Deaf Spy/you brought in show it. So why do I need to provide a cite for this when you already did? Maybe you did not know what you were citing.

    So, you don’t have any evidence that, given (say) an 8 core i7 with a GPU of choice, Blender is seriously hampered by NT as compared to Debian.

    Rather a roundabout way of admitting it, if you ask me, Fifi.

  264. oiaohm says:

    Got any evidence that, given (say) an 8 core i7 with a GPU of choice, Blender is seriously hampered by NT as compared to Debian?
    http://mrpogson.com/2015/01/03/atom-pc-future-pc/#comments
In fact it was you who posted the cite containing the demonstration of the flaw in the NT design. What, do you have short-term memory loss about which cites you have posted around here, DrLoser?

See, you failed to claim ownership of a cite you posted.

  265. DrLoser says:

The change of definition is in an official book. That book is connected to many wikipedia articles.

    A simple cite would help, oiaohm, but I realise that you would trash your weird moral universe by providing one.

    A few keywords, perhaps? A well-known IT savant involved? Other esoteric clues for those of us who love nothing better than an Easter Egg hunt led by a fool in a tin-foil hat?

    This “book” does not exist, oiaohm. The nearest I can come to guessing it is the Tanenbaum book, which clearly refutes your ludicrous claim.

    Just blurt it out. You’ll feel better for it. And if you don’t blurt it out, then frankly I’m going to accuse you of telling porkies yet again.

  266. oiaohm says:

DrLoser, no point pretending not to be Deaf Spy now.

  267. DrLoser says:

    Me: Got any evidence that, given (say) an 8 core i7 with a GPU of choice, Blender is seriously hampered by NT as compared to Debian?
Fifi: The evidence was found by Deaf Spy on AMD Opteron.

    No it wasn’t. Got any evidence that, given (say) an 8 core i7 with a GPU of choice, Blender is seriously hampered by NT as compared to Debian?

    Raw numbers without a cite of any kind would do.

Got any?

  268. DrLoser says:

    DrLoser Alpha CPU architecture is Prism. Alpha CPU is designed for Multi in and out at the same time. This is why its not Harvard or Von Neumann architecture.

    Ludicrous. Try again. What precisely does “Multi in and out at the same time” mean?

  269. oiaohm says:

    DrLoser Alpha CPU architecture is Prism. Alpha CPU is designed for Multi in and out at the same time. This is why its not Harvard or Von Neumann architecture.

So Alpha is Alpha. It’s a unique beast designed to massively scale. You see the same kind of thing in IBM POWER RISC chips that are able to process 8 threads per CPU core. This is not hyperthreading but in fact 8 different inputs and outputs per CPU.

    There are a lot of chips out there that are not Harvard or Von Neumann architecture.

    Got any evidence that, given (say) an 8 core i7 with a GPU of choice, Blender is seriously hampered by NT as compared to Debian?
The evidence was found by Deaf Spy on AMD Opteron. Yes, part of the reason Microsoft cannot find it is their dominance of usage of Intel chips that contain a hardware bug. Yes, the benchmarks that Deaf Spy/you brought in show it. So why do I need to provide a cite for this when you already did? Maybe you did not know what you were citing.

    Let’s assume that the meaning of “microkernel” has changed, at some unspecified date with no cite attached.
I will give you that, in exchange for you leaving. The change of definition is in an official book. That book is connected to many wikipedia articles. It includes the exact date of the change and why it changed. At the core of NT is an old microkernel design, with a very old design flaw. Apparently no one at Microsoft has read the book that is required reading for anyone thinking of doing OS design today.

  270. DrLoser says:

    It’s been a short, yet instructive thread. Specifically, it has offered a very fine example of Gish Galloping:

    Deaf Spy really its all written on the wikipedia with references what David Cutler worked on. Yes the web page about David Cutler.
“Me Google better than you Google, monkey brother!”
    DrLoser the only party with free silicon production capacity is Intel.
    Crap.
    There is a very strict restrictions that can be in kernel space to be a Microkernel.
    Nope.
    PRISM/ALPHA chips are not Harvard or Von Neumann architecture.
    So, what are they?
    We are going to learn how a true cite nazi operates.
    No, we’re not, are we Fifi?
    At the core design of Windows NT is a Microkernel.
    No justification of this extraordinary claim will be forthcoming.
    Once you have Jochen Liedtke, something bad happens.
    Presumably L4? Who knows? Is it relevant?

    Highly entertaining, if you enjoy watching a complete ignoramus making the customary fool of himself … but, can we move on to a proper discussion, please?

    Alternatively, somebody (other than oiaohm) could step up and defend oiaohm. I know this is difficult, what with him being a witless gibbering fool, but presumably he occasionally makes a comment that other people on this site are prepared to defend?

    No?

    Oh dear, Fifi.

  271. DrLoser says:

    The existence of the NT core major bug becomes clear with Blender and other programs that should scale in performance equally with increased CPU speed or an increased number of CPUs, but don’t.

    An “NT core major bug?” That should be relatively easy to pin down, Fifi. I mean, we’re not even talking about an architectural decision or even a design wart. We’re (dressed nattily in the customary abbreviated red leather and matching high heels) talking bug!

    Have you considered filing a bug report? (I seem to recall you doing so to the Austin Group, regarding various Posix bugs.)

    Remember, to argue with this you will need to try to find counter-cites, DrLoser, as I am not providing cites for any of this. This is all basic computer science knowledge.

    That’s not really how “knowledge” of any kind works, Fifi. It’s a pretty fine definition of “self-regarding stupidity,” but it’s not knowledge as we know it, Jim.

    Sans cites, though, I can still offer the following counter:

    Got any evidence that, given (say) an 8 core i7 with a GPU of choice, Blender is seriously hampered by NT as compared to Debian?

    Because, without that, Fifi, and as usual, you have nothing.
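
    For anyone who actually wants to produce those raw numbers, a minimal sketch follows: time an identical headless Blender render on each OS (say, a dual boot of NT-family Windows and Debian on the same hardware) and compare wall-clock times. It assumes Blender is on the PATH and that a scene file named test.blend exists; both the file name and the three-run count are arbitrary choices:

        import subprocess
        import time

        # Minimal sketch: time identical headless Blender renders and report
        # wall-clock seconds, so the same script can be run on each OS.
        runs = []
        for _ in range(3):
            start = time.perf_counter()
            subprocess.run(
                ["blender", "-b", "test.blend", "-f", "1"],  # render frame 1
                check=True,
                stdout=subprocess.DEVNULL,
            )
            runs.append(time.perf_counter() - start)
        print("seconds per render:", [round(r, 2) for r in runs])
        print("best of three:", round(min(runs), 2))

    Run it on both systems and you have the raw numbers; without something like that, the claim stays unsupported either way.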

  272. DrLoser says:

    If you want to prove and find Microsoft Windows’ internal problems, take the cite you found and compare it to proper documents on how a microkernel functions and performs.

    What cite? I didn’t provide you with a cite. Can you cite it back? Oh dear, how unfortunate. That would be against your newly-developed arbitrary ethical standards, wouldn’t it?

    Back to the frilly knickers and the flickering lamp-post with you, Fifi! So much easier to justify, ethically speaking.

  273. DrLoser says:

    The problem is that the meaning of microkernel has changed.

    Nope, it hasn’t. Mach, QNX, etc etc … still microkernels, in exactly the same way they always were.

    But let’s indulge your red-miniskirt fantasy here, Fifi. Let’s assume that the meaning of “microkernel” has changed, at some unspecified date with no cite attached.

    Unless that date was somewhere around 1990, is there any good reason to claim that either Mica/Emerald/DEC-whatever or NT 3.1 was, by the definition of the time, a microkernel?

    Obviously not. Clueless once again, aren’t you?

  274. oiaohm says:

    Funny; apparently DrLoser is a lazy IDIOT. The problem is that the meaning of microkernel has changed.

    Using Microsoft’s republished, out-of-date definition is wrong. Remember how X11 was always trying to do memory management in user-space. That followed the stupid, erroneous idea that it could be done.

    In fact, finding the correct source on what a microkernel is would have led DrLoser to Jochen Liedtke. Once you have Jochen Liedtke, something bad happens. It turns out microkernel bad performance is down to bad design of particular parts. Now, if Microsoft had to go hybrid for performance, this means at the NT core there is some form of major bug that has been wallpapered over.

    The existence of the NT core major bug becomes clear with Blender and other programs that should scale in performance equally with increased CPU speed or an increased number of CPUs, but don’t.

    Remember, to argue with this you will need to try to find counter-cites, DrLoser, as I am not providing cites for any of this. This is all basic computer science knowledge.

    Do you know one of the core bugs? Memory-management code is independent of what NT developers call the microkernel, causing too many calls to get stuff done. That is the problem with using an over-strict microkernel design.

    Commercially successful microkernels include L4 and QNX.

    If you want to prove and find Microsoft Windows’ internal problems, take the cite you found and compare it to proper documents on how a microkernel functions and performs.

    Come on, DrLoser, you had a chance to prove yourself brilliant, yet you just let a buffoon like me state all the facts. You could have beaten me to the punch line.

  275. DrLoser says:

    You called be a baffoon.

    No, I called you “an [ignorant] buffoon,” oiaohm. I’m waiting for a cite to contradict that observation.

    Is a “baffoon” related to a “gruffalo,” I wonder?

    Or, perhaps, a “buffoon” speaks gibberish in the deep, manly tones of a “bassoon.” Personally I see you as more of a squeaky high-range “piccolo” person, oiaohm.

    A “piccaloon,” perhaps?

  276. DrLoser says:

    You asked the question whether Windows was a microkernel. It only took a simple search to find that your trap was bull crap.

    Surprisingly, oiaohm, even you can’t keep track of your own gibberish. A simple search for “microkernel” on this thread will reveal the earliest (and possibly the silliest) reference:

    DrLoser, at the core of the Windows NT design is a microkernel.

    No it isn’t.

    Microkernels have zero extra cost on ALPHA/PRISM.

    Per definitionem, oiaohm, microkernels have extra cost on any non-quantum CPU architecture whatsoever. How do you think they pass messages? Via pink unicorns?
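
    The cost is easy to demonstrate, in fact. Here is a minimal sketch comparing direct in-process calls against round-trips to a second process over a pipe, which is roughly the price a message-passing design pays (Python 3; the call count is arbitrary):

        import time
        from multiprocessing import Pipe, Process

        def work(x):
            return x + 1

        def server(conn):
            # Answer requests from the other address space until told to stop.
            while True:
                x = conn.recv()
                if x is None:
                    break
                conn.send(work(x))

        if __name__ == "__main__":
            N = 10_000

            start = time.perf_counter()
            for i in range(N):
                work(i)                      # plain in-process call
            direct = time.perf_counter() - start

            parent, child = Pipe()
            p = Process(target=server, args=(child,))
            p.start()
            start = time.perf_counter()
            for i in range(N):
                parent.send(i)               # message out ...
                parent.recv()                # ... and reply back
            ipc = time.perf_counter() - start
            parent.send(None)
            p.join()

            print(f"direct: {direct:.4f}s, pipe round-trips: {ipc:.4f}s")

    The exact ratio varies by OS and hardware, but kernel round-trips come out orders of magnitude costlier than a local call. No pink unicorns required.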

    NT becomes a hybrid kernel to compensate for the context-switching overhead of non-ALPHA chips.

    No it didn’t.

    It’s the uniqueness of the PRISM/ALPHA design. PRISM/ALPHA chips are not Harvard or Von Neumann architecture.

    Well, who needs a cite? But it would be absolutely spiffy to learn in what precise architectural ways Prism/Alpha chips (bog-standard RISC as far as I know) differ from Harvard (either original or modified) and von Neumann architectures.

    Face it, Fifi, you’re not going to front up on this one, are you? Not even in an “architecture for Dummies” sort of way.

    First-edition NT was not a hybrid kernel design.

    Yes it was.

    What is any of this supposed to prove, anyway? And since when did you become a DEC insider, to go with your many other completely unprovable “experiences?”

    Does Ken Olsen communicate with you via microwave brain patterns? At this point it wouldn’t surprise me if you claimed he did.

  277. DrLoser says:

    You are going to learn how a true cite Nazi operates. I have been nice. Now you are going to see my true colors. In the last 7 years you have not once seen what I am truly like.

    Bit of a waste of seven years full of futile gibbering, walls of unintelligible text, gish galloping extraordinaire, and complete lack of evidence that you know anything about anything, then, wasn’t it, oiaohm?

    Oh, to have had those seven years back. You could even have completed a PhD in that time!

    Still, I’m sure we all look forward to the “real” oiaohm. But, please, no photos. I can find enough of those inside public telephone boxes in London.

  278. oiaohm says:

    Deaf Spy, so will you agree to this: I will give you a PDF link to the formal book. In exchange, you never post on this site again, as either Deaf Spy or DrLoser or any other name.

    The BOOK you need is linked from Wikipedia. I have given you every clue to find it. Maybe you are too stupid or too lazy to find it.

  279. Deaf Spy says:

    Now, all of a sudden the official documentation of Microsoft on the subject of Windows NT “contains so much garbage it’s not funny”. MS do not know their own OS, and Ohio and the Grand Sage Guild of The Free Tards know better.

    Now, Ohio, this is funny.

  280. oiaohm says:

    Please also note there is no such thing as a “modified microkernel” or a “macrokernel”.

    Any document using the terms “modified microkernel” or “macrokernel” is incorrect, out-of-date garbage. Both of these were attempts to put a term on something that could not be defined. Microsoft likes keeping old, incorrect terms alive to confuse people.

    Please find a proper cite for “hybrid kernel”. Yes, there is a book full of academically acceptable OS-type definitions.

  281. oiaohm says:

    Fifi, I am afraid you are not in a position to ask me such things. You go first with cites. Cites, which do prove your point, not cites which have nothing to do with it.
    From now on, every statement you make will be met with the question: where is its cite?

    You asked the question whether Windows was a microkernel. It only took a simple search to find that your trap was bull crap.

    You are going to learn how a true cite Nazi operates. I have been nice. Now you are going to see my true colors. In the last 7 years you have not once seen what I am truly like.

  282. oiaohm says:

    Deaf Spy, also, that link of yours contains so much garbage it’s not funny.

    The link you just quoted states there is no such thing as a successful commercial microkernel OS. Please now list 4 successful commercial microkernel OSes. There are 4.

    You called be a baffoon. I now don’t have to provide you with facts at all.

  283. oiaohm says:

    Deaf Spy, a link is not a cite. Try again.

  284. Deaf Spy says:

    Deaf Spy, before asking me any more questions on this site, please provide the cites answering this question.

    Fifi, I am afraid you are not in a position to ask me such things. You go first with cites. Cites, which do prove your point, not cites which have nothing to do with it.

    Anyway, since I am in a particularly good mood today, I will throw a bone. Not to you. But to other readers, whose reading-comprehension skills exceed those of a wild dingo:

    http://technet.microsoft.com/library/cc750820.aspx

    If you decide to venture into this fine piece of text, please focus on this:
    ” …it is important to clearly understand the difference between the terms kernel mode and microkernel…”

  285. oiaohm says:

    Deaf Spy, please note I will not be accepting Wikipedia, as that is not an academically usable source.

  286. oiaohm says:

    Deaf Spy, sorry, please give a cite for what a microkernel is.

    Anyone who looks up microkernel, even somewhere as poor as Wikipedia, will see that Vista does not meet the requirements.

    There are very strict restrictions on what can be in kernel space for something to be a microkernel.

    Also, if you look around NT documents you will see another name besides Mica: Mach. Mica was DEC’s implementation of the Mach microkernel design. Mach ran on the DEC VAX.

    See how, as of Vista, a large part of the graphics subsystem and the window manager were moved back to user-space. Does that make Vista a microkernel, oh, source of endless entertainment?
    Deaf Spy, before asking me any more questions on this site, please provide the cites answering this question. Particularly where it shows you are a complete moron who does not understand what the basic term microkernel means.

  287. Deaf Spy says:

    DrLoser, at the core of the Windows NT design is a microkernel.
    No, it wasn’t. It is much closer to micro than to monolithic, but it has always been hybrid. The “original NT kernel” (I believe you speak of 3.1) has quite a few additional services (the whole executive), and only ignoramuses can call it a pure microkernel.

    NT becomes a hybrid kernel to compensate for the context-switching overhead of non-ALPHA chips.
    It is obvious that you draw this painfully wrong conclusion from the fact that MS moved the graphics subsystem and window manager into kernel space as of NT 4.0 to improve performance.

    Here is a surprise for you, oh, misguided one. Having the graphics subsystem in kernel space does not translate into a hybrid kernel. A strong point of NT, Fifi, is that the OS is modular. MS can replace modules and change the running context of modules without causing architectural changes to the OS itself.

    See how, as of Vista, a large part of the graphics subsystem and the window manager were moved back to user-space. Does that make Vista a microkernel, oh, source of endless entertainment?
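
    That modularity point can be illustrated with a toy sketch (nothing resembling actual NT code): the caller’s interface stays fixed while the “module” runs either in-process or out-of-process:

        from concurrent.futures import ProcessPoolExecutor

        def render_glyph(ch):
            # Stand-in for a graphics-subsystem call.
            return ord(ch)

        def call_in_process(ch):
            # Same address space: a cheap direct call.
            return render_glyph(ch)

        def call_out_of_process(ch):
            # Separate process: same interface, extra IPC cost.
            with ProcessPoolExecutor(max_workers=1) as pool:
                return pool.submit(render_glyph, ch).result()

        if __name__ == "__main__":
            assert call_in_process("A") == call_out_of_process("A") == 65

    Whether the module lives in kernel space, a user-space service, or the caller’s own process then becomes a performance decision, not an architectural one.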

  288. oiaohm says:

    DrLoser, at the core of the Windows NT design is a microkernel. Microkernels have zero extra cost on ALPHA/PRISM. NT becomes a hybrid kernel to compensate for the context-switching overhead of non-ALPHA chips. It’s the uniqueness of the PRISM/ALPHA design. PRISM/ALPHA chips are not Harvard or Von Neumann architecture. First-edition NT was not a hybrid kernel design.

    Every CPU architecture design has its own flaws.

    There is just not enough production to go around anymore. Markets cannot grow forever; there will always be limits. There will always be saturation points. There will always be shortfalls.
    My quote

    “Production” is Supply. “Growing Markets” is Demand. “Saturation points” is Supply.
    Your complete idiot misassignment: Saturation points are not Supply.
    http://www.investopedia.com/terms/m/marketsaturation.asp
    Saturation is Saturation.

    Also, Growing Markets is not Demand as such, because Demand can be positive or negative. Demand is Market. Demand is also not Markets.

    Ignorant stupidity is what DrLozer did here: 1 out of 3 correct.

    “BTW, this was not an usual procedure at DEC. Many employees left the company with intellectual property from a cancelled project under their arm, with the understanding that if they made it a commercial success then DEC would come back knocking on the door for for royalties.”

    This is a comment on the state of affairs at the time.

    DrLoser, did I once say the process was odd? Back then it was very common for a project to start at one company and complete software development at another. So NT started as the Mica OS at DEC and was completed as Windows NT at Microsoft.

    You think the typo of doubled words should be a sign of a problem. It’s not.

    So, Cutler walked down the street to Microsoft and offered them Mica which became NT. Later DEC sued MS and, in an out of court settlement, got royalties for the filched technology. Part of the deal included targeting NT (back) onto the Alpha platform.
    This is by a different author to the BTW bit. You can tell this by the quotation marks around the BTW bit. I guess I should expect DrLoser not to understand the fine points. It’s this bit that is the confirmable bit of information. That quote comes from a Microsoft-published book. NT is a direct descendant of Mica, and that is fact.

  289. DrLoser says:

    Deaf Spy, if you follow the NT design, the CPU it was designed for is the Alpha/Prism.

    No, it wasn’t. Google harder, young pathetic Jedi wannabe!

  290. DrLoser says:

    Ah, Fifi, Fifi, Fifi. You’ve completely misunderstood your cite, you cretin. Quoted by you:

    “BTW, this was not an usual procedure at DEC. Many employees left the company with intellectual property from a cancelled project under their arm, with the understanding that if they made it a commercial success then DEC would come back knocking on the door for for royalties.”

    One would assume that the “for for” sorta gives it away.

    But, no, that “not an usual?” As is completely obvious to anybody who is not an ignorant buffoon, it actually refers to “not an unusual procedure.”

    Everybody else on this site will be able to draw the obvious conclusion.

    You, oiaohm? I beg leave to doubt it.

  291. DrLoser says:

    The following doesn’t qualify as a lie, per se, but it certainly qualifies as utterly ignorant stupidity:

    There is just not enough production to go around anymore. Markets cannot grow forever; there will always be limits. There will always be saturation points. There will always be shortfalls.

    “Production” is Supply. “Growing Markets” is Demand. “Saturation points” is Supply.

    Many thanks for avoiding your typical Wall Of Gibberish, oiaohm, but frankly you’ve merely dropped down to dismal inconsequential gibberish.

    Just out of interest, what on earth do you do for a living?

  292. DrLoser says:

    Sorry, I did a post without a single lie in it. Everything in it is the truth. Like a moron, you have attacked it, Deaf Spy.

    Congratulations, Fifi! It’s been a long, hard, eight-year road of nothing but blood, sweat, tears and toil, but you’ve finally managed it!

    An entire post without a single lie in it! I’m proud of you!

    Nevertheless, by my count, there were at least three specific misrepresentations. Some way to go, I suspect.

  293. oiaohm says:

    Deaf Spy, how many times in a row are you going to be wrong?
    “David Cutler butted heads with Intel chip designers from day one after joining Microsoft. There are many reports of it.”
    In fact I have already given you one example of this.
    http://www3.sympatico.ca/n.rieck/docs/Windows-NT_is_VMS_re-implemented.html
    “Given your interest in VMS you might find this amusing. In the early 1990’s we visited Microsoft to try to ensure that their new OS “Windows NT” would be available on IA32. We met with Dave Cutler, and he was adamant that IA32 was doomed and would we please get lost so he could target Alpha and then whatever 64-bit architecture was certain to replace IA32 by Intel. It was not a polite disagreement; that guy (Cutler) HATED IA32 and wasn’t reluctant to transfer his displeasure to IA32’s representatives (us). What an ugly business meeting. Smart guy, though.”

    Deaf Spy, time to shut up; you have been absolutely wrong. Almost nothing in your recent posts has been correct at all.

    Sorry, I did a post without a single lie in it. Everything in it is the truth. Like a moron, you have attacked it, Deaf Spy.

  294. Deaf Spy says:

    At no point do your sources prove your fantasy:
    “David Cutler butted heads with Intel chip designers from day one after joining Microsoft. There are many reports of it.”

    As for the rest, poor sources, Fifi. Do some more reading. Hint: Show Stopper! Gosh, even some more Wikipedia will do.

    P.S. Anything about the IBM SAN Volume controller you once claimed to have worked with? Piss off.

  295. oiaohm says:

    Deaf Spy, David Cutler’s book on NT in fact mentions Prism and Mica.

    http://www3.sympatico.ca/n.rieck/docs/dave_cutler-prism-mica-emerald-etc.html

    Deaf Spy, it’s worse than you think. Microsoft paid DEC for Mica, which happens to be at the core of NT.

    So, Cutler walked down the street to Microsoft and offered them Mica which became NT. Later DEC sued MS and, in an out of court settlement, got royalties for the filched technology. Part of the deal included targeting NT (back) onto the Alpha platform. “BTW, this was not an usual procedure at DEC. Many employees left the company with intellectual property from a cancelled project under their arm, with the understanding that if they made it a commercial success then DEC would come back knocking on the door for for royalties.”

    So the reality is that NT is the Mica OS. So it starts before Cutler worked for Microsoft.

  296. oiaohm says:

    Deaf Spy, really, it’s all written on Wikipedia, with references: what David Cutler worked on. Yes, the web page about David Cutler.

    Basically you don’t know the topic at all, even to a Wikipedia level.

  297. Deaf Spy says:

    Ohio, again you cannot prove, with a source or link, any of the fantasies you tell, of course.

  298. oiaohm says:

    Deaf Spy, if you follow the NT design, the CPU it was designed for is the Alpha/Prism.

    For the David Cutler software team from Digital that formed the core team that made NT, the last CPU and OS they worked on before leaving Digital were Prism and Mica. The Prism CPU design is what became Alpha. The Mica OS is where the commonality between VMS and NT comes from.

    Something interesting: the fact that Prism was canned at Digital in 1988, only to be brought back to life a few years later, is why the team left for Microsoft. If Prism had straight up become Alpha and not sat on a shelf for a few years, Microsoft would never have got the development team to make NT.

    The Digital hardware design guys did not leave with David Cutler’s software team.

    The Intel i860 and MIPS R3000… Yes, the magical shuffling of CPU types is all about looking for a CPU with the features of Prism/Alpha, but by the time Alpha was released it was pretty much too late: everyone had released all their applications for x86.

    David Cutler butted heads with Intel chip designers from day one after joining Microsoft. There are many reports of it.

    Deaf Spy, basically you have your timelines wrong again.

    The big problem is that the Mica OS and the Prism CPU were designed as a joint team operation. In other words, CPU designers and OS designers worked as one to make sure the CPU had exactly the right features. The number of times this happens is insanely rare. IBM has a CPU that is designed for a particular OS. The majority of CPUs are designed to be generic.

  299. Deaf Spy says:

    Microsoft hasn’t done any “interesting” (to Intel) technology in many years

    I don’t think they have ever. NT was originally intended to run on the Intel i860, then on MIPS, then on Alpha. Actually, support for the 386 came at a much later stage.

  300. ram says:

    “Wintel”? Microsoft’s and Intel’s businesses are not bound together, nor do they even address the same market. Microsoft is probably “cactus” (as they say in Australia), but Intel gains business with every smartphone (including ARM ones) sold. As presented at the Intel partners conference, for every 20 smartphones sold, Intel gets to sell one (high-margin) server — after all, that smartphone content has to come from somewhere! The odds are overwhelming that the content gets served by a server using mostly Intel chips (particularly for I/O) running Linux.

    Having met several of Intel’s VPs and many of its senior managers, I can tell you they are neither arrogant (like Microsoft’s and AMD’s) nor stupid. Intel’s executives do pay attention to their customers, which are primarily hardware and software development companies/organisations. While Intel doesn’t ignore Microsoft (yet), they don’t grovel to them, as Microsoft is now only a tiny customer for Intel. Perhaps even more significantly, Microsoft hasn’t done any “interesting” (to Intel) technology in many years.

  301. oiaohm says:

    DrLoser, the only party with free silicon production capacity is Intel.

    Intel does make ARM smartphone chips. One of the brands they go under is Wind River. Wintel could collapse completely and Intel would remain. Intel does not need Microsoft.

    DrLoser, there is absolutely no point looking at the Intel share price. Intel is a truly diversified production company. The complete x86 line of Intel could be no more and Intel would still remain very healthy.

    DrZealot, if you knew how to do research you would have worked out that the only parties affected by a Wintel collapse are really Microsoft and protection-software vendors.

  302. DrLoser says:

    Something to cheer you up, Robert. Did you realise that, in the time between your penning this piece and now, Intel stock dropped a whole dollar from $36 to $35?

    Your words evidently carry considerable weight in the world of International Finance.

    However, I’d work on making the message stick … because the stock is now back around $36. Oh well.

    Care to define the “smart-phone era?” (My phrase, not yours. It’s something you monitor, though, since you seem to believe that it will spell doom for Intel.)

    I will tentatively define it as “the last two years, and ongoing.” Oh look, again: Intel stock rose from $22 to $36.

    For your sake, Robert, I hope that the Little Woman (or someone with more financial acumen and less inbuilt bias than yourself) is looking after your pension pot.

  303. oiaohm says:

    Robert Pogson “cheaper/smaller/less wasteful alternatives.”

    The key words here are “less wasteful”. The problem with limited production is that wasteful production runs could get less access to it.

    ARM tablets and ARM smartphones take pretty much the exact same SoC chips. This is something your graphic did not show. If 85% of the market can buy what you are selling, you have a far better chance of profit.

  304. oiaohm wrote, “This is turning into a big problem.”

    This is a bigger problem for Wintel than for FLOSS on ARM. The latter uses far fewer resources to do anything and is taking over the world, while M$ is not growing because there are cheaper/smaller/less wasteful alternatives. When the Digital Divide is bridged, people cross over. M$’s tanks can’t even get onto the bridge. They are too wide.

  305. oiaohm says:

    http://www.itworld.com/article/2865341/amd-nvidia-reportedly-get-tripped-up-on-process-shrinks.html

    This is turning into a big problem. There is just not enough production to go around anymore. Markets cannot grow forever; there will always be limits. There will always be saturation points. There will always be shortfalls.
