My Favourite Distro is Number One

All this chatter about Ubuntu and Linux Mint distracts people from the fact that the old and respected Debian GNU/Linux still plays a major role on web servers. It’s more popular globally than Red Hat or SUSE, though most of that popularity lies in Europe. I recommend Debian GNU/Linux for desktops, notebooks and servers because it’s easy to install, easy to maintain and easy to manage. It works.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology. Bookmark the permalink.

38 Responses to My Favourite Distro is Number One

  1. oiaohm says:

    By the way, on a fully hardened OS a JIT is forbidden.

    The reason: the executable code section and the data section of a binary are not allowed to overlap, and this is enforced by the OS kernel.

    The executable code section is also mapped into memory read-only, and all data sections are marked no-execute.

    A lot of attacks don’t work on fully hardened systems.

    Java and .NET JIT engines are not operational on a fully hardened system at all. PHP-to-executable, Java-to-executable and other language-to-executable compilers have to be used instead. Shock horror, right? Even running things like bash scripts raises questions on hardened systems.

    Systemd on Linux is a step toward bringing the basic designs of fully secure systems to the desktop.

  2. oiaohm says:

    Dr Loser
    “You don’t need connection tracking, whatever that is and however your fertile brain can imagine it being implemented. State is carried around by cookies and other methods.”
    Cookies are invalid for secure site designs. Cookie replay attacks and other such methods make the so-called other means of state protection suspect.

    SSL sockets are what you are depending on at a minimum. I encrypt, therefore I am. This is something outside the HTTP stream. Next is source location.

    Connection tracking is where Windows is weak.

    Connection tracking information has a standard way to move between Linux and Unix systems. So yes, even if the stream has been decoded from SSL, you still want to know which SSL client sent it in order for the operation to be performed. On a fully secure site this information can be queried at the database and the filesystem behind the site.

    At the Windows filesystem level behind IIS, can you query where the request came from so as to allow or reject it? Remember, using SELinux or Unix trusted extensions, I can.

    A lot of what you call security, Dr Loser, would cease to operate once you get past the HTTP server. So the HTTP server has basically reduced the layers of security an attacker has to beat: beat the HTTP server and you are basically in. On properly secured systems, beat the HTTP server and you still have an uphill battle ahead of you.

    So yes, secure site state is cookieless, other than as a backup to identify the user. In fact, for an encrypted site, operating cookieless saves you resources.

    So yes, I can have an SELinux or trusted Unix rule saying any non-local user cannot write to this section of the drive or database. That is, if you have not had a connection pool blur that information away.

    The process is called onion wrapping. You have many layers. If any one layer fails, the attacker still has to beat the next one.

    So a PHP script has a flaw. The next layer is either the filesystem or the database. In a secure design, both the filesystem and the database can block the attack.

    When you assess most defeated websites, you find they are only one layer of security thick.
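A minimal sketch of that layered idea, with made-up layer names and rules (not a real security API): each check is independent, so a request that slips past one layer must still pass every remaining one.

```python
# Sketch of "onion wrapping": each layer independently validates a request,
# so a flaw in one layer (e.g. the PHP script) still leaves the attacker
# facing the next layer. All names and rules here are illustrative.

def web_layer(request):
    # Layer 1: the HTTP server only forwards well-formed requests.
    return request.get("path", "").startswith("/app/")

def app_layer(request):
    # Layer 2: the application checks that the user is authenticated.
    return request.get("user") is not None

def db_layer(request):
    # Layer 3: the database enforces that only admins who arrived via
    # the approved path (e.g. VPN) may write.
    if request.get("action") == "write":
        return request.get("user") == "admin" and request.get("via") == "vpn"
    return True

def handle(request):
    # Any single failing layer blocks the request.
    return all(layer(request) for layer in (web_layer, app_layer, db_layer))

# A write from the open internet is blocked even with admin credentials:
print(handle({"path": "/app/x", "user": "admin", "via": "internet", "action": "write"}))  # False
print(handle({"path": "/app/x", "user": "admin", "via": "vpn", "action": "write"}))       # True
```

The point of the sketch is only that the decision is a conjunction of independent checks, so defeating the outermost layer alone gains nothing.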

    You are not getting where millions of sockets come in.

    I can present a different set of sockets based on source location. Linux supports millions of open sockets at once if you have the need. So the port 443 you see could be a highly locked-down HTTP server on a read-only filesystem and database; the port 443 I see could be a VPN link to OpenVPN, or any other trick.

    Secure systems sometimes have stealth as part of the design.

    So yes, each user could be given their own unique set of displayed sockets based on their authorization and where they are approaching the server from.

    Dr Loser, mudflap does not just enable a library. GCC alters the final code produced in the binary as well. mudflap is part of GCC: a library plus compiler instrumentation.

    “Do you actually understand what a C pointer does? Do you even have a clue as to how much control the compiler has over this stuff?”

    I know exactly what a C pointer does; I have coded in asm. There are extras that can be put on top of mudflap. A hardened system addresses its weaknesses with memory-controller and compiler assistance.

    The std::vector bug is fixed http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19319 and has been for a few years.

    The DSO one is funny. If you use the threaded version of mudflap, it works. Dynamic loading of libraries goes threaded somewhere in the process of loading the library. It has to be this way because single-threaded programs don’t need the overhead.

    The Frank Eigler and Chris Scott email issue is also fixed. Call me a mean bugger: I gave you an out-of-date link, where none of the mentioned errors exist any more. Notice that page’s last major update of information was 2008-01-10. Basically, it’s three-year-old information.

    Yes, never presume you have anything if the document is out of date. Hardened Linux distributions use items like mudflap by default. So does AIX, by the way.

    Basically, do your own homework, Dr Loser; don’t expect me to do your homework for you.

  3. Dr Loser says:

    But then:

    (1) I know nothing, and am worth less than spit.
    (2) You are an omniscient genius.
    (3) As all Sicilians know, it is inconceivable that the C++ STL was designed to get around as many buffer overruns as possible. Consequently, it is not only necessary for GNU to keep chucking stuff out in shoddy C with tens of thousands of compiler warnings, but it is also necessary to come up with something with a cute name like mudflap, which nobody uses and which doesn’t really work anyway.

    Good job there, FLOSS. Even when I’m not watching out for it, it still goes SPLAT! on the windscreen of life.

  4. Dr Loser says:

    @oiaohm:

    From the gnu link:

    “mudflap produces copious spurious violations for most C++ programs (e.g. any program using std::vector): see [gccbug:19319].”

    Well, that’s all right then.

    In fifteen years of using C++, I’ve never really seen the point of std::vector.

  5. Dr Loser says:

    @oiaohm:

    “Solutions for native code exist if people use them.”

    If, oiaohm, if.

    For some reason they choose to use Java or .Net or Python or whatever instead.

    I can’t really imagine why, and I am an embedded C or C++ developer at heart.

    BTW your “solution” was a library, not a gcc construct. There’s no particular reason not to use that in MSVC, should you wish to turn your C program into an indeterminately-secure version of bounds-checked Pascal.

  6. Dr Loser says:

    @oiaohm:

    So then. Let’s see.

    You don’t understand C/C++ pointers (and on the side you don’t understand how .NET works).

    You don’t understand how IP network stacks on any variety of OS work. You specifically do not understand the difference between stateless packets and SSL packets.

    You don’t understand how to follow up a link and read the information in it.

    You don’t understand how to interpret the legalese in it and make a reasonable judgement based on that.

    I’m not entirely convinced that you understand Old English, despite your protestations. This would be an easy accusation to refute. Tell us an anecdote in Old English!

    Basically, oiaohm, it isn’t obvious that you understand anything.

    Here’s a simple one for you:

    What’s the main difference between an LR parser and an LL(k) parser? And what does the ‘k’ mean? And which one is easier to implement in a recursive descent parser, and why?

  7. Dr Loser says:

    @oiaohm:

    No charge, but allow me to be the teacher here.

    “Yes, general HTTP spreads more simply. With HTTPS, in Apache and nginx state information can be transferred server to server in a cluster, so load spreading is not a major issue. But you need connection tracking to join the right state up.”

    The connections are stateless. The session, definitionally, is not. That is where the problem lies.

    You don’t need connection tracking, whatever that is and however your fertile brain can imagine it being implemented. State is carried around by cookies and other methods.

    “Stateless HTTP exists for insecure sites.”

    Nope, it exists for insecure connections. It is not necessary to presuppose an “insecure site.”

    “There is no need to track state information on it because there is no security requirement.”

    Correct. This is how basic Web requests work. You carry the state around with you in each packet.
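One common way to carry that state is a signed session cookie; a minimal sketch using Python’s standard hmac module (the secret and cookie format are illustrative):

```python
# Carrying session state in each request via a signed cookie, so the
# server itself can stay stateless. The secret and cookie layout here
# are illustrative, not a real framework's scheme.
import hmac
import hashlib

SECRET = b"server-side-secret"  # illustrative; keep real secrets out of code

def make_cookie(state: str) -> str:
    # Append an HMAC so the client cannot tamper with the state.
    sig = hmac.new(SECRET, state.encode(), hashlib.sha256).hexdigest()
    return f"{state}|{sig}"

def read_cookie(cookie: str):
    # Recompute the HMAC and reject the cookie if it does not match.
    state, _, sig = cookie.rpartition("|")
    expected = hmac.new(SECRET, state.encode(), hashlib.sha256).hexdigest()
    return state if hmac.compare_digest(sig, expected) else None

cookie = make_cookie("user=42;cart=3")
print(read_cookie(cookie))                       # user=42;cart=3
print(read_cookie("user=1;cart=3|" + "0" * 64))  # None: tampered cookie rejected
```

Note the signature only protects integrity: a stolen cookie can still be replayed, which is the weakness raised elsewhere in this thread.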

    “If you are using stateless HTTP in a section of a site where you need security, you should be hung, drawn and quartered. Hopefully if that happens it will send a message not to do these stupid things.”

    Even more correct. Did I, or did I not, point out that the standard methodology is to transfer to HTTPS at this point? I think I did. I believe I did so whilst trashing your absurd contention that Web servers like to leave a thousand or so sockets open on each and every interface for no reason at all. But, ignoring that demolition of your absurd propositions, I’m quite prepared to second your rather silly little nostrum.

    “This shows you are out of your league. You don’t know secure sites, Dr Loser.”

    Does it?

    I wonder what qualifications you possess in this area, apart from unintentional obfuscation, that most beloved of attributes when applying for a job in a secure environment?

  8. Dr Loser says:

    @oiaohm:

    A pure lie, is it? Well, let’s see you quote an example of anything or anybody who has ever turned -fmudflap on.

    “It is enabled by passing -fmudflap to the compiler. For front-ends that support it (C and very simple C++ programs), it instruments all risky pointer/array dereferencing operations, some standard library string/heap functions, and some other associated constructs with range/validity tests. Modules so instrumented should be immune to buffer overflows, invalid heap use, and some other classes of C/C++ programming errors. The instrumentation relies on a separate runtime library (libmudflap), which will be linked into a program if -fmudflap -lmudflap is given at link time.”

    And let’s ignore “very simple (I assume C99 convertible) C++” and “some standard library” and “some other associated constructs.”

    Do you actually understand what a C pointer does? Do you even have a clue as to how much control the compiler has over this stuff?

    Much like applying a macro to Lisp in order to turn it into an entirely different language, I’m sure you can do this stuff to C/C++. Point one is that nobody does it. Point two is that (absent evidence) PHP 5.x doesn’t do it. And point three is that you are still left with the residue after “very simple and some.”

    Get real.

  9. oiaohm says:

    Hmm, the smart HTTP insert does not like FTP in caps. Funny.

  10. oiaohm says:

    Dr Loser, in fact the compiler makes a larger difference than you think.

    “then you are going to have to face the inconvenient fact that overrun protection is impossible.”
    This is a pure lie.
    http://gcc.gnu.org/wiki/Mudflap_Pointer_Debugging

    With “-viol-abort” (“violations cause a call to abort()”) enabled, a lot of PHP issues are taken out. OK, the person loses their connection, but the overrun fails.

    GCC has run-time monitoring of pointers and buffers in C/C++.

    MSVC does not have this option or an equal at all, Dr Loser; this does make a difference, and a fairly major one. Of course mudflap is not free: it does cost some processor time. Security vs. a little extra CPU usage? I will take security.

    “If you are talking about a .NET jitter … which it sounds like you are …”

    That you think I am talking about a JIT system shows how in the dark about tech you really are. On a truly hardened system, native code is not really a soft target. Buffer overflow flaws in native code on a hardened system don’t work, other than causing the application to be terminated. Executing code via a buffer overflow does not happen on hardened systems. Yes, the big .NET selling point is bogus on properly configured systems with security-capable compilers. Yes, it’s a myth that using .NET or Java is required to stop buffer overflow attacks. Solutions for native code exist if people use them.

    Before the protection system in GCC was called mudflap, its name was ProPolice, from IBM, dating from 1998. So every buffer overflow attack from 1998 on should not have happened if everyone had been using secure methods.

    Yes, .NET’s unsafe mode allows pointer operations.
    http://msdn.microsoft.com/en-us/library/ct597kb0%28v=vs.71%29.aspx

    Dr Loser
    “If thousands of sockets on a single interface were that cheap, why bother? Everything would be TCP/IP in that case.”
    http://FTP…... A lot of the older protocols are TCP/IP; there is no issue with thousands of sockets on Unix, BSD and Linux based systems. Even millions.

    “Which would be a farcical waste of resources for a large, clustered, site. Stateless HTTP is used precisely to spread the load over several interfaces and/or machines: it’s easier to handle that way.”

    Yes, general HTTP spreads more simply. With HTTPS, in Apache and nginx state information can be transferred server to server in a cluster, so load spreading is not a major issue. But you need connection tracking to join the right state up.

    Stateless HTTP exists for insecure sites. There is no need to track state information on it because there is no security requirement. If you are using stateless HTTP in a section of a site where you need security, you should be hung, drawn and quartered. Hopefully if that happens it will send a message not to do these stupid things.

    This shows you are out of your league. You don’t know secure sites, Dr Loser.

    Pooling and queuing both exist to deal with the fact that you don’t have enough resources to go around. One maintains state (queued); one does not (pooled).

    “vertical scaling (which Solaris does well and Linux does horribly badly)” Which in Windows is non-existent.

    Linux’s horrid management of vertical scaling might be offset by the fact that the application parts you are depending on run two to three times as fast under Linux as under Solaris.

    Linux’s vertical-scaling issues were reduced when the Big Kernel Lock was removed, and by the ongoing process of lock reduction. Before that, Linux basically did not scale vertically. Time has moved on. cgroups is an improvement, and the difference between Solaris and Linux in vertical scaling shrinks every release.

    A lot of Solaris guys are holding on to the past, to what Linux was, not what Linux is today.

  11. Dr Loser says:

    @oiaohm:

    “Turns out the MS compiler kind of lacks the ability to build native binary code with overrun protection.”

    Which compiler? What lack?

    If you are talking about any C or C++ compiler in existence, on any OS or platform, then you are going to have to face the inconvenient fact that overrun protection is impossible.

    If you are talking about a .NET jitter … which it sounds like you are …

    “Just like deploying .net with pointers enabled unsafemode on a website.”

    I’ve never heard of “unsafemode” before; is it an option on gcc, like -funroll-loops? But one thing I believe I should point out to you: yes, if you choose to circumvent the .NET runtime by using P/Invoke or unmanaged code in general, then of course you are sacrificing a lot of protection. Because, once again, you are in the land of C/C++.

    Look, GNU/Linux (and for once the gratuitous addition of GNU has some meaning here) has absolutely no way of protecting you from the various security holes that PHP has exhibited through the years. The libraries eventually compile down to C/C++. It makes absolutely no difference whatsoever whether you use gcc or msc.

  12. Dr Loser says:

    @oiaohm:

    I’m sorry, but I don’t buy any of that gibberish.

    First: to HTTPS (or SSL in general, for that matter). Yes, it’s TCP/IP; but you may have noticed that the general use scenario slips in and out of HTTPS on a need-to-secure basis. If thousands of sockets on a single interface were that cheap, why bother? Everything would be TCP/IP in that case.

    Which would be a farcical waste of resources for a large, clustered, site. Stateless HTTP is used precisely to spread the load over several interfaces and/or machines: it’s easier to handle that way.

    I still don’t understand what you mean by Queued, or why it somehow differs from the capability of a non-*nix machine. It’s hardly likely to be a silver bullet, though. If you’ve got capacity issues, the OS isn’t going to help you very much: you need a proper architecture for your data centre, preferably featuring vertical scaling (which Solaris does well and Linux does horribly badly).

    As for your nutzoid suggestion that database connection pooling somehow enables “man in the middle” attacks and cookie stealing and so on … Wow. You are endlessly entertaining, aren’t you? Will the day ever come when you run out of things to make up?

  13. oldman says:

    “Oldman, a lot of cases of load balancers used to patch up Windows weaknesses are Linux-based or BSD-based tech.”

    Irrelevant. The problem is mitigated/avoided. The windows application where needed continues in use.

    End of Story.

  14. oiaohm says:

    I think I will give some perspective to those numbers by ntfacejerk.
    By Netcraft, the numbers of servers:
    All sites
    Apache 378,267,399
    MS 84,288,985
    nginx 56,087,776
    Active sites
    Apache 105,684,049
    MS 22,142,114
    nginx 22,221,514
    Defacements
    Apache 1,095,982
    MS IIS6 195,154 + IIS7 10,433 + IIS5 6,109 + IIS7.5 4,002, giving a total of 215,698
    nginx 40,640
    Now divide the number breached by the number in existence to give the odds.
    All sites
    Apache 0.002897374
    IIS 0.00255903
    nginx 0.000724579
    Active sites
    Apache 0.010370363
    IIS 0.009741527
    nginx 0.001829
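The arithmetic above can be checked directly (figures as quoted from Netcraft and Zone-H in this comment):

```python
# Recomputing the defacement odds: defacements divided by the number of
# sites running each server, using the figures given in the comment.
all_sites = {"Apache": 378_267_399, "IIS": 84_288_985, "nginx": 56_087_776}
active    = {"Apache": 105_684_049, "IIS": 22_142_114, "nginx": 22_221_514}
defaced   = {"Apache": 1_095_982,
             "IIS": 195_154 + 10_433 + 6_109 + 4_002,   # IIS6 + IIS7 + IIS5 + IIS7.5
             "nginx": 40_640}

for name in all_sites:
    print(f"{name}: all sites {defaced[name] / all_sites[name]:.6f}, "
          f"active sites {defaced[name] / active[name]:.6f}")
```

Running this reproduces the ratios listed above, including the IIS defacement total of 215,698.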

    Interesting, right? And that is even though MS has a low number of installs, and Apache has just had a shockingly bad run due to some really good flaws that gave defacers an easy time.

    If .NET were so magically good, we should be seeing more of a difference between Apache and MS.

    I would be expecting something like the nginx numbers. One catch: nginx does not run .NET stuff.

    Allowing for statistical error, your odds of being hit are no different running IIS vs Apache in a year when Apache is having a bad year. Really, the odds when IIS is having a bad year have been worse.

    “With .net there is buffer overrun protection since the code is managed while PHP gets a few overrun exploits every year. PHP is the programming equivalent of duct tape and should have been replaced long ago.”
    Again this is something that should not happen with PHP. Deployment incompetence at work. http://www.hardened-php.net/suhosin.127.html
    Has been a deployment recommendation for years.
    https://www.owasp.org/index.php/PHP_Security_for_Deployers

    Duct tape? No. HipHop protects against this stupidity as well. Also, to bring bad news: most Linux-distribution builds of php5 have compiler-added overrun protection. Yes, most of the overrun faults every year turn out to be people running php5 on Windows.

    Turns out the MS compiler kind of lacks the ability to build native binary code with overrun protection.

    Just like deploying .net with pointers enabled unsafemode on a website.

    “CVE-2010-3301, that was fixed in 2007 and was mysteriously reintroduced in 2008, in a large pile of kernel versions x86_64.”
    This is not a mysterious item. Funnily enough, it’s a goof. The first bug was never fixed in the 32-bit x86 arch files, since it was never reported as affecting the 32-bit kernel. So when the 64-bit and 32-bit versions of the kernel merged their files, the gremlin came back. No one had reported EAX not being zeroed on the 32-bit version of the kernel. You do have to wonder how many other bugs like this exist, in all operating systems, that will most likely be found by fluke.

  15. oiaohm says:

    oldman
    “The same can be said of windows and its administrators. The question is, why should we accept the excuses from either group?”

    If you know me, I will kick both their asses for setting up poorly designed systems.

    Some of it is MS-taught crap, like incorrectly using connection pooling and so reducing possible site security. Yes, this annoys me: poor training only makes matters worse.

    MS and Oracle golden-bullet sales pitches don’t help things either.

    I did not say this, but read-only data is perfectly fine pooled, as long as it is public data. You do not pool private and protected data; that adds a security risk. Insecure sites really can use connection pooling as much as they like, but they will not be stuff I will be producing.

    You are building a secure system. You might have to spend a little more on hardware to handle the load with a queued style than with a pooled database. But at least the leak rate of data is lower, since user information is checkable at the database, and where the request came from is checkable at the database.

    An admin attempting to do admin work by a non-approved path does not work on secure systems. This makes the attacker’s job many times harder. They have to know how the administrator administers the site. Is it by VPN, is it by SSH, or is it by something else strange, like an approval SMS?

    Yes, even password guessing is reduced. Why? The attacker is guessing passwords on what appears to be the admin interface, but has not gone by the approved path. So even if they get the password they can still do nothing, other than raise an alarm that the password has been breached when they try to change something they are not permitted to.

    On correctly set-up systems, most attacks should not work. It shows the sad state of skill of most IT officers around the world.

    Oldman, I don’t do weak crap systems.

    Really, that admins doing a poor job of setting up security should have their asses kicked until they cannot sit down is most likely something we can both agree on, Oldman.

  16. oiaohm says:

    Dr Loser, queued is dead straightforward. You use it all the time. Linux, BSD and Unix have conntracking.

    Remember, HTTP might be stateless, but on secure sites you use HTTPS, which is stateful. I gave examples of 200+ unique users. I am not talking about an insecure HTTP site here, where a person can take a cookie and impersonate you, or do a lot of other man-in-the-middle attacks.

    Conntracking in Linux, Unix and BSD enables many interesting things, like the TCP socket limit not applying how you would think. From outside, each unique IP address can use the full TCP socket limit on its own. So the number of TCP sockets Linux can support from outside is huge. People forget the IP header has to be at the start of each packet; don’t lose that information, and your socket limit goes up.
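The per-connection identity point can be illustrated with a small localhost sketch (port and connection count are arbitrary): a TCP connection is identified by the full source/destination address pair, so one listening port carries many simultaneous connections.

```python
# Each TCP connection is identified by the (source IP, source port,
# destination IP, destination port) tuple, so a single listening port can
# carry many simultaneous connections -- the limit is not one connection
# per port. An arbitrary free local port stands in for 443 here.
import socket

server = socket.socket()
server.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
server.listen()
port = server.getsockname()[1]

clients = [socket.create_connection(("127.0.0.1", port)) for _ in range(3)]
accepted = [server.accept()[0] for _ in range(3)]

# All three accepted connections share the same local (server) port,
# but each has a distinct peer (client) address tuple.
local_ports = [conn.getsockname()[1] for conn in accepted]
peers = [conn.getpeername() for conn in accepted]
print(set(local_ports) == {port}, len(set(peers)))  # True 3

for s in clients + accepted + [server]:
    s.close()
```

The same mechanism is what connection tracking records: the peer tuple is what distinguishes one client from another on the shared port.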

    It is very easy on these systems to exceed the amount of processing power you have, due to the number of connections you can choose to receive.

    The same goes for not losing the process ID internally, again expanding the number of connections that can be made to a single socket. The higher process ID limit of Linux comes in handy.

    Queued is normally done by what are called load balancers. They pause your connection attempt until space comes around so that you can connect.

    Pooled and queued are two different things.

    Dr Loser
    “As a general rule of thumb, if you have more than a thousand sockets open on a single Ethernet interface”
    Yep, sites using HTTPS would kinda disagree with you.

    A thousand would be quite a light load in some places.

    Exactly how do you think pooling the front side of an HTTPS site would work? Answer: it doesn’t. Yet people get the stupid idea of using a pool behind an HTTPS site everywhere, and wonder why they have problems with the security of the database. Same problem: the database can no longer see who is who.

    The database in pooled setups more often than not sees only one user. So an administrator changing settings and some random bugger from somewhere on the Internet look like exactly the same person to the database. So the random bugger finds an exploit, and he can do anything the administrator can. Site vandalized.

    This is why a minor PHP or .NET error causes a complete downright site failure. Web designers are creating sites that are basically doing the equivalent of running as root or administrator to do everything. Then people wonder why so many sites get defaced.

    For those of us who set up secure sites, 90%+ of reported issues with PHP stuff don’t work when we test them on our systems. Yes, we have the SQL injection flaw. But because you did not log in to Apache and get tagged with an SELinux tag, the configuration data and a lot of other data about the site are read-only. So you cannot modify them no matter what SQL attack you send, or upload files where you should not, either. The ability to alter records is also limited.

    Basically, the fact that the database is running with the same user logged in no longer gives the attacker the magic means to do everything, due to extra information like the SELinux role being checkable. A person who comes in by the general Internet gets tagged as general Internet. A person who comes in by VPN gets tagged as entered by VPN. So a person from the general Internet cannot tweak settings, ever. Unless we do stupid things like use connection pooling, which splits that information away from the request.

    Connection pooling disables your means to do what I do effectively. Pooling has the habit of masking who is really making the request to the database, so making exploits more functional.

    Queued can more often than not be done firewall-side as part of a load-balancing solution. I.e., this server can only take 200 connections: delay any more, don’t fail them.

    Behind an HTTPS site you will be using queued to the database, so that your security is consistent from where the person enters the site to where they access the data.

    Queued and pooled are the two techniques you have to use. On a secure site, queued is the most critical.
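A toy sketch of the contrast (illustrative names, no real database): pooling collapses every request onto one shared identity, while queuing delays requests but preserves who issued each one.

```python
# Pooled vs queued, as described above. With pooling, all requests reach
# the database under one shared pool identity; with queuing, each request
# keeps its own user attached and simply waits for a free slot.
from queue import Queue

def pooled(requests):
    # Pooling: identity is lost -- the DB sees only the pool's user.
    return [("pool_user", r["query"]) for r in requests]

def queued(requests, slots=2):
    # Queuing: requests are delayed, not merged. At most `slots` are
    # handled per batch, and each keeps its real user.
    q = Queue()
    for r in requests:
        q.put(r)
    served = []
    while not q.empty():
        for _ in range(min(slots, q.qsize())):
            r = q.get()
            served.append((r["user"], r["query"]))
    return served

reqs = [{"user": "admin", "query": "UPDATE settings"},
        {"user": "anon", "query": "SELECT page"}]
print(pooled(reqs))  # both requests look identical to the database
print(queued(reqs))  # the database can still tell admin from anon
```

This is only the identity-tracking half of the argument; real queuing would also apply back-pressure to clients rather than buffering in memory.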

  17. oldman says:

    “Yes Linux offers administrators a lot of defense against lots of these attacks. Issue is administrators do not use them.”

    The same can be said of windows and its administrators. The question is, why should we accept the excuses from either group?

  18. oiaohm says:

    NT JERKFACE
    http://www.zone-h.org/news/id/4737
    True, but of course I can depend on you not to read it.

    MS still accounts for about 20 percent of all breaches. When you wake up to the fact that MS only has 14 percent market share, that is running a little high.

    “PHP is the security problem in the web server world. Do you realize how many times wordpress has been hacked?”
    Simple question: is WordPress SELinux-integrated or trusted-extensions-integrated? Answer: nope, it is not.

    Is WordPress designed for security from the ground up? Nope. Could I code something as bad as WordPress in .NET? Yep. Basically, just because someone can code something bad in a language does not make the language bad.

    Programming language does not prevent coders from being poor coders.

    By the way, netfacejerk, I would like to know who was the person running that Novell Netware box web-facing that had not been patched for years.

    By the way, on the osCommerce CMS fault they mention: an SELinux-configured web server and backend killed that one dead.

    Yes Linux offers administrators a lot of defense against lots of these attacks. Issue is administrators do not use them.

  19. oiaohm says:

    Dr Loser, you cannot read right. You know that link you gave, this one:
    http://www.ibm.com/developerworks/library/os-php-unicode/index.html
    Read “Programming considerations”. Before that, it shows all the screwed-up ways of doing it. Those screwed-up ways are PHP 4 ways.

    Then IBM tells you exactly how to do it correctly and cleanly. Debian php5 in fact does support Unicode if you turn it on. mbstring is part of the main php5 package on Debian.

    Red Hat and some related distributions are the pricks who split the main php5 program into parts, so you could be required to manually install mbstring as an extra.

    “php.ini, assign default_charset = UTF-8” This setting of default_charset came into PHP in version 5, which has been out for a long time. It sets the charset of the script files. Yes, you can set it to UTF-16, UTF-32 and many other strange things.
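For reference, the settings being described look like this in php.ini (PHP 5.x; the mbstring line is an illustrative companion setting, not something the comment mandates):

```ini
; php.ini -- Unicode-related settings for PHP 5.x (illustrative values)
default_charset = "UTF-8"
mbstring.internal_encoding = "UTF-8"
```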

    Mind you, don’t think about sending UTF-16 and UTF-32 to web browsers. Does that get interesting or what.

    Mind you, Windows still has an ASCII-only string-processing option as well; so does .NET. So does every programming language.

    That is a 2007 document. The recommended methods in that IBM document work on PHP 5.0.0, yes, the one that was released in 2004.

    The planned difference with PHP 6.x has turned into a disaster. They tried to make default_charset auto-detecting. Result: that doesn’t work. There are too many ways that UTF-8 can get mixed up with UTF-16 and UTF-32, and vice versa.

    There is zero rush, because PHP 5.x.x supports Unicode quite decently if it is configured correctly.

    The reason stuff gets coded horridly in php5 is not a php5 issue.

    A key note from that IBM document, after the UTF-8 solution, which is very neat:
    “In this form, however, the source code itself is not seven- or even eight-bit “clean,” and many editors, configuration management systems, and other development tools, are likely to mangle it. One of the consequences is the mystery mentioned above: Programs that appear to work or fail capriciously.”

    One of the things that eats UTF-8 files badly is MS Visual Studio. Note: “seven- to eight-bit clean” refers to chars only 7 to 8 bits in length, no Unicode. PHP’s ASCII default is there to be safe from editing-program stupidity. Handy if a script will be going across many server types.

    This editor/tool damage to your Unicode can sneak up on you when doing a .NET or Java application as well.

    There is a PHP-to-Java compiler, by the way.

    Basically, you can do a very nice and clean UTF-8, UTF-16 or UTF-32 program in php5, but you do have to be looking over your shoulder. One of the disasters is the fact that most Unix/POSIX-based systems use UTF-8 as the default file format, while Windows is UTF-16. Visual Studio can turn nicely evil: open a file as UTF-8, and when you save, it’s now UTF-16. So now things kinda don’t work right.
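The save-once-and-it’s-broken failure mode is easy to demonstrate (Python here purely as a neutral way to show the bytes):

```python
# One save in the wrong encoding is enough: the same text has different
# byte representations in UTF-8 and UTF-16, and bytes written as UTF-16
# no longer read back correctly as UTF-8.
text = "naïve café"

utf8_bytes = text.encode("utf-8")
utf16_bytes = text.encode("utf-16")  # what an editor might silently save

print(utf8_bytes == utf16_bytes)     # False: different bytes on disk

# A UTF-8 reader handed the UTF-16 file errors out or sees garbage:
try:
    round_trip = utf16_bytes.decode("utf-8")
except UnicodeDecodeError:
    round_trip = utf16_bytes.decode("utf-8", errors="replace")

print(round_trip == text)            # False: the text did not survive
```

The UTF-16 byte-order mark alone (0xFF 0xFE) is already invalid as a UTF-8 start byte, which is why the mismatch shows up immediately rather than corrupting quietly.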

    I hate Unicode; it does not matter if it’s Java, .NET, PHP, bash, Perl, whatever. As soon as you are using it with editors and tools that don’t support it properly, it will ruin your day with strange bugs from nowhere.

    You only have to open and save a file once with a program that does not support it properly to have a strange bug turn up down the track.

    Think about it: if I insisted that you convert your nice Java or .NET source code into a 7-bit encoding, how much would be left of your nice little Unicode?

    IBM always gives you how to deal with the worst possible event.

  20. Dr Loser says:

    @oiaohm:

    Bit by bit, taking a manageable amount of your unfounded assertions:

    “You cannot effectively run a queued system on windows.”

    You’d have to define “a queued system” for that. I feel confident in asserting that you can.

    “Windows networking stack gives up the ghost due to too many connections queued not the database.”

    Ignoring the fact that HTTP is stateless (so we're not talking about long-lived connections here), this is simply untrue. On any Windows Server since (I think) 2003, you can use overlapped I/O if you really have a problem. It really has nothing to do with the network stack.

    As a general rule of thumb, if you have more than a thousand sockets open on a single Ethernet interface (even 100Gb, although that would most likely be local), then you are doing something hideously wrong. Any sort of throughput on a system with that network load is going to be utterly horrendous.

    “The limit of allowed open connections by windows is too low to tolerate Queued being used in the most effective ways.”

    OK, let’s see the link. And, once again, this mysteriously capitalised Queued. What on earth do you mean?

    “Google is not truly restarting there servers. They kill and restart the services. The server stays running the complete time.”

    We had to kill the village to save it for Democracy. Indeed. Pray tell, in a stateless environment with clustered servers, why would anybody care to differentiate between killing a service and killing a server?

    “This is also to fight possible native code secuirty breaches or leaks.”

    Oh dear.

    First of all, there really shouldn’t be significant leaks in a well-designed garbage collection environment: certainly not enough to exhaust 32GB of RAM every two hours.

    And secondly, I don’t feel particularly secure when I read that a large company is totally ignoring actual security precautions and relying on restarting their services every two hours or so.

    Do you know how long it would take the average malware attack to penetrate your system, if allowed? Do you know how long it would take, in the extreme case, to install a root-kit? (Which would rather obviate the advantage of “restarting the service.”)

    About a hundredth of a second, that’s how long.

    In fact, oiaohm, do you know anything at all?

  21. Dr Loser says:

    @oiaohm:

    PHP has had Unicode support for years, has it? Try this.

    I hate to disappoint you, mate, but Unicode support does not simply consist of kindly allowing programmers to type Unicode points in by hand.

    Plus which, insofar as it has Unicode support at all, this is PHP6 specific. PHP6 is now widely considered a bit of a botch, and they’ve gone back to PHP5 and are trying to retrofit things in. And Robert can probably help you with this one, but I strongly suspect that the default Debian version of PHP is an early 5, and most definitely does not have Unicode support.

    I await your normal Wall’O’Text in response.

  22. NT JERKFACE says:

    "Linux with high security design websites makes .net look like a god darn joke. Windows 2008 more secure my ass."

    And yet somehow fewer Windows 2008 websites were defaced in 2010 than FreeBSD ones.
    http://www.zone-h.org/news/id/4737

    You are really going to have to drop the propaganda about WS2008/.NET and security. PHP is the security problem in the web-server world. Do you realize how many times WordPress has been hacked? Linux does not offer automagical security protection for websites. With .NET there is buffer-overrun protection, since the code is managed, while PHP gets a few overrun exploits every year. PHP is the programming equivalent of duct tape and should have been replaced long ago.
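    The "managed code" point above can be illustrated in any bounds-checked language (Python here, as a stand-in for the CLR's behaviour): an out-of-range write is caught as an exception rather than silently overrunning the buffer, which is the class of bug behind most overrun exploits. The buffer size and indices are made up for the sketch.

```python
buffer = bytearray(8)

def write_at(buf, index, value):
    """Bounds are checked on every access in a managed runtime, so an
    overrun becomes a catchable error instead of memory corruption."""
    try:
        buf[index] = value
        return "ok"
    except IndexError:
        return "rejected"

print(write_at(buffer, 3, 0x41))    # ok -- inside the buffer
print(write_at(buffer, 64, 0x41))   # rejected -- overrun caught, not exploited
```

    In unmanaged C, the second write would scribble over adjacent memory, which is precisely what classic overrun exploits rely on.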

  23. oiaohm says:

    Phenom don’t you get it. What else should I expect when dealing with a novice who truly does know nothing.

    The OS kernel does not know JIT generated stuff is JIT generated and that it does not have to send that to swap space in high load. So to the OS kernel everything the JIT has generated is important user data it cannot dispose of. So it has to send it to swap space when the system gets stressed so making the system even more stressed so more likely to come unstuck by the connections that are still coming.

    Connections don’t wait for you to get house in order.

    JIT is not a native code compiler that the OS kernel understands in a effective way. Windows and Linux are not different here.

    Native code in a native format the OS kernel understands is different on a stressed system. The system don’t need to send that native executable code to swap I can just drop it from memory for applications it will not be needing soon so solving the stress problem at least for a little bit.

    Best part about doing this is freeing the program executable code can give the space to make a buffer to send user data to swap space or compress memory or some other method to attempt to save OS from doom.
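    The kernel-level distinction being argued here can be sketched in Python with mmap. This is a rough illustration, not a claim about any specific kernel's page-replacement policy: a read-only file-backed mapping stands in for a native binary's code pages (clean, droppable, re-readable from disk), while an anonymous writable mapping stands in for a JIT's code buffer (no backing file, so eviction means writing it to swap). The byte values are placeholders, not real machine code.

```python
import mmap
import tempfile

PAGE = mmap.PAGESIZE

# File-backed, read-only mapping: the shape of a native binary's text segment.
# Clean pages like these can be dropped under pressure and re-read from disk.
backing = tempfile.NamedTemporaryFile()
backing.write(b"\x90" * PAGE)                     # placeholder "machine code"
backing.flush()
code_pages = mmap.mmap(backing.fileno(), PAGE, access=mmap.ACCESS_READ)

# Anonymous, writable mapping: the shape of what a JIT allocates at runtime.
# There is no file behind these pages, so the only way the kernel can evict
# them is to write them out to swap first.
jit_pages = mmap.mmap(-1, PAGE)
jit_pages.write(b"\xc3" * 16)                     # placeholder emitted code

print(code_pages[:4])   # served straight from the file-backed mapping
print(jit_pages[:4])    # exists only in RAM (or swap)
```

    On Linux the same distinction is visible in /proc/&lt;pid&gt;/maps: file-backed executable mappings show the binary's path, while JIT regions appear as anonymous mappings.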

    Yes, a JIT does good profile-guided and hardware-specific optimization. I have never disputed that.

    What I have disputed is JIT effectiveness under system stress, when systems are being pushed to the limit and starved of RAM and CPU time. The last thing you want in that situation is to be doing operations that would not be required at all if the binary had been built a different way.

    Sending JIT-produced executable code to swap is one of those operations you really don't want to be doing when the system is right on the edge of breaking. It becomes the straw that breaks the camel's back.

    Is there any commonly required optimization a JIT does that you cannot do to a native binary the OS kernel recognises as a native binary? The answer is none. Can a native binary use a common runtime? Yep.

    Sorry, Phenom, no matter what you do with .NET it is still second-rate next to a native binary the OS kernel knows about when the system is stressed to breaking point.

    Yes, .NET vs a native binary the OS knows: on a system under stress the native binary will take more before the system packs it in, just because the OS kernel can do its job.

    Yes, there was hope when MS said they would release an OS that was .NET at the core; then you might have got something with a chance of using .NET effectively.

    A compiler that produces code into RAM from bytecode is crap, Phenom, with current OS designs.

    There are reasons why HipHop and other native-conversion tools have been developed for some of the world's largest sites. When a server is under maximum stress, native binaries the OS kernel knows give the system the best chance of coming out the other side still running.

    Really, .NET is not a cure-all. It's a bringer of doom. Of course .NET coders don't want to accept this as fact.

    More sites should be built in PHP than .NET, because PHP has the option of being turned into a native binary the OS kernel understands. So PHP can tolerate more server connections and load before failing than a .NET site doing the same thing.

    Yes, that defect of keeping the OS kernel in the dark about what it can and cannot throw away becomes deadly critical in maximum-load events.

    Funnily enough, on a highly stressed system a bytecode interpreter can outperform a JIT compiler, particularly if it was designed to memory-map the bytecode and is tight on memory usage. The kernel understands memory-mapped files as another thing it can just drop from memory under stress, apart from any alterations not yet written back.

    The fact that a bytecode interpreter can beat a JIT hands down in the right conditions shocks most people. But the bytecode interpreter is cleanly beaten by a native-code binary in a high-stress event.

    So far in this story you have not once been correct, Phenom. Are you ready to give up yet and accept you are out of your depth? Your skill is nowhere near mine when it comes to running large systems and the issues involved.

    Oldman could possibly give me a run for my money; he has some skill with large systems. You, Phenom, not a chance. Repeatedly, Phenom, you only play on what I call toy-class systems, because only people playing on toy-class systems would say what you do.

  24. Phenom says:

    Ohio, you never cease to amaze me with how much utter nonsense you can pile up. After the pooling fiasco you now turn to ".NET is lacking a really good to-native-code compiler."

    Never mind the JIT compiler in the CLR. Never mind the CPU-specific optimizations it does: the JIT differentiates between Pentium 3, Pentium 4 and Core Duo, and also between Intel and AMD. Never mind NGEN, even though it targets only the P4 for certain reasons. Never mind that it has quite solid knowledge of many BCL class methods and knows how to optimize them efficiently in native code. Example: the JIT compiles Math.Sin to fsin on x86.

    Now I await your explanation of how fsin is actually the least efficient way to compute the sine function on x86.

  25. oiaohm says:

    oldman: in a lot of cases the load balancers used to patch up Windows' weaknesses are Linux-based or BSD-based tech.

    Misuse of pooling tech by people like Phenom can end up abusing the cluster/load-balancing system and/or lead to security failings that would not otherwise happen. Their knowledge of database and clustering tech is so limited that they are in fact dangerous staff to have around.

    Hopefully Phenom will take the important lesson: there is no such thing as "always". Tech must only be used when the right conditions for it are met.

    If the number of connections into the Windows box will never exceed the number of connections it can put out, there is no reason to pool the database, and the load balancing in the cluster can do its job.

    I find a lot of .NET programs attempting to do the database cluster's job on the Windows-box end, causing problems with load not being handed out correctly.

    Even so, working around the limitation comes at excess cost. Sometimes the cost is higher than converting the application from Windows to Linux or another POSIX-based OS without the limitations, because Windows' limitations at times don't allow you to use the maximum power the hardware has on offer.

    That excess cost is not a cost I write down as a cost of doing business. I truly do write it down as an excess cost, so it can be seen whether the cost is high enough to justify the effort of building a replacement.

    Yes, partitioning databases is a good method for expanding the number of connections, up to a point. The cap on allowed connections due to design issues in Windows does not magically go away.

    Something anyone working on large systems has to keep in the back of their mind: the limits of the OSes they are using.

    That Windows is the weakest option, and also one of the more expensive ones, makes it really annoying.

    Yes, what oldman stated is why it's rare to find a large business that is Windows-only. It is possible to find large businesses whose server side is Linux-only.

    Native code has in fact been found to scale better, Phenom, mostly because of something you have overlooked: OS kernels understand how to correctly manage native code. The JITs for Java and .NET really leave the kernel in the dark, not letting it know what it can simply drop from memory.

    If the kernel could understand the cost of dropping a section of JIT-converted code and regenerating it, so it could choose under high load between copying it to swap space and junking it, the difference between native code and JIT code would be reduced.

    Yes, the kernel can decide which sections of a native binary it can re-fetch from disk and so avoid copying to swap, saving a disk operation. Those are expensive and do add up.

    GCJ for converting Java to native binaries on Linux can, under particular conditions, improve performance. Yes, it can also kick you in the teeth if you don't profile-guide-optimize it. All tech has to be used correctly.

    PHP and Java compilers to native code are both aimed at solving the same problem. .NET is lacking a really good to-native-code compiler.

  26. oldman says:

    “You cannot effectively run a queued system on windows. ”

    And yet there are applications that need to run on Windows. So we scale the front ends wide using load balancers, and we cluster and partition the databases.

    This also gives the added benefit of high availability.

  27. oiaohm says:

    Phenom, you also have to be too small to be using clustered databases. Pooling can result in heavy user operations not being shared around the cluster evenly as well.

    Yes, there are times where using queuing is mandatory for reasons other than security.

  28. oiaohm says:

    Phenom, try that with a PostgreSQL database instead of a MySQL one.

    Phenom:
    "you always must use connection pooling with databases"
    and
    "Connection pooling is the only way out to save resources"
    These are both complete lies. So you are from the school of "I build crappy, insecure websites", Phenom.

    You don't know the tech and it just showed. Another complete myth from Phenom, and more proof he doesn't know the tech.

    There are three solutions to the problem. One is to pool; this is fine for insecure setups.

    Second is to queue, which is what you use on secure setups.

    Third is both. This can be as specific as one particular user pooling while the rest run through the queue.

    pgpool-II allows you to do queued or pooled or both, selectable per site. There are others for PostgreSQL and Oracle that specialise in queuing alone, more secure-site stuff.

    The difference between pooling and queuing is that with queuing, login information is not recycled between connections.

    Pooling attempts to reuse connections with the same login information. This is where pooling brings a security nightmare: username and password checking is bypassed by the pooling process.

    In fact, in some cases pooling can be the very cause of a website failing. Swap to queuing and the problem disappears.

    What goes wrong when pooling causes a failure? Say your database supports 200 connections and you have 300 users on the site, each with their own database login. The first 200 users hold open connections, and the next 100 are shot to hell, because every time one of the first 200 makes a new request the pool recycles their prior connection. The other 100 are screwed because no connection to the database ever comes free.

    Queued, this issue does not occur, particularly when a limit on the number of SQL operations is set, i.e. each connection can do 5 SQL operations, then is disconnected to make way for the next in the queue to do its maximum of 5 operations.

    In its advanced forms, queuing gets task-management-style control over CPU access, sharing the processing power of the database fairly, or unfairly, as instructed.

    Queuing has to be used as a minimum. Pooling is an optional thing that can cause you major hell. Linux can queue a few million connections.
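    The 200-connection/300-user scenario above can be sketched as a toy simulation. This is an illustration of the argument as stated, not of any particular pooler; the connection cap and user names are made up, scaled down from the 200/300 figures.

```python
MAX_CONNECTIONS = 4   # hypothetical database cap, scaled down from 200

def pooled(requests):
    """Pool keyed by login: a connection is reused only for the same
    credentials and is never released while its owner keeps coming back."""
    pool = set()
    served = set()
    for user in requests:
        if user in pool:
            served.add(user)                 # recycle own connection
        elif len(pool) < MAX_CONNECTIONS:
            pool.add(user)                   # claim a free slot
            served.add(user)
        # else: every slot is pinned to someone else's login -- starved
    return served

def queued(requests):
    """Queue: each request connects, does its capped batch of work,
    then disconnects, so a slot always comes free for the next in line."""
    served = set()
    for user in requests:
        served.add(user)
    return served

users = [f"user{i}" for i in range(6)]   # 6 logins, only 4 connection slots
requests = users + users                 # everyone comes back for more work

print(len(pooled(requests)))   # 4 -- the late users never get a connection
print(len(queued(requests)))   # 6 -- everyone is eventually served
```

    The simulation only models the starvation effect described above; real poolers such as pgpool-II have more nuanced reuse rules.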

    You cannot effectively run a queued system on Windows. The Windows networking stack gives up the ghost from too many queued connections, not the database. The limit on open connections allowed by Windows is too low to tolerate queuing being used in the most effective ways.

    Google is not truly restarting their servers. They kill and restart the services; the server stays running the whole time. This is also to fight possible native-code security breaches or leaks.

    Java usage is quite minimal in the back end of Google.

    All the features of Java and .NET also exist in PHP if you use PHP right, Phenom.

    In fact, when it comes down to it, PHP has the most solutions designed to remove the need to reinvent the wheel.

  29. Phenom says:

    Pogs, you can be dreadfully incorrect about native code scaling better.

    Scalability is something you achieve with careful planning, careful allocation of resources and suitable patterns. Pooling is one of them, and I mean pooling of everything, including memory heaps. No feature of native code can help you here. Instead, the fact that you need to reinvent the wheel over and over again, compared to Java and .NET, can only make things worse.

    Dr Loser is right that Google use Java in their back end; I can confirm that (of course, no official info). I can also tell you that Google restart their servers every couple of hours on a schedule to fight leaks. And this is Java, mind you. God help them if that were pure native code.

  30. Phenom says:

    “On windows 2008 due to limited number of connections to database you have to use connection pooling.”

    Ohio, once again you demonstrate your benighted technical mind.

    For readers with modest technical expertise: you always must use connection pooling with databases, regardless of the database. DB connection pooling has nothing to do with the DB or the OS. It has to do with the number of concurrent clients, and web apps are especially prone to having way too many. Connection pooling is the only way out to save resources and licence costs. Keep your connections open in PHP with MySQL under Linux, and the whole system will go down within an hour.

  31. oiaohm says:

    Dr Loser, also consider web load. On Windows 2008, due to the limited number of connections to the database, you have to use connection pooling.

    So you cannot maintain a direct link between the connection to the web server and permission processing inside the database.

    The advantage on a Linux system is that a rejection in the database, due to an invalid request by a web user, can carry the highly useful information of exactly who that was. IP address, login and time of access can all be stored, because they attempted to access something in the database they should not have, all based on the connection information to the web server.

    Windows limits being way lower than Linux limits means you have to compromise security to make the thing work. Yes, the same limit issues are what force stuff onto UNIX as well.

    If you personally regard Windows 2008 as more secure, you basically know crap.

  32. oiaohm says:

    Phenom, that is a completely bogus myth about PHP Unicode support.

    What is the internet: UTF-8 or ISO 8859-1?

    Default PHP is ISO 8859-1 out of the box, yes, but it is switchable to a default of UTF-8. ISO 8859-1 is compatible with UTF-8 up to a point, by the way.

    http://developer.loftdigital.com/blog/php-utf-8-cheatsheet — yep, enable international mode.

    Read point 9. Things like checksums of passwords and the like can take great offence at being processed as UTF when they are not.

    Yes PHP has full Unicode support.
    http://php.net/manual/en/mbstring.supported-encodings.php
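    The point-9 checksum trap is straightforward to demonstrate. A minimal Python sketch (the password strings are hypothetical; "latin-1" is Python's name for ISO 8859-1):

```python
import hashlib

plain = "secret"      # pure ASCII
accented = "sécret"   # one character outside ASCII

# ASCII-only text encodes to identical bytes under both encodings --
# the sense in which ISO 8859-1 is "compatible with UTF-8 to a point".
assert plain.encode("latin-1") == plain.encode("utf-8")

# One accented character and the byte sequences diverge, so any checksum
# computed over those bytes diverges too.
latin_sum = hashlib.md5(accented.encode("latin-1")).hexdigest()
utf8_sum = hashlib.md5(accented.encode("utf-8")).hexdigest()
print(latin_sum == utf8_sum)   # False -- same password, different hash
```

    A password hashed while the site was ISO 8859-1 will therefore no longer match after a blind switch to UTF-8, unless the stored bytes are migrated.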

    PHP has had full Unicode support for years, Phenom.

    PHP has a set of compilers, including Facebook's HipHop. HipHop goes from PHP to C++ to native code. It reduced Facebook's server load by about 50 percent and can increase throughput by a factor of 2 to 6.

    Basically, Facebook and many others pull off using native code without issues. The HipHop interpreter is a more interesting beast. Yet somehow the Bing developers fell short, even though every other major player pulls it off without issue.

    I know why: they would have been using Microsoft's not-to-standard implementation of C++, with its strange and evil bugs.

    Also you don’t know a Linux system Dr Loser.

    Apache and Nginx selinux object tagging of connections. That tagged information is accessible by php and the request from php to postgresql or orcale database also contain the same selinux policy information. Of course most time you don’t need to check the selinux from php. Instead you have it checked in database or filesystem if user should not should not be allowed. Role based secuirty enforced on all data-storage and access requests stored in a single system. So making it really simple to pull someones rights.

    You pull there access rights to something you pull it by all access methods the user might use with there valid logins.

    Same tagging also works with solarias secuirty framework and other trusted unix systems.

    Linux with high secuirty design websites makes .net look like a god darn joke. Windows 2008 more secure my ass. There is not proper integration between system secuirty and web secuirty under Windows. You can very simply create secuirty holes with IIS in Windows 2008.

    Dr Loser you don’t know Linux so you don’t know how to set it up secuirty so you believe MS Marketing crap that they are secure. I can tell you now anyone who knows both systems inside and out will tell you Linux is the more secure choice. More of evil at times to set up but is more secure. You would only go Windows 2008 if you are going lazy and don’t particularly care about secuirty.

    “Google back-end stuff is in Java” I can tell you now that is not the case DR Loser. Google dominates are PHP and Perl both have good options for going to native code as well as “go” that replaced what they had as C++.

    Go is way more suitable for web design that C++. Yes google ran into the issues you did with C++ Dr Loser they addressed it by creating a new language.

    Phenom and Dr Loser when you get really serous like goggle you build your own webserver, own filesystem, own data processing language “Sawzall”.

    Phenom and Dr Loser love trying to spin myths.

    PHP is quite a decent language for web development if you know how to use it right. This includes not limiting yourself to the default engine.

  33. Dr Loser wrote, “For reference, btw, the “native code” thing is a non-starter.”

    We’ve been over this before. Native code is faster and is useful where that matters, for particularly onerous transactions or to scale better/use less hardware.

    I was not the one who claimed PHP was crap…

  34. Dr Loser says:

    @Robert:

    What sort of lunatic would convert a site like this (I’m not denigrating it; I’m just pointing out that its computing needs are modest) to C/Oracle/Server 2008?

    For reference, btw, the "native code" thing is a non-starter. M$ Bing is transitioning from C++ to C#, and I'm pretty sure that most Google back-end stuff is in Java. Web servers are bottlenecked by comms and (quite often) disk access; not very often by CPU.

    I’m an outlier in that I have programmed Web sites in both C and C++, and I wish at this point to tell anybody who will listen not to do this. After that, on the language front, it comes down to a choice of sane and clean, neither of which fits PHP. Perl and Python should be good enough for anybody. And for the love of God, do not touch Drupal.

    I suspect you are using Oracle as a red herring.

    And when it comes down to it, you can get a virtual slice of a Linux machine at about $10 a month or a virtual slice of a Windows machine at about $15 a month (I haven’t checked recently). The difference is not very important here. I would personally regard Windows Server 2008 as more secure, but I’m not going to proselytise it when you can get a perfectly good LNMP stack for $10 a month.

    Nginx, btw. Apache is for outdated fools.

    So, the usual strawman arguments. Nobody takes the alternative path you suggest, Robert. Ergo you are comparing your beloved OS against a phantom.

    Oh, and Koz? Enough with CERN and stock exchanges, already. I suspect that CERN uses rather less Linux than you think, but actually I don’t care. You know why? Because I don’t have a stock exchange in the attic and I don’t have a toroidal supercollider in my back garden.

    Your mileage may vary.

  35. Kozmcrae says:

    “hobbyists”

    Spelled correctly. Ah yes, that word the Cult of Microsoft loves to use. The supercomputer hobbyists. The stock-exchange hobbyists. The CERN computer hobbyists. Those dang hobbyists.

    You know how Microsoft loves to take charge of the language we use and change the meaning of words. "Hobbyist" now means dedicated, mission-critical installations.

  36. Phenom says:

    Exactly, Pogs. PHP is great for writing CMSes (though not highly sophisticated ones), blogs, forums, galleries and other stuff where security, scalability and performance are not top priorities.

    The language itself is a paragon of software-engineering crap, and the lack of Unicode support in the 21st century is an outright crime against humanity.

    Once you need to get serious, you need to resort to Java, .NET or even native modules.

  37. Phenom wrote, “We all know that PHP is a joke, it is the toy for the non-pros and hobbists. In other words, Linux as a web server is used to host blogs and simple websites.”

    As much as I prefer a native language for performance, PHP is widely used as a rapid-development language for web applications. It's not a joke. Any site with users and databases is not simple.

    This site costs a few dollars per year for domain registration and a few dollars per month for a virtual server. It runs on LAMP. What would it cost to convert it to and run it on Oracle, that other OS and C? Probably thousands of times more, and for little or no benefit.

  38. Phenom says:

    An interesting piece of info from your source:
    An overwhelming majority of all Linux servers use PHP as server-side language, and Debian is no exception: 97.5% of the websites served from Debian are written in PHP. Around 30% of those use one of the common PHP-based content management systems.

    We all know that PHP is a joke, it is the toy for the non-pros and hobbists. In other words, Linux as a web server is used to host blogs and simple websites.

Leave a Reply