Competition Returns For Desktop OS And Business – Chromebook v Wintel

“Google saw Chrome rise to take the number one spot in market share” from January through mid-July, says Stephen Baker, NPD’s vice president of industry analysis.
 
See Why would Dell sell a business Chromebook that competes with Office and Windows 10?
Chuckle… M$’s monopoly is definitely dying in the USA in 2015. Not only are most OEMs shipping some GNU/Linux, many retailers displaying GNU/Linux, and Android/Linux everywhere, but now, in the USA, M$’s home turf, businesses are buying and selling GNU/Linux in the form of Chromebooks. To remove any doubt, NPD reports that Chromebooks were 50% of business notebook unit sales this summer. Oh my… Google’s cloud is competing with M$’s cloud, apps and all.
This is looking good for */Linux. School is out now, but in a few weeks the kids in the USA will be back online with their Chromebooks and GNU/Linux desktops, salesmen at many businesses will be accessing their web applications via Chromebooks, and M$’s share of the desktop OS market will tank, almost certainly reaching new lows, even in the year of “10”, the last of M$’s desktop operating systems.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in Linux in Education, technology. Bookmark the permalink.

114 Responses to Competition Returns For Desktop OS And Business – Chromebook v Wintel

  1. oiaohm says:

    –2. Have you ever set up a MySQL cluster? If yes, then tell us the number of replicas, the number of nodes, and the role of each node. If not, just shut #!$$#& up.–
    ***Yep, put MySQL clusters behind MaxScale and the partition and Node Group values become simple. Yes, partition and Node Group both equal 0.***

    Did you not read this, Deaf Spy? There is a big reason why you don’t use many nodes per MySQL Cluster: its global locks screw performance to death. You use MaxScale to join multiple individual MySQL clusters into one beast.

    Replicas per node/cluster normally start at 3 and go up to whatever number suits the workload. Basically, with many nodes per MySQL Cluster you are already stuffed.

    Yes, setting up MaxScale in front of MySQL Cluster is different from setting up a normal MySQL Cluster. It performs better and is more failure resistant.

    –1. If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?–
    A CRM application for whom? Do they have existing uniquely issued customer numbers they wish to keep using?

    You say I don’t need the BPM. The problem is I need the BPM to show whether I am mandated to use natural keys or not. Does this CRM have to send raw tables somewhere like the Australian board of education (where you must use the Australia-wide issued student number/teacher number as part of your primary key) or to credit-processing bodies? Yes, there are mandated key designs you must use. Yes, a natural key built from staff member and time can be mandated.

    Is it always your choice whether you use natural or surrogate primary keys? No, it is not. Stupid legal requirements can get in the way. Real-world experience with real-world legal requirements dropped on top of you screws up all that theory.

    Basically your question does not tell me who I am designing the CRM for or the legal requirements I am dealing with. You don’t go out and design a CRM from scratch unless it is a custom order. Products like SugarCRM exist, and NIH is bad for security. You want the BPM first, to decide whether an existing CRM can do the job as well.

    What are the audit rules? If I have to embed the staff member and time with every new customer entry, that makes quite a good natural key.
    So it is possible for a CRM to run 100 percent on natural keys, and that may be exactly what is required to meet legal requirements.

    Deaf Spy, I know the question you are asking: it expects the answer “surrogate primary keys, because you generate them so you can ensure uniqueness”. The problem here is that those writing laws don’t give a F what we think is good database design. If they mandate that the primary identifier on records has to be natural key X, that is what it has to be. If the client mandates that you use natural primary key X, then that is what it has to be.

    I will use natural or surrogate primary keys based on the BPM of the CRM, and the BPM is based on the customers and the legal requirements of the CRM they need.

    The only reason I would normally be making a CRM in the first place is that there is some oddball requirement, like using third-party unique numbers mandated by some legal requirement such as a law or contract.

    Deaf Spy, what you are asking is basically an academic garbage question that does not match up with real-world requirements. Maybe it is an Australian thing to have primary-key values mandated on you by law or contract.

  2. Deaf Spy says:

    Questions pile up, Fifi, one after another:

    1. If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?
    Note: you do not need the BPM to answer this. The question contains enough information. Provided, of course, one has ever designed and supported relational databases that end up in production.

    2. Have you ever set up a MySQL cluster? If yes, then tell us the number of replicas, the number of nodes, and the role of each node. If not, just shut #!$$#& up.

    Answer these, or admit you are a fraud.

  3. oiaohm says:

    –If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?–
    Deaf Spy, I have given you a valid answer to the question you are asking. Give me the BPM for the CRM and then I will give a more exact answer. Put-up time: if you cannot put up a valid design BPM for a CRM, you admit you’re a fraud and never post here again. You are meant to write the BPM when you get the client’s requirements, by the way.

    Deaf Spy
    –In order to use MaxScale, oh Simple Bushy One, you need an existing cluster. A configured, operational cluster. —
    No, moron, you don’t need a cluster in all cases.
    From the link I provided, which you apparently cannot read:
    **MariaDB Master-Slave Replication** This is not a cluster. You can in fact use MySQL master-slave replication with MaxScale as well. **Oracle MySQL Server Replication** Does not have to mean a cluster either, and is also a master-slave configuration.
    **Schema Sharding Router: Route to shards defined by database schema.**
    Do you know what this is? It is where you break a single database into many individual databases. Horizontal scaling does not have to mean a cluster.

    Basically, learn to read before talking garbage, Deaf Spy. There was a key hint on the page that a cluster was not required.

    Yes, making a MySQL cluster is a pain in the ass. But to scale horizontally you don’t have to cluster, if you can live with horrible efficiency at times.

    –Yes, MySQL is quite simple to scale horizontally if you know what you are doing. Whether the horizontal scaling will be well optimized is another problem, but those optimizations are very much the same with Oracle DB, PostgreSQL and MySQL.–

    Optimizing a horizontally scaled database solution means creating clusters for sections of the database.

    Basically, on the application side of the proxy it looks like one database, but on the other side of the proxy there are many databases. Some of those databases may be just master-slave pairs, some might be clusters. In fact, behind MaxScale it is possible to run a mix of MySQL, MariaDB and other MySQL relatives, yet to the application it appears to be one database.
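
    As a rough illustration of that point, here is a minimal Python sketch. The host, port, credentials and table are hypothetical, and PyMySQL is just one driver that speaks the MySQL protocol; the application talks to the MaxScale listener exactly as it would to a single MySQL server.

        import pymysql  # third-party driver: pip install pymysql

        # Hypothetical MaxScale read/write-split listener; point this at
        # whatever listener your MaxScale instance actually exposes.
        conn = pymysql.connect(host="maxscale.example", port=4006,
                               user="app", password="secret", database="crm")
        try:
            with conn.cursor() as cur:
                # Ordinary SQL; MaxScale decides which backend (master,
                # slave, shard or cluster node) actually serves it.
                cur.execute("SELECT id, name FROM customers WHERE id = %s", (42,))
                print(cur.fetchone())
        finally:
            conn.close()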

    MySQL clustering is such a pain in the ass that MariaDB Galera Cluster used to be a commercial add-on product for MySQL.

    Of course it sucks to be a Windows user, as the Galera Cluster product has been Linux-only. http://galeracluster.com/downloads/ Yes, it got a few versions behind.

    Basically, if you try to make one cluster do everything you can end up in a very miserable place very quickly. In most cases you need to break your database into operational segments so that you don’t have traffic overload inside the cluster.

    MySQL Cluster’s Nodes, Node Groups, Replicas and Partitions are a completely screwball way to do it.
    https://mariadb.com/kb/en/mariadb/getting-started-with-mariadb-galera-cluster/
    Read this and cry, Deaf Spy. One simple page of instructions and you have a MySQL-compatible cluster up and running.

    Please note MySQL does not come by default with most Linux distributions any more; the default is MariaDB. If you want to torture yourself with MySQL Cluster design, go ahead. The main reason for my preference for MariaDB is to avoid having anything to do with MySQL Cluster, as it has far too many options to get anywhere near correct.

    The big problem with MySQL’s default cluster design is that it attempts to do everything as a single database instead of sharding into multiple independent databases to limit failure damage. Yes, to make MySQL Cluster stable and performant you end up having to put MaxScale on top anyhow.

    Yep, put MySQL clusters behind MaxScale and the partition and Node Group values become simple. Yes, partition and Node Group both equal 0.

    Setting up MySQL clusters for use with MaxScale is simple. Setting up MySQL clusters without MaxScale is a super pain in the ass with nasty failure paths.

    Basically, there is an easy way to do things and a downright hard one. Guess which one you chose, Deaf Spy.

  4. Deaf Spy says:

    MySQL and MariaDB use the proxy MaxScale, which is specially made to make them scale horizontally.

    Thank you for demonstrating the fraud that you are, Fifi. In order to use MaxScale, oh Simple Bushy One, you need an existing cluster. A configured, operational cluster. Don’t you believe me? It is written in your very document, little one. Perhaps it is easy to set up a MySQL cluster. But you never know until you’ve tried it. And you haven’t, right, Fifi? Or would you dare tell us the number of replicas, number of nodes, and the role of each node? Does it hurt, Fifi? Does it remind you of your IBM SAN fiasco?

    But don’t let this distract you. You have a question to answer.

  5. Deaf Spy says:

    Fifi, don’t hide behind empty talk. You still didn’t answer the simple question:

    If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?

    Answer this, or admit you’re a fraud.

  6. oiaohm says:

    https://mariadb.com/products/mariadb-maxscale
    Deaf Spy, you need to learn to keep your mouth shut on topics you know nothing about.
    –Have you tried to scale MySQL horizontally, Pogson? Or any relational DB, for that matter? Btw, I have tried (with MySQL) and I can respectfully suggest never to try this at home.–
    Never let a moron who does not know how to do it attempt it.

    MySQL and MariaDB use the proxy MaxScale, which is specially made to make them scale horizontally.

    With PostgreSQL you most commonly use the proxy pgpool-II to make it scale horizontally. Oracle DB also has a proxy for horizontal scaling.

    Yes, MySQL is quite simple to scale horizontally if you know what you are doing. Whether the horizontal scaling will be well optimized is another problem, but those optimizations are very much the same with Oracle DB, PostgreSQL and MySQL.

  7. oiaohm says:

    Deaf Spy, PS: I made an intentional error in the BPM, yet you were too dumb to notice and missed it.

    –4) Each user id is unique to a location and not shared between locations.–

    I made a mistake on purpose to see if you had handled higher-grade stuff, Deaf Spy.

    The correct line is:
    Each user id is unique to a location, not shared between locations, and the value is unique overall across all locations.

    With the intentional error in the prior line, location X and location Y could both have a user id of 100, so a collision is possible.

    If you cannot see errors like that, you should not use natural keys. But the price will be oversized databases.

    There is a reason why the less qualified are told to use surrogate primary keys over natural keys: they don’t have the training or the skills to properly determine natural keys.

    I guess Deaf Spy had no clue that his question showed he is an under-qualified person. In fact he has proven he cannot read a BPM either. There is no way you should trust Deaf Spy anywhere near making a CRM.

  8. oiaohm says:

    Deaf Spy, the BPM of the CRM defines which natural keys will form unique values. In fact the BPM can establish that there are no natural keys that will form unique values.

    I did not avoid the original question. At all.

    https://en.wikipedia.org/wiki/Business_process_management

    Business Process Management is the step you should have completed before you start attempting to build or supply a CRM to a client.

    A CRM that is incompatible with a company’s or organization’s business processes is worthless.

    Deaf Spy, the reality is you are out of your depth. Cert IV web-design courses include the stupid question you asked, expecting an answer along the lines of “surrogate primary keys, because of generated uniqueness”. That answer is 100 percent incorrect at Diploma or BA level. In a Diploma or BA in IT infrastructure design the questions are very different and include a BPM with each one, which is a lot harder to answer.

    The question you keep asking shows you are less qualified than me, but you are too much of an idiot to know it, Deaf Spy.

  9. Deaf Spy says:

    PayPal uses HP Moonshot with ARM for their datacentre as do others.
    Says Pogson, comparing apples to oranges.

    Have you tried to scale MySQL horizontally, Pogson? Or any relational DB, for that matter? Btw, I have tried (with MySQL) and I can respectfully suggest never to try this at home.

    I’ve used databases and know about bottlenecks.
    Tell me, Pogson. What are the bottlenecks?

  10. Deaf Spy says:

    The BPM of the CRM you are designing decides whether you use natural or surrogate primary keys.
    No, it definitely doesn’t.

    Back to the original question, Fifi, don’t try to avoid it.

  11. oiaohm says:

    Fifi, if you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?
    Deaf Spy, I have already told you this question is wrong.

    Why? Let’s say I have a CRM design with a BPM. The BPM states the following:
    1) Each new account has to be created by a user.
    2) Each user creates one account at a time.
    3) The CRM will be operating in multiple locations on multiple servers at the same time, with data merging between locations, so collisions must not happen.
    4) Each user id is unique to a location and not shared between locations.

    So a compound natural key of a 64-bit time and the user id is perfectly suitable as a primary key in this case. A compound key may or may not be stored as two columns; it depends on how it optimizes. Using this as a primary key, the odds of a collision are zero.
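
    As a minimal sketch of that compound key (using Python’s built-in sqlite3 purely for illustration; the table and column names are mine, not from any real CRM):

        import sqlite3
        import time

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE account (
                created_at_ns INTEGER NOT NULL,  -- 64-bit creation time
                user_id       INTEGER NOT NULL,  -- user id per the BPM rules above
                name          TEXT,
                PRIMARY KEY (created_at_ns, user_id)  -- compound natural key
            )
        """)

        # Under the BPM rules above, one user cannot create two accounts at
        # the same instant, so (time, user_id) cannot collide for that user.
        conn.execute("INSERT INTO account VALUES (?, ?, ?)",
                     (time.time_ns(), 100, "Example Pty Ltd"))
        conn.commit()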

    The BPM of the CRM you are designing decides whether you use natural or surrogate primary keys. If you ask a person to design some random CRM without any BPM and they flatly answer “use natural” or “use surrogate” primary keys, they are an idiot. Lots of courses ask this question; the correct answer is that you use natural or surrogate keys based on uniqueness and on how they fit into the BPM of the CRM you are designing.

    http://mrpogson.com/2015/08/16/competition-returns-for-desktop-os-and-business-chromebook-v-wintel/#comment-311028
    Deaf Spy, this is where I first answered the question you have kept on asking, you idiot.

  12. dougman says:

    “Yes, you read it right, Chromebooks have overtaken sales of Windows notebooks. As you can see in the graph below, Chromebooks seem to be nibbling at both Macbook and Windows market shares: Windows’ share decreased from 66.2% in 2013 to 59.3% in 2015, whereas Apple’s share came down from 19.2% to 10.3% in the same period. Meanwhile, Chromebook’s share went up from 14.6% in 2013 to 30.3% in 2015.”

    http://www.linuxveda.com/2015/09/01/chromebooks-are-eating-microsofts-lunch-and-dinner/

  13. Deaf Spy wrote, “Linux and ARM are thriving
    Not on database servers, Pogson.”

    MySQL and PostgreSQL are both available on ARMed servers. Who cares whether the CPU is Intel or ARM? FLOSS licences don’t care. Oracle does not ship for ARM.

    PayPal uses HP Moonshot with ARM for their datacentre as do others.

  14. dougman says:

    Dr. Loser does software work for embedded devices and uses Linux; he is a hypocrite!

  15. DrLoser, flinging mud, wrote, “Fifi is a fraud, Deaf Spy. Even Robert knows this.”

    As far as I know oiaohm is the real thing. DrLoser, OTOH, frequently uses innuendo, strawmen, outright misinterpretation of clear prose etc. to create chaos, sure signs of a true troll. To some extent all trolls are fraudulent because they just can’t be stupid enough to believe the stuff they spew. So, it seems the pot is calling the teapot black.

  16. DrLoser says:

    Obviously, Robert, you are welcome to state that you consider oiaohm to be a worthwhile, technically adept, all-round good-egg contributor to your blog.

    Given your obvious contempt for him when he deals with issues like underwater radar, however, I suspect that you are secretly on the side of reality.

    Fifi is a total idiot, isn’t he?

  17. DrLoser says:

    A few lines of coherent answer are enough. Answer this, or admit you’re a fraud.

    Fifi is a fraud, Deaf Spy. Even Robert knows this. And Robert almost certainly knows more about the difference between natural or surrogate primary keys than I do.

    Fifi will never admit that he is a fraud. But it would be nice if Robert admitted it on his behalf.

  18. Deaf Spy says:

    Fifi, if you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?

    A few lines of coherent answer are enough. Answer this, or admit you’re a fraud.

  19. Deaf Spy says:

    Linux and ARM are thriving
    Not on database servers, Pogson. Can you prove your claim?

  20. oiaohm says:

    Row compression is also garbage. You normally waste more CPU time than the gain it gives you. The CSV example in row format gets compression mostly because you are compressing multiple rows together. If you rerun that same table used in the example (yes, I know it is a multi-GB download), except this time compressing each row individually, it remains at about 75% of its original size.

    The SQL Server 2016 documentation writes up row compression as something useful. Most other database documentation provides a clear warning that row compression is mostly worthless unless you design your database for it.

    Column compression in databases works. The overhead of having to touch multiple blocks of memory to write or read a row can be an issue. Low-row-count tables, like a table listing the Australian states, done as a column store are highly inefficient. Tables that store a lot of highly random data that won’t compress well by any means can at times be more efficient as a row store.

    The reality is that 70+ percent of cases are better off in a column-processed database with compression. Oracle DB’s core engine has been doing column processing since 2010, so PostgreSQL is late to the party here. Microsoft is also very late to the party.

    Handling a column-formatted database with compression is a different problem. With row compression, where you are compressing across a row, you don’t gain by making your PK values less random. Column compression, as you have in column-processed databases, does gain from PK values being less random. So your database design changes slightly depending on whether the database is row- or column-processed. The catch is that if you design a database optimized for column processing, it shows no major negative effects when used on a row-processing database without row compression enabled.

    Using a design meant for a row-based database on a column database, though, is a game of “do you feel lucky”. In a lot of cases the column-based database will be slow because the PK values are things like GUIDs that don’t compress down the column, and a person designing for a row-based database did not have to care about this, since row-based databases use row-based compression.

    That is why I said a column-based design on a row-based database without row compression shows no performance defect.

    Here is the compression magic trap.
    On a row-based database with row compression, to make a table compress well, columns of the same data type should be placed next to each other. So, for example, int:int:string:string compresses better than int:string:int:string in most cases when applying row compression. This is about reducing how random the row looks to the compression engine.
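
    That ordering claim is easy to measure for yourself. A rough sketch with synthetic rows, using zlib as a stand-in for whatever compression the database engine really applies; actual engines behave differently, so treat the numbers only as an illustration.

        import zlib, random, string

        random.seed(0)
        rows = [(i, i % 7, random.choice(("red", "green", "blue")),
                 "".join(random.choices(string.ascii_lowercase, k=8)))
                for i in range(10_000)]

        def compressed_size(lines):
            return len(zlib.compress("\n".join(lines).encode()))

        # int:int:string:string -- like types adjacent within each row
        grouped = [f"{a},{b},{c},{d}" for a, b, c, d in rows]
        # int:string:int:string -- types interleaved within each row
        interleaved = [f"{a},{c},{b},{d}" for a, b, c, d in rows]

        print("int:int:str:str ", compressed_size(grouped))
        print("int:str:int:str ", compressed_size(interleaved))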

    For a column-based database it does not matter what order your columns are in:
    int:int:string:string and int:string:int:string perform identically. A column-based database cares more about the column contents, because the compression runs down the columns.

    Is it possible to make a database design that obeys both rules? Yes, it is. The reality is that most database designs you find are not optimized for row compression. You are more likely to find a design that is optimized for column compression, bar a few minor errors like a PK-generation method that results in more overhead than you want.

    Most of the problem here is that people designing databases don’t understand it. Yes, people using stuff like SQL Server, which has not had compression, have got used to designing tables without thinking about compression at all.

  21. Deaf Spy wrote, “Hey, Pogson, do you realize why you need sheer, raw CPU power, which ARMs fail to provide compared to x86?”

    I guess Deaf Spy doesn’t read specs. ARMed CPUs have lots of MIPS these days. They’re even 64-bit and multi-cored. In fact, one can put more MIPS of ARM in a package than x86. That’s why ARM exists. Then there’s MIPS/$, MIPS/W and MIPS per unit volume etc. where the value of ARM shows. I’ve used databases and know about bottlenecks. ARM isn’t one of them. Intel, Oracle and M$ are bottlenecks in IT. Hence, */Linux and ARM are thriving.

  22. oiaohm says:

    http://www.postgresql.org/about/news/1573/
    –With a parallel-processing engine especially suited for processing column-oriented data–
    Deaf Spy, please: you did not read the link I provided earlier, did you?

    –You do this for data warehousing on indices on selected tables, which you populate once and use for reports.—
    Now note that my earlier link has nothing to do with data warehousing on indices.

    Oracle’s example of being 100 times faster using column-oriented storage is generic as well.

    “Data warehousing” is what the Microsoft SQL Server documentation says about column-oriented storage, and that is incorrect for other databases.

    –A complex query can easily kill your db–. Yes, and a complex query is more likely to kill you on Oracle if you are using row-based storage when you should be using column-oriented, as column-oriented uses multiple cores better.

    –PKs are just a tiny portion of the data.– Kinda wrong. PK values are used in the foreign-key sections of your database, so the more compressible your PK values are, the better the foreign-key columns will compress.

    Looking up a PK value in a foreign-key column to find the rows to return is a very common processing event.

    A table where you are always doing PK searches down the foreign-key column to extract rows can end up massively more memory-efficient in column format. Even a complex query spread across multiple CPU cores is more efficient with column format.

    The advantage of a row-formatted database is really not much.
    — If you do so, you will get into a morass so deep that any benefit of lesser IO will get drowned, and you will end up with a slow, inflexible database. —
    This is so untrue it is not funny. The flexibility of database operation is not tied to optimizing items like PK values so that they compress.

    You have never optimized for compressibility. Natural time combined with an identifier is a common approach. A 64-bit time value normally compresses quite well (think how many of those bits remain identical for a whole year). Combine it with a 32-bit unique operator identifier. Note that a GUID is 128 bits, so you have saved 4 bytes straight up. Practical reality: how many databases do you know that need a 32-bit count of operations within the same second? The combination is 100 percent sure to be unique, with zero collision risk.

    Yes, recycling the unique operator identifiers is perfectly valid. If you are worried about alignment you can use a 128-bit value with the first 4 bytes always zero.
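
    A minimal sketch of that packing, assuming seconds-resolution time and a 32-bit operator id (the numbers are illustrative, not from any real system):

        import struct, time, uuid

        operator_id = 100                      # 32-bit unique operator identifier
        now = int(time.time())                 # 64-bit seconds since the epoch

        key96  = struct.pack(">QI", now, operator_id)      # 12 bytes: time + operator
        key128 = struct.pack(">IQI", 0, now, operator_id)  # 16 bytes, first 4 bytes zero
        guid   = uuid.uuid4().bytes                        # 16 bytes, effectively random

        print(len(key96), len(key128), len(guid))          # 12 16 16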

    –Application programmers do not care about that and they should not.–
    That is the trap here. Oracle does teach compression in its Java application-development courses for Oracle databases. The reality is that the database administrator’s job is not to fix up poorly designed applications causing IO issues.

    The reality is that when optimizing a database for speed you are altering both the application and the database. Taking a GUID out and putting identifier + time in its place does not have to alter the data structure, yet it makes a hell of a difference to performance in particular places.

    The application developer says “I don’t need to know about SQL database compression, it’s the database administrator’s job”? Then you have the database administrator complaining that he or she cannot get more performance out of the database because something the application developer did is hindering it?
    This is the reality of what Deaf Spy proposed by saying the application developer does not have to care about how their actions affect compression. It creates a chicken-and-egg problem. How well or how poorly a database can compress really must always land on the application developer’s head.

    You create an impossible situation for the database administrator. The database administrator’s job is exactly that, administration: minor alterations to settings, like turning compression on or off, not making the data compress well.

    Both database administrators and application developers have a role to play when it comes to compression and databases.

  23. Deaf Spy says:

    Sigh. Fifi, I am going to say it one more time, and stop paying attention to your gibberish. You either say something that makes sense, or you stay alone with your embarrassment.

    You compress data in two ways:

    1. Row compression. You do this to reduce IO operations on tables that get modified often because normally it is the IO being your bottleneck, while CPU should not be overloaded. (Hey, Pogson, do you realize why you need sheer, raw CPU power, which ARMs fail to provide compared to x86?) You may also apply it on partitioned tables you archive, but for a lesser effect. It also has a good effect on backups, as it reduces the maintenance down-time window.

    2. Columnstore compression. You do this for data warehousing on indices on selected tables, which you populate once and use for reports. The goal is to fit the whole table into physical RAM to improve performance.

    Compression is all about database administration. Application programmers do not care about that and they should not. You do not try to design your data to make compression work better. If you do so, you will get into a morass so deep that any benefit of lesser IO will get drowned, and you will end up with a slow, inflexible database. A complex query can easily kill your db, compression or not. And your obsession with compressing PKs is laughable. PKs are just a tiny portion of the data. What you need is a good source of unique values.

    Finally, Fifi, return to the original challenge: If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?

    Just answer the question and spare us the gibberish and your fantasies. The question has many hints. Use them, Fifi, use them.

  24. oiaohm says:

    Deaf Spy, compression effects are covered in the Oracle-provided training courses on Oracle DB and on how to design databases to take advantage of it.

    –Random numbers naturally compress poorly–
    That is a direct quote from an Oracle training course. Calling it gibberish proves you have never had formal Oracle DB training.

    –For a long time database designs have skipped out on basic compression knowledge–
    ++And for a reason.++
    Mainly because courses have students not using high-performance databases, where the difference is in your face.

    –People who have worked on Oracle, MySQL and PostgreSQL are used to compression and its effects–
    +++No such thing.+++
    Funny, super moron. Compression and its effects are detailed in all three manuals, which anyone using any of the three should have read, as well as in the training courses for them.

    Sorry, claiming “no such thing” is a pure moron move. You started an argument there was no way to win.

    This is the problem: an SQL Server user arguing with an Oracle, PostgreSQL or MySQL user risks having their head cut off for stupidity.

  25. Deaf Spy says:

    People who have worked on Oracle, MySQL and PostgreSQL are used to compression and its effects
    No such thing.

    Random numbers naturally compress poorly
    Gibberish.

    For a long time database designs have skipped out on basic compression knowledge
    And for a reason.

  26. oiaohm says:

    Once you start using a database with compression, things are very different.

    The better your data compresses, the lower the disk IO load is. People who have worked on Oracle, MySQL and PostgreSQL are used to compression and its effects.

    SQL Server 2016 is the first SQL Server with compression built in. Those with only SQL Server experience are in for a kick in the teeth trying to work out why designs X and Y, which look almost identical, perform radically differently. It is also how two columns can be faster than one: one column has a lower randomness factor, so it compresses like nothing else and is very light on disk IO.

    Understanding the difference between row and column performance with compression is key to getting the most performance out of a database that supports compression. Of course, row-based storage has its own particular list of advantages as well.

    Worrying about whether a table should be column- or row-based, and about how random your PK data is, becomes important once you have a database using compression.

  27. oiaohm says:

    –Advantage of using column-based database on Linux or in vSphere or Xen is reduced ram usage compared to row based due to memory block deduplication.–

    The “Xenis” error was a typo for “Xen is”.

    It’s about time you lost your doctor title, as you are just a Loser.
    Kernel Samepage Merging was in vSphere first and then given to the Linux kernel by VMware. The COW file system of vSphere supports block deduplication and tracks this into memory allocations. Yes, this is implemented in the Linux kernel for normal applications and the embedded hypervisor, and in Xen as well.

    So as soon as you load Windows with SQL Server under vSphere, some databases speed up and are faster than when running on the native hardware, all due to the effects of memory deduplication.

    I just stated what effect Kernel Samepage Merging has on a database. Column-based databases take more advantage of Kernel Samepage Merging than row-based ones, because data going down a column is more likely to contain duplicates than data going across a row. It is quite a difference, in fact.

    Yes, SQL Server has a column backend as an option.

    — Things like GUIDs are double-edged swords: yes, it is lazy to use a GUID, but the price is a lack of compression.–

    ++More gibberish.++
    Not at all. Look up how a GUID is generated: it is built on the idea of a random-number generator, and random numbers naturally compress poorly, particularly once you work out that records in a database will be stored in particular orders, like the order of creation. Using something like a GUID in the wrong place in your database design can exact a hell of a performance cost, particularly if the result is an expanding memory footprint to the point where you have to be transferring blocks from disk all the time.
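
    The compressibility difference is simple to check. A rough sketch, compressing a column of random GUIDs against a column of creation-time-plus-counter keys (zlib and synthetic data, just for illustration):

        import uuid, zlib, time

        n = 10_000
        start = int(time.time())

        guid_column = "\n".join(str(uuid.uuid4()) for _ in range(n))
        # Keys built from creation time plus a per-second counter share most of
        # their leading digits, so they are far less random than GUIDs.
        natural_column = "\n".join(f"{start + i // 100}-{i % 100:02d}" for i in range(n))

        for name, col in (("guid", guid_column), ("time+counter", natural_column)):
            raw = col.encode()
            print(name, len(raw), "->", len(zlib.compress(raw)))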

    DrLoser, this is basic compression knowledge. For a long time database designs have skipped out on basic compression knowledge.

    DrLoser calling this gibberish, LOL. It is about time you lost the Dr title; you don’t deserve it. You are a loser who calls things gibberish that are facts.

  28. DrLoser says:

    Things like GUIDs are double-edged swords: yes, it is lazy to use a GUID, but the price is a lack of compression.

    More gibberish.

  29. DrLoser says:

    Advantage of using column-based database on Linux or in vSphere or Xenis reduced ram usage compared to row based due to memory block deduplication.

    Gibberish.

  30. oiaohm says:

    https://mariadb.com/kb/en/mariadb/mroonga/
    Deaf Spy, you still missed what I said.
    –1. Take a table in a database.–
    Note that the storage engine of that database, like the one above, might be column-based storage.
    Advantage of using column-based database on Linux or in vSphere or Xenis reduced ram usage compared to row based due to memory block deduplication.
    –2. Optionally: convert to column-based storage, preferably in CSV.–
    If you are transferring from one column-based database to another, column-based CSV is about the best on size you are going to get, but it is important that the column-based database supports properly exporting and importing CSV in column-based format. There can also be RAM savings in processing down a column instead of going across rows, which forces blocks from other columns to be loaded before you have finished processing the block already loaded.

    Also, column-based CSV is more likely to have identical memory blocks when in RAM, so it will deduplicate in memory.

    == Performance on Linux!==
    Not quite: a performance boost on Linux, and on Windows in vSphere or Xen.

    This is where saying “CSV support is not important” falls down. Since the advantages of column-based processing were found, CSV has got a new life.

    Yes, column-based databases are a different beast to handle than row-based ones. The sad part is that SQL Server supports column-based storage, yet the exporter/importer that should support column-based data well is broken. That is the true sad state of affairs.

    Column-based CSV is still fairly new. MariaDB, likewise, does not have proper export and import for column-based CSV yet.

    With SQL Server 2016 having compression, Deaf Spy, you will have to relearn database design to allow for it. Things like GUIDs are double-edged swords: yes, it is lazy to use a GUID, but the price is a lack of compression.

  31. Deaf Spy says:

    I think I’ve figured out Fifi’s intent behind this rather opaque claim.
    1. Take a column in a database.
    2. Index that column.
    3. Performance!

    I think, dear Doctor, we’re facing a different case here.

    1. Take a table in a database.
    2. Optionally: convert to column-based storage, preferably in CSV.
    3. Compress!
    4. Performance on Linux!

    Still a sad state of affairs.

  32. oiaohm says:

    Deaf Spy, here is what you don’t get.
    http://devnambi.com/2013/compressing-tabular-data/ (11 January 2013) is the first major demonstration, with downloadable samples, that column-oriented formatting provides a major compression advantage.

    http://www.dba-oracle.com/t_row_column_oriented_data_storage_tde.htm
    Of course Oracle has been using compression for longer. How well the data compresses is a factor. But this is not like kernel same-page merging and file-system block deduplication, since neither of those costs CPU time to decompress: they exploit the virtual-memory system of the OS or hypervisor to make one block appear to be many.

    Optimizing a database for performance is not as straightforward as one thinks. A lot of database designers fail to ask the question: how well will this compress? Failure to compress well has negative effects on performance.

    Yes, benchmarks of compression relate directly to the core of the database engines.

    https://msdn.microsoft.com/en-us/library/cc280449.aspx SQL Server 2016 is the first one to introduce compression. PostgreSQL had compression back in 2009 and Oracle, as you can see from the other link, in 2010.

    Basically, Deaf Spy, I have experience working with databases that have compression and you don’t, as well as experience dealing with the different kinds of compression.

  33. oiaohm says:

    –then ask him to explain why GUID-based PKs in SQL Server do not have negative impact on performance,–

    Deaf Spy, when SQL Server is running on an OS without memory compression and on a file system that doesn’t have block deduplication, there is no major performance difference. That is Windows + SQL Server on real hardware, or Windows + SQL Server on Hyper-V.

    There is a performance difference when you have SQL Server + Windows Server running inside VMware vSphere or Xen, due to the introduction of memory compression in the form of deduplicating pages of RAM with identical content. The result is that more of the database fits into RAM, so less disk IO is required. The more you change your database to produce identical memory blocks, the more of it fits in RAM and the faster it goes.

    Yes, using GUIDs has a negative impact on the performance of SQL Server installed in particular environments.

    Compression of memory and disk causes some very interesting things to happen.

    –You clutch to your column-based CSV as some cure-it-all pain-killer magical pixie dust, without actually having used one ever. Without actually having exchanged data between systems ever.–
    The problem is, this is you guessing. I have used column-based CSV on Linux systems and I know the performance difference. Memory compression and disk block deduplication are a new factor in database optimization. Of course, only an idiot who has only used Windows-based solutions would not be aware of this.

    Column-oriented data processing is becoming more popular because it means more same-page matches in memory. This does not just reduce your normal RAM footprint; it also reduces your CPU cache footprint.

    The problem here is that column-oriented data processing under Windows shows very minimal performance gains using Microsoft-supplied parts only.

    –You don’t have any benchmark of your own to show. You only stumble upon random articles on the Internet, which you can’t even understand.–
    Of course I have my own benchmarks, but when they show the same things as what is already published with downloadable samples, why waste my bandwidth? That CSV example uses a multi-GB sample file.

  34. Deaf Spy says:

    I think I’ve figured out Fifi’s intent behind this rather opaque claim.
    1. Take a column in a database.
    2. Index that column.
    3. Performance!

    Ah, this is a too common trap amongst young minds who just happened to learn about indices. That is why you should never let juniors design databases.

    I am almost tempted to ask the Wunderkind of the Bush under what circumstances an index will actually degrade the performance, and then ask him to explain why GUID-based PKs in SQL Server do not have negative impact on performance, but that is too much. Let’s not cast pearls to the swine.

  35. Deaf Spy says:

    Fifi, at the end of your wall of text, I see that:
    1. You don’t have any benchmark of your own to show. You only stumble upon random articles on the Internet, which you can’t even understand.
    2. You still can’t answer what type of PK you would use in a design of a small CRM system.
    3. You still can’t explain how your additional table in question helps you generate unique PKs. I can imagine how, but it is utterly stupid.
    4. You clutch to your column-based CSV as some cure-it-all pain-killer magical pixie dust, without actually having used one ever. Without actually having exchanged data between systems ever.

  36. oiaohm says:

    Deaf Spy, compression is a factor in system performance under Linux.
    https://www.kernel.org/doc/Documentation/vm/zswap.txt

    –And don’t forget the complexity you introduce with column-based CSV–
    If you are transferring between two column-based databases that directly generate column-based CSV, this is reduced complexity, not increased; it depends on the use case. Oracle DB is one of those databases, and the prototype PostgreSQL is also one of them.

    Column-based CSV is the smallest; you might choose not to use it and go row-based instead, taking the larger size because of the processing costs.

    ==cases where Compound PK are safe, faster and smaller they are not done.== You cut the quote in the wrong place.
    –No, they are neither of these. How can 2, let’s say, integers, along with all the overhead, be less than 1? —

    You need a unique PK generated on multiple different servers at the same time. You can go for magic GUID-and-hash solutions and hope you don’t have a stuff-up, or go for a very simple compound PK: server-assigned ID number + server-generated number. GUIDs will not compress well because of how random they are. Yes, when you mix this with compressed memory, a compressed file system and KSM, you can get some interesting performance advantages.

    –Absolutely not, Fifi. It runs under Solaris.–
    In Oracle’s own recent benchmarks, Linux beats Solaris running Oracle DB.
    http://www.oracle.com/technetwork/server-storage/linux/technologies/rdbms-12c-oraclelinux-1973518.html
    Even Oracle admits it. So your “absolutely not” is complete bullshit, Deaf Spy.
    The fastest Oracle DB on any platform it runs on is Oracle DB running on Linux.

    http://packetpushers.net/tcp-over-ip-bandwidth-overhead/
    –I know it will come out of the blue for you, Fifi, but on the network there is a thing called overhead. In the numbers I quote, this overhead is going to make your advantage only theoretical. How do I know? Because I have done it and optimized for throughput, Fifi. You haven’t.–
    Deaf Spy, oh hell I have, you stupid idiotic moron.
    It turns out HTTP/TCP/IP overhead is an almost constant percentage, so the advantages I talked about fully allowed for overhead. With HTTP you can basically ignore the handshake, because it is nothing compared to 100 KB or larger; the HTTP request-and-send handshake is 4 KB max, and I had allowed for that. Sorry, 200 KB is not a difference hidden by overhead. OK, I was rough.
    To be exact I should have said:
    –360 KB. Compressed XML: 540 KB. That is, CSV is 66 percent of XML, or 33 percent smaller. So for every 4 XML files you send you can send 5 or more compressed CSV files; this adds up.– That leaves more than enough room to cover HTTP overhead with space to spare. Column-compressed was 3 XML to 4+ CSV, or a quarter more transfers. My rough maths was still in the ballpark.
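
    A quick arithmetic check of the sizes being argued over (360 KB compressed CSV versus 540 KB compressed XML), just to make the ratios explicit:

        csv_kb, xml_kb = 360, 540
        ratio = csv_kb / xml_kb            # 0.666..., so CSV is about 33% smaller
        per_3_xml = 3 * xml_kb / csv_kb    # 4.5 CSV files in the space of 3 XML files
        per_4_xml = 4 * xml_kb / csv_kb    # 6.0 CSV files in the space of 4 XML files
        print(ratio, per_3_xml, per_4_xml)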

    Really, claiming you have optimized for throughput: you must have screwed up so badly it is not funny, because you don’t know the basic maths of it. My numbers were only out by one, not out far enough to say there is zero advantage at all on throughput optimization, you stupid moron.

    Further, this is an optimization you can also perform on the XML, though it will be an outrageously stupid one. I will let you guess why.
    It is stupid because, if you pull up the white papers where they have done column-formatted XML, there is no compression advantage; it is all down to the XML format. I had already stated this, and you say I need to guess why; have you not been reading? Of course, coming out of an Oracle database that is column-formatted, if you want an XML file it is tempting to dump it column-formatted. If you have a column-formatted database and are transferring to another column-formatted database, column-formatted CSV files start looking quite nice, if the database supports them natively. A native import gets to skip the requirement to fill out rows completely before adding them to the table.

    Column-formatted CSV is one of those things the database needs to support natively, or you are screwed with excess processing.

  37. DrLoser says:

    Generation of PK is also about performance of database.
    It is not, unless you are an idiot.

    I think I’ve figured out Fifi’s intent behind this rather opaque claim.
    1. Take a column in a database.
    2. Index that column.
    3. Performance!

    Of course, confusing “indexed columns” with “primary keys” is a very sad state of affairs. But no less than we have come to expect from the Magus of the Outback!

  38. Deaf Spy says:

    Your other post, Fifi, I am afraid, is one of those masses of stupidity you have the unfortunate habit of upchucking.

    The fastest Oracle databases run under Linux
    Absolutely not, Fifi. It runs under Solaris.

    Generation of PK is also about performance of database.
    It is not, unless you are an idiot.

    Compound PK are safe, faster and smaller they are not done.
    No, they are neither of these. How can 2, let’s say, integers, along with all the overhead, be less than 1?

    The rest is total bullshit that is not even worth discussing. The answer to every statement of yours is “incorrect”.

    I do not need to prove that they are incorrect, Fifi. Just like I don’t need to prove that sky is not pink. You need to prove your amazing claims. Good luck.

  39. Deaf Spy says:

    Fifi, I see you are good at quoting other people’s numbers. But where are yours, sweetie? Have you ever made a real experiment? Do you have any real experience?

    Now, a closer look:
    360 KB. Compressed XML: 540 KB. That is, CSV is 66 percent of XML, or 33 percent smaller. So for every 3 XML files you send you can send 4 compressed CSV files; this adds up.

    I know it will come out of the blue for you, Fifi, but on the network there is a thing called overhead. In the numbers I quote, this overhead is going to make your advantage only theoretical. How do I know? Because I have done it and optimized for throughput, Fifi. You haven’t.

    And don’t forget the complexity you introduce with column-based CSV, where Pogson’s dear “it works for me” will suddenly stop. Further, this is an optimization you can also perform on the XML, though it will be an outrageously stupid one. I will let you guess why.

  40. oiaohm says:

    By the way, it is like the claim that natural keys result in smaller tables. This depends on how your database is designed and what your database is running on.

    Column-oriented data processing on Linux gets highly interesting, and the same goes for block deduplication.

    https://btrfs.wiki.kernel.org/index.php/Deduplication
    At the file-system level, depending on the filesystem, using column-oriented storage you can have more duplicate blocks in the database, resulting in the database shrinking on disk.

    In memory you have Linux Kernel Samepage Merging. Then even under NTFS you have file-compression options, so items that don’t compress well can result in lower performance under Windows too. A GUID, being a fully random mess of a number, does not compress well. The old “last number + 1” PK-production method can be better than using a GUID. Remember that Linux has compressed swap.

    Your database design needs to be different on Linux compared with what people normally do under Windows, or you just will not be processing as fast as you could be. Yes, the two designs on Windows under Oracle will appear to process at the same speed, but when they are put on a Linux server, Linux kernel differences make one massively faster than the other.

    The fastest Oracle databases run under Linux and are many times faster than SQL Server. It is Oracle DB that the likes of PostgreSQL has to worry about beating. Even though PostgreSQL is up to SQL Server speed, it still has not fully caught Oracle.

    Generation of PK is also about performance of database. Yes, a lot of SQL courses teach not to use compound PKs, with the result that, in cases where compound PKs are safe, faster and smaller, they are not used.

    The first week of dealing with a person who has just done an SQL course is spent showing them the many benchmarks of why their course is wrong when taken into the real world.

    When Oracle went column-oriented they got 100x faster on Linux. When SQL Server went column-oriented they only got 10x faster. Worse, PostgreSQL, still operating in row mode, keeps up with SQL Server in column-oriented mode as of 9.5.

    To increase SQL Server’s speed, Microsoft needs to modify the Windows kernel and pay for a patent licence from VMware (or, highly unlikely, open-source the Windows kernel, as the patent licence from VMware is free for all open-source works).

    This is the super funny part: PostgreSQL has been missing features, but so has the Windows kernel. So databases running on Windows are now just as performance-crippled as PostgreSQL 9.5. It is all about missing features, and PostgreSQL runs on the OSes that will allow it to go faster.

  41. oiaohm says:

    http://devnambi.com/2013/compressing-tabular-data/
    A simple way to turn a CSV file into a column-oriented (columnar) format is to save each column to a separate file. To load the data back in, read a single line from each file (column), and ‘stitch’ the data back together into a row.
    Deaf Spy, read it again. That is about CSV and how to get the compressed output smaller.

    Swapping CSV to column-oriented gives about a 31% saving most of the time. Let’s be rough and call it 1/3, so compressed column-oriented will be about 2/3 of the figure you gave, roughly 240 KB, or roughly 50 percent of the XML.
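
    A minimal sketch of the per-column transformation the linked article describes, done in memory rather than with one file per column, with zlib standing in for the compressor and made-up sample rows:

        import csv, io, zlib

        rows = [["id", "country", "status"]] + \
               [[str(i), "AU" if i % 3 else "NZ", "active"] for i in range(5000)]

        def to_csv(table):
            buf = io.StringIO()
            csv.writer(buf).writerows(table)
            return buf.getvalue().encode()

        row_oriented = to_csv(rows)
        # "Rotate" the table 90 degrees: one output line per original column.
        column_oriented = to_csv(list(zip(*rows)))

        print("row-oriented   ", len(zlib.compress(row_oriented)))
        print("column-oriented", len(zlib.compress(column_oriented)))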

    It is 200 KB greater than normal compressed CSV, and about 300 KB greater than column-oriented compressed CSV.

    360 KB. Compressed XML: 540 KB. That is, CSV is 66 percent of XML, or 33 percent smaller. So for every 3 XML files you send you can send 4 compressed CSV files; this adds up. Of course it is better still if you have used rotated CSV, where for every 2 XML files you send you could have sent at least 3 CSV files.

    When generating the XML file to send, where are you going to store it before compressing? So the uncompressed size matters.

    Comparing uncompressed against compressed is a biased comparison.

    JSON is what you should have been comparing against, not XML.

    You need to do an efficiency-versus-time chart. Not including how long it took to compress really hides another issue.

    In my link, notice it is exactly the same data in a CSV table, just aligned the two different ways. It makes a major difference to the final size and the processing time.

    Do I have to remind you that HTTP has timeouts as well as compression? If you take too long to prep the data, the client can drop the connection.

    Like it or not, when doing it over HTTP the uncompressed size is just as important as the compressed size. CSV in your example will have consumed less RAM. Paying double in RAM for JSON you can justify. The XML example you used, at 12 MB, is just insane: 12 MB times 1000 connections is ouch. That is exactly what a bank with people downloading large bank statements would be on the receiving end of; 13K rows for a very active business can be a small bank statement.

    It is very simple to say compression solves the problem. In reality it does not. Only an idiot would argue “hey, the XML is compressed, everything is fixed”. When decompressing and processing, you are back to the same problem.

    Transmission is both send and receive. Yes, saving size at the price of being slow is not a good idea.

    http://www.postgresql.org/about/news/1573/
    –Ah, and to remind you, the original topic was SQL Server vs. Postgres–
    Column-oriented data processing exists in Postgres development branches, 50 times faster than existing PostgreSQL, and yes, those branches have column-oriented CSV. Considering that current-day PostgreSQL is about the speed of SQL Server, I see Microsoft needing to do major performance work in future.

    Yes, column-oriented formats are very interesting performance and RAM-usage tweaks.

    –that you’re clueless about the actual practical benefits of natural and surrogate keys.–
    Not the case at all. Something is only a benefit if it suits the design you are doing. Sorry, you were clueless that surrogate keys could have duplicates, so you over-presumed the benefits. The benefits are not the important thing to know; the downsides are. Downsides are what break databases.

    Generic lists of natural- and surrogate-key benefits are mostly bullshit once you get into a real example. For instance, time and a Social Security Number have different probabilities of being identical, and the same goes for generated surrogate keys.

    http://ask.webatall.com/sql-server/12106_are-guid-collisions-possible.html
    The answer is in fact yes: SQL Server’s default surrogate generation has had collisions, as there have been cases. Insanely rare, but there have been cases.

    Real-world facts don’t line up with the academic claims about natural and surrogate keys. This is the problem. I have real-world experience of using databases and of how they screw up. Not everything you learn in a course has value in the real world. Doing a proper assessment of generators and natural sources is critical. A natural source might be 100 percent unique (there are some that are); a surrogate might be highly likely to have duplicates because its generator is crap or, in the GUID case, because the random-number generator has screwed up.

  42. Deaf Spy says:

    Fifi, I knew I could rely on your to bring something completely irrelevant to the discussion.

    Let me bring you back to one of the topics discussed (which is also a side topic, mind you!).
    Sample data: 13K rows, 16 columns. CSV: 2.8 MB. XML: 12 MB (the schema contains information about data types and sizes). Compressed CSV as ZIP: 360 KB. Compressed XML: 540 KB.

    Now, Fifi, it should be obvious even for you. XML tabular data, compressed with a simple compression like ZIP, produces a file 9 times smaller than the uncompressed CSV. Compressed XML is only less than 200K greater, but it also bears quite some meta-data, plus it is reliably parsable. Do I need to remind you that compression comes for granted in HTTP?

    Ah, and to remind you, the original topic was SQL Server vs. Postgres, and that you’re clueless about the actual practical benefits of natural and surrogate keys.

  43. oiaohm says:

    Deaf Spy, the reality is that even with compression…
    –Because there are things like compression, which works amazingly well with text-serialized data. Compressed CSV, JSON and XML will end up with almost the same size. —
    Interestingly enough, no. It is a case of being a moron.
    http://devnambi.com/2013/compressing-tabular-data/
    If you are compressing CSV, JSON or XML, “the same size” is not even close.
    Deaf Spy, I guess you have never heard of column-oriented (columnar) formats.
    Yes, there is normally formatted CSV, with rows and columns laid out as a human expects, but then there is column-oriented CSV, where you have turned the table 90 degrees from the human’s point of view.

    You can do column-oriented CSV. You cannot do column-oriented JSON or XML; the trash in the middle of the XML and JSON formats disrupts compression. For compression you want column-oriented: it reduces your compress and decompress times and improves your final size by a large margin.

    Deaf Spy, I guess you did not have a single clue that there are two major CSV format types: column-oriented and row-oriented. Idiots who don’t know how to optimize compression normally don’t know about column-oriented.

    Really, on CSV you don’t have a clue, Deaf Spy. Sorry, I do have a clue about databases as well; you are just that big a moron, one who has not handled all the different cases, so you don’t really know anything.

  44. Deaf Spy says:

    A simple question by the biggest moron can confuse the wisest man.
    No. The wisest men will see the moron. This is what happens with you all around.

  45. Deaf Spy says:

    Blah-blah-blah…Size is the reason why you would choose CSV…Blah-blah-blah…
    Size, Fifi, does not matter nowadays. Because there are things like compression, which works amazingly well with text-serialized data. Compressed CSV, JSON and XML will end up with almost the same size. Then consider that compression comes with HTTP for granted and that HTTP is the most popular protocol nowadays.
    Gosh, even your own article points out JSON as the best out there. Can’t you at least find better proofs for your desperate causes?

    I am not saying I would use natural or surrogate. It would depend on the requirements of the customer requesting the CRM as to what is used for PK values. The one thing that is sure is that my PK values would be unique, no matter whether they are naturally sourced or surrogate
    In other words: you don’t have a clue.

  46. oiaohm says:

    Operator login is a creative one, by the way. Databases like PostgreSQL and Oracle, in security mode, record the user who created an entry and when it was first created. So operator login + time can be natural, surrogate, or both.

    The interesting case is both: the security table records creation time and user login, and the PK is generated from user login and time. If the two disagree too much, you have a modified record.

    This is why, when someone asks me "will you be using natural or surrogate", it is not a sane question. All it shows is a lack of experience building auditable databases.

    Auditable databases very commonly use a compound PK that is half natural (time) and half surrogate. It also reduces the chance of a misbehaving surrogate generator producing a duplicate key to something insanely small: two otherwise identical surrogates are very unlikely to be generated at exactly the same instant. It also makes it much simpler to spot people attempting to rewrite history. The most useful natural key component is time.
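    A minimal sketch of the kind of compound key being described, assuming PostgreSQL; the table and column names are made up for illustration:

        -- Hypothetical audit-style table: the primary key combines a natural part
        -- (the operator's login) with the commit timestamp, so every row records
        -- who wrote it and when, and a duplicate would require the same operator
        -- to act at exactly the same instant.
        CREATE TABLE account_audit (
            operator_login  text        NOT NULL,              -- natural: comes from the authentication system
            recorded_at     timestamptz NOT NULL DEFAULT now(),
            customer_number integer     NOT NULL,
            change_note     text,
            PRIMARY KEY (operator_login, recorded_at)
        );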

    A simple question from the biggest moron can confuse the wisest man. Remember that, Deaf Spy. The problem here is that you don't know enough to see that the answer to the question you asked is neither simple nor straightforward. You never gave me the requirements of the CRM, what class of auditing it needs, or the BPM it is being built to handle, so there is not enough detail to answer the question simply. Anyone who thinks the answer is simple is inexperienced.

  47. oiaohm says:

    Deaf Spy, the real world is full of completely botched systems. I can give many examples of real-world systems that are even more horrible than the one I just described.

    http://www.javamazon.com/csv-vs-xml-vs-json/
    –There are two reasons why CSV stays in usage: legacy, unmaintainable systems, and the inertia of dinosaurs who can’t assimilate anything new anymore and always claim that the good old ways “work for them”.—
    Basically a bullshit argument, as usual.

    JSON is, on average, double the size of CSV, and XML is three times the size.

    It's the same bullshit argument SQL Server users make all the time. Because CSV is compact and SQL Server's CSV import/export is broken, you end up reimplementing it in .NET or PHP or some other language in front of the database. The only thing smaller than CSV is going binary. Size is the reason you would choose CSV; if size were not an issue you would go JSON, because it is safer to process. XML, which is huge and painful to process, you only use for legacy stuff, if at all; legacy meaning standards defined before JSON existed.

    JSON replaces XML. Nothing released yet replaces CSV.

    However, he never answered the simple question:
    –“If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?”–
    I did provide a link explaining them, then pointed out that using those terms is bullshit and just gets you into trouble. Real-world databases are nowhere near as cleanly designed as academic texts make out.

    The problem is, I bet your answer on surrogate primary keys was "no duplicates", as you pointed out later on. In a CRM I would prefer to use a correctly generated/sourced PK. Time plus operator can be quite a good natural key combination: if each operator can only be logged in once, operator login + the time of the operation is quite a solid PK. Of course this is classed as a compound natural key, because time and login were not generated by the database. And you said a compound key is a bad thing?

    This is the problem I have with all these academic terms. For a CRM, what natural keys exist depends on the BPM the CRM has to serve.

    I am not saying I would use natural or surrogate. It would depend on the requirements of the party requesting the CRM for what is used as PK values. The one thing that is sure is my PK values would be unique, no matter whether they are naturally sourced or surrogate.

  48. Deaf Spy says:

    Distilling Fifi’s Wall of Text:

    Fifi, trying to give a proof of his qualification as database expert, comes up with an example of a completely botched system.

    However, he never answered the simple question:
    “If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?”

    Actually the example of your experience, unfortunate as it is, is fully in line with your just-as-unfortunate statement: “One of the reasons why CSV is staying in usage is compactness”.

    There are two reasons why CSV stays in usage: legacy, unmaintainable systems, and the inertia of dinosaurs who can’t assimilate anything new anymore and always claim that the good old ways “work for them”.

  49. oiaohm says:

    Business Process Management (BPM) has another, slang nickname: Bastard Process Maker.

    Deaf Spy, the problem is that the database design has to match the BPM.

    The example I have given is horrible:
    Staff type A issues cards.
    Staff type B enters account information and approves credit.
    If staff type B rejects a person staff type A has given a card to, the customer number is voided, to be reused by staff type A after staff type B returns it to them.

    This is what you call unfortunate BPM; the result is horrible SQL to match. Everything in the card table is created by user type A, everything in the customer table is created by user type B.

    When you have dealt with enough businesses, Deaf Spy, you will strike some with very horrible BPM that they will not allow to be changed.

    The unique-number system in that horror is the super-simple "largest value + 1, or first gap in the number sequence", because that is how they did it when accounts lived in physical card files… Let's not rewrite a BPM that has worked for 40+ years; instead you are mandated to make the SQL match the 40-year-old BPM.

    This is a gapless unique-identifier generator. Also, every midnight, any card that had not had account data created was voided to point at account 0 (void), so account numbers could be reissued the next day.

    This process is still a properly functioning unique-identifier generator, just not a common or CPU-efficient one. BPM at times forces you to create these bastard unique-identity generators. Gapless sequential unique identifiers are horrible.
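    A rough sketch of that kind of "largest value + 1, or first gap" generator, assuming PostgreSQL and an illustrative cards table; this matches the described BPM rather than good practice:

        -- Illustrative table only; the real system had more columns.
        CREATE TABLE cards (
            card_number     bigint  PRIMARY KEY,
            customer_number integer NOT NULL
        );

        -- Reuse the first gap in the customer-number sequence if one exists,
        -- otherwise take max + 1. Gapless, and deliberately CPU-unfriendly.
        SELECT COALESCE(
                 (SELECT MIN(c.customer_number) + 1
                    FROM cards c
                   WHERE NOT EXISTS (SELECT 1 FROM cards c2
                                      WHERE c2.customer_number = c.customer_number + 1)
                     AND c.customer_number + 1 < (SELECT MAX(customer_number) FROM cards)),
                 (SELECT COALESCE(MAX(customer_number), 0) + 1 FROM cards)
               ) AS next_customer_number;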

    With BPM you can start with a bad idea, and it only gets worse after that.

    Deaf Spy, asking these questions of me shows a lack of real-world experience making SQL match BPM. Next time you see some really horrible SQL design, don't straight away presume bad database design; the problem might predate the SQL design by decades, because it is bad BPM implemented as SQL. It is a real art to bend an SQL database to tolerate really bad BPM.

    Note I said at one point to add two tables, but this was not an option because the client mandated the minimum number of tables possible:
    one table to generate customer numbers, a card table with card and customer number, and a customer table with customer data. Of course, you would find the gaps by comparing all three tables. Again, horribly CPU-inefficient without some optimization tricks.

    Deaf Spy, calling me Fifi really makes me want to call you a moron or an inexperienced idiot, which in this case you are.

  50. Deaf Spy says:

    Deaf Spy, the fact that you are asking me why the extra card table means you have not dealt with a CRM that issues multiple individual billing cards per customer

    No, Fifi. I would just have a sane structure, with proper identifiers properly generated. What you describe sounds like a sad case of unfortunate database design. Any reliance on special values (min, max, negatives) is going to hit you back in the face sooner or later. But you still don't explain how your additional table would help you generate unique identifiers. Unless you rely on compound keys, which are almost always a bad idea in a relational database.

  51. oiaohm says:

    Deaf Spy, the strangeness of business processes, and the hacked-up SQL alterations you find to match them.

    The business operation where I saw the card-number and customer-number table made a nasty business choice: a customer is issued multiple cards for their staff, and lost cards have to be cancelled.

    CustomerNumber is one value; the CardNumbers per customer are unbounded.

    Why did someone hack it this way? Business process management (BPM).

    On the invoice the customer wants to see which staff member acquired the item, so the customer number was not important but the CardNumber was.

    So what that system used was a stored procedure that generated a new CustomerNumber in the card table when issuing the first card. To open the account you scan the card and enter the account information; adding a card is, of course, scanning an existing card and then scanning a new one.

    Someone thought this was good BPM.

    If you have not worked out the pure nightmare yet: someone magically thinks a surrogate key is safe, so the generator never gets checked. As a generator it was scanning the card table, deduplicating the CustomerNumber entries, to find the next customer number. Of course, when someone unregistered all cards from a customer, this system went stupid.

    In a database you normally want one-to-many; when you have many-to-one, be very careful. Yes, in the real system I was dealing with, the first write of a new customer number went into the card table and only later into the customer table.

    Fixing it was fairly straightforward: check both the customer and the card table on generation (sketch below), and introduce a new "issue card" path for when a customer's card count is zero. Of course this model is still flipped on its head compared to normal database design.
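    The fix described above, checking both tables before issuing a number, might look roughly like this, assuming the illustrative cards table from the earlier sketch plus a customers table:

        CREATE TABLE customers (
            customer_number integer PRIMARY KEY,
            name            text NOT NULL
        );

        -- Take the next number above anything already used in either table,
        -- so a customer number written to the card table first can never be reissued.
        SELECT GREATEST(
                 (SELECT COALESCE(MAX(customer_number), 0) FROM cards),
                 (SELECT COALESCE(MAX(customer_number), 0) FROM customers)
               ) + 1 AS next_customer_number;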

    Australia again, which is where I am. Some businesses print the client's ABN on invoices, not customer numbers, and staff look up accounts by ABN. Why can you not use the ABN as the customer number? Because a business can change its ABN.

    Yes, the customer's ABN is what is printed on the invoice.

    http://www.agiledata.org/essays/keys.html Basically this assumes the USA. I don't think the USA has an option like printing the ABN on invoices instead of your own customer number.

    This is the reality: the customer number disappears into internal use only.

    This is what I have really seen done to a CRM, and it is where I learnt that the surrogate-key definition is crap.

    Deaf Spy, the fact that you are asking me why the extra card table means you have not dealt with a CRM that issues multiple individual billing cards per customer.
    20 years of experience and you have not had to set that up?

    Matching a database to BPM sometimes equals strange and evil things. The card issuing I describe here is one of the strange and evil ones.

    Please don't say "issue a customer number in the customer table at the same time as the card", because the number of entries in the customer table was reported as the number of customers with accounts.

    Yes, there was a cleaner option with two extra tables: one table to generate customer numbers, one table to host cards, and then a table for customers. But that is not required if you are careful in generation.

    Deaf Spy, the reality is my example has nothing to do with what you find on websites, but with what I have seen in real life.

  52. Deaf Spy says:

    So much text to state something no one even thought of disputing here: that primary keys should be unique. Congratulations, Fifi, you just discovered the purpose of primary keys.

    Ah, and your counter-design is, mildly put, interesting. You say:
    CustomerNumber is the next one there put up as natural. Add one extra table to that database design containing CardNumber with CustomerNumber, make the CardNumber the value staff use, and CustomerNumber becomes a surrogate key.
    Why on earth would you combine a surrogate and a natural key in one extra table? And, unless you have other columns, you have no counter here at all. Unless you consider auto-generated columns, but then you don’t need the extra table at all.

    But I shouldn't be too hard on you. It is not fair to expect you to construct a valid, transaction-capable generator. You managed to come to the right conclusion about uniqueness of PKs and having a generator just by reading a bunch of sites. That is quite an accomplishment for you, and it doesn't happen very often.

  53. oiaohm says:

    Natural, Surrogate and Artificial are terms that all came about to explain, in hopefully simpler terms, what us old guys using OID solutions used to do. The problem is that in the process the authors who wrote the books lost the most important fact.

    Rule 1 of all object identifiers: uniqueness. You are not talking to a person with a mere 20 years of experience.

  54. oiaohm says:

    Deaf Spy, to be clear, I absolutely agree that primary key values are absolutely important to get right.

    The problem is that the terms Natural, Surrogate and Artificial tell you nothing useful about whether your primary keys are right or broken. So they are bullshit.

    I stick to the old OID style, where you define the generation/source of a PK. This older-school method does not leave room for implementors to screw it up.

  55. oiaohm says:

    Deaf Spy, the problem here is that what are called surrogate keys can also have duplicate-key events.

    I started with databases before the terms natural, surrogate and artificial were used; back then it was Object Identifiers, OIDs. A far more common-sense idea.

    Artificial primary keys: you did not mention those, did you?
    http://ycmi.med.yale.edu/nadkarni/db_course/Design_Contents.htm

    Surrogate and Artificial keys have either the same meaning or different meanings depending on which school of SQL design you went to.

    Some schools call Social Security Numbers an artificial key; others call them natural.

    Some schools treat Artificial Key and Surrogate Key as the same thing.

    Some schools give Artificial and Surrogate keys slightly different meanings:
    1) an artificial key is displayed to the user;
    2) a surrogate key is hidden;
    otherwise they are identical.
    A bullshit difference: how do you know, at database-design time, what the final interface will display to the user?

    Yes, "artificial key" is used to define "surrogate key" in a lot of books, but given the dispute over the term artificial, the safer term would be "generated".

    https://developer.teradata.com/blog/mtmoura/2011/12/lets-talk-about-surrogate-key-generation
    The hashing algorithm will generate the same surrogate keys on different Teradata systems but key collisions can happen.

    –Everything in life is likely to change, and your perfectly great natural candidate for a PK can suddenly produce a duplicate and blow your head off.–

    Reality: this statement is right up to a point but gives the wrong point of view.
    –Everything in life is likely to change, and your perfectly great natural candidate or surrogate generator for a PK can suddenly produce a duplicate and blow your head off.–
    is the correct statement. Your processing code should be designed to prevent either event, so both natural and surrogate keys need protection against, and detection of, duplicates. If you don't have that, you have the mother of all screw-ups waiting to happen.
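    As one hedged illustration of what "protection plus detection" can mean in practice, assuming PostgreSQL 9.5 or later for the ON CONFLICT clause (the table is invented):

        -- Protection: the PRIMARY KEY constraint rejects a duplicate key, whether
        -- the value was naturally sourced or produced by a generator.
        CREATE TABLE clients (
            client_number integer PRIMARY KEY,
            name          text NOT NULL
        );

        -- Detection: ON CONFLICT (new in PostgreSQL 9.5) lets the application see
        -- and handle a collision from a misbehaving generator instead of crashing.
        INSERT INTO clients (client_number, name)
        VALUES (1001, 'Example Pty Ltd')
        ON CONFLICT (client_number) DO NOTHING;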

    Deaf Spy, this is why I say moron. Experienced software engineers know better.
    Surrogate, natural or artificial, you can always have the nightmare of a duplicate turning up one day. Presuming that a surrogate will never be a duplicate is the mother of all screw-ups waiting to happen.

    Presuming that the person talking about surrogates is using the same definition as you is another mother of all screw-ups waiting to happen. You are far better off saying "primary key generated by" or "primary key sourced by".

    Naming the generation method gives a clue how much duplication resistance what you call a surrogate key actually has.

    –Changing your PK is not a minor change.– Please note that changing what something is called is not changing the PK. Adding a table so that the added table becomes the source of PK numbers, instead of a natural source, does not change the PK either.

    I added a table with client number and card number. Did I say that ClientNumber in that table was PK- or FK-linked? What changed the meaning was adding a stand-alone table that generates ClientNumber values.

    –A key made from data that exists outside the current database.–
    This is the stupid part. If your source for the information that fills in an entry is a table in the database, even an unlinked one, the key is no longer natural, so it is a surrogate key.

    Worse, on the surrogate side: if I have an in-database generator that returns the value 1 every time, it is still a surrogate key. If I have not checked the generator for duplication, stiff biscuits. No matter what key source you use, duplicate values are always a nasty possibility. Mitigation against duplicate-value events in primary keys is a key part of secure and stable databases.

    The problem with using the term "surrogate key" is that it tells you absolutely nothing useful. A primary key fed by a generator that always returns 1 is clearly broken, but you could still describe it as a surrogate key.

  56. DeafSpy says:

    If you get uniqueness wrong, get ready for the mother of all screw-ups. If your database does not have enough flexibility for the client's needs, that is also the mother of all screw-ups.
    I must give it to you, Fifi, these two statements are quite correct. Congratulations!

    Now, sadly, your prior statements are, hm, braindead.

    At this point you should wake up and realize that Natural and Surrogate are bullshit academic concepts.
    On the contrary, Fifi. While academics may find it very entertaining to theorize on the topic, practitioners are vividly interested in the concepts. DBAs are generally inclined to bet on natural keys to save space (with exception of boundary cases like clustered primary indices). Software engineers, on the other hand, generally prefer surrogate keys because of the exact reason even you managed to get right. Everything in life is likely to change, and your perfectly great natural candidate for a PK can suddenly produce a duplicate and blow your head off.
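    To make the distinction concrete, a minimal sketch in PostgreSQL syntax; both tables are invented for illustration:

        -- Natural key: the business identifier itself is the primary key.
        CREATE TABLE product_natural (
            sku         text PRIMARY KEY,   -- must stay unique and stable forever
            description text NOT NULL
        );

        -- Surrogate key: a database-generated value is the primary key, and the
        -- business identifier is kept unique separately, so it can change later.
        CREATE TABLE product_surrogate (
            product_id  bigserial PRIMARY KEY,
            sku         text NOT NULL UNIQUE,
            description text NOT NULL
        );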

    Minor changes in database design change whether something is called Natural or Surrogate. If you spend time worrying about whether something is Natural or Surrogate, you are not spending your time worrying about the most important things: uniqueness and flexibility.
    Changing your PK is not a minor change, Fifi. Don’t you know that data(bases) outlive any software out there? Decisions on PKs are among the most important decisions you need to make.

    CustomerNumber is the next one there put up as natural. Add one extra table to that database design containing CardNumber with CustomerNumber, make the CardNumber the value staff use, and CustomerNumber becomes a surrogate key.
    And why would anyone add an extra table, oh Wiseman of the Bush? To make a bad problem worse?

  57. DeafSpy says:

    I doubt it, Fifi. Deaf Spy is an academic, with the occasional foray into consultancy.

    I would like to make a small correction. In addition to the above, I have almost 20 years of professional software development, paid for by clients. Though I have coded very little in the last few years.

    But, God help me if I have to seek consultancy from Fifi. If such time ever comes around, please shoot me and throw me in a nearby well.

  58. oiaohm says:

    Really, I will answer Deaf Spy about natural and surrogate keys.
    http://www.agiledata.org/essays/keys.html
    For most people, using those two terms is how you end up with the mother of all screw-ups.

    Note that the link puts SocialSecurityNumber under Natural. The problem is that yes, it is naturally sourced, but when Social Security Numbers can come from different countries it is not truly guaranteed to be unique. So it is not usable as a primary key unless you happen to be a government department, or equivalent, where every client must have one of your Social Security Numbers.

    CustomerNumber is the next one there put up as natural. Add one extra table to that database design containing CardNumber with CustomerNumber, make the CardNumber the value staff use, and CustomerNumber becomes a surrogate key.

    At this point you should wake up and realize that Natural and Surrogate are bullshit academic concepts. Minor changes in database design change whether something is called Natural or Surrogate. If you spend time worrying about whether something is Natural or Surrogate, you are not spending your time worrying about the most important things: uniqueness and flexibility. If you get uniqueness wrong, get ready for the mother of all screw-ups. If your database does not have enough flexibility for the client's needs, that is also the mother of all screw-ups.

  59. oiaohm says:

    DrLoser, MS SQL Server has a table limit of 32,767 including views. I have hit that porting an Oracle database. Of course Microsoft is careful to always quote the limit in terms of database physical size instead.

    Sorry, DrLoser, this has become a joke; since you just want to spam posts, I will not be posting any more answers to you here.

  60. DrLoser says:

    I have another fun thing. One of SQL Server's selling features is that it can import Excel. Here is the super fun part: do you know what banking and other financial groups create as Excel files to give to customers? CSV files ending in .xls or .xlsx. So in reality one of SQL Server's selling features is broken. Why CSV? Because CSV cannot carry a macro language.

    Expound further on this topic, please, oiaohm.
    This is going to be fun.

  61. DrLoser says:

    Droping views tablets happens quite a bit. Its performance optimization that you setup on Orcale, Mysql, Postgresql and DB2 that you don’t use on SQLServer mostly because you have limited number of tablets to play with.

    Friday came very early this week, didn’t it, Fifi? “Because you have limited number of -tablets- tables to ‘play with'”?

    Bwahahahahahaha!

  62. DrLoser says:

    CSV is flabby, and nothing can change that. I repeat: today, every sane data-exchange infrastructure is based on JSON or XML. Period.

    Simply for purposes of titillating the humanoids, I shall point out that this is an incorrect statement.
    The incorrectness derives from the fact that the author of the statement does not have English as his first language. The term “flabby” is therefore misleading.
    Substitute one of the following: fatuous, lossy, lacking in referential transparency, arbitrary, perfect for a Pascal database, of practically zero expressive power, rancid, the lowest common denominator, lexically very annoying indeed, very popular with dimwits who think that awk is an acceptable filtering program in 2015, as opposed to being a largely extinct bird, a plaything for the likes of Fifi
    I can add to this rich thesaurus.
    Sorry, kids, but if you’re using CSV for any sort of data persistence these days — welcome to the 1990s!

  63. DrLoser says:

    So a client is asking you a question so you are asking me Deaf Spy.

    I doubt it, Fifi. Deaf Spy is an academic, with the occasional foray into consultancy.
    First of all, I’m pretty sure he has all the answers — otherwise, as an academic, he would simply be embarrassing himself.
    Secondly, as usual, you are evading the question because you are well aware that anything you say will show you up as an incompetent buffoon.
    And thirdly, you can short-circuit the whole thing! You can, you can, you can, Fifi! You don’t have to answer the question at all! You just have to answer the following question:

    When was the last time a client asked you to design a new relational database for a small CRM application?

    See, Fifi, the details about primary keys and so on do not matter. The fact is, you have never done it, have you?

    And it doesn’t matter if the CRM application was small.
    And it doesn’t matter if the application was CRM.
    And it doesn’t matter if the relational database was “new.”
    And it doesn’t matter if the database was “relational.”

    You just don’t have any experience in any of that at all, do you?

    Now, I am a fair man. Proof over the internet is a known hard problem. Here’s a way out for you.

    The last time you received a check for anything remotely resembling any of that from a client —

    How much were you paid?

  64. oiaohm says:

    –CSV is flabby, and nothing can change that. I repeat: today, every sane data-exchange infrastructure is based on JSON or XML. Period.–
    I missed this: the super-moron called CSV flabby. Uncompressed JSON and XML containing the same data are always bigger than a CSV file containing that data.

    One of the reasons why CSV stays in usage is compactness.

  65. oiaohm says:

    –Tell me, Fifi. If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?–
    So a client is asking you a question, so you are asking me, Deaf Spy?

    –CSV is flabby, and nothing can change that. I repeat: today, every sane data-exchange infrastructure is based on JSON or XML. Period.–
    http://www.w3.org/TR/csvw-ucr/
    Start reading, Deaf Spy. This site contains example after example of records only provided in CSV format. The rules for submitting to particular governments around the world require the format to be CSV. Getting transaction records from banks: again, CSV.

    There is a lot of data-exchange infrastructure based on CSV.

    The fact that you don't know this, Deaf Spy, means you are a moron of morons, so there is no point in me answering your questions.

  66. Deaf Spy says:

    Tell me, Fifi. If you design a new relational database for a small CRM application, will you use natural or surrogate primary keys, and why?

  67. Deaf Spy says:

    Defaults work for me.
    Because you happen to live in a place where the decimal or thousands separator is not a comma.

    CSV is flabby, and nothing can change that. I repeat: today, every sane data-exchange infrastructure is based on JSON or XML. Period.

    oiaohm wrote, “There have been commercial versions of PostgreSQL for quite a while that have been able to keep up in performance with SQL Server. The open-source version of PostgreSQL has lacked those patches.”

    Still, the differences in performance have been comparable to differences in storage/networking and such, less than an order of magnitude. Paying for Oracle or M$’s licences has been a great waste. Why pay M$ or Oracle for the hardware you own???! You don’t pay more for a book because you are a speed-reader!

  69. oiaohm says:

    Robert Pogson, read that Red Hat test again. Notice it says “Postgres Plus”, not PostgreSQL; in other words, an EnterpriseDB custom-patched version with the locking fixed.

    There have been commercial versions of PostgreSQL for quite a while that have been able to keep up in performance with SQL Server. The open-source version of PostgreSQL has lacked those patches, so it has only been a matter of time before PostgreSQL and SQL Server reached the same performance.

    I have another fun thing. One of SQL Server's selling features is that it can import Excel. Here is the super fun part: do you know what banking and other financial groups create as Excel files to give to customers? CSV files ending in .xls or .xlsx. So in reality one of SQL Server's selling features is broken. Why CSV? Because CSV cannot carry a macro language.

  70. dougman wrote, “Isn’t that the ALWAYS the case?”

    It’s an old trick M$ developed in the days of DOS, mislead folks into needing M$’s OS by messing with “foreign” applications.

  71. dougman says:

    Isn’t that the ALWAYS the case? XXXX beats out M$ on same hardware with different OS.

  72. oiaohm wrote, “Postgresql was slow”.

    Not slow enough to matter. Red Hat reported that PostgreSQL beat M$ years ago on same hardware with different OS.

  73. oiaohm says:

    Robert Pogson, performance becomes important when, as with PostgreSQL, the worst-case difference worked out to PostgreSQL being 8 times slower, so 8 times the hardware was required to perform the same task. 9.4 improved speed quite a bit, down to about 4 times; 9.5 moves it to roughly 1 to 1. This was the problem: PostgreSQL was slow. It is no longer slow. Yes, the cost of that poor performance was considerable.

  74. oiaohm says:

    Deaf Spy, CSV is a banking information-exchange standard used in many countries, you moron.
    https://help.xero.com/au/BankAccounts_Details_ImportTransCSV
    Yes, you might have moved on to using JSON or XML,
    but the world really has not moved off CSV for everything yet.

    So CSV import working is kind of important, particularly when what you normally import as CSV is financial records. Claiming that you don't need CSV is a pure moron statement; it is more like "hey, we will excuse SQL Server for not supporting something key."
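    For comparison, this is roughly all the PostgreSQL side of that job takes; the table and file name are illustrative, and \copy is psql's client-side variant of the server-side COPY command:

        -- Staging table for a bank's transaction export.
        CREATE TABLE bank_transactions_staging (
            tx_date     date,
            description text,
            amount      numeric
        );

        -- Load the CSV; COPY ... FROM on the server accepts the same options.
        \copy bank_transactions_staging FROM 'transactions.csv' WITH (FORMAT csv, HEADER true)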

    –DROP TABLE IF EXISTS–
    ==Yeah, because you drop tables you don’t know about every day. Riiiight.==
    Droping views tablets happens quite a bit. Its performance optimization that you setup on Orcale, Mysql, Postgresql and DB2 that you don’t use on SQLServer mostly because you have limited number of tablets to play with. So a person who has only used SQL Server would not understand this usage. Unlimited tables means you will create tables as caches. And what about the OUTPUT clause creating tables, mentioned later, that you have to clean up?

    –PostgreSQL supports DROP SCHEMA CASCADE… This is very, very important for a robust analytics delivery methodology–
    ==Good. All you need in SQL Server is a simple script you can write in a few minutes. ==
    Please provide a demo script and I will rip it to bits. Please allow for SQL Server being in active use, with views and other objects being created off the table while you are trying to destroy it. Welcome to race-condition script failure. DROP SCHEMA … CASCADE avoids that race condition (see the sketch below).
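    For context, the one-statement form being defended looks like this in PostgreSQL (the schema name is illustrative); the alternative is generating and running an individual DROP for every object:

        -- Remove an analytics schema and everything that depends on it in one
        -- transactional statement, then recreate it empty.
        BEGIN;
        DROP SCHEMA IF EXISTS analytics CASCADE;
        CREATE SCHEMA analytics;
        COMMIT;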

    –In PostgreSQL, you can execute as many SQL statements as you like in one batch–
    ==Yes, you can (gee I wonder how DBs are created from single scripts). The author is a clueless moron.==
    An SQL standard schema is a single script. Sorry, you are the clueless one here again. Quite a few databases are created from a single file that happens to contain scripts.

    –PostgreSQL supports the RETURNING clause, allowing UPDATE, INSERT and DELETE statements to return values from affected rows. This is elegant and useful. MS SQL Server has the OUTPUT clause, which requires a separate table variable definition to function. This is clunky and inconvenient and forces a programmer to create and maintain unnecessary boilerplate code.–
    ==Yeah, right. Jim likes apples, John likes oranges. Therefore, John is a moron.==
    No, Deaf Spy is the moron. Notice you said earlier that dropping tables would not be done often, but to use the OUTPUT clause into a table variable, as the article describes, you have just created a table object you need to dispose of. That is why the earlier point about needing to delete tables mattered. PostgreSQL's RETURNING hands the values straight back, while SQL Server's OUTPUT into a table is not cleaned up automatically. Yes, this is another case where getting the SQL Server code wrong leads to race conditions.
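    The RETURNING clause under discussion, in a minimal self-contained PostgreSQL sketch:

        CREATE TABLE notes (
            note_id bigserial PRIMARY KEY,
            body    text NOT NULL
        );

        -- Insert and get the generated key back in the same statement,
        -- no separate table variable needed.
        INSERT INTO notes (body) VALUES ('hello') RETURNING note_id;

        -- RETURNING also works on UPDATE and DELETE, e.g. returning what was removed.
        DELETE FROM notes WHERE body = 'hello' RETURNING note_id, body;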

    –PostgreSQL supports $$ string quoting–
    That is a SQL standard thing. It is one of the areas where SQL Server is not standard.

    ==In MS SQL Server, you can either use the lumpy, slow, awkward T-SQL procedural language==
    –Slow? Even you, Fifi, admitted SQL Server is faster.–
    Sorry, but the statement that T-SQL is slow is correct. I said SQL Server has been faster overall; compared to PostgreSQL 9.5 that is no longer true.
    https://wiki.postgresql.org/wiki/PGStrom
    Compared to PostgreSQL 9.5 maxed out on speed, SQL Server is insanely slow.

    It turned out the PostgreSQL developers had optimized the SQL language processing extremely well and missed the fact that locking on data access was blocking everything. The reality is that Deaf Spy, like a moron, presumed that because SQL Server could benchmark faster than PostgreSQL, SQL Server was faster than PostgreSQL at everything. That is completely bogus.

    Deaf Spy, your score for correctly attacking the document is exactly zero, so I did not need to pick out a single item: every one of your attacks was wrong, as usual.

    http://blog.jooq.org/2014/06/09/stop-trying-to-emulate-sql-offset-pagination-with-your-in-house-db-framework/
    This is another one that gets really fun: something so simple in PostgreSQL.

    This last link should show you why T-SQL is dead slow: you spend so much time on extra operations to emulate one simple operation in PostgreSQL.
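    A reduced illustration of the linked article's point, in PostgreSQL syntax with an invented posts table; the first query is the "simple operation", the second is the seek method the article recommends instead:

        CREATE TABLE posts (
            post_id bigserial PRIMARY KEY,
            title   text NOT NULL
        );

        -- OFFSET pagination: page 1,001 still makes the server walk past 10,000 rows.
        SELECT post_id, title
          FROM posts
         ORDER BY post_id
         LIMIT 10 OFFSET 10000;

        -- Keyset ("seek") pagination: continue from the last key seen; the index
        -- does the work and the cost stays flat as you page deeper.
        SELECT post_id, title
          FROM posts
         WHERE post_id > 10010          -- last post_id from the previous page
         ORDER BY post_id
         LIMIT 10;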

  75. oiaohm wrote, “the only place SQLServer has been winning is performance.”

    There are many instances where performance is the determining factor but for everyone else PostgreSQL or MySQL work very well. It’s price/performance that matters often, not just performance. It’s like Intel v AMD. There’s a reason I use AMD rather than Intel. I bought my last Intel processor in the middle 1990s. I’m only stuck with Intel in this house because someone bought a few while I was away in the bush. For a while, after AMD64 happened, AMD could command ~$1K for a state of the art processor. Later, I could easily buy a decent AMD processor that would idle most of the time for half the price of an Intel processor that would idle most of the time, say $200 v $400. So, there was no merit in buying Intel. Same with PostgreSQL. If it’s good enough for what folks do, there’s no need to pay to be M$’s slave. The performance of such databases is mostly dictated by the hardware, rather than the software. If the bandwidth to storage is the limiting factor and you have a huge database running on SATA rather than RAM, what difference does it make what database software is in use? Then, consider licensing. PostgreSQL wins easily.

  76. Deaf Spy wrote, “Do you see how quiet he has been since his own source made fun of his point that smarties will replace laptops?”

    Been ill for a week. Not enough energy to mess with the blog, the weeds, the tomatoes, etc. My favourite doctor is now on vacation so I’m using a backup… Damn! Hunting season approaches.

  77. Deaf Spy, throwing out all kinds of strawmen, wrote, “how do you handle regional formatting properly in CSV unless both sides agree in advance on it?”

    Why do you care, if you are not a big organization, just a SMB or individual? I use CSV all the time. It’s a quick, easy way to move tabular data around. The user gets to choose formats if he wants. Defaults work for me.

    Quoting Wikipedia, “CSV is a common data exchange format that is widely supported by consumer, business, and scientific applications. Among its most common uses is moving tabular data between programs that natively operate on incompatible (often proprietary and/or undocumented) formats.”

    The question you should ask is why the Hell doesn’t MS_SQL support csv?

    Quoting TFA, “MS SQL Server can neither import nor export CSV. Most people don’t believe me when I tell them this.”

    Yeah, that’s about right. M$, protecting the monopoly again… For folks using M$ anywhere in their empire, life is easier to use M$ everywhere because they are locked in.

  78. Deaf Spy says:

    Fifi, your reading comprehension skills leave a lot to be desired.

    I did read your http://www.pg-versus-ms.com/. It is total bullshit. For a change and entertainment, I will go quickly through the first couple of points:

    1.1. CSV support
    “CSV is the de facto standard way of moving structured (i.e. tabular) data around. “

    This is the stupidest thing I’ve read for a while. CSV is so fragile, that no one with IQ > 100 will use it. For example, tell me, Wiseman of the Bush, how do you handle regional formatting properly in CSV unless both sides agree in advance on it? If you are going to exchange data, you have JSON or XML. This is what the world uses today.

    1.2. Ergonomics
    DROP TABLE IF EXISTS
    Yeah, because you drop tables you don’t know about every day. Riiiight.

    PostgreSQL supports DROP SCHEMA CASCADE… This is very, very important for a robust analytics delivery methodology
    Good. All you need in SQL Server is a simple script you can write in a few minutes. If you have dependencies in your analytics schema, you’ve already blown it, because analytics schemas are supposed to be denormalized. A fosstard afraid of scripts? And the same guy says later on: “PostgreSQL can be driven entirely from the command line”. Stupid, I tell you.

    In PostgreSQL, you can execute as many SQL statements as you like in one batch
    Yes, you can (gee I wonder how DBs are created from single scripts). The author is a clueless moron.

    PostgreSQL supports the RETURNING clause, allowing UPDATE, INSERT and DELETE statements to return values from affected rows. This is elegant and useful. MS SQL Server has the OUTPUT clause, which requires a separate table variable definition to function. This is clunky and inconvenient and forces a programmer to create and maintain unnecessary boilerplate code.
    Yeah, right. Jim likes apples, John likes oranges. Therefore, John is a moron.

    PostgreSQL supports $$ string quoting
    So what. How many times exactly do you need to include quotes in your SQL scripts? Haven’t these guys heard of constants as a concept? Gosh, this is getting more and more pathetic.

    In MS SQL Server, you can either use the lumpy, slow, awkward T-SQL procedural language
    Slow? Even you, Fifi, admitted SQL Server is faster.

    And, the cherry:
    PostgreSQL can’t do this. I wish it could, because there are an awful lot of uses for such a feature.
    PostgreSQL can’t do many other things, too. But hey, don’t let this spoil the great article.

    Now, Fifi, it is time you pick a single item, spill out a wall of gibberish, and go into something totally irrelevant.

  79. oiaohm says:

    Of course Deaf Spy cannot follow links.
    http://www.pg-versus-ms.com/
    The first link there is the spec for CSV, and the fourth link is a to-spec CSV file to download that does not import correctly into SQL Server but imports correctly into Oracle DB, MySQL and PostgreSQL. Keep going down the page and there is link after link to issue after issue affecting SQL Server, items that MySQL, Oracle and PostgreSQL handle without problems.

    Sorry, Deaf Spy, I provided quite a good link; you are just too much of an idiot to read it effectively.

    By the way, pointing at one case of Google cloud failure does not change the fact that Azure has had more failures than Google's cloud. All cloud providers have failures sooner or later.

  80. Deaf Spy says:

    dougman linked to some stats on cloud’s reliability…

    Cloud’s reliability… Like this one?
    https://status.cloud.google.com/incident/compute/15056#5719570367119360

    Clouds trashing clouds. Google’s clouds. 🙂

  81. Deaf Spy says:

    http://www.pg-versus-ms.com/
    Please, Fifi. This hurts. So much incompetence and FUD. Every single statement shows a total lack of knowledge about SQL Server. If it were Doggie, fine, but this is a low standard even for you.

    Go find something better, little one. Even if you do, you won’t hide away your shame for quoting a reference which is a proof against your own point.

    Take a note from Pogson. Do you see how quiet he has been since his own source made fun of his point that smarties will replace laptops? 🙂

  82. oiaohm says:

    http://www.pg-versus-ms.com/
    PostgreSQL vs SQL Server comparisons have been done for years. The only place SQL Server has been winning is performance, and that is changing fairly quickly.

  83. DrLoser says:

    Basically anyone recommending Alcohol as a brain improver is a moron.

    I see what you mean, me old slightly off fruit.

  84. oiaohm says:

    –Which in your case was a reply to Phenom, not to me.–
    Interesting that you think that post was Deaf Spy's. So either you are drunk and cannot read, DrLoser, or you are confirming that Deaf Spy is a fake identity, or are you just guessing wildly?
    –Paraquat works, too. In small dosages. A spoonful of sugar helps it go down.–

    Truck drivers' allowed blood-alcohol level is 0.02, mostly because food can at times give you a blood-alcohol reading, like eating slightly-off fruit. But even at 0.01 there are studies showing impaired judgment.

    Basically anyone recommending Alcohol as a brain improver is a moron. Please note, DrLoser, that you were asked to provide a reference. One of the reasons I do not provide DrLoser with references when he asks is that when references are asked of DrLoser, they are never provided. And DrLoser's comment on alcohol here is an uninformed idiot response, which is fairly normal for him. Yes, DrLoser, you like going around asking everyone else for cites and then saying stuff yourself without any.

  85. DrLoser says:

    I agree DrLoser, but do you have a reference that backs this up? I need to have something to show to the police officer doing the DUI when I tell this excuse. And how do you define small?

    “Small,” as in “small enough to allow me enough brain cells to recognize the person to whom I am replying,” Kurks.
    Which in your case was a reply to Phenom, not to me.
    Paraquat works, too. In small dosages. A spoonful of sugar helps it go down.

  86. kurkosdr says:

    Small quantities of alcohol, well-tuned, do miracles to the brains.

    I agree DrLoser, but do you have a reference that backs this up? I need to have something to show to the police officer doing the DUI when I tell this excuse. And how do you define small?

  87. YY says:

    The funny thing is that MS Windows 10 has been announced as the only and last Windows.
    Nobody realized the true meaning of this prediction, and it may well be that Nadella has a much, much deeper sense of the future than Gates!

  88. oiaohm says:

    One of the biggest tells of an idiot is a person who comes up with the idea that two cars cannot be compared, as an excuse to dig themselves out of a hole.

    There is a really fun fact about PostgreSQL: the block size it uses can be changed, so the 32 TB per-table limit can be raised to 128 TB.
    SQL Server: 16 TB per file.
    PostgreSQL: 32 TB per file at the default 8 KB block size, or 128 TB per file with a custom-built 32 KB block size.

    PostgreSQL properly supports partitioning.
    http://www.postgresql.org/docs/current/static/ddl-partitioning.html
    Yes, it is hell to set up (see the sketch below).
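    The "hell to set up" refers to the inheritance-based partitioning of the 9.x era that the link describes; a trimmed sketch with an illustrative measurements table:

        -- Parent table holds no data itself; child tables carry CHECK constraints
        -- so the planner can skip partitions (constraint exclusion).
        CREATE TABLE measurements (
            logged_at timestamptz NOT NULL,
            device_id integer     NOT NULL,
            reading   numeric     NOT NULL
        );

        CREATE TABLE measurements_2015 (
            CHECK (logged_at >= '2015-01-01' AND logged_at < '2016-01-01')
        ) INHERITS (measurements);

        CREATE TABLE measurements_2016 (
            CHECK (logged_at >= '2016-01-01' AND logged_at < '2017-01-01')
        ) INHERITS (measurements);

        -- A trigger (or rule) is still needed to route INSERTs on the parent
        -- to the right child; that routing boilerplate is the painful part.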

    SQL Server's claimed 524,272 TB table is really 16 TB files × 32,767 in a virtual table. In other words, a SQL Server that hits the maximum size of 524,272 TB is stuffed. PostgreSQL's limit is far beyond that, because it has no table-count limit (and remote modes allow getting around single-OS limits as well).

    The point is that manual partitioning in PostgreSQL can join more than 32,767 partitions into a virtual table, and the individual table size in PostgreSQL is double SQL Server's even in default mode.

    I guess Deaf Spy never guessed that PostgreSQL's maximum size per virtual (partitioned) table is unbounded. Yes, the 32 TB limit per table in PostgreSQL applies to single-file tables, not to a partitioned virtual table.
    PostgreSQL specs:
    Max tables: unlimited.
    Max table size in a single file: 32 TB at the default 8 KB block size, or 128 TB at a 32 KB block size.
    Max number of tables that can be joined into a virtual table using partitioning: unlimited.
    So the maximum PostgreSQL virtual table size is unlimited.

    Interesting, right? SQL Server has one advantage behind its falsely large claimed table size: automatic partitioning of a virtual table. The problem is that this may not be an advantage at all. A single PostgreSQL server can host more data than SQL Server.

    SQL Server vs PostgreSQL is very much a Mini (the car) vs a tank. SQL Server is the Mini and PostgreSQL is the tank whose usage can be tricky. The only reason the PostgreSQL tank has not flattened the SQL Server Mini is that, until now, the Mini has been faster.

    This is the problem when people start claiming SQL Server scales. In reality, PostgreSQL scales insanely.

    A lot of people miss that 32,767 is the maximum number of tables SQL Server can host, because a table consumes at minimum one file. SQL Server is small fry pretending to be big fry.

    MySQL also has partitioning and virtual tables, and since MySQL has no hard limit on ordinary tables, its maximum virtual table size is also unlimited. Again, scalability goes to MySQL ahead of SQL Server.

    The super fun fact is that both MySQL and PostgreSQL out-scale SQL Server and Oracle DB.

  89. dougman linked to some stats on cloud’s reliability…

    Google: storage – 5 nines and an 8
    Azure: storage – 4 nines and a 0
    Google: Compute – down 9.42h
    Azure: Compute – down 42.56h

    Yeah. Google’s cloud is better than what a lot of SMBs get from a server or two. M$’s is much worse, assuming folks use GNU/Linux. My Beast just lost a hard drive and it didn’t go down. It’s been down only for a new kernel reboot every couple of weeks… probably less than 15 minutes’ down time. Thank Me. I use GNU/Linux. Would I prefer the performance M$ offers? Nope.

  90. Eli Cummings wrote, “It takes a long time for a giant tree to return to the soil. It can stand dead for years.
     
    Microsoft had its day in the sun when it cast its shade on everything around it.”

    I do not accept comparing M$ to a tree. A tree is one of Nature’s most beautiful creatures. I’ve invited many Bur Oaks to come to live in my yard even though I will be dead before they’re in their prime. M$ is an evil product of man’s greed.

  91. oiaohm says:

    Deaf Spy, from a technical-standards point of view PostgreSQL is ahead of SQL Server. I would guess you think SQL Server is the Volvo, when it is the reverse. You can compare cars on technical features.

    Really, there are sites that compare cars on technical features, just as there are sites that compare databases on technical features.

    Sorry, there is no point telling me to go get a drink. All you proved is that you are a complete idiot. Yes, you can compare a Dacia to a Volvo without problems. Really, SQL Server is the Dacia and PostgreSQL is the Volvo: PostgreSQL has been highly dependable and driven slowly, driving everyone else nuts. Exactly like a Volvo and its drivers.

  92. Deaf Spy says:

    Well, now, Fifi. It is someone else’s fault that you cannot give a proper reference.

    Whatever gibberish you spill here can't change the fact that your original claim of PostgreSQL being a better relational database than SQL Server is as stupid as comparing a Dacia to a Volvo. All your text is totally irrelevant.

    Try harder, man. Have a drink. Small quantities of alcohol, well-tuned, do miracles to the brains.

  93. oiaohm says:

    How many times do I have to tell you not to rely on a single page before commenting? It's how to make an idiot of yourself, Deaf Spy.

    The site showing PostgreSQL ranked 5 with a score of 281.86 reflects the old 9.4, not the new 9.5.
    http://db-engines.com/en/ranking
    In fact, it would have paid to read the ranking methodology before quoting rankings.

    Number 2 is in fact MySQL, but it lacks a stack of functionality.
    Notice SQL Server has slipped by 133.84 points in 12 months. SQL Server had a ranking like Oracle's back in 2013 and has been slipping ever since.

    Sorry, Deaf Idiot, please stop posting on stuff you have no clue about. The ranking says you should use MySQL ahead of SQL Server, yet you have been raging against MySQL as well. SQL Server has lost its means to compete against MySQL or PostgreSQL, and on the trend of the last two-plus years it will keep dropping. Over two years PostgreSQL has been growing, even though its performance has been horrible for those two years.

    And did you read the ranking definition? Because PostgreSQL is marketed under many different names, it is going to be undercounted; notice that in the rankings Postgres-XL is counted separately from PostgreSQL. Also, because the score partly counts what is on résumés, it over-counts what was historically used rather than what is currently used.

    https://wiki.postgresql.org/wiki/PostgreSQL_derived_databases
    Deaf Idiot, if you know the PostgreSQL-derived databases and add them all up in the rankings, it is more like a score of 500: there is over 250 points' worth in the derivatives. If both follow their trends of the last two years, PostgreSQL will be at 50% of SQL Server in 12 months and they will pass each other within about three years, as long as you count PostgreSQL and its derivatives as one. Even so, I still would not trust the ranking numbers from that site; I brought it in because it has a very good feature-list comparison.

  94. Deaf Spy says:

    Fifi, have you completely lost the only brain cell that would somewhat function in that marvelous skull of yours?

    Your own source says: SQL Server, Rank 3, Score 1108.66. PostgreSQL, Rank 5, Score 281.86. PostgreSQL is behind by a factor of almost four.

    And I do hope, for your own sake, that you don’t actually mean Azure DocumentDB. Comparing a document storage with a relational database is like comparing a hammer with a screwdriver.

  95. oiaohm says:

    dougman, to be correct: if you are depending on the cloud you should not depend on a single provider, because no matter how good they are, they will have outages. PostgreSQL + Linux can be deployed on all cloud providers, but performance has been a problem.

    –$399 and $899– DrLoser, this is kind of the problem.

    The projected cost of an entry-level smartphone with a wireless docking station to charge it is $150 AUD. So with screen, keyboard and mouse, under $300 AUD a seat. Add in the fact that users are turning prototype Steam boxes into NAS/storage servers as well as games machines.

    Timing is everything. We have a perfect storm brewing. The funny part is that Microsoft made a video of what it thought the future would look like; that video appears to be coming true, just without Microsoft.

  96. dougman says:

    DrBing boohoo’s Chromebooks, but M$ sure is copying Google’s methods by introducing a Cloudbook. However, the ACER device would make for a decent low-cost Linux laptop; but back to M$ and irrelevancy.

    http://arstechnica.com/gadgets/2015/08/acers-cloudbooks-are-windows-10-laptops-starting-at-170/

    School procurement of iPads has mostly stopped; schools have gone totally with Chromebooks. These are kids who will grow up using a non-Microsoft device, and to them Windows as an OS will be of little use.

    So let's review PC History 101: For the two decades through 2005, the personal computer was the only game in town, selling about 200 million units a year. But then smartphones and tablets came along and now they dwarf the PC market entirely. This shift in personal computing device adoption, meanwhile, has radically diminished the power of the Windows operating system platform. As recently as six years ago, Microsoft’s Windows was still totally dominant, the platform that ran 70% of personal computing devices. Now, thanks to the rise of Google’s Android, Windows’ global share has been cut in half, to about 30%. More remarkably, Android is now a bigger platform than Windows. See where this is going?

    Here is a dated article, but still relevant to discussion.

    “All these things will get consumers to look for the OS and apps that can give them all that,” Milanesi says. A key problem for Microsoft is that it is the people who don’t yet own PCs – in emerging markets such as Africa and China – who are most likely to have a smartphone and tablet as their first “computer”. Milanesi says: “They’re starting with a smartphone, not a PC, so when they’re looking for something larger, they look at something that’s a replacement smartphone experience – which is a tablet or ultra mobile device. And Android or [Apple’s] iOS are the two that they’re looking at.”

    http://www.theguardian.com/technology/2013/apr/04/microsoft-smartphones-tablets

    Here is another one, worthy of study.

    “Microsoft has been a monopoly with one game plan, leverage what they have to exclude competition. If someone had a good idea, Microsoft would come out with a barely functional copy, give it away, and shut out the income stream of the innovator. Novell, Netscape, Pen, and countless others were crushed by this one dirty trick, and the hardware world bowed to Redmond’s whims.

    The company sucked the life and innovation out of the industry for so long that eventually no one innovated because it was pointless, if the idea was good, Microsoft would end it. Ask Gateway about doing something as basic as making the initial desktop and installation process more user-friendly. Microsoft killed them for the sin of trying to make the user experience better. Everything stagnated as a result of this misuse of monopoly power.

    Then came search and Google. Microsoft missed this one but Google couldn’t be shut out because they weren’t on a PC. There were no vendors to threaten if they carried Google, and no way to exclude them. Microsoft tried to copy with a second rate search strategy. It failed. They re-branded. It failed again. They tried bribing users, that failed too. Re-branding again with several other ‘brilliant’ by Microsoft standards ploys later, Bing still loses billions of dollars a year. If it wasn’t for buying Yahoo’s users outright, Bing would have a user base that is essentially zero. Microsoft failed in search.

    Similarly with Linux, Microsoft just made sure that no OEM could bundle it with PCs, any that tried paid a high price. It was shut out. On the datacenter side however, Microsoft couldn’t force bundle Windows Server, customers put their own software on. For some strange reason, most large datacenters balk at paying $2000+ per two sockets for something that is vastly inferior to manage, slower, more resource hungry, and completely insecure versus the free alternative.

    https://semiaccurate.com/2014/05/15/microsoft-now-irrelevant-computing-want-know/

    Finally, you find this

    “Microsoft has yet to adapt fully to the exigencies of competing in the mobile device world, and will probably need to go through a painful period of adjustment, as it develops revenue alternatives within the Win10 ecosystem.”

    http://seekingalpha.com/article/3354755-microsoft-the-long-struggle-ahead-to-grow-windows-revenue

    To summarize, M$ is in a pickle long-term: one, lack of revenue, and two, lack of mind-share.

  97. DrLoser says:

    It [Microsoft] will never have the market share it once had, only a smaller share of a larger market. Whether that’s enough time will tell.

    Then again, a Dell product range pitched between $399 and $899 on mostly inadequate hardware (dollar for dollar) is hardly going to cause the sun to go supernova, is it?
    Dream on, guys. Dream on.

  98. Eli Cummings says:

    It takes a long time for a giant tree to return to the soil. It can stand dead for years.

    Microsoft had its day in the sun when it cast its shade on everything around it.

    It will never have the market share it once had, only a smaller share of a larger market. Whether that’s enough time will tell.

  99. dougman says:

    Azure?? M$ Cloud?? what a joke.

    Any sane business would never, ever rely solely on the cloud for its business reliability. Go ahead and rely on Azure, go ahead… the bean counters will fire your ass!

    Read the stats!…

    https://cloudharmony.com/status-1year-of-storage-and-compute-group-by-regions-and-provider

  100. oiaohm says:

    http://www.howtogeek.com/199483/tablets-arent-killing-laptops-but-smartphones-are-killing-tablets/

    By the way, Deaf Spy, you have pulled in Robert's link as evidence that Robert is wrong. There is a problem: that article is from 2014, when it was noted that tablets were being pushed aside by phones. The PC market is not showing any major growth.

    http://www.wired.com/2015/02/smartphone-only-computer/

    –What happens with your notion that tablets and smart thingies are replacing PCs and laptops?–
    The only thing that has changed with the recent advancements is the idea of tablets. The current path appears to be smartphones with wireless docking and charging replacing a segment of the PC market.

    Funny, right? Your cherry-picking of one source document makes you look like an idiot, Deaf Spy. Please learn to do far more research before attacking anything.

    Smartphones with docking will not replace the whole PC market, but we do know up to 80 percent of the market does not need Windows. The fact that Android has MS Office means smartphones docking to a large screen with keyboard and mouse can replace a larger share of it.

  101. oiaohm says:

    Deaf Spy really you are being a idiot.
    https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-postgresql/
    Mircosoft Azure does postgresql hosting.
    And there is a down right good reason for it. Multi cloud services supporting postgresql so you can spread your risk. Elephantdb sets that up for postgresql users.

    Scalability sorry postgresql has it. Because it can scale between cloud service providers and the means to use multi cloud providers at the same time.
    http://db-engines.com/en/system/Microsoft+Azure+DocumentDB%3BMicrosoft+SQL+Server%3BPostgreSQL
    Functionality postgresql exceeds that of SQL Server by quite a large margin. The fact it supports more server side languages than SQL server does.

    Performance has been PostgreSQL’s problem in every comparison vs SQL Server and Oracle. That disappears with the new version 9.5.

    Deaf Spy, ever heard the name EnterpriseDB before, and their product EnterpriseDB Postgres Plus? It is basically PostgreSQL designed to replace Oracle installs. So it is Oracle or PostgreSQL that can challenge SQL Server.

    Yes, PostgreSQL 9.5 makes Oracle and SQL Server usage questionable. When PostgreSQL 9.5 has more functionality, more scalability, almost the same performance and lower costs than SQL Server or Oracle, that is a problem.

    Over the next 12 months we are going to see pressure on users of Oracle and SQL Server. Some places running Oracle have already declared they are removing all their Oracle installations and replacing them with PostgreSQL.

    Deaf Spy, like it or not, the SQL marketplace changes over time. It was only a matter of time until someone invested the money to fix PostgreSQL’s performance issues.

    Thinking that only Oracle can challenge SQL Server is foolish and wrong. Very few databases have a powerful enough SQL language to handle a 100 percent conversion of an Oracle DB. SQL Server does not have the language to handle a pure conversion of an Oracle DB. PostgreSQL-based databases are the ones that can do 100 percent conversions from Oracle.
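
    And for the spread-your-risk point further up: a minimal TypeScript sketch of the simplest possible failover between PostgreSQL endpoints hosted with two different cloud providers. The hostnames and credentials are hypothetical placeholders, and the try-the-next-one loop is only an illustration, not anyone’s production setup.

    // Spread-the-risk sketch: try a PostgreSQL endpoint on one cloud provider,
    // then fall back to a replica hosted with another provider.
    import { Client, ClientConfig } from "pg";

    // Hypothetical endpoints on two different providers.
    const endpoints: ClientConfig[] = [
      { host: "pg-a.provider-one.example.com", port: 5432, user: "app", password: "secret", database: "crm", ssl: true },
      { host: "pg-b.provider-two.example.com", port: 5432, user: "app", password: "secret", database: "crm", ssl: true },
    ];

    async function connectWithFailover(): Promise<Client> {
      for (const cfg of endpoints) {
        const client = new Client(cfg);
        try {
          await client.connect(); // succeeds on the first reachable endpoint
          return client;
        } catch (err) {
          console.warn(`endpoint ${cfg.host} unreachable, trying the next one`);
        }
      }
      throw new Error("no PostgreSQL endpoint reachable on any provider");
    }

    async function main(): Promise<void> {
      const db = await connectWithFailover();
      const res = await db.query("SELECT version()"); // any ordinary query
      console.log(res.rows[0]);
      await db.end();
    }

    main().catch(console.error);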

  102. Deaf Spy says:

    The cloud is still an infant.

    Exactly, Mr. Pogson. And MS is entering the cloud with great force and great success. The growth of Office 365 is nothing short of outstanding. At the same time, they are constantly improving SQL Server on Azure to match the on-premises offerings, and they stand to gain a unique advantage that only Oracle will be able to match.

    On the client side, MS is successfully conquering Android as a client platform. Just check out the latest ad for the Samsung S6. A good deal of the frames are dedicated to the fact that it runs… MS Office Mobile. The ad never, ever mentions Android or Google at all.

    P.S. Just a warning: please spare me the stories about how MySQL / PostgreSQL / MongoDB (God forbid) can run on a cloud. None can match the combination of functionality and scalability on a cloud that SQL Server in Azure offers.

  103. Deaf Spy says:

    Pogson, please don’t be so slimy. Let me ask you again:

    What happens with your notion that tablets and smart thingies are replacing PCs and laptops?

    The article is rebutting this myth.

  104. Deaf Spy, grasping at straws, wrote, “Net result? Money still flowing to MS. Proof? Just look at how their server division is faring.”

    If you think ~1% share should hurt M$’s bottom line, you haven’t seen anything yet. The cloud is still an infant. There’s plenty of room to expand and multiply the number of Chromebooks out there. Like thin clients, their lifetimes should be long, so the meagre sales will accumulate to quite a slice of M$’s pie.

  105. dougman says:

    Spying, you say? Yes, let’s discuss spying, shall we?

    Considering all data traffic is archived by the NSA, everyone is subject to it; there is no way around it. However, Windows 10, the best Win-Dohs ever!… snoops whether you like it or not.

    “Microsoft still tracks you, even after you harden your Windows 10 privacy to an extreme level by disabling all privacy-infringing settings.”

    https://thehackernews.com/2015/08/windows-10-privacy-spying.html

    So, to summarize, here are the insecure behaviours that M$ allows with its latest bomb:

    1) Shares your personal information with Microsoft by default
    2) Borrows bandwidth from your home Internet connection
    3) Can share your wireless password with your friends’ PCs
    4) Will continue to send information to Microsoft after you disable data-sharing settings
    5) Can scan for counterfeit games

    What’s even funnier, people are talking about starting a class-action suit, but they fail to realize that the EULA disallows it: when you agreed to use Win-Dohs 10, you agreed to the terms. M$, never shy about trumpeting its latest innovations, whether real or just vaporware, has quietly changed its U.S. end-user license agreement to forbid its customers from suing or joining class-action suits against the company. The 14th Amendment guarantees everyone the right of due process, but when it’s consumers against mighty corporations, that doesn’t mean very much these days.

    Eh.

  106. oiaohm says:

    Deaf Spy, this is what happens when an idiot attempts to attack something.
    http://www.computerworld.com/article/2486757/windows-pcs/dell-s-chromebook-11-breaks-reliance-on-google-s-cloud.html

    It depends on the Chromebook you are talking about. The Dell ones have the option of a private cloud, not using any Google servers. Of course, that is just the Dell ones. So businesses can go with Chromebooks and not have Google snooping.

    –Just look at how their server division is faring.– Funny, Microsoft includes Linux services on Azure in that figure. So the server division looking good does not mean it is products Microsoft has developed.

  107. Deaf Spy says:

    What happens when the businesses find that they can live with ChromeBooks?

    Nothing much in particular. They will be terminals, or web clients for some web-based CRM/ERP. There is no incentive to care whether the terminal opens a Windows session with real Office and Outlook behind it, or is a web client to a .NET web app with SQL Server behind it.

    Net result? Money still flowing to MS. Proof? Just look at how their server division is faring.

    Btw, I really wonder how businesses would like Google’s spying on all their users in its ongoing siphoning of data to power its search and ads. Android has already become quite infamous for its insecurity, and businesses won’t touch it with a ten-meter pole for their critical operations. It may only be good as a console to some FE portal.

  108. dougman wrote, “M$ will be the next Kodak.”

    Nope. Kodak folded. M$ will split into various software/services/holding companies. They can live forever on the cash in their piggy bank. Pity. I won’t be able to use them as an example for the grandchildren about crime not paying.

  109. Deaf Spy wrote, “Chromebooks fail to break the 1.5%. Not so nice.”

    USA is a market that uses ~20% of the world’s PCs. 1.5% share is huge. What happens when the businesses find that they can live with ChromeBooks? That share will grow. Business use of ChromeBooks is likely underreported as they are probably locked onto some company-portal all day long rather than browsing the web as consumers do.

  110. oiaohm says:

    Deaf Spy, I would say you are asking that question far too soon.

    Tablets and phones replacing the PC market is something I see happening when wireless docking becomes a common feature.
    http://www.tomshardware.co.uk/sibeam-snap-60ghz-wireless,news-49859.html

    Different companies are still working out how to make wireless docking cost-effective and power-efficient.

    The barrier to phones/tablets nuking the PC in a major way is a means to dock and provide a PC-style interface. Socket docking has always broken.

    The other question is how long before laptops don’t have any external ports. Yes, with wireless charging and docking there is no need for any ports. The reality is that ports are holes in cases that let water and dirt in.

  111. Deaf Spy says:

    From the article:

    “All these results continue to point to strong channel demand for PCs and continue to belie the notion that any other devices are threatening the long-term business case for the notebook.”

    Hmm, Pogson, what happens with your notion that tablets and smart thingies are replacing PCs and laptops?

    Btw, on your graph, Chromebooks fail to break the 1.5%. Not so nice.

  112. kurkosdr says:

    In B2B sales. Allow me to “hmm…”

    Is HTML5+CSS+JS+WebGL the “vendor neutral API” that will do to most other APIs what H.264 did to wmv, realmedia and the like? Youngsters don’t remember it, but the ubiquity of H.264 and even MPEG-4 ASP (divx/xvid) is a relatively new thing. Before those two paved over everything else, there were all kinds of proprietary codecs trying to establish lock-in.

    And HTML5 apps can be “packaged”, so that takes care of “what happens when there is no internet?”
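
    Packaging details vary (Chrome packaged apps, Cordova wrappers, and so on), but as one concrete illustration of the no-internet side, a service worker can cache the app shell so the app still loads offline. A minimal TypeScript sketch follows; the asset paths are made up for the example and are not any particular app.

    // sw.ts – compiled to sw.js and registered from the page with
    // navigator.serviceWorker.register("/sw.js").
    const CACHE_NAME = "app-shell-v1";
    const APP_SHELL = ["/", "/index.html", "/app.js", "/style.css"]; // made-up app shell

    self.addEventListener("install", (event: any) => {
      // Pre-cache the app shell while online, so it is available offline later.
      event.waitUntil(
        caches.open(CACHE_NAME).then((cache) => cache.addAll(APP_SHELL))
      );
    });

    self.addEventListener("fetch", (event: any) => {
      // Serve from the cache first; fall back to the network when online.
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });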

  113. dougman says:

    I stand by my earlier predictions:

    In five years M$ is used less, and in ten years M$ is a shadow of its former self.

    M$ will be the next Kodak.

    “Kodak first introduced cameras to the masses in 1888, and built its business into a near monopoly. In fact, a “Kodak moment” might better refer to the failure of a once-dominant business to respond to a disruptive new technology–in Kodak’s case: digital photography. Ironically, this technology was first invented by a Kodak engineer back in 1975, and was diligently developed by the company in the years to come. But rather than marketing its own digital cameras, Kodak licensed the technology to other companies, to avoid cannibalizing the lucrative traditional and film business it conducted through its international system of distributors. In the meantime, cheaper mass-market alternatives proliferated, reaching consumers through electronics retailers, big box stores and later, online. Essentially, Kodak’s competitors sidestepped the sales infrastructure that made Kodak so formidable. Today, Kodak doesn’t even make cameras, instead restructuring itself as a much smaller commercial printing company.”

    https://www.linkedin.com/pulse/wrong-kind-kodak-moment-big-banks-become-next-victims-jerry-ross

    The “digital camera” for M$ is free and open-source software. M$ prefers to license its software, but now it is trying the free-upgrade route with Windows 10; however, cheaper alternatives are side-stepping it and providing a safer, more secure solution for everyone.

  114. YY says:

    I can feel the end of M$ coming nearer and nearer, and that just makes me very happy 😀
    The world will then be free of this terrible organised crime of blackmailing money from the poor and innocent.
