An exploit technique that has been known since 2002 can crack the encryption on ASP.NET pages in a few minutes. ASP.NET returns a distinct error message when a wrong cookie is passed, and that message gives feedback to an iterative cracking process. You have to give it to M$: they know how to make users comfortable, even mis-users, malware artists and script-kiddies. The question is: are you comfortable relying on their convoluted software, which protects your systems like sponge rubber instead of armor plate with a ceramic shield?
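For the curious, here is that feedback loop in miniature. This is a toy sketch, not ASP.NET's actual code: a home-made Feistel cipher stands in for the real block cipher, and `padding_oracle()` plays the role of the server's tell-tale error message. The attack itself is the classic 2002 padding-oracle technique: one yes/no answer at a time, the attacker recovers the plaintext byte by byte without ever knowing the key.

```python
# Toy padding-oracle demonstration. Everything here is hypothetical:
# a home-made 4-round Feistel cipher stands in for the server's real
# block cipher, and padding_oracle() stands in for the distinct error
# message that leaks whether a forged cookie decrypted cleanly.
import hashlib

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def _round_keys(key):
    return [hashlib.sha256(key + bytes([i])).digest() for i in range(4)]

def _f(rk, half):
    return hashlib.sha256(rk + half).digest()[:8]

def enc_block(block, key):
    L, R = block[:8], block[8:]
    for rk in _round_keys(key):
        L, R = R, xor(L, _f(rk, R))
    return L + R

def dec_block(block, key):
    L, R = block[:8], block[8:]
    for rk in reversed(_round_keys(key)):
        L, R = xor(R, _f(rk, L)), L
    return L + R

def pkcs7_pad(data):
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def pkcs7_unpad(data):
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")   # the tell-tale error
    return data[:-n]

def cbc_encrypt(pt, key, iv):
    pt, out, prev = pkcs7_pad(pt), b"", iv
    for i in range(0, len(pt), BLOCK):
        prev = enc_block(xor(pt[i:i + BLOCK], prev), key)
        out += prev
    return out

def cbc_decrypt(ct, key, iv):
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        out += xor(dec_block(ct[i:i + BLOCK], key), prev)
        prev = ct[i:i + BLOCK]
    return pkcs7_unpad(out)

KEY = b"server-side secret"  # the attacker never sees this

def padding_oracle(two_blocks):
    """Server behaviour: a distinct answer for 'bad padding'."""
    try:
        cbc_decrypt(two_blocks[BLOCK:], KEY, two_blocks[:BLOCK])
        return True
    except ValueError:
        return False

def attack_block(prev, target, oracle):
    """Recover one plaintext block using only yes/no padding answers."""
    inter = bytearray(BLOCK)             # dec_block(target), found byte-wise
    for padv in range(1, BLOCK + 1):     # work from the last byte backwards
        pos = BLOCK - padv
        for g in range(256):
            fake = bytearray(BLOCK)
            for i in range(pos + 1, BLOCK):
                fake[i] = inter[i] ^ padv  # force known bytes to pad value
            fake[pos] = g
            if oracle(bytes(fake) + target):
                if padv == 1:            # rule out accidental 02 02 endings
                    fake[BLOCK - 2] ^= 1
                    if not oracle(bytes(fake) + target):
                        continue
                inter[pos] = g ^ padv
                break
    return xor(inter, prev)              # plaintext = intermediate XOR prev

iv = bytes(BLOCK)
ct = cbc_encrypt(b"attack at dawn!!", KEY, iv)
recovered = attack_block(iv, ct[:BLOCK], padding_oracle)
```

At most 256 guesses per byte, 16 bytes per block: a few thousand requests, which is exactly why "a few minutes" is all it takes. The fix, incidentally, is to return one uniform error for every decryption failure.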
That other OS, the one running on hundreds of millions of PCs and supplied by M$, has so many holes that malware creators can rely on multiple vulnerabilities to spread rapidly and do their work. An OS is supposed to manage resources for the legitimate user, not for some criminal on the web.
Mathematically, having so many vulnerabilities gives malware a much larger chance of infecting a PC and a much greater rate of spreading once it does. The developers of malware have learned the art of geometric growth just as mushrooms and dandelions have. They have also learned sophisticated means of escaping detection by malware scanners: Stuxnet was active for months without detection.
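The arithmetic behind that claim is easy to sketch. All the numbers below are invented for illustration: assume each hole is independently exploitable with some small probability per contact, and each infected PC goes on to infect a few others per generation.

```python
# Back-of-envelope model of the claim above. All numbers are invented.

def infection_chance(p_each, n_vulns):
    """Chance one contact infects a PC that has n independent
    vulnerabilities, each exploitable with probability p_each."""
    return 1 - (1 - p_each) ** n_vulns

def infected_after(seed, spread_factor, generations):
    """Geometric growth: each infected PC infects spread_factor
    others per generation (dandelion arithmetic)."""
    return seed * spread_factor ** generations

# One hole at 1% is a 1% risk; fifty such holes push the risk near 40%.
one_hole = infection_chance(0.01, 1)         # 0.01
fifty_holes = infection_chance(0.01, 50)     # ~0.395
# One infected PC spreading to 3 others per generation:
tenth_generation = infected_after(1, 3, 10)  # 59049 infected PCs
```

The point is not the particular numbers but the shape of the curves: risk compounds with the number of holes, and spread compounds with time.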
The world cannot afford to let random criminals control and exploit IT systems on the standing invitation that other OS provides. Use GNU/Linux before the malware artists break into your system.
Where I work, individual PCs were going down every week, and the most common problem was malware. I added a firewall at the router and anti-malware (scan on access plus an internal firewall with checksums on applications), but the mean time to failure of PCs running that other OS was still just a few months. After I had re-imaged one user’s PC for the third time, I decided I had had enough. I began putting GNU/Linux on desktop PCs that failed, and I installed it on all students’ PCs and most of the new PCs we acquired. Downtime has disappeared. In the past six months I have had a couple of cases of /home not mounting (a typo in /etc/fstab, my fault) and one confused BIOS (cycling the power cleared that problem). I have not had to re-image a single GNU/Linux machine. I have time to do my day job and to give plenty of thought to expanding IT without fear of bogging down in problems.
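That fstab typo is the sort of thing a trivial sanity check catches before a reboot. Here is a toy sketch of such a check; a real one should use util-linux's `findmnt --verify`, and modern fstab files may legitimately omit the trailing dump/pass fields, so this only flags lines lacking the canonical six columns:

```python
# Toy /etc/fstab sanity check. A real check should use findmnt --verify;
# this sketch only flags lines without the canonical six fields.

def fstab_problems(text):
    """Return line numbers of entries that do not have six fields."""
    problems = []
    for lineno, line in enumerate(text.splitlines(), 1):
        entry = line.strip()
        if not entry or entry.startswith("#"):
            continue                  # blank lines and comments are fine
        # device, mountpoint, fstype, options, dump, pass
        if len(entry.split()) != 6:
            problems.append(lineno)
    return problems

good = ("UUID=0a1b /      ext4 defaults 0 1\n"
        "# data disk\n"
        "UUID=2c3d /home ext4 defaults 0 2\n")
# The kind of typo that bit me: a missing space fuses two fields.
bad = good.replace("ext4 defaults 0 2", "ext4defaults 0 2")
```

Thirty seconds of checking beats an unbootable /home at eight in the morning.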
Downtime is a serious problem in industry. You cannot avoid it completely, but I found that switching from that other OS on the desktop really reduced downtime here. The killers seem to be the time to recover from a failure, the frequency of failures, and critical bottlenecks that affect large parts of the infrastructure. Redundant, reliable systems should take care of most of that.
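Those first two killers combine into the standard availability formula: availability = MTBF / (MTBF + MTTR). A quick sketch, with MTBF and MTTR figures invented to resemble the situation described above:

```python
# Availability arithmetic. The MTBF/MTTR figures are invented
# for illustration, not measurements.

HOURS_PER_YEAR = 8760

def availability(mtbf_hours, mttr_hours):
    """Fraction of time a system is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def annual_downtime_hours(avail):
    return (1 - avail) * HOURS_PER_YEAR

# A PC failing every few months (~2000 h up) with ~10 h to re-image
# and restore the user's data:
flaky = availability(2000, 10)             # ~0.995
downtime = annual_downtime_hours(flaky)    # ~44 hours lost per PC per year
```

Shrinking either factor helps, but eliminating the failures themselves, as switching OS did here, collapses the whole expression.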
In my own organization, which fits in one building, I can envision several levels of disaster. Failure of a server could be recovered from in 15 minutes or so, the time to swap equipment and boot. I have two servers and manual backups for both. Some data could be lost in either case, but that would not kill us because the really critical data is on clients. Larger organizations, like those in the study linked above, had average downtimes of six hours and a further four hours before all applications were up to date. It does take hours to rebuild RAIDs or to restore from backups. I am fortunate to be small and to have redundant hardware immediately available: no need to direct an army of techs or to do long restorations.
With such a high cost of failure, one wonders why anyone would use that other OS when it is so much more likely to fail.