IT in the Canadian Budget

Besides changes like eliminating the one-cent coin (which costs 1.5 cents to manufacture…), there is actually some modernization of IT in the budget:

  • reduction of travel by means of video-conferencing,
  • reduction of paper documents by means of electronic documents,
  • easier access to venture capital and government procurement for small businesses,
  • improvement of several government websites,
  • consolidation of information technology (43 divisions unified…, one e-mail system, 300 data-centres consolidated into 20, savings ramping up to $150 million per annum).

Well, there’s no mention of cutting off M$’s cash cow, but at least they are finally looking at price/performance in IT, so it should not be long… Consolidation of data-centres may have that effect. They’ve already made moves to give FLOSS a level playing-field.

About Robert Pogson

I am a retired teacher in Canada. I taught for almost forty years in the subject areas in which I have worked: maths, physics, chemistry and computers. I love hunting, fishing, and picking berries and mushrooms, too.
This entry was posted in technology.

10 Responses to IT in the Canadian Budget

  1. oldman says:

    “Suppose one of M$’s updates goes wrong…”

    Then the surviving node in the Windows cluster takes over the load, Pog; at least that’s how it goes in our shop. And BTW, any shop that has terabytes of data is going to have more than one server serving it up.

    You really should stick with the back-level small systems/environments you know about, Pog…

  2. Suppose one of M$’s updates goes wrong and the fleet of PCs goes down for an hour while the guy in charge of the network has to shift many terabytes from the backup server to every machine in the system. The difference is $0. M$ is a single point of failure in many thick-client systems. Suppose the systems are GNU/Linux and unlikely to become unbootable. You still have the possibility of the network going down or the LDAP server failing, but that’s a lot less likely than a few PCs failing, in my experience. Wiring can last 25 years. A good server can last 5 years. A backup server can take over in seconds. One can quantify the risks and find that the risk with thin clients is lower. There is a risk of a single point of failure, but there is a certainty that thick clients will cost more.
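
     The risk comparison above can be roughed out as expected-cost arithmetic. The sketch below is purely illustrative; every figure in it (failure rates, recovery times, fleet size) is an invented assumption, not a measurement from any real deployment.

```python
# Back-of-envelope comparison of expected annual downtime (in user-hours)
# for thin clients (one server as the single point of failure) versus
# thick clients (per-PC failures plus fleet-wide update breakage).
# All numbers are hypothetical assumptions for the sake of the sketch.

def expected_downtime_hours(failures_per_year, hours_per_failure, users_affected):
    """Expected user-hours of downtime per year for one failure mode."""
    return failures_per_year * hours_per_failure * users_affected

USERS = 100

# Thin-client scenario: one well-built server, rare failures, fast take-over.
thin = expected_downtime_hours(failures_per_year=0.2, hours_per_failure=0.5,
                               users_affected=USERS)

# Thick-client scenario: each PC fails occasionally, plus one bad
# fleet-wide update per year taking everyone down for an hour.
per_pc = expected_downtime_hours(failures_per_year=0.5, hours_per_failure=4,
                                 users_affected=1) * USERS
bad_update = expected_downtime_hours(failures_per_year=1, hours_per_failure=1,
                                     users_affected=USERS)
thick = per_pc + bad_update

print(f"thin clients:  {thin:.0f} user-hours/year")
print(f"thick clients: {thick:.0f} user-hours/year")
```

     With these invented inputs the thick-client fleet loses far more user-hours per year; the point is only that the comparison is quantifiable, and one can plug in one's own rates.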

  3. ch says:

    “there is benefit to taking a risk that something will go wrong”

    In a school? Most probably yes. In a company that relies on its IT? The mere mention of that idea would get you fired. If 100 users have to twiddle their thumbs for an hour because your single server failed, the cost would easily dwarf the cost of a second server.

  4. aardvark says:

    “We had two database servers, for instance, just because of a turf war. Guy A did not want Guy B controlling his database.”

    What was that about turf wars again? Or was it just a throwaway line?

    You don’t handle honest questions (based on your lead article, and quoted accurately) very well, do you, Mr Pogson?

  5. aardvark says:

    You didn’t actually listen to a word Ted said, did you?

    Remember: this is OS agnostic.

    If you want to preach Linux to the masses, the least you could do is to understand how to provision a Linux data centre.

  6. Servers are usually much better built than PCs, much less likely to fail, and perform better than client PCs, so there is benefit to taking the risk that something will go wrong. It is simply wrong, morally and financially, to have hardware idling. It is wrong to accept lower performance just so an event will have a smaller impact, while inviting system-wide re-re-rebooting and sluggishness from using M$’s crap. Everyone can judge whether or not IT is mission-critical. In education, it usually is not, as teachers can revise the plan in a few seconds and carry on. Even in a computer lab, I could turn such a failure into a lesson in seconds.

    As a gauge of the reliability of servers, consider this. In my whole career as a teacher using IT, I have only once had a server fail on me in the middle of a class. It was a failure of a memory module in the file server. I was able to reboot remotely and carry on. At the end of the day, I ran memtest86 and found one module had a huge hole and another had a single-bit error that showed up every few hours. That was when the system was new, in the first few weeks of operation. Since then that server has given years of service. That vulnerability was well worth the risk because of the superior performance RAID and file caching gave. In the event, very little harm was done to the organization, perhaps 20 users having been impacted, while many hundreds of person-years of great service were received.

  7. Ted says:

    Mr Pogson, “One Big Server” is always a single point of failure, regardless of how many redundant systems it may use internally. (A notable exception being Tandem.)

    “Then there were redundant this and that, all idling… just to serve 100 PCs files, printing, data and permission.”

    In a critical system, there’s no such thing as too much redundancy – clustering, fail-over servers, replicated SANs, you name it. And a working and tested backup system, on separate hardware. And do it over multiple sites.

    “One could just slurp up a bunch of backups and spit them out into virtual machines.”

    Virtualisation does help to get the most out of your hardware, but you also have to take into account that two moderately used servers might need less power and cooling than one heavily loaded server. And you would want a backup host for your VMs too, so a second server always makes sense.

    “Oh, and a hundred thin clients hanging off something that is also acting as a database server? Forget it.”

    Absolutely true. You want a database server’s RAM filled with DATABASE, not user sessions.
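
     The fail-over idea Ted describes – a backup node taking over when the primary stops responding – can be sketched in miniature. This is a hypothetical illustration, not how any particular cluster stack works; real systems (Pacemaker, keepalived, Windows Failover Clustering) add quorum, fencing and shared storage on top of this core logic.

```python
# Minimal sketch of heartbeat-based fail-over: a standby node promotes
# itself when the primary's heartbeat goes stale. Timeout value is an
# arbitrary assumption for illustration.
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before take-over

class BackupNode:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.role = "standby"

    def on_heartbeat(self):
        # Called whenever the primary's heartbeat message arrives.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Promote to active if the primary has been silent too long.
        # Once promoted, the node stays active (no automatic fail-back).
        if self.role == "standby" and \
           time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.role = "active"  # in a real cluster: claim the service IP, etc.
        return self.role
```

     In a real deployment the promotion step would also have to fence the old primary so two nodes never serve writes at once – which is exactly why tested, off-the-shelf clustering is preferred over hand-rolled scripts.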

  8. aardvark says:

    Hard to believe that each of those powerful Dell servers couldn’t have handled 100 PCs and the associated workload equally well, even with TOOS.

    What was that about turf wars, again? Or was it just a throwaway line?

    On an OS-agnostic note, and I Am Not A SysAdmin, I think I’d still opt for at least two servers, one to act as warm backup for the other. In fact, if there’s a heavy database load (doesn’t matter which database, honest, guvnor), I’d consider having a less powerful third server to do the firewall/proxy and mail and print serving, together with all the other itty bitty admin tasks. Since the warm backup server is going to be doing little more than database replication, I’m sure it could double up as a warm backup to the third server.

    Oh, and a hundred thin clients hanging off something that is also acting as a database server? Forget it. The workloads are savagely different. It doesn’t matter what OS you use.

  9. aardvark wrote, “Consolidation typically means a concentration of IT and budgetary decisions.”

    Consolidation usually means duplication sticks out like a sore thumb, and too many servers and too many databases are among those sore thumbs. Licences paid for services not rendered are also made conspicuous. One could just slurp up a bunch of backups and spit them out into virtual machines, but someone will be accounting for the licences, all in one place. Someone will be wondering whether it is better to have 300 web servers running instead of just a few. Someone will have the power and cooling bills to examine, all in one place, too.

    The most bloated place I ever worked had 7 servers when 1 or 2 would have done well, if they ran GNU/Linux. Because they ran that other OS, it just “seemed necessary” to have them all. We had two database servers, for instance, just because of a turf war: Guy A did not want Guy B controlling his database. Then there were redundant this and that, all idling… just to serve 100 PCs with files, printing, data and permissions. Four of the servers were powerful Dell machines, each of which could have done everything running GNU/Linux. Putting all that mess in one place certainly focussed my mind. I am sure eliminating bunches of data-centres will do the same for IT in government.
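
     The kind of tally that consolidation makes possible – idle duplicates and their power and licence costs sticking out once everything is inventoried in one place – can be sketched like this. All server names, utilisation figures and costs below are invented for illustration.

```python
# Toy inventory scan: flag mostly-idle servers as consolidation candidates
# and total up the power and licence money they tie up. Every figure here
# is a hypothetical assumption, not real data.

servers = [
    {"name": "db1",  "avg_cpu": 0.05, "watts": 400, "licence_cost": 3000},
    {"name": "db2",  "avg_cpu": 0.04, "watts": 400, "licence_cost": 3000},
    {"name": "web1", "avg_cpu": 0.60, "watts": 350, "licence_cost": 0},
    {"name": "file", "avg_cpu": 0.03, "watts": 300, "licence_cost": 1500},
]

KWH_PRICE = 0.10    # $ per kWh, assumed
IDLE_THRESHOLD = 0.10  # below 10% average CPU counts as idling

idle = [s for s in servers if s["avg_cpu"] < IDLE_THRESHOLD]
power_saving = sum(s["watts"] for s in idle) / 1000 * 24 * 365 * KWH_PRICE
licence_saving = sum(s["licence_cost"] for s in idle)

print("candidates for consolidation:", [s["name"] for s in idle])
print(f"potential yearly saving: ${power_saving + licence_saving:,.0f}")
```

     With everything scattered across 300 data-centres nobody runs this report; with 20, somebody will.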

  10. aardvark says:

    I wouldn’t get your hopes up too high, Mr Pogson; Canada isn’t quite as different from the USA (subject of your recent post) as you would like to think.

    Besides, and in a purely neutral tone, it isn’t clear to me that consolidation “levels the playing field.”

    Whatever the merits or demerits of the Munich conversion, it’s fair to say that it would have been much, much more difficult to force through at a national level than for an individual Land. (Well, Munich is a city, not a federal state, of course, but that just reinforces the point.)

    Consolidation typically means a concentration of IT and budgetary decisions. This is the sort of thing that Microsoft specialises in, along with IBM, Accenture, Fujitsu, and many others. Red Hat might theoretically challenge, but I think the Canadian federal government is a little outside their ballpark at the moment.

    One other thing that is quite noticeable about large-scale government IT decisions is that the ultimate price is rarely much of a factor. This is clearly the case in Defense, but is also quite obvious when you consider various (horribly failed) initiatives in health provision, social security, etc.

    And no, that’s not an OS-biased observation. It’s not pro- or anti-FOSS.

    It’s something of an impediment to your brave new world, however.
