Kenya

Kenya has been in the news a bit lately because of the drought in East Africa and the influx of refugees from Somalia. I came upon an article about IBMers consulting in Kenya and wondered how IT was going in Kenya. Here are some data:

  • total Internet bandwidth with the world – 20 gigabits/s
  • 63% of the population have mobile access, and that figure is rising rapidly
  • 4.7 million Internet subscriptions, rising rapidly
  • an estimated 10 million Internet users
  • 98.8% of Internet subscriptions are mobile
  • a survey is underway on use of IT
  • Joomla is used on the e-government portal, but the portal is still under construction
  • The ICT in Education paper is promising – “Providing teachers and other education professionals with access to technology is one key component to developing the necessary human capital which the education sector requires for the wide adoption of technology. Throughout the development of this options paper, it has been apparent that, while a priority for education planners, the GOK is not able to provide every teacher and education professional with a computer at this time. If the GOK would like to increase access for educators, other more economically viable models must be considered.

    Operating System – Linux-based OS and Windows OS should both be strongly considered. TCO should determine selection.
    …LANs can be established using thin-client or fat-client (stand-alone) machines. Discuss pros and cons here. Many computer labs in Africa experience virus problems when utilising Microsoft operating systems. (See http://www.bridges.org/software_comparison/report.html.) CFSK and SchoolNet Namibia have both successfully introduced thin-client computer labs in difficult environments.”
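The paper's recommendation that "TCO should determine selection" can be made concrete with a back-of-the-envelope calculation. The figures below are purely illustrative assumptions of mine, not numbers from the GOK options paper, sketching how a thin-client lab and a fat-client lab might compare over a five-year life:

```python
# Illustrative TCO sketch for a 20-seat school computer lab over 5 years.
# All prices are hypothetical assumptions, not figures from the GOK paper.

def lab_tco(seat_hw, server_hw, sw_per_seat, support_per_year, seats=20, years=5):
    """Total cost of ownership: hardware + software + ongoing support."""
    return seats * (seat_hw + sw_per_seat) + server_hw + years * support_per_year

# Fat-client lab: a full PC at every seat, per-seat OS licences, more upkeep.
fat = lab_tco(seat_hw=500, server_hw=0, sw_per_seat=150, support_per_year=1200)

# Thin-client lab: cheap terminals, one Linux server, no per-seat licences.
thin = lab_tco(seat_hw=120, server_hw=1500, sw_per_seat=0, support_per_year=600)

print(f"fat-client TCO:  ${fat}")    # $19000 under these assumptions
print(f"thin-client TCO: ${thin}")   # $6900 under these assumptions
```

Whatever the real local prices are, the structure of the calculation is the point: thin clients shift cost from the seats to a single server and cut per-seat licensing, which is why CFSK and SchoolNet Namibia found them attractive.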

It seems Kenya is working hard to modernize and to exploit IT to its full potential for the benefit of the country.

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I taught in the subject areas where I have worked: maths, physics, chemistry and computers. I love hunting, fishing, and picking berries and mushrooms, too.
This entry was posted in technology.

70 Responses to Kenya

  1. Contrarian says:

    “That’s cloudy”

    Only in the sense of your mistaken definition of cloud computing. You are confusing cloud infrastructure such as Azure or Amazon EC2 with simple on-line web services, hosted somewhere, perhaps even across multiple datacenters, where just a single instance is running to service a connection.

    “Where the Hell is your cite?”

    It is not MY cite, #pogson, it was YOUR cite:

    http://www.zdnet.com/blog/microsoft/outage-hits-microsoft-crm-online-office-365-customers/10359?tag=nl.e589

    i.e. “Microsoft’s CRM Online system is currently separate from its Business Productivity Online Suite (BPOS) and Office 365 successor. All of those Microsoft systems run in Microsoft datacenters. None of them currently is hosted on Windows Azure.”

    Read the last sentence.

  2. Where the Hell is your cite? M$ and Amazon both had datacentres knocked off the air by a power failure. Both datacentres were providing cloud services.

  3. Compensation in the form of a discount for services not rendered does not undo the harm M$ causes.

  4. BPOS: The suite includes Exchange Online, SharePoint Online, Office Communications Online, Microsoft Forefront, and Microsoft Office Live Meeting.

    That’s cloudy. That’s M$. That’s M$’s data-centre.

  5. Contrarian says:

    “Using that other OS is probably the biggest mistake M$ could make in its cloud.”

    Do you not read the articles, #pogson, or the other posts? The incident you are harping on is not related to the Microsoft cloud at all. Your own cite points that out. It is localized to a particular datacenter, which is what the cloud is designed to eliminate.

  6. oldman says:

    “Not doing proper TCO assessments on what they were going to deploy. Not understanding what they are deploying. Worst evil not running a test system first.”

    These issues showed up under load: the FOSS components were less than reliable, and the staff who were used to maintaining and QA'ing FOSS were already up to their eyeballs in work. The reality was that when they did the math, they were faced with needing an additional senior sysadmin to keep up with the extra load, which at going rates here in the US made the "free" package more expensive than the closed-source commercial solution they had nixed.

    The interesting thing is that TCO assessments were done. Whether they were proper or not is actually irrelevant; the fix was in.

    “Of course without being able to do an assessment on your complete job I cannot tell what group you are 100 percent in.”

    Fortunately for us both, you are nowhere near in a position to screw up my personal productivity. As for your assessment of where I sit, I don't believe I asked you for it, Mr. oiaohm, and frankly, based on the bullying arrogance that I see you put forth regularly, I seriously question your ability to honestly perform such an assessment, even if you were in a position to be asked.

  7. Contrarian says:

    “Try selling that to customers.”

    You are a harsh master, #pogson! Microsoft has offered compensation to the customers who were affected by the outage, you say. Are you somehow against them doing that? It is not clear from your comment.

    Are there actually any CRM service customers still too irate to continue with Microsoft in this venture? If not, I would submit that the solution has already been sold to them.

    I myself recently “fired” my lawn insect control and fertilizer service company due to a string of incidents. I did not do so after the first incident, however; it was after #4 that I decided they would be dismissed if/when a #5 occurred. “One strike and you’re out!” is not a common business practice, and some level of problem incidence will be tolerated in most businesses. If some customer is so irascible that he is incensed every time anything happens, then most service companies recognize that they are better off without that customer anyway. Give the problem children to someone else!

  8. There were a number of problems that compounded the initial loss of electrical power, but I don’t see any of them related to GNU/Linux:

       • backup generators did not start
       • humans agreed to flush data
       • “the management system continued to route requests to the affected servers”

     So, it seems to me that anything that could go wrong did go wrong, and Linux had little to do with it. It sounds like a learning experience for Amazon. I expect M$ has learned from its mistakes as well. Using that other OS is probably the biggest mistake M$ could make in its cloud. M$ was knocked off the air by the same outage.

     M$ is not immune to loss of data in multiple ways.
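The quoted failure above, a management system that kept routing requests to dead servers, is exactly what health-checked load balancing is meant to prevent. Here is a minimal sketch of the idea; the server names and the simple healthy/unhealthy flags are hypothetical, and real cloud routing layers are far more elaborate:

```python
# Minimal sketch of health-checked request routing.
# Server names and health states are hypothetical.

import itertools

class Router:
    def __init__(self, servers):
        self.servers = servers                    # name -> is_healthy flag
        self._cycle = itertools.cycle(list(servers))

    def mark_down(self, name):
        """A failed health check removes the server from rotation."""
        self.servers[name] = False

    def route(self):
        """Round-robin over servers, skipping any marked unhealthy."""
        for _ in range(len(self.servers)):
            name = next(self._cycle)
            if self.servers[name]:
                return name
        raise RuntimeError("no healthy servers available")

router = Router({"dc1-web1": True, "dc1-web2": True, "dc2-web1": True})
router.mark_down("dc1-web1")          # e.g. power failure in datacentre 1
targets = {router.route() for _ in range(10)}
print(targets)                        # never contains the downed server
```

The lesson of the outage is that the health check has to be trusted: if the management layer keeps a failed server in rotation, no amount of datacenter redundancy helps.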