Novell’s Shame

I have admired the contributions of Novell to GNU/Linux (e.g. the Linux kernel, Beagle desktop search engine, Evolution e-mail client). I have admired the defence they rendered in SCOG v World. However, I see nothing but shameful acts as the details of Novell’s agreements with M$ come out.

Novell agreed to work for M$, appearing to make OpenOffice.org work better with OOXML when in reality they were making it work better with a subset of Office 2010. They did things like “skip over” unrecognized content. How’s that for improving interoperability? It was a sweet deal that let M$ claim to be interoperable when they were not, and a sweet deal that let M$ claim their product was better because it could produce content not identified by the open standard. Novell sold out, pure and simple. Working for M$ on a contractual basis is just business. Working with them while they undermined the open standards process and interoperability, all the while proclaiming to enhance it, is not. That is collusion in anti-competitive acts.

Shame on Novell. They have sullied their own reputation entering into unconscionable agreements with the evil empire.

About Robert Pogson

I am a retired teacher in Canada. For almost forty years I taught in the subject areas where I have worked: maths, physics, chemistry and computers. I love hunting, fishing, and picking berries and mushrooms, too.

2 Responses to Novell’s Shame

  1. Sounds like you may want a bibliographic database. You can do that in a low-tech way by printing, but how will you ever find it again? Better to let your PC do what it can do best: create, find, modify and present information. Here are some suggestions:

    • copy and paste the URL (http://…) and interesting text into a word-processing document. Call it research_2010-12-21.odt. If you have a desktop search engine running, such as Google Desktop, you can then search for part of the URL, the date, or keywords in the document and it will all come back to you years later.
    • create a proper relational database, with fields such as URL, tag1, tag2, tag3, notes and extract. You could install Apache, MySQL and phpMyAdmin to do that pretty easily in GNU/Linux. Everything is in the repositories of most distros. Then your web browser can find stuff on the web and record your research in the database (there is a rough sketch of this idea at the end of this comment).
    • you could use software with bibliographic support to take care of stuff. I use LyX but OpenOffice.org can do it too.
    • or, just trust Google not to lose anything for you and make some key bookmarks. Learn to search for keywords and phrases. One of my favourite searches is (keywords “phrases” site:somesite.org).

    Stuff does disappear from the web. If it is important, save it.
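
    If you would rather not set up the full Apache/MySQL/phpMyAdmin stack, the same idea can be tried with Python’s built-in sqlite3 module. This is only a minimal sketch: the table name, column names, example URL and search function are my own assumptions for illustration, not part of any particular package.

    import sqlite3

    # One file on disk holds the whole bibliographic database.
    conn = sqlite3.connect("research.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS bibliography (
            id      INTEGER PRIMARY KEY,
            url     TEXT NOT NULL,
            tags    TEXT,        -- e.g. "linux,odf,standards"
            notes   TEXT,
            extract TEXT         -- the interesting text you copied
        )
    """)
    conn.commit()

    def add_entry(url, tags, notes, extract):
        # Record a page you want to find again years later.
        conn.execute(
            "INSERT INTO bibliography (url, tags, notes, extract) VALUES (?, ?, ?, ?)",
            (url, tags, notes, extract),
        )
        conn.commit()

    def search(term):
        # Find entries whose URL, tags, notes or extract mention the term.
        like = "%" + term + "%"
        return conn.execute(
            "SELECT url, notes FROM bibliography "
            "WHERE url LIKE ? OR tags LIKE ? OR notes LIKE ? OR extract LIKE ?",
            (like, like, like, like),
        ).fetchall()

    add_entry("http://example.org/article", "odf,standards",
              "Background reading on document formats", "copied paragraph here")
    print(search("odf"))

    The same table could of course live in MySQL and be edited through phpMyAdmin, as suggested above; sqlite3 just spares you running a server.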

  2. Thanks for some quality points there. I am kind of new to being online, so I printed this off to put in my file. Is there any better way to go about keeping track of it than printing?
