Highest Performance ARM Desktop Ever

That’s the claim CompuLab (the folks who gave us TrimSlice) makes about their Utilite2 device. I think they are very close to being truthful. Performance is not just about the network, the CPU, the graphics, and RAM. It’s about how it all works together. CompuLab has a winner in every way except RAM. These days, 2GB is limiting, even for browsing the web. Modern browsers like Firefox and Chrome cache so much, and Chrome preloads pages a user might click, that the browser takes all available RAM and performance drops off with 2GB. On my system, with 4GB RAM and hundreds of processes, Chrome takes gigabytes of virtual memory and sometimes causes swapping if I have a dozen pages open.
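
If you want to see how much of that a browser really holds resident, a few lines of Python on GNU/Linux will total it up. This is only a rough sketch: it assumes the processes are named “chrome” (change TARGET for Firefox or another browser) and it over-counts pages that the browser’s processes share with each other.

    #!/usr/bin/env python3
    # Rough tally of a browser's resident RAM on Linux, summed over all of
    # its processes. Summing VmRSS over-counts shared pages, so treat the
    # result as an upper bound.
    import os

    TARGET = "chrome"  # assumed process name; use "firefox" etc. as needed

    total_kb = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                if TARGET not in f.read().strip():
                    continue
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        total_kb += int(line.split()[1])  # value is in kB
                        break
        except OSError:
            continue  # the process exited while we were looking

    print(f"{TARGET}: ~{total_kb / 1024:.0f} MB resident across all processes")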

So, with that proviso, I agree that CompuLab’s latest creation is a contender. It is a bit too high-priced when you compare it with some Chromebooks, though. I don’t see how they can sell millions of units if Chromebooks cost less, and that goes for both ARMed and Inteled Chromebooks. They want USD$192 for a 2GB unit with wireless. I think that’s marginal. I would consider paying that if it had 4GB RAM and no wireless. I’m wired with gigabit/s copper here. I can see wireless is popular, but they should at least cater to us old-fashioned types…

The other reservation I have is that CompuLab is an Israeli corporation. Until Israel recognizes Palestine properly and leaves the West Bank and Gaza alone, I am reluctant to do business with any Israeli business. That Israel has a “partner” in USA is not sufficient to make their actions legitimate in my view.

There are other contenders on the market with similar hardware:

  • Nexus 7 tablet for CDN$249 – perhaps the display and battery are not worth the $53 price difference, but it’s still a good unit and it’s on Walmart’s shelves in Winnipeg, so there’s no freight to tack on.
  • You can buy the HTC Desire 510 smartphone for CDN$200 here.

This is already aging technology, so I’d wait a bit for something with more RAM at a lower price. Expect others to be competing for this space shortly. Qualcomm has introduced two more powerful series of processors since the release of this one. They are already appearing in smartphones at high prices. Wait a year or two and they will be in this price-range. Perhaps CompuLab is clearing out old stock of the previous generation… Their next model should be perfect.

See Utilite2 Overview.

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.

19 Responses to Highest Performance ARM Desktop Ever

  1. oiaohm says:

    DrLoser, no, ram is not my friend. His post simply observes that what I wrote is cogent and correct. Yet you still have not admitted you were saying bogus crap about glibc, you criminal sod who loves making stuff up.

  2. DrLoser says:

    Once again oiaohm’s responses are cogent and correct.

    It is an inevitable fact of life, ram. And we fight against that fact at our peril.

    I’m tremendously relieved that you are now oiaohm’s imaginary friend.

  3. ram says:

    Once again oiaohm’s responses are cogent and correct.

  4. oiaohm says:

    I’m going to guess that the Debian library that has the largest memory footprint is glibc, which is admittedly bloated, but manageable. I’m going to guess that its memory footprint is around 100-150MB.
    DrLoser, your history of wild guesses says you should not guess. glibc’s maximum memory footprint is not much.
    http://www.etalabs.net/compare_libcs.html
    Yes, glibc is about 12 MB on disk (and only one copy of that is in RAM no matter how many applications use it), with at most about 1 MB per tgid/POSIX process in RAM; the minimum is 48 kB. The majority of glibc-using applications only use the minimum. Compared to something like musl this is bloated, because musl is less than 1 MB on disk and under 200 kB per application, with a minimum of 20 kB. Of course we will be running dynamic, not static.

    DrLoser, 48 kB and 20 kB are really nothing to write home about unless you happen to be building embedded systems. Remember, those figures are before copy-on-write reduces the duplication further.

    Libraries like Xlib, GTK and Qt have a bigger memory footprint than glibc. glibc is nowhere near the worst memory-using library Linux has. Yes, glibc could be better, but its memory usage is almost negligible compared to some of the others.

    If you want something with a heavy footprint you are normally talking about parts of KDE. The total memory usage of the majority of Linux programs will not exceed 100 MB. Exceptions to that rule are web browsers and VLC.

    Something to remember: as the compilers that build Linux improve, libraries like glibc are shrinking.

    For glibc to be using 100 MB of RAM you would need about 2134 processes running at the 48 kB minimum with no copy-on-write sharing between them.
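
    A quick back-of-envelope check of that figure, using the per-process numbers quoted above (48 kB minimum, roughly 1 MB worst case); this is arithmetic only, not a fresh measurement:

        # How many processes, at a given per-process glibc overhead, add up
        # to a target total? The 48 kB / ~1 MB figures are the ones quoted
        # in this thread, not measured here.
        def processes_needed(total_mb, per_process_kb):
            return (total_mb * 1024) / per_process_kb

        print(processes_needed(100, 48))    # ~2133.3, i.e. about 2134 at the 48 kB minimum
        print(processes_needed(100, 1024))  # 100 even at the ~1 MB worst case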

    100 to 150 MB does not match the glibc usage of a fully running Debian Linux system either.

    DrLoser, Debian’s minimum RAM to run a desktop is 64 MB without a swap file. OK, it is a very limited desktop, but yes, that is using glibc.

    DrLoser, Microsoft ruled the browser market in the Web 2.0 time frame. They could have used that position to guide the web’s direction.

  5. DrLoser says:

    Who invented the DIV tag. Microsoft.

    I interrupt my alter ego, Deaf Spy, to bring you the not entirely surprising news that oiaohm is talking out of his hat here.

    The DIV tag was in fact invented by Mozilla. Or quite possibly late Netscape. (It can be hard to tell the difference.)

    I presume that the (pathetic) reason that oiaohm is determined to assign this magnificent invention to M$ is because, well, in oiaohm’s strange, subterranean world, this makes M$ responsible for all the failings of Web 2.0.

    Which Deaf Spy had detailed, just a post or so below. I’m good at this stuff, you know. Twitting oiaohm is so much fun.

    Otherwise, of course, oiaohm would never admit to the following:

    So Microsoft does have a set of inventions.

    Reluctantly extracted, as with Stupid Teeth. (They’re like Wisdom Teeth, but with even less of an evolutionary basis.)

    But hey, it’s a start, oiaohm!

  6. DrLoser says:

    I did make a mistake my brain was thinking netscape 4.0 instead of 2.04.

    Well, that’s an easy transposition of digits to make, oiaohm.

    My condolences to your brain. She’ll be right!

  7. DrLoser says:

    Amazing what discussions come out of the two word phrase “ample enough” 😀

    It is indeed, ram. I mean, who could believe the following?

    As applications became fluffier and more reliant on huge file-systems, use of RAM for caches, buffers, larger fonts, bigger libraries, and code has grown dramatically.

    Certainly not me. I’ve never met a “fluffy” application, although down those dark streets a man must go … armed, presumably with a feather duster or something. Sooner or later I can see a “bunnykins” application taking the world by storm.

    There are, of course, “applications that are reliant on huge file systems.” Google is one. In a sense, I suppose most HPC scientific apps are further examples (ram would know more about this than me), but only because they store a potentially exponential amount of data points.

    Day-to-day apps? Not so much. I can’t even think of one. Even my company’s software product, which is basically CAD-CAM, stores each project easily inside 50GB — and that’s for large projects.

    You don’t “cache” this sort of stuff in RAM, Robert. It is ill-advised to “cache” 50GB of a “huge file system” in 4GB or even 8GB of RAM. The thing would thrash itself to death within minutes.

    And it’s really difficult to see any sort of equivalence between “buffers, larger fonts, bigger libraries, and code.”

    To start with, “code” is out of the equation. It’s already been compiled (static) or interpreted (dynamic).

    Fonts? Go on, I’m fascinated. (I assume that “larger fonts” was a regrettable mis-typing of “larger font libraries.”) A font library might have been a pain on a 1990s machine. It’s a pithering irrelevance on anything later than a 2000s machine.

    Buffers? Well, OK, if you’re a really, really, stupid programmer, you might (for deviant reasons of your own) choose to take on your own memory management. Other than that, no. Buffers come nowhere near into play.

    So, we’re left with “Bigger Libraries.”

    On the assumption that said libraries are stripped of debugging symbols, Robert, I’m going to go with a guess here. (I promise, I haven’t checked.) I’m going to guess that the Debian library that has the largest memory footprint is glibc, which is admittedly bloated, but manageable. I’m going to guess that its memory footprint is around 100-150MB.

    Now, if that’s a reasonably accurate guess, I think I can pay that price. It’s central to the entire system.

    It’s sort of an interesting question, and then again sort of not. I suppose I could run a script across Wheezy or Jessie and rank the shared libraries by size, but in all honesty it wouldn’t tell you much.
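
    For what it is worth, such a script is only a few lines. A rough sketch follows; it ranks by size on disk, which is only a loose proxy for resident memory, and the search paths are an assumption about where Debian keeps its shared libraries.

        #!/usr/bin/env python3
        # Rank shared libraries under the usual Debian paths by on-disk size.
        import os

        SEARCH_PATHS = ["/lib", "/usr/lib"]  # assumed library locations

        sizes = {}
        for root_dir in SEARCH_PATHS:
            for dirpath, _dirnames, filenames in os.walk(root_dir):
                for name in filenames:
                    if ".so" in name:  # libfoo.so, libfoo.so.6, etc.
                        path = os.path.join(dirpath, name)
                        try:
                            if not os.path.islink(path):
                                sizes[path] = os.path.getsize(path)
                        except OSError:
                            pass  # unreadable or vanished; skip it

        top = sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:20]
        for path, size in top:
            print(f"{size / (1024 * 1024):8.1f} MB  {path}")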

    Bottom line: ram is right. With 4GB or 8GB of RAM, you’re not going to see much in the way of memory issues.

    Oh, and if you do? Invest $70 in a 60GB Corsair SSD.

    It’ll cope with all the “fluff” you want to throw at it.

  8. DrLoser says:

    This here is Web 3.0 not Web 2.0.
    You still can’t tell semantics from technology, can you?

    I’m not entirely convinced that oiaohm can tell semantics from semiotics, Deaf Spy. And I apologise to the rest of this site for, obviously, arguing with myself here.

    I’ve always wondered. Whatever happened to “Web 2.1?”

    It’s an unusual sort of ordinality for IT, isn’t it? Rather like admitting, when you come up with “Web N.0,” that “Web {N-1}.0” was fundamentally broken in some way.

  9. ram says:

    Amazing what discussions come out of the two word phrase “ample enough” 😀

  10. oiaohm says:

    http://en.wikipedia.org/wiki/Web_operating_system The web operating system idea dates from 1998. Not Google but Sandro Pasquali.

    This is about the web desktop: the 1998 idea leads to RIA in 1999, not the other way around.

    http://en.wikipedia.org/wiki/Web_desktop
    In fact the webtop is older again: the work started in 1994 and items were on sale in 1996. Chromebooks are a continuation of that web desktop work, except that Chromebooks are not a horrible mix of Java, Python and whatever other wacky languages people could dream up.

    Extended browsers running web-based applications as the desktop: that was Tarantella in 1996. Yes, Tarantella in 1996 allowed Windows applications to be displayed and managed inside a custom web browser. So yes, the web browser was the shell.

    I would say Chromebooks are the old webtops implemented correctly. Webtops were known to be RAM eaters as well.

    Deaf Spy history wrong Netscape 2.0 supported layer not div.
    Well, you may call some people, for example these ones, and tell them so.

    That site is right and I was only partly wrong.
    The problem is that Netscape Navigator 2.0 does not include it.
    Netscape Navigator 2.0, 2.01, 2.03 and 2.04 are all the versions of Netscape Navigator 2. Netscape Navigator 2.04, which came after the first release of Internet Explorer 3, contains the div tag.

    I did make a mistake my brain was thinking netscape 4.0 instead of 2.04.

    The site you went to is a Google trap. If the most recent version of product X contains the feature, the site marks the product as supporting it. The site did not tell you when the feature was added, and when the feature was added is what matters in this kind of argument.

  11. Deaf Spy says:

    Deaf Spy history wrong Netscape 2.0 supported layer not div.
    Well, you may call some people, for example these ones, and tell them so.

    This here is Web 3.0 not Web 2.0.
    You still can’t tell semantics from technology, can you?

    RIA is not about turning the web into a desktop. It is about bringing the desktop to the web, oh deeply confused one.

    The rest is just gibberish.

  12. oiaohm says:

    Deaf Spy history wrong Netscape 2.0 supported layer not div. Netscape 4.0 supports div. Div appears in Internet Explorer first and Netscape adds it for compatibility. If we had followed Netscape, div would still be called layer. Basically Microsoft invented the div tag by renaming someone else’s work, so creating an incompatibility.

    CSS I will give you: I made a mistake there, but Microsoft did at one point in history claim they had invented it.

    Google started their crusade to turn web into desktop, because they had none. And now we have websites that need 2 GB of RAM to work properly.
    This here is Web 3.0 not Web 2.0.

    Please be aware that before we had Google-built web sites requiring 2 GB of RAM there were super-fancy Flash and Java websites requiring insane amounts of RAM.

    Google was not the first to attempt to turn the web into a desktop.

    http://en.wikipedia.org/wiki/Rich_Internet_application RIA is 1999, from Microsoft, but even that is in fact late.

    http://www.jcraft.com/weirdx/INSTALL This is the most insane example I know of: a full X11 server running in the browser as a Java applet. There are many other Flash and Java examples almost as bad. This was going on before Google released Gmail, let alone Chrome.

    http://en.wikipedia.org/wiki/Web_operating_system The web operating system idea dates from 1998. Not Google but Sandro Pasquali.

    Web sites requiring gigabytes of RAM existed before Google started their desktop push. Some of the Flash and Java gaming sites back in 2003 were particularly horrible for leaking RAM and crashing the whole computer.

    Google did not start the Web OS idea. If anything they have continued what was already happening. In fact Google is attempting to bring some sanity. Remember, before Google we had complex web applications running in Java and Flash, and those were not in any form of sandbox.

    Google is attempting to refine and fix the Web OS idea.

    Google started with a disaster-grade mess, released the Chrome prototype to show browser makers how to make things more secure, and then ended up in the browser market.

    Something interesting: newer Chrome browsers are using less RAM on complex sites as they get better at optimizing.

    This is the problem: websites consuming 2 GB of RAM existed before Google. Google’s change in the web browser to per-process isolation lets us see which site is doing it.

    History does not say Google is responsible for heavy web applications. If anything, Google is responsible for making heavy web applications identifiable.

    By the time Google started, the idea of the web desktop was well and truly under way.

  13. Deaf Spy says:

    You are a master of mixing lies with some truths, Ohio. Indeed, MS invented Ajax via their DHTML extensions. Strangely, though, you are the only open-source, anti-MS proponent who would admit it. The open-source, anti-MS proponents around the web never, ever dare say it aloud.

    As for CSS, however, you are deeply confused. Please check out Wikipedia, you should be able to do at least that.

    It is perhaps easier for you to get away with your legend about DIV, though, especially among your less illustrious and educated readers. Here, however, you will get busted again. DIV, my dear Ohio, got its support in Netscape Navigator 2.0, which is well before IE got to support it.

    Twist and turn as much as you wish, Ohio. Fact is that MS neglected the Web for a long time (after getting 95% of marketshare with IE6), and then Google started their crusade to turn web into desktop, because they had none. And now we have websites that need 2 GB of RAM to work properly.

  14. oiaohm says:

    Deaf Spy
    Web 2.0 is exactly the Ajax hype
    Who invented Ajax? http://en.wikipedia.org/wiki/Ajax_%28programming%29#History
    Microsoft invented Ajax, Deaf Spy. How did Microsoft implement Ajax at first? ActiveX.

    OK, who invented CSS? Again, Microsoft. “Outlook Web App” was the first Web 2.0 application, in 1998, with heavy amounts of client-side JavaScript. Who invented the DIV tag. Microsoft.

    So Microsoft does have a set of inventions.

    Semantics is Web 3.0. In Web 2.0 Google was not in the browser game or in the webmail game. Google entered only after we had already moved into Web 3.0.

    Deaf Spy you have your history wrong.

  15. Deaf Spy says:

    Ohio, I am afraid you are completely irrelevant. More so, as usual. Your reference defines the web by the semantics of the information and knowledge, not by the technology. I repeat: semantics vs. technology.

    Web 2.0 is exactly the Ajax hype, when everyone suddenly started loading pages bit-by-bit, coupled with heavy CSS that rendered HTML pages to a pile of DIV tags, completely breaking the semantic idea of tags. Put some heavy JS code in the mix, and there you go, Web 2.0, ladies and gentlemen.

    Silverlight and ActiveX are actually desktop technologies. They might be hosted in a browser, but run in the context of the local machine, and have access to certain (even all) local resources, including all the hardware. Same goes for Flash, which you conveniently forget to mention, because it is, hm, not Microsoft. But which predates both AX and SL.

    Do you have any further irrelevant remarks, Ohio? I am sure you do.

  16. oiaohm says:

    Oh my god, let’s blame Google for Microsoft’s sins.
    http://www.labnol.org/internet/web-3-concepts-explained/8908/

    Deaf Spy, Web 2.0 is ActiveX and Silverlight and the other such parts. In other words, web sites that are OS-dependent.

    That was before Google turned the web into this Web 2.0 turd, before the browser became a fat client, interpreting tens of thousands of lines of JavaScript code, which runs like ass even JIT-ed.

    Really, we are up to Web 3.0, so the “Web 2.0” here is incorrect and misinformed. Thousands of lines of standards-defined JavaScript that any conforming web browser can run: that is Web 3.0.

    Microsoft and Adobe turned Web 2.0 into a turd.
    Google is attempting to turn the Web 2.0 turd into something somewhere near functional and to restore platform neutrality with Web 3.0. It will most likely take until after Web 4.0 before everything starts looking nicer.

    Deaf Spy, so no matter how you cut it, the Google JavaScript mess would not have happened if Microsoft and Adobe had not bungled Web 2.0.

  17. Deaf Spy says:

    We used to be happy with just a few MB to browse the web

    That was before Google turned the web into this Web 2.0 turd, before the browser became a fat client, interpreting tens of thousands of lines of JavaScript code, which runs like ass even JIT-ed.

    Now, you can thank Google for the extra RAM you need to buy to be able to surf the web or check your gmail.

  18. ram wrote, “ample enough”.

    Ample RAM is a moving target. We used to be happy with just a few MB to browse the web and run applications with no virtual memory at all. As applications became fluffier and more reliant on huge file-systems, use of RAM for caches, buffers, larger fonts, bigger libraries, and code has grown dramatically. It’s kind of a chicken-and-egg thing. If there’s more RAM, developers will find a use for it. If there’s bigger demand, i.e. swapping, system-builders will stick in more RAM. There are plateaus at each doubling point, and 4GB, double 2GB, has been popular for a while. Perhaps as network bandwidth increases, more will be needed. I noticed that HP is thinking their memristor technology will be used for both fast and slow storage, so RAM could explode to many GB very soon.

    This is one reason I like thin clients. Enough RAM to show the pix and send the clicks is still just a few MB. The Little Woman’s machine is using only 155MB these days, and 90MB of that is for cached stuff, so only 65MB is for code and data when using no local applications. For ages, 64MB was enough for most thin clients until we started using the full distro in the thin client’s chroot. This is another reason thin clients can have a long life.
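
    For anyone who wants to check the same numbers on their own box, here is a minimal sketch for GNU/Linux; it reads /proc/meminfo and sets caches and buffers aside, which is how the figures above were reckoned.

        #!/usr/bin/env python3
        # Report RAM used, RAM used for caches/buffers, and what is left
        # for code and data, from /proc/meminfo (values there are in kB).

        def meminfo_kb():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":", 1)
                    info[key] = int(value.split()[0])
            return info

        m = meminfo_kb()
        used_kb = m["MemTotal"] - m["MemFree"]
        cache_kb = m.get("Buffers", 0) + m.get("Cached", 0)
        print(f"used:          {used_kb // 1024} MB")
        print(f"cached:        {cache_kb // 1024} MB")
        print(f"code and data: {(used_kb - cache_kb) // 1024} MB")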

  19. ram says:

    Regardless of processor make, I’m inclined to wait till the memory is all DDR5, and ample enough as Mr. Pogson points out.
