Android/Linux Still Accelerating

Google recently announced they had 10 billion downloads from Android Market. That’s more than decent, but the thing that interests me is the rate of growth. In less than one year the rate of downloading trebled, and it is still increasing. For most businesses that would be stressful, but Google cranks out more serving capacity pretty smoothly. They’ve had a lot of practice.

Mathematically, if they treble cumulative downloads every year the progression will go like this:

  • 2012 – 30 billion
  • 2013 – 90 billion
  • 2014 – 270 billion
  • 2015 – 810 billion
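
To make the arithmetic explicit, here is a minimal sketch in Python, assuming nothing more than a clean treble of the cumulative total every year from the 10 billion announced for late 2011 (an illustration of the progression above, not a forecast):

    # Cumulative Android Market downloads if the total simply trebles
    # every year from the 10 billion announced at the end of 2011.
    base_year, base_total = 2011, 10e9
    for year in range(2012, 2016):
        total = base_total * 3 ** (year - base_year)
        print(f"{year} - {total / 1e9:.0f} billion")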

About 2015 the growth should break unless every human on the planet buys a smart phone for the other ear… Sooner or later, Android/Linux could approach a monopoly, and it will have done so by competing on price/performance, not by exclusive dealing. How is Wintel going to keep Linux in any form out of the “personal computer” space? They are not. OEMs and retailers will want this kind of growth in the PC market and it will happen. If M$’s current “partners” won’t do it, someone else will within a couple of years.

This kind of growth indicates that Android/Linux has mind-share. Consumers want it and no amount of advertising will change that for Wintel or for Apple. By the end of 2012 the matter will not be in doubt. The only question I have is how Android/Linux and GNU/Linux will carve up the markets. GNU/Linux should have some advantage in performance as much of Android/Linux’s app code is interpreted, but the byte-code is compact so mobility is enhanced by using it. I would bet Android/Linux will dominate the mobile space for the near future and GNU/Linux will take over the non-mobile space, where people may expect/demand speed more than anything. If Java is more attractive to developers than other C-ish languages, GNU/Linux can run that too. It is pretty easy to port a portable language to GNU/Linux and the apps will work on any distro.
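
For the curious, here is a minimal sketch of that trade-off, using Python’s dis module purely as an analogy (Dalvik byte-code is not shown here, and Android’s runtime also has a JIT, so this only illustrates the general point that byte-code is compact yet must still be decoded instruction by instruction at run time):

    import dis

    def celsius(f):
        # A tiny function compiles to only a handful of compact byte-code
        # instructions, but each one is still decoded and dispatched by
        # the interpreter every time the function runs.
        return (f - 32) * 5 / 9

    dis.dis(celsius)  # prints the short byte-code listing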

Why do I claim GNU/Linux will take over the non-mobile space? Because of the apps. GNU/Linux repositories work more or less the same way that app stores do and people like that. Wintel will have nothing like it until “8” ships and that’s not happening for months yet. FLOSS does app stores better. Debian GNU/Linux has had APT working well for more than a decade and it handles all the apps, not just the OS. This is the one feature of Debian GNU/Linux that convinced me. Now that the idea of installing/updating all the apps and the OS from the network is proven/acceptable to ordinary folk, GNU/Linux will be proven/acceptable. M$ is vainly trying to reinvent the wheel but it will be too little and too late. This explosive growth in installing/updating from the web will burst through the walls of the garden. While the world considers whether or not “8” has merit, Linux will take over. 2012 will be a great and decisive year in the OS wars.
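
Here is a rough sketch of that “everything comes from the repository” model, using Debian’s python-apt bindings (assumptions: the python3-apt package is installed, the script runs as root, and “vlc” is just a stand-in for any application):

    import apt  # Debian/Ubuntu python-apt bindings

    cache = apt.Cache()
    cache.update()        # refresh the package lists, like "apt-get update"
    cache.open()          # re-read the refreshed lists

    pkg = cache["vlc"]    # any app in the repositories; OS packages work the same way
    if not pkg.is_installed:
        pkg.mark_install()

    cache.commit()        # download and install via the same mechanism as OS updates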

About Robert Pogson

I am a retired teacher in Canada. I taught in the subject areas where I have worked for almost forty years: maths, physics, chemistry and computers. I love hunting, fishing, picking berries and mushrooms, too.
This entry was posted in technology.

17 Responses to Android/Linux Still Accelerating

  1. Kozmcrae says:

    Phenom, you need to kick that dead horse harder. It just may come back to life. Or maybe your foot will get stuck in its rotten gut and you will learn your lesson: dead horses don’t come back to life. By the way, did you enjoy your crow sandwich?

  2. Phenom wrote, “x86, starting with 486, is no longer CISC”.

    I guess Intel is incorrect, too, and doesn’t understand its own technology:
    “The x86 processors belong to the CISC (Complex Instruction Set Computing) category, mainly because their instructions may be quite complex or have variable length. They use a relatively small number of registers and are capable of accessing memory locations directly. Complex instructions are sequenced in microcode in modern CISC processors.

    A different line, derived from CISC, is represented by the RISC (Reduced Instruction Set Computing) processors introduced in the 1980s, which are characterized mostly by how they differ from CISC processors. The instructions are of fixed length, and of regular format. Operations are performed on registers only, of which a larger number are available than on CISC processors. The only memory operations are load and store. The hardware in RISC processors is simpler in principle than in CISC ones, because a RISC architecture relies more on the compiler for sequencing complex operations.”

    see http://www.intel.com/intelpress/chapter-scientific.pdf

    That’s an excerpt from a chapter of Scientific Computing on Itanium®-based Systems, a book published in 2002 by Intel. You can still buy a new copy from Amazon.

    In 2008, for their 40th anniversary, Intel was still proclaiming the superiority of their CISC technology and that the war was over:
    “2. The CISC/RISC debate. The fundamental issue was not about the architecture or the instructions per clock (IPC) but really about software compatibility and RISC failed because it lacked such compatibility. The Intel® Pentium Pro processor with it’s out of order execution and integrated Level 2 cache brought about an end to the RISC vs. CISC debate as Intel was able to demonstrate to the world the transition to more efficient architecture that was fully compatible with existing software. I was a graduate student with Prof John Hennessey of Stanford and we had multiple teacher/student and academic/industry debates both privately and publicly. The debate continued even after Intel introduced the 80486, Intel Pentium® processor and the Intel Pentium® Pro processors. Intel even had RISC developments including the 860 microprocessor which proved equally unsuccessful. Many in the industry joined the ACE consortium to build an open RISC processor. In the paper Micro2000 which a team of us published in 1989, we made the prediction that software compatibility was very fundamental; even if something was better, it needs to be sustainable long enough time for people to develop software architecture.”

    Chuckle. Then came mobile and the issue of power consumption… Even by pulling out all the stops, Atom has not caught up with ARM.

  3. Phenom says:

    No, Pogs, you are incorrect. x86, starting with the 486, is no longer CISC. The two terms are irrelevant nowadays. You may not like that fact, because it takes away the pleasure of mocking x86, but that is your problem.

    You may call x86 a CISC processor, you can call x86 a caterpillar, whatever you like. That can’t change the reality.

    Btw, Intel just decided to pay real attention to Atom. Intel knows how to make chips, and has the technology. I won’t be surprised to see a new reincarnation of Atom which will blow the competition out of the water. Conroe did that once, remember? AMD still can’t recover from the shock.

  4. Kozmcrae says:

    Crow has become a delicacy for Phenom. He must love the taste.

  5. Phenom wrote, “there are no more CISC and RISC processors”.

    x86/amd64 is definitely CISC. It exists. ARM is definitely RISC. Why would anyone think differently?

    “While RISC became commercially successful in Round 1, to its credit Intel responded by leveraging Moore’s Law to maintain binary compatibility with PC software and embrace RISC concepts by translating to RISC instructions internally. The CISC tax was a small price to pay for the PC market.”

    see RISC v CISC Wars Part I

    ARM die size is 1.5 mm² while Atom die size is 6 mm². QED.

    see RISC v CISC Wars Part 2

  6. Phenom says:

    You read it, Pogs, and understood nothing.

    Let me summarize it for you: there are no more CISC and RISC processors – this distinction belongs to the past.

  7. Kozmcrae says:

    “And Android is the most secure OS evar:”

    A purveyor of misinformation if there ever was one.

  8. I read that. Paraphrasing: RISC is a tiny subset of x86. CISC is still flipping millions of bits needlessly, wasting power.

  9. Phenom says:

    Pogs, did you miss that paragraph:

    “The terms CISC and RISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations. The first highly (or tightly) pipelined x86 implementations, the 486 designs from Intel, AMD, Cyrix, and IBM, supported every instruction that their predecessors did, but achieved maximum efficiency only on a fairly simple x86 subset that was only a little more than a typical RISC instruction set (i.e. without typical RISC load-store limitations). The Intel P5 Pentium generation was a superscalar version of these principles. However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internally-buffered micro-operations, which not only helps execute a larger subset of instructions in a pipelined (overlapping) fashion, but also facilitates more advanced extraction of parallelism out of the code stream, for even higher performance.”

    Emphasis mine. It is from the very same source you quoted.

  10. Nope. It takes time to compile/build/link. Developers don’t want to waste time.

    Examples of CISC instruction set architectures are System/360 through z/Architecture, PDP-11, VAX, Motorola 68k, and x86.

    see http://en.wikipedia.org/wiki/Complex_instruction_set_computing

    ARM may be more complex than it used to be but it is far simpler than x86. Intel is putting hundreds of millions of transistors in each core these days. ARM uses 13 million or so. There is still a huge difference in complexity of x86 v ARM instruction sets. On AMD’s hex-core chips 800 million transistors were cut out of the design leaving 1.2 billion transistors, 200 million per core.

  11. Phenom says:

    Pogs, Intel chips starting with the Pentium Pro are no longer CISC chips. Even the 486 showed some elements of RISC design.

    There is no more CISC vs RISC architecture, Pogs.

    Speed of development has nothing to do with interpreted or compiled code. It is all about the tools – language features, libraries and the IDE.

  12. oiaohm says:

    Dr Loser, there is a classic issue with JIT: slower start-up time compared to native.

    JIT can work out great for long runs, but for short runs of applications it’s not that great.

    Profile-Guided Optimisation in modern-day compilers is also about addressing some of the shortfalls of native code relative to JIT.

    Even with PHP we are getting compilers. Bytecode and JIT really don’t cut it all the time. There is another term, Dr Loser: AOT. That is where a cache of pre-converted code is kept from the bytecode. That is closer to native code.
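
    A rough way to picture that start-up cost, by analogy only: Python keeps a cache of byte-code in .pyc files so later runs skip the translation step, much as an AOT cache lets a runtime skip JIT warm-up (bigmodule.py is a hypothetical module; Dalvik/ART details differ):

        import py_compile, time

        src = "bigmodule.py"          # hypothetical module with plenty of code
        t0 = time.perf_counter()
        py_compile.compile(src)       # pay the translation cost once, up front
        print("compile once:", time.perf_counter() - t0)

        # Later runs load the cached byte-code from __pycache__/ and pay
        # almost none of that cost again at start-up.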

  13. Dr Loser wrote, “this distinction between byte-code and compiled code is utterly irrelevant.”

    In a CISC that may be true since the microcode to decode the instructions takes several clock cycles and a long pipeline, but in a RISC system, everything is done in one clock cycle, so interpretation is as slow as on CISC while compiled code flies. Try decoding an instruction sometime. It takes a few steps in CISC, but one clock cycle in RISC. In the old days of hard-wired computers, interpretation took about ten times longer than execution. It’s a lot faster now with caches and such but it’s still slower. You can see it when the typical PC executes PHP. One gains a factor of 2 or 3 by using various optimizations but compiled code is 5 times faster. For the low-powered/battery-saving hardware it matters. In a few more steps of Moore’s Law it may matter less but we’re not there yet.

    The reason people use interpreted code is not performance for the end user but throughput of valuable programmers and off-loading CPU work to the client machines. Instead of compiling and linking, they can interactively tweak their code, speeding the process. Overall, development is faster using interpreted code but when it comes to market, compiled code wins. That’s why there should be a Java compiler going direct to ARM native sometime. Often a client computer is idling so this may not matter, but now we are doing all kinds of stuff on ARM. Speed does matter. I have used many programming languages, interpreters and compilers. I know what I am writing about. The worst interpreter I ever saw even interpreted floating-point maths. That was awful.
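
    A crude way to feel that overhead, assuming CPython (the exact ratio varies wildly with the workload, so the 2x and 5x figures above are only suggestive): time the same arithmetic done by the byte-code interpreter and by a built-in routine that is compiled C under the hood:

        import time

        N = 10_000_000

        def interpreted_sum(n):
            # Every iteration here is dispatched by the byte-code interpreter.
            total = 0
            for i in range(n):
                total += i
            return total

        t0 = time.perf_counter()
        interpreted_sum(N)
        print("byte-code loop:", time.perf_counter() - t0)

        t0 = time.perf_counter()
        sum(range(N))         # the same sum, but the loop runs in compiled C
        print("compiled built-in:", time.perf_counter() - t0)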

  14. Dr Loser says:

    (cease, sorry).

  15. Dr Loser says:

    Please, Robert, will you case this fantasy that you know anything at all about software?

    “GNU/Linux should have some advantage in performance as much of Android/Linux’s app code is interpreted, but the byte-code is compact so mobility is enhanced by using it.”

    For pity’s sake, man, look up JIT on Wikipedia or something.

    Unless you’re modelling a particle accelerator on your mobile phone, this distinction between byte-code and compiled code is utterly irrelevant.

    Good Lord, man: you don’t think the comms stack runs on Java, do you?

  16. Kozmcrae says:

    Microsoft will be compelled to invoke more radical strategies to stymie the march of Android/Linux and GNU/Linux. They won’t see them as radical. They will see them as necessary. The legal system will respond more rapidly and more forcefully against Microsoft as a result. These are just my musings, of course; one scenario, parts of which may come to pass.
