In the striptease to “8”, M$ lets out that they will create graphs to show how downloads are going. Imagine that. Instead of a few words and numbers, they are going to create a data-structure for every transfer, update it periodically and redraw the graphs. That should do a lot to waste resources on ARM, eh? Updating a periodic sample to create a moving average was too easy. Instead they want to make a big deal out of a download.
see Microsoft unveils file-move changes in Windows 8
Aims to fix comical download/copy ‘time to go’ estimates
That’s M$ for you. Instead of improving the sampling or revising the algorithm, they are going to rip and replace with a much heavier burden as if we have nothing better to do with our IT.
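Updating a periodic sample into a moving average really is only a few lines of code; here is a minimal sketch (the function names and smoothing factor are illustrative only, not anything from Explorer's actual implementation):

```python
# Exponential moving average of transfer rate to smooth a "time to go"
# estimate. All names and the smoothing factor are illustrative.
def make_eta_estimator(alpha=0.3):
    """Return an update function fed (bytes_done, total_bytes, elapsed_s)."""
    state = {"rate": None, "last_done": 0, "last_t": 0.0}

    def update(bytes_done, total_bytes, elapsed_s):
        dt = elapsed_s - state["last_t"]
        if dt <= 0:
            return None
        inst_rate = (bytes_done - state["last_done"]) / dt  # bytes/second
        if state["rate"] is None:
            state["rate"] = inst_rate
        else:
            # EMA: recent samples count most, old samples decay geometrically.
            state["rate"] = alpha * inst_rate + (1 - alpha) * state["rate"]
        state["last_done"], state["last_t"] = bytes_done, elapsed_s
        if state["rate"] <= 0:
            return None
        return (total_bytes - bytes_done) / state["rate"]  # seconds remaining

    return update
```

One tiny closure per transfer, a couple of floats of state, no graph data-structures needed.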
Basically you are being an idiot, oldman.
“peoplesoft has very specific support requirements that you are circumventing.”
How dare you claim this. I did speak to them before doing it.
They have very specific requirements on required library versions, and the Java can in fact be OpenJDK or the IBM JDK; if you ask, both work on tile. OpenJDK is the Red Hat Enterprise support requirement, since it is the default there, and the IBM JDK is the AIX support requirement by default.
http://download.oracle.com/docs/cd/B31343_01/psft/acrobat/itools848_062706_itdb2unixnt.pdf
Please read.
The only parts that require x86 or x86-64, or even particular versions of Linux, are the additional parts.
At no point have I circumvented the support requirements.
I have extra requirements that came from asking.
Tuxedo you have to have a direct license for so it can be used on tile, which I already had due to other things being run. Please note Tuxedo is not a cheap license for tile. Oracle is a solution provider; pay enough and you can have anything. If I had not already had Tuxedo, it most likely would have been cheaper to place Peoplesoft on an x86 box if that was all you were running that used it.
Just like running Peoplesoft on PowerPC, MIPS or ARM servers, you have to pay for Tuxedo as an extra. Even, shockingly, to use Oracle's own SPARC servers you have to pay for Tuxedo as an extra to run Peoplesoft fully. It was a guy using SPARC who made me aware that Peoplesoft worked on hardware not listed on the spec sheet and that it paid to contact them.
Yes, Tuxedo is the most expensive part.
The Tuxedo that comes with the Peoplesoft software is very much like the limited SQL servers that come bundled with some programs.
I do have one hack that Peoplesoft also does on particular CPUs: the Micro Focus COBOL compiler. It runs inside qemu on the tile using binfmt-misc. But since it is only used when COBOL code gets changed to produce Java, the performance hit is nothing, and the workaround was approved before I did it.
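On the binfmt-misc trick: the kernel decides to hand a foreign binary to qemu by matching magic bytes in its header, and for ELF files the e_machine field identifies the target CPU. A rough Python sketch of that matching logic (the ARM and x86-64 machine codes are from the ELF spec; the Tile-Gx value of 191 is my assumption):

```python
import struct

# binfmt_misc registrations for qemu-user match bytes of the ELF header;
# the e_machine field (offset 18, little-endian u16) names the target CPU.
# EM_TILEGX = 191 is an assumption on my part; 0x28/0x3e are from the spec.
ELF_MACHINES = {0x28: "arm", 0x3e: "x86-64", 191: "tilegx"}

def elf_target(header: bytes):
    """Return the target CPU name for an ELF header, or None if unknown."""
    if header[:4] != b"\x7fELF":
        return None
    (machine,) = struct.unpack_from("<H", header, 18)
    return ELF_MACHINES.get(machine)
```

When a registered pattern matches, the kernel launches the qemu interpreter with the binary as its argument, which is why an x86 COBOL compiler can sit on a tile box and just run.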
Again, I had written to Peoplesoft before I did it and got approval in writing. If they were not going to give me support on what I was considering doing, I would not have done it. It is surprising what you can get when you ask.
Yes, one of the things I had to do before going ahead was contact Oracle/Peoplesoft about what I was going to do, to check that support would not be voided.
This is your problem, oldman: you don't know the limits of what you are really allowed to do while remaining fully supported. Asking is required.
Management understands that the tile now uses less power, gets the job done taking up bugger all space in the rack, and has support contracts covering it.
Yes oldman, I have had idiots before walk in, find Peoplesoft running on the tile, run to the boss saying it is a hacked-up environment in an attempt to get me fired, and get fired themselves the same day for incompetence. Everything on there is approved by Peoplesoft; it is not something I have just done.
People make the same mistake with Peoplesoft on ARM servers: seeing them and thinking it is impossible. The question really should be: do you have the paperwork on that?
Yes, I would say setting tile up to run Peoplesoft is a pain in the ass in paperwork and I don't recommend doing it. It was only that I had tile hardware on hand and it fixed a load problem we were having without requiring massively more rackspace, which we had just run out of and were going to be waiting quite a few months for if we were lucky, or years at worst. Yes, tile chips are some of the biggest bang for space you can get.
Cost is not always the reason for going to some of these different cpus. Space can be a big driver.
Mind you, Peoplesoft on ARM would be a fairly neat walk in the park. None of the nightmares of cross-distribution package naming.
Peoplesoft is really a good example, oldman. There is very little software for Linux that, if you ask, is not cross-CPU. Sometimes there are extra bills to pay. But just reading the spec sheets without talking to the company making the software will trick you into thinking items are unsupported when in fact they are supported if you ask.
So yes, my path is not the cheap path.
“oldman on tile is been items red5 with closed source billing. sap and peoplesoft many other closed source java based items. Including full pos and inventory control software java based and closed source. ”
What I find interesting is that you seem to think that running unsupported configurations of enterprise software is OK. As far as I know peoplesoft has very specific support requirements that you are circumventing.
Does your management understand that they are running business software in a hacked up environment?
oldman, running on tile are items like Red5 with closed-source billing, SAP and Peoplesoft, and many other closed-source Java-based items, including full POS and inventory-control software, Java-based and closed source. This is why I kind of blew my stack at you, oldman.
Red5 really is a throughput hog.
Running on tile is harder than running on ARM. What I am doing on tile would be a complete walk in the park on ARM servers, without much effort at all.
The tile hardware I am using is from Tilera, same with the distribution. Yes, they are very old-school Unix-like, with no official support from third-party ISVs at all. Yet most of the ISV products for Linux work basically perfectly. Just don't tell them it's on a Tilera; I normally just tell them I am running Debian Linux on ARM, which gets less of a “what the?”, and their instructions work with ZOL. Yes, ZOL, Tilera's own Linux distribution.
Most ISV software for Linux is not tied to a CPU. It's Java or something else CPU-neutral.
There are a few makers of ARM servers, and there are also quite a few MIPS makers these days.
The simple point is that for a lot of workloads the CPU is not that much of a factor. Power usage and responsiveness are key.
There is a standard platform in ARM, but it's Cortex-A7 or greater revision.
Really we need another ARM CPU name, one that says this bugger supports telling generic kernels enough information that they can work. Then we would have distributions like Red Hat releasing for it.
oldman, basically x86 is important to Windows. To Linux it is almost completely unimportant.
“What you do with your system is your business but others could do well with other configurations.”
Thank you for this acknowledgement. It is really the only point I was making.
As far as what can be done, that too is the business of the person doing it, no one else's.
What you do with your system is your business but others could do well with other configurations. In my own home, I have several PCs and I could shove most of what they do onto a server with no problems and probably an increase in performance. The increased performance results from file caching, more RAM, faster CPU and RAID on the server. I would have done that already except my server died. It seems to have a faulty motherboard. My wife and I both could run thin clients perfectly well for what we do.
“If you had access to the source code, you could improve the performance of your system by using a cluster of PCs/servers to run things. That is essentially what you do already with multi-core CPUs, using network protocols that can be extended to more powerful systems. At some point the most powerful PC on Earth will not be able to do some task well and a network of machines is required.”
You don't get it, Pog. Over and above the fact that it is MY resource bought with MY money, I have zero interest in maintaining source anything (even if it were available). Even if I did, I would be more likely to purchase multiple systems for MY use. That is the point in this case: it is personal computer technology. It is MINE to do with as I please.
“All kinds of creative people in arts and science have shared. It works.”
In the sciences, doing research using expensive shared resources is SOP. In the arts, not so much. But in the end this is not about institutional use, but about personal use.
Sharing does not work for me personally. Period.
Where, Pog? Most independent creative people that I know maintain their own equipment these days. If we are talking about expensive specialized equipment, then I can understand, but that is, as they say, a corner case. And in that case sharing is baloney when you're trying to create something.
That is a special case. Millions of users consume content rather than producing it. Many who produce content do so at a keyboard or with a digital camera, and even a smart phone can keep up. If you had access to the source code, you could improve the performance of your system by using a cluster of PCs/servers to run things. That is essentially what you do already with multi-core CPUs, using network protocols that can be extended to more powerful systems. At some point the most powerful PC on Earth will not be able to do some task well and a network of machines is required.
All kinds of creative people in arts and science have shared. It works.
“You suggested sharing is not good. I showed that sharing is essential.”
There was a time when the only way one could compose music electronically was to take one's turn using an expensive shared resource. That is where I began my journey into the computer world. It is where I learned the power of what they called Computer Music. It was also where I vowed to work to duplicate that capability in resources that were mine to use without limitations.
Fast forward 30+ years and I now have that capability literally under my fingertips as I type this. It is, to be sure, a big powerful system; it is, some would argue, overkill. What it also is, is a tool with which I can compose the large symphonic compositions that I want to, when I want to.
Please explain to me why, after finally having arrived at what I want, I should want to give it up and literally get back in line for access to this as a shared resource, especially when I can afford it personally.
In this personal case, Pog, sharing is most certainly not essential.
“So there are some quite large workloads that ARM systems or other CPU types can be doing for businesses. I know this from using tile processors. I don't have support from Red Hat, SUSE or any of the other major distributions, yet I still run a large number of closed-source applications on them, mostly for power and throughput reasons.”
Interesting. I am very curious, Mr oiaohm. Would you mind naming the closed-source applications that are running on proprietary tile processors? I'd also be interested in the vendor of your hardware.
You suggested sharing is not good. I showed that sharing is essential.
One can set the priority of various processes to accomplish the same ends without pausing any process. In a copy process there are longish periods when the system is waiting for completion of some task. It makes sense to do the lower priority operations in those waiting periods.
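The priority trick described here is exactly what Unix nice levels provide; a minimal sketch on Linux (the nice value of 10 is arbitrary, and note an unprivileged process can raise its nice value but never lower it back):

```python
import os

def deprioritize(pid=0, niceness=10):
    """Lower the scheduling priority of a process (pid 0 = calling process).

    Higher nice values mean lower priority, so a bulk copy reniced this way
    yields the CPU to interactive work during its busy periods and soaks up
    the idle waiting periods, with no pausing needed.
    """
    os.setpriority(os.PRIO_PROCESS, pid, niceness)
    return os.getpriority(os.PRIO_PROCESS, pid)
```

The same idea applies to disk scheduling via ionice, which is often what matters most for a copy.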
Calxeda touts the advantages this way:
It’s hard to think there’s no role for those features in large datacentres or small ones that want to do big jobs. The software will come even if the legacy apps don’t get ported. Which came first, the PC or the software for the PC?
“Oracle and Red Hat both have ARM versions in the development lines of their Linux products.”
Mr. oiaohm, once again you miss the big picture.
There is no standardized ARM server platform, and no engineering “history” for building one.
None of the major software ISV’s are shipping production ARM ports of their software.
Most importantly, there is no demand for server-class ARM systems. The theoretical power superiority of ARM is unlikely to overcome the inertia of the x86 market. Remember, I can get servers that use low-power x86 parts if I want. While they do not have anywhere near the power parsimony that an ARM-based system would theoretically have, they are here now and have all the commercial and non-commercial software that I need.
What you personally can hack together on one-off hardware and software from no-name vendors IMHO doesn't count. What does count is what the market dictates, and the market dictates that right now ARM is king of the mobile device market, nothing more.
“It surprises most people how many high-end closed-source applications on Linux are in fact CPU-neutral.”
Big Deal. Unless the vendor actually supports the ARM platform, that particular fact is irrelevant.
Wow, this is a great idea. Sometimes I copy different batches of files at the same time, and with this new behaviour I could pause one transfer to speed up another and see how that affects the transfer in these graphs. The default behaviour is to hide these graphs, so they will not consume CPU power by default.
Oracle and Red Hat both have ARM versions in the development lines of their Linux products.
“As of this point in time neither Red Hat nor (I believe) SUSE, the two recognized commercialized Linux distributions, are offering an ARM port of their enterprise Linux. This will need to exist before many business entities even consider using Linux on ARM.”
There is a Red Hat Enterprise Linux ARM port, but it is not generic. If you contact Red Hat you will be given a list of supported ARM servers/CPUs. It is highly selective at this stage, but if you have the right hardware it does exist, and yes, you can get a full Red Hat support contract.
There is an issue with ARM chips: there has been no standard way to detect what the heck is in them, so an OS kernel loading up has had to know in advance where the memory controller, IDE controller and other things were attached. I.e. ARM in a lot of chips is still like the pre-DOS days, pre Plug and Pray, basically pre even having a BIOS in some of them.
SUSE still has ARM only in openSUSE, due to not being happy with the percentage of coverage.
The “Intel Itanium processor” team at Red Hat is now the ARM team.
oldman, basically you are about 8 months out of date on what is going on. 1 to 2 years is about the time frame for ARM to appear in the majors as mainline supported. Red Hat and SUSE are not what you would call fast movers on platforms.
Also you need to learn to keep your mouth shut on some things. SAP and Peoplesoft make heavy use of Java; they provide instructions for installing on Debian and in fact support installing on Debian. Shock horror: they work on ARM systems with no issues other than no Oracle database.
Yes, you don't have Oracle database. All SAP and Peoplesoft products can back end onto PostgreSQL as well as Oracle database.
Two of your cases against businesses using an ARM server are fake, since SAP and Peoplesoft are CPU-neutral and fairly much database-neutral, or at least enough so that changing CPU does not matter.
So there are some quite large workloads that ARM systems or other CPU types can be doing for businesses. I know this from using tile processors. I don't have support from Red Hat, SUSE or any of the other major distributions, yet I still run a large number of closed-source applications on them, mostly for power and throughput reasons.
Tile-Gx chips leave x86 in the dust in most cases for the amount of network traffic they can handle.
It surprises most people how many high-end closed-source applications on Linux are in fact CPU-neutral.
Changing to ARM, tile, whatever is mostly not an issue at the high end. The only real delay is support contracts from companies like Red Hat. Even then, some companies don't care; they are big enough to have their own internal support systems.
“Well, we should get rid of roads, telcos, airlines, municipal water/sewage systems because you don’t need/want to share.”
So you are saying that because someone is exercising their right to purchase a resource that is dedicated to them, we should forego having societal shared resources altogether. Does that sound right to you, Pog?
Actually to me it sounds, forgive me, almost childish. It is as if you can't tolerate the notion that someone might choose not to share something that that person bought for their own needs.
Am I wrong about this, Pog?
Well, we should get rid of roads, telcos, airlines, municipal water/sewage systems because you don’t need/want to share.
“I can transcode a file on the server just as easily as the client and the server is available for others in my organization should they want to do the same. ”
In an organization short of resources, one might indeed need to consider the pooling of resources. However, the point being made was from the viewpoint of personal use, Pog. The whole point of a personal computer is to perform tasks personally using personally dedicated resources.
I don’t think Yonah gives a damn about sharing resources. Like most personal computer users he wants his resources for his needs, period.
The “waste” and “inefficiency” of this may offend you, but that is your problem, not his.
“With FLOSS there does not seem to be a barrier since the GNU/Linux system is already ported to ARM and is familiar. Particular applications if written in a portable language like Java or PHP should not be a problem.”
It doesn't work that way in business, Pog.
The vast majority of Fortune 500 businesses don't just use FOSS. They are running commercial closed line-of-business software (Oracle, SAP, Peoplesoft), none of which is supported on ARM. Even where they use FOSS, there is no guarantee that the application and its packages are supported on ARM, nor do they just use any old distribution of Linux, especially not hackers' crap like Debian.
As of this point in time neither Red Hat nor (I believe) SUSE, the two recognized commercialized Linux distributions, are offering an ARM port of their enterprise Linux. This will need to exist before many business entities even consider using Linux on ARM.
Then you need to have the major hardware vendors build systems based on ARM.
None of which is going to happen unless there is a real benefit, and I'm sorry Pog, IMHO it is questionable whether the “alleged” benefit of reduced CPU power consumption is going to be worth the investment.
The “reserve” power idea is sound but it does not need to be on the client with a networked OS like GNU/Linux. I can transcode a file on the server just as easily as the client and the server is available for others in my organization should they want to do the same. The overall system saves power, capital cost and maintenance that way. I can put a Hell of a lot more computing power in one server shared by many compared to replicating that on N client machines. It’s just more effective use of time, money, energy, waste, etc.
The idea of conserving energy is an infinite reduction in watt-hours for the cost of conversion. That could be huge where there are many identical servers in a room, for instance. Suppose I have 100 x86 servers drawing 400 watts each. That's 40KW. If I can replace them with 1000 ARMed servers drawing 10 watts each, I save 30KW all day long. Suppose the conversion costs $100K or some absurdly large amount. I can calculate how many hours I have to run to break even. At 10 cents/KWH, I save $72 per day. If I run the new system for about 1400 days, not quite 4 years, I break even on power consumption for the servers, and even earlier if I save on cooling/space. Can I run them a few more years to get a return on the investment? Likely. Further, I may have savings on maintenance with fewer fans in the system. Less noise, too, is worth something for the sanity of workers. I would bet lower temperatures might help devices in the system last longer as well.
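The arithmetic above is easy to check in a few lines (the function and its names are mine, just reproducing the back-of-envelope figures):

```python
# Reproduce the back-of-envelope figures: 100 x86 servers at 400 W each
# replaced by 1000 ARMed servers at 10 W each, for a $100K conversion cost.
def breakeven_days(old_watts, new_watts, conversion_cost, price_per_kwh=0.10):
    """Days of runtime until the power savings repay the conversion cost."""
    saved_kw = (old_watts - new_watts) / 1000.0       # 30 kW saved
    saved_per_day = saved_kw * 24 * price_per_kwh     # $72/day at 10 cents/kWh
    return conversion_cost / saved_per_day

days = breakeven_days(100 * 400, 1000 * 10, 100_000)  # about 1389 days
```

Roughly 1389 days, agreeing with the "not quite 4 years" figure, before counting anything for cooling, space or maintenance.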
There’s plenty of upside with ARM. The position should be to go with ARM unless there is some unbreachable barrier. With FLOSS there does not seem to be a barrier since the GNU/Linux system is already ported to ARM and is familiar. Particular applications if written in a portable language like Java or PHP should not be a problem.
“The ARMed CPUs are intended to do enough at the lowest cost and power consumption. ”
And in the end that will limit their utility. Once you get beyond mobile content-consumption appliances into the area of content creation, the ARM processors are IMHO likely to remain too anemic to play.
This is OK, however, because in the end, as you have noted, they are currently targeted at the mobile marketplace, where they are doing quite nicely, thank you.
“The clustered ARMed servers are another matter. They can equal the throughput of Intel at lower power consumption although not necessarily lower total cost. They are competitive by the numbers I have seen.”
The clustered ARM processors are basically worthless outside of some niche markets Pog. There is such a huge installed base of x86 and now x86-64 based software that it is simply not worth the effort of conversion to a system that offers very little benefit.
Nothing Microsoft does is going to impress you, but as a Windows user myself I think it's a nice idea. According to the MSDN blog posting linked in the article you give us, this new design will be used for both Internet Explorer downloads AND copy/move operations via Explorer. They removed or relocated some of the nerdy info that most people don't understand or find useful anyway. Also, the graphs are only displayed when the “more details” option is clicked. I don't think resource usage for such graphs is a legitimate concern.
These changes were made based on a study of how real people, not just computer nerds, use Windows. They even cite that less than 1% of people (and I’m one of them) use a 3rd party copy tool such as CopyHandler. By the way, idling CPUs do in fact consume less power than when running under a load. I like having more processing power at my disposal. I like that transcoding a video can run in the background and I still have enough power left to play a game or do anything else without any loss in speed. The less time I have to wait for my computer the more time I have for the rest of my life. You might be happy doing “enough”, but I’d rather have speed to spare.
I don’t think any ARMed CPU on the roadmap will come close to being as powerful as one of Intel’s hair-driers. That’s not the purpose of the CPUs. The ARMed CPUs are intended to do enough at the lowest cost and power consumption. That is entirely a different design goal. Intel, OTOH has almost always tried to make each new CPU more powerful than the last to support the bloat of that other OS. They are mostly way over-powered for running GNU/Linux. What’s the point of having the CPU idling 95% of the time? It’s a waste of silicon, cash, resources, and energy. The clustered ARMed servers are another matter. They can equal the throughput of Intel at lower power consumption although not necessarily lower total cost. They are competitive by the numbers I have seen.
So they are betting ARM is going to be as powerful as the latest Intel?
They are in a good position to know, so maybe 😉