There have been quite a few rumours surrounding AMD's next chip refresh, the Zen architecture. DigiTimes is adding to them with a story today which places the release date at the end of 2016 at the earliest. Their sources suggest an issue with GLOBALFOUNDRIES' 14nm FinFET process is delaying the release, which is very bad news for AMD. The claimed 40% improvement over current-generation processors will not mean as much in a year or more, and with AMD's current financial situation, releasing a new CPU for people to buy is something that needs to happen. Let us hope that the delay is exaggerated or that the production issues are resolved in the coming months.
"AMD's next-generation Zen architecture is expected to arrive in the fourth quarter of 2016 at the earliest, but sources from motherboard players are concerned that the late arrival of the new platform may put AMD in a rather difficult competitive position."
Here is some more Tech News from around the web:
- Google Updates: Because you're sick of hearing about Apple @ The Inquirer
- Apple iPhone 6S: Same phone, another day, but TOTALLY DIFFERENT @ The Register
- Plug In an Ethernet Cable, Take Your Datacenter Offline @ Slashdot
- Microsoft is downloading Windows 10 to your machine 'just in case' @ The Inquirer
- Well, what d'you know: Raising e-book prices doesn't raise sales @ The Register
- Tech ARP 2015 Mega Giveaway #6: WD My Passport
Is this a joke?
Fourth?
Is that a typo?! Like, even first quarter is too late for AMD, but fourth?! -.-
RIP, AMD.
Not really, if those Zen cores are going into some HPC/workstation SKUs first. Who needs just the gaming business to sell those ACE-loaded GPUs with asynchronous compute engines? I say go for the users with the really big bucks: the HPC/workstation/server market. There will be the console market as well, in addition to the discrete gaming market. Why worry about only the gamers when there are also the game producers who will be using those workstation Zen-based APUs with professional graphics to produce all of a game's graphics and 3D models? Get some of that professional graphics workstation business too. AMD's GCN GPUs are about more than just gaming graphics; they are about HSA compute for graphics, gaming, and other workloads. So who needs to rely only on Zen CPU cores, or Intel's CPU cores, for game physics and other calculations when a lot of that work will be shifted by game engines to the Vulkan/DX12 graphics/GPGPU APIs and done on the GPU anyway?
Why the need for all that CPU power going forward, when more gaming work will be offloaded to those AMD asynchronous compute units to run games better and with lower latency than any CPU could? GPUs curb-stomp CPUs at parallel number crunching anyway; if you don't believe me, just pull out the discrete GPU and try to play a high-end game on the CPU alone, and see how far that gets you.
Honestly, even with a Q1 release date, AMD was going to have a low chance of penetrating the HPC/workstation market. Going by their own slides, Zen is expected to perform around the level of Haswell, which sounds impressive until you realize Broadwell and Skylake have already come out. Keep in mind those slides are almost certainly over-optimistic (read: I'll believe it when I see it).
GPGPU is not the magic bullet a lot of people think it will be. There are still very many performance-intensive tasks that cannot be parallelized in that kind of way.
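A minimal sketch of that distinction (a toy example, not any particular game or HPC workload): the first function below is a data-parallel map that spreads naturally across GPU lanes, while the second has a loop-carried dependency, so every step must wait for the previous result and extra parallel hardware sits idle.

# Toy example: parallel-friendly work vs. inherently serial work.

def scale_all(values, factor):
    # Data-parallel: every element is independent, so this maps naturally
    # onto GPU lanes (SIMD/SIMT hardware).
    return [v * factor for v in values]

def iterate_logistic(x, r, steps):
    # Serial: each step needs the previous step's result (a loop-carried
    # dependency), so throwing more parallel lanes at it does not help.
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

print(scale_all([1.0, 2.0, 3.0, 4.0], 2.0))   # [2.0, 4.0, 6.0, 8.0]
print(iterate_logistic(0.5, 3.7, 10))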
Another point of note for professional graphics workstations is that they would use dedicated cards (Quadro, FirePro), and even if they didn't, Intel's GPUs are getting better with each new release and are quickly catching up to AMD's current APUs (not to say that Zen won't fare better; we just don't know right now).
I'm an AMD fanboy, still rocking a Phenom II X6, and I would absolutely love for Zen to come out, but the reality is they have more or less completely lost the professional markets (servers, workstations) after the failure of Bulldozer. Any delay beyond Q1 2016 is just going to add to that pain.
It's the price-to-performance ratio that the server/HPC industry will use as the metric for new purchases of processing equipment, that and overall performance per watt. It's not about the brand or brand loyalty; it's about getting the maximum amount of performance for the server/HPC workloads, and those closely integrated CPUs and GPUs on AMD's APU products will be very efficient for the complex analytical workloads where vast amounts of parallel information are processed to produce the proper analytical results. We are talking about large amounts of parallel data, with the calculations done on the GPU's massively parallel vector units. Dedicated GPUs connected over PCIe do not have the necessary bandwidth compared to AMD's new workstation SKUs, which will have plenty of energy-efficient HBM connected to the CPU and GPU on the Zen/Greenland-based APUs. It's not a simple question of FirePro branding here: these AMD HPC/workstation APUs will have that wide, parallel HBM bandwidth, able to run at much lower clock speeds and still provide more effective bandwidth than the narrower PCIe interfaces, which have to run at around 7 times the clock speed of the HBM2 that will come on the new AMD server SKUs.
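As a rough back-of-the-envelope sketch of that wide-and-slow versus narrow-and-fast point (round illustrative numbers, not AMD or JEDEC specs): a first-generation HBM stack runs a 1024-bit interface at roughly 1 Gbit/s per pin, while a PCIe 3.0 x16 link is 16 lanes at roughly 8 Gbit/s each.

# Rough, illustrative peak-bandwidth comparison (round numbers, not vendor specs).

def bandwidth_gb_per_s(width_bits, gbit_per_s_per_pin):
    # peak bandwidth = interface width * per-pin data rate
    return width_bits * gbit_per_s_per_pin / 8.0

hbm_stack = bandwidth_gb_per_s(1024, 1.0)   # one HBM1 stack: ~128 GB/s
pcie3_x16 = bandwidth_gb_per_s(16, 8.0)     # PCIe 3.0 x16, before encoding overhead: ~16 GB/s

print("HBM1 stack: ~%.0f GB/s" % hbm_stack)
print("PCIe3 x16 : ~%.0f GB/s" % pcie3_x16)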
Power usage and leakage go up with clock speed and voltage at a much greater than linear rate. The use of an interposer for these AMD APUs will also allow AMD to use much wider chip interconnects and coherent connection fabrics between all the dies placed on the interposer. The individual dies get the same very wide parallel connections to each other via the interposer's ability to host tens of thousands of traces between the CPU/GPU/other processing dies and the HBM memory stacks. The interposer allows system designers to give each separate chip attached to it the same amount of potential connectivity as if they were all part of one single monolithic die.
The interposer, being silicon itself, allows for the same trace density as any silicon substrate would under the best current fabrication processes. It allows a level of chip-to-chip interconnection that the old PCB-based approach never could. For all intents and purposes, the individual dies attached to an interposer can be thought of as a single massive monolithic die from a computing standpoint, while still allowing each chip to be fabricated separately on whatever process best suits its individual function.
You need to read up on some of AMD's proposed exascale APU/HPC APU systems before you come to an uninformed conclusion.
You are still a little CPU-obsessed, because with the new GPU designs there are plenty of separate ACE units to offload tasks to, and if there are any dependency issues unique to serial compute workloads then those ACE units can context switch, SMT style, and work on other processing threads until the serial dependency is resolved. Those ACE units are made for SMT-style compute among the individual processing threads that are running, so expect that processing will not cease while waiting for any stalled thread's dependencies to be resolved; that is what SMT-enabled asynchronous processing is about! GCN is not the old GPU technology, and the Arctic Islands GPUs will be on a completely different level of GCN asynchronous processing when they come online.
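As a loose software analogy for that latency-hiding idea (purely a toy sketch; this is not how ACE hardware actually schedules wavefronts): keep several pieces of work in flight and, whenever one is stalled on a dependency, switch to another that is ready.

# Toy round-robin scheduler: switch away from stalled work instead of idling.
from collections import deque

def worker(name, stall_steps):
    # Yield "stalled" while pretending to wait on a dependency, then finish.
    for _ in range(stall_steps):
        yield name + ": stalled, switching away"
    yield name + ": done"

def run(workers):
    queue = deque(workers)
    while queue:
        w = queue.popleft()
        status = next(w)
        print(status)
        if not status.endswith("done"):
            queue.append(w)   # not finished yet; come back to it later

run([worker("thread-A", 2), worker("thread-B", 0), worker("thread-C", 1)])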
May explain why they spun off the GPU division.
Speaking from ignorance about the manufacturing processes: could they do a Q1/Q2/Q3 paper launch and deliver very limited supplies until Q4? It wouldn't help the situation on the ground, but maybe the PR bluff tactic could salvage something.
No spinning off AMD's GPU division when GPUs are useful in the server room/HPC market too, with number-crunching ability vastly superior to any CPU's!!! AMD's HPC/workstation APUs with built-in Greenland GPUs/vector processors will be very popular for loads of high-end analytical workloads that puny CPUs would take forever and a day to compute. Everyone in the mobile market is already on board with HSA-style compute anyway, and AMD will be bringing that HSA compute to the HPC/workstation market with its Zen-based, Greenland-graphics accelerated processing units. No one who makes SOCs/APUs/CPUs for markets from mobile to supercomputers can continue to ignore the need for GPU compute and the computing power GPUs bring to all types of workloads. AMD's entire product line is now wedded to merging the CPU with the GPU for all types of computing applications, so don't expect any spinoffs at AMD at this stage of the game. Just go to the HSA Foundation's website and see all the major companies that are all in with HSA and doing more general-purpose compute on the GPU.
The APU systems on an interposer will have HBM and even better integration between CPU and GPU, maybe with a little FPGA compute added to the HBM stacks to round out even more compute ability on top of the relatively small amount of FP and other compute resources the CPU brings to the equation. It's not going to be easy to separate the CPUs from the GPUs in AMD's systems going forward; the level of integration between them will be that complete.
Look for some form of high-end AMD Zen/Arctic Islands gaming APU with HBM for future gaming systems, running even more of the game engine code on the GPU's ACE units, with even less need for anybody's CPU resources. CPUs are such mooks when it comes to raw number-crunching ability.
HSA is not going to last. Intel actually got ‘fusion’ right.
Intel is blurring the CPU/GPU distinction, and the death knell for Nvidia/AMD is written into Intel's specifications for its upcoming Xeon architecture.
If you are in the HPC sector, Intel's proposed design is the ultimate solution. AMD, even though it has a better compute GPU than Nvidia in many areas, has little to lose. But Nvidia is not going HSA… they know Intel got it right.
Tell that to HSA Foundation members like Imagination Technologies (IT), the maker/designer of the PowerVR series of GPUs, and tell that to any company using the latest PowerVR GPUs, since PowerVR has HSA-enabled GPU features. Tell that to ARM Holdings, creator of the ARM CPU and ISA and the Mali GPU; ARM Holdings is another member of the HSA Foundation. Even Apple, which uses the Metal graphics API, is doing HSA-aware designs, utilizing the PowerVR GPU designs it licenses from IT along with the ARM ISA from ARM Holdings.
Just go and read the membership list for the HSA Foundation and add up the market value of the companies that make it up. Samsung is not small, and it is an HSA founding member along with AMD, Qualcomm, IT, ARM Holdings, and Texas Instruments! There are more members besides the founding members listed above.
Look at the latest graphics APIs, Vulkan and DX12: they are HSA-aware graphics APIs that can make use of lots of asynchronous compute ability on more than just AMD's GPUs. HSA is big in the mobile market, and it is now big in the gaming market, with all those new graphics APIs derived from (or direct copies of) Mantle, and Mantle was designed from the ground up to be an HSA-aware graphics/GPGPU API. Vulkan is in fact the public version of Mantle with the name changed to Vulkan!
Get over that Intel brand-loyalty affliction of yours, because Intel is never going to make it easy to compute on its GPU cores and have that ability steal business from its expensive server SKUs. You see how Intel selectively gimps its consumer SKUs so they don't cannibalize any of its server SKUs in the marketplace. The same goes for Nvidia with its gimping of asynchronous compute on its gaming SKUs; Nvidia doesn't want gamers to have extra compute ability on its consumer SKUs, so anyone who wants to run compute workloads has to buy Nvidia's more expensive professional SKUs to get that stripped-out compute ability back.
Does Intel even have unified memory addressing between its CPU and GPUs fully implemented yet? Does it have asynchronous compute ability on its GPU cores/units? Does Intel even have the number of execution units/FP units/other units compared to the larger number of EUs/CUs/other units that AMD and others have on their GPUs? With Intel, those consumer SKUs' GPUs had better not have more compute ability than any of Intel's server SKUs, because we all know what Intel would do about that, so Intel's GPU compute will be closely regulated so as not to compete with its server-business SKUs.
Market segmentation and market milking are job one for companies with monopoly market share, and Intel and Nvidia are prime examples of companies that carefully segment their product offerings in order to milk as much profit from their customers as possible. So expect even more SKUs from Intel and Nvidia that are strategically designed to limit the compute performance available per GPU/CPU/SOC SKU. Those monopolies are experts at removing or limiting compute and graphics functionality to the bare minimum and forcing the consumer to pay even more to get that ability back. How many of Intel's lower-priced Skylake SKUs will even get the full complement of Intel's already limited "top end" GPU resources?
Intel seems to be attempting to do compute as an extension of their CPU ISA. Knights Landing is, as far as I know, just simple multi-threaded CPU cores with AVX-512 extensions. This locks the implementation into the ISA: that implementation must be implemented or simulated in all designs going forward. This is probably not the best way to go. With GPU-style compute, you can leave the code in a higher-level form and do some kind of run-time compilation. This isn't a performance issue like it is for most CPU code; the amount of code compared to the amount of data is very small, so the overhead of run-time compilation is insignificant. Run-time compilation allows the code to make the best use of the available hardware. That flexibility is not as easy to achieve with the hardware fully specified in the ISA.
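A very rough software analogy for that run-time compilation point (a toy sketch, not how any real GPU driver or compute runtime is implemented; the kernel, names, and "detected width" are all made up): keep the kernel as source text, query the hardware at run time, then generate and compile a variant specialized for it.

# Toy run-time specialization: generate and compile kernel source on the fly.

KERNEL_TEMPLATE = """
def saxpy(a, x, y):
    # Chunk size below is pretend-tuned to the detected hardware width.
    out = []
    for i in range(0, len(x), {chunk}):
        out.extend(a * xi + yi for xi, yi in zip(x[i:i + {chunk}], y[i:i + {chunk}]))
    return out
"""

def build_kernel(simd_width):
    # "Compile" a kernel variant specialized for the reported SIMD width.
    source = KERNEL_TEMPLATE.format(chunk=simd_width)
    namespace = {}
    exec(compile(source, "<generated kernel>", "exec"), namespace)
    return namespace["saxpy"]

detected_width = 8                      # stand-in for a real hardware query
saxpy = build_kernel(detected_width)
print(saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0]))   # [12.0, 24.0, 36.0]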
It seems like Intel would have learned from their mistakes. The massive failure that was EPIC/IA-64 had similar issues (although that was not its only issue). IA-64 had very low-level hardware details exposed in the ISA. When it became clear that the original approach was not working for more modern hardware implementations, they could not change it, and the entire architecture was abandoned after Intel spent billions of dollars. ISAs can be viewed as an interface, and in that case it is best to keep low-level details out of the ISA to allow more flexibility to change the underlying implementation. This shouldn't be as bad as EPIC, since they can always take AVX-512 code and translate it into a form that fits whatever the underlying implementation has evolved into, but that isn't anywhere near as flexible as going directly from high-level code to something explicitly tailored to the current hardware.
The problem with Knights Landing (KL) is that it does not have enough FP units compared to the average GPU, and because KL has a much lower FP unit count it has to be clocked at a much higher speed. GPU accelerators have hundreds or thousands of parallel FP units clocked at a lower speed. GPUs are designed using high-density layout libraries tuned for lower power use and lower clock speeds, and they make up for the lower clocks with their total number of cores and FP/other units. GPUs are, overall, more tuned for power savings relative to the total number of FP/other execution units operating in parallel.
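A back-of-the-envelope illustration of that wide-and-slow trade-off (the unit counts, clocks, and voltages are invented round numbers, not KL or GPU specs): peak throughput scales as units × ops-per-clock × clock, while dynamic power scales roughly as V² × f per unit, so many units at a lower clock and voltage can match or beat fewer units at a high clock while burning less power.

# Illustrative only: throughput vs. dynamic power, wide-and-slow vs. narrow-and-fast.

def peak_gflops(units, flops_per_unit_per_clock, clock_ghz):
    return units * flops_per_unit_per_clock * clock_ghz

def relative_dynamic_power(units, voltage, clock_ghz):
    # Dynamic power per unit scales roughly as C * V^2 * f; capacitance is
    # folded into the arbitrary units here.
    return units * voltage ** 2 * clock_ghz

designs = {
    "narrow and fast": dict(units=256, voltage=1.0, clock_ghz=3.0),
    "wide and slow":   dict(units=1024, voltage=0.8, clock_ghz=1.0),
}

for name, d in designs.items():
    gflops = peak_gflops(d["units"], 2, d["clock_ghz"])     # 2 flops per FMA
    power = relative_dynamic_power(d["units"], d["voltage"], d["clock_ghz"])
    print("%s: ~%.0f GFLOPS, relative power %.0f" % (name, gflops, power))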
And since the introduction of AMD's GCN GPU microarchitecture, AMD's GPUs have been enhanced with more CPU-like features to allow for more asynchronous computing and simultaneous multithreading ability on the ACE units. I do not think it will be too difficult in the future for AMD to complete the transition and give its ACE units even more of the instructions and abilities normally associated only with CPUs. AMD has already applied its high-density design libraries to its APUs' CPU cores, using them in the creation of the Carrizo line of APUs. So AMD is going to be in a position to lead in total GPU compute ability relative to those not going with the HSA style of hardware/software/graphics-API ecosystem as standardized by the HSA Foundation and its many members.
Even Imagination Technologies (IT) is adding this type of ability to its GPUs, including virtualization features to allow more sharing of GPU/GPGPU workloads among different users and applications on its GPU/accelerator-based SOCs. The mobile market is most definitely on board with the basic hardware/software/API design tenets of heterogeneous compute across all of the CPU/GPU/DSP/other processing units of its many ARM/MIPS/other-based APUs/SOCs. Even Apple does HSA-style compute on its mobile device SOCs via the Metal graphics/GPGPU API. Apple may not be a direct member of the HSA Foundation, but indirectly and by default Apple is in with HSA: it is a part owner (shareholder) of Imagination Technologies and licenses the ARMv8-A ISA for its custom Cyclone A-series CPU cores, so Apple has proxy representation at the HSA Foundation.
Intel's Knights Landing will have a difficult time beating the GPU accelerators' FP performance-per-watt metric in the HPC/server market, especially against AMD's upcoming server/HPC APUs with HBM on an interposer! Even in the mobile device market, people are still a bit too CPU-centric when discussing the overall performance of SOCs, when HSA-style sharing of GPU compute alongside the CPU is already used extensively there. That Apple A9/A9X, with its most likely PowerVR-derived custom GPU, may well be able to outperform a good chunk of the current laptop market; that market is so ultrabook-oriented in the first place that the overall compute ability of the CPUs used in laptops has shrunk relative to the more powerful regular-form-factor laptops of the past. Still, that should not distract too much from Apple's claims, as Apple is very big on using heterogeneous compute on its SOCs.
Hopefully Apple will make use of AMD's Carrizo and its ACE units for some of its MacBook laptops, because Carrizo has plenty of the heterogeneous compute ability that Apple makes extensive use of on its tablet/phone SOC SKUs. I do not think Apple should wait for Zen; Apple should try getting AMD to work up a custom Carrizo-based APU for Apple's exclusive use, starting maybe with the MacBook Air and then transitioning the rest of the MacBook line when Zen becomes available. Apple could, if it so chooses, even commission and fund a custom AMD Zen-based laptop APU and be first to market with the Zen microarchitecture and AMD's latest GCN graphics. Apple has had relationships with two prominent AMD engineers/project leaders/division heads in the past, now is the time to commission some new laptop SKUs, and AMD is much more accommodating in the custom APU side of the business.
The underlined text that is supposed to be a link to the source article is actually a link back to this very page.
the text in question:
” the Zen architecture. DigiTimes is adding to that with a story today”
If you want to go read the source article, the lower right-hand corner has the source link "DigiTimes"; click on that one instead.
As far as the story goes, it shouldn't be that surprising. Moving to a smaller process node is difficult; even Intel is having issues. But it is a setback that came at a bad time.
The link you’re asking about is actually two links.
“the Zen architecture” doesn’t point back to this article, it points to a list of PCPer articles with the keyword “Zen” in them. Granted, there’s only 4 of them, and this article is the top one, due to them being sorted chronologically.
The second link, “DigiTimes is adding to that with a story today” actually points to a DigiTimes article off-site.
Doh! I stand corrected. lol, totally my goof-up for not paying closer attention to how the links are set up. 🙂
AMD needs Trump to make their wafer deals.
This is unfreakingbelievable.
AMD got fined a quarter billion for capacity it reserved but didn't use, and GlobalFoundries immediately resold it all to Qualcomm and others…
But all the times GF missed their deadlines, AMD didn't get a penny? Those failures to execute have cost AMD billions in market share.
Who is running the joint!??!
BTW… Zen in 2017? AMD is losing about 100 to 200 million a quarter. They have 800 million in reserve, and said ~500 million is their limit before seeking more funding.
I hope AMD sells their dozen Nanos so the profit can pay those multi-million-dollar bonuses to AMD VPs… because the gravy train for AMD "managers" is not going to last much longer.
It seems that R. Read, with the insider info he had as CEO, left the company at its peak.
I would love to read a book from insiders after AMD is long gone.
It should read like a psycho thriller novel…
No one needs that real estate chump Trump bringing his brand of bad hair days to any company in the technology industry. AMD looks like it may be on its way to becoming a private, non-publicly-traded company, with funding from the private equity markets. Then those short-term stock market ups and downs can be put out of the picture and AMD can get down to the business of engineering its future products.
The stock value is not a cause of AMD's mismanagement; it's a reflection of it.
So going private will not solve AMD's problems. AMD needs the ability to make deals in its own best interests.
CEOs, presidents, etc. are not there to win popularity contests or spend $600 on a lame haircut. Their job is to get things done in the best interest of the company/country they were elected or appointed to represent.
AMD's VPs/management seem to have a big problem doing this, and they struggle (as we see from the data as a public company).
Going private won't change a thing; we just won't see how badly the company is managed, because all its financial data will become private.
With a big private equity group and access to billions in financing from that group, AMD would only have to worry about the engineering problems and the new products coming to market. You won't see it reported, because it would be a private interest, but Apple, Samsung, HP, and others could invest through that private equity group to keep AMD going and get Intel out of their product portfolios.
A lot of big OEM players could very discreetly provide a private AMD with enough liquidity to get it over any hump or bump in the road, and very few would be the wiser about what is going on under that privately-held-company radar. Privately held companies do not have the same Byzantine management structures or board-of-directors structure that publicly held companies are required to have, so things can be done much more quickly without the BOD and stockholder worries that come with publicly traded companies.
I'm sure Apple could do fine with an AMD APU in its MacBook line of laptops: an APU with HBM and plenty of GPU ACE compute, and plenty of motherboard space saved by going all-HBM across the laptop and Mac lines, for those pathologically obsessed thin-and-light mavens at Apple!
With how much performance Apple is claiming for the A9, I have to wonder if we will be getting ARM-powered laptops from Apple rather than AMD64. Apple does not want to be stuck with Intel as their only supplier, so it is in their best interest to support AMD to some extent. Without AMD, they are limited to Intel for CPUs and graphics and Nvidia for high-end graphics, unless they just make their own. I don't think they are quite there yet, although the A9 may be getting close.
The silicon interposer technology is a game changer though. Intel has the Embedded Multi-Die Interconnect Bridge (EMIB) technology, but this seems to be a response to AMD/Hynix silicon interposer technology. I don’t know if it will be available in the same time frame. The HMC tech doesn’t really compete directly with HBM on silicon interposers. HMC takes a lot more power and is made to run through a PCB between different packages. Intel may end up using HMC memory stacks with a totally different interface to use them with EMIB interconnect, but this just ends up being a clone of HBM. Running the serialized interface of standard HMC over EMIB would be a total waste.
If AMD has APUs with HBM on a silicon interposer, they will outperform Intel IGPs easily, even the Intel parts with eDRAM. A 128 MB eDRAM cache isn't going to compete with gigabytes of HBM. The silicon interposer technology also allows just about any kind of chip to be placed on the interposer, although it would need to be designed for use on one. It could enable shrinking systems significantly by placing most dies on the interposer instead of in separate packages mounted on the PCB. A lot of these possibilities fit in with Apple's minimalist design style. Given these capabilities, I could definitely see Apple basing a MacBook on an AMD APU, if AMD can deliver the quantities. I am also wondering what will be in Nintendo's NX console. The Xbox One and PS4 are basically two-year-old AMD APUs with faster memory systems than PC system memory, and an APU with HBM would make a great console part. AMD is still the best choice for delivering acceptable CPU performance and good graphics performance in a single chip, regardless of whether it has HBM or not.
You can bet that Apple's Cook has talked with AMD's Su, and an APU on an interposer for laptop SKUs could very well be done through AMD's custom APU division; AMD could create a very space-saving and energy-efficient APU on an interposer for Apple's exact needs. Apple uses Intel's U-series SKUs in its laptops, and a Zen-based APU with its CPU taped out on AMD's high-density, low-power, GPU-style design libraries would offer even more space savings for other features, such as more GPU cores, DSPs, etc.
Intel is not using high-density, low-power, GPU-style design libraries on its U-series SKUs; Intel just downclocks and power-gates its U-series SKUs to reach Apple's necessary thermal design envelope. If AMD could get Apple to fund a custom APU designed for low power, with both the GPU and the CPU using the same densely packed design libraries at, say, 14nm, then the design decision AMD introduced with its Carrizo line could net Apple an even more efficient APU with even more space saved. It's not as if Intel's current crop of U-series SKUs that Apple uses is that much more efficient than even the current Carrizo APU, and Carrizo has better graphics than Intel provides for its Apple SKUs.
The Zen microarchitecture combined with AMD's next GCN/Arctic Islands GPU microarchitecture would give Apple's proprietary Metal graphics API plenty of asynchronous compute ability, letting Apple squeeze every last bit of compute from its systems. Apple is definitely about space savings on its motherboards and about getting every last bit of compute done on both the CPU and the GPU, plus whatever other processing IP/chips Apple adds to the mix to differentiate its products and keep its ecosystem going. Apple has plenty of cash on hand to commission a very custom APU from AMD, and AMD has experience in the custom APU business.
P.S. Apple would be responsible for the quantities, as AMD would be just another design subcontractor in the equation.
Trump has been through four bankruptcies. Four.
Even with all their current financial issues, AMD still has a better business record than Trump does.
A big architecture redesign on top of a new process node has gotta come with a lot of challenges. Isn't that why Intel does its tick-tock cycle, so that they're only dealing with one challenge or the other at a time? I'm hoping the delays are just being blown out of proportion. Intel has been dominating the CPU side for so long that we need a refreshed AMD to boost competition for end users. The rumored 40% performance increase would drastically improve AMD's ability to compete.
What have Intel's gains been over the last few generations? ~5-10% each, in general? A 40% improvement over Piledriver would put AMD on par with which Intel architecture? Haswell? Still a generation behind, but much better than they are now.
Assuming Intel will not release anything new until 2017.
40% is just a hope. We've heard this type of number from AMD before (remember Bulldozer?).
AMD's troubles: Intel is not standing still in terms of CPU technology, and its process is in-house and state of the art.
AMD is also spending very little in the R&D department; a very large chunk of AMD's spending nowadays goes toward management rather than R&D.
Anyway, that's to say AMD will have to settle for the low-margin market because it won't be able to compete. And most of the margin is made on the fabrication side (in our case, GlobalFoundries).
For example, the PS4 uses a ~$120 APU, and AMD pays TSMC over $100 to get the chip made. TSMC makes more money and profit per console than AMD does.
Now imagine if Intel were in the same situation: they would capture the fab profit. Huge margins.
AMD, even with Zen, is not going to compete on Intel's turf.
AMD is now a vertical integrator, making chips on demand for very special customers… short term, Intel doesn't care…
Long term, the danger is that if that becomes a significant market, Intel can take it all… "x86 server style."
CPUs not standing still? They are not advancing much in design (a few percent each iteration), and since process tech has now stalled, I don't expect too much advancement over the next year or two. The main advancement will be in system architecture, with HBM and such. CPUs are becoming increasingly irrelevant anyway; most applications consumers care about are bottlenecked by the GPU. Intel has their EMIB tech, but it seems to be a response to silicon interposers, so it will probably not be available for a while.
Considering all of your posts here, I would guess you are more than just a fanboy. I have worked places where the marketing guys admitted to getting on forums and stirring up FUD against the competition. Are you a stupid Intel fanboy, or do you have other motivations? I consider it stupid to be a fanboy for the dominant player. Intel has been holding back progress for a while because they are making massive profits from the status quo. AMD had to develop a completely new graphics API to push the industry forward, although they may not have had much of a choice. Without that, we would probably have been stuck with DX11 for a lot longer and continued to be limited by single-threaded performance. Intel wouldn't have wanted to push HSA either, since until recently they were a long way behind on their GPU technology. Microsoft didn't have any reason to push DX12-style APIs either; they would rather have kept that tech for their console only, to make it look better against PCs. AMD gets advantages from having their technology in all of the major consoles, since optimizing for the console will optimize for AMD GPUs automatically. That advantage is mostly against Nvidia, since Intel isn't really that big of a competitor in gaming-machine GPUs anyway.
Without AMD, you would eventually be getting really expensive chips from Intel with HMC. HMC would have been spectacular for Intel: it is a really expensive technology, and Intel would have made a lot of money on it. It can't really compete with HBM on price or power consumption, though. The only advantage it has is scaling up to larger memory sizes. It may still have its place as a post-DDR4 board-level interconnect, though.
And here I was hoping for an APU with shared HBM in 2016.
If AMD survives long enough to make 14nm products, they should really do some reorganizing that the buyer can feel.
Like merging the mobile and desktop lines into one, since efficiency isn't going away and SHOULDN'T.
For example: make the naming R470 for the top-end mobile part instead of R9 495XM.
The same exact R470, just with higher clocks, would be available on the desktop. The desktop could also have an R480 that is too big for mobile.
And the lowest integrated graphics in an APU would be called R410.
So the buyer understands what level of graphics they are getting at the processor / mobile / desktop ends. Dunno, seems to me like this would be super reasonable and easy to understand for everyone.
With everything being integrated onto the same die, or the same interposer, I am not yet sure what divisions actually make sense on the engineering side. The CPU isn't really that important any more. If you look at the benchmarks used to test CPUs, they are irrelevant to most users. Many of the applications, like video encode/decode, are actively being switched over to specialized hardware or some combination of specialized hardware and GPU processing. Given this, does it make sense to have a separate CPU division? About the only place you will find a discrete CPU these days is servers and workstations. The constraints for a server CPU are actually similar to those for a mobile CPU, though: performance per watt is the main focus. The consumer market has become mostly about the GPU.
Contrary to what enthusiasts think they want, the main focus will be performance per watt. This is still an issue even for enthusiast CPUs, where you may not care about total power consumption, since the CPUs still end up heat-limited. We are trying to get a lot of heat out of a tiny area these days: a 14 nm CPU core might be only ~10 square mm trying to run at 4 GHz. Not everyone wants to buy a water cooler, and at some point you will reach the limit of water cooling due to hot spots. Performance per watt is also a big issue for GPUs, since they are often limited by total power rather than by hot spots like CPUs.
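As a rough sense of scale (the wattages and areas below are guessed, illustrative figures, not measured numbers for any real part): a single small core pulling on the order of 15 W in about 10 mm² is a far higher power density than a large GPU spreading a couple of hundred watts over several hundred square millimetres.

# Illustrative power-density comparison (all wattages and areas are guesses).

def watts_per_mm2(total_watts, area_mm2):
    return total_watts / area_mm2

cpu_core = watts_per_mm2(15.0, 10.0)    # one small, high-clocked CPU core
big_gpu = watts_per_mm2(200.0, 400.0)   # one large, lower-clocked GPU die

print("CPU core: ~%.1f W/mm^2" % cpu_core)   # ~1.5
print("GPU die : ~%.1f W/mm^2" % big_gpu)    # ~0.5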
It would be nice to see a mobile APU on a silicon interposer even if it is still based on Excavator cores. HBM would allow excellent performance and very good power consumption. It may not do that well on CPU-specific benchmarks, but most of those are irrelevant anyway. I am also wondering whether AMD got another design win with Nintendo's NX console. The Xbox One and PS4 are two years old, and they did not compare well with the high-end PC hardware of their time. They are essentially based on APUs, so a current-generation APU, especially one on 14 nm, may significantly outperform the old consoles, and an APU on a silicon interposer with HBM would probably do better still. I am also wondering whether Apple is planning on using some AMD parts. Apple seems to be attempting to go its own way, with all of its chips designed in-house; I don't know if the A9 is up to running MacBook Pros, iMacs, or Mac Pros, though.
An AMD APU might be a good choice for some of these products, but the high-end, HPC-style APU might have to wait for Zen. AMD has been stuck on the same process tech for a long time, but it seems like they have been doing a lot of design work with new design libraries that may be starting to pay off. AMD is still probably the best choice for console makers, since it can offer a single-chip solution with enough graphics power for a console. Nvidia could make a single chip, but you would be stuck with their ARM cores. Intel has caught up a bit on the graphics front, but I don't think they have closed the gap yet. A laptop with a console-style APU would offer excellent performance.
I just saw an article somewhere about AMD possibly going private and getting more private funding. Their stock price is ridiculously low. It would be nice to get Zen sooner rather than later, but the CPU market isn't actually moving very fast anyway. Each generation is a small improvement, and now the process tech has essentially stalled. The GPU is significantly more important, but since GPUs will be integrated into APUs going forward, they still need a capable CPU. Unfortunately, APUs on a silicon interposer with HBM may have to wait for Zen; I don't know if there are plans for an APU on an interposer with the current cores. It would be nice to have such an APU for my next laptop, but 2017 is probably too late. Maybe I will go cheap and upgrade sooner; that seems to be the best strategy anyway.
Maybe by the end of next year, Intel will have jacked CPU prices up so high that AMD will be able to release virtually anything and still make a profit.
I don't think it is as bad as it seems. Q4 means the holiday season, so a lot of people will be buying new stuff.
This is the time frame for consumer CPUs, right? The server ones will already be available by then. If that's so, not all hope is lost for AMD.
AMD = A Massive Disaster, always going from one stuff-up to the next. It's anyone's guess when this company is going to either go bankrupt or get bought out. I can't think of any tech company in recent memory that has been as consistently incompetent on the same scale as AMD.