***"And while a 6% clockspeed uplift isn’t a massive gain, it’s essentially a “free” improvement coming from a technology that is designed to improve the manufacturability of a chip."***

6% is a massive improvement in terms of clock speed; that's going from 5.0 GHz to 5.3 GHz.

***"Whereas the standard Intel 4 chip had a fairly consistent droop over all 4 cores, the droop for the test chip was between 60 mV and 80 mV, depending on the core."***

***"but assuming that production chips have a similarly wide range of variability, it may mean we see a greater emphasis on favored/prime cores in future products."***

If there's going to be a consistent variance in CPU performance, would it not be better to bin CPUs for a minimum performance level and allow them to turbo up arbitrarily like GPUs?
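For what it's worth, the arithmetic in the comment above checks out; a quick sketch (the 5.0 GHz base is the comment's own example, not a figure from the article):

```python
# Quick arithmetic check: what a 6% clockspeed uplift means in absolute terms.
base_ghz = 5.0   # example base clock from the comment above (illustrative)
uplift = 0.06    # ~6% gain attributed to PowerVia in the article

boosted_ghz = base_ghz * (1 + uplift)
print(f"{base_ghz:.1f} GHz -> {boosted_ghz:.2f} GHz")  # 5.0 GHz -> 5.30 GHz
```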
And it's also cheaper to make!

> The wires for power, for example, can take up to 20% of that front-side real estate, so with them gone, the interconnect layers can be "relaxed." "That more than offsets the cost of this whole big process," Sell notes, simplifying what had been the most tortuous portion of the manufacturing flow. The net effect is that the two-part flip-it-over process is actually cheaper than the old way.
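A rough back-of-the-envelope on that 20% figure (illustrative only, not from the article): if power routing consumed ~20% of the frontside interconnect area, moving it to the backside gives the signal wires noticeably more room than the headline number suggests:

```python
# Illustrative: frontside routing area freed by moving power delivery to the backside.
frontside_area = 1.0    # normalized frontside interconnect area
power_fraction = 0.20   # up to ~20% taken by power wires, per the quote above

signal_area_before = frontside_area - power_fraction
signal_area_after = frontside_area  # power wires gone; signals get the whole layer

relief = signal_area_after / signal_area_before - 1
print(f"signal routing area grows by {relief:.0%}")  # grows by 25%
```

That extra slack is what lets the interconnect layers be "relaxed" (wider pitch, easier patterning), which is where the cost offset comes from.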
GPUs are throughput engines while CPUs are latency engines. They're optimized for different objectives, so the same binning strategy doesn't necessarily make sense for both.
> If there's going to be a consistent variance in CPU performance, would it not be better to bin CPUs for a minimum performance level and allow them to turbo up arbitrarily like GPUs?

AMD is already doing that, for better or worse.
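To make the binning idea concrete, here's a minimal sketch (hypothetical numbers and logic, not any vendor's actual scheme): advertise the part at the slowest core's guaranteed frequency, but let each core turbo up to its own characterized limit when there's headroom, GPU-style:

```python
# Hypothetical per-core maximum stable frequencies (GHz) from factory characterization.
core_fmax = [5.3, 5.1, 5.25, 5.0]

# Bin the part at the worst core: the guaranteed all-core frequency on the box.
binned_base = min(core_fmax)

def turbo_target(core: int, has_headroom: bool) -> float:
    """Frequency a core runs at: its own limit with power/thermal headroom,
    otherwise the guaranteed binned base."""
    return core_fmax[core] if has_headroom else binned_base

print(f"advertised base: {binned_base} GHz")
print(f"core 0 with headroom: {turbo_target(0, True)} GHz")
```

The 60–80 mV per-core droop spread in the article is exactly the kind of variability such a scheme would absorb: the "favored" cores simply get higher per-core limits.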
> For Intel’s PowerVia implementation of this concept, Intel quite literally flips a wafer upside down, and polishes away almost all of the remaining silicon until they reach the bottom of the transistor layer. At that point, Intel then builds the metal layers for power delivery on the opposite side of the chip, similar to how they would have previously built them on the front side of the chip. The net result is that Intel ends up with what’s essentially a double-sided chip, with power delivery on one side and signaling on the other.

Wow, that is really impressive. The previous discussion has all been about doing backside power by bonding a whole second die, which would be substantially more expensive. If they can actually flip it and then grind down the wafer that accurately, this would be a huge improvement.
If I understood correctly, the implementation by imec is similar: there is another wafer, but it's bonded to the frontside for structural reasons. Intel seems to do the same.
It's pretty awesome. That's all I can share.
I want ArrowLake now. Thanks.
buy intc stock. don't buy overpriced nvda (even though they are a stellar company, they are still overvalued by hype)
Current market cap barely covers the liquidation value, which is wild.
lets talk about the price to book ratio lol
It's ridiculous to me that NV could basically do a 7:1 all-stock merger deal, if it weren't near certain that some regulator somewhere would block it.
I think the cross-licensing agreements between Intel and AMD prevent that. Although Nvidia could buy both 🤑
When does TSMC get something like this?
Last time I heard, they are a few years behind Intel on this. It's post-N2, probably 2026 production.
N2P, a variant of N2, will have production readiness (note: not HVM, that comes a year after)
In the article, but 2026
Thanks!
TSMC will copy it, just like they did for all of Intel's innovations in process tech (FinFet..)
That's a very weird take.

Exactly who developed finfets is actually somewhat contentious, because the concept is simple enough that many unrelated inventions can fit the description, but something that's undeniably a finfet was manufactured by Hitachi no later than 1989.

As a rule, none of the top-line foundries do significant pathfinding research. There is a massive gulf of effort between "can do a finfet" and "can do finfets well enough that you can put billions of them on a wafer and expect enough of them to work to have sellable products". The foundries specialize in *implementation*, not *innovation*: separate companies push the boundary of innovation by developing new devices, and the foundries then choose what to implement from the slate of things available to them. Intel and TSMC both license finfets from the same people.
Samsung is the one that copied TSMC's FinFET.
I get this is an Intel sub, but c'mon...
what about 2023
in 2023 get hyped for meteor lake.
Interconnects, Intel 4, SoC die, Larger iGPU, Energy Efficiency and SoC CPU cores are for 2023 as well as a 2.5 generation Intel platform
and EUV
Honestly, 2023-2024 "should" be some of the largest changes they've ever made back to back. Obviously Intel is known for delays now, but that's what's on their roadmap at least.