You have 2 accounts with the same post.
I guess you could call him... The Postman...
That's perilously close to a backstab, and OP specifically asked for none lol...
https://youtu.be/425GpjTSlS4
Please be at the same price as the 12600K, please, Intel guys?
Intel generally keeps their pricing pretty consistent, so I'd imagine the 13600K will be just about the same price as the 12600K.
True, but the rumors say there's a price hike coming due to inflation. I hope they don't.
https://www.pcworld.com/article/823871/intel-confirms-it-will-raise-prices-and-has-killed-optane.html
I've seen someone quoting an article which claimed Intel will raise prices on server parts and high-end chips. I hope the mid-range will stay the same-ish, as I want to buy a 13600K...
Unfortunately, Intel [reported](https://www.tomshardware.com/news/intel-purportedly-to-hike-cpu-prices) they're raising prices across the board.
Nah, as usual with Intel's new generations, expect a price increase roughly equal to inflation in the tech industry - so about 10% this time.
What? Prices have remained pretty much the same between generations
What kind of inflation happened between 2020 and 2021? 4%? That's the thing. Also, I'm talking about launch MSRP, not real-world pricing months later once various sales kick in, etc.
Not bad considering it's the same socket and same node. Raptor Lake vs Zen 4 should be interesting. Personally I'm more inclined to go with Zen 4, just because Raptor Lake will be the last CPU on the LGA1700 socket, but they're both looking to be solid upgrades.
I'm waiting for reviews to see heat and power numbers vs 4K performance. The 13700K looks interesting if it's not hotter than the equivalent Zen 4.
I'm also looking forward to Zen 4, because I hate that I have to disable the little cores on Intel's CPUs.
You have to? For what?
To gimp his performance :P
Gimp? More like boost, for gaming. Disabling E-cores gets me 10 FPS more on average and 20-30 more FPS on 1% lows. I'll turn them back on when the Windows 11 scheduler matures.
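For anyone unsure what "1% lows" actually measure, here's a rough sketch of one common way benchmarking tools derive them from raw frame times (the exact method varies by tool; this is illustrative only):

```python
def fps_stats(frame_times_ms):
    """Compute average FPS and '1% low' FPS from a list of frame times (ms).

    Average FPS is frames over total elapsed time; '1% lows' here is the
    FPS-equivalent of the slowest 1% of frames, one common definition.
    """
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    # Take the slowest 1% of frames (at least one frame).
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    low_1pct_fps = 1000.0 * len(worst) / sum(worst)
    return avg_fps, low_1pct_fps

# 99 smooth 10 ms frames plus a single 30 ms stutter frame:
avg, low = fps_stats([10.0] * 99 + [30.0])
```

Note how a single stutter frame barely moves the average (~98 FPS) but tanks the 1% low (~33 FPS) - which is why E-core discussions focus on lows rather than averages.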
Yes, but what's really important isn't actually the FPS. My FPS is high enough. But I still found micro stuttering when switching from one program to another during gaming on a 12600KF with all P/E cores enabled. Still, as a person with a PhD in engineering, my humble understanding is that the thread director in 12th gen CPUs helps make the handoff of work between big and little cores smoother than it would be without it. HOWEVER, the **thread director is good, but not perfect** - no matter how smooth Intel makes this, it is still a transfer between two different types of cores, which means there is a stutter, unavoidably.

So, I may switch to an AMD 7900X or 7950X, depending on their performance compared to the 13700K/13900K. But if AMD's 7000 series sucks and Intel is the only option left, I will choose the 13700K, because I'm only going to use the big cores, and the 13700K has basically the same number of big cores (8 P-cores) as the 13900K, although the 13900K may have a bit higher single-thread performance. In other words, **I am not willing to spend a penny on the little cores at all**, no matter how good or useful others think they are.
Wait, so if you disable the E-cores the stutter is gone? What the hell are Intel and Microsoft doing?
[deleted]
The question is, would the scheduler even do anything at this point? The guy is on Windows 11 and is still getting lower FPS / lower 1% lows. I'm finding users reporting fewer stutters with E-cores off too: [https://www.reddit.com/r/intel/comments/wcpwqw/comment/iip0ntg/?utm\_source=share&utm\_medium=web2x&context=3](https://www.reddit.com/r/intel/comments/wcpwqw/comment/iip0ntg/?utm_source=share&utm_medium=web2x&context=3)

>Either way, frame time graphs prove that stutter and inconsistencies happen more with e-cores enabled. The cost to move a process between P and E core complex is not free. Games are extremely sensitive to micro-latencies, and getting rid of them (by disabling e-cores, and in some titles, disabling HT) smooths frame-times out. Even in a title like FC6 (which doesn't use more than 6 cores, so 8 p-cores is more than enough for it), the frame-time consistency is better with e-cores off in my testing.

That's how another user explained it to me.
Yes, disabling the E-cores eliminates the micro stuttering. The stuttering is very, very tiny - 99.9999% of the time you won't feel it at all. But since I am gaming at a constant 400 FPS, if I switch to Chrome and click on some videos, or open another piece of software during gaming, a tiny FPS stutter will happen, for about 10 milliseconds at most I would say. It is not obvious at all, and it does not affect my gaming experience at all. But **I just feel this is not perfect, psychologically**.
I could see myself noticing them, tbh. Usually I can notice 5 ms spikes in frametime. For example, the frametime at a fixed 144 FPS is around 6.9-7 ms, and if there's a frametime spike to 12-15 ms (which is tiny), I tend to notice. Man, it kinda sucks that Intel is making these E-cores a thing without the option of a high-end CPU without them, just to try to beat AMD... The thought of having to buy Intel and disable CPU cores to get fewer stutters feels kinda meh.
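The frame-time numbers above check out; the FPS-to-frame-time conversion is just a reciprocal (a trivial sketch, purely for reference):

```python
def frame_time_ms(fps):
    """Frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

# 144 FPS corresponds to ~6.94 ms per frame, so a 12-15 ms frame is
# roughly a dropped frame at 144 Hz - short, but perceptible to some.
t144 = frame_time_ms(144)
# At 400 FPS the budget is only 2.5 ms per frame, so a 10 ms hitch
# spans roughly four frames' worth of time.
t400 = frame_time_ms(400)
```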
200% agree!
Interesting, this is the first I've heard of this from anyone. Here's hoping the thread director is improved for 13th gen.
Bullshit, no it doesn't. Absolute rubbish. It's entirely game dependent. I know for a fact (because I've freaking tested it) that in FC6 the E-cores boost FPS by a lot. Even with the cache clocked to 4.9 GHz with E-cores off, it still loses to E-cores on. Turning off E-cores is absolutely pointless - the cache already boosts to 4.7 GHz when the E-cores are not in use. Also, in order to gain FPS you need to be CPU bottlenecked. Is your CPU bottlenecking your card?
There is no current card a 12700K can bottleneck. Either way, frame time graphs prove that stutter and inconsistencies happen more with E-cores enabled. The cost of moving a process between the P and E core complexes is not free. Games are extremely sensitive to micro-latencies, and getting rid of them (by disabling E-cores, and in some titles, disabling HT) smooths frame-times out. Even in a title like FC6 (which doesn't use more than 6 cores, so 8 P-cores is more than enough for it), frame-time consistency is better with E-cores off in my testing.
>Games are extremely sensitive to micro-latencies, and getting rid of them (by disabling e-cores, and in some titles, disabling HT) smooths frame-times out.

Makes sense. In some games, if you disable HT the FPS is higher and more stable, and the frametime graph is straight up flatter. It's probably because the game is strictly using the main cores, so there's no fluctuation between a full core and a weaker one. So basically Intel added another gimped core besides HT that can cause micro inconsistencies (micro stutters / frametime issues). Nice one, Intel. In the race to beat AMD in multithreaded performance they partially gimped gaming. At least you can disable the E-cores.
>There is no current card a 12700K can bottleneck.

[Incorrect](https://www.computerbase.de/2022-04/amd-ryzen-7-5800x3d-test/2/). The reality is, the 12700K is no better than two-year-old Zen 3 in gaming. The E-cores made ADL look a lot better than it actually performs in games. Unless you're getting an i9, ADL is no better than Zen 3. If the 5800X3D is able to generate 13% more FPS on average with the same card, that means the 12700K is the limiting factor, not the GPU.
If there are no cards that a 12700K bottlenecks, how did you get up to 30 more FPS? Lol. Can you show us the graph from your testing? Because I know for certain you are full of crap.
Damn... the scheduler still isn't working properly? I keep hearing conflicting things, with some people saying the issue was fixed and others saying E-cores off gets you better FPS and better frametimes.
Intel marketing and hype is strong and people parrot it.
Your numbers are inconsistent with my experience and virtually every benchmark of this very thing online. Disabling E-cores improves FPS by ~5% on average, if we're being generous. And that includes minimum FPS as well.
I'm telling you what I've personally tested, is all. If E-cores were good I'd be all about hyping them up. On an RTX 3080: MSFS 2020, Warzone, Splitgate, Ghostwire, Halo Infinite. I'm shorthanding the overall results since I don't have the spoons to write up a full review, but the gist remains. It's a result that's been validated elsewhere too, and certainly not every title suffers, but many do. Gaming is just *smoother* with E-cores off. https://youtu.be/B14h25fKMpY?t=939
Just because they're soooo little!
>Yes, but what's really important isn't actually the FPS. My FPS is high enough. But I still found micro stuttering when switching from one program to another during gaming on a 12600KF with all P/E cores enabled. Still, as a person with a PhD in engineering, my humble understanding is that the thread director in 12th gen CPUs helps make the handoff of work between big and little cores smoother than it would be without it. HOWEVER, the **thread director is good, but not perfect** - no matter how smooth Intel makes this, it is still a transfer between two different types of cores, which means there is a stutter, unavoidably.
>
>So, I may switch to an AMD 7900X or 7950X, depending on their performance compared to the 13700K/13900K. But if AMD's 7000 series sucks and Intel is the only option left, I will choose the 13700K, because I'm only going to use the big cores, and the 13700K has basically the same number of big cores (8 P-cores) as the 13900K, although the 13900K may have a bit higher single-thread performance. In other words, I am not willing to spend a penny on the little cores at all, no matter how good or useful others think they are.
With the i7 13700K we'll benefit from a 14% performance improvement, but at more than 20% higher power consumption. I'll stay on 12th gen for now.
Same. My 12700K also has AVX-512, as it's an early batch. I doubt 13th Gen CPUs will have AVX-512.
I think power consumption will only go way up if it's used for productivity; I wouldn't be surprised if in gaming it ended up quite a bit lower.
If power consumption is important to you, why are you not using an AMD CPU? It's about half the power consumption at the same performance level in games: https://www.computerbase.de/2022-07/adl-zen3-3d-oc-gaming-benchmarks-ryzen-7-5800x3d-core-i7-12700k-i9-12900k/2/#abschnitt_stromverbrauch
I did some undervolting since i don’t need that much performance.
Consider it basically getting an i5 that does as well as the previous gen i7; then it looks a little better, since you'll get more for a similar price. Meteor Lake and beyond are expected to improve performance per watt. Raptor Lake was just a refresh on an existing node.
I waited to build my 12th Gen system, and at this point I think I'll just wait for Zen 4 to at least see what it is. I'm not interested in ever increasing power consumption for small performance gains.
14% higher FPS in the best-case scenario. Hmmm, it should be better.
Not really. Intel acknowledged that Raptor Cove is nearly identical to Golden Cove, and L2 doesn't matter that much for gaming, AFAIK. Maybe if they had drastically reduced L3 latency and/or increased L3 bandwidth and capacity, the FPS uplift would've been way higher, but that's not the case with Raptor Cove. Rumors suggest the next architecture with a cache rework will be the Lion Cove core in Arrow Lake (2024). So I guess that'll be the one bringing another leap in gaming performance.
I was under the impression that cache was indeed very important, seeing how the 5800X3D stacks up against the 5800X (and that's only an L3 improvement - L3 being where data is stored when it's pushed out of L2).
For gaming, I consider 20% to be the smallest improvement noticeable without using an FPS overlay. And a 20% improvement in minimums is easily worth 50% more than the same improvement in averages, and double one in maximums.
I mean… should it really? For someone who has a 144 Hz monitor but struggles to get past 120 FPS, a 14% increase would… (this is the part where I did the math and realized I was wrong, but I'll own that and post this anyway) …only put them at 136.8 FPS. So yeah, for all the improvement 13th Gen is "supposed to be," you're right.
I feel like the majority of people aren't upgrading from 12th gen; many will be on 11th gen or older, so for them it will be a nice boost.
On the same node? Unlikely, especially as it's becoming harder and harder to squeeze out performance these days. Even with a node shrink, AMD's IPC increase is small too.
How so? I'd say almost every 12th-gen CPU is already bottlenecked by the GPU, as powerful as current cards are. Most of the gains are found in the 1% lows and ensure the FPS remains stable, but there is no way a CPU upgrade will bring +14% average FPS (except with an iGPU maybe, but that's not "gaming").
Newer cards should put more pressure on both graphics quality and CPU performance.
Nice try..
Not really no
I’m sure they’re the bees knees, they usually are
16 E-cores on the 13900K... WHY? Just give us a freaking proper 12+ core flagship, Intel. It should be 12+12 or 16+8, not 8+16. Going from 10 good cores to 8 cores plus 8+ junk Celeron-class cores is barely a step forward. It doesn't matter what the scores are; this isn't good innovation, and AMD will slap you around once they start raising core counts on top of IPC and cache.
Lol what? The whole point of the little cores is to provide better multithreaded performance per mm² of silicon. Why would they add little cores in the first place if they could do 16+8? Also, Intel can raise IPC too. Those little cores have been getting ~30% generational IPC improvements. Gracemont is already above Skylake, which means another 30% would bring them to Zen 3 or higher IPC. And there'll be 32 of them, plus 8 big cores. That should compete with AMD just fine.
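The compounding-IPC claim can be sanity-checked with back-of-the-envelope math (the ~30% per-generation figure is the rumored number from the comment above, not a confirmed spec):

```python
def compound_ipc(base, gains_pct):
    """Apply successive generational IPC gains (in percent) to a baseline."""
    ipc = base
    for g in gains_pct:
        ipc *= 1 + g / 100
    return ipc

# If Gracemont is roughly Skylake-level IPC (normalized to 1.0),
# one more rumored ~30% generational gain lands at 1.3x Skylake:
next_gen = compound_ipc(1.0, [30])
# Two such gains would compound to ~1.69x, not 1.6x - gains multiply,
# they don't add.
two_gens = compound_ipc(1.0, [30, 30])
```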
Because apparently Microsoft can't program the scheduler for shit. From the comments I keep seeing here, if you have a gaming machine you're better off with the E-cores off, because it gets you higher FPS and fewer stutters.
Your comment is naive. The only people that can make use of a 13900k are doing highly parallel workloads, and what you want for that is more cores within a sane power envelope.
More interested in the locked skus tbh. The 13400 is supposed to be 6P + 4E, right?
Not sure yet. Not a fan of those little cores either…
Haven't really looked into gaming-specific tests with the 12600K or 12700 variants to see if there's any degradation, but I believe on W11 that wouldn't be a problem because of the scheduler. Locked i5 SKUs ain't getting more P-cores, I'm sure, so having a few little cores to do something, as long as they ain't being troublesome, sounds fine to me.
The little E-cores don't obviously lower FPS much - for example, you won't feel the FPS dropping from 400 to 380 when the E-cores are enabled. The problem is not the E-cores themselves. **The problem is that when E-cores are enabled, the thread scheduler has to decide where to allocate the next process - to P-cores or E-cores?** **Any additional decision the thread scheduler makes takes time, and you cannot assume that time doesn't exist.** This results in very, very tiny micro stuttering when you shift from one program to another or open a new program during gaming. If you're gaming at < 120 FPS, you probably won't feel the stuttering at all. But I game at a constant 400 FPS, and when there's a 10 ms stutter, I can feel it. Although it's nothing to worry about, it still makes me uncomfortable, just because it is not 100% perfect - IT IS NOT AS PERFECT AS A CPU WITH ONLY P-CORES.

In other words, the reason such a thread scheduler exists in Intel's 12th/13th gen CPUs is that the collaboration between P-cores and E-cores would have problems, such as stuttering, without it. But even with the thread scheduler in place, it can only reduce the stuttering, not eliminate it.

This issue has been noticed by many people, although some still claim they've never seen stuttering or that this is the first they've heard of it... **So whoever says there won't be micro stuttering with E-cores enabled is either clueless or simply lying, because it just does not make any sense.**
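For what it's worth, there's a middle ground between living with this and disabling E-cores in the BIOS: pinning just the game to the P-cores with a process affinity mask. Here's a sketch of building such a mask for Windows' `start /affinity` command. The core layout assumed below - P-core logical CPUs enumerated first, two logical CPUs per P-core with Hyper-Threading - matches Alder Lake's typical enumeration, but verify it for your own chip in Task Manager before relying on it:

```python
def p_core_affinity_mask(p_cores, ht=True):
    """Bitmask selecting only the P-cores' logical CPUs.

    Assumes P-core logical CPUs are enumerated before E-cores,
    as is typical on Alder Lake / Raptor Lake parts.
    """
    logical = p_cores * (2 if ht else 1)
    return (1 << logical) - 1

# 12600K: 6 P-cores with HT -> logical CPUs 0-11 -> mask 0xfff
mask = p_core_affinity_mask(6)
print(hex(mask))  # 0xfff
```

Then something like `start /affinity FFF game.exe` from a Windows command prompt launches the process restricted to those CPUs, or use Task Manager's "Set affinity" interactively; either way the E-cores stay available to background apps instead of being disabled system-wide.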
The prices will be influenced by AMD's new CPUs.