Oct 27, 2017
6,891
www.techspot.com

Intel confirms switch to LGA 1700 socket for Alder Lake CPUs

Soon after the Core i9-10900K CPU was revealed back in April, we got confirmation that the new 400-series motherboards would also support the 11th-gen Rocket Lake CPUs,...
While the LGA 1200 socket brought few changes over its LGA 1151 predecessor, LGA 1700, which is expected to launch in early 2022, will reportedly be a lot different from what's come before. In addition to the 500 extra pins, we've heard that the socket will be rectangular (45mm × 37.5mm) rather than the usual square. We've also heard that Alder Lake will use a big.LITTLE design similar to Arm's, combining eight high-powered cores with eight energy-efficient ones. It will also support DDR5 RAM and PCIe 4.0, and later versions might support PCIe 5.0.

Alder Lake is expected to use the 10nm++ process node and will likely feature Intel's Xe integrated graphics, which we recently saw running Battlefield V at 30fps on a laptop without a dedicated graphics card.
 

Magio

Member
Apr 14, 2020
647
Two years away, still 10nm. Meanwhile, AMD plans on being at 5nm by then.

To be fair, Intel 7nm is set to be more technologically advanced than TSMC's 5nm (or at worst equal to it). I don't think Intel will ever get back the huge foundry advantage they used to hold before the shambolic road to 10nm, but they're not being left far behind just yet.
 

SuperHans

Member
Oct 27, 2017
1,602
Alder Lake is expected to use the 10nm++ process node and will likely feature Intel's Xe integrated graphics, which we recently saw running Battlefield V at 30fps on a laptop without a dedicated graphics card.

The 3400G does Battlefield V at a 64fps average at 1440p on Ultra. And the 4000 series will be even better.

Edit: ignore, I misread the benchmark.
 
Last edited:

Deleted member 46489

User requested account closure
Banned
Aug 7, 2018
1,979
This...sounds terrible. So they will still be using an older process node 2 years from now, and they are trying to compete with the efficiency of AMD and ARM CPUs by going for the big.LITTLE design. This is a bad idea.
 

faint

Member
Oct 27, 2017
1,156
I think some folks are confusing 10nm++ with 14nm+++. This is certainly not GOOD news, but it's not as disastrous as it sounds. Intel's 10nm is more or less equivalent to TSMC's 7nm, so this would be like a very refined TSMC 7nm process. Granted, by that point, AMD would be on TSMC's 5nm.
 

Deleted member 17092

User requested account closure
Banned
Oct 27, 2017
20,360
It has a 45W TDP (configurable up to 65W); Intel's Xe demo is supposed to be a U-series chip, so a 15W TDP (up to 25W). Apples to oranges.

Yeah, but the last time Intel even did a decent iGPU in the desktop space was the 5775C. I wouldn't expect this to be anything other than branding for Xe.
 

super-famicom

Avenger
Oct 26, 2017
25,207
A rectangular CPU means that everyone wanting to do an Intel build will have to buy an adapter for their existing cooler or buy a brand-new one.
 

SuperHans

Member
Oct 27, 2017
1,602
It has a 45W TDP (configurable up to 65W); Intel's Xe demo is supposed to be a U-series chip, so a 15W TDP (up to 25W). Apples to oranges.
It does not? I'm just looking at a video: mid-60s average fps at 1080p with 60% resolution scale and most settings on low.
This is not true at all, what
Apologies, completely misread the benchmark I was looking at.
 

Arkaign

Member
Nov 25, 2017
1,991
Can you please explain what that means, exactly?

Without getting too far into the weeds:

TSMC 16nm was slightly worse than Intel's 22nm
TSMC 12nm was worse than Intel's 14nm
TSMC 7nm is approximately equal to Intel's (barely available) 10nm, and better than their 14nm, with the caveat that hotspotting due to density makes chasing high clocks really tough

TSMC 5nm, if it goes by the previous trends, would be about equal to a theoretical Intel 7nm.

The measurements behind the 'nm' labels are so different now that the number is basically meaningless. Better perhaps to call TSMC 7nm a full gen ahead of Intel's 14nm for now, with Intel's TSMC-7nm equivalent (their 10nm) not shipping in enough volume, or in larger products, to be relevant at the moment.
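
To put rough numbers on that, here are the ballpark peak logic density estimates (MTr/mm²) that get thrown around publicly. Treat them as rough, not vendor-confirmed, but sorting by density rather than by label makes the point:

```python
# Ballpark peak logic density estimates (MTr/mm^2) from public
# reporting -- rough figures, not vendor-confirmed.
density = {
    "TSMC 16nm":  28.9,
    "Intel 14nm": 37.5,
    "TSMC 10nm":  52.5,
    "TSMC 7nm":   96.5,
    "Intel 10nm": 100.8,
    "TSMC 5nm":   171.3,
}

# Sorted by actual density, the 'nm' labels don't line up across vendors.
for node, mtr in sorted(density.items(), key=lambda kv: kv[1]):
    print(f"{node:>10}: {mtr:6.1f} MTr/mm^2")
```

Density is only one axis (clocks and yields matter too, as above), but it shows why cross-vendor 'nm' comparisons mislead.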
 

the7egend

Member
Mar 6, 2018
356
No, but AMD 5nm (which will be launching in 2022 alongside Intel's 10nm++) is definitely better than Intel 10nm. The differences in the way they measure transistor gate length aren't large enough to make up for that gap.
Not to mention that Apple is also pushing TSMC for a 5nm chip with their A14, which is a 3GHz ARM processor, so with Apple and AMD tossing money their way, the 5nm node is bound to be nice. TSMC is working on a 2nm node as well, but the smaller it gets, the more expensive it gets; I think 5nm might be a good stopping point to get comfy in before going smaller.
 

Magio

Member
Apr 14, 2020
647
Yeah, but the last time Intel even did a decent iGPU in the desktop space was the 5775C. I wouldn't expect this to be anything other than branding for Xe.

Of course, until someone has a Tiger/Alder Lake U-series chip with Xe graphics to actually benchmark, the demo could be bullshit. But if they're legit getting that kind of perf out of a 15W iGPU, that's not just branding, and AFAIK it's at the very least competitive with AMD.
 

Lagspike_exe

Banned
Dec 15, 2017
1,974
This may or may not be good news, depending on how good 10nm++ is and what the density scaling is like on TSMC 5nm.
But Intel should be progressing much more quickly to 7nm. Being behind TSMC when AMD can use cutting-edge nodes as soon as they have acceptable yields on larger chips is terrible. If they don't fix the node issue, they'll either have (substantially) lower margins than AMD or start thinking about going asset-light.
 

Magio

Member
Apr 14, 2020
647
One thing to note that I don't see mentioned a lot: sure, right now Intel is at best at parity with TSMC's best nodes and sometimes lagging behind (their 10nm is comparable to TSMC's 7nm, but not as widely rolled out). But if the day comes when TSMC hits a roadblock on one of their nodes (if it happened to Intel, it could happen to them), the entire industry will come to a halt apart from Intel. Samsung's nodes are consistently subpar, GlobalFoundries is out of the game, and there's no other major player.

Apple needs TSMC, AMD needs TSMC, Qualcomm needs TSMC, Nvidia needs TSMC, ... Meanwhile, Intel is in control of their own destiny. If they get back on a hot streak (like when they were wiping the floor with every other foundry), all it takes is TSMC having one bad node and Intel's back alone at the summit.
 

seroun

Member
Oct 25, 2018
4,464
One thing to note that I don't see mentioned a lot: sure, right now Intel is at best at parity with TSMC's best nodes and sometimes lagging behind (their 10nm is comparable to TSMC's 7nm, but not as widely rolled out). But if the day comes when TSMC hits a roadblock on one of their nodes (if it happened to Intel, it could happen to them), the entire industry will come to a halt apart from Intel. Samsung's nodes are consistently subpar, GlobalFoundries is out of the game, and there's no other major player.

Apple needs TSMC, AMD needs TSMC, Qualcomm needs TSMC, Nvidia needs TSMC, ... Meanwhile, Intel is in control of their own destiny. If they get back on a hot streak (like when they were wiping the floor with every other foundry), all it takes is TSMC having one bad node and Intel's back alone at the summit.

Intel just lost the one guy who could give them that hot streak, though. And the fact that they've lost Apple, while TSMC can ask for "help", so to speak, from so many companies, including NVIDIA (who might have a bigger role in the future, but who the fuck knows, really), is a big advantage.
 

Magio

Member
Apr 14, 2020
647
Intel just lost the one guy who could give them that hot streak, though. And the fact that they've lost Apple, while TSMC can ask for "help", so to speak, from so many companies, including NVIDIA (who might have a bigger role in the future, but who the fuck knows, really), is a big advantage.

The thing with Jim Keller is that the great things he kickstarts sometimes only show up after he's left. For example, he led the early development of the Zen cores at AMD, and by the time he left in 2015, AMD was still lagging far behind and Ryzen was still cooking. But now? His work has certainly paid off. (Also, AFAIK he doesn't really have an impact at the foundry level; he's more of an arch guy.)

So yeah he just left Intel, but he could absolutely have kicked off something great that will pay off in a few years in a big way.

It's good for TSMC to have the "support" of everyone in the industry (because they're all fucked if TSMC is fucked) at their disposal, but at the end of the day I doubt that any fabless company can really assist TSMC on the development of their nodes beyond just bankrolling them, and it's not like Intel is struggling for cash on their end.

I don't particularly like Intel (I tend not to like mega-corporations), but do not count them out.
 

liquidmetal14

Banned
Oct 25, 2017
2,094
Florida
A little worrying on the Intel side considering you have to invest in a new board practically every cycle.

I'm on an Intel 9900K now, but when I look to build in 2-3 years I'll likely go with AMD, as I was supposed to this build but got a deal I couldn't refuse.
 

itchi

Banned
Oct 27, 2017
1,287
Would Intel be better able to compete with AMD if half their CPU die wasn't taken up by a useless GPU? Or am I just being dumb?
 

Replicant

Attempted to circumvent a ban with an alt
Banned
Oct 25, 2017
9,380
MN
By 2022 both AMD and Apple will have surpassed and gone way ahead of Intel.

What the hell is Intel doing?
 

hikarutilmitt

Member
Dec 16, 2017
11,423
Nope, too little too late and not enticing enough, Intel. Nvidia already lost me to AMD because of Linux GPU drivers; you're about to lose me on Linux for general CPU use because of the MDS vulnerabilities (which you likely let happen in the name of pushing performance). It ain't happening again.
 

Theswweet

RPG Site
Verified
Oct 25, 2017
6,415
California
With the way that Infinity Fabric works, once AMD goes DDR5 we'll be seeing a huge boost to gaming performance - in addition to any other improvements.

It's an exciting time in the PC CPU space, that's for sure.
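
For context on why (this is how I understand Zen 2's memory subsystem, so take the exact numbers as assumptions): Infinity Fabric runs 1:1 with the memory clock up to a ceiling around 1800-1900MHz, after which the memory controller drops to a 2:1 divider and latency suffers. A minimal sketch, assuming an 1800MHz cap:

```python
def zen2_clocks(ddr_mts: int, fclk_cap: int = 1800):
    """Returns (fclk, uclk) in MHz for a Zen 2-style setup: the memory
    controller (UCLK) runs 1:1 with MEMCLK (half the DDR transfer rate)
    until the fabric ceiling, then drops to a 2:1 divider."""
    memclk = ddr_mts / 2
    if memclk <= fclk_cap:
        return memclk, memclk      # 1:1 -- the sweet spot
    return fclk_cap, memclk / 2    # past the cap: divider penalty

print(zen2_clocks(3600))  # (1800.0, 1800.0) -- synchronized
print(zen2_clocks(4400))  # (1800, 1100.0)  -- latency hit
```

If DDR5 speeds arrive alongside a raised fabric ceiling, the whole interconnect gets faster, which is where the gaming boost would come from.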
 

Mark It Zero!

Member
Apr 24, 2020
494
The biggest concern I have is cooler support. I really don't want to switch my Kraken X62; that alone would be enough reason for me to stick with AMD, assuming the AM4 cooler mounting remains the same.
 

tokkun

Member
Oct 27, 2017
5,407
Unless you want DDR5.


However, you should be able to build one this year with Ryzen 4xxx and Nvidia 3xxx.

The first year or two of a new generation of RAM usually sucks, though. You often see latency regressions since the process is not as mature, and prices are through the roof. With DDR5, I also wonder whether it will take a little longer to get into the sweet spot for desktop users, given that the spec seemed to be driven more by the needs of servers.
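
To make the latency regression concrete: first-word latency in nanoseconds is the CAS latency divided by the memory clock (half the transfer rate), i.e. CL × 2000 / MT/s. Comparing a decent DDR4 kit against the JEDEC DDR5-4800 launch spec (CL40):

```python
def first_word_latency_ns(cl: int, transfer_rate_mts: int) -> float:
    """CAS latency in memory-clock cycles, converted to nanoseconds.
    The memory clock is half the DDR transfer rate."""
    return cl * 2000 / transfer_rate_mts

print(first_word_latency_ns(16, 3200))  # DDR4-3200 CL16 -> 10.0 ns
print(first_word_latency_ns(40, 4800))  # DDR5-4800 CL40 -> ~16.7 ns
```

Bandwidth goes way up, but that first-word number is why early adopters can see gaming regressions.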
 

Azai

Member
Jun 10, 2020
3,966
If they force people to buy a new cooling system because they want a rectangular socket (which probably isn't necessary, just like the new socket for the 10XXX series), they will have a lot of customers switching to AMD, I'd imagine (if AMD keeps the regular design, that is).
Just bought a Noctua NH-D15 CPU cooler and I wanted to upgrade to the 11th or 12th gen. But no way will I buy another cooler just because they make another (probably) unnecessary change to their socket... (I know a new socket is a safe bet, but changing the shape is another story.)
 

Milennia

Prophet of Truth - Community Resetter
Member
Oct 25, 2017
18,254
With the way that Infinity Fabric works, once AMD goes DDR5 we'll be seeing a huge boost to gaming performance - in addition to any other improvements.

It's an exciting time in the PC CPU space, that's for sure.
Waiting for 2022 for my CPU/mobo/SSD upgrade because of this.
Just getting a 3080 Ti this year; my 9900K and 970 Evo Plus will last me.
 

low-G

Member
Oct 25, 2017
8,144
It's still surprising that Intel doesn't seem to have their AMD leapfrog ready yet.

I'd be surprised if they launch more 10nm chips, but who knows at this point.
 
Nov 8, 2017
13,110
How out of touch is Intel that they are still using 10nm?!

Can you please explain what that means, exactly?

The "nanometer" number is just a label. It doesn't correspond to any physical dimension of the processor. You can only assume lower number = better when comparing different families of nodes from the same manufacturing company.

Intel's 10nm process has been massively troubled in its production timelines (it was supposed to be on shelves in something like 2016, which, had it happened, would have been exceptional and industry-leading). But as a process, you cannot assume it has similarities to TSMC's or Samsung's 10nm processes: Intel's 10nm process is approximately 5% denser than TSMC's 7nm node. Intel 7nm (due to be in production for early items around the same time Alder Lake is launching in the desktop CPU space) is expected to be more comparable in density to TSMC 5nm and related processes.

Intel's first-generation 10nm was so broken they shipped one single barely functional product to Chinese OEMs. The first proper release was 10nm Ice Lake in 2019. This year, we're getting 10nm+ Tiger Lake. The "+" names are just what we see in marketing; internally, Intel iterates every process every year, whether or not they choose to assign a "+" or "++" label to it. Over 2019 and Q1 2020, major strides were made with the node, resulting in less process leakage and fewer defects on wafers, hence higher yields. This is allowing a broader release for Tiger Lake compared to Ice Lake, and TGL processors will also have much higher clock speeds.
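
For intuition on why defect density matters so much: the textbook first-order yield model has the fraction of good dies falling off exponentially with die area × defect density. Toy numbers below, not Intel's actual figures:

```python
from math import exp

def poisson_yield(die_area_cm2: float, defects_per_cm2: float) -> float:
    """Textbook first-order (Poisson) model: the probability that a die
    catches zero randomly scattered defects."""
    return exp(-die_area_cm2 * defects_per_cm2)

# Toy numbers: halving defect density on a 1 cm^2 die
print(poisson_yield(1.0, 1.0))  # ~0.37 -> barely a third of dies good
print(poisson_yield(1.0, 0.5))  # ~0.61 -> sellable output nearly doubles
```

It also shows why big desktop dies get hit much harder by an immature node than small laptop dies do.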

Intel 7nm is, according to all signs, running on schedule, which means it will be "ready" in 2022 - as in, products will ship on it: GPUs and possibly some laptop parts. But it probably won't clock as high as 2022's heavily refined 10nm node. This will mean they have a choice - either ship products that use less power, or ship products that perform better. For desktop consumer CPUs, that's no contest - people on average would obviously prefer the higher-performance chip in that space.

My advice is to stop worrying about who has what nm product, and simply look at reviews, benchmarks and pricing as your determinant of which product is better. For someone strictly focused on high-refresh-rate gaming performance, Intel's extremely refined but very old 14nm process using their 2015-vintage Skylake architecture is still yielding more powerful products than AMD's 2019 architecture on TSMC's cutting-edge process. Just an example, but you can't look at this roadmap and assume that Intel's Q1 2022 Alder Lake is going to be a shit product. Maybe it will be! Maybe it won't! But you can't know yet.