I hit this.

I had a 13900KF fail after a few months on an Asus Z790-based motherboard; I started seeing memory errors whenever more than one core was active (confirmed by disabling the additional cores in grub at Linux boot). It had worked fine prior to that. I believe the failures were progressive to some degree; I initially only saw sporadic errors, then saw them increasingly frequently, until it wasn’t possible to even boot with multiple cores. Memory testers didn’t trigger it, but builds with many cores consistently did – compiling Cataclysm: Dark Days Ahead at -j32 was the first test case I could find that reliably failed on the bad processor, always at some random point during the build and never the same one. Starting up Stable Diffusion could also pretty consistently fail. I scripted up test cases using these to investigate downclocking the memory and fiddling with other settings. Downclocking the memory may have helped a bit – I didn’t gather enough data to get solid figures – but by the end, even having it all the way down wasn’t sufficient to cleanly boot the system; you’d get errors just trying to mount the root filesystem. I tried different Linux kernels, including building my own from the latest nightly code, and fiddled with the kernel preemption mode (on the off chance it was a Linux bug triggered by multi-core use).

I got a 14900KF to replace it, and made sure to turn off the default motherboard settings that Intel recommended against before ever inserting or using the chip, on the assumption that overly-aggressive motherboard defaults must have been the cause. I had very hefty cooling on this. At first I thought it might be voltage drops, given the Stable Diffusion startup issues (maybe the GPU drawing power was a factor), or maybe even cooling (though temps seemed fine), but no – swapping the CPU made all the problems go away at first…and then it failed in the same way after a few months. A variety of problems – the Linux kernel complaining about hardware bugs, memory errors, kernel threads hanging – same as before. The same progressive failures, getting more frequent over time.
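For anyone wanting to reproduce that kind of testing: the harness doesn’t need to be anything fancy – a loop that rebuilds from scratch and logs the first failure is enough to get a repeatable pass/fail signal while you change one setting at a time. A minimal sketch of that sort of loop (the build command, job count, and log path are placeholders, not my exact setup):

#!/usr/bin/env python3
# Repro-loop sketch: repeatedly run a parallel build and record when/if it fails.
# The build command and log path are placeholders; point it at any big parallel build.
import datetime
import subprocess
import sys

BUILD_CMD = ["make", "-j32"]   # e.g. a Cataclysm: DDA source tree
LOG_PATH = "repro.log"

def log(msg):
    with open(LOG_PATH, "a") as f:
        f.write(f"{datetime.datetime.now().isoformat()} {msg}\n")

for run in range(1, 101):
    subprocess.run(["make", "clean"], check=False)   # rebuild from scratch each iteration
    result = subprocess.run(BUILD_CMD)
    if result.returncode != 0:
        log(f"run {run}: build FAILED (exit {result.returncode})")
        sys.exit(1)   # a healthy CPU should never get here
    log(f"run {run}: build ok")

Any large, highly parallel build works; the point is just getting a consistent pass/fail signal you can compare across BIOS or memory-clock changes.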
I never saw any problems with either CPU when running on a single core (maxcpus=1 passed to the Linux kernel), so at least I could keep the system functional and stable, though obviously the performance was abysmal. Enable even one additional core and the problems were present (unusably so, towards the end, on each CPU).
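(If anyone wants the same stopgap: that just means appending maxcpus=1 to the kernel command line. On a grub-based setup that reads /etc/default/grub – an assumption, adjust for your distro – that’s something like

GRUB_CMDLINE_LINUX_DEFAULT="quiet maxcpus=1"

followed by update-grub or grub-mkconfig -o /boot/grub/grub.cfg and a reboot. Slow, but it kept the machine usable while I sorted out replacement hardware.)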
Switched to an AMD motherboard and processor. Haven’t had any problems. I expect that I’ll continue using AMD processors moving forward unless they put some serious lemons out.
No change in DIMMs (and in fact, used the same DIMMs just fine with the AMD processor).
At least I know that I’m not just crazy and that a ton of other people are getting this too. And the fact that this guy has been running on a different chipset and has a large dataset running within safe specs does kind of rule out the motherboard being at fault – I didn’t try running a motherboard with another chipset and another CPU from that class. The guy did say that some CPUs in his dataset just don’t seem to experience the problems (I saw him say a “50% rate”), so maybe there’s some sort of problem with Intel’s manufacturing process rather than with their design, and whatever testing methodology they used didn’t deal well with that.
And the guy is very explicit that they saw progressive degradation too, and had tests with logged times to show it. At 14:50:
We have datacenter logs from where these systems first went online, and with these systems first going online six months ago, they would pass these specific tests. Re-running these specific tests on the exact same hardware, it will not pass. That’s wild.
Also, at 22:00:

One of the game companies said that they’re going to have to roll back some bans. They thought some people were cheating, because the state of the game client was inconsistent with the state of the game server for some people – enough that they were like “we don’t know what they’re doing, but the game client is inconsistent with the server…we’re just going to ban them”.
Yeah, I’m wondering what kinds of other nasty secondary fallout there will be. One reason I didn’t want to spend more time on this – I was willing to just eat the cost of the motherboard and a pair of CPUs and go AMD – was that I was developing root-filesystem corruption just trying to boot with multiple cores, and I didn’t want to experiment with that any further. It’s just not worth it to me as an individual to deal with a dicked-up filesystem to try to track down a piece of bad hardware. Like, there’s going to be unpleasant fallout out there for other people – data loss – when a lot of CPUs are garbling data somehow.
AMD has been really solid. I’ve built a number of PCs and I’ve never run into an issue with the CPUs. The R5 2600, 3600, and R7 5800 and 5800X are all surprisingly efficient chips out of the box, but I played around with each and found even crazier undervolt settings. My server PC draws practically nothing unless something is using the (NVIDIA) GPU extensively (and even then it’s like, oh no, is it almost 75 watts? better call the fire brigade! lmao).
And obviously the R7 5800X is just a monster, although I’ve consistently seen that it runs hot. I air cool mine and it never really goes above 85°C under full load on stock settings, and if you play with undervolting at all it’s pretty easy to keep the exact same performance while lowering the total power delivered. Although I’ve found that it still goes up to 85°C – the chip just runs faster…
I mean, I don’t hate Intel – I’ve used their systems exclusively for, I dunno, maybe 25 years. And as Steve Burke says in the video, it’s not as if AMD has never had hardware problems on their CPUs. But this is a pretty insane dick-up on Intel’s part. Even if I’m generous and say “Intel had a testing regimen that these passed, because failures didn’t show up initially”, Intel should also have had CPUs that they kept running. Maybe they didn’t know the cause. Maybe they didn’t have a fix worked out, or know whether they could fix it in software. But they should have known partway through the production run that they had a serious problem. And once they knew there was a serious problem, my take is that they shouldn’t have kept selling the things. I would not have picked up the second processor, the 14900KF, if I’d known that they knew two processor generations were affected and that they didn’t have a fix yet. Sure, companies make mistakes, you can’t completely eliminate that, but they should have been able to handle the screw-up a whole lot better than they did.
Like, they could have just said “buy 12th gen instead, we can’t fix the 13th and 14th gen processors, and we’ll restart 12th gen production”, and I would have been okay with that – irritated, but it’s not like the performance difference here is that large. Instead, they spent an extended period selling product that they knew, or should have known, was seriously flawed.
Plus, it’s not even just the $1500 or whatever in hardware that went into the wastebasket over this; I also blew a ton of time diagnosing and troubleshooting it. All Intel needed to do was say “we know there’s a problem, we haven’t fixed it, these are the parts we know are affected, and these are the ones we think are likely affected”, and then I wouldn’t have needed to waste my time troubleshooting or to go out and buy hardware other than CPUs to try to resolve the issue. Intel had a bunch of bad CPUs. I can live with that. But I expect them to do whatever they can to mitigate the impact on their customers at the earliest opportunity when they’re at fault, and they very much didn’t.
And obviously the R7 5800X is just a monster
I don’t think this is cooling, and the video addresses that too. I initially suspected that cooling (or power) might somehow be a factor: given that one of the use cases I could eventually get to reliably trigger problems was starting Stable Diffusion, I was inclined to blame voltage or possibly heat. But the video says no – they logged thermal data, and their test servers are running very conservatively. And I kept an eye on the temperatures the second time around from the get-go.
It looks like the 5800X has a TDP of 105W.
I switched to a 7950X3D, which has a TDP of 120W, but on both the Intel processors and the AMD one I was using one of these water coolers (which was definitely overkill for the AMD CPU). I’d never used water-cooling before this system – it was never something I’d considered necessary until the extreme TDPs of recent Intel processors – but it definitely keeps the processor cool. I probably wouldn’t have bothered getting the thing had I just been using an AMD CPU, but since I had it already…shrugs