Looking at the “Rules of Thumb of Data Engineering” we find that Internet Data Centers have tended to hew closer to the Rules than Enterprise Data Centers. What are the Rules? What are the implications for EDCs? Can IT matter?
The exponential changes in the underlying technologies make it difficult for even seasoned observers to grasp the coming changes. See the entire series here.
This is an excellent study; I wish more studies this thorough were performed to ground conclusions about the long-term effects of cooling and heating in the data center. I also wish more of the study's details had been published. I do have a few quick observations that did not seem to be emphasized in the study.
What effect would it have on drive failures if drives were running at average temperatures beyond 45 Celsius?
I think it is important to note that the conclusions drawn from this paper are based on a narrow range of temperatures. As an example, the AFR% chart lumps together all results for temperatures of 45 degrees and higher. I would conclude that the paper's findings have not been validated for temperatures significantly beyond 45 Celsius.
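For context on what an AFR% chart reports: annualized failure rate is commonly computed as failures observed divided by total drive-years of operation. A minimal sketch of that arithmetic (the function name and the numbers are my own, purely for illustration):

```python
def afr_percent(failures, drive_count, months):
    """Annualized failure rate as a percentage: failures per drive-year."""
    drive_years = drive_count * (months / 12)
    return 100.0 * failures / drive_years

# e.g. 40 failures among 1000 drives observed over 18 months
print(afr_percent(40, 1000, 18))  # ≈ 2.67
```

Note that a rate computed this way says nothing about temperatures outside the range actually observed, which is exactly the limitation being raised here.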
There is no question that processors can run at temperatures well beyond 50 Celsius, and if that heat is not adequately removed, it will dramatically increase the temperature of the local disks (provided you are using local disks as opposed to a disk array).
I’m quite certain that the findings would be significantly different if the tests were conducted over an 18-month period with an average disk temperature of 60 Celsius.
For my servers, the Tjmax of the processors is 100 C. Via coretemp, I can see that all the processors/cores normally run at about 60 C to 80 C, depending on load.
The airflow first hits the drives, then the CPU, and then exits via the power supplies. I would like to think that my drives are affected more by the temperature in the cabinet than by the temperature of the processors.