STULZ News https://www.stulz.co.uk/en/ Here you will find the latest blog articles, press releases, professional articles and events from stulz.co.uk. en-gb STULZ Wed, 02 Sep 2020 01:11:59 +0200 Wed, 02 Sep 2020 01:11:59 +0200 news-2145 Wed, 11 Mar 2020 16:21:00 +0100 Data Centre World, London, 2020/03/11-12 https://www.stulz.co.uk/en/newsroom/event/data-centre-world-london-20200311-12-2145/ The Data Centre World series is taking place in London for the 12th time. The DCW... STULZ at the Data Centres World

The Data Centre World series is taking place in London for the 12th time. The DCW events are among the leading data centre trade fairs. The theme of this year's fair is trends, opportunities and challenges for data centres, and it covers all interfaces and interests related to data centres.

We look forward to meeting you and talking with you at our 108 m² booth. Please contact us to find out more about the various cooling options in the field of data centres. You will find us at booth D640.

]]>
news-2116 Tue, 05 Nov 2019 09:35:59 +0100 Comfort vs Precision https://www.stulz.co.uk/en/newsroom/professional-article/comfort-vs-precision-2116/ Poor knowledge of cooling approaches leads to business risk news-2091 Thu, 03 Oct 2019 09:00:00 +0200 CBRE Preferred Supplier Event 2019 https://www.stulz.co.uk/en/newsroom/event/cbre-preferred-supplier-event-2019-2091/ CBRE Preferred Supplier Event 2019 news-1976 Wed, 21 Nov 2018 14:56:00 +0100 Energy Efficiency - 26 billion connected devices across the world, are the challenges set to get worse? https://www.stulz.co.uk/en/newsroom/blog/energy-efficiency-26-billion-connected-devices-across-the-world-are-the-challenges-set-to-get-wor/ As the need for data centre services continues to increase, aren’t the challenges about energy efficiency only set to get worse?]]> news-1950 Fri, 21 Sep 2018 16:51:20 +0200 Feeling the heat? Prepare your data centre for extreme weather https://www.stulz.co.uk/en/newsroom/blog/feeling-the-heat-prepare-your-data-centre-for-extreme-weather-1950/ Why upgrading your systems can help defend against damage caused by soaring temperatures experienced... news-2077 Thu, 23 Aug 2018 11:21:00 +0200 Space-saving split air-con saves energy for server rooms https://www.stulz.co.uk/en/newsroom/news/space-saving-split-air-con-saves-energy-for-server-rooms-2077/ The New EC Tower news-1763 Wed, 18 Apr 2018 11:21:44 +0200 STULZ introduces Micro Data Centers with optional direct liquid-to-chip cooling for low to high density IT applications https://www.stulz.co.uk/en/newsroom/news/stulz-introduces-micro-data-centers-with-optional-direct-liquid-to-chip-cooling-for-low-to-high-dens-1/ Reliable, scalable infrastructure solution for edge computing, hybrid cloud and Industry 4.0... Reliable, scalable infrastructure solution for edge computing, hybrid cloud and Industry 4.0 projects

STULZ’s new Micro Data Center, The STULZ Micro DC, can be configured with all the key design aspects of a bricks and mortar data center, including critical power control and monitoring, fire suppression, physical security and precision cooling.

STULZ introduces a micro data center for low to high density applications. The STULZ Micro DC provides a cost-effective solution to quickly build up local IT capacity where it is needed. This modular, highly efficient solution is easily scalable to meet both the needs of today and the growth of tomorrow – even in places where space is limited. The standard 19" rack system supports assemblies ranging from 40U to 48U in height and can be equipped with accessories such as a UPS, PDU, cable management system, LED lights, fire suppression, cameras, software for environmental monitoring and environmental control.  

Micro DCs are available as a single-rack variant or as a locally integrated group of up to six racks. To ensure reliable operation in harsh environments, the cabinet can be fitted with dust and water ingress protection in compliance with the IP55 specification. Possible applications of the STULZ Micro DC range from facilitating edge-computing architectures and IoT installations to connecting IT to production lines and manufacturing facilities.

As standard, the Micro DC features an Integrated Cooling System (ICS™) for low to medium density applications which can be mounted internally or on the side. An internal mount provides 3kW – 5kW of precise cooling and uses 6U, whilst a side-mount provides 5kW – 25kW of cooling without using any U space. The ICS is available in chilled water and direct expansion systems.

For high performance applications, STULZ has partnered with CoolIT Systems to offer the innovative Chip-to-Atmosphere™ cooling solution, which combines the standard ICS with a direct liquid-to-chip system, Direct Contact Liquid Cooling (DCLC™). This state-of-the-art technology uses the exceptional thermal conductivity of liquid to provide concentrated cooling to the hottest components inside a server, enabling very high density configurations. By incorporating DCLC technology from CoolIT, the STULZ Micro DC can dissipate extreme heat loads of up to 80 kW per rack. This widens the range of applications of the Micro DC to include a variety of processor-intensive applications in science and research, big data and engineering.

The highly efficient and energy-saving direct liquid-to-chip cooling solution can be flexibly integrated into existing water circuits such as condenser return water, chiller return water and facility auxiliary water. Counter cooling at temperatures from 2 °C to 35 °C is carried out using chillers, cooling towers, fluid coolers or conventional heat exchangers. CoolIT DCLC technology has been embraced by the majority of the main hardware manufacturers, who now ship their servers with embedded DCLC technology.

For more information visit www.stulz.co.uk/en/micro-dc/

]]>
news-1764 Tue, 06 Feb 2018 16:48:49 +0100 STULZ introduces a low-GWP variant of CyberCool 2 ze chiller https://www.stulz.co.uk/en/newsroom/news/stulz-introduces-a-low-gwp-variant-of-cybercool-2-ze-chiller-1764/ Chiller using R-1234ze refrigerant now available in the output range 300 - 1000 kW STULZ GmbH is... Chiller using R-1234ze refrigerant now available in the output range 300 - 1000 kW

STULZ GmbH is extending its product portfolio to include new "ze chillers" in the CyberCool 2 series. These employ proven CyberCool 2 technology and have been specially optimized for use with climate-friendly HFO-1234ze refrigerant.

Hamburg, 1/31/2018 – Hamburg-based STULZ GmbH is expanding its product portfolio with additional "ze chillers" in the CyberCool 2 series. These new chillers are available in air-cooled versions and are designed for use with climate-friendly R-1234ze refrigerant. This HFO refrigerant has a very low global warming potential and, in the context of Regulation (EU) No 517/2014 and the concomitant refrigerant shortage, is a future-proof and efficient alternative to conventional HFC-based refrigerants. STULZ CyberCool 2 ze chillers are available in various output capacities from 300 to 1000 kW and offer reliable and efficient cooling solutions for medium and large data centers, telecommunications and industrial applications.

Like the CyberCool 2 chillers that were introduced in 2013, the ze series is also equipped with the largest possible EC fans and sound-insulated compressor chambers to reduce noise emissions. Thanks to the dynamic control technology developed by STULZ, the CyberCool 2 also supports indirect free cooling. This intelligent way of changing over between compressor operation and indirect free cooling facilitates particularly resource-saving operation coupled with maximum energy efficiency. STULZ’s Mix Mode Boost technology can also use the entire surface of DX coils without needing to regulate the fan speed, which further increases energy efficiency and significantly reduces operating costs. Microchannel condensers constructed entirely of aluminum and equipped with air baffles also optimize flow into the inner coil elements. These STULZ CyberCool 2 ze series chillers are available now. Alternatives to conventional refrigerants are currently being tested.

]]>
news-1676 Wed, 20 Dec 2017 15:43:11 +0100 Teraco and STULZ triumph at the DCD Global Awards 2017 https://www.stulz.co.uk/en/newsroom/news/teraco-and-stulz-triumph-at-the-dcd-global-awards-2017-1676/ Teraco in collaboration with STULZ is celebrating winning an Award at the DCD 2017 Awards. The... Teraco, in collaboration with STULZ, is celebrating winning an award at the DCD 2017 Awards. The companies were commended for increasing energy efficiency and reducing the carbon footprint in a very challenging climate at the Isando Data Centre 7 in Johannesburg.


Hamburg, 2017/12/20 – Now in their eleventh year, the DCD Awards went global for the first time and celebrated the best projects, technologies, companies, individuals and teams from across the industry. The 2017 event took place in December at London's Royal Lancaster Hotel. Approximately 700 guests gathered to celebrate these hotly contested awards which received over 200 entries from 30 countries.

Barrister turned comedian and broadcaster Clive Anderson, who hosted the event, presented the Energy Efficiency Improvers Award (sponsored by Starline) to Brendan Dysel, Head of Infrastructure Management at Teraco, who commented: "The ambient conditions in South Africa provide great challenges to meet data centre efficiencies deemed the norm in global circles. The STULZ partnership with Teraco proved most successful and the DCD award for Energy Efficiency Improvers Award speaks volumes. The benchmark has been set and the engineering teams will now continue to improve on our current designs and sustainable operations."

There was stiff competition from DigiPlex (Norway), Liberty Global in the UK and vXchnge in the USA. Teraco is the first provider of resilient, vendor-neutral data environments in South Africa and is leading the way in the region by making energy efficiency one of its highest priorities.

Kurt Plötner, Vice President of STULZ Germany, commented: "We are delighted that Teraco and STULZ South Africa have received this accolade. It is a tremendous achievement which would not have been possible without the sales, product management and design engineers in Hamburg being absolutely focused and determined to build an energy efficient, state of the art data centre in Johannesburg’s challenging climate. Needless to say that we at STULZ are proud to be associated with Teraco and their team!"

The 27,500 m² colocation facility has a net cooling capacity of 2 MW, supplied by a concurrently maintainable chilled water system with three CyberCool 2 chillers (N+1) and 14 CyberAir 3 chilled water units from STULZ. Due to extremely hot summers, the ambient temperature reaches 40 °C. However, with water temperatures of 22 °C/14 °C, Teraco is able to switch over into mix mode at an ambient temperature of 20 °C (approx. 5,700 h/a), increasing energy efficiency by reducing the electrical power consumption of the chillers. The design of the installation allows the use of free cooling for approximately 65% of the year, saving a significant amount of energy (circa 45%) and reducing Teraco's carbon footprint.
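As a rough, illustrative sanity check (not part of the original announcement), the quoted mix-mode hours line up with the stated free cooling share of the year:

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year
mix_mode_hours = 5_700     # approx. annual hours below the 20 °C switchover point

free_cooling_share = mix_mode_hours / HOURS_PER_YEAR
print(f"Free cooling share of the year: {free_cooling_share:.0%}")  # → 65%
```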

STULZ were also proud to supply the cooling solutions for The MareNostrum Data Center in Spain which won the World’s Most Beautiful Data Center Award.

]]>
news-1619 Wed, 23 Aug 2017 15:17:00 +0200 STULZ presents chiller with integrated Free Cooling for data centers and industrial applications with low cooling needs https://www.stulz.co.uk/en/newsroom/news/stulz-presents-chiller-with-integrated-free-cooling-for-data-centers-and-industrial-applications-wit/ The new STULZ WPAmini delivers a cooling capacity of 160 kW in a very small footprint Two separate... The new STULZ WPAmini delivers a cooling capacity of 160 kW in a very small footprint

Two separate refrigerant circuits increase operational reliability. Four scroll compressors guarantee energy efficiency, even in small partial load stages.

Hamburg, 2017/08/22 – With the WPAmini, STULZ is expanding its product line of air-cooled chillers. The compact chillers deliver a cooling capacity of up to 160 kW and have been optimized for use in data centers or industrial chilling. The chillers are equipped with two redundant refrigerant circuits to increase operational reliability. The circuits are controlled according to the required cooling capacity, so that at low cooling demand only one circuit is activated and runs at maximum efficiency. The WPAmini is equipped with oversized aluminum microchannel condensers and four scroll compressors, which are activated and deactivated in stages (25, 50, 75, 100 %). Compressor operating time can be reduced to a minimum using the optional Free Cooling facility, which considerably cuts power consumption and operating costs. Three operating modes are available in all: DX mode, Free Cooling mode and Mixed mode. In Mixed mode, cooling capacity is generated by simultaneous Free Cooling and compressor cooling. In this way, significant energy savings can be achieved even at moderate outdoor temperatures.
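The four-compressor staging described above can be sketched as a simple rule of thumb (an illustrative model only; the real unit controller is considerably more sophisticated):

```python
import math

def active_compressors(load_fraction: float, stages: int = 4) -> int:
    """Smallest number of equally sized compressors (25 % steps for 4 stages)
    whose combined capacity covers the requested load fraction."""
    load_fraction = min(max(load_fraction, 0.0), 1.0)  # clamp to 0..1
    return math.ceil(load_fraction * stages)

# Hypothetical load points, for illustration:
for load in (0.10, 0.30, 0.55, 0.80, 1.00):
    print(f"{load:.0%} load -> {active_compressors(load)} of 4 compressors")
```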

For noise-sensitive locations, the WPAmini is also available in a "Low Noise" configuration. Here, the compressors are enclosed in a special sound-insulated housing, which lowers the volume to normal conversation level. The chillers can also be fitted with optional fan diffusers, which further reduce the noise level as well as electricity consumption. The WPAmini can run in extreme climates: its operating limits range from -40 to +50 °C, depending on the model. The switchgear cabinet is generously sized at the factory, ensuring sufficient space for optional electrical equipment. There is therefore no need for additional switchgear cabinets or extensions, with the interface problems these sometimes entail. Furthermore, the WPAmini series already satisfies the requirements of Stage 2018 of the Ecodesign (ErP) Directive.

]]>
news-1611 Wed, 16 Aug 2017 16:00:00 +0200 How can I compare chillers for data center applications? https://www.stulz.co.uk/en/newsroom/blog/how-can-i-compare-chillers-for-data-center-applications-1611/ Every planner/consultant or customer is familiar with this problem. Once the decision to purchase a... Every planner/consultant or customer is familiar with this problem. Once the decision to purchase a system has been made, the tendering process begins and you are confronted with the major task of weighing up which product is most suitable for this project.

This article deals with the question of what technical data can be used to compare equipment, and how we can question or verify the credibility of this data relatively easily.

Before we get started, we should first be clear about what our main focus is. What are the most important factors for my project? Is the focus on capital expenditure (CapEx), operating expenditure (OpEx), noise levels, or the easiest possible integration in an existing system? Comparing capital expenditure is relatively straightforward. However, it's very important to ensure that both machines can feature the same equipment. Does the standard version of a chiller have something to offer that is only available as an optional extra in the other model?

Where integration is concerned, the key aspect is collaboration with the manufacturer, and how flexible they are. Here, comparisons are already getting more difficult. However, it's clear that some manufacturers are more flexible than others. In this context, flexibility means far more than simply adding to the standard range of options; it could involve a larger compressor with the same footprint, the adaptation of load entry points, specific electrical requirements, and a great deal more.

Once we have gathered documentation for the same basic configuration from two or more manufacturers, the great comparison of technical data can commence. Defined KPIs such as EER and ESEER are a popular means of comparison. But how meaningful are these really? To gain more clarity, let's first define what these two values are actually about. First of all, here is an explanation of how they are calculated:

 

Energy Efficiency Ratio (EER)

EER = Cooling capacity / Power consumption

EER is the ratio of cooling capacity to power consumption. This value should be as high as possible (i.e. not much energy is required to produce the desired cooling capacity). 
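The ratio above can be captured in a few lines; the chiller figures used here are hypothetical and purely illustrative:

```python
def eer(cooling_capacity_kw: float, power_consumption_kw: float) -> float:
    """Energy Efficiency Ratio: cooling capacity divided by power consumption.
    A higher value means less energy is needed per unit of cooling."""
    return cooling_capacity_kw / power_consumption_kw

# Hypothetical example: 300 kW of cooling from 100 kW of electrical input.
print(eer(300, 100))  # → 3.0
```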

]]>
news-1556 Wed, 28 Jun 2017 07:16:26 +0200 STULZ wins Award for Most Trusted Product in Ventilation, Air Conditioning & Refrigeration https://www.stulz.co.uk/en/newsroom/news/stulzwinsawardformosttrustedproductinventilationairconditioningrefrigeration00-1556/ The air conditioning specialist received a prize in the "Compact Liquid Chillers" category A... The air conditioning specialist received a prize in the "Compact Liquid Chillers" category.

A high-caliber jury of 500 specialist planners, system builders and operators from the ventilation, air conditioning & refrigeration industry awarded the Hamburg-based company top marks for product quality, planning support, after-sales service and trustworthiness.

]]>
news-1549 Thu, 30 Mar 2017 16:52:54 +0200 STULZ Publishes White Paper for Engineers Designing Cannabis Grow Rooms https://www.stulz.co.uk/en/newsroom/news/stulz-publishes-white-paper-for-engineers-designing-cannabis-grow-rooms-1549/ Frederick, MD (March 09, 2016) Leading mission critical cooling manufacturer STULZ Air Technology... As of today, 29 states and the District of Columbia have some form of legal cannabis. That change has driven a large interest in the cannabis industry and the rapid expansion of grow facilities. As new laws have passed within a territory, local engineers, architects, and construction professionals have been faced with the challenge of rapidly understanding and providing solutions for a facility that they may have never encountered before.

The new white paper was authored by Dave Meadows who is the Director of Industry, Standards, and Technology at STULZ. “With many engineering firms being asked to design a grow facility for the first time, my hope is that this white paper will become a valuable resource for engineering firms who need to get up to speed quickly with selecting the right HVAC equipment,” said Mr. Meadows. “STULZ has been working directly with customers in this industry to deliver the right equipment and features to make them more successful. By sharing what we’ve learned, I believe the entire industry will benefit.”

STULZ USA has provided environmental control equipment to some of the premier grow facilities in the United States. By working closely with growers, STULZ has developed their CyberOne system to address some of the most important issues facing the cannabis industry including the prevention of mold and mildew, pest control, CO2 augmentation, and prevention of pollen contamination.

According to the white paper, there are several similarities between a data center and the modern cannabis grow room. The most noticeable is the energy-intensive nature of both industries and the resulting desire to reduce operational costs through energy efficiency. Computer room air conditioners (CRACs) are well suited in almost all respects for grow room applications, with controls and software designed to maintain tight tolerances of temperature and humidity while using as little energy as possible. However, unlike a data center, the grow room has a large latent load, which comes from the large amounts of water fed to the plants each day. This water, which is removed primarily by the HVAC system, is released through a biological mechanism known as “transpiration.” Typical computer room air conditioners are designed to remove as little moisture from a data center as possible, to prevent the need for re-humidifying the space, which poses a challenge for any engineer designing for grow room conditions.

To address this issue, STULZ USA medical CyberOne cannabis units have an enhanced dehumidification feature that rapidly reduces moisture content in the grow space while making fewer air changes. “We have designed our cannabis systems so that the STULZ E2 controller monitors relative humidity in the grow space, and when it identifies a moisture spike due to transpiration, it slows the fans' rotational speed to reduce the CFM, dropping the average coil temperature to the low temperature set point,” confirmed Mr. Meadows. “The colder coil rapidly cools air below its dew point and strips unwanted moisture. Hot gas, electric, steam, or hot water reheat, precisely adjusted by the controller, maintains the leaving air temperature during dehumidification to stabilize the grow room and limit stress on the plants.”

The complete white paper can be downloaded from the STULZ USA website at http://info.stulz-ats.com/medical-marijuana-environmental-control-white-paper.

]]>
news-1522 Tue, 28 Mar 2017 12:26:18 +0200 Digitization of documents https://www.stulz.co.uk/en/newsroom/blog/digitization-of-documents-1522/ Digitization continues to grow inexorably in almost all walks of life, and provides new possibilities for the documentation and logging of technical systems.

 

Safety and availability

Today, logbooks for chilling systems must be available to view at any time, and in several places within a company at once, in order to satisfy stringent requirements relating to legal stipulations, certifications and traceability.

This presents a major logistical challenge for conventional documentation resources. A large amount of time has to be spent copying and distributing paper logbooks, so that the information they contain can be available to all and also protected against loss.

The complete digitization of logbooks for chilling systems ensures that the documentation is freely available wherever needed, without any additional time being spent and with absolutely no delay. What's more, digital data can be protected better and more effectively.

It is especially important to protect data against loss and unauthorized access. What matters here is a comprehensive data backup concept in the data center on the one hand, and a variety of precisely harmonized safety mechanisms in the system architecture on the other hand. Encryption of stored data, among other things, is mandatory.

In order to satisfy the legal requirements for digital documentation and to make full use of further advantages, such as easy handling and high data security, the use of professional software such as the STULZ Service Portal is indispensable.

]]>
news-1521 Mon, 27 Mar 2017 12:07:00 +0200 CyberCool 2 Non-Glycol: Stulz offers Non-Glycol options for the CC2 chiller https://www.stulz.co.uk/en/newsroom/news/cybercool-2-non-glycol-stulz-offers-non-glycol-options-for-the-cc2-chiller-1521/ High-efficiency chiller meets requirements for refrigerant-free buildings

STULZ GmbH is adding a Non-Glycol option to its CyberCool 2 family portfolio. This allows the high-efficiency chiller to also be used in buildings that have stringent requirements regarding their facility management and must refrain from the use of glycol.

Hamburg, 3/7/2017 – STULZ, the Hamburg air conditioning specialist, has added a Non-Glycol option to its tried and tested CyberCool 2 chiller. The specially adapted design enables data centers to be chilled entirely with water; glycol-based brine is used solely in the external Free Cooling circuit of the chiller. As this new technology means that no glycol gets into the building, even the most exacting facility management requirements can be satisfied.

The Free Cooling and chilled water circuits are separated by an additional brazed plate heat exchanger in the chiller. This shifts the switchover points for Free Cooling and Mixed mode by a few degrees. The system layout with two brazed plate heat exchangers for the water inlet and water outlet circuits achieves a minimal terminal temperature difference of 2K and therefore suffers considerably smaller temperature losses than conventional non-glycol systems.

Moreover, thanks to the new Mix Mode Boost technology and maximum return temperatures of up to 35 °C, the Free Cooling period can be increased by up to 20 %, dramatically reducing electricity costs. This solution provides data center operators who wish to go glycol-free with an efficient alternative that uses Free Cooling for long periods.

]]>
news-1501 Thu, 23 Feb 2017 15:40:21 +0100 Digital Transformation and Industry 4.0 - a brief look ahead https://www.stulz.de/en/newsroom/blog/digital-transformation0-1501/ Surely not many companies haven't asked themselves at least once what influence increasing... Few companies haven't asked themselves at least once what influence increasing digitization and Industry 4.0 will have on their business model. STULZ has specialized in data center air conditioning since 1971, and as a company we have experienced many digital trends first-hand. But if you get to grips with the latest reports on digital transformation and Industry 4.0, you quickly realize that this time we are facing something bigger. And one thing I notice is that the arguments are sometimes rather drastic. My impression is this: if you don't begin digital conversion immediately, your company will be finished in a few years' time. Or you can secure your future, turn everything upside down and undergo digital transformation.


Well, I don't see it quite that drastically, but all companies should take action. They should analyze their business models, and they really need to try to determine to what degree digital transformation is necessary. Overhasty steps and knee-jerk digital initiatives can rapidly backfire and wipe out a lot of capital. Companies specializing in complex industrial goods know that personal consultation, for example, cannot be replaced 1:1 by online consultation. However, when carrying out an analysis, many companies will realize that they already started along the road to digital transformation a long time ago (see next paragraph).

Another difficulty during analysis is that digitization, digital transformation, IoT, big data, cloud computing and Industry 4.0 are often all lumped together, so no one can figure out whether they are affected and what exactly is important for the company.

The fact is: every company is affected, whatever they call the process. Consumer behavior in relation to products, services, communication, etc. is changing due to powerful mobile devices, fast data networks and state-of-the-art data centers. In addition, all data are collected and every technical device is connected. This is what the digital future looks like – and it is as inevitable as the rising of the sun in the morning.

Some negative aspects are undoubtedly apparent, but there are numerous advantages and opportunities that consumers and companies can use to their benefit. Every entrepreneur should ask themselves this: exactly where must I digitally expand my business, my production, and so on, in order to still find buyers in future, focus production better, or sell my services? The localization of real-time data and its use in personalized customer apps, in particular, offers brick-and-mortar stores a means of making personalized offers that are dispatched directly, for example. Here, the right timing matters. And for this you need to know exactly when a customer is ready to buy, or (better still) when they have to buy, because the refrigerator is empty or their shoes are worn out.

 

Have the digital transformation and Industry 4.0 changed cooling solutions for data centers?

The digital transformation is without doubt one of the primary drivers of change in air conditioning systems for data centers, but further important reasons also exist: rising energy prices, changing regulations, new environmental requirements, higher safety standards, the allowance of more heat and humidity in data centers, and geographical considerations are some additional factors.

However, it is impossible to deny that new data center trends have arisen due to digital changes, especially in recent years. Digitization generates more data, which has led to the construction of many new, larger data centers. Moreover, existing data centers are regularly being equipped with new, more powerful servers.

But modernization in the form of new servers confronts data center operators with problems. Servers can easily be replaced with more powerful devices. This inevitably leads to a higher heat load over the existing surface area. The air conditioning units can generally be modernized, but it is not that simple to increase the number of chillers for removing the higher heat load. If a data center was planned for a heat load of 1 MW 10 years ago, and now 3 MW and more is being produced in the same space, the entire infrastructure – pipes, pumps, raised floors – has to be renovated. Additionally, more space is required in the data center for more air conditioning units, and further space for chillers is also needed outside the building.

 

Modernization with chillers and air handling units

In my opinion, data center operators have various means at their disposal for converting an existing structure. When renovating an existing CW system, chillers must be used that are specially designed for data center air conditioning, and feature maximum availability and Free Cooling, for example. This minimizes the energy disadvantages of conventional chillers. A standard chiller from the building's air conditioning system is out of place here, because it does not satisfy the project-specific requirements of a data center.

It is precisely for this reason that we developed our CyberCool 2. With the CyberCool 2, we offer a chiller with maximized heat exchanger surface areas and maximum size fans, which exploit the available space down to the last millimeter to improve air conduction. With the addition of effective Free Cooling and newly designed air conduction, the cooling capacity over an existing area is increased. Combined with maximum size CW indoor units, existing space can be put to better use, and then nothing stands in the way of expansion and more powerful servers.

]]>
news-1485 Tue, 31 Jan 2017 16:38:17 +0100 Data Centre World, London, 2017/3/15 - 16-1 https://www.stulz.co.uk/en/newsroom/news/data-centre-world-london-2017315-16-1-1485/ About Data Centre World 2017

Data Centre World is the world's largest exhibition for data centers. This is where you can discover all the latest trends and developments. More than 600 suppliers from the industry will be exhibiting, and of course we will also be there with our own stand. Once a year, you have the opportunity to meet thousands of experts in one place in just two days.

STULZ at the Data Centre World 2017

STULZ is one of the world's leading solution providers of technology for energy efficient temperature and humidity management. Our product range includes traditional room cooling, high-density cooling, chillers, and air handling units with adiabatic cooling.

We will be exhibiting an exclusive selection from our product portfolio on a 72 m² stand. Experts from each business unit will be on hand to provide you with the best possible advice. So come and get to know us.

]]>
news-1487 Tue, 31 Jan 2017 10:52:52 +0100 Data Center Temperature Control https://www.stulz.de/en/newsroom/blog/data-center-temperature-control-1487/ Data centers with air conditioning by means of precision air conditioning units basically have two... Data centers air conditioned by means of precision air conditioning units basically have two main types of control: control of the air conditioning units based on the air inlet temperature (so-called return air temperature control), and control based on the air outlet temperature (so-called supply air temperature control).

Return air temperature control is the best known and most widespread type of control. The CRAC or CRAH units are equipped with temperature sensors (as a rule, combined temperature and humidity sensors) in the vicinity of the air inlet. A setpoint is set for the return air temperature, and the unit controller keeps this setpoint stable. If the airflow is constant, fluctuations in the data center heat load influence the supply air temperature.

The supply air temperature is the temperature of the air as it leaves the air conditioning unit. It is approximately the same as the server inlet temperature. If the return air setpoint is set to 33 °C, for example, and the data center air conditioning system is designed for a temperature difference of 15 K, under full load the supply air temperature would be 18 °C. Since full load is seldom reached in a data center, but instead a partial load of 40 % to 60 % is common, the supply air temperature – at a constant airflow – would be 27 °C (at 40 % load) to 24 °C (at 60 % load).
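The arithmetic above can be sketched in a few lines of Python (the function name is ours, for illustration):

```python
def supply_air_temp(return_setpoint_c, design_dt_k, load_fraction):
    """Supply air temperature at constant airflow: the air-side
    temperature difference scales linearly with the heat load."""
    return return_setpoint_c - design_dt_k * load_fraction

print(supply_air_temp(33.0, 15.0, 1.0))  # full load      -> 18.0 °C
print(supply_air_temp(33.0, 15.0, 0.4))  # 40 % load      -> 27.0 °C
print(supply_air_temp(33.0, 15.0, 0.6))  # 60 % load      -> 24.0 °C
```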

Diagram 1 shows the supply air temperatures produced at a return air temperature of 33 °C, a constant airflow and a temperature difference of 15 K at full load for various partial load scenarios.

]]>
news-1456 Mon, 09 Jan 2017 11:30:37 +0100 TOP 2016 Blogpost https://www.stulz.co.uk/en/newsroom/news/top-2016-blogpost-1456/ The year 2016 has gone but we want to share the TOP 3 blogposts on STULZ Blog with you. The year 2016 has gone but we want to share the TOP 3 blogposts on STULZ Blog with you.

TOP 1: AER – A new efficiency indicator for airflow in Data Centers  (March 2016)
Recently, a value has repeatedly popped up in air conditioning system specifications, defining the maximum permitted power consumption for the fans at a certain airflow.
Find out more: https://www.stulz.co.uk/en/newsroom/blog/news/aer-a-new-efficiency-indicator-for-airflow-in-data-centers/


TOP 2: Dehumidification in data centers when using CW units at high-temperature levels  (March 2016)
Most users and planners are now aware that temperature levels when cooling IT equipment in data centers have changed dramatically in recent years.
Find more information here: https://www.stulz.co.uk/en/newsroom/blog/news/dehumidification-in-data-centers-when-using-cw-units-at-high-temperature-levels/


TOP 3: The Data Center standard DIN EN 50600 in brief  (April 2016)
A standard for Data Centers? How will that work? Every Data Center looks different, there are countless sizes, types, uses and concepts of and for Data Centers. How can something like this be standardized?
Find the answers here:  https://www.stulz.co.uk/en/newsroom/blog/news/the-data-center-standard-din-en-50600-in-brief/

For 2017 we are planning more posts on interesting subjects. Stay tuned!

]]>
news-1437 Wed, 14 Dec 2016 10:44:36 +0100 Our facilities in Italy just got bigger! https://www.stulz.co.uk/en/newsroom/news/our-facilities-in-italy-just-got-bigger-1437/

Over 9000 sqm of state-of-the-art technology for STULZ S.p.A.

We are glad to announce the opening of our new production plant, officially presented during a ceremony held on the 27th of September 2016.

The new structure, next to the historical site in Valeggio sul Mincio (VR), covers over 9000 sqm: stock (2500 sqm), offices (2000 sqm) and 4500 sqm for the new production lines. We are going to reveal more in the next few weeks, but in the meantime you can watch a short video that, in less than 2 minutes, will show you the construction and completion of this important project.

]]>
news-1419 Fri, 18 Nov 2016 15:54:14 +0100 Top 10 Data Center Best Practices https://www.stulz.co.uk/en/newsroom/blog/top-10-data-center-best-practices-1419/ Data Center Best Practices

Load density, air distribution, floor tile positioning; data center design is more complicated than ever, but with some best practice considerations, creating an efficient, reliable data center design is within your grasp.

Let's explore 10 Important Data Center Best Practices:

]]>
news-1425 Fri, 18 Nov 2016 15:54:14 +0100 The Raised Floor https://www.stulz.co.uk/en/newsroom/blog/the-raised-floor-1425/ Today, the raised floor is still an important element in many new data centers. But why? What is it for, what does it do? Below are a few thoughts on the subject of raised floors.

To loosely quote DIN EN 50600 (also see "Data centre standard DIN EN 50600 (VDE 0801-600) in brief"), the raised floor is a system consisting of completely removable and exchangeable floor grills fitted onto adjustable base frames, which are interconnected by beams. Its purpose is to make the space under the floor available for facility services.

Now as ever, precision air conditioning units (CRAC or CRAH) are still the first choice for air conditioning data centers, even now in the age of in-row cooling, rack cooling and air handling units. This article does not go into the reasons for this. Instead, it deals with the raised floor which, in conjunction with precision air conditioning units, ensures maximum reliability and efficiency.

In the past, the raised floor concealed "facility services" such as power cables, data cables and piping, and the cold air had to painstakingly find its way through these to the air outlets. A good deal has changed since then. It is now common knowledge that the primary aim of the raised floor is to convey cold air to the servers, and so wiring is mostly routed above the racks. In addition, these days the height of the raised floor is planned to ensure that the air in this supply air duct has sufficient space to reach its destination without major losses or resistance.

It is vital that a raised floor is leak-proof if it is to be used for air distribution. Care must be taken to ensure that the cold air only leaves the raised floor in the direction of the servers, where planned and where most effective. Leaks in cable glands, beneath racks or at wall connections must be meticulously sealed. Raised floor grills that are removed for maintenance purposes hinder air distribution, so this practice should be kept to the necessary minimum.

]]>
news-1411 Fri, 28 Oct 2016 00:00:00 +0200 Use of pressure independent control valves in CW units https://www.stulz.co.uk/en/newsroom/blog/use-of-pressure-independent-control-valves-in-cw-units-1411/ In a previous blog (CW standby management), we already discussed the fact that liquid cooling systems with centralized chilled water supply and so-called CW (= chilled water) precision air conditioning units are the most popular choice for cooling larger data centers. This is primarily because of their good scalability and comparably simple hydraulics. 

 

Along with the chilled water heat exchanger and fans, the chilled water control valve is the other principal mechanical component of a CW precision air conditioning unit. In the past, either 3-way or 2-way control valves were used, depending on the type of hydraulic system and pump used (variable or constant speed). However, for some time now so-called "pressure independent control valves" or "PICVs" have also been frequently used as the 2-way control or ball valve.


In order to better understand the method of operation and advantages of the PICV, we would do well to recall some fundamental hydraulic principles:

  1. A control valve ensures that a heat exchanger is always supplied with the correct quantity of water for the current operating point or cooling needs (full load or partial load). The appropriate valve position or degree of opening is determined by an external control signal. 
  2. The valve size (keywords: Kvs value, Valve Authority) must be based on the required quantity of water (full load operation) and the water-side pressure drop at the heat exchanger. 
  3. The pressure drop at the control valve resulting from the valve calculation is also referred to as "differential pressure". This differential pressure and the pressure drop at the heat exchanger must be correctly harmonized with one another: 
    • Differential pressure too low (valve too large): the valve has only a small stroke range, with adverse effects on control quality and unstable control behavior (fluctuations) as possible consequences
    • Differential pressure too high (= valve too small): major noise and cavitation possible, superfluous pump energy consumed 
  4. In every hydraulic system, the pressure drops across valves, heat exchangers and pipes vary depending on the type of system, installation location and distance from the pump, as well as on changing load conditions. 
  5. The definitive factor when determining the size and settings of the pump is to make sure that the last consumer in the system is always supplied with the necessary quantity of water at full load, and that the associated differential pressure can be surmounted. 
  6. The closer a consumer (e.g. a CW precision air conditioning unit) is situated to the pump, the greater the flow rate and, without so-called "hydraulic balancing", the differential pressure through the control valve of this consumer will also rise. Hydraulic balancing makes sure that each consumer in the system always receives the required quantity of water, and that the water does not take the path of least resistance.
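Points 2 and 3 can be illustrated with the standard water-valve sizing relation Kv = Q/√Δp (Q in m³/h, Δp in bar) and the usual definition of valve authority; the function names and figures are ours, for illustration:

```python
from math import sqrt

def required_kv(flow_m3h, dp_bar):
    """Kv value (flow in m³/h at a 1 bar drop) that a fully open
    control valve needs in order to pass the given flow at the
    given differential pressure."""
    return flow_m3h / sqrt(dp_bar)

def valve_authority(dp_valve_bar, dp_rest_bar):
    """Valve authority: share of the circuit's variable pressure
    drop taken by the fully open valve (around 0.5 is a common
    design target)."""
    return dp_valve_bar / (dp_valve_bar + dp_rest_bar)

# e.g. 10 m³/h through a coil, 0.35 bar available across the valve
print(round(required_kv(10.0, 0.35), 1))  # -> 16.9
print(valve_authority(0.35, 0.35))        # -> 0.5
```

A too-large valve (higher Kvs than needed) pushes the authority down and leaves only a small usable stroke range, which is exactly the instability described in point 3.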


But what is the role of the pressure independent control valve here?

A modern electronic pressure independent control valve basically always combines four functions in one valve unit – pressure independent control, measurement of the water flow, a shut-off function, and automatic hydraulic balancing. These functions are performed by the control ball valve, valve drive and flow sensor.

This means that in a pressure independent control valve, the setpoint is always the required water flow rate. Since the current flow is measured continuously, the valve adapts the quantity of water in line with the load, and the valve's pressure drop (differential pressure) is therefore a result of the flow rate, not something defined by the valve size or Kvs value. Consequently, any difference between the setpoint and the current flow due to a change in differential pressure is compensated fully automatically by the opening angle of the control ball valve.

So "pressure independent" (or more accurately, "independent from differential pressure") means that the correct amount of water is always supplied to the consumer, and the control quality is dependent neither on the valve's position in the hydraulic system nor the prevailing pressure conditions.
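The control principle can be sketched as a simple loop (a toy model under our own assumptions, not the algorithm of any particular valve manufacturer): the setpoint is a flow rate, and the opening angle is corrected until the measured flow matches it, whatever the differential pressure happens to be.

```python
from math import sqrt

def picv_step(flow_setpoint, flow_measured, opening, gain=0.05):
    """One control step of an electronic PICV: correct the opening
    angle (0..1) from the flow error, not from a position demand."""
    error = flow_setpoint - flow_measured
    return min(1.0, max(0.0, opening + gain * error))

# Toy hydraulic model: flow grows with the opening angle and with the
# square root of the differential pressure across the valve.
dp_bar, opening = 0.6, 0.5
for _ in range(200):
    flow = opening * 12.0 * sqrt(dp_bar)
    opening = picv_step(5.0, flow, opening)
print(round(opening * 12.0 * sqrt(dp_bar), 2))  # -> 5.0 m³/h
```

If `dp_bar` changes (say, another consumer opens its valve), the loop settles on a new opening angle that restores the 5 m³/h setpoint – which is exactly the "pressure independent" behavior described above.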


Advantages of a pressure independent control valve of this kind

1. Planning/design:

  • Fast and simple valve design based only on the required quantity of water – kvs values, Valve Authority and varying differential pressures can basically be ignored
  • No balancing valves or circuit control valves needed – lower investment and installation costs

2. Start-up/operation:

  • No balancing valves or circuit control valves needed – therefore lower water-side pressure drops and the possibility of reduced pump power consumption.
  • No time-consuming, labor intensive hydraulic balancing required – the pressure independent control valve performs the task of hydraulic balancing; the required quantity of water is adjusted easily
  • Stable and precise control in all load states thanks to the defined quantity of water, regardless of the type of hydraulic system chosen
  • Water quantity can be flexibly adjusted in the event of extensions, conversions and/or modernization
  • Water quantity can be easily read – more in-depth analysis (e.g. cooling capacity) is possible

It is clear that the use of so-called pressure independent control valves makes sense in most cases, as investment and operating costs can be lowered, and stable control is guaranteed irrespective of the chosen hydraulic system and current load conditions.

]]>
news-1408 Thu, 06 Oct 2016 10:56:19 +0200 Cooling capacity - How to compare apples with apples https://www.stulz.co.uk/en/newsroom/blog/cooling-capacity-how-to-compare-apples-with-apples-1408/ Manufacturers often provide different kinds of information about cooling capacity in their... Gross cooling capacity is produced by the air conditioning unit via the heat exchanger. Fans are used to move the air through the air conditioning unit. These consume energy, which is ultimately converted into heat. This heat, also produced in the air conditioning unit, lowers the gross cooling capacity. The result is the net cooling capacity.
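As a minimal numeric sketch of this relationship (the figures are illustrative, not from any specific unit):

```python
def net_cooling_capacity(gross_kw, fan_power_kw):
    """Fan motor power ends up as heat inside the unit, so it is
    subtracted from the gross capacity produced at the coil."""
    return gross_kw - fan_power_kw

print(net_cooling_capacity(100.0, 4.5))  # -> 95.5 kW net
```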

Modern precision air conditioning units cool the air without dehumidifying it. All the cooling capacity generated is therefore used precisely for what is actually needed: cooling the air. In older air conditioning units, and in those with poorly sized components or a bad choice of return air conditions, some of the generated cooling capacity can inadvertently be used to dehumidify the air during the cooling process. Valuable cooling capacity is lost and the air conditioning unit works less efficiently.

The entire sum of cooling capacity generated is known as the total cooling capacity. The proportion used for purposely cooling the air is called the sensible cooling capacity; any proportion inadvertently used to dehumidify the air is called the latent cooling capacity. In an ideal situation with no unwanted dehumidification, the sensible cooling capacity equals the total cooling capacity. The ratio of sensible to total cooling capacity is referred to as the "sensible heat ratio", or SHR for short. In ideal conditions without dehumidification, the SHR is 1.0.
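The SHR definition reduces to one line; the figures below are illustrative:

```python
def sensible_heat_ratio(sensible_kw, latent_kw):
    """SHR = sensible / total cooling capacity; 1.0 means no
    capacity is lost to unintended dehumidification."""
    return sensible_kw / (sensible_kw + latent_kw)

print(sensible_heat_ratio(90.0, 0.0))   # ideal case   -> 1.0
print(sensible_heat_ratio(90.0, 10.0))  # 10 kW latent -> 0.9
```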

So as we can see, it is vital to take care to compare like with like when comparing technical data. If you are unsure whether the manufacturer documentation is talking about total gross cooling capacity or the effective, usable sensible net cooling capacity, it makes sense to ask before comparing data from different manufacturers.

]]>
news-1051 Tue, 20 Sep 2016 11:25:42 +0200 Power usage effectiveness PUE and pPUE https://www.stulz.co.uk/en/newsroom/blog/power-usage-effectiveness-pue-and-ppue-1051/ PUE, the abbreviation for Power Usage Effectiveness, was developed back in 2007 by The Green Grid Association (www.thegreengrid.org). Since then it has been adopted by the industry around the world. A detailed explanation of PUE and pPUE has been published by ASHRAE in the book "PUE™: A Comprehensive Examination of the Metric".

 

Today, there seems to be a lot of misuse or misunderstanding of PUE and pPUE in the data center industry. This is my understanding of PUE and pPUE:


PUE

Power Usage Effectiveness is the relation between the total facility energy used to run a data center and the energy used to run the IT equipment as shown in figure 1. The total facility energy can be produced from different forms of energy, not necessarily 100% electricity.

The PUE should be an averaged value over the course of one year in order to consider the influence of the ambient temperature in an appropriate manner. The PUE is intended to be used to document the efficiency of an individual data center over time. The PUE should not be used to compare different data centers.


What PUE considers

  • The efficiency of the power, cooling, and other infrastructure components and the form of the source energy are the major influencing factors for the PUE.
  • The total facility energy used to run a data center can consist of electricity, natural gas, fuel (regular generator tests), or water for adiabatic cooling or district chilled water instead of using chillers. The different forms of source energy will then be weighted with defined weighting factors in the PUE formula.
  • The location of the data center has an influence on the PUE. The colder the climate, the more economizer or free cooling can be used, the lower the energy consumption of the cooling system, and the lower (better) the PUE.
  • The air temperatures used in the data center have an impact on the PUE as well. The higher the air temperatures, the more efficient the operation of the cooling system, the lower the energy consumption of the cooling system, and finally the lower (better) the PUE.
  • The servers and other IT components themselves may operate highly efficiently or with a very poor level of efficiency, and this influences the PUE in a seemingly "negative" way: the more efficiently the servers work, the lower the energy consumption of the IT, and – with the infrastructure energy unchanged – the higher (worse) the PUE. It may seem counterintuitive, but this is indeed the case.


What PUE does not consider

  • The electricity used to run a data center might be produced by a nuclear plant, by burning coal, or by using renewable sources such as solar, wind, and water. The PUE does not consider if the electricity has been produced by renewable sources.
  • Some data centers may reuse heat created during the cooling process. The PUE does not consider any energy reuse in the "energy balance". There are other metrics that consider this issue.

 

pPUE

The metric pPUE, partial power usage effectiveness, defines a certain portion of the overall PUE of a data center within a clearly defined boundary. In my example I have calculated the PUE and the pPUE for a transformer, UPS/PDU, chiller, CRAH, pump, and others to show how it is calculated and to show the differences in relation to the PUE.
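Both definitions can be sketched as follows (the energy figures are illustrative annual values, not from the example mentioned above):

```python
def pue(total_facility_energy, it_energy):
    """PUE = annual total facility energy / annual IT energy."""
    return total_facility_energy / it_energy

def ppue(subsystem_energy, it_energy):
    """Partial PUE within a boundary containing the IT load and one
    infrastructure subsystem (e.g. the cooling system)."""
    return (it_energy + subsystem_energy) / it_energy

# 1,000 MWh IT, 300 MWh cooling, 150 MWh power distribution, 50 MWh other
it, cooling, power, other = 1000.0, 300.0, 150.0, 50.0
print(pue(it + cooling + power + other, it))  # -> 1.5
print(ppue(cooling, it))                      # -> 1.3 (cooling pPUE)
```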

]]>
news-838 Mon, 05 Sep 2016 10:00:07 +0200 Increased efficiency thanks to raised floor grilles with adjustable opening angle https://www.stulz.co.uk/en/newsroom/blog/increased-efficiency-thanks-to-raised-floor-grilles-with-adjustable-opening-angle-838/ Closed-circuit cooling via the raised floors allows the data center air conditioning to constantly satisfy the ever present demands for low running costs, high flexibility and redundancy with a well proven system.

However, in most cases the air is not regulated as it exits the raised floor. Here, sensor-controlled raised floor grilles with a variable opening angle are an essential element for restricting the energy consumption of the air conditioning system.

In the majority of cases, the raised floor manufacturer also provides the grilles for the outflow of air. As a rule, the customer can then choose between different degrees of perforation. Some manufacturers also offer restrictor panels, which are fitted to the underside of the floor grille and enable the airflow to be regulated manually.

The actual airflow rate required depends on the current load of the server. However, in this age of server virtualization and cloud technology, load fluctuations can occur as entire racks are switched on and off. Due to this fluctuating server load, flexible solutions are also in demand for closed-circuit air conditioning. The challenge lies in supplying the servers with sufficient cooled air that is targeted in line with demand.

Here, 'sufficient' means that precisely the amount of air needed at that moment by the servers exits the raised floor and 'targeted' means that as far as possible, the air exits the raised floor directly in front of the server rack's air intake. This last point is especially important when the cold and hot aisles are not separated by walls or partitions. The demand-based supply of cold air immediately in front of the server intake keeps the mixing of cold and hot air to a minimum. We can therefore refer to this concept as virtual containment.

With the AirModulator, STULZ offers a solution for a large variety of applications, with dimensions of 600 mm x 600 mm that make it compatible with commercially available raised floors. The opening angle of the dampers can be regulated to suit demand by the building management system (BMS), or by the AirModulator's own controller, based on the temperature or pressure difference. In the event of a power failure, the dampers are opened automatically by a return spring. The unit as a whole is finished with a flow-optimized grille, and is therefore also protected from mechanical stress, e.g. from lift trucks.

]]>
news-833 Thu, 25 Aug 2016 00:00:00 +0200 STULZ purchases the Tecnivel Group https://www.stulz.co.uk/en/newsroom/news/stulz-purchases-the-tecnivel-group-833/ STULZ GmbH, a German multinational company specialised in the manufacture and sale of precision air conditioning equipment, today announced the purchase of 100 % of the shares of the Spanish Tecnivel Group, a leading manufacturer of industrial air conditioning and cooling solutions in Spain.

In the opinion of Axel Schneider, Managing Director of Stulz España, S.A., "the purchase of Tecnivel represents an important global expansion of the STULZ product portfolio, and also complements its portfolio of air conditioning solutions for data centers, reinforcing its leadership position within the IT sector.

At the same time, the integration into STULZ represents an excellent opportunity for Tecnivel to consolidate its leadership in Spain and to develop its business in Europe and other growing markets such as the Middle East, Africa and South America, thanks to STULZ's global geographic coverage."

 

Long history in Spain

Since its foundation in 1971, Tecnivel's main activity has been the manufacture and sale of air handling units (AHUs). Over its more than 40 years of history, new solutions have been incorporated into its product range: coils, air curtains, fan coils, motorized fan units and ventilating units (exhaust fans).

Product quality, in both design and manufacture, is accredited by Certificate No. 0.04.10236/01, issued by the German organization TÜV Anlagentechnik GmbH in accordance with DIN EN ISO 9001:2008.

In recent years, Tecnivel has developed a specific AHU line for data centers, adapted to end users' requirements with great flexibility and tailor-made solutions. The aim is to achieve maximum energy efficiency while guaranteeing the level of operational safety required by this kind of application. This has made Tecnivel a reference point in data center air conditioning in Spain.

The integration of Tecnivel into STULZ will reinforce the latter's leading position in air conditioning solutions for data centers worldwide.

]]>
news-822 Mon, 15 Aug 2016 13:18:35 +0200 CRACs with underfloor fan section https://www.stulz.co.uk/en/newsroom/blog/cracs-with-underfloor-fan-section-822/ In recent years, the market for closed-circuit air conditioning units has increasingly seen models in which the fan unit is housed beneath the A/C unit in a raised floor. What is behind this trend? Installation is undoubtedly more complex, and the units are generally taller – and probably more costly as well. So there must be benefits that prompt customers to opt for such equipment. What are they?

There are two main benefits here: Such systems offer a marked improvement in efficiency, in addition to a higher cooling capacity in relation to their footprint. So customers not only get a unit that is more efficient, but also one that requires less space to attain a specific cooling capacity.

How are these two benefits achieved? A conventional precision air conditioning unit is generally approx. 2 m tall and stands on a raised floor. The raised floor under the unit does not normally contain anything more than cold air – in other words, wasted space. The designers have taken advantage of this fact, removing the fans from the conventional precision air conditioning unit, installing them in a separate fan unit and positioning this in the raised floor beneath. The space gained this way in the A/C unit has been used to install a more powerful heat exchanger and larger filters. Above the raised floor the A/C units don't look any different and are still around 2 m in height; the fan unit contained in the underfloor section then brings the overall height to 2.5 m or so.

Positioning the fan unit under the A/C unit in an underfloor section means that the flow of air from the fans to the raised floor is now directly horizontal. In conventional precision air conditioning units with integrated fans – i.e. where the fans are above the raised floor – the air exiting the fan has to change direction twice before entering the raised floor horizontally (see figure). The associated turbulence and impact losses affecting efficiency are eliminated by positioning the fans under the raised floor. Fan power consumption falls, and efficiency improves.

]]>
news-791 Wed, 13 Jul 2016 12:39:25 +0200 Server cooling: Return air and room air conditioners compared https://www.stulz.co.uk/en/newsroom/blog/server-cooling-return-air-and-room-air-conditioners-compared-791/ Cooling server rooms with room air conditioners? That sounds enticing to many a data center operator. Room air conditioners were not originally designed for use in equipment rooms (precision air-conditioning systems capable of regulating room temperature much more accurately were developed for that job), but they have become much cheaper recently, and in tests they now deliver similar results to precision systems. So what is wrong with the idea of saving on an expensive precision air-conditioning system and instead investing in a room air conditioning system?

The answer is in fact: everything, because the concept underlying room air conditioners is completely different from that of precision systems. Whereas roughly 90 % of a precision air-conditioning system's capacity is sensible cooling – meaning it actually lowers the room temperature – about 40 % of a room air conditioner's capacity is so-called latent cooling. In this mode, the air is dehumidified rather than the room temperature being lowered – so in purely physical terms the air temperature is not actually reduced. That seems absurd at first, but it is nevertheless quite logical, because the purpose of room air conditioners is to create a room climate that people find comfortable. Human temperature perception is linked to the humidity of the air: if the humidity decreases, people perceive what is actually the same temperature as being cooler. Room air conditioning systems exploit this difference between the actual and perceived room temperature: they first remove humidity from the air, and only switch to physically measurable sensible cooling when the dehumidification effect alone is no longer sufficient.

 

Air-conditioning of equipment rooms

This air-conditioning method has proved successful in rooms used by people. If it is applied to equipment rooms, however, the air conditioning provided will not be appropriate. That is illustrated even by the operating point: whereas the target for equipment rooms is a temperature of 24 °C and 50 % relative humidity, room air conditioners are designed for 27 °C and 48 % relative humidity. When cooling equipment rooms, room air conditioners therefore deviate from their optimum operating point, reducing their energy efficiency and increasing electricity costs.

Another problem is latent cooling: in server rooms, especially, the air is so dry that air humidifiers are often used to prevent electrostatic discharge. If room air conditioners, which first remove humidity from the air, are installed in such a room, efforts to humidify it are counteracted. A further factor is that if the air humidity is too low, the room air conditioner's heat exchanger will dry out. This reduces the heat transfer surface area, and as a result the heat exchanger loses more than 25 % of its effect. This loss of efficiency may reduce the heat transport through the refrigerant to such an extent that the system temporarily shuts down because the evaporation temperatures are too low, and only resumes working as the air humidity rises. The result of this on-off operation is enormous temperature fluctuations in the server room, which can lead to overheating and consequently damage the IT equipment.

But it is not only the excessive dryness of the air that makes room air conditioners unsuitable for server rooms. They are also the wrong solution because of the air circulation required in such rooms. Being intended to optimize human wellbeing, they are designed not to create any unpleasant air flows (drafts), and so move as little air as possible. A room air conditioner circulates between 200 and 2,000 m3 of air an hour at a supply air velocity between 0.2 and 0.5 m/s. That air movement is not enough, however, to reliably dissipate the concentrated heat loads of high-performance servers and so effectively prevent the creation of hotspots. Those hotspots, which can irreparably damage the IT equipment, can only be prevented with volume flow rates between 3,000 and 30,000 m3 per hour and a supply air velocity between 2 and 3 m/s.
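The airflow figures quoted above can be checked with the standard sensible-heat relation V = P / (ρ · cp · ΔT); the constants below are typical values for air, and the 10 kW rack is our own illustrative example:

```python
def required_airflow_m3h(heat_load_kw, delta_t_k, rho=1.2, cp=1.005):
    """Airflow (m³/h) needed to remove a sensible heat load (kW) at a
    given air-side temperature rise; rho in kg/m³, cp in kJ/(kg·K)."""
    return heat_load_kw * 3600.0 / (rho * cp * delta_t_k)

# A single 10 kW rack at a 10 K air-side ΔT already needs ~3,000 m³/h,
# well beyond what a room air conditioner moves.
print(round(required_airflow_m3h(10.0, 10.0)))  # -> 2985
```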

]]>
news-809 Wed, 13 Jul 2016 12:39:25 +0200 STULZ presents water-cooled chillers for performance-critical applications https://www.stulz.co.uk/en/newsroom/news/stulz-presents-water-cooled-chillers-for-performance-critical-applications-809/ The STULZ Explorer WSW for indoor installation offers cooling capacities ranging from 230 to 1,530 kW combined with a small footprint. Its application area extends from industrial and process cooling through IT and telecommunications to chilling for the hospitality sector and commercial buildings. 

Hamburg, 2016/7/26 – With the Explorer WSW STULZ offers a water-cooled chiller for a wide range of performance-critical applications. This flexible chiller can be easily adjusted to different heat loads thanks to its two refrigerant circuits with semi-hermetic screw compressors and infinitely variable output sliders. Its application areas range from industrial cooling through data centers and telecommunications to chilling for the hospitality sector and commercial buildings. Depending on the required cooling capacity, the chillers in the STULZ Explorer series can be equipped with either one (230 to 430 kW) or two compressors (460 to 1,530 kW). The chillers feature shell & tube condensers and can be set to operate at different temperature levels, for example with well water, cooling towers or external recooling heat exchangers. The evaporation process in the refrigerant circuit is controlled by electronic expansion valves, which use pressure sensors, temperature sensors and the STULZ C2020 controller to optimize heat exchange between the refrigerant and chilled water in the evaporator. 

When developing the Explorer product range, the priorities included a compact, corrosion-resistant design as well as low noise emissions. A low-noise version is also available for especially noise-critical applications. Extra acoustic insulation here allows the sound power level as per ISO 3744 to be reduced further by up to 10 dB. Thanks to its versatility in terms of applications, the STULZ Explorer WSW also impresses with high efficiency in partial load mode. Depending on the service conditions, it can offer ESEER values of 5 and higher. In line with its wide range of applications, the STULZ Explorer series also offers a variety of options such as automatic transfer switch, an energy meter for measuring total power consumption, soft start and anti-vibration mounts.

For more details please visit our product page.

]]>
news-782 Sun, 03 Jul 2016 12:57:00 +0200 Chillventa - Nuremberg 11-13 Oct 2016 https://www.stulz.co.uk/en/newsroom/event/chillventa-nuremberg-11-13-oct-2016-782/ Stulz at Chillventa - the exhibition for energy efficiency, heat pumps and refrigeration

Whether compressors, heat exchangers, fans or complete systems: industry, wholesalers, professional associations and researchers meet at Chillventa to discuss the most important questions on refrigeration, air conditioning, ventilation and heat pumps.

Be there and meet the experts at this leading exhibition. 

You will find STULZ in Hall 4, Booth 235.

]]>
news-783 Fri, 01 Jul 2016 13:19:10 +0200 STULZ and TSI announce joint venture to deliver modular data centre solutions https://www.stulz.co.uk/en/newsroom/news/stulz-and-tsi-announce-joint-venture-to-deliver-modular-data-centre-solutions-783/ Hamburg, 21 June 2016 - Leading global mission critical cooling solutions provider STULZ has... The joint venture will enable the two companies to cooperate closely in delivering unique modular data center solutions globally, using the very latest cooling technologies. "We have identified modular data centers as a growing market segment," commented Oliver Stulz, Managing Director, STULZ GmbH. "This joint venture with TSI allows us to offer customers a complete solution for modular data centers, from high performance computing to telecom enclosures, using the latest bespoke-designed STULZ cooling technology."

"This joint venture increases our ability to deliver and support our modular DC solutions around the globe with future-proof designs, solutions and services. Partnering with such a well-known brand of quality products as STULZ enables us to compete at all levels and support our global clients with a worldwide service organization of approximately 6,000 service staff," commented Simon Gardner, Managing Director of TSI.

"TSI is attractive to the STULZ Group because it aligns well with our company, with which we already have a long trading relationship and synergy. We believe the joint venture brings best-of-breed technology to the data center market," said STULZ Global Sales Director Christoph Stulz. "We have seen a huge increase in demand for modular data centers globally, and by working together we will look to supply flexible, highly efficient solutions."

For more information visit www.stulz.co.uk and www.tsiuk.com.

About TSI

TSI’s core business is the design, build and maintenance of modular data centers supplied globally. The company has over 25 years of experience in this market and continues to provide its clients with a professional service. TSI not only understands the autonomous systems implemented within a data center, but also appreciates that these systems affect the data center environment as a single entity, so every aspect of a fault or repair should be investigated as a risk to the mission critical facility. TSI is located in Oxford, UK, and employs staff dedicated to these functions. The company holds ISO9001:2008 certification for Quality Management, which ensures that its processes and procedures are monitored and policed, and is also ISO14001 accredited. TSI is a TIA-942 accredited designer and auditor.

]]>
news-776 Fri, 24 Jun 2016 10:59:10 +0200 20 Most Promising DataCentre Solutions Providers 2016 https://www.stulz.co.uk/en/newsroom/news/20-most-promising-datacentre-solutions-providers-2016-776/ Frederick, MD, June 13, 2016 – Leading global mission critical cooling solutions provider STULZ, through its US entity STULZ Air Technology Systems, Inc. (STULZ USA), today announced that it was named to the 20 Most Promising DataCenter Solution Providers 2016 list, compiled by CIOReview-DataCenter. CIOReview recognizes organizations around the world that exemplify the highest level of operational and strategic excellence in information technology (IT).

"STULZ is proud to be recognized by CIOReview and our peers in the CIO community," said STULZ USA Vice President Brian Hatmaker. "Today’s constantly shifting data center landscape creates a real challenge for CIOs. While STULZ is focused on mechanical cooling for data center operation, CIOs have many other layers and technologies to understand and consider. At STULZ, we believe our job is to give IT professionals reliable options that allow them the flexibility to grow and change as their operation evolves. Being among the other solution providers on this list is a real honor."

CIOReview selected STULZ as one of the 20 Most Promising DataCenter Solution Providers of 2016 based on the company's specialties in Data Center Cooling, Precision Air Conditioning, Precision Air Handling, Data Center Row Cooling, Ultrasonic Humidification and Desiccant Dehumidification.

"Attaining remarkable growth was indeed a tremendous achievement for STULZ, and it is therefore a pleasure to honor the company by naming it in the list of 20 Most Promising DataCenter Solution Providers 2016," said Jeevan George, Managing Editor of CIOReview. "I congratulate STULZ and look forward to its continued success."

More information can be found at the CIOReview website.

 

About CIOReview

Published from Fremont, California, CIOReview is a print magazine that explores the many ways firms keep their businesses running smoothly. A distinguished panel comprising CEOs, CIOs and IT VPs, including the CIOReview editorial board, finalized the "20 Most Promising DataCenter Solution Providers 2016" in the U.S. and shortlisted the best vendors and consultants. For more info: http://www.cioreview.com

]]>
news-765 Wed, 15 Jun 2016 00:00:00 +0200 ENGLAND 1 - GERMANY 0 https://www.stulz.co.uk/en/newsroom/blog/england-1-germany-0-765/ Congratulations Phil Taylor "It's nice for the English to beat the Germans at something!" - Phil Taylor ]]> news-755 Tue, 14 Jun 2016 00:00:00 +0200 Believing is one thing – knowing is another: On the debate around glycol-free CW systems with integrated Free Cooling https://www.stulz.co.uk/en/newsroom/blog/believing-is-one-thing-knowing-is-another-on-the-debate-around-glycol-free-cw-systems-with-integr-1/ When it comes to CW systems for data center cooling, many air conditioning specialists believe that it makes operational and economic sense to do away with the water-glycol coolant in the data center interior. But more detailed analysis suggests that this holds true only in exceptional cases.

Whether pure water should be used for data center cooling chillers, dispensing with glycol altogether, is an issue of ongoing debate. The basis for this discussion is the fact that using glycol comes with a range of disadvantages:

 

1. Heat transfer is less effective with a water-glycol mixture than with pure water.

2. Glycol is much more costly than water.

3. Larger pumps are required to circulate a water-glycol mixture than pure water, increasing not only the scale of the project but electricity consumption as well.

 

Proponents of glycol-free data center interiors argue that using pure water would reduce investment and operating costs and improve cooling capacity. In their view, glycol should only be used where anti-frost requirements render it indispensable: in the pipeline systems leading to the chiller outside the data center.

But closer examination of the facts quickly shows loopholes in this argument, because it omits to say that system separation is a prerequisite for dispensing with glycol. Instead of a single chilled water circuit between the interior air conditioning units and the chiller on the roof of the building, a glycol-free data center interior means splitting the system into two circuits: an interior circuit with pure water, and an outside circuit for Free Cooling that is still filled with a water-glycol mixture. With this system separation, the heat load from the water circuit is transferred via a brazed plate heat exchanger to the water-glycol circuit, which then conveys the heat from the interior of the building to the outside chiller equipped with a Free Cooling system. Obviously, this uses less glycol than a CW system with only one circuit.

But it requires additional components: alongside the heat exchanger, it necessitates an extra pump, frost-protection heaters for the pure water pipework, and further items such as special piping and wiring work. So dispensing with glycol doesn't just save money: it also creates extra work. Ultimately, there is so much additional effort involved that it negates the savings made by removing glycol from the equation. So in the final analysis, the supposed reduction in investment costs is untenable.

But what about the theory that glycol-free systems are cheaper to operate? A system comparison using the example of a continuously operational data center in Hamburg provides some basic information. Operating costs were calculated for an air-cooled chiller with a cooling capacity of 700 kW, integrated Free Cooling, and input and output temperatures of 18 °C and 12 °C respectively. Electricity costs were assumed at 15 euro cents per kilowatt hour. Under these conditions, the annual operating costs for the single-circuit water-glycol system were 33,000 euros lower. This takes account of the fact that a water-glycol mixture requires more pump power, and that the losses in capacity caused by heat transfer must be offset by increased electricity consumption by the fans in the precision air conditioning unit.
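As a rough plausibility check of the 33,000 euro figure: with electricity at 15 cents per kilowatt hour, a constant additional power draw translates into annual costs as follows. The 25 kW figure below is an assumption chosen purely for illustration; the article does not state the underlying power difference.

```python
HOURS_PER_YEAR = 24 * 365       # continuously operational data center
PRICE_EUR_PER_KWH = 0.15        # electricity price from the example

def annual_cost_eur(avg_extra_kw):
    """Annual electricity cost of a constant additional power draw."""
    return avg_extra_kw * HOURS_PER_YEAR * PRICE_EUR_PER_KWH

# Assumed ~25 kW of extra average pump and fan power for the separated
# system (heat exchanger pressure drop, transfer losses) would account for:
print(f"{annual_cost_eur(25):,.0f} EUR/year")  # 32,850 EUR/year
```

In other words, a cost gap of this size corresponds to a continuous extra draw in the region of 25 kW, which is plausible for additional pumps and longer compressor running times on a 700 kW system.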


So why is it that the glycol-free system ultimately costs far more to operate?

The key factor is the compact brazed plate heat exchanger integrated between the inside and outside circuits. Firstly, the heat transfer losses that occur in this part of the system increase compressor running time. Secondly, the pressure drop in the flow through the brazed plate heat exchanger increases the pump power requirement significantly in both the interior and the Free Cooling circuits. This additional energy consumption means that at least part of the efficiency benefit of Free Cooling is canceled out. In contrast, the single-circuit water-glycol system can exploit the benefits of Free Cooling without compromise, and need not compensate for any heat transfer losses at a heat exchanger.

This means that on closer analysis, the argument that glycol-free systems are cheaper to run is equally untenable – at least under conventional site conditions such as those in our example. It can, however, make financial sense to use glycol-free systems at sites where Free Cooling is unfeasible and permanent compressor operation is therefore required. Even then, there is still the disadvantage that each additional component increases the statistical probability of a system failure. So whatever the case, system separation entails additional risk factors in the data center.

]]>
news-748 Fri, 03 Jun 2016 12:00:00 +0200 STULZ Announces Strategic Investment and Commercial Partnership with CoolIT Systems https://www.stulz.co.uk/en/newsroom/news/stulz-announces-strategic-investment-and-commercial-partnership-with-coolit-systems-748/ High Density Chip-to-Atmosphere Data Center Solutions to be the Focus

Hamburg, May 30, 2016 – Leading global mission critical cooling solutions provider STULZ announced a strategic investment and commercial partnership with CoolIT Systems, Inc. (CoolIT), a global manufacturer of energy efficient Direct Contact Liquid Cooling (DCLC™) technologies for High Performance Computing, Cloud and Enterprise markets.

The commercial partnership will enable the two companies to cooperate closely in delivering unique Chip-to-Atmosphere solutions across the globe. Both parties remain independently owned and operated.

"STULZ has identified Chip-to-Atmosphere cooling solutions as a growing market segment for data center environments," said Joerg Desler, President of STULZ USA. "This alliance with CoolIT allows us to offer customers more complete solutions that involve capturing the heat at source inside the servers and moving it out to the atmosphere with STULZ technology."

CoolIT's DCLC technology uses the exceptional thermal conductivity of liquid to provide concentrated cooling to the hottest components inside a server, enabling very high density configurations. CoolIT's centralized pumping liquid cooling solutions can be tailored to any server layout and have already been adopted by many server manufacturers as a reliable technology that is covered under standard warranties.

"Partnering with such an influential solution provider as STULZ provides a tremendous worldwide capacity to our company," said Geoff Lyon, CEO & CTO at CoolIT Systems. "This strategic relationship increases our ability to better serve the data center industry with forward thinking designs, solutions and services."

"CoolIT Systems is attractive to the global STULZ Group because they align well with STULZ' history of bringing innovation and highly efficient technology to the data center market," said STULZ USA Vice President Brian Hatmaker. "We see a growing need in the market for more flexible solutions to support high performance and high density compute applications. Chip-to-Atmosphere solutions will generate immediate CAPEX and OPEX savings with minimal footprint requirements."

The Chip-to-Atmosphere concept will be discussed in detail at ISC High Performance 2016 in Frankfurt during CoolIT Systems' Exhibitor Forum presentation at 4:40 pm on Tuesday, 21 June at Booth 500, Hall 3, Messe Frankfurt. Technical experts will also be available throughout ISC16 to answer questions at the CoolIT Systems booth, #1210.

Those interested in incorporating CoolIT Systems and STULZ solutions in their projects should start by contacting their local STULZ or CoolIT Systems sales representative.

]]>
news-775 Mon, 23 May 2016 09:33:17 +0200 Low-noise chillers for data centers: Peace and quiet for your cooling needs https://www.stulz.co.uk/en/newsroom/professional-article/low-noise-chillers-for-data-centers-peace-and-quiet-for-your-cooling-needs-775/ Low-noise chillers for data centers: Peace and quiet for your cooling needs For data centers close to residential areas, compliance with noise regulations is extremely important. Air conditioning systems can be problematic in this respect. Chillers with a cooling capacity of approx. 500 kW or more, in particular, often generate a lot of noise during operation. Here, low-noise solutions from STULZ ensure the necessary noise reduction.


As long as data centers were predominantly located on so-called greenfield sites outside urban centers, noise emissions were not a major issue. However, these days even large data centers are being built ever closer to populated districts, so data center operators can no longer avoid tackling the subject of noise optimization. This is especially the case in Germany, where strict noise regulations must be met, particularly in the evening and at night. Industrial and service companies – not to mention operators of concert halls and sports facilities – could tell us a thing or two about conflicts with local residents who feel disturbed by noise. For a data center running in continuous operation 24/7 all year round, there is genuine cause for concern.


Avoiding critical thresholds from the start

To prevent potential conflict long before it arises, data center operators must consistently use solutions whose noise emissions do not come anywhere near critical thresholds. Of course, this also applies to cooling systems, whose compressors, pumps and fans can produce considerable noise. Here, we recommend only installing systems and components that are quiet enough for unproblematic operation even at night. However, what sounds good in theory is not always simple to achieve in practice. A classic dilemma is posed by chillers that exceed approx. 500 kW of cooling capacity and are situated on or next to a data center. The lowest noise emissions would be achieved with soundproof encapsulated compressors and maximum-size fans, which produce the necessary airflow at comparatively low speeds and so keep the noise of the system as a whole to a minimum. However, it is not easy to reconcile soundproof encapsulated compressors and large fans with the standardized dimensions of chillers. The restrictions on length and width mean that smaller fans are generally installed. These deliver the necessary capacity, but only at the price of higher speeds, which increases both noise emissions and energy costs.

The rise in heat loads that goes hand in hand with the increased heat density in today's data centers has made the problem even worse: to remove the resulting heat quietly and efficiently, even more chillers would in fact need to be installed on many data center sites. Sometimes, however, there is insufficient space, so the existing CW systems come under even more load. The result is higher electricity costs and higher noise emissions.

To provide data center operators with a way out of this dilemma and enable chillers to be used in residential areas too, quite a number of manufacturers are equipping their fans with noise-reducing diffusors. Hamburg-based precision air conditioning specialist STULZ, on the other hand, takes an alternative route and designs its Cyber Cool 2 chillers for reduced noise levels right from the start. During the development phase, thorough research went into the noise emissions of compressors, fans and pumps during operation. Based on the results of these tests, systematic measures were taken to minimize the amount of noise generated.


Soundproof encapsulated compressors, maximum size fans

The first approach to noise reduction involved the compressors. In some chillers, they are so exposed that their operating noise is diffused into the environment largely unfiltered. Compressor housings, too, are frequently not noise-optimized and, depending on their design, can even amplify the noise level. At STULZ, on the other hand, the compressors are housed in a special soundproof encapsulated chamber. Like the walls of a recording studio, its interior walls are completely lined with sound-insulating material, so that as little noise as possible reaches the outside. This first step in itself greatly reduces noise emissions.

While the development of a soundproof encapsulated compressor chamber benefited from experience in other industries, optimizing fan noise levels was considerably more complex. Here, a compact construction was needed that allowed noise and efficiency values to be optimized while adhering to the chillers' standard footprint. STULZ solved this difficult task by installing maximum-size fans. These make the best possible use of the available space, and are lined up so closely next to one another that you can barely slip the proverbial sheet of paper between them. Finally, with diameters of 910 millimeters, they are large enough to move the necessary volumes of air at moderate speeds, thereby working quietly and energy efficiently. For the strengths of this new fan system to be exploited to the full, the entire air conduction system, from the intake through the heat exchangers to the fans, had to be redesigned.

 

 

]]>
news-578 Thu, 19 May 2016 10:49:55 +0200 Time to announce The Winner of the STULZ Data Centre World 2016 Competition https://www.stulz.co.uk/en/newsroom/news/time-to-announce-the-winner-of-the-stulz-data-centre-world-2016-competition-578/ Congratulations Bechi Onuora Today we are announcing the winner of our prize draw from the Data Centre World show.

Congratulations to Bechi Onuora of JCA Group Limited, the winner of the customized STULZ bicycle.

We would like to say a big thank you to everyone who participated!

]]>
news-557 Wed, 30 Mar 2016 10:13:43 +0200 Critical Communications World 31 May - 2 June 2016 https://www.stulz.co.uk/en/newsroom/news/critical-communications-world-31-may-2-june-2016-557/ Critical Communications World is the largest exhibition and gathering for critical communications professionals.

Visit us on booth: B23

 

31 May - 2 June 2016

RAI Amsterdam

More information: https://criticalcommunicationsworld.com/

 

]]>
news-553 Fri, 18 Mar 2016 13:36:22 +0100 Dehumidification in data centers when using CW units at high-temperature levels https://www.stulz.co.uk/en/newsroom/blog/dehumidification-in-data-centers-when-using-cw-units-at-high-temperature-levels-553/ Most users and planners are now aware that temperature levels when cooling IT equipment in data...

Most users and planners are now aware that temperature levels when cooling IT equipment in data centers have changed dramatically in recent years.

The main reason for the adjustment of air temperatures is ASHRAE recommendation TC 9.9 (2011), which recommends air inlet temperatures to IT equipment in a range from 18 °C up to a maximum of 27 °C. Adding an average temperature difference of 10-15 K as the air flows through the IT equipment, this produces return air temperatures back to the A/C unit in the range from 28 °C to 42 °C (see blog article "Delta T"). The most important "side-effect" of this recommendation, however, is the utilization of so-called "free cooling" – that is, cooling the IT equipment as far as possible without the energy-intensive use of compression cooling (see blog article "Free cooling").
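The return air range follows from simple arithmetic on the inlet range and the temperature rise across the equipment; a minimal sketch:

```python
# Return air temperature back to the A/C unit, per ASHRAE TC 9.9 (2011):
# inlet range 18-27 deg C, plus a typical 10-15 K rise across the IT equipment.

def return_air_range(inlet_min=18.0, inlet_max=27.0, dt_min=10.0, dt_max=15.0):
    """Return the (min, max) return air temperatures in deg C."""
    return inlet_min + dt_min, inlet_max + dt_max

low, high = return_air_range()
print(f"Return air: {low:.0f} degC to {high:.0f} degC")  # 28 degC to 42 degC
```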

Thanks to their good scalability and comparatively simple hydraulics, chilled water precision A/C units (so-called CW units) are mostly used in large data centers; they require centralized chilled water production (see also blog article "Standby management"). To improve chiller efficiency, and to utilize free cooling at comparatively high outside temperatures, chilled water systems are increasingly run at higher water temperatures. A positive side-effect of high water temperatures in conjunction with high air temperatures is that the purely sensible cooling targeted in data centers (in order to avoid cost-intensive humidification) is assured.

In summary: higher air temperatures + higher water temperatures = avoidance of dehumidification in normal cooling operation and improved utilization of free cooling.

To return briefly to the ASHRAE recommendations: the allowed range of relative humidity for IT equipment is a generous 20 % to 80 %.

Taken together, all these factors mean that, in theory, there is no need for dehumidification or humidification in normal data center cooling. Sadly, this is another area in which theory and practice differ. There are requirements regarding ESD (electrostatic discharge) protection for IT equipment; staff are present in the room; the room is not 100 % air-tight, so humidity is introduced from outside; doors are opened and closed; and so on. Any resulting need for humidification is comparatively easy to meet (with a humidifier in the A/C unit or in the room). Dehumidification, however, is difficult.

In the past (when equipment was operated at lower air and water temperatures), dehumidification with CW units was performed as follows:

The chilled water control valve is fully opened in dehumidification mode, increasing the water volume flow through the cooling coil. This increases the total cooling capacity of the unit, and the unit's water outlet temperature falls. The temperature difference between the air and water side increases, and the resultant drop below the dew point causes the required dehumidification. In some cases the speed of the EC fans (if installed) is also reduced in order to boost the effect.
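Sketched as control logic, this legacy sequence looks roughly like the following. It is a simplified illustration with hypothetical actuator objects, not a real unit controller API; real controllers implement far more interlocks.

```python
class Actuator:
    """Hypothetical 0-100 % actuator (valve or EC fan); illustration only."""
    def __init__(self, name):
        self.name = name
        self.percent = 0

    def set_percent(self, value):
        self.percent = value

def legacy_cw_dehumidify(cw_valve, ec_fan):
    """Legacy CW dehumidification sequence, as described above (simplified)."""
    cw_valve.set_percent(100)  # fully open: more water flow, colder coil,
                               # lower unit water outlet temperature
    ec_fan.set_percent(70)     # optionally reduce airflow to deepen the
                               # drop below the dew point (value assumed)

valve, fan = Actuator("CW valve"), Actuator("EC fan")
legacy_cw_dehumidify(valve, fan)
print(valve.percent, fan.percent)  # 100 70
```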

When operating CW units at high air and water temperatures, the problem then arises that the drop below dew point necessary for dehumidification can no longer be achieved, because the general temperature level is simply too high.
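This can be illustrated with the standard Magnus approximation for the dew point. The room condition (24 °C, 50 % relative humidity) and the 18 °C water supply temperature below are illustrative assumptions, not figures from this article:

```python
import math

def dew_point(t_c, rh_pct):
    """Dew point in deg C via the Magnus approximation (roughly 0-60 degC)."""
    gamma = math.log(rh_pct / 100.0) + 17.62 * t_c / (243.12 + t_c)
    return 243.12 * gamma / (17.62 - gamma)

td = dew_point(24.0, 50.0)   # approx. 12.9 degC
water_supply = 18.0          # assumed high-temperature CW supply, degC

# The coil surface cannot fall below the water temperature, so with an
# 18 degC supply it never drops under the ~12.9 degC dew point:
print(f"dew point {td:.1f} degC, condensation possible: {water_supply < td}")
```

With the old, lower water temperatures (e.g. a 6 °C supply), the coil surface would sit well below this dew point and condense moisture; at today's elevated temperature levels it cannot.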

So what can be done to provide dehumidification?

The most practical technical solution is to use one or more so-called "dual-fluid" units in a GCW design. A dual-fluid unit is a combination of a direct expansion (DX) unit and a CW unit (see GCW refrigeration system). In the GCW design, the unit's refrigerant circuit is closed, and the heat is dissipated via a water-cooled plate condenser, which in this case is simply connected to the existing chilled water system. In normal cooling operation, the unit's CW circuit is used; for dehumidification, however, the unit switches to DX mode. Dehumidification and the drop below the dew point are much easier to achieve in DX mode, because the evaporation temperature is normally lower than the water temperature, or can more easily be brought to the required level by the controls in the refrigerant circuit (expansion valve). The number and cooling capacity of these units then depend on the expected dehumidification load and the size of the data center.

]]>
news-552 Wed, 24 Feb 2016 17:43:47 +0100 Data Center 360° - Start: 2016/02/23 https://www.stulz.co.uk/en/newsroom/event/data-center-360-start-20160223-552/ This event specialized in Data Centers and organized by Ingenium will present innovative conferences... Target Audience: IT Directors, IT Managers, Supervisors, Infrastructure and Operations Coordinators, Specialized Engineers, Project Managers and general decision makers of public and private companies.

www.datacenter360.la

 

 

City, Country | Expected Attendees | Topic | Date | Time
Bogota, Colombia | 40 | Future Proof Data Centers | Tuesday, February 23, 2016 | 8:00 a.m. to 1:00 p.m.
Medellin, Colombia | 40 | Future Proof Data Centers | Thursday, February 25, 2016 | 8:00 a.m. to 1:00 p.m.
Lima, Peru | 40 | Intelligent Management | Wednesday, March 16, 2016 | 8:00 a.m. to 1:00 p.m.
Panama City, Panama | 120 | Future Proof Data Centers | Tuesday, April 19, 2016 | 8:00 a.m. to 6:00 p.m.
San Jose, Costa Rica | 200 | The Data Center as a Service | Tuesday, May 5, 2016 | 8:00 a.m. to 6:00 p.m.

 

 

]]>
news-548 Thu, 18 Feb 2016 10:54:41 +0100 AER – A new efficiency indicator for airflow in Data Centers https://www.stulz.co.uk/en/newsroom/news/aer-a-new-efficiency-indicator-for-airflow-in-data-centers-548/ Recently, a value has repeatedly popped up in air conditioning system specifications, defining the... Recently, a value has repeatedly popped up in air conditioning system specifications, defining the maximum permitted power consumption for the fans at a certain airflow.

In the past, the primary consumer of energy in an air conditioning system was the compressor. Today, most Data Centers use air conditioning systems with Free Cooling, so mechanical (compressor) cooling is only required when it is very warm outside and Free Cooling alone cannot transport enough heat out of the Data Center. Due to these changes, fan power consumption is moving into the spotlight, as air needs to be conveyed through the Data Center even in Free Cooling mode. These days, therefore, the fans in the air conditioning units are frequently the primary energy consumer.

To demonstrate how efficiently air is conveyed through a Data Center, we look at the ratio of fan power consumption to airflow. And to give this value a name, we have called it AER, which stands for Airflow Efficiency Ratio.

The AER describes the ratio of fan power consumption to the airflow of an air conditioning unit at a given external static pressure. The unit used for the AER value is W / (m³/h). To obtain numerical values that are easy to handle, we chose watts rather than kilowatts, the usual unit for fan power consumption. The smaller the AER value, the better: the less power consumed to achieve a given airflow, the more efficient the unit.

Here are two examples:

  1. A precision air conditioning unit achieves an airflow of 30,000 m³/h at a static pressure of 20 Pa in the raised floor. The fans have a power consumption of 3.3 kW, or 3,300 watts. This translates as AER = 3,300 / 30,000 = 0.11 W / (m³/h).

  2. For a typical air handler, which conveys air at a rate of 80,000 m³/h at an external static pressure of 50 Pa in the ducts to the Data Center, a power consumption of 28.0 kW results in AER = 28,000 / 80,000 = 0.35 W / (m³/h).

The AER can be used to compare different air conditioning units, identical units with differing airflows (e.g. with and without active standby units), or different air conditioning systems in comparable conditions.
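The two worked examples above can be reproduced with a one-line calculation:

```python
def aer(fan_power_w, airflow_m3h):
    """Airflow Efficiency Ratio in W / (m3/h): lower is better."""
    return fan_power_w / airflow_m3h

# Example 1: precision A/C unit, 3,300 W of fan power at 30,000 m3/h
print(aer(3300, 30000))   # 0.11
# Example 2: air handler, 28,000 W of fan power at 80,000 m3/h
print(aer(28000, 80000))  # 0.35
```

Note that the external static pressure does not appear in the formula itself; it is part of the stated operating point, so AER values are only comparable at the same pressure.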

It is safe to assume that in future the AER will become a common item among the technical data in air conditioning unit brochures.

]]>
news-546 Thu, 04 Feb 2016 14:55:32 +0100 A brief history of precision air conditioning technology https://www.stulz.co.uk/en/newsroom/professional-article/a-brief-history-of-precision-air-conditioning-technology-546/ STULZ and the road from computer room cooling to modern Data Center air conditioning: The history of... STULZ and the road from computer room cooling to modern Data Center air conditioning: The history of precision air conditioning technology begins in the early 1970s with the air conditioning of the countless computer rooms that were springing up at the time. With the transition to the modern Data Center, the exceptionally diverse landscape of precision air conditioning solutions that we know and trust today gradually came into being – a process in which STULZ repeatedly took a pioneering role. The most important milestone was the CyberAir 1, the world's first precision air conditioning system to be fitted with EC fans as standard.

]]>
news-544 Wed, 20 Jan 2016 14:38:31 +0100 Edge Data Centers are all the rage in the U.S. https://www.stulz.co.uk/en/newsroom/professional-article/edge-data-centers-are-all-the-rage-in-the-us-544/ A quick online search for "edge Data Centers" reveals some interesting results: a search in English... A quick online search for "edge Data Centers" reveals some interesting results: a search in English delivers over 10,000 hits, while a search in German ("Edge-Rechenzentren") produces only 5 (as of January 8, 2016). How can this be, and what kind of potential new trend is about to engulf the Data Center market? Or is it simply old wine in new bottles which, in the guise of a buzzword, is being sold to us as the next big thing?

I am going to tackle the subject in this blog, and attempt to explain to what extent this phenomenon is relevant to the German market. My American colleagues were able to confirm that the number of edge Data Centers in the U.S. has increased dramatically in the last 1-2 years, and that they are a hot topic in the American specialist media. As I read the numerous English articles available, my first impression is that it could be purely an American phenomenon. But why is that the case?


U.S. situation

Given its surface area of nearly 10 million square kilometers and its population of approx. 320 million, the U.S. necessarily has a very different Data Center landscape from Germany's. Looking at the country as a whole, population density in the U.S. is many times lower than in Germany, and varies considerably from one region to another. Accordingly, many of the U.S.'s large Data Centers have been established in the great urban centers on the West and East coasts, and the country as a whole is supplied with web content from these regions. This worked reliably in the past, because the volumes of data to be transferred were not especially large.

However, due to the enormous growth in cloud and online services, in which software, films and games encompassing huge amounts of data are streamed onto private screens, internet systems are increasingly approaching their limits. For applications with constantly high volumes of data, it is not only the available bandwidth that matters for transmission quality: the distance between a person watching a film and the Data Center from which the film is streamed is also a vital factor. Large volumes of data from a great many users increase latency, and buffering of live content becomes ever more frequent. In addition, transporting data across different networks right across the country entails high costs. User numbers for the leading American providers of live content are extremely high, so it is not improbable that on certain days many millions of people are sitting in front of large and small screens, streaming content in HD quality. Finally, what also matters is that the films, TV series and games beloved by many reach consumers at prime time in top quality and without interruption. For niche services and at quiet times, the necessary quality is achieved more easily due to the smaller user numbers.


The edge is growing

Suppliers of edge Data Centers recognized this problem and began a massive operation to build Data Centers in regions that had previously received content from the large conurbations in the traditional way. In edge Data Centers, content can be cached and retrieved locally. And not every piece of data has to be available in each Data Center at the same time. Edge Data Center operators have established a network of Data Centers of varying size in the U.S. The larger ones are hubs, which are interconnected. Further, smaller Data Centers are connected to these networks, and they cache content and make it available locally. It's obvious that this ensures a faster and better supply. Just like a freeway: it only makes sense if there are many exits and no diversions. Pushing the source of data closer to users can be regarded as expanding the edge of the internet.


DE-CIX in Frankfurt breaks a record - 5 Terabits per second

DE-CIX in Frankfurt is one of the most important German internet exchanges for the international forwarding and exchange of data traffic. At the end of 2015, DE-CIX operators announced a new peering record of 5 terabits per second. This record-breaking success was recorded on December 8, 2015. It is remarkable that this was the second record of its kind in one year. The 4 terabits per second barrier was smashed in April 2015. In the operators' opinion, the main reason for this increase is the growth in online video content and growing numbers of mobile end devices. This could be the first hint that we are heading towards a U.S. style situation.

So now we must ask the question: when are the first edge Data Centers coming to Germany, or are they in fact already here in another form? As in the U.S., Germany also has numerous large Data Centers in metropolises, which supply the whole country. The distances involved here are much smaller, however. In Germany, with its very well developed network, a Data Center can reach a great many users and guarantee the required quality of connections and streaming. In fact, cloud providers believe that theoretically, Data Centers do not even have to be situated in Germany, or indeed Europe. This notion is unpopular in Germany, however, and not feasible in the long term. Therefore, last year many cloud providers were on an intensive search for colocation partners in Germany who cache web content in their Data Centers, or they built Data Centers of their own.

I do not see the need for edge Data Centers in Germany in the near future, because the existing Data Center infrastructure should be sufficient. In Germany, the streaming of very large volumes of data is still in its infancy. This will change in future in any event. As user numbers grow, smaller local Data Centers may eventually be required, to make sure that users can enjoy a film without any interruptions, for example. Looking at Europe as a whole, the situation could change more quickly. The first American providers of edge Data Centers are planning Data Center sites in Europe, and it will be fascinating to see in which countries and cities these will be.

]]>
news-534 Mon, 28 Dec 2015 07:43:16 +0100 Data Center climate control in West, Central, and East Africa https://www.stulz.co.uk/en/newsroom/professional-article/data-center-climate-control-in-west-central-and-east-africa-534/ Africa is the world's second-largest continent and also comes second in terms of population. It... Africa is the world's second-largest continent and also comes second in terms of population. It comes as no surprise, then, that the local Data Center market has developed strongly over recent years.

]]>
news-532 Wed, 16 Dec 2015 17:26:00 +0100 Low-noise Data Center cooling systems are becoming widespread in other sectors too https://www.stulz.co.uk/en/newsroom/professional-article/low-noise-data-center-cooling-systems-are-becoming-widespread-in-other-sectors-too-532/ Thanks to its wide-ranging benefits, the concept is now also becoming established in other sectors as a genuine cross-industry innovation. Demands in terms of reliability, availability and energy efficiency are particularly high in a Data Center. So it is no wonder that other industries are keen to enjoy the benefits of low-noise cooling. In this blog article on chillers we will be taking a look at one very special benefit: The comprehensive sound-proofing makes such a chiller highly attractive for many different sectors and projects.

The noise dilemma

Our customer in the scenario we describe had no idea that noise might become a problem. An industrial manufacturer, the company operates a large site with production running round the clock. Noise had not previously been an issue. It was only when a particular construction project was carried out that an industrial chiller needed to be installed directly adjacent to an office building. But it was essential for the chiller to meet a number of requirements: The system first had to be highly efficient, as the company's self-imposed CO2 limits could not be exceeded. Other crucial factors were that the chiller should operate as quietly as possible, and take up minimal space. Compact size and low noise emissions are contradictory demands in chiller design, however. Even with identical cooling capacity, large chillers are usually more efficient and quieter than small ones. Data Centers are often sited in locations of mixed residential and commercial usage, and operate round the clock.

So a Data Center chiller has to adhere to more stringent noise limits, particularly at night. And Data Centers have changed significantly in recent years: Increasing packing densities have led to a continuous rise in heat loads inside Data Centers, though the footprint remains virtually the same as existing centers are upgraded or new ones built. As a result, the space available to install chillers on a roof or next to a building is rapidly shrinking. This means that, to achieve the required cooling capacity, smaller chillers need to be installed - but they tend to be overly loud and not efficient enough.

Sound-proofing as a development concept

The running noise emissions of components such as compressors, fans and pumps were tested in great detail right at the start of the development process when designing the CyberCool 2 chiller. The outcome of this development work was that the CyberCool 2's compressors were housed in a special sound-proofed chamber. This first step in itself greatly reduced noise emissions. Sound-proofing the fans proved rather more complex however. Being small, they have to rotate faster in order to propel more air. This inevitably makes them louder, and they consume more power. To resolve this problem, the STULZ development team decided to fit the largest sized fans possible, while making optimum use of the space available on the chiller. They did so by ensuring that there was literally no room for even a sheet of paper to fit between the individual fans. To enable the new fan system to deliver its full power, it was necessary to redesign the entire air conduction system, from the intake, via the heat exchangers, to the fans.

]]>
news-530 Mon, 14 Dec 2015 17:42:00 +0100 STULZ expands its network of sales partners in Ireland https://www.stulz.co.uk/en/newsroom/news/stulz-expands-its-network-of-sales-partners-in-ireland-530/ We are delighted to be able to boost our global STULZ sales network by the addition of an extremely experienced partner in Ireland. In RWL Advanced Solutions Ltd., we now have a sales partner offering the entire STULZ product range of Data Center cooling solutions. With offices in Dublin and Cork, the company has a presence in Ireland's most important Data Center hubs. Moreover, RWL has an office in London and will work in close collaboration with our STULZ subsidiary in the British capital. "This partnership will produce numerous advantages and synergistic effects, which will bring added value to customers of both RWL and STULZ," says Thomas Steinberg, Head of Global Strategic Accounts, with conviction.

About RWL

RWL Advanced Solutions is a specialist distributor in the Information Technology, Security and Telecom industries, with sites in Dublin and Cork in Ireland and London in the UK. Its purpose-built offices in Dublin and Cork include trade counters and associated warehousing covering over 8,000 ft² combined. RWL aims to stock and support all of the leading brands across the markets it serves, offering best-of-breed solutions to its customers. With a team of individuals with extensive industry experience, RWL is there to aid customers in all facets of the supply chain, from pre-sales design to post-sales commissioning where needed.

www.rwl.ie

]]>
news-528 Thu, 03 Dec 2015 14:12:21 +0100 Increased efficiency through equipment tuning https://www.stulz.co.uk/en/newsroom/news/increased-efficiency-through-equipment-tuning-528/ Whenever I think about Data Centers, my thoughts immediately turn to energy efficiency. In this era... Whenever I think about Data Centers, my thoughts immediately turn to energy efficiency. In this era of constantly rising energy costs and the ever greater need for data transfer and storage, demand for Data Center capacity continues unabated around the world.

In order to cope with Data Centers' hunger for energy, ever more efficient servers and other IT components are being developed. Data Center infrastructure such as cooling, power supply and distribution are also becoming increasingly efficient. In the field of air conditioning technology - one of the biggest energy consumers in a Data Center besides the servers themselves - considerable strides have been taken regarding efficiency in recent years. Free Cooling, either Direct or Indirect, is now commonplace in large Data Centers.

]]>
news-526 Mon, 23 Nov 2015 15:27:00 +0100 Delta T – The air-side temperature difference https://www.stulz.co.uk/en/newsroom/blog/delta-t-the-air-side-temperature-difference-526/ Increased efficiency in the Data Center, improved PUE, lower losses – what does all this have to do... A server in a Data Center takes in air at a certain temperature. Once inside the server, this air warms up due to the heat produced by all the components in the server. The air that then exits the server is roughly 10 °C to 15 °C hotter.

An air conditioning unit in a Data Center also takes in air at a certain temperature. Inside the air conditioning unit, this air is cooled and the extracted heat conveyed to the outside. So the air that exits the air conditioning unit is approximately 10 °C to 15 °C cooler.

That all works out fine then, doesn't it? Unfortunately not.

The above-mentioned 10 °C to 15 °C is the so-called air-side temperature difference, or Delta T.

In a theoretical ideal scenario – a closed circulation of air between the server and the air conditioning unit – there would be a certain air-side temperature difference, and the air conditioning unit would work at its planned maximum level of efficiency.

In a real Data Center, this is sadly not the case. The cold air exits the air conditioning unit, flows through the raised floor, enters the cold aisles through the perforations in the raised floor grilles, is sucked in by the server, heated, blown out into the hot aisle, and then begins its journey back to the air conditioning unit. However, air is stupid and lazy. It doesn't know that this is the route it has to take, and showing it the way with blue and red arrows is no help at all.

Some of the air finds openings in the raised floor, e.g. cable cut-outs in the hot aisle that have not been sealed, gaps between the raised floor grilles or even grilles missing altogether below the racks. It then takes one of these shortcuts back to the hot aisle and straight back to the air conditioning unit, without ever having seen a server from the inside or having taken any of its heat away with it. Other bits of air take the planned route into the cold aisle, but then sneak between the servers through unused rack surfaces, or to either side of the servers, hot-footing it straight to the hot aisle and back to the air conditioning unit. This air does absorb a little heat, which the servers radiate to the outside.

Air that takes in only very little heat on its trip through the Data Center lowers the air-side temperature difference and therefore the efficiency of the entire air conditioning system.
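Ignoring the small amount of heat that bypass air does absorb, the return temperature seen by the air conditioning unit is simply a mix of hot-aisle air and short-circuited cold air. A minimal sketch of that mixing (the 20 °C supply and 35 °C hot-aisle figures are illustrative):

```python
def return_air_temp(bypass_fraction: float,
                    supply_c: float = 20.0,
                    hot_aisle_c: float = 35.0) -> float:
    """Return air temperature when a fraction of the cold supply air
    short-circuits past the servers and mixes back in unheated."""
    return bypass_fraction * supply_c + (1 - bypass_fraction) * hot_aisle_c

print(return_air_temp(0.0))   # no bypass: 35 °C return, Delta T = 15 °C
print(return_air_temp(1/3))   # a third bypasses: ≈ 30 °C return, Delta T ≈ 10 °C
```

Even a third of the airflow taking a shortcut is enough to knock 5 °C off the return temperature, and with it a third of the temperature difference the unit was designed for.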

Here's an example:

With an airflow of 45,000 m³/h and a Delta T of 15 °C (return air 35 °C, supply air 20 °C), an ASD 2010 CWU air conditioner from STULZ manages a capacity of 228 kW for a power consumption of 6.2 kW. The result is an energy efficiency ratio (EER) of 36.8.

Now, if the actual Delta T is only 10 °C (i.e. return air is only 30 °C) with the same airflow, power consumption and water temperature, capacity drops to 155 kW and the EER is cut to 25.0. As a result, the air conditioning unit has an efficiency 32 % below its possible or planned level.
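The arithmetic behind these figures is easy to reproduce; a minimal sketch using the values quoted above:

```python
def eer(cooling_capacity_kw: float, power_consumption_kw: float) -> float:
    """Energy efficiency ratio: cooling delivered per unit of electrical power."""
    return cooling_capacity_kw / power_consumption_kw

eer_planned = eer(228.0, 6.2)   # Delta T = 15 °C: ≈ 36.8
eer_actual = eer(155.0, 6.2)    # Delta T = 10 °C: ≈ 25.0

# Shortfall relative to the planned efficiency
shortfall = 1 - eer_actual / eer_planned   # ≈ 0.32, i.e. about 32 %
print(f"planned EER {eer_planned:.1f}, actual EER {eer_actual:.1f}, "
      f"shortfall {shortfall:.0%}")
```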

]]>
news-524 Tue, 17 Nov 2015 15:39:31 +0100 STULZ wins Data Center Insider Readers' Choice Awards 2015 in the Cooling Systems category https://www.stulz.co.uk/en/newsroom/news/stulz-wins-data-center-insider-readers-choice-awards-2015-in-the-cooling-systems-category-524/ STULZ expresses its thanks to readers, partners and editors for the Platinum Award, and its delight... In a sweeping survey conducted from April 15 to August 31, 2015, the Insider portals of Vogel IT-Medien called upon their readers to choose their Manufacturer of the Year. From spring to late summer, readers of the BigData-Insider, CloudComputing-Insider, DataCenter-Insider, IP-Insider, Security-Insider and Storage-Insider information portals were able to submit their votes for the "Readers' Choice Awards 2015". For this purpose, the editors provided a total of 43 categories from their portals' different specialist areas, and drew up a shortlist of the ten most important companies of the past year in each category. In all, 27,687 votes were cast, spread across the individual portals and categories, and eventually the winners of the Readers' Choice Awards 2015 were chosen. STULZ was one of the lucky nominees of the DataCenter-Insider portal, in the Cooling Systems category.

At last, on October 29, 2015 it was time. During a large evening gala event in the Steigenberger "Drei Mohren" hotel in Augsburg, Germany, Readers' Choice winners received Silver, Gold, and Platinum awards in the various portal categories. Thanks to the fantastic support of DataCenter-Insider readers and STULZ partners, STULZ Sales Manager Mirko Hoffmann had the honor of receiving first place in the Cooling Systems category: the DataCenter-Insider Readers' Choice Award in Platinum. The STULZ team wishes to thank all those involved, in particular its extremely active STULZ partners, the readers of the DataCenter-Insider portal and its editorial team.

]]>
news-516 Mon, 19 Oct 2015 13:14:40 +0200 STULZ CyberLab: Technical Cooling for Test Rooms, Laboratories and Archives https://www.stulz.co.uk/en/newsroom/news/stulz-cyberlab-technical-cooling-for-test-rooms-laboratories-and-archives-516/ A compact air conditioning solution from STULZ that maintains constant temperature and humidity... A compact air conditioning solution from STULZ that maintains constant temperature and humidity conditions in special technical applications with a low heat load

With a double-walled design, integrated inspection window and other equipment features, the STULZ CyberLab Series meets the requirements of VDI 6022 (hygiene inspections for air-handling systems).

Hamburg, 14.10.2015 – STULZ introduces CyberLab, an air conditioning solution for test facilities, laboratories and museum archives. These compact air conditioners provide cooling capacity of 21 kW and have been specially designed for technical applications with demanding requirements for stable room-air conditions. STULZ CyberLab's cutting-edge control technology provides room-temperature control accuracy to within 0.5 K and a relative humidity fluctuation range of no more than 3%.

These are precisely the kind of requirements in test rooms, where highly sensitive measurements are taken for quality assurance or standardization and where the need to comply with measuring accuracy means that stringent temperature and humidity limits are prescribed. The same applies to laboratory applications: whether biochemistry, pharmacy, lasers, optics or electronics – a stable air temperature and humidity level are some of the most important minimum requirements in the business. A low heat load is another factor common to most technical special applications.

To provide energy-saving operation and precision cooling-capacity control of 0–100% in these conditions, STULZ CyberLab air conditioners are fitted with a refrigerant heater and a continuous-control electric heater. They also have an advanced, steplessly controlled EC compressor and outstanding partial-load efficiency, rendering them not only highly efficient but also robust enough for continuous operation. CyberLab units are supplemented with STULZ UltraSonic air humidifier units and a variety of filter options. With a double-walled design, integrated inspection window and other equipment features, the STULZ CyberLab Series also meets the requirements of VDI 6022 (hygiene inspections for air-handling systems).

]]>
news-514 Mon, 19 Oct 2015 07:49:00 +0200 Free Cooling – Direct and Indirect https://www.stulz.co.uk/en/newsroom/blog/free-cooling-direct-and-indirect-514/ The term "Free Cooling" suggests that you don't have to pay for this type of cooling. That is a... Free Cooling for Data Centers: this subject is on everyone's lips and is preoccupying specialists at conferences on Data Center infrastructure. There are now countless variations. But they all pursue the goal of lowering the Data Center's energy consumption and improving the PUE.

The term "Free Cooling" suggests that you don't have to pay for this type of cooling. That is a fallacy. Is anything free these days? Below I will describe the Free Cooling solutions in use today.

Free Cooling

Free Cooling means that the power consumption of the air conditioning system at the site is reduced to the necessary minimum by suitable means, without compromising on reliability and availability. The words "suitable means" and "at the site" open up a very broad range of possibilities.

Direct Free Cooling

To put it briefly, this could be described as follows: window open, blow cold air from outside through the Data Center, pick up the warm air, transport it back outside, voilà! And physically speaking, that's exactly what happens. Only the process of "moving the air" requires energy.

Unfortunately, in real life things are not that simple. Outdoor air is not always in a condition that the IT equipment is comfortable with. Sometimes it's hot and sometimes cold, sometimes it’s very humid and sometimes very dry. What's more, outdoor air is not always clean. The outdoor air is often full of particles which can be very hostile to modern IT equipment.

]]>
news-512 Fri, 09 Oct 2015 14:50:05 +0200 Standby management for CW units https://www.stulz.co.uk/en/newsroom/blog/standby-management-for-cw-units-512/ Today, the operator of a Data Center basically has two fundamental concerns: firstly, reliability,... In most cases, larger Data Centers continue to use closed-circuit air conditioning units. These so-called CW units basically consist "only" of an air/water heat exchanger, fans, air filters, control valves and the necessary electrical components, plus a controller. The cooled water supply to these units is provided by a centralized chiller.


To remove the heat load from the Data Center, a certain airflow is required, the amount of which depends on the air-side temperature difference. This airflow is supplied by the closed-circuit air conditioning units.
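This relationship between heat load, airflow and Delta T follows from the sensible heat equation. A rough sketch (the air density and specific heat values are approximate room-condition constants):

```python
RHO_AIR = 1.2    # kg/m³, approximate air density at room conditions
CP_AIR = 1.005   # kJ/(kg·K), approximate specific heat of air

def required_airflow_m3h(heat_load_kw: float, delta_t_k: float) -> float:
    """Airflow needed to carry a given sensible heat load at a given Delta T."""
    flow_m3s = heat_load_kw / (RHO_AIR * CP_AIR * delta_t_k)
    return flow_m3s * 3600

# A smaller Delta T means proportionally more airflow for the same load:
print(round(required_airflow_m3h(100, 15)))  # ≈ 19,900 m³/h at Delta T = 15 K
print(round(required_airflow_m3h(100, 10)))  # ≈ 29,851 m³/h at Delta T = 10 K
```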


A certain level of so-called "redundancy" of air conditioning units is created, depending on the size and desired reliability level, to ensure reliable Data Center air conditioning. In other words, more units are installed (standby units) than are actually required for air conditioning. Normally, these units are only brought (automatically) into operation if a running unit switches off due to a fault (passive redundancy).


The latest closed-circuit air conditioning units make use of EC fans for ventilation. These fans are considerably more energy efficient than the older versions with AC motors. Another major advantage of these fans is that as the fan speed decreases, the motor's power consumption does not decline linearly with the speed, but with the cube of the speed.
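This cubic relationship is what makes active standby management pay off: sharing the required airflow across more units, each running slower, draws less total fan power than fewer units at full speed. A sketch with illustrative unit counts:

```python
def relative_fan_power(units_running: int, units_needed: int) -> float:
    """Total fan power (in units of one full-speed fan) when the airflow of
    `units_needed` full-speed units is shared equally across `units_running`
    units; fan power scales with the cube of fan speed."""
    speed_fraction = units_needed / units_running
    return units_running * speed_fraction ** 3

baseline = relative_fan_power(4, 4)  # 4 units at full speed: 4.0
shared = relative_fan_power(5, 4)    # 5 units at 80 % speed: 5 × 0.8³ = 2.56
print(f"running the standby unit too cuts fan power to {shared / baseline:.0%}")
```

So instead of leaving the redundant unit idle until a fault occurs, running it in parallel at reduced speed cuts total fan power by roughly a third in this example.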

]]>
news-510 Fri, 18 Sep 2015 14:07:54 +0200 The European Code of Conduct for Data Centres https://www.stulz.co.uk/en/newsroom/professional-article/the-european-code-of-conduct-for-data-centres-510/ Energy Efficiency in Data Centers The following figures appear on the website of the Joint Research Centre of the European Commission and the Institute for Energy and Transportation, which launched the Code of Conduct for Data Centres in 2008: "Number of participants: 105. Number of endorsers: 228".

The immediate questions that probably spring to mind are what this is all about, what it might mean for your business and whether these figures are good or bad. Those are the questions I'll be answering in this article, but first I would like you to know up front that we are all part of the reason that the Code of Conduct for Data Centres exists at all. In early 2007, I held a smartphone in my hands for the first time; a colleague in the USA already had one. My words at the time are still ringing in my ears: "I don't think I need this." No doubt you can picture how the story ends and why we all "have a stake" in the large number of Data Centers which now exist all over the world.

The "Noughties" was the decade when new media and mobile telephony gained a firm foothold in private life, and now we can't imagine it without them. This boom resulted in Data Centers being built very quickly, and operational efficiency was not always a consideration. Winter or summer, Data Centers need air conditioning around the clock. The flipside to this is that badly planned Data Centers consume a lot of energy and therefore have high CO2 emissions. This is not a good thing. Moreover, it's avoidable. At the time, there were Data Center operators and manufacturers already capable of producing efficient solutions. Given that this was the case, it would have benefited a great many people had someone gathered them around a table and established this as a subject area. The European Commission's Joint Research Centre recognized this and defined a voluntary code of conduct for Data Center efficiency.

Can a voluntary code like the Code of Conduct for Data Centres change anything at all?

As far as I'm concerned the answer is a resounding Yes! The code is aimed at Data Center operators, experts, consultants and businesses as well as manufacturers, all of whom are a necessary source of the many and diverse products and services required to build a Data Center. The intent behind the code is that all the businesses involved should pay greater attention to reducing Data Center energy consumption and explore the options for designing or adapting new and existing centers for better efficiency. So those responsible for the code of conduct have developed a variety of measures and guidelines which are available to everyone. Best practice guidelines have also been defined and an award launched to highlight the most exemplary solutions. As a manufacturer of efficient climate systems for Data Centers, we are involved in the initiative. We're especially pleased that two of the Data Centers singled out by the latest awards have a STULZ solution that uses Indirect Free Cooling. This recognition also reflects the many years we have invested in developing and building efficient cooling solutions.

The success of the code is also based precisely on the fact that it is voluntary rather than being handed down from on high, and will continue to develop going forward. Furthermore, regular meetings mean that participants and endorsers can proactively join the initiative and bring their ideas and practices to the table.

How was the code developed?

105 participants and 228 endorsers – a very positive level of engagement given that we are talking about a highly specialized industry. While the initiative was founded by the European Union, lots of multinationals have joined. Participants include IBM, Telecity Group, ebay, British Telecom, France Télécom, Microsoft, Level 3, Unilever and many others, with over 250 data centers now involved. In short, this represents a huge amount of Data Center space and means that a great deal of energy has already been saved. For the most part, the endorsers and associations taking part are also multinationals, which is generating positive synergies in the field. The whole initiative is having a major influence and supporting businesses that have signed up to fly the flag for sustainability and environmental responsibility.

If you'd like to join us, we'll be happy to tell you how.


Joint Research Centre Institute for Energy and Transport (IET): iet.jrc.ec.europa.eu/energyefficiency/ict-codes-conduct/data-centres-energy-efficiency

2014 Awards European Code of Conduct for Data Centres iet.jrc.ec.europa.eu/energyefficiency/2014-awards-european-code-conduct-data-centre

]]>
news-508 Mon, 31 Aug 2015 07:01:00 +0200 Air-conditioning for special applications https://www.stulz.co.uk/en/newsroom/professional-article/air-conditioning-for-special-applications-508/ Alongside Data Centers, which have been successfully equipped with reliable and efficient precision... Alongside Data Centers, which have been successfully equipped with reliable and efficient precision air-conditioning technology for a number of decades, there are a raft of other applications that require constant climatic conditions. Laboratories, archives, storage rooms, test rooms, museums – as a result of the goods that are stored in these areas or the processes that take place there, all of these applications require highly stable temperature and humidity conditions for short to very long periods. What separates this from air-conditioning for data centers is the thermal load, which is very low or even zero in certain cases.

For example, museums and archive rooms are used for storing unique and priceless cultural objects for very long periods. Here, historic books, documents, parchments, works of art, and artifacts or films are stored under clearly defined room conditions to protect them in the long term and to preserve them for future generations. In addition to air quality, light, and the danger posed by pests, the air temperature and the air humidity are the main factors that influence the durability of the materials. High temperatures speed up the reaction of harmful substances with the materials, alter the acid content, and promote microbiological growth. Temperature fluctuations cause expansion and shrinkage, which in turn leads to material fracture. A high level of air humidity leads to corrosion, warping, cracks, and bacterial growth, whereas low humidity causes the material to dry out and shrink.

In test rooms, in which all kinds of measurements are performed on a wide range of objects and materials using highly sensitive apparatus, it is also necessary to adhere to defined temperature and humidity conditions for the purpose of measuring accuracy. The periods of time that apply in this case are relatively short and are measured in hours or days. A measurement normally consists of a stabilization phase, in which the required room conditions are set, and the subsequent measurement phase, in which the actual measurement takes place. Major fluctuations in temperature or humidity influence the measurement process, reduce the precision of the measurement, and must therefore be reduced to a minimum.

Laboratories are used in a wide range of areas. A distinction is drawn between biology, chemistry, and physics laboratories. The processes that take place there are so diversified that it is impossible to list them all in this text. To provide just a small selection, for example, there are laboratories for biochemistry, botany, pharmacy, organic and inorganic synthesis and analysis, lasers, optics, electronics, and much more besides.

Once again, all of these applications require a stable air temperature and stable air humidity. Further important factors include the air quality, movement, distribution, and speed, as well as the noise level and static underpressure or overpressure.

All these applications therefore share the need for constant air temperature and humidity conditions at a thermal load that is either zero or only very low. The air conditioning that is to be used must therefore be able to meet these requirements in a very reliable (and also efficient) manner in the long term using suitable components and control algorithms. In the case of precision air-conditioning units that were developed for data center air-conditioning, this is only possible if these units are adapted accordingly. The CyberLab from STULZ was specially developed for these requirements and precisely controls the temperature and the humidity with a tolerance of ±0.5 °C and ±3% relative humidity. This makes CyberLab the first choice for applications with these special requirements.

]]>
news-506 Thu, 20 Aug 2015 12:09:40 +0200 Flexible Companies https://www.stulz.co.uk/en/newsroom/blog/flexible-companies-506/ Fast decision making and room for maneuver is what marks out owner-managed companies. The STULZ family business began in 1947 as an electrotechnical equipment factory, and successfully developed and produced a variety of electronic household appliances until the end of the 1970s. By the mid 1960s it was clear that technical innovation in household appliances would be more or less exhausted within a few years. Alongside this, Germany was importing more and more appliances from Asia, making sales harder still in a market that was already virtually saturated. When things reach a point like this, entrepreneurs have to start asking themselves what strategy will secure their business' future. So it would be logical to relocate production to a country with more favorable general conditions or even outsource it completely. STULZ didn't consider this even for a moment, because local production and customer proximity are prerequisites for a flexible business. Instead, the company began an intensive search for new products and solutions that we could integrate through production expansion. Ultimately, STULZ entered the air-conditioning business in 1965, and in 1971 it also began specializing in the development and manufacture of precision air-conditioning systems for Data Centers. Breaking into this future-oriented market was only possible thanks to the firm's financial independence. Our customers benefit from this too, because it means we can maintain high quality standards and resist driving down costs at any price, which could jeopardize the quality of our products.

We pay careful attention to what our customers say, and monitor the market closely

STULZ GmbH began internationalizing in 1956 when it established its first subsidiary in the Netherlands. But it was not until we entered the Data Center air conditioning market that we needed to expand globally. Following the maxim “Think global, act local”, we established ourselves in the countries where our customers are. Today we have 6 international production sites, 16 subsidiaries and over 140 partners worldwide. Our growth has allowed us to build close relationships with customers and to implement a large number of projects tailored to local markets. We know from experience that every project has particular features to take into account. Usually this means adapting the product, but thanks to our extensive range of options, we have that covered. However, we are increasingly developing special, targeted, market-specific solutions in partnership with customers.

 

Lots of solutions doesn't necessarily mean flexibility

Nowadays there is a host of different, flexibly constructed solutions on the market which appear suitable for Data Center air-conditioning. This creates the impression that customers can easily find the right solution. However, take a closer look and you will see things differently: many of those products are mass-produced, or spin-offs of other cooling solutions which are not specified for Data Center air-conditioning and yet are still used for the purpose. But the products concerned are neither customized nor open to adaptation. STULZ's air-conditioning range offers extensive product depth, different product variants and differentiating features. It is our basic premise that the climate systems we make should provide maximum efficiency in every product group and size. For example, room cooling, high-density cooling, chillers, modular Data Center cooling and air-handling units from STULZ are available with optional Indirect Free Cooling. Room cooling, air-handling units and modular Data Center cooling are also available with Direct Free Cooling.

 

Customization – a major trend

These days, customers can choose from a wide selection of different cooling systems, performance variables and manufacturers. Data Center operators can therefore find themselves confronted by an overwhelming array of potential solutions, all of which must be evaluated carefully. With air-conditioning in particular, there is a high risk of choosing a solution that may well be sufficient for the Data Center's planned usage profile, but turns out to no longer be a 100% match over time. The price is not just unnecessarily high energy costs, but also a lack of flexibility during future expansion, or even shortcomings in operational reliability.

We know that many operators face technical and planning challenges when expanding their Data Center, as they have to take account of complex parameters such as local climate, spatial and room considerations, environmental and noise protection, not to mention safety requirements. To help meet these, STULZ offers customized, modular system solutions which can be adapted to suit virtually every project requirement and expansion phase. Even if the interiors of Data Centers and server rooms all over the world are scarcely distinguishable from one another, the requirements for Data Center air-conditioning are becoming increasingly individual. As a customer, you need to be able to trust your business to a company that can deliver the right product for your project.

STULZ CLIMATE.CUSTOMIZED. offers you the reliability of a global player combined with the flexibility of a family-run business. With over 40 years of Data Center air-conditioning experience behind us, you can count on STULZ.

]]>
news-502 Thu, 30 Jul 2015 14:24:27 +0200 CW units with different heat exchangers https://www.stulz.co.uk/en/newsroom/professional-article/cw-units-with-different-heat-exchangers-502/ Data Center planners and operators are always interested in finding the optimum operating point... If we just take a look at CW units with an external chilled water supply, the principal factors for the optimum operating point are as follows:

  1. Data Center location (annual temperature profile)
  2. Data Center size
  3. Planning a new Data Center or optimizing an existing one
  4. Number of CW units (redundancy) and total airflow
  5. Type of control
  6. Heat load of the Data Center
  7. Possible water temperature level of external chilled water supply
  8. Desired air-side temperature level in the Data Center (e.g. air temperature at the server inlet, return air temperature, supply air temperature, temperature difference between return air and supply air)

The last two points, in combination with the airflow, have an influence on the cooling capacity of the closed-circuit air conditioning units. It is therefore important to know and define these figures precisely right from the start.

The water temperature level of the external chilled water supply depends on the chillers used or the type of chilled water supply system. Modern, energy efficient chillers are capable of working with comparatively high water temperatures (20 °C inlet temperature, sometimes even higher). Older systems, on the other hand, are mostly unable to cope with these water temperatures. In some cities, a central district cooling system is in operation, which generally works with very high water-side temperature differences.

The next step is then linking this potential water temperature with the desired air temperature in the data center and the required cooling capacity for each unit.

The cooling capacity of an air/water heat exchanger, as used in CW units, is dependent on the internal design and size of the heat exchanger, and on the temperature difference between the air inlet temperature and the mean water temperature (possible glycol content is not taken into consideration here).
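As a rough, first-order illustration of that dependence (this is not a STULZ sizing method — the UA values below are invented for three hypothetical coil sizes), capacity can be approximated as the product of an overall heat transfer coefficient and the difference between air inlet temperature and mean water temperature:

```python
# Illustrative sketch: Q ≈ UA * (T_air_in - T_water_mean).
# The UA values are assumptions, not manufacturer data.

HEAT_EXCHANGERS_UA_KW_PER_K = {  # assumed overall UA in kW/K
    "coil_small": 4.0,
    "coil_medium": 6.5,
    "coil_large": 9.0,
}

def mean_water_temp(t_water_in_c: float, t_water_out_c: float) -> float:
    """Arithmetic mean of water inlet and outlet temperature."""
    return (t_water_in_c + t_water_out_c) / 2.0

def cooling_capacity_kw(ua_kw_per_k: float, t_air_in_c: float,
                        t_water_in_c: float, t_water_out_c: float) -> float:
    """First-order capacity estimate; clamps to zero if the water is warmer than the air."""
    dt = t_air_in_c - mean_water_temp(t_water_in_c, t_water_out_c)
    return ua_kw_per_k * max(dt, 0.0)

# Example: 35 °C return air, 20/26 °C chilled water (mean 23 °C)
for name, ua in HEAT_EXCHANGERS_UA_KW_PER_K.items():
    q = cooling_capacity_kw(ua, 35.0, 20.0, 26.0)
    print(f"{name}: {q:.0f} kW")
```

The sketch shows why the choice of coil matters: at the same air and water temperatures, a larger heat exchanger delivers proportionally more capacity, so raising the water temperature (for more efficient chiller operation) can be compensated by selecting a bigger coil.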

Therefore, the largest possible choice of heat exchangers is vital for planners and customers in the planning phase, so that they can find their optimum operating point.

For this reason, all CW units from STULZ's CyberAir 3 PRO series can be designed and ordered as standard with three different heat exchangers, which can be employed as requirements dictate.

Thanks to this choice of three heat exchangers, energy-efficient operation is guaranteed for virtually any requirements. If very special conditions mean that optimum energy efficiency cannot be achieved with one of these three heat exchangers, however, individual, project-specific heat exchangers can be used.

]]>
news-465 Sun, 21 Jun 2015 02:00:00 +0200 Modularity and its bright future https://www.stulz.co.uk/en/newsroom/professional-article/modularity-and-its-bright-future-465/ These days terms such as modularity, pay-as-you-go (or in this case “grow”) and containerized Data... These days terms such as modularity, pay-as-you-go (or in this case "grow") and containerized Data Centers are widespread and are becoming a trend in the Data Center business. As a result, many research companies are measuring this trend and trying to use it for marketing purposes.

A press release from "MarketsandMarkets" states that the modular Data Center market is expected to reach $40.41 billion by 2018 at a CAGR (compound annual growth rate) of 37.41% between 2013 and 2018. (1)
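As a quick sanity check of those figures (the 2013 base is derived here, not quoted in the release):

```python
# Working backwards from the press-release numbers: a market of
# $40.41bn in 2018, growing at 37.41% CAGR over 2013-2018, implies
# a 2013 base of 40.41 / 1.3741**5.

cagr = 0.3741
value_2018_bn = 40.41
years = 2018 - 2013
implied_2013_bn = value_2018_bn / (1 + cagr) ** years
print(f"Implied 2013 market size: ${implied_2013_bn:.2f}bn")  # roughly $8.25bn
```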

Due to this trend, numerous different products and solutions are entering the market. Many people think of containerized Data Centers when they hear "modular". But the market and manufacturers make an important distinction between the different types of modular builds.

]]>
news-470 Sat, 20 Jun 2015 19:01:00 +0200 Energy-saving technology https://www.stulz.co.uk/en/newsroom/professional-article/energy-saving-technology-470/ Energy efficiency pays in part 4 mission energy Low consumption over large areas

Large data centres are cooled by the low-consumption STULZ CyberAir® air-conditioning system with DFC. The DFC (Dynamic Free Cooling) automatic control adjusts the output of the cooling fans in the blink of an eye, and switches to economical Free Cooling when the weather cools down. In this operating mode, the refrigerant in the system is cooled further with ambient air. Energy-intensive compressor cooling (DX) is only switched on when absolutely necessary.
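The switching behaviour described above can be sketched as a simple mode selector. The thresholds and the 5 K approach temperature below are illustrative assumptions, not STULZ control parameters:

```python
# Hedged sketch of the control idea behind free-cooling changeover:
# pick the cheapest mode that can still meet the setpoint.

def select_cooling_mode(t_ambient_c: float, t_setpoint_c: float,
                        approach_k: float = 5.0) -> str:
    """Choose between free cooling, mixed operation and compressor (DX) cooling.

    - "free_cooling": ambient air is cold enough (with some approach margin)
      to cool the circuit on its own.
    - "mixed": free cooling does part of the work, the compressor tops it up.
    - "dx": compressor cooling only.
    """
    if t_ambient_c <= t_setpoint_c - approach_k:
        return "free_cooling"
    if t_ambient_c < t_setpoint_c:
        return "mixed"
    return "dx"

print(select_cooling_mode(8.0, 18.0))   # cold day
print(select_cooling_mode(15.0, 18.0))  # mild day
print(select_cooling_mode(30.0, 18.0))  # hot day
```

A real controller would add hysteresis so the unit does not oscillate between modes near the thresholds; the sketch only shows the basic decision.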

Free Cooling instead of compressor

In data centres with a thermal load of approx. 800 kilowatts or more, cooling the circulating air with water is a viable option. The cooling circuit is fed by an external chiller. Liquid cooling systems supplemented by economical Free Cooling are particularly energy efficient. Their investment payback times vary depending on climatic conditions at the site in question. Your STULZ expert adviser will carry out cost-efficiency calculations to help you with your decision.

Chilled water for efficient hotspot cooling

In combination with liquid cooled server racks, STULZ CyberCool produces chilled water for the direct cooling of high heat-density server racks.

Direct Free Cooling

Thanks to our many years of experience with precision air-conditioning solutions, we have succeeded in optimising all components for Direct Free Cooling, ensuring compliance with specified data centre temperature tolerances as per ASHRAE TC 9.9 – 2011. With Direct Free Cooling, filtered ambient air below 18 °C is used to keep the data centre cool. This brings huge potential savings.

]]>
news-472 Sat, 20 Jun 2015 19:01:00 +0200 Thrifty in operation https://www.stulz.co.uk/en/newsroom/professional-article/thrifty-in-operation-472/ Look ahead with an Energy Audit in part 5 mission energy STULZ Service: Look ahead with an Energy Audit

With its intelligent service, STULZ ensures that you remain energy efficient on a permanent basis. The STULZ Energy Audit regularly checks the energy performance of your precision air-conditioning system. If measured values deviate from the setpoints, your air-conditioning system is recalibrated. If the cooling capacity is no longer sufficient, STULZ Service identifies the causes and makes suggestions for a system upgrade. As a competent partner for IT and facility management, we are at your side as you tackle these tasks.

A living IT landscape

During operation, the climate is constantly in motion. Like any technical system, air-conditioning systems in data centres need regular maintenance. When individual computers or racks are upgraded, converted or replaced by higher-powered equipment, this can become critical: each new heat source changes the thermal load distribution, and each new piece of hardware can throw the flows of hot and cold air out of balance.

]]>
news-476 Sat, 20 Jun 2015 19:01:00 +0200 Good planning means efficient cooling https://www.stulz.co.uk/en/newsroom/professional-article/good-planning-means-efficient-cooling-476/ New building without compromise in part 3 mission energy Free Cooling with ambient air

Economical precision air-conditioning systems also make use of cool ambient air for indirect cooling of the data centre. Modern control electronics only switch on energy-intensive compressor cooling when really necessary. They continuously monitor the climate in the data centre and select the optimum operating mode in no time.

 

New buildings without compromise

Energy efficiency is a question of planning. In new buildings, you can design the air-conditioning system to the specific requirements of the room and computing equipment with particular precision. Many possible systems exist, but only one solution will supply optimum energy efficiency for you. We will be glad to help you choose the right one.

 

Spot-on cooling with water

Where high-powered computers produce hotspots, chilled water goes to the heart of the problem and dissipates the heat. Liquid cooled server racks work especially efficiently in these cases. All liquid-bearing parts are strictly separated from the electronics.

 

Cool air guided with precision

To ensure that the cooled air gets to where it is needed, careful planning of the air conduction is part of every good climate control plan. Hot and cold aisles, raised floors and cover panels convey the cooled air to the computer with precision. Particularly efficient systems make use of closed air circuits, for example, which feed the waste heat from the server racks directly back to the air-conditioning unit via closed air ducts.

]]>
news-496 Wed, 10 Jun 2015 18:46:00 +0200 Stand-alone air conditioning solution saves space in Data Centers https://www.stulz.co.uk/en/newsroom/professional-article/stand-alone-air-conditioning-solution-saves-space-in-data-centers-496/ The air handling system for installation outside the unit frees up valuable surface space in the... The air handling system for installation outside the unit frees up valuable surface space in the Data Center and is extremely efficient thanks to its Free Cooling and adiabatic module. STULZ CyberHandler is a ready-to-connect air conditioning solution developed specially for Data Centers and equipped with cutting-edge precision air conditioning technology. This complete air conditioning system in an outdoor housing saves precious floor space in the Data Center and can easily be installed next to a building or on a roof. STULZ CyberHandler is available in a range of output ratings from 55 to 460 kW and offers a comprehensive selection of energy-saving Free Cooling modules, including direct and indirect adiabatic modules.   

 

Hamburg, Germany, 10.06.2015 – With its CyberHandler precision air conditioning system, STULZ presents a ready-to-connect air handling system for medium to large Data Centers. The development of the new series was based on the latest requirements in Data Center air conditioning: both the supply air temperature window of the ASHRAE TC 9.9 Thermal Guidelines and the efficiency requirements of ASHRAE 90.1 were taken into account right from the design stage. The system is designed to exploit the savings achievable with direct Free Cooling and adiabatics while maintaining maximum integrated reliability. If required, the entire cooling process in the STULZ CyberHandler systems is handled by compressors, so that the full nominal output is available even without Free Cooling and adiabatics.

The air handling systems are available in a range of output ratings from 55 to 460 kW and deliver a maximum airflow of 20,000 to 71,000 m³/h. The anti-corrosion outdoor housing can easily be installed next to a building or on a roof, and the system is connected to the Data Center on the air side. This gives Data Center operators more floor space for server or storage applications and can also increase operational reliability. Furthermore, there is no longer any need to enter the Data Center to service the air conditioning systems.

]]>
news-478 Mon, 08 Jun 2015 15:29:00 +0200 Data Center Infrastructure Management https://www.stulz.co.uk/en/newsroom/professional-article/data-center-infrastructure-management-478/ Data center infrastructure management (DCIM) has, quite simply, been a buzzword in the IT/data...

Data center infrastructure management (DCIM) has, quite simply, been a buzzword in the IT/data center business for years now. And yet ignorance often prevails regarding what DCIM actually is, and what its purpose is. 

The market for DCIM software is very confusing. According to 451 Research, there are approximately 75 suppliers, and this figure would climb even higher if smaller or local businesses were included.

In addition to this mass of suppliers, many manufacturers only offer individual bits of an "actual" DCIM, or only have expertise in certain areas.

To put it in general terms, DCIM is a tool for managing data centers. Basically, it is the data center's ERP system, with extensions such as CRM (customer relationship management), energy monitoring, etc.

So, many manufacturers may talk about DCIM, but perhaps only offer power monitoring and power management, for example. However, these only form part of the complete DCIM solution. As a rule, this happens because the monitoring of power or cooling can deliver especially high savings, therefore allowing the costs of a software solution to be directly calculated in terms of ROI. This makes it easier to justify the purchase of this type of software.
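A minimal sketch of why such monitoring is easy to justify as an investment — all figures below are invented for illustration:

```python
# Simple payback calculation: measured energy savings from power/cooling
# monitoring translate directly into a payback period for the software.
# Every number here is a hypothetical example, not vendor data.

def payback_years(software_cost: float, annual_kwh_saved: float,
                  price_per_kwh: float) -> float:
    """Simple payback period = investment / annual monetary savings."""
    annual_savings = annual_kwh_saved * price_per_kwh
    return software_cost / annual_savings

# Hypothetical: a €50,000 monitoring tool saving 200,000 kWh/a at €0.15/kWh
print(f"{payback_years(50_000, 200_000, 0.15):.1f} years")  # 1.7 years
```

Management and workflow tools lack such a directly measurable input, which is exactly why their ROI is harder to state, as the next paragraph explains.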

The savings achieved with management tools, on the other hand, are more difficult to express as an ROI, so their purchase is mostly hard to justify. What's more, many data centers use tools they have written themselves; even Excel worksheets are still widely popular. Introducing DCIM in these areas too – combined with alarm management and workflows – can quickly convert into ROI and increase reliability and availability. Microsoft, for example, is convinced that the benefits and savings provided by a DCIM solution can best be achieved with standardized hardware. (1)

It may not be possible to convert the purchase of a DCIM solution directly into an ROI. But it offers the customer added value and therefore raises customer satisfaction.

Some large suppliers are able to offer a complete DCIM solution. However, as a rule these solutions are extremely complex, time-consuming and expensive, and therefore would probably not come into question for many potential customers.

If you are interested in DCIM and are considering buying a system, you should note the following points: 

1. Decide exactly what you need the tool to do before you begin your research.

2. Once you have defined your requirements and objectives, pick out a few manufacturers who cover precisely these areas.

3. From these manufacturers, you should select those who offer the most upgradable and open system possible (manufacturer upgrades for other fields, or interfaces to other tools/environments).

4. From the manufacturers you have left, take a closer look at those who best satisfy your requirements and the objectives you have set.

5. Finally, choose the manufacturer with whom you can best imagine working together.

It is also important to bear in mind that the cheapest manufacturer is not necessarily the right one. Conversely, the most expensive manufacturer is not necessarily the best!

]]>
news-480 Mon, 08 Jun 2015 15:29:00 +0200 The Service Portal as the data center manager's digital assistant https://www.stulz.co.uk/en/newsroom/news/the-service-portal-as-the-data-center-managers-digital-assistant-480/ The STULZ Service Portal assists data center managers and operators in their everyday work, and... A data center manager's principal responsibility is to make sure operation is as reliable and available as possible. However, he must also keep a close eye on cost efficiency. 

The central role played by the data center manager makes him a vital point of contact for IT departments, management, service providers and customers. Being constantly up to date and accurately in the know about all processes is an important part of his work.

A prerequisite for reliable operation with a highly available IT landscape is an on-site technical infrastructure that is tailor-made for the specific requirements and provides redundancy. This infrastructure includes key facilities such as the power supply, air conditioning, and safety equipment such as fire protection and access control systems.

]]>
news-474 Tue, 02 Jun 2015 19:01:00 +0200 A question of fine-tuning https://www.stulz.co.uk/en/newsroom/professional-article/a-question-of-fine-tuning-474/ First aid for your data centre in part 2 mission energy STULZ room tuning: First Aid for your data centre

Room tuning optimises your energy usage quickly and effectively. Cover panels seal gaps in server racks, processor power is evenly distributed, raised floors are free from cable spaghetti, and operating values are tuned to the optimum level. Your data centre can then breathe freely. Cooling capacity is put to more effective use, and energy consumption drops.

Be chilled, not chilly!

Computers are at their best at a supply air temperature of 18 °C to max. 27 °C and 30 % to 60 % relative humidity. If the cooling power is turned up, the cooling compressor runs more often and dehumidifies the air. If humidity then drops below the setpoint, the system humidifies it again. Energy consumption rises – due to the longer compressor running time and the extra power needed for humidifying and dehumidifying.
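The underlying control idea is a dead band: act only when humidity leaves the recommended window. The 30–60 % RH limits come from the figures above; the control structure itself is an illustrative assumption, not a STULZ algorithm:

```python
# Dead-band humidity control sketch: inside the recommended window,
# do nothing and save the extra compressor/humidifier energy.

RH_LOW, RH_HIGH = 30.0, 60.0  # recommended relative-humidity window in %

def humidity_action(rh_percent: float) -> str:
    """Return the action for the current relative humidity reading."""
    if rh_percent < RH_LOW:
        return "humidify"
    if rh_percent > RH_HIGH:
        return "dehumidify"
    return "idle"  # inside the dead band: no energy spent

print(humidity_action(25.0))  # humidify
print(humidity_action(45.0))  # idle
print(humidity_action(70.0))  # dehumidify
```

A narrow dead band (or a setpoint instead of a band) forces the system to humidify and dehumidify in alternation, which is exactly the wasted energy the paragraph describes.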

Traffic jams in the air flow

Data centres are divided into hot and cold aisles to ensure the best possible air distribution. The cold aisle conveys cooled supply air through the raised floor to the front of the server racks. In the hot aisle, heated exhaust air flows back to the air-conditioning unit. If the air flow is blocked or misdirected, the cooling effect is diminished – and power consumption rises. This is caused by raised floors clogged up with cables, short circuits of air in server racks, and an incorrectly set room temperature.

Hotspots

Often, the planning of an air-conditioning system is based on the assumption that heat is distributed evenly. But the reality is different: heat from high-powered computers, or misdirected cooling air, leads to so-called hotspots. If the thermal load on site lies above the planned average, not enough cold air gets to the computers. Simply reducing the target temperature results in considerable extra consumption without solving the hotspot problem, because the flow of air is too weak to reach the hotspot.

]]>
news-494 Sat, 30 May 2015 09:02:00 +0200 Flexible air conditioning solution for modular Data Centers https://www.stulz.co.uk/en/newsroom/professional-article/flexible-air-conditioning-solution-for-modular-data-centers-494/ Outdoor air conditioning container combines energy efficiency with short installation times: The... Outdoor air conditioning container combines energy efficiency with short installation times: The 20-foot air conditioning containers from the STULZ CyberCon series are available with a cooling capacity of 243 kW per unit and offer state-of-the-art energy-saving technology such as Free Cooling function, EC fans and adiabatic cooling.

 

Hamburg, 30.05.2015 – With the CyberCon series, STULZ is introducing a flexible outdoor air conditioning solution for Data Centers in a container format. These precision air conditioning systems are available as DX or CW versions and are delivered pre-installed in standardized 20-foot ISO containers. For the air conditioning of 40-foot Data Centers, two STULZ CyberCon containers can be combined in an end-to-end installation. The systems, which have a vertical air outlet, are specially designed for container Data Centers, and can simply be mounted on a container module housing the server equipment that needs cooling. All connections take place on the air side only.

The standardized all-in-one design of the CyberCon series meets all the requirements of mobile Data Centers. As server capacities grow, further air conditioning containers can easily be added via the STULZ E2 control system and integrated in the building services management system (Modbus, BACnet). This enables even complex redundancy strategies – for multi-tier Data Centers, for example – to be implemented without difficulty. What's more, all models are available as dual fluid versions with two independent refrigeration systems, based either on the direct expansion DX/DX system, the liquid cooled CW/CW system, or the DX/CW system. With two independent cooling sides, redundancy is already integrated in the unit.

]]>
news-467 Fri, 22 May 2015 19:01:00 +0200 Too much energy for cool computers https://www.stulz.co.uk/en/newsroom/professional-article/too-much-energy-for-cool-computers-467/ Save electricity, increase performance in part 1 mission energy Half disappears into thin air

Data centres run 365 days a year. Their tightly packed server racks generate ever increasing computing power in an ever decreasing area – power that is almost entirely converted into heat. Climate control ensures reliable operation by conveying that heat outside right away. But air conditioning in data centres also devours a huge amount of electricity – in the worst cases, more than half of the energy supplied to the data centre.

 
Energy efficiency in optimising, building and operating

Whether you are building a new data centre or optimising or running an existing one – choose energy-efficient air conditioning from STULZ. With expert advice, intelligent products and lasting service, we will be there for you throughout the life of your air-conditioning system.

 

Save electricity, increase performance

Gain room for manoeuvre in the management of your operating costs. Our energy-efficient precision air-conditioning systems cut the power consumption of your data centre by up to 40 %. Save on electricity bills. Or invest

]]>
news-498 Sun, 01 Mar 2015 18:04:00 +0100 Hamburg air conditioning specialist wins award for excellent work environment. https://www.stulz.co.uk/en/newsroom/news/hamburg-air-conditioning-specialist-wins-award-for-excellent-work-environment-498/ At the beginning of February, the prize for "Hamburg's Best Employers" was awarded for the seventh... At the beginning of February, the prize for "Hamburg's Best Employers" was awarded for the seventh time. In this competition, prizewinner STULZ proved its exceptional attractiveness as an employer, even in these times of scarcity of skilled labor.

 

Hamburg, 01.03.2015 – On February 4, air conditioning specialist STULZ was awarded the quality seal "Hamburg's Best Employers 2015". The family-owned company won this accolade thanks to its high level of staff satisfaction and its motivating work atmosphere. This time the prize-giving ceremony, now in its seventh year, was held in the Albert-Schäfer-Saal conference room at the Hamburg Chamber of Commerce. The award was handed to Ms. Jana Seifert, HR Manager at STULZ, and Personnel Officer Ms. Christiane Claus.

]]>