My planned test environment #2: Domain Structure

My test environment grows. Today I finished the domain structure.

Domain Structure

I created one forest with a root domain and two child domains.

The root domain consists of only two domain controllers and has no other servers or services at the moment.

The first child domain is my resource domain for the physical systems I need for the lab. It holds my Hyper-V hosts, storage systems, switches, firewalls and routers. Now you may ask yourself: "why so complex, and why two domains?" I like to follow some security best practices. One of them is that you should separate administrative rights for your Hyper-V hosts and storage systems: no administrator who is not part of that environment and allowed to make changes on those systems should be able to connect to them. The easiest way is to create a resource domain and a work domain. Both have different administrator accounts, and because of the root domain and the restricted access to it, you cannot delegate administrators to other domains.

That also protects your application servers and Active Directory from corruption by someone who may have compromised your Hyper-V hosts or other physical systems.
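To make the separation rule concrete, here is a tiny sketch in plain Python (a toy model, not real Active Directory tooling; all domain and account names are placeholders I made up):

```python
# Toy model of the forest: one root domain with restricted access and
# two child domains (resource and work), each with its own admins.
# This is NOT Active Directory code; it only illustrates the principle
# that admin rights are never delegated across the domain boundary.

FOREST = {
    "root.lab":          {"admins": {"root-admin"}},
    "resource.root.lab": {"admins": {"res-admin"}},   # Hyper-V hosts, storage, network gear
    "work.root.lab":     {"admins": {"work-admin"}},  # application servers, user accounts
}

def can_administer(account: str, domain: str) -> bool:
    """An account may administer a domain only if it is an admin of that
    exact domain; no cross-domain delegation exists in this model."""
    return account in FOREST[domain]["admins"]

# A work-domain admin cannot touch the resource domain, and vice versa:
assert can_administer("work-admin", "work.root.lab")
assert not can_administer("work-admin", "resource.root.lab")
assert not can_administer("res-admin", "work.root.lab")
```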

 

My planned test environment #1: Physical Structure

To answer some questions about the test systems I use, I want to give you a short overview of my planned test environment and which systems are currently in place.

Currently in place:

- DCF-SVR-HV01
- USB backup disks
- Brotback
- Jetfire
- Switches, firewall and router

Planned until end of 2014:

- DCF-SVR-HV02 / DCF-SVR-HV03
- Synology storage

Physical Environment (Storage, Switches & Hyper-V Hosts)


Virtual Network & Converged Networking

High Efficiency Cooling in Data Centers: PUE of about 1.05 – 1.10

A few weeks ago I had the chance to visit a high efficiency cooling cell here in Berlin.

Together with Norman Beherzig, who wrote this article, I want to share with you what I have learned.

If you have more questions, do not hesitate to reach out to me or Norman.

A cooling concept for modular indirect free cooling will be tested in the test data center of dc-ce at the Technical University of Berlin.

The cooling concept developed by dc-ce RZ-Planung, whose centerpiece is the AirBlock MIFC, achieves highly efficient cooling in new and existing buildings by optimally combining architecture and technology. This is possible in standard as well as high-density data centers consuming up to 40 kW per rack.

The concept can be individually customized to the client's needs. The AirBlock MIFC fits optimally into the available space and can also be delivered as a plug-and-play solution. The AirBlock consists of reliable and highly available components, such as an air-to-air heat exchanger and modern, freely controllable EC fans, which are all precisely coordinated to minimize pressure loss and thus reduce energy consumption. All of this, along with the investment, energy and operating costs as well as its flexibility, makes the AirBlock MIFC one of the most efficient systems available on the market.

When cooling a data center, most of the energy consumption comes from mechanically produced (compressor-based) cooling and from overcoming pressure loss. A highly efficient cooling system is therefore one that operates with the lowest possible pressure loss and keeps compressor-based cooling switched off most of the time.

The key to this lies in the most important component of the AirBlock MIFC, a cross-flow heat exchanger. It can stay in free cooling mode at temperatures 10 °C to 12 °C higher than traditional cooling solutions. This operating mode is supported by the freely controllable EC fans, whose intelligent control makes it possible to always run the AirBlock MIFC with exactly the required air volume.
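To give a feeling for what "exactly the required air volume" means in numbers, here is a back-of-the-envelope sketch in Python (my own illustration using standard air properties, not dc-ce's actual control algorithm):

```python
# Rough airflow estimate: how much air a fan must move to carry away a
# given IT heat load at a given air temperature rise. Illustrative only.

RHO_AIR = 1.2    # density of air in kg/m^3 (approx., near 20 degC)
CP_AIR = 1005.0  # specific heat capacity of air in J/(kg*K)

def required_airflow_m3h(it_load_w: float, delta_t_k: float) -> float:
    """Volume flow in m^3/h needed to remove it_load_w watts of heat
    while the air warms up by delta_t_k kelvin."""
    mass_flow_kg_s = it_load_w / (CP_AIR * delta_t_k)
    return mass_flow_kg_s / RHO_AIR * 3600.0

# Example: a fully loaded 40 kW rack with a 10 K air temperature rise
print(round(required_airflow_m3h(40_000, 10)))  # ~11940 m^3/h
```

Halving the load roughly halves the required airflow, and since fan power grows much faster than linearly with airflow, running the EC fans only as fast as actually needed is where much of the saving comes from.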

 

Increasing the Efficiency

The AirBlock MIFC offers a considerable efficiency increase in different areas:

»» Full-power operation in free cooling mode at suitable data center supply air temperatures (this value can be freely chosen by the client), ranging from 25 °C down to 19–22 °C.

»» 85 % – 90 % free cooling per year, depending on the location

»» At a partial load of 25 % and an outdoor temperature of around 20 °C, only 1 % of the IT load is required to provide cooling. This represents very good partial-load behavior of the system.

»» On average, only 5 % – 10 % of the IT load per year is required to provide cooling (see the quick PUE cross-check below). Other modern cooling concepts need 20 % – 30 % on average per year, while old systems can even exceed 50 % of the IT load.
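As a quick cross-check of the PUE figure in the title (my own arithmetic, assuming cooling is the dominant overhead and neglecting other losses such as UPS and lighting):

\[
\mathrm{PUE} = \frac{E_{\text{IT}} + E_{\text{cooling}}}{E_{\text{IT}}} = 1 + \frac{E_{\text{cooling}}}{E_{\text{IT}}} \approx 1 + (0.05 \ldots 0.10) = 1.05 \ldots 1.10
\]

By the same reasoning, the 20 % – 30 % of other modern concepts corresponds to a cooling-only PUE of about 1.20 – 1.30, and over 50 % for old systems means more than 1.5.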

Please visit the test data center at the Technical University of Berlin. Mr. Norman Beherzig is available to arrange your visit to the test data center – n.beherzig@dc-ce.de / www.dc-ce.de.