We are now fully in the age of digital buildings. Building systems can now be networked together over a single common, connected infrastructure, which allows devices and systems to talk directly to each other and make needed adjustments without any direct human interaction. This means that every strategy deployed within a new build or a retrofit must do two things: 1. enable edge computing; and 2. allow for the next wave of broader connectivity.
Won’t you step into the freezer…It’s gonna be cold, cold, cold…or not?
Welcome to 2018! With 90% of the US bringing in the new year in below-freezing temperatures, what better time to discuss one of my favorite topics: Cooling and Airflow Management! While it is crucial to keep ambient operating temperatures cool in your mission critical facility, we are not looking for the deep-freeze temperatures that most are experiencing outdoors this week.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has been tracking and defining allowable temperatures for data centers over the past 14 years. Data published through ASHRAE has transformed the legacy data center from an inefficient icebox to a facility with more moderate temperatures and smart design, while still maintaining crucial uptime. In the past, one may have grabbed a jacket before heading into their data center; however, as of 2011, ASHRAE recommended operating temperatures ranging from 64.4-80.6°F (18-27°C).1
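For reference, the Fahrenheit figures above follow directly from the Celsius range ASHRAE publishes; here is a quick sketch of the conversion (the function name is mine, not ASHRAE's):

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

# ASHRAE's 2011 recommended envelope, converted from the Celsius figures:
low, high = c_to_f(18), c_to_f(27)
print(f"{low:.1f}-{high:.1f}\N{DEGREE SIGN}F")  # 64.4-80.6°F
```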
As a child of the ‘80s, I can remember rotary dial phones evolving into push button phones. The day my dad brought home the cutting-edge technology of a push button phone, with a 10-foot cord, was life changing for 12-year-old me. Okay, maybe not life changing, but it certainly made it easier to call my girlfriend six times an hour - only to hear that annoying ‘busy signal’ because ‘call waiting’ had not been invented yet. Today, my kids have no clue just how hard it was to use the phone “back in the day.” Texting, Instagram, Snapchat and the rarely used method of calling someone in their contact list are the new norm.
Technology advanced. This is not a new story – we’re bombarded with advancements in technology daily. The flat screen TV we bought yesterday is obsolete tomorrow.
This is the world we live in, but when it comes to fiber optic network topologies, change happens at a slightly slower pace. For the last four decades, the deployment of traditional 12-fiber-based connectivity has served the market well. Any data center that has been built to 10G specifications has used this traditional method. But as bandwidth requirements continue to increase and technology advances, data center managers are now looking for a new solution.
Enter Base 8. I’m sure you’ve heard the new buzz term Base 8, but you might be confused as to what this really means.
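One way to see the motivation behind Base 8: parallel-optic 40G and 100G transceivers (such as 40GBASE-SR4) use 8 fibers per link - 4 transmit and 4 receive. Serving those links from 12-fiber trunks strands a third of the fibers, while trunks built in multiples of 8 divide evenly. A rough utilization sketch, assuming links are not split across trunks (the function name is illustrative):

```python
def trunk_utilization(trunk_fibers: int, fibers_per_link: int) -> float:
    """Fraction of trunk fibers usable by links that each consume
    fibers_per_link fibers, without splitting a link across trunks."""
    usable = (trunk_fibers // fibers_per_link) * fibers_per_link
    return usable / trunk_fibers

# An 8-fiber application (4 Tx + 4 Rx lanes) on each trunk type:
print(trunk_utilization(12, 8))  # ~0.67: a 12-fiber trunk strands 4 fibers
print(trunk_utilization(8, 8))   # 1.0: a Base-8 trunk is fully used
print(trunk_utilization(24, 8))  # 1.0: larger multiples of 8 also divide evenly
```

The same arithmetic is why Base-8 trunks map cleanly onto 4-lane transceiver breakouts while legacy 12-fiber trunks require conversion modules to recover the stranded fibers.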
Back in April of last year, I wrote up a CTD on transceivers that talked about what a pluggable optical transceiver is and the need for “more speed” in the data center.1 The CTD also described what an MSA is and some considerations to think about in selecting the right transceiver for your application. This CTD goes into more depth on optical transceivers and talks about some of their definitions and applications as well as the rapidly advancing developments in fiber optic transceiver technology.
In the last iteration of CTD, we began to discuss the interest and drivers behind terminating the device end of 4-pair horizontal cabling with 8-position modular plugs, or a Modular Plug Terminated Link (MPTL).
To build on what we discussed in the last publication, what follows is an attempt to pose and answer some increasingly frequent questions. Before we get started, I would like to thank Cindy Montstream for her expertise and input on TIA and the other standards bodies we consider. I’d also like to thank Carol Oliver for her work and input on BICSI’s 005 (ESS) and 007 (Intelligent Building) standards. As always, the DCD Training & Technology department is an excellent resource for standards and best practices.