The second data center trend we will be focusing on as part of our Connected Infrastructure: Data Center Trends series is a technology that is challenging traditional UPS designs: the flywheel. It’s a technology that has been around for ages, found in automobiles and sewing machines, and it started showing up in data centers in the 1990s as a means of storing energy.
Flywheel UPS systems store energy as kinetic energy in a spinning disc, as opposed to storing it in valve-regulated lead-acid (VRLA) batteries, which rely on a chemical process. The flywheel becomes the energy storage medium, spinning a steel disc at a very high rate within a vacuum-sealed chamber. When power is interrupted, the flywheel converts the kinetic energy of the spinning disc into usable energy for the facility’s equipment. This allows the system to achieve 98-99% energy efficiency, whereas traditional battery-based systems average around 93% energy efficiency.
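That efficiency gap adds up over a year of continuous operation. As a rough, hypothetical illustration (the 500 kW load and $0.10/kWh electricity price are assumed values, not figures from any particular facility):

```python
# Rough illustration of the efficiency gap described above.
# The 500 kW load and $0.10/kWh price are hypothetical assumptions.

LOAD_KW = 500
PRICE_PER_KWH = 0.10
HOURS_PER_YEAR = 8760

def annual_loss_cost(efficiency: float) -> float:
    """Yearly cost of the energy lost in the UPS at a given efficiency."""
    input_kw = LOAD_KW / efficiency       # power drawn from the utility
    wasted_kw = input_kw - LOAD_KW        # power lost in conversion
    return wasted_kw * HOURS_PER_YEAR * PRICE_PER_KWH

flywheel = annual_loss_cost(0.98)  # ~98% efficient flywheel UPS
battery = annual_loss_cost(0.93)   # ~93% efficient battery UPS

print(f"Flywheel losses: ${flywheel:,.0f}/yr")
print(f"Battery losses:  ${battery:,.0f}/yr")
print(f"Difference:      ${battery - flywheel:,.0f}/yr")
```

Under these assumptions the lower-efficiency system wastes roughly three to four times as much energy per year, before any savings from reduced cooling are counted.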
In addition to storing energy more efficiently, a flywheel UPS can also save space, requiring up to 50% less floor space than battery-based systems thanks to the elimination of large battery banks. Flywheels also have the advantage of a wider operating temperature range of 0 – 104°F, whereas lead-acid battery systems have a recommended operating temperature of 77°F.
There’s even more efficiency to be had when it comes to maintaining the system. Flywheels require maintenance only once a year and are projected to last 20 years before replacement becomes necessary. Lead-acid battery systems often require maintenance 2-4 times per year and battery replacement about every 5 years. The reduction in maintenance alone would seem to be reason enough for facilities to adopt the technology.
However, not everything is greener on the other side. With the increased efficiency comes a tradeoff that has made the adoption of flywheel-based systems difficult. Traditional UPS systems have an average runtime of 15 minutes; flywheel UPS systems, on the other hand, only provide backup power for about 15 seconds. This staggering difference alone gives many operators enough hesitation to rule the solution out entirely. Those 15 seconds are enough for most backup generators to kick on and provide power, but that is still a very limited window in which to ensure everything is back up and running properly. A battery-based system provides not only minutes rather than seconds of runtime but also that much more peace of mind if the generators aren’t able to start fast enough.
Even with the limited runtime provided by a flywheel, many organizations have adopted them over the years, including large companies such as Verizon Terremark. Would you consider using a flywheel UPS system in place of batteries in the future, or are you waiting to see the potential that Li-ion may bring? Let us know by tweeting at us @Ortronics.
As part of our recent blog series, Connected Infrastructure: Data Center Trends, we will be examining some of the top 10 trends currently in data center design. The first of these trends, 415VAC power distribution, is already commonplace in Europe (as 400VAC), but has yet to be widely adopted in North America.
The traditional North American data center design takes 480VAC from the utility and steps it down to 208/120VAC for the rack equipment, even though typical IT equipment voltage ratings have risen over the years to 240 volts. With 415V/240V distribution, the line-to-neutral voltage (415V divided by √3, or roughly 240V) is already compatible with most data center equipment. By running higher voltages at lower currents, the result is smaller cables that use less copper, weigh less, take up less space and cost less.
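The relationship between these voltages, and the current savings at higher voltage, is simple three-phase arithmetic. A quick sketch (the 5 kW rack load is a hypothetical figure):

```python
import math

def line_to_neutral(v_line_to_line: float) -> float:
    """Line-to-neutral voltage of a balanced 3-phase system."""
    return v_line_to_line / math.sqrt(3)

def branch_current(power_w: float, voltage_v: float) -> float:
    """Current drawn by a load of the given power at the given voltage."""
    return power_w / voltage_v

# 415V line-to-line yields roughly 240V line-to-neutral
print(round(line_to_neutral(415), 1))       # ~239.6

# A hypothetical 5 kW load draws half the current at 240V vs. 120V
print(round(branch_current(5000, 120), 1))  # ~41.7 A
print(round(branch_current(5000, 240), 1))  # ~20.8 A
```

Halving the current is what allows the smaller conductor sizes mentioned above, since conductor ampacity, not power, dictates the copper required.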
Running 400V also allows the elimination of step-down transformers, yielding anywhere from 2-5% in energy cost savings. IHS, a market research group, has reported growing adoption of 400VAC in the 3-phase PDU market, with current estimates at around 8% of the market. So while not the majority by any means, 400V is something to keep an eye on if you’re looking to improve your data center’s efficiency and run a more sustainable operation.
For more information on PDUs and other related topics, including this one, check out the Raritan blog.
One of the first things critics of the Internet of Things (IoT) will tell you is that the infrastructure for a network of that size simply isn’t available yet. Such skeptics miss the rapid growth that IoT is having in our space and the effect it is already having within that same space. Data centers are expanding, multi-tenant data centers are appearing in regions previously unheard of, and the networking within our buildings is growing more than originally anticipated.
Many people think the Internet of Things is a Minority Report-esque system of interconnected devices, analyzing your internet search history and pupil dilation to send you targeted ads, or even recording your day-to-day activities while simultaneously streaming and storing them on your mobile devices. This reality isn’t out of the question, but it’s not the purpose of the IoT today.
What the IoT boils down to is a network of uniquely identifiable endpoints that contain embedded technology. This technology senses, collects, communicates, and exchanges data locally or with external endpoints independent of human interaction. The recent, rapid advancements in cloud technology have provided an infrastructure for a fully functioning Internet of Things right now. The cloud’s ever-growing size and interoperability have enabled previously discrete devices to communicate with each other over a common network.
Enabling those devices to communicate on the back end is where Legrand comes into play. Legrand manufactures the supporting infrastructure for many commercial IoT devices, from wireless access points and networked cameras to completely new LED lighting systems powered by category cabling. To learn more about how Legrand plays in this space, read our latest IoT white paper or our IoT Application Guide.
Big data is hot in the streets, so to speak. Microsoft just spent $26.2 billion to acquire LinkedIn, largely to gain access to a massive, accurate, and previously private cache of user-generated data. In a similar vein, companies like Facebook and Amazon now employ as many economics PhDs as a large U.S. bank -- not to manage their investments, but rather to analyze the data they take in from their users in order to optimize their products. Simply put, as interpersonal interaction continues to shift to digital platforms, so too does the way in which we share and store our information. Right now, the Internet of Things provides users with total connectivity to their devices, but thanks to its ability to record data, it could soon provide companies total connection with their customers, too.
According to Howard Brass, a partner and entertainment advisory leader at Ernst & Young, “In an IoT world, media companies will be able to understand what a person is watching, as well as measure how, where, why and with whom consumers are viewing content.” Information collection at this scale is totally unprecedented, and the level of connectivity offered by the Internet of Things is changing the way we view business.
To learn more about how complete connectivity via the IoT can (and will) help you connect to your customers, click the link below.
It’s been called “the infrastructure of the information society”. By the end of the decade it will have profoundly impacted the scope and evolution of residential, government and enterprise automation, management and communications networks. Businesses alone may add 9 billion or more connected nodes. The Internet of Things is a storm of change existing at a granular level, generally not noticed from the perspective of day-to-day experience, where simple devices supporting simple tasks can generate a relentless gale of data. One of the key challenges of the Internet of Things is the “basket of remotes” problem.
Jean-Louis Gassée described the basket of remotes problem like this: there are too many devices that don’t provide sufficient self-description or two-way communication capability when deployed, and the result will be countless applications needing to interface with countless more devices that don’t necessarily share the protocols for speaking to one another. It’s like having a basket of remotes to control each of the components in your home theater system instead of having a single, universal remote that consolidates all the system functionality. This is typically because each device uses its own remote “language” or can’t advertise its condition to other devices which have the potential for control. Insufficient communication equals unpredictable performance. For IoT to attain its expected mass, a whole lot of devices are going to need to find a way to communicate with a whole lot of other devices on their own, with no human intervention. Think “plug and play” for the background set. And this thinking ultimately brings us to cables.
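The universal-remote analogy maps neatly onto a classic software pattern: an adapter layer that translates device-specific command “languages” into one common interface. A minimal sketch (the device classes and command names here are entirely hypothetical):

```python
# A minimal sketch of the "universal remote" idea: each device speaks its own
# command "language", and an adapter layer maps them onto one common verb.
# All device names and commands below are hypothetical.

class TVRemote:
    def tv_power_toggle(self) -> str:
        return "TV power toggled"

class SoundbarRemote:
    def sb_set_standby(self, standby: bool) -> str:
        return "Soundbar standby" if standby else "Soundbar on"

class UniversalRemote:
    """Consolidates device-specific protocols behind one 'power' command."""
    def __init__(self):
        self._devices = {}

    def register(self, name, power_fn):
        # power_fn adapts the common command to the device's own protocol
        self._devices[name] = power_fn

    def power(self, name):
        return self._devices[name]()

tv, soundbar = TVRemote(), SoundbarRemote()
remote = UniversalRemote()
remote.register("tv", tv.tv_power_toggle)
remote.register("soundbar", lambda: soundbar.sb_set_standby(False))

print(remote.power("tv"))        # TV power toggled
print(remote.power("soundbar"))  # Soundbar on
```

The catch Gassée identifies is that in the IoT, no human is there to write the adapter: devices must self-describe well enough that the translation layer can be built automatically.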
We’ve written a lot in this blog about USB Type-C. You’ll find articles here, here and here, just for starters. Not so long ago, an article surfaced about certain dodgy cables destroying an engineer’s laptop and test equipment. The astonishing flexibility of the USB-C connection is an early manifestation of the emerging IoT experience. As we work to connect more things, to streamline more processes and to pack more power into smaller, lighter and more efficient packages, we are also going to find that we must carefully invest in technology that is optimized for self-description and two-way communication as an element of its environment. The manufacturer of the USB Type-C cable that damaged the Google engineer’s gear in the above article clearly hadn’t invested in understanding the need for intelligence in something as unprepossessing as a USB patch cord.
USB Type-C is just the first of what will certainly be an evolution in many connectivity systems. The traditional USB connection on your laptop or tablet has a straightforward job – transport data across a differential pair of conductors, and deliver 5 volts DC up to a maximum of 10 watts across another pair. Not so with USB-C. The new format’s 24 pins can support a dizzying array of power profiles that span a range ten times greater than past specs allowed. Additionally, the auto-configuration capability of USB-C allows the connection of audio, video, control and power elements in a less hierarchical, more intuitive order than in the past. You’ll be able to connect your smartphone to your tablet, or your tablet to a colleague’s tablet to share a battery charge or transfer content. To make this possible, USB-C uses a complex system of e-marker chips and billboard chips embedded in the cables and connectors themselves. These “smart” cables and docks are little islands of information, far removed from the simple, stupid pipes of 20th Century connectivity.
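The auto-configuration described above boils down to a negotiation: the power source advertises a menu of profiles, and the sink selects one it can use. A hedged sketch of that idea (the profile list and selection rule are illustrative, not the actual USB Power Delivery protocol):

```python
# Illustrative sketch of USB-C power negotiation: the source advertises a set
# of power profiles and the sink picks the best one it can accept.
# These profiles and the selection rule are simplified assumptions, not the
# real USB Power Delivery message exchange.

SOURCE_PROFILES = [  # (volts, max_amps) the source advertises
    (5.0, 3.0),
    (9.0, 3.0),
    (15.0, 3.0),
    (20.0, 5.0),
]

def negotiate(sink_max_volts: float, sink_needed_watts: float):
    """Pick the highest-wattage profile the sink can tolerate and use."""
    usable = [(v, a) for v, a in SOURCE_PROFILES
              if v <= sink_max_volts and v * a >= sink_needed_watts]
    if not usable:
        return None  # no compatible profile; fall back or refuse to charge
    return max(usable, key=lambda p: p[0] * p[1])

# A laptop accepting up to 20V and needing 60W lands on the 20V/5A profile
print(negotiate(20.0, 60.0))   # (20.0, 5.0)
# A phone limited to 9V and needing 15W lands on the 9V/3A profile
print(negotiate(9.0, 15.0))    # (9.0, 3.0)
```

In the real cable, the e-marker chip is what vouches for the cable’s own current rating during this exchange, which is exactly why a cheaply built cable that misreports its capability can cause the damage described earlier.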
In the near past, good connectivity was simply a matter of using quality construction techniques and the proper tools to attach the desired form factor interface to either end of a copper, fiber or hybrid cable. Making a quality, useful VGA interconnect didn’t take a lot of research or demand a big investment in engineering skills. The game changed very little when analog sunset gave way to a new, digital sunrise. For certain, HDMI cables are more complex and more challenging to build than an HD-15 terminated analog VGA patch cable, but this is a matter of scale and not substance. Pin 1 on one end connects to Pin 1 on the other end. The next generation of connectivity solutions, however, will demand a wholly different competency in system design and integration on the part of a manufacturer. It won’t take too many fried $1200 Chromebooks to ruin a reputation.
As the Internet of Things goes mainstream, suppliers of solutions and devices will have to focus on applications and standards as much as, or even more than, they do on processes and efficiencies. The best producers will ensure they have a strong presence at the leading edge of the storm, all the better to understand how the landscape will be changed by the coming winds of technological change. Instead of asking a simple “How long do you want this connection?”, they will instead ask “What do you want it to do?” Instead of listing how many things they can connect, the best manufacturers will be sharing their vision of new and emerging A/V apps and services.
The communications industry, and the A/V community in particular, will be driven to look at cabling and connectivity as a fully fledged component of the system’s design. “Cable trust” will mean a lot more than simply avoiding counterfeit cables that fail to meet fire and flame resistance standards or that don’t deliver the performance indicated by their published specs. In the TechRadar™: Internet Of Things, Q1 2016 report, Forrester states that IoT “standards are nascent, as vendors are only a couple of years into the process of creating general-purpose interoperability standards.” As the tempest called the Internet of Things scours the landscape, and as a new normal where patch cords and cables have to communicate with the devices to which they’re attached begins to take shape and grow, you’ll need to know that your connectivity solutions are coming from a vendor who’s committed, involved and excited about making your installations work to the very limits of their design.