Visit to Logicalis


Last Tuesday (22nd Feb) I visited the new Logicalis data centre in Slough. My reasons for going were to take a look at what "cloud" technologies are on offer, both for my own education and for consideration of Logicalis as a supplier for the University, whether simply as another location providing some resilience for our mission-critical services or, later on, as a possible outsourcing destination for our infrastructure.

Slough is a good location for a data centre because of its proximity to a power station on the Slough Trading Estate (bought by Scottish and Southern Electric in 2008). The station was originally built to supply local development but later served the trading estate alone. Companies such as Virgin Media and Amazon have taken advantage of this, and it enables Logicalis to provide high density and multiple levels of power resilience, making them an attractive infrastructure provider. Customers such as Nottingham Trent University and Loughborough University are making use of their integration services and hybrid cloud solutions, and the Public Sector Broadband Aggregation project, facilitating over 8,000 connections across Wales, is another example of their work within the public sector.

On offer at the open day were some new angles and interpretations of what the "cloud" means. It certainly seems that this buzzword is being stretched to fit every possible scenario. Logicalis seem keen to collaborate with universities and to tap into the advantages of geographically dispersed data centres connected to the JISC academic network. They have buy-in from JISC and have made peering links into their network to facilitate the co-location of academic services. Their product, "Virtual Containers", presented as "Cloud as a Platform", abstracts the identifying aspects of hardware-level configuration (values embedded in server ROM, such as MAC addresses and the WWNs of host bus adapters) so that fully contained infrastructures can be virtualised and migrated between sites. Applications whose licences are commonly tied to these hardware identifiers (such as Oracle database servers) can then be moved without re-installation or lengthy outages. In their view, the virtual layer does not require or involve a hypervisor (the term commonly associated with virtual machine monitors) but is instead composed of compute, storage and network virtualisation. The added value they provide is the automation component, which seems largely to be built on CA Technologies tools. They boast that they can orchestrate entire IP network routing changes, machine shutdowns and asynchronous storage replication, even fully replicating the operating profile of a blade server from one site to their own data centre in as little as 3 minutes. To be honest, this just sounds like re-badging high-availability services as "cloud"; nothing about their approach sounds particularly new.
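To make the idea concrete (this is just my own sketch of the concept, not how Logicalis actually implement it, and all the names below are made up), the trick is essentially to treat the hardware identities that licences key on as portable data rather than as properties of a physical box:

```python
from dataclasses import dataclass, field

@dataclass
class ServerProfile:
    """Hypothetical record of the hardware identity a workload is licensed against."""
    name: str
    mac_addresses: list[str] = field(default_factory=list)  # NIC identities normally burned into ROM
    hba_wwns: list[str] = field(default_factory=list)       # Fibre Channel World Wide Names

def migrate(profile: ServerProfile, target_site: str) -> None:
    """Re-apply the same identities to hardware at the target site so that
    licence checks keyed on MACs/WWNs still pass after the move."""
    print(f"Provisioning {profile.name} at {target_site} "
          f"with MACs {profile.mac_addresses} and WWNs {profile.hba_wwns}")

oracle_db = ServerProfile(
    name="oracle-db-01",
    mac_addresses=["00:25:b5:00:00:0a"],
    hba_wwns=["20:00:00:25:b5:00:00:0a"],
)
migrate(oracle_db, "slough-dc")
```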

What was impressive was the tour of their data centre, which boasts 10,000 square feet, 400 racks and 1.6 MW of power consumption. Its Power Usage Effectiveness, the ratio of the total power required to run the entire data centre to the power actually reaching the machines, is 1.43. Their secondary site at Bracknell runs at 700 kW of power consumption and a PUE of 1.87, mostly owing to the fact that it is older and at full capacity. The tour started with an overview of site security and their keenness, to the outside observer, not to draw attention to themselves. The building's outer shell consists of reinforced anti-ram-raid beams and surveillance. Site security personnel are restricted to the outer layer and cannot enter the second layer, which is controlled by a sliding air-lock-style tubular door. Camera detection ensures that only one person enters at a time (although this was overridden to let our group in, which rather defeated the demonstration). Once inside the second layer, our guide and the designer of the complex, Simon Daykin, explained that the inner layer is completely sealed in concrete and capable of resisting up to an hour's worth of fire damage, with a second inner layer around the machine rooms resisting another hour. Network communications, UPS batteries and the UPSs themselves (shedding 40-70 kW of heat) are separated for security, temperature management and cooling, assuring optimal operating conditions.
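For reference, PUE is just total facility power divided by the power delivered to the IT equipment. Assuming the quoted 1.6 MW is the total facility draw (the presentation didn't say which figure it was), the implied IT load works out as follows:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

total_kw = 1600.0                  # assuming the quoted 1.6 MW is total facility draw
implied_it_kw = total_kw / 1.43    # ~1119 kW actually reaching the machines
print(f"Implied IT load: {implied_it_kw:.0f} kW")
print(f"PUE check: {pue(total_kw, implied_it_kw):.2f}")  # -> 1.43
```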

The design is based around the concept of 2N, meaning that every dependency is provisioned at twice the requirement for redundancy and failover, so that if any component fails, the two completely independent machine rooms continue to function. The two inner machine rooms are powered by four separate phases and colour-coded distribution units, supplying 11 kV to each side from a local Scottish and Southern owned substation and two further Logicalis-managed substations. Should that fail, four absolutely mammoth generators kick in, each providing 11 kV and consuming 250 litres of diesel per day at full load; the third-largest crane in the UK (500 tonnes) was needed to lift them in. Network connectivity comes in over three separate dark fibre connections to multiple uplink peers, including LINX, JISC and other local peers such as Virgin Media and Griffin, to name a few.
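As a toy illustration of why 2N is attractive (the failure figures here are mine, not Logicalis's), two fully independent paths only take the site down when both happen to fail at once:

```python
# Toy 2N availability calculation with invented failure probabilities.
p_path_down = 0.01               # assume each independent path is down 1% of the time
p_both_down = p_path_down ** 2   # independent failures must coincide

print(f"Single path availability: {1 - p_path_down:.2%}")   # 99.00%
print(f"2N availability:          {1 - p_both_down:.4%}")   # 99.9900%
```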

After the data centre tour, a demonstration of the software automation tools was given. A combination of CA IT process automation applications coupled with Cisco UCS and NetApp appliances enables the orchestration and migration.
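From what was shown, the orchestration is essentially an ordered runbook of the steps described earlier. A minimal sketch of that shape (every function here is a placeholder of mine, not a real CA, Cisco UCS or NetApp API) might look like:

```python
# Hypothetical migration runbook in the spirit of the demo; every step
# below is a placeholder, not a real CA / Cisco UCS / NetApp call.

def quiesce_and_shutdown(host: str) -> None:
    print(f"Shutting down {host} at the source site")

def cut_over_replication(volume: str) -> None:
    print(f"Promoting the async replica of {volume} at the target site")

def apply_server_profile(profile: str, target_chassis: str) -> None:
    print(f"Applying profile {profile} (MACs/WWNs) to {target_chassis}")

def reroute_network(prefix: str, target_site: str) -> None:
    print(f"Announcing {prefix} from {target_site}")

# Order matters: stop writes, cut storage over, move the identity, then the traffic.
quiesce_and_shutdown("oracle-db-01")
cut_over_replication("oracle-db-01-data")
apply_server_profile("oracle-db-01", "blade-chassis-2/slot-4")
reroute_network("192.0.2.0/24", "slough-dc")
```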

To conclude, there were some pluses and some minuses to their products. Logicalis provide two services to assess what an organisation would need to change to adopt their way of working: a Virtual Migration Risk Assessment and a Cloud Migration Risk Assessment (the latter still in some development, it seems). This may prove to be a useful service, as they don't seem to require you to actually use their kit; they seem quite happy to show us how to replicate what they have internally, confident that we wouldn't want to, and that doing so would just mean buying the products they are effectively reselling from NetApp and CA Technologies. They only seem to support applications and infrastructure running on x86 architectures, which would be a problem for our Solaris SPARC production kit. The CA automation tools did look interesting, however. I had a chat with the CA rep after the demo, and it seems there are many tools that could make a holistic overview of service operations and infrastructure, with adaptive learning and configuration management, possible. Unfortunately, it comes at a very high cost at enterprise scale.

Overall view: very cool, but Slough may be quite far for us should we need site access for installations and whatnot.