Why traditional data centre networks are stymieing innovation
Fri 23 Jun 2017 | Tate Cantrell
Tate Cantrell, Chief Technology Officer at Verne Global, discusses why it is key for business leaders to reconsider their network requirements…
Enterprise leaders should be focused on the economic potential of their IT budget. And the directive of the enterprise IT team is to do more with less.
When presented with the challenge of increasing revenue while reducing IT outlays, IT leaders will move away from solutions that saddle the business with upfront capital costs and towards solutions that allow the IT footprint to grow in line with the requirements of the business. This shift in thinking requires that business leaders stay focused on better overall performance rather than on preserving legacy infrastructure.
Traditionally, enterprise IT has treated the network purely as WAN (wide-area network) and LAN (local-area network). The WAN team was responsible for procuring point-to-point network products to connect the core data centers, while the LAN team was responsible for ensuring that the networks inside the core data centers were secure and set up for the applications that ran within them.
This rigid approach to network management does not easily allow for the integration of outsourced IT resources in the cloud or in colocation facilities.
Often, thanks to enterprise security regimes, the LAN team can only deploy its hardware in environments where enterprise FTEs can be called on to directly access the equipment, in order to protect the integrity of the Layer Two network. This can work when those LANs exist within facilities that are located adjacent to offices with enterprise staff, but this setup is far from flexible. And flexibility is the key to ensuring maximum output per unit of IT cost.
Business leaders must allow IT to establish converged network teams that have the mandate and the authority to build secure networking solutions that enable the integration of the latest cloud solutions directly into the core enterprise network.
As a business begins to build into the cloud, it must develop processes and expertise in securing applications and their data outside of the protection zone of traditional corporate networks.
While this is challenging at first, a business should go through the effort to develop cloud-first processes and standards for procurement, deployment, and operations in the cloud. By building in this flexibility, the IT team can ensure that the benefits of scalability that cloud can offer can be realized without jeopardizing the audited workflows of the IT staff.
With a cloud-first approach to deploying applications, IT leaders can begin to select the environment for a particular application based on a number of defining criteria. For example, a core network with strategic data that is accessed frequently may stay within a secured environment well within the corporate network.
However, an application that requires very high levels of compute may be a better candidate for an outsourced solution. The increase in high-performance computing (HPC) workloads is driving enterprises to look for options outside of their conventional core data centers. These HPC workloads are the perfect example of applications that should be placed in data centers optimally suited to them, and typically those optimal environments are far from the legacy corporate data centers.
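As an illustration, the kind of placement criteria described above can be sketched as a simple decision rule. The criteria names, thresholds, and environment labels below are hypothetical assumptions, not a prescribed policy:

```python
# Hypothetical sketch: route an application to a hosting environment
# based on data sensitivity and compute intensity. The criteria and
# environment names are illustrative, not drawn from any real policy.

def place_workload(data_sensitivity: str, compute_intensity: str) -> str:
    """Return a suggested hosting environment for an application."""
    if data_sensitivity == "strategic":
        # Frequently accessed, strategic data stays on the corporate network.
        return "core-data-center"
    if compute_intensity == "hpc":
        # Power-hungry HPC workloads go to an optimized remote facility.
        return "hpc-colocation"
    # Everything else is a candidate for the public cloud.
    return "public-cloud"

print(place_workload("strategic", "standard"))  # core-data-center
print(place_workload("internal", "hpc"))        # hpc-colocation
print(place_workload("internal", "standard"))   # public-cloud
```

In a real organization the rule set would be richer (latency, compliance, cost per unit of compute), but the point stands: placement becomes a policy decision, not an accident of where the LAN happens to be.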
HPC applications such as engineering modeling, scientific analysis, machine learning, and even Big Data analytics are often very power hungry. Since the benefits they bring to the business are often dramatic, the business is best served by running those applications on infrastructure where they can scale.
Companies that are not forward thinking about their approach to data center solutions will not be able to scale into new opportunities
Depending on the industry vertical and the application, the requirement for scale can be extreme, often growing exponentially as the business sees the benefits that HPC applications bring to the overall business. As many customers learn, the only hedge against a dramatic increase in HPC usage is to choose to compute in an environment where the costs can be predicted for the long term. For HPC applications, an important hedge is to ensure that the HPC applications that drive corporate growth are powered by infrastructure that has secured long-term, low-cost, sustainable energy sources.
Establishing best practices in procurement, deployment, and operations is important for both public and private cloud scenarios. In fact, when deploying a private cloud environment, whether within the traditional core data center or with an outsourced colocation partner, flexibility to scale into the public arena should be built in from the inception of the project.
By taking the extra effort to ensure that the application and data can reside in a private or public environment, the business ensures that security is paramount both inside and outside the core network, and ensures that maximum flexibility is retained through the life cycle of the application.
When choosing to build out a private environment, ample consideration should be given to housing the private cloud in a colocation facility with a provider that is familiar with the business industry.
Facilities that offer industry-specific support must also ensure that their customers have the ability to collaborate and connect with their peers and partners. Direct connection to public cloud environments is an important option that should be provided at a facility that houses a private cloud environment. This is important for two reasons. First, as the business application grows in usage, it could begin to put pressure on the boundaries of the private cloud capabilities. It is important to be able to temporarily scale into the public environment while the private cloud is resized to accommodate the increased base utilization.
The second reason is data. As companies consider new ways to improve their decision making, data feeds become more and more important. A business should ensure that it houses its private cloud environments within facilities that have the right connections and the right partners to provide streamlined connectivity options to a wide array of data feeds.
Connectivity planning is often hindered by the status quo. Business teams are often forced to exist within siloed LAN and WAN environments that stymie innovation at the connectivity layer. But, it isn’t just innovation that is affected. By forcing conventional connectivity solutions, businesses are often paying too much and more importantly, they are missing the opportunity to place application workloads into optimized computing environments.
In order to move forward in business thinking, IT teams should focus on corporate policies that allow point-to-point connections from the core network to a remote computing environment. These policies should be realistic in nature and should consider all contingencies.
Encryption requirements should be imposed for sensitive data in transit and at rest. Physical security standards should be required for both the design and the operation of the remote facilities. Disaster recovery should be well scripted to ensure that the business mission is not impacted during any contingency event. Through planning and forward thinking, IT teams can ensure that business leaders embrace the opportunities that computing environments outside of the core data center can provide.
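For data in transit, for example, such a policy might mandate verified TLS with a minimum protocol version for any connection leaving the core network. A minimal client-side sketch using only the Python standard library (endpoint details omitted; this shows the policy, not a full deployment):

```python
import ssl

# Build a TLS context of the kind a connectivity policy might mandate
# for traffic leaving the core network: certificates are verified
# against the system trust store, hostnames must match, and old
# protocol versions are refused.
def policy_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verify_mode=CERT_REQUIRED
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    ctx.check_hostname = True                     # hostname must match cert
    return ctx

ctx = policy_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Encoding the requirement as a shared context object means every point-to-point connection inherits the same floor, rather than each team negotiating its own.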
One additional challenge that has traditionally dissuaded companies from embracing WAN connections into remote environments is flexibility, or, in the case of WAN circuits, the lack of it. WAN circuits are often procured on minimum terms of three to five years, and they often take 90 days or even longer to provision.
For a business that is looking to expand its compute environment to make quick impacts for the business, these burdens are not conducive to creative thinking. However, there are changes coming to the way that networking companies are providing WAN services.
Flexible connectivity solutions are the gateway to leveraging the renaissance in technology
Software-Defined Networking (SDN) has roots in the programmable-network research of the mid-1990s, though the term itself only gained currency in the late 2000s with the OpenFlow work at Stanford. Just like the cloud, it has gone through several cycles of industry hype without a shared understanding of its purpose and objectives.
Today, there are signs that rapid provisioning of WAN circuits using SDN technologies will be widely adopted as the gold standard for connecting core data centers to remote environments. IT teams should ensure that their core data centers are connected to one of the networking companies that provide SD-WAN (software-defined wide-area networking) solutions. Once this connection is in place, circuits from the core data center to remote colocation facilities and public clouds can be ordered quickly and changed quickly to meet the requirements of the business.
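In practice, rapid provisioning means a circuit order becomes an API call rather than a 90-day procurement cycle. The field names, sites, and units below are invented for illustration; a real SD-WAN vendor defines its own schema:

```python
import json

# Hypothetical SD-WAN circuit order. The field names, site identifiers,
# and units are assumptions made for illustration; an actual provider
# API will have its own request schema and authentication.
def build_circuit_order(src_site: str, dst_site: str,
                        bandwidth_mbps: int, term_days: int) -> str:
    """Serialize a circuit order as the JSON body of an API request."""
    order = {
        "source": src_site,            # e.g. the core data center
        "destination": dst_site,       # e.g. a remote colocation facility
        "bandwidth_mbps": bandwidth_mbps,
        "term_days": term_days,        # short terms replace 3-5 year contracts
    }
    return json.dumps(order)

payload = build_circuit_order("core-dc-1", "colo-remote-1", 1000, 30)
print(payload)
```

The contrast with a traditional WAN circuit is the `term_days` field: capacity can be ordered for weeks, resized, or torn down as the business requires.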
Breaking from tradition
Tradition in the data center states that the core data center must be sized and empowered to meet the demands of every application in the corporate IT stack. While this conservative approach to compute may seem like the obvious choice for a security-conscious organization, this monolithic approach to enterprise application hosting prevents the enterprise from realizing the true benefits that today's information technologies offer.
New businesses and old businesses alike must embrace flexible connectivity architectures in order to be able to choose from the full menu of technology options.
Data center providers and hyperscalers are spending billions of dollars on data center solutions that should be leveraged into the enterprise IT toolkit. At the very core, well thought-out policies and purchasing schemes for flexible connectivity solutions are the gateway to leveraging the renaissance in technology that is only just beginning.