Juniper Business Case 3: NFV/SDN Programmable Networks

The analyses show the benefits derived from the deployment of programmable networks for service providers. A cloud customer premises equipment (CPE) and virtual firewall (vCPE) use case replaces physical CPE with a simple on-premises Ethernet device and moves IP virtual private network (VPN) and firewall functions to the cloud. This produces a 36 percent increase in five-year net present value (NPV) compared to the physical CPE solution.
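For readers unfamiliar with the metric, the five-year NPV figures quoted here follow the standard net present value calculation sketched below; the yearly cash flows and the discount rate are inputs to ACG's model and are not reproduced in this excerpt.

```latex
% Standard five-year net present value; CF_t (net cash flow in year t) and the
% discount rate r are assumptions supplied by the underlying business-case model.
\mathrm{NPV} = \sum_{t=0}^{5} \frac{CF_t}{(1+r)^{t}}
```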

A real-time network self-optimization use case replaces manual traffic engineering processes, producing a 27 percent five-year total cost of ownership (TCO) savings compared to the manual processes. An elastic traffic engineering use case for a national all-IP core network demonstrates the advantages of an SDN solution compared to the present mode of operations: the SDN solution reduces bandwidth and associated link capital expenses by 35 percent while maintaining all network service level agreements.

As a further advantage, OVS ships as part of the Linux 3.x kernel.

This vApp is mated to each ESXi hypervisor instance when the instance is deployed. This is unlike a number of the other original SDN controller offerings. The Nicira NVP controller (see Figure) is a cluster of generally three servers that use database synchronization to share state. Nicira has a service node concept that is used to offload various processes from the hypervisor nodes. Broadcast, multicast, and unknown unicast traffic flows are processed via the service node; IPsec tunnel termination happens here as well.

This construct can also be used for inter-hypervisor traffic handling and as a termination point for inter-domain or multidomain inter-connect. See Figure for a sketch of the NVP component relationships. OVS, the gateways, and the service nodes support redundant controller connections for high availability. NVP Manager is the management server with a basic web interface used mainly to troubleshoot and verify connections.

Due to the acquisition of Nicira by VMware, [72] both of their products are now linked in discussion and in the marketplace. Though developed as separate products, they are merging [73] quickly into a seamless solution. Nicira supports an OpenStack plug-in to broaden its capabilities in data center orchestration or resource management.

Most open source SDN controllers revolve around the OpenFlow protocol because they have roots in the Onix design (Figure) [74], while only some of the commercial products use the protocol exclusively; in fact, some use it in conjunction with other protocols. Hybrid operation on some elements in the network will be required, however, to interface OpenFlow and non-OpenFlow networks.

This is, in fact, growing to be a widely desired deployment model. Unless otherwise stated, the open source OpenFlow controller solutions use memory-resident (in-memory) databases for state storage. Since most controllers have been based on the Onix code and architecture, they all exhibit similar relationships to the idealized SDN framework.

This is changing slowly as splinter projects evolve, but with the exception of the Floodlight controller that we will discuss later in the chapter, the premise that they all exhibit similar relationships still generally holds true. All of these controllers support some version of the OpenFlow protocol up to and including the latest 1.x releases.

Also note that while not called out directly, all Onix-based controllers utilize in-memory database concepts for state management. Before introducing some of the popular Onix-based SDN controllers, we should take some time to describe Mininet, a network emulator that simulates a collection of end hosts, switches, routers, and links on a single Linux kernel. Mininet is important to the open source SDN community because it is commonly used as a simulation, verification, and testing resource.

Mininet is an open source project hosted on GitHub. If you are interested in checking out the freely available source code, scripts, and documentation, refer to GitHub. A Mininet host behaves just like a real machine and generally runs the same code, or at least can.

In this way, a Mininet host represents a shell of a machine that arbitrary programs can be plugged into and run. Packets are processed by virtual switches, which to the Mininet hosts appear to be a real Ethernet switch or router, depending on how they are configured.

In fact, commercial versions of Mininet switches, such as those from Cisco and others, are available that fairly accurately emulate key characteristics of their commercial, purpose-built switches, such as queue depth, processing discipline, and policing. One very cool side effect of this approach is that the measured performance of a Mininet-hosted network can often approach that of actual non-emulated switches, routers, and hosts.

Figure illustrates a simple Mininet network composed of three hosts, a virtual OpenFlow switch, and an OpenFlow controller. All components are connected over virtual Ethernet links and assigned private IP addresses for reachability. As mentioned, Mininet supports very complex topologies of nearly arbitrary size and ordering, so one could, for example, copy the switch and its attached hosts in the configuration, rename them, attach the new switch to the existing one, and quickly have a network of two switches and six hosts, and so on.
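A minimal Mininet script for a topology like the one just described might look as follows; the host count matches the figure, but the IP addressing and the remote-controller address and port are illustrative assumptions.

```python
#!/usr/bin/env python
"""Sketch of the three-host, single-switch Mininet topology described above."""
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import Topo
from mininet.cli import CLI

class SingleSwitchTopo(Topo):
    def build(self):
        s1 = self.addSwitch('s1')
        for i in range(1, 4):                          # three hosts, h1..h3
            host = self.addHost('h%d' % i, ip='10.0.0.%d/24' % i)
            self.addLink(host, s1)                     # virtual Ethernet links

if __name__ == '__main__':
    net = Mininet(topo=SingleSwitchTopo(),
                  switch=OVSSwitch,
                  controller=lambda name: RemoteController(
                      name, ip='127.0.0.1', port=6633))  # external OpenFlow controller
    net.start()
    net.pingAll()                                      # quick reachability check
    CLI(net)                                           # drop into the Mininet CLI
    net.stop()
```

Run under sudo, the script starts the emulated network, checks reachability with pingAll, and then drops into the Mininet CLI.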

One reason Mininet is widely used for experimentation is that it allows you to create custom topologies, many of which have been demonstrated as being quite complex and realistic, such as larger, Internet-like topologies that can be used for BGP research. Another cool feature of Mininet is that it allows for the full customization of packet forwarding. As mentioned, many examples exist of host programs that approximate commercially available switches.

In addition to those, some new and innovative experiments have been performed using hosts that are programmable using the OpenFlow protocol. It is these that have been used with the Onix-based controllers we will now discuss. NOX was developed at Nicira and released to the research community, a move that in fact made it one of the first open source OpenFlow controllers. It was subsequently extended and supported via ON.Lab. It provides support modules specific to OpenFlow but can be, and has been, extended.

NOX is often used in academic network research to develop SDN applications such as network protocol research. One really cool side effect of its widespread academic use is that example code is available for emulating a learning switch and a network-wide switch, which can be used as starter code for various programming projects and experimentation. SANE is an approach to representing the network as a filesystem.

Ethane is a Stanford University research application for centralized, network-wide security at the level of a traditional access control list. Both demonstrated the efficiency of SDN by significantly reducing the lines of code required [78] to implement functions that had previously taken far more code. POX, NOX's Python-based sibling, runs anywhere and can be bundled with the install-free PyPy runtime for easy deployment.
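The learning-switch starter code mentioned above is controller-specific (NOX in C++, POX in Python); the fragment below is only a controller-agnostic sketch of the underlying logic: learn the port behind each source MAC, forward known destinations, and flood unknowns.

```python
# Controller-agnostic sketch of learning-switch logic; the flood()/output()
# callbacks stand in for whatever packet-out primitive a real controller exposes.
class LearningSwitch:
    def __init__(self):
        self.mac_to_port = {}                       # MAC address -> switch port

    def handle_packet_in(self, src_mac, dst_mac, in_port, flood, output):
        self.mac_to_port[src_mac] = in_port         # learn where the source lives
        if dst_mac in self.mac_to_port:
            output(self.mac_to_port[dst_mac])       # known destination: unicast
        else:
            flood()                                 # unknown destination: flood
```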

Trema [80] is an OpenFlow programming framework for developing an OpenFlow controller that was originally developed and supported by NEC, with subsequent open source contributions under a GPLv2 scheme.

Unlike the more conventional OpenFlow-centric controllers that preceded it, the Trema model provides basic infrastructure services as part of its core modules, which in turn support the development of user modules (Trema apps) [81].

Developers can create their user modules in Ruby or C (the latter is recommended when speed of execution becomes a concern). The main API the Trema core modules provide to an application is a simple, non-abstracted OpenFlow driver (an interface to handle all OpenFlow messages).

Trema now supports OpenFlow version 1.x via a repository called TremaEdge. Developers can individualize or enhance the base controller functionality (class object) by defining their own controller subclass and embellishing it with additional message handlers. Other core modules include timer and logging libraries, a packet parser library, and hash-table and linked-list structure libraries.
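Trema user modules are written in Ruby or C, so the snippet below is only a language-neutral illustration (in Python, for consistency with the other examples here) of the pattern just described: subclass a base controller and add handlers for the OpenFlow messages of interest. The class and handler names are hypothetical, not Trema's actual API.

```python
# Illustrative only: the base class and handler names are hypothetical and do not
# reflect Trema's real Ruby API; they show the subclass-plus-handlers pattern.
class BaseController:
    def handle_message(self, name, *args):
        handler = getattr(self, name, None)      # dispatch to a handler if one is defined
        if handler:
            handler(*args)

class MyController(BaseController):
    def switch_ready(self, datapath_id):
        print("switch %016x connected" % datapath_id)

    def packet_in(self, datapath_id, message):
        print("packet-in from switch %016x" % datapath_id)
```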

The Trema core does not provide any state management or database storage structure; these are contained in the Trema apps and could default to memory-only storage using the data structure libraries. The infrastructure provides a command-line interface (CLI) and configuration filesystem for configuring and controlling applications (resolving dependencies at load time), managing messaging and filters, and configuring virtual networks via a network domain-specific language (DSL), a Trema-specific configuration language.

The appeal of Trema is that it is an all-in-one, simple, modular, rapid prototype and development environment that yields results with a smaller codebase. There is also an OpenStack Quantum plug-in available for the sliceable switch abstraction. Figure illustrates the Trema architecture. The combination of modularity and per-module or per-application service APIs makes Trema more than a typical controller with a monolithic API for all its services.

Trema literature refers to Trema as a framework, an idea that is expanded upon in a later chapter. Ryu [86] is a component-based, open source framework (supported by NTT Labs) implemented entirely in Python (see Figure). The Ryu messaging service does support components developed in other languages.

Components include OpenFlow wire protocol support up through version 1.x. A prototype component has been demonstrated that uses HBase for statistics storage, including visualization and analysis via the stats component tools. While Ryu supports high availability via a ZooKeeper component, it does not yet support a cooperative cluster of controllers.
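Since Ryu itself is a Python framework, a minimal component looks roughly like the hub-style application below. It follows the packet-in handler pattern from Ryu's public documentation, but treat it as a sketch to verify against the Ryu version in use.

```python
# Minimal Ryu application: flood every packet-in (a "dumb hub").
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class SimpleHub(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]      # speak OpenFlow 1.3

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Only attach raw packet data if the switch did not buffer the packet.
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(
            datapath=dp,
            buffer_id=msg.buffer_id,
            in_port=msg.match['in_port'],
            actions=[parser.OFPActionOutput(ofp.OFPP_FLOOD)],  # flood out all ports
            data=data)
        dp.send_msg(out)
```

It would be launched with ryu-manager, which loads the application class and connects it to the OpenFlow channel.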

Floodlight [87] is a very popular SDN controller contribution from Big Switch Networks to the open source community. Floodlight is based on Beacon from Stanford University.

The Floodlight core architecture is modular, with components including topology management, device management (MAC and IP tracking), path computation, infrastructure for web access (management), counter store (OpenFlow counters), and a generalized storage abstraction for state storage (defaulted to memory at first, but developed into both SQL and NoSQL backend storage abstractions for a third-party open source storage solution).

These components are treated as loadable services with interfaces that export state. The API allows applications to get and set this state of the controller, as well as to subscribe to events emitted from the controller using Java Event Listeners, as shown in Figure. Floodlight incorporates a threading model that allows modules to share threads with other modules. Synchronized locks protect shared data. Component dependencies are resolved at load time via configuration.

There are also sample applications, including a learning switch (the OpenFlow switch abstraction most developers customize or use in its native state), a hub application, and a static flow pusher application. The Floodlight OpenFlow controller can interoperate with any element agent that supports OpenFlow (OF version compatibility aside; at the time of writing, support for both of-config and OpenFlow version 1.x was still maturing).
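The static flow pusher mentioned above is typically driven through Floodlight's REST API; the sketch below pushes one entry using Python's requests library. The endpoint path and JSON field names have changed across Floodlight releases (older builds use /wm/staticflowentrypusher/json and slightly different keys), so treat both as assumptions to check against the deployed version.

```python
# Push one static flow entry into a Floodlight controller over its REST API.
# Endpoint path and field names vary by Floodlight release; verify before use.
import json
import requests

CONTROLLER = "http://127.0.0.1:8080"          # Floodlight REST listener (default port)
flow = {
    "switch": "00:00:00:00:00:00:00:01",      # datapath ID of the target switch
    "name": "flow-port1-to-port2",
    "priority": "100",
    "in_port": "1",
    "active": "true",
    "actions": "output=2",                    # forward matching packets out port 2
}

resp = requests.post(CONTROLLER + "/wm/staticflowpusher/json",
                     data=json.dumps(flow), timeout=5)
print(resp.status_code, resp.text)
```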

In addition, Big Switch has also provided Loxi, an open source OpenFlow library generator with multiple language support [92], to address the problems of multiversion support in OpenFlow. A rich development tool chain of build and debugging tools is available, including a packet streamer and the aforementioned static flow pusher. Mininet [93] can also be used for network emulation, as we described earlier.

Big Switch has been actively working on a data model compilation tool that converts YANG to REST as an enhancement to the environment for both API publishing and data sharing. These enhancements can be used for a variety of new functions absent in the current controller, including state and configuration management. As we mentioned in the previous section, Floodlight is related to the base Onix controller code in many ways and thus possesses many architectural similarities. Most Onix-based controllers utilize in-memory database concepts for state management, but Floodlight is the exception.

Floodlight is the one Onix-based controller today that offers a component called BigDB.

The virtualization of the PE function is an SDN application in its own right, providing both service and platform virtualization. The addition of a controller construct aids in the automation of service provisioning, as well as providing centralized label distribution and other benefits that may ease the control protocol burden on the virtualized PE.

The idea behind these offerings is that a VRF structure, familiar from L3VPN, can represent a tenant, and that the traditional tooling for L3VPNs, with some twists, can be used to create overlays that use MPLS labels for customer separation on the hosts, service elements, and data center gateways.

This solution has the added advantage of potentially being easier to stitch into existing customer VPNs at data center gateways, creating a convenient cloud-bursting application. The figure demonstrates a data center orchestration application that can be used to provision virtual routers on hosts to bind together the overlay instances across the network underlay.

The controller is a multi-node design composed of multiple subsystems. The motivation for this approach is to facilitate scalability, extensibility, and high availability. The system supports potentially separable modules that can operate as individual virtual machines in order to handle scale-out of the server modules for analytics, configuration, and control. As a brief simplification:

Provides the compiler that uses the high-level data model to convert API requests for network actions into a low-level data model for implementation via the control code. This server also collects statistics and other management information from the agents it manages via the XMPP channel.

The Control Node uses BGP to distribute network state, presenting a standardized protocol for horizontal scalability and the potential of multivendor interoperability.

The architecture synthesizes experience from more recent, public architecture projects for handling large and volatile data stores and modular component communication. The Contrail solution leverages proven open source components internally. It should be noted that Redis was originally sponsored by VMware. ZooKeeper [99] is used in the discovery and management of elements via their agents.

Like all SDN controllers, the Juniper solution requires a paired agent in the network elements, regardless of whether they are real devices or virtualized versions operating in a VM. The explicit messaging contained within this container needs to be fully documented to ensure interoperability in the future. Several RFCs have been submitted for this operational paradigm.

The implementation uses IP unnumbered interface structures that leverage a loopback to identify the host's physical IP address and to conserve IP addresses. The solution does not require support of MPLS switching in the transit network. In terms of southbound protocols, we mentioned that XMPP is used as a carrier channel between the controller and the virtual routers, but additional southbound protocols such as BGP are implemented as well.

In RSVP-TE traffic engineering, LSP setup and holding priorities, combined with preemption, give operators some control over which LSPs win contention for bandwidth; otherwise, signaling would be solely first-come, first-served, making it very nondeterministic.

Even with this mechanism in place, the sequence in which different ingress routers signal the LSPs determines the actual selected paths under normal and heavy load conditions. Now imagine that enough bandwidth exists for only one LSP at a particular node. If A and B are signaled, only one of A or B will be in place, depending on which went first. Now when C and D are signaled, the first one signaled will preempt whichever of A or B remained, but then the last one signaled will remain.

If we changed the order in which the LSPs were signaled, a different outcome would result. What has happened is that the combination of LSP priorities and preemption is coupled with path selection at each ingress router. In practice, this result is more or less as desired; however, this nondeterminism makes it difficult to model the true behavior of a network a priori. Many times it is not possible to find a path with sufficient bandwidth for an LSP, even though the network as a whole is not running hot.
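The order dependence is easy to reproduce in a toy model. The sketch below signals four equal-bandwidth LSPs (A and B at low priority, C and D at high priority) against a single bottleneck link with room for only one of them, using strict priority-based preemption; the capacities, demands, and priorities are invented for illustration, and different signaling orders leave different LSPs in place.

```python
# Toy model of first-come-first-served RSVP signaling with priority preemption
# on a single bottleneck link that can hold only one LSP at a time.
LINK_CAPACITY = 10

def signal(order, priority, demand=10):
    placed = []                                        # LSPs currently holding the link
    for lsp in order:
        if len(placed) * demand + demand <= LINK_CAPACITY:
            placed.append(lsp)                         # fits: admit it
            continue
        # No room: preempt a strictly lower-priority LSP if one exists.
        victims = [p for p in placed if priority[p] < priority[lsp]]
        if victims:
            placed.remove(victims[0])
            placed.append(lsp)
        # else: the setup simply fails and the LSP is rejected
    return placed

prio = {"A": 1, "B": 1, "C": 2, "D": 2}                # higher number = higher priority
print(signal(["A", "B", "C", "D"], prio))              # -> ['C']
print(signal(["B", "A", "D", "C"], prio))              # -> ['D']
```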

In Figure, the numbers (in Gbps) represent the bandwidth available on the links. Thus, there is bandwidth available in the network, but due to the nature of RSVP signaling, one cannot use that available bandwidth. Additionally, deadlock or poor utilization can occur if LSP priorities are not used or if LSPs with the same priority collide.

If R2 succeeds first, then R1 will be unable to find a path to R5. Prior to the evolution of PCE, network operators addressed these problems by using complex planning tools to figure out the set of LSP priorities that produced the network behavior they desired, and by managing the onerous task of coordinating the configuration of those LSPs.

The other alternative was to over-provision the network and not worry about these complexities. The PCE server provides three fundamental services: path computation, state maintenance, and infrastructure and protocol support. Ideally, the PCE server is a consumer of active topology. As PCE servers evolve, the algorithm for path computation should be loosely coupled to the provisioning agent through a core API, allowing users to substitute their own algorithms for those provided by vendors.
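The loose coupling described here can be sketched as a path-computation function hidden behind a small interface, so that an operator could substitute a different algorithm without touching the provisioning agent. The default below is a plain constrained shortest path (prune links that lack the requested bandwidth, then run Dijkstra); the topology format and class names are assumptions for illustration only.

```python
import heapq

def cspf(topology, src, dst, bandwidth):
    """Constrained shortest path: ignore links that cannot carry `bandwidth`,
    then run Dijkstra on what remains. topology: {node: [(nbr, cost, avail_bw)]}."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost, avail in topology.get(node, []):
            if avail < bandwidth:                  # constraint: prune thin links
                continue
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None                                # no feasible path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

class PCEServer:
    """Path computation kept behind a replaceable callable, as described above."""
    def __init__(self, topology, algorithm=cspf):
        self.topology = topology
        self.algorithm = algorithm                 # operators may substitute their own

    def compute(self, src, dst, bandwidth):
        return self.algorithm(self.topology, src, dst, bandwidth)
```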

This is an important advance because these replacement algorithms can now be driven by the business practices and requirements of individual customers, as well as by third-party tools. It is for this reason that we generally view the PCE controller as an adjunct to existing controllers, one that can potentially expand that base functionality greatly. The other components in this controller solution would be typical of an SDN controller and would include infrastructure for state management, visualization, component management, and a RESTful API for application interfaces, as shown in Figure. The motivation was simply to avoid the operational hurdles around inter-provider operational management, which even today remains a big issue.

There are also compelling use cases in backbone bandwidth management, such as more optimal bin packing in existing MPLS LSPs, as well as potential use cases in access networks for things such as service management.

Typically, this distribution terminates at area borders, meaning that multiarea tunnels are created with an explicit path only to the border of the area of the tunnel head end. At the border point, a loose hop is specified in the ERO, as exact path information is not available. Often this results in a suboptimal path. The central topology store could merge the area TEDs, allowing an offline application with a more global view of the network topology to compute an explicit end-to-end path.

In this way, this PCE-based solution can signal, establish, and manage LSP tunnels that cross administrative boundaries or just routing areas more optimally or simply differently based on individual constraints that might be unavailable to the operator due to the equipment not implementing it. Another emerging use of the PCE server is related to segment routing.

This is achieved through programmatic control of the PCE server. The PCC creates a forwarding entry for the destination that imposes a label stack, which can be used to mimic the functionality of an MPLS overlay.

Besides the obvious and compelling SDN application of this branch of PCE, network simplification (allowing a network administrator to manipulate the network as an abstraction, with less state stored inside the core of the network), there is also some potential application of this technology in service chaining.

The association of a local label space with node addresses and adjacencies (such as anycast loopback addresses) drives the concept of service chaining using segment routing. These label bindings are distributed as an extension to the IS-IS protocol.
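A small illustration of the idea: the ingress imposes a stack of node-segment labels to steer a packet over an explicit path, and transit nodes need no per-LSP state. The SID index values and the SRGB base below are invented for the example and do not come from the figure.

```python
# Hypothetical node-SID assignments (indexes into a common SRGB starting at 16000);
# the values are invented for illustration.
SRGB_BASE = 16000
NODE_SID_INDEX = {"A": 1, "B": 2, "C": 3, "D": 4}

def label_stack_for_path(path):
    """Build the label stack the ingress imposes to steer traffic along `path`.
    The ingress's own segment is not needed, so it is skipped."""
    return [SRGB_BASE + NODE_SID_INDEX[node] for node in path[1:]]

# Explicit path A -> B -> C -> D: the ingress (A) pushes the stack [16002, 16003, 16004].
# Labels are consumed hop by hop as the packet progresses, so the transit nodes
# hold no per-LSP state.
print(label_stack_for_path(["A", "B", "C", "D"]))
```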

The PCE server can bind an action such as swap or pop to the label. In Figure, a simple LSP is formed from A to D by imposing a label stack allocated from the reserved label space. While extremely promising and interesting, this proposal is relatively new, and so several aspects remain to be clarified. Cariden also announced a PCE server. In addition to these commercial offerings, a number of service providers, including Google, have indicated that they are likely to develop their own PCE servers, independently or in conjunction with vendors, in order to implement their own policies and path computation algorithms.

Plexxi systems are based around the concept of affinity networking, offering a slightly different kind of controller: a tightly coupled, proprietary forwarding optimization algorithm and distribution system. See Figure for a sketch of the Plexxi Systems architecture. The Plexxi physical topology is ring based, and affinities are matched to ring identifiers, forming a tight bond between the overlay and underlay concepts.

Some would say this tight bond is more of a hybrid, or a blending into a single network layer. There are additional mechanisms in place that preserve active topology on the switches if the controller(s) partition from the network.

The controller tasks are split between a controller and a co-controller, where the central controller maintains central policy and performs administrative and algorithmic fitting tasks, while the co-controller performs local forwarding table maintenance and fast repair.

In addition to learning about and creating affinities, the controller provides interfaces for operational and maintenance tasks.


A new network approach must allow service providers to:
- Increase customer satisfaction and customer retention
- Increase revenues by accelerating delivery of new and differentiated services
- Reduce upfront capital expenditures and improve asset utilization
- Provide customized, on-demand service delivery through customer-focused portals
Juniper Networks programmable High-IQ networks leverage network function virtualization (NFV) and software-defined networking (SDN) to provide a vehicle that addresses these needs.

The combination of a unified control plane with programmable virtualized network resources enables extensive automation of service provisioning and network management processes across many devices, multiple layers, and multiple vendors.

Programmable High-IQ networks reduce capital expenses (capex) by improving network utilization. Specifically, utilization is improved through the elimination of over-provisioning and through near real-time network optimization. In addition, faster service delivery and increased automation reduce personnel costs and contribute to operational efficiency.

Accelerated new service introduction and rapid customer order completion contribute to the delivery of innovative, customized experiences, culminating in higher revenue, customer retention, and increased margins.

Use Cases

The benefits of the programmable High-IQ networks approach are illustrated by three use cases:
1. Cloud customer premises equipment (CPE) and virtual firewall (vCPE)
2. Real-time network self-optimization
3. Elastic traffic engineering

The traditional installation process is slow and rigid, which creates customer dissatisfaction and impairs the service provider's ability to innovate and upsell services. The CPE equipment stack is the root cause of the high cost, inflexibility, and long installation intervals. Adds, moves, or changes frequently require replacement of the CPE, with associated truck rolls and several manual updates to databases and network elements on site, at the network operations center, and at the customer service center.

Cloud CPE and virtual firewall reduce field equipment installation and support costs. More importantly, service providers can rapidly deploy new cloud-based services that are free of traditional physical CPE.

Figure 1 shows the architecture for the new virtualized solution. A self-care portal allows the customer to set IP VPN data rates, use network management tools, make policy-level choices for firewall and threat management security services, and monitor the services in real time.

Figure 2 shows a single-site cost comparison achieved by virtualizing CPE. Up-front CPE costs are reduced by 58 percent because of virtualization, which exploits the better scale economies of the data center.

Virtualization also reduces support costs: 72 percent CPE savings and 86 percent firewall savings. Figure 3 displays a comprehensive view of the benefits of cloud CPE and virtual firewall. Cloud CPE automation and orchestration capabilities eliminate many manual processes. Consequently, services become faster to develop, deploy, and contribute revenue.

A detailed financial comparison is made between cloud CPE with virtual firewall and traditional physical CPE and firewall for a US service provider serving 75,000 small, 5,000 medium, and 2,000 large businesses.

Real-Time Network Self-Optimization

Real-time network optimization is not feasible using traditional network architectures because many manual interventions are required in the optimization process.

The manual interventions prevent rapid response to changing traffic flows and therefore force operators to throw bandwidth at the problem as a hedge against possible service outages and service level agreement violations. The result is high cost due to wasted network capacity. Real-time network self-optimization automatically identifies the optimal network path in near real time, allowing service providers to continually optimize traffic flows and thereby increase utilization and avoid capex caused by over-provisioning.

Figure 5 shows an example of a self-optimization response to traffic congestion on a single link. The self-optimizing controller uses real-time traffic flows to adjust the network model of the controller, identify a new optimized path that avoids the congestion, and then program the new optimized route.

The self-optimizing network with dynamic traffic engineering capability and converged control plane is compared to a traditional network with manual traffic engineering and separate IP and optical control planes. Key assumptions and inputs include the number, cost, growth, and environmental costs of 10GE and GE ports.

The self-optimizing network shows a 27 percent total cost of ownership (TCO) savings compared to the manually engineered network. The main benefits of an optimized network are lower costs per port and transport and edge power savings, both derived from increased network utilization and avoidance of over-provisioning.

The automation of many network capacity and optimization processes also reduces network engineering labor expenses.

Elastic Traffic Engineering

Traditional traffic engineering practices lack a real-time global view of traffic and employ many manual steps to reconfigure network paths and capacity allocations.

Modern mesh networks provide many economically attractive alternative paths to connect a traffic source to its destination. Engineering each LSP one at a time evaluates only a small fraction of these alternatives, so suboptimal path assignments are made, resulting in significant waste of capacity and unnecessary cost. An SDN solution employs elastic traffic engineering to globally and dynamically optimize traffic on all LSPs on a national backbone network.

This reduces network link capacity and its associated capex as compared to the present mode of operations, in which each LSP is traffic engineered separately. Figure 6 illustrates the elastic traffic engineering concept.

The SDN controller uses this information to globally optimize traffic capacity on all links. Furthermore, it has the flexibility to use multiple network paths to provision the traffic requirements of individual LSPs: a given LSP is treated as a container with a capacity requirement that is met by, for example, three actual LSPs traversing three separate network paths. Figure 7 shows the link capex benefit of the elastic traffic engineering solution compared to the present mode of operations.
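The "container" notion described here can be sketched as one capacity requirement split across several member LSPs on distinct paths. The example below splits a demand evenly across up to three precomputed paths; in practice the global optimizer would choose the paths and weight the split by residual capacity, and all numbers here are illustrative.

```python
def split_demand(demand_gbps, candidate_paths, max_members=3):
    """Split one container-LSP demand across up to `max_members` member LSPs,
    each following a different precomputed path. Returns (path, gbps) pairs."""
    paths = candidate_paths[:max_members]
    if not paths:
        return []
    share = demand_gbps / len(paths)                  # even split; a real optimizer
    return [(path, share) for path in paths]          # would weight by residual capacity

# A 30 Gbps container LSP carried by three member LSPs of 10 Gbps each (illustrative).
members = split_demand(30, [["A", "B", "D"], ["A", "C", "D"], ["A", "E", "D"]])
for path, gbps in members:
    print("->".join(path), gbps, "Gbps")
```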

The savings are computed for the capacity required at peak time, because this requirement is what drives the provisioning of the Layer 3 equipment.

Conclusion

Service providers need a new approach to network architecture and design to restore competitiveness. Programmable High-IQ networks address this need by enabling service providers to automate network operations for an agile response to dynamic service needs, reduce operational costs, and scale the network.

Three use cases created by Juniper Networks demonstrate the economic contributions of programmable High-IQ networks as compared to the present mode of operations:
1. Cloud CPE and virtual firewall: 36 percent five-year NPV increase
2. Real-time network self-optimization: 27 percent five-year TCO savings
3. Elastic traffic engineering: 35 percent link capex savings

ACG Research is an analyst and consulting company that focuses on the networking and telecom space.

We offer comprehensive, high-quality, end-to-end business consulting and syndicated research services. Copyright ACG Research.


Recently, other industry research groups [5], [6] and standards bodies [7], [8] have emerged to address the issues of programming languages and virtualized computing infrastructures for the implementation, composition, and management of these new network control applications.

Today the potential applications of software-defined virtual networks range from global telecommunications [12] to completely software-defined data centers [13]. While this revolution in the networking industry has great potential, there are numerous test and measurement challenges that must be met to ensure that SDNs are robust and secure enough to meet the mission-critical requirements of our information-centric society.

Failure to devote significant effort to the development of the measurement techniques necessary to characterize, predict, and control the robustness and security properties of software-defined networks could result in significant technical and marketplace failures going forward. NIST is uniquely positioned to address these issues for the networking industry. Network function virtualization and software-defined networking represent a dramatic shift in the way network technology will be defined, developed, and deployed in the future.

NIST must develop the capability to contribute measurement science to emerging standards in this area.



Juniper Networks has developed an integrated methodology to manage and execute every step of the Plan, Build, and Operate phases of the service life cycle. Industry and academic leaders started the NFV/SDN movement to change the economics and complexity of network innovation. Juniper Networks' programmable NFV/SDN networks enable service providers to automate network operations for an agile response to dynamic service needs, reduce operational costs, and scale the network. Three Juniper Networks use cases are analyzed; compared to the present mode of operations, programmable High-IQ networks increase five-year NPV by 36 percent.