Introduction and motivation
The huge growth of mobile data traffic inevitably requires a significant increase in wireless network capacity. The European Telecommunications Standards Institute (ETSI) has envisioned the fifth-generation (5G) cellular system, with the aim of developing a commercial system by 2020. The excessive number of mobile users and the data traffic generated by their devices (smartphones, laptops, and tablets) are not matched by the pace of improvement of the batteries and CPUs deployed in mobile handsets. Mobile battery life can be prolonged by means of computation offloading whenever a compute-intensive task cannot be handled efficiently by local computing resources.
In addition to the exponential growth of data traffic in cellular networks, mobile operators face a serious issue that is likely being underrated. Their traditional primary revenue sources (e.g., voice and messaging) continue to decline gradually due to the wide penetration of over-the-top (OTT) services such as Skype, Tango, Line, FaceTime, WhatsApp, and iMessage, among others. It is crucial for mobile operators not only to rely on traditional revenue sources, but to envision and realize a new ecosystem that generates unique revenue and value by transforming base stations into intelligent service hubs capable of providing compelling services directly from the very edge of the network by means of mobile-edge computing (MEC).
Cloudlets are located at the edge of the Internet, just one wireless hop away from associated mobile devices, enabling new applications that are both compute intensive and latency sensitive [2,3]. The resultant two-level cloud-cloudlet architecture leverages both centralized and distributed cloud resources and services, whereby an infrastructure based on WiFi access technology is deployed. Cloudlets are owned and managed by mobile end-users, while MEC servers are operated by mobile providers. Mobile subscribers access cloudlets via WiFi (see Fig. 1). Since cloudlets are not connected to the mobile network, they do not share network-operator-related knowledge. Thus, cloudlets are suitable for offloading resource-intensive tasks from mobile subscribers in order to prolong battery life.
To maintain the consolidation achievable in traditional clouds and to ensure a smooth evolutionary migration path from current 4G LTE-A HetNets relying on important performance-enhancing features such as coordinated multipoint (CoMP) transmission and reception among base stations (BSs), we allow a macrocell BS or a small-cell (i.e., micro-, pico-, or femtocell) BS to be connected to the cloud through a conventional cloud radio access network (C-RAN) based on a reconfigurable radio-over-fiber (RoF) backhaul between the central baseband units (BBUs) and remote radio heads (RRHs). Conversely, the complementary Ethernet-based FiWi access network is realized via a distributed RAN (D-RAN) based on so-called radio-and-fiber (R&F) technologies that rely on EPON/WLAN medium access control (MAC) protocol translation at the optical-wireless interface. Traditionally, the Common Public Radio Interface (CPRI) has been used as the transmission technology in the fronthaul. However, if existing fiber network infrastructures are to be reused, compatibility with Ethernet-based technologies is inevitable. Ethernet is a promising fronthaul solution due to its maturity and its compatibility with widely deployed wireless/wired access networks. The operation, administration, and maintenance (OAM) capabilities of Ethernet provide a standardized means of management, resilience, and performance monitoring. Moreover, the use of Ethernet as the underlying transport technology in the fronthaul may offer the following advantages:
- Use of low-cost industry-standard network equipment
- Ability to share network equipment with fixed access networks, enabling greater convergence and cost reduction
- Use of switches/routers to enable statistical multiplexing gains
- Monitoring through compatible hardware probes
Fig. 1. Cloud-cloudlet empowered FiWi enhanced LTE-A HetNet architecture.
Internet of Things (IoT) devices, including wearables and other low-processing-power devices, suffer from serious limitations when locally performing traditional compute-intensive applications such as augmented reality and surveillance systems. This issue can be resolved if latency-sensitive applications are offloaded to resource-rich servers co-located with base stations, whereby offloading tasks to edge servers significantly reduces latency. Computation offloading in MEC raises several challenges: How to split an IoT task? Whether to offload a task at all? Which server to offload to? When to offload? Given that applications involving computation offloading are typically more delay-sensitive than those requiring simple data offloading, developing low-latency offloading strategies is of high importance. These strategies execute delay-sensitive tasks in local MEC servers or remote clouds.
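The offload-or-not decision above can be sketched as a simple comparison of estimated completion times. The following minimal Python sketch compares local execution time against the offloading path (uplink transmission plus remote execution) under a deadline; all parameter names and the linear time model are illustrative assumptions, not part of any MEC specification.

```python
from dataclasses import dataclass


@dataclass
class Task:
    cycles: float       # CPU cycles required to execute the task
    input_bits: float   # input data to transmit if the task is offloaded


def should_offload(task: Task,
                   local_cps: float,    # local CPU speed (cycles/s)
                   server_cps: float,   # MEC server CPU speed (cycles/s)
                   uplink_bps: float,   # wireless uplink rate (bits/s)
                   deadline_s: float) -> bool:
    """Offload only if the remote path is faster and meets the deadline."""
    t_local = task.cycles / local_cps
    t_remote = task.input_bits / uplink_bps + task.cycles / server_cps
    return t_remote < t_local and t_remote <= deadline_s
```

With a fast uplink the MEC server wins despite the transmission delay; with a slow uplink the same task is better kept on the handset.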
FiWi enhanced mobile networks with MEC servers are able to provide seamless service to mobile subscribers. Since mobile subscribers move, services have to migrate among MEC servers. Because different servers are attached to different base stations, a decision needs to be made on whether and where to migrate the service when a user moves outside the service area of the server currently providing it.
Artificial intelligence (AI) already has enough power to surpass people in tasks that were once considered typical for humans. However, it will be key to keep in mind that computers should be complements to humans, not substitutes. Unlike AI capabilities designed primarily to take humans out of the loop, many innovative and unforeseen applications require people and robots to work collaboratively in close interaction with each other. Towards this end, advanced MEC capabilities of future mobile networks empowered with AI will pave the road towards novel services, applications, and new revenue sources.
The research directions related to AI-based MEC in FiWi enhanced networks include, but are not limited to, the following challenges:
One of the key design issues in the integration of MEC servers and FiWi networks is service migration. Service migration deals with the situation where the location of a subscriber being served by an MEC server changes, and hence a decision has to be made whether to migrate the service to an alternate MEC server. More specifically, we will investigate, by means of probabilistic analysis and verifying simulations, the impact of different dynamic computation allocation schemes for sub-tasks and of various key design parameters, such as cloud availability, cloudlet size, cloudlet connection likelihood, and user mobility, on the computing capacity and speed as well as the delay performance of FiWi enhanced networks.
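One simple family of migration policies weighs the one-time cost of moving the service against the cumulative latency penalty of serving the user from the now-distant server for the rest of the session. The sketch below is a hedged illustration of such a threshold policy; all parameters (hop count, per-hop delay, session length) are hypothetical modeling assumptions, not quantities defined in the text above.

```python
def migrate_service(hops_to_server: int,
                    per_hop_delay_ms: float,
                    migration_cost_ms: float,
                    residual_session_s: float,
                    request_rate_hz: float) -> bool:
    """Migrate when the expected extra latency of staying on the remote
    MEC server over the rest of the session exceeds the one-time
    migration cost (e.g., VM/container transfer time)."""
    extra_per_request = hops_to_server * per_hop_delay_ms
    expected_requests = residual_session_s * request_rate_hz
    return extra_per_request * expected_requests > migration_cost_ms
```

A long, chatty session far from its server justifies migration; a short session close to its server does not.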
TensorFlow-enabled “Cooperative” Automation
Most machine learning research has focused mainly on developing automated capabilities for computers to substitute for humans in many tasks. However, it is important to note that humans and machines are categorically different. Moreover, even the most intelligent machines do not, and in the near term will not, surpass humans in some specific tasks. Hence, interest has grown in the topic of “cooperative” automation, where AI empowers humans to realize new capabilities and services that never existed before. Towards this vision, early research has focused on the interaction of people and machines in order to keep humans in the loop and leverage human participation effectively.
TensorFlow is Google Brain’s second-generation open source machine learning library, released on November 9, 2015, and used in hundreds of Google products. TensorFlow computations are expressed as stateful dataflow graphs. This library of algorithms stems from the need to instruct neural networks to learn and act similarly to the way humans do, and to design novel applications that highlight the role of AI in empowering MEC servers.
Fig. 2. TensorFlow.
In TensorFlow, dataflow graphs are used for numerical computations. Nodes in the graph represent mathematical operations, while the edges are multidimensional data arrays, also known as tensors, that are communicated between vertices. TensorFlow provides flexibility and portability, and it allows researchers to push their innovative ideas into products. TensorFlow comes with Python and C++ interfaces (as frontend languages for clients) to build and execute computational graphs. An example fragment that builds and executes a TensorFlow graph using Python, along with the resulting computation graph, is depicted in Fig. 3.
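To make the build-then-execute dataflow pattern concrete without depending on the TensorFlow library itself, the following pure-Python sketch mimics it in miniature: nodes hold operations, edges carry values (the “tensors”), and a session-like `run()` evaluates a node by first evaluating its input edges. All class and function names here are illustrative and are not part of the TensorFlow API.

```python
class Node:
    """A graph node: an operation plus its input edges (other nodes)."""
    def __init__(self, op, *inputs):
        self.op = op
        self.inputs = inputs


def constant(value):
    # A source node: produces a fixed value, has no input edges.
    return Node(lambda: value)


def add(a, b):
    # An operation node whose edges are the nodes a and b.
    return Node(lambda x, y: x + y, a, b)


def mul(a, b):
    # Stand-in for a real tensor op such as a matrix multiplication.
    return Node(lambda x, y: x * y, a, b)


def run(node):
    """Session-like execution: evaluate input edges, then apply the op."""
    args = [run(n) for n in node.inputs]
    return node.op(*args)
```

Building `add(constant(2), mul(constant(3), constant(4)))` constructs the graph without computing anything; only `run()` triggers evaluation, mirroring TensorFlow’s separation of graph construction from execution.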
Fig. 3. Example of a TensorFlow code fragment and the corresponding computation graph.
Edge Content Delivery
MEC servers provide a platform on which additional content delivery services can be developed at the network edge. Content traditionally hosted by Internet services/CDNs is moving towards the network edge. MEC servers operate as local content delivery nodes and serve cached content, thus reducing traffic load and latency in the core.
Instead of routing all mobile subscribers’ data separately to remote clouds, MEC servers are capable of aggregating related traffic and thereby reducing core network traffic loads. Moreover, MEC servers enable the downscaling of user-generated traffic before it is transmitted to the core, as well as real-time scaling of Internet content if congestion occurs at base station sites.
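The caching behavior described above can be sketched as a small least-recently-used (LRU) content cache at the MEC server: a hit is served locally, while a miss triggers a core-network fetch. This is a minimal illustrative sketch; the class name, capacity policy, and `fetch_from_core` callback are assumptions for the example, not an MEC-standardized interface.

```python
from collections import OrderedDict


class EdgeCache:
    """Minimal LRU cache an MEC server could use for popular content."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order doubles as LRU order
        self.hits = 0
        self.misses = 0

    def get(self, url: str, fetch_from_core):
        if url in self.store:
            self.store.move_to_end(url)       # mark as recently used
            self.hits += 1
            return self.store[url]
        self.misses += 1                      # core-network round trip
        content = fetch_from_core(url)
        self.store[url] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used
        return content
```

The hit ratio of such a cache directly translates into the core traffic reduction discussed above: every hit is one request that never leaves the edge.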
[1] M. Patel et al., "Mobile-Edge Computing Introductory Technical White Paper," White Paper, Mobile-edge Computing (MEC) Industry Initiative, 2014.
[2] M. Satyanarayanan et al., "The Case for VM-Based Cloudlets in Mobile Computing," IEEE Pervasive Computing, vol. 8, no. 4, pp. 14-23, 2009.
[3] M. Satyanarayanan et al., "An Open Ecosystem for Mobile-Cloud Convergence," IEEE Communications Magazine, vol. 53, no. 3, pp. 63-70, 2015.
[4] M. Maier and B. P. Rimal, "Invited Paper: The Audacity of Fiber-Wireless (FiWi) Networks: Revisited for Clouds and Cloudlets," China Communications, vol. 12, no. 8, pp. 33-45, 2015.
[5] Nokia Networks, "Intelligent Base Stations," White Paper, 2014.
[6] N. J. Gomes et al., "Fronthaul Evolution: From CPRI to Ethernet," Optical Fiber Technology, vol. 26, part A, pp. 50-58, 2015.
[7] M. Abadi et al., "TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems," 2015, www.tensorflow.org.