Artificial Intelligence, Cloud, Mobile Devices and Networks

A key takeaway from the presentation by Jeff Welser (Vice President and Lab Director, IBM Research) is that we are creating data twice as fast as the bandwidth of our networks is growing. Large parts of today’s Big Data analytics and Artificial Intelligence (AI) products and services have been built on cloud computing resources. According to Craig Martell (Head of Science and Engineering, LinkedIn), Artificial Intelligence can be understood as programming plus statistics. Easily scalable cloud computing resources processing large data sets can lead to better and faster decisions, and to decisions that humans no longer have to make. One estimate from IBM puts daily data creation at more than 2.5 billion gigabytes, with 90% of all data having been generated in the past two years (https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2017/20170809_FD21_IBM.pdf). We will see the amount of data grow even faster in the near future. Craig Martell commented that having data means having power these days, whether in science or in business.

The fastest growing source of data in the future will be sensor data from IoT connectivity. The following Forbes article gives a good roundup of numbers and market values from various sources regarding IoT: https://www.forbes.com/sites/louiscolumbus/2017/12/10/2017-roundup-of-internet-of-things-forecasts/#4e2c8ca51480. It projects the number of connected devices to grow to more than 30 billion by 2020. With sensor prices having fallen dramatically, we can expect every electronics product to carry sensors in the future, generating and transmitting data. Having collected these vast amounts of data, the question arises: where will this data be processed?
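Before turning to the possible locations, a quick sanity check on the growth figures above: if 90% of all data was created within the last two years, the total is growing by roughly a factor of three per year. A minimal back-of-the-envelope sketch in Python (only the 2.5 billion GB/day and the 90% figures come from the IBM slide linked above; everything else is assumed):

    # Back-of-the-envelope check of the data growth figures.
    daily_creation_gb = 2.5e9                              # 2.5 billion GB created per day (IBM figure)
    yearly_creation_zb = daily_creation_gb * 365 / 1e12    # 1 ZB = 1e12 GB
    # "90% created in the last two years" => the total grew ~10x over two years.
    growth_factor_2y = 1 / 0.10
    annual_growth = growth_factor_2y ** 0.5                # ~3.16x per year
    print(f"~{yearly_creation_zb:.2f} ZB created per year at the current rate")
    print(f"implied annual growth factor: ~{annual_growth:.1f}x")

The daily rate alone works out to almost a zettabyte per year, which makes the question of where this data gets processed, and how it gets there, anything but academic.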

At a high level, the potential processing locations are:

  1. At the edge where the data is collected
  2. In the cloud where the large compute resources are located

In general, it can be expected that cloud-based AI and on-device AI will run alongside each other. So, digging deeper, the question becomes: what are the benefits of processing the data at each of these locations?

Cloud-based AI benefits:

  • Use of large datasets
  • Processing power required for training and for inference calls

Edge-based / on-device AI benefits:

  • Local processing without the round trip to the data center
  • Increased privacy by keeping the data local

The compute locations where edge-based / on-device AI takes place can be:

  • CPU
  • GPU
  • DSP (Digital Signal Processor)
  • Any other dedicated processor

Specific vendor solutions for more computational power on devices include:

  • Qualcomm: Snapdragon NPE (Neural Processing Engine)
    • Combines the power of CPU, GPU and DSP into a subsystem that handles computational AI tasks

https://www.qualcomm.com/news/releases/2017/07/25/snapdragon-neural-processing-engine-now-available-qualcomm-developer

  • Huawei: Kirin 970 chipset
    • Includes a built-in processor called the Neural Processing Unit (NPU)

https://consumer.huawei.com/en/press/news/2017/ifa2017-kirin970/

  • Apple: A11 Bionic
    • Implements a neural engine inside the A11 Bionic chip to take advantage of Core ML functionality

https://www.engadget.com/2017/12/15/ai-processor-cpu-explainer-bionic-neural-npu/

  • MediaTek: P40/X40 chipsets with built-in neural processing units
    • Deep learning SDK for devices with MediaTek chipsets to provide machine learning capabilities

https://www.myfixguide.com/mediatek-helio-p40p70-specifications-leaked/

  • Samsung: Standalone AI chip for its flagship phones

https://venturebeat.com/2018/07/17/samsung-debuts-fast-low-power-lpddr5-memory-for-5g-and-ai-mobile-apps/

Considering all these product announcements, we can see that AI has become an integral part of the modern smartphone, with AI engines introduced on the hardware side. Machine learning processing and heavy algorithmic calculations were traditionally done in the cloud and are now spreading to the device. We can expect AI to be architected in the future as a combination of edge and cloud solutions. Ideally, low-power training and inference happen offline on the device. Integration of AI frameworks (Google’s TensorFlow, Facebook’s Caffe2, Apple’s Core ML) seems to be essential in the future, because the large consumer datasets sit with giants like Google, Facebook and Tencent, which gives them a significant advantage. Hardware vendors like Huawei and Apple are in theory at a competitive disadvantage if their datasets remain limited. One approach for them can be to leverage open platforms and partnerships.
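To make the split between cloud and device concrete, here is a minimal sketch of the usual workflow, using TensorFlow Lite purely as an example (the tiny model, its input shape and the quantization setting are invented for illustration; Core ML or Caffe2-based pipelines follow the same train-in-the-cloud, infer-on-the-device pattern):

    # Sketch: train/convert in the cloud, run inference locally on the device.
    import numpy as np
    import tensorflow as tf

    # Stand-in for a real, trained model (built here only for illustration).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

    # Cloud side: convert the trained model into a compact on-device format.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]   # e.g. post-training quantization
    tflite_model = converter.convert()

    # Device side: load the converted model and answer inference calls locally.
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    sample = np.random.rand(1, 4).astype(np.float32)        # e.g. a sensor reading
    interpreter.set_tensor(input_details[0]["index"], sample)
    interpreter.invoke()
    print(interpreter.get_tensor(output_details[0]["index"]))

Only the small converted model is shipped to the handset; the interpreter then serves predictions locally, without a round trip to the data center, which is exactly the low-power, offline scenario described above.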

Looking at the edge-based / on-device AI, we can further differentiate the edge-based AI into the following subsets:

  • Device as edge
  • Enterprise premise network edge (AI running in a facility that does not rely on a constant connection to cloud-based compute resources)
  • Operator network edge (the compute resources here may be located in a micro data center sitting at a base station, edge router, central office, internet gateway or radio tower)

Having those compute resources at the operator edge in particular, often referred to as multi-access edge computing (MEC), can address the limitations of the traditional cloud-client model, in which the compute resources sit in the core cloud. With the increasing speed at which data is generated, the link to the core cloud compute resources tends to become a bottleneck.

Benefits of AI at the network edge include low latency for real-time services (the sketch below gives a rough sense of the numbers). This opens up the opportunity for AI-as-a-Service. Further reading can be found in the following article: https://ai.intel.com/artificial-intelligence-at-the-edge/.
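To get a feel for the latency argument, here is a rough, assumption-laden estimate of the propagation-delay floor alone. Signals in optical fibre travel at roughly two thirds of the speed of light, and the distances used below are made-up examples:

    # Propagation-only round-trip time, ignoring processing and queuing delays.
    FIBRE_SPEED_KM_S = 200_000  # ~2/3 of the speed of light, in km/s

    def round_trip_ms(distance_km: float) -> float:
        """Round-trip propagation delay in milliseconds."""
        return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

    print(f"core cloud region, 1500 km away: {round_trip_ms(1500):.1f} ms")  # ~15 ms
    print(f"MEC node, 10 km away:            {round_trip_ms(10):.2f} ms")    # ~0.10 ms

Real round trips add radio-access, routing and processing delays on top of this floor, so moving inference from a distant cloud region to an MEC node near the base station can save tens of milliseconds, which is what makes real-time services feasible.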
