Evolution of Compute and Big Data
The primitive cloud, better known as “virtualization,” evolved as a way to abstract physical infrastructure that previously had to be deployed manually. The evolution of cloud computing from plain server virtualization is depicted in the flow below, developed by Gartner in 2012.
The first big step in compute evolution was from physical servers to virtual servers in data centers, bringing higher server efficiency, hardware independence, and uniform environments. This virtualization laid the foundation for cloud computing as a model for on-demand network access to a shared pool of resources that can be rapidly provisioned with minimal effort or interaction. The next big step in this journey is “serverless computing,” where developers only need to worry about the business logic; the platform dynamically determines how much infrastructure is needed and automatically provisions or de-provisions it to support the application. Here, cloud instances are no longer pre-allocated but are provisioned only when an event occurs. An example is the Internet of Things, where sensor-based devices react to triggers on the fly and virtual machines in the cloud retrieve and serve up the information.
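The serverless model above can be sketched as an event handler: the developer writes only the business logic, and the platform invokes it when an event arrives. This is a minimal illustration modeled loosely on the common handler style (event in, result out); the event fields and threshold are hypothetical, not any specific provider's API.

```python
# A toy serverless-style handler for an IoT sensor event. The platform, not
# the developer, decides when this runs: it is invoked only when an event
# occurs, and no instance stays allocated in between. All field names and
# the 30-degree threshold are illustrative assumptions.

def handle_sensor_event(event, context=None):
    """React to a single sensor reading pushed in by the platform."""
    device = event.get("device_id", "unknown")
    reading = event.get("temperature")
    if reading is None:
        # Nothing to process; the platform simply tears the instance down.
        return {"status": "ignored", "device": device}
    # Business logic only: flag readings above a threshold.
    return {"status": "processed", "device": device, "alert": reading > 30.0}
```

In a real deployment the platform wires events (a sensor message, an HTTP request, a file upload) to this function and scales the number of concurrent invocations automatically.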
The advances in cloud computing are set to bring other technological advances. For example, machine learning is becoming more powerful than ever. With so many companies moving to cloud-based computing, there is far more data available to train models and specialize them for tasks like image processing and audio filtering. This has enabled cloud providers to develop sophisticated machine learning APIs that can be used off the shelf, empowering application developers without requiring expertise in the underlying machine learning technologies. As machine learning becomes easier to work with, companies are moving from plain analytics to deep learning, uncovering new information and insights that are revolutionizing entire industries and business models.
The biggest challenge with cloud computing – which stores and retrieves data from offsite locations – is bandwidth. This problem is only going to grow as more and more physical objects come online (a.k.a. the Internet of Things) to transmit and receive data. Enter “edge computing,” the next big thing in cloud computing. It eases the bandwidth problem by keeping data at the edge of the cloud, where the real world starts, i.e. in local computers and devices. For example, say you have a laptop and two phones at home. Instead of every device retrieving updates from the cloud on its own, what if the laptop could download the updates once and share them with the phones?
The principle of edge computing is to use the enormous computing power all around us and to communicate locally. This setup reduces communication lag and cuts bandwidth by doing more computation near the data. A prime candidate for edge computing is the self-driving car: it needs signals from the cloud, but it cannot rely on a spotty internet connection when a split-second decision can prevent an accident.
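The laptop-and-phones example can be sketched as a tiny local cache: one edge node downloads an update from the cloud once, then serves nearby devices without touching the upstream link again. The class and the fetch function here are illustrative, with the cloud call stubbed by a plain function.

```python
# A toy edge cache: one local node fetches an update from the cloud once
# and serves it to nearby devices, so each device does not spend upstream
# bandwidth separately. fetch_from_cloud stands in for a real (slow,
# metered) network call; names are illustrative.

class EdgeCache:
    def __init__(self, fetch_from_cloud):
        self._fetch = fetch_from_cloud
        self._store = {}          # updates already downloaded locally
        self.cloud_fetches = 0    # upstream requests actually made

    def get(self, update_id):
        """Serve locally if cached; otherwise fetch from the cloud once."""
        if update_id not in self._store:
            self._store[update_id] = self._fetch(update_id)
            self.cloud_fetches += 1
        return self._store[update_id]

# Example: a laptop caches one OS update for two phones on the same network.
laptop = EdgeCache(lambda uid: f"payload-for-{uid}")
phone_a = laptop.get("os-update-1")   # first request goes upstream
phone_b = laptop.get("os-update-1")   # second request is served locally
assert laptop.cloud_fetches == 1      # one download, two consumers
```

The same pattern, generalized, is what edge deployments do at larger scale: computation and data stay close to the devices, and only cache misses cross the bandwidth-constrained link to the cloud.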