How cloud is evolving to handle the demands of AI
Just as cloud becomes more widespread and organisations wrestle with the new challenge of managing multiple cloud providers, a few experts are predicting cloud’s demise. In my view, writes Richard Blanford, managing director of Fordway, they have realised that business needs are changing but are asking the wrong question. Here’s why, and how I believe cloud will evolve.
Cloud is the next logical step in the regular waves of centralisation and decentralisation that characterise IT and telecoms. First, we all think that the best place for intelligence in the network is at the edge, and then technology changes and the most logical place for that intelligence becomes the centre. The network then adapts accordingly. For example, mainframes never died when client-server came along; instead, the terminal became a PC, and the mainframe became the database server.
Cloud has enabled organisations to consolidate data centres and benefit from economies of scale. Small businesses no longer need to invest in their own infrastructure, while large organisations can free up real estate and benefit from cloud’s flexibility and scalability for routine applications – although moving legacy applications remains more challenging. Cloud also offers significant advantages for archiving, back-up and disaster recovery. Many telecoms operators are benefiting by offering their own cloud services, creating new income streams.
However, we are also seeing rapid growth in intelligent devices, such as manufacturing robots, medical diagnostic systems and autonomous vehicles – what you might term ‘intelligent client mark 2’. These devices are, in effect, data centres in their own right that need to process information in real time, and for them the latency of cloud is becoming a major issue.
Take a robot scanning fruit on a conveyor belt and picking off substandard items. It needs to make instant decisions, not wait for information to transit six router hops and three service providers to reach the cloud data centre, then do the same on the return journey. Having intelligence at the edge is vital for applications that must operate in real time, and this need will increase with the growth of ‘smart’ devices, embedded systems and artificial intelligence.
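A back-of-envelope calculation makes the point. The hop counts come from the example above; the per-hop delay, processing times and belt speed are illustrative assumptions, not measurements.

```python
# Rough latency budget for the fruit-sorting robot above.
# All figures are illustrative assumptions.

HOPS = 6                 # router hops each way, as in the example
PER_HOP_MS = 5           # assumed delay per hop (forwarding + propagation)
CLOUD_PROCESS_MS = 20    # assumed time to run the decision in the cloud
EDGE_PROCESS_MS = 15     # assumed time to run the same decision on-device

cloud_round_trip_ms = 2 * HOPS * PER_HOP_MS + CLOUD_PROCESS_MS
edge_decision_ms = EDGE_PROCESS_MS

# On a belt moving at 0.5 mm per millisecond, the item travels this far
# while the robot waits for the cloud's answer.
BELT_SPEED_MM_PER_MS = 0.5
distance_travelled_mm = cloud_round_trip_ms * BELT_SPEED_MM_PER_MS

print(cloud_round_trip_ms)     # 80 ms round trip to the cloud
print(edge_decision_ms)        # 15 ms decided at the edge
print(distance_travelled_mm)   # 40 mm of belt travel while waiting
```

Even with these modest assumptions, the round trip alone can cost the robot its pick window; deciding at the edge removes the network from the critical path entirely.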
In sectors such as insurance, where actuaries analyse massive amounts of data to enable underwriters to make policy decisions, cloud’s economies of scale will continue to offer significant advantages. There is no major benefit to them in moving intelligence to the edge, as such decisions are not sensitive to latency. Organisations therefore need to use cloud for what it is good at – scale, training and developing algorithms, and large-scale data stores – and bring the intelligence needed to make decisions to the edge when appropriate.
An example is a facial recognition system, where cloud can be used to store petabytes of photos to enable the system to be trained. The algorithm developed can then be loaded into the camera control system so that initial facial recognition happens at the edge. The system can then refer back to the data stored in the cloud if further confirmation is required.
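The split described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not a real system: the model, confidence threshold and cloud lookup are all stand-in assumptions.

```python
# Sketch of the train-in-cloud, decide-at-edge pattern described above.
# edge_model stands in for the algorithm trained on the cloud photo store;
# cloud_lookup stands in for the confirmation check against that store.
# All names and thresholds are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.9  # below this, defer to the cloud to confirm

def recognise(face_embedding, edge_model, cloud_lookup):
    # First pass runs locally in the camera control system: no network
    # round trip, so the result is available in real time.
    identity, confidence = edge_model(face_embedding)
    if confidence >= CONFIDENCE_THRESHOLD:
        return identity
    # Only low-confidence cases pay the latency cost of consulting the
    # full data set held in the cloud.
    return cloud_lookup(face_embedding)

# Stub implementations so the sketch runs end to end.
edge_model = lambda emb: ("alice", 0.95) if emb == "a" else ("unknown", 0.4)
cloud_lookup = lambda emb: "bob"

print(recognise("a", edge_model, cloud_lookup))  # decided at the edge
print(recognise("x", edge_model, cloud_lookup))  # confirmed by the cloud
```

The design point is that the cloud sets the edge up to succeed (training, storage) but sits outside the real-time decision path, exactly the supporting role the article describes.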
Each organisation needs to consider its own use case and choose the most appropriate solution, depending on how much real-time processing is required. All will benefit from the scale and flexibility of centralised cloud processing and storage, from construction companies putting together consortia to deliver specific projects such as Crossrail and HS2, which require capacity for a finite amount of time, to public sector organisations that can hand routine applications to a cloud provider in order to focus on their core activities.
Even organisations working at the cutting edge of robotics and artificial intelligence (AI) will benefit from cloud’s scale and capacity. However, their smart devices will need to rely on inbuilt intelligence, with cloud in a supporting role.