The number of new technologies in 2017 has been overwhelming: the cloud was adopted faster than analysts projected and brought several new tools with it; AI was introduced into just about every area of our lives; IoT and edge computing emerged; and a slew of cloud-native technologies came to fruition, such as Kubernetes, serverless, and cloud databases, to name a few. I covered some of these a year ago in my 2017 predictions, and it’s now time to analyze the trends and anticipate what will likely happen in the tech arena next year.
While we love new tech, the average business owner, IT buyer, and software developer glaze over at this massive wave of innovation and don’t know where to start turning it into business value. We will see several trends emerge in 2018, and their key focus will be on making new technology simple and consumable.
Integrated Platforms and Everything Becomes Serverless
Amazon and the other cloud providers are in a race to gain and maintain market share, so they keep raising the level of abstraction and cross-service integration to improve developer productivity and strengthen customer lock-in. We saw Amazon introduce new database-as-a-service offerings and fully integrated AI libraries and tools at last month’s AWS re:Invent. It also started making a distinction between different forms of serverless: AWS Lambda is now about serverless functions, while AWS Aurora and Athena are about “serverless databases,” broadening the definition of serverless to any service that hides the underlying servers. Presumably, many more cloud services will now be able to call themselves “serverless” by this wider definition.
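A serverless function in the narrow sense is just a handler that the platform invokes on demand, with the provider owning the servers underneath. A minimal AWS-Lambda-style sketch in Python, where the event shape is an illustrative assumption rather than a real API contract:

```python
import json

# Minimal AWS-Lambda-style handler sketch: the platform, not the developer,
# provisions and scales the servers that run this code. The "name" field in
# the event is an illustrative assumption, not a documented contract.
def handler(event, context=None):
    # Extract a value from the triggering event (e.g., an HTTP request body).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Locally, the handler is an ordinary function you can call directly; in the cloud, the platform wires events to it and bills per invocation.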
In 2018, we will see the cloud providers place greater emphasis on further integrating individual services and offering higher-level abstractions, with a focus on AI, data management, and serverless. These solutions will simplify the jobs of developers and operations professionals and hide the inherent complexity, but they carry the risk of even greater lock-in.
In 2017, we saw all cloud providers aligning with Kubernetes as the microservices orchestration layer, which relieved some of that lock-in. In 2018, we will see a growing set of open and commercial services built on top of Kubernetes that deliver a multicloud alternative to proprietary cloud offerings. iguazio’s Nuclio is of course a great example of such an open, multicloud serverless platform, as is Red Hat’s OpenShift multicloud PaaS.
The Intelligent Edge vs. the Private Cloud
The cloud provides the business agility necessary to develop modern, data-driven applications, whether at startups or at large enterprises operating like startups. The challenge is that you can’t ignore data gravity: many data sources still live at the edge or in the enterprise. This, combined with 5G bandwidth, latency constraints, new regulations like GDPR, and more, forces you to place computation and storage closer to the data sources.
Today’s public cloud model is one of service consumption: developers and users can bypass IT, deploy serverless functions, use self-service databases, or even upload a video to a cloud service that returns it translated into the desired language. But with the on-prem alternatives you must build the services yourself, and the technology stack is evolving so rapidly that it is virtually impossible for IT teams to build modern services that can compare with cloud alternatives, forcing organizations out to the cloud.
IT vendor solutions labeled “private cloud” are nothing like the real cloud, because they focus on automating IT operations. They don’t provide higher-level user- and developer-facing services; IT ends up assembling those out of dozens of individual open source or commercial packages, then adding common security layers, logging, configuration management, and so on. This has opened an opportunity for cloud providers and new companies to enter the edge and on-prem space.
In 2017, we saw Microsoft CEO Satya Nadella increase his focus on what he calls “the intelligent edge.” Microsoft introduced Azure Stack, a mini version of the Azure cloud, which unfortunately contains only a small portion of the services Microsoft offers in the cloud. Amazon started delivering edge appliances called Snowball Edge, and I expect it to double down on those efforts.
The intelligent edge is not a private cloud. It provides the same set of services and operational models as the public cloud, but it is accessed locally and, in many cases, operated and maintained from a central cloud, just like operators manage our cable set-top boxes.
In 2018, we will see the traditional private cloud market shrinking while at the same time the momentum around the intelligent edge will grow. Cloud providers will add or enhance edge offerings and new companies will enter that space, in some cases through integrated offerings to specific vertical applications or use cases.
AI from Raw Technology to Embedded Feature and Vertical Stacks
We saw the fast rise of AI and machine learning technologies in 2017, but despite the hype, they are in reality mainly being used by market-leading web companies like Amazon, Google, and Facebook. AI is far from trivial for the average enterprise, and there is really no reason for most organizations to hire scarce data scientists or build and train AI models from scratch.
We can see how companies like Salesforce have built AI into their platforms, leveraging the large amounts of customer data they host. Others are following that path, embedding AI into their offerings as a feature. At the same time, AI is getting a vertical focus, and we’ll start seeing AI software solutions for specific industries and verticals such as marketing, retail, health care, finance, and security. Users of these solutions won’t need to know the internals of neural networks or regression algorithms. Instead, they will provide data and a set of parameters and get back an AI model they can use in their applications.
AI is still a very new field, with many overlapping offerings and no standardization. If you used a framework like TensorFlow, Spark, H2O, or Python for the learning phase, you’ll need to use the same one for the inferencing (scoring) part. In 2018, we will see efforts to define AI models that are open and cross-platform. In addition, we will see more solutions that automate the process of building, training, and deploying AI, like the newly introduced AWS SageMaker.
From Big Data to Continuous Data
In the past few years, organizations have started developing a big data practice driven by central IT. Its goal has been to collect, curate, and centrally analyze business data and logs for future applications. Data has been collected in Hadoop clusters and data warehouse solutions and then used by a set of data scientists who run batch jobs and generate reports or dashboards. According to leading analysts, this approach has largely failed, with 70 percent of companies seeing no ROI (per Gartner). To gain ROI, data must be actionable: integrated into business processes and derived from fresh data, just as we see in targeted ads and in Google and Facebook recommendations.
Data insights must be embedded into modern business applications. For example, a customer accessing a website or using a chatbot needs an immediate response with targeted content based on his or her recent activities or profile. Sensor data collected from IoT or mobile devices flows in continuously and requires immediate action to drive alerts, detect security violations, provide predictive maintenance, or enable corrective actions. Visual data is inspected in real time for surveillance and national security; it is also used by retailers, alongside point-of-sale data like inventory status and customer preferences, to make real-time recommendations based on observed customer activities. Data and real-time analytics reduce business costs by automating processes that were once manual. Cars are becoming connected and autonomous. Telemarketers and assistants are replaced with bots. Fleets of trucks, cab drivers, or technicians are orchestrated by AI and event-driven logic to maximize resource utilization.
All this has already started happening in 2017.
Technologies like Hadoop and data warehousing were invented ten years ago and predate the age of AI, stream processing, and in-memory and flash technologies. Enterprises are now seeing that there is limited value in building data lakes, as they can perform data mining by using simpler cloud technologies. The focus is shifting from mostly just collecting data to using data continuously, an area where technologies focused on data at rest and central IT-driven processes just won’t fly.
In 2018, we will see an ongoing shift from big data to fast data and continuous, data-driven applications. Data will be continuously ingested from a wide variety of sources, then contextualized, enriched, and aggregated in real time and compared against pre-learned or continuously learned AI models, so that it can generate an immediate response to users, drive actions, and be presented in real-time, interactive dashboards.
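That continuous ingest-enrich-act loop can be sketched in a few lines of Python. The event field names, the device-profile lookup, and the alert threshold below are all illustrative assumptions, and a simple threshold check stands in for a learned model:

```python
# Sketch of a continuous, event-driven pipeline: each incoming sensor
# reading is enriched with context and acted on immediately, rather than
# batched into a data lake for later offline analysis.
ALERT_THRESHOLD = 90.0  # illustrative stand-in for a learned model's decision

def enrich(event, device_profiles):
    # Join the raw reading with contextual data (the "enrichment" step).
    profile = device_profiles.get(event["device_id"], {})
    return {**event, "location": profile.get("location", "unknown")}

def process(events, device_profiles):
    # In production this loop would consume an unbounded stream; a list
    # stands in for continuous ingestion here.
    alerts = []
    for event in events:
        enriched = enrich(event, device_profiles)
        if enriched["temperature"] > ALERT_THRESHOLD:
            alerts.append(enriched)  # immediate action, e.g., raise an alert
    return alerts
```

The design point is that enrichment and the decision happen per event, at ingest time, so the response latency is bounded by the loop body rather than by a nightly batch job.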
Developers will use prepackaged cloud offerings or integrate their solutions by using relevant cloud-native services. In the enterprise, the spotlight will move from IT to the business units and application developers who will be embedding data-driven decisions in existing business logic, web portals, and day-to-day customer interactions.
The bottom line for 2018 is:
- The intelligent edge will grow, and the traditional private cloud market will shrink.
- We’ll start seeing AI software solutions for specific industries and verticals. Also, AI models will start being open and cross-platform.
- Fast data, continuous applications and cloud services will replace big data and Hadoop.
- One way or another, cloud services will be easier to consume, thereby increasing the gap between them and traditional and private cloud solutions. So bring on the shackles and get ready to be even more locked in!
Happy New Year!