Cloud and SaaS companies invented the notion of micro-services and the “cloud-native” model to gain efficient scaling along with continuous development and operations. Legacy approaches don’t work for global services like Facebook, Google or eBay, which are always on. Containers and Docker were created as the ultimate packaging for such micro-services and new orchestration platforms like Kubernetes, Docker Swarm, and DC/OS handle their deployment, scheduling and life-cycle. Serverless and FaaS are basically an evolution of this model with more automation.
What Does Cloud-Native Mean?
We want to deliver elastic applications which evolve or scale over time. The way to do this is to break apps into multiple tiers (micro-services), each scaling elastically on its own, with the micro-services communicating through reliable messages.
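The tiered, message-driven decoupling described above can be sketched in a few lines of Python. This is a minimal illustration only: an in-process `queue.Queue` stands in for a durable message stream such as Kafka or RabbitMQ, and the tier functions are hypothetical names, not a real framework.

```python
import queue

# Stand-in for a reliable message stream (e.g. Kafka); an assumption for illustration.
orders = queue.Queue()

def frontend_tier(order):
    """Stateless front end: validate and publish, keep no local state."""
    if order.get("qty", 0) <= 0:
        raise ValueError("invalid quantity")
    orders.put(order)  # hand off to the next tier via the message stream

def worker_tier():
    """Independently scalable worker: consume one message and process it."""
    order = orders.get()
    return {"id": order["id"], "total": order["qty"] * order["price"]}

frontend_tier({"id": 1, "qty": 2, "price": 5.0})
result = worker_tier()
```

Because the tiers only share messages, each one can be scaled out (or replaced) without touching the other.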
A micro-service cannot be stateful if we want to scale, handle failures or change versions on the fly. Unlike legacy apps, micro-services use immutable images and store configuration, logs, stats and data in elastic cloud data services (object, NoSQL/NewSQL, log/message streams).
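A minimal Python sketch of that discipline, under stated assumptions: configuration comes from the environment rather than a local file, and state lives in an external store (a plain dict stands in for a NoSQL service here; all names are illustrative, not from the original).

```python
import os

# Stand-in for an elastic cloud data service (e.g. a NoSQL table); illustrative only.
external_store = {}

def handle_request(user, payload):
    """Stateless handler: nothing is written to the container's own disk,
    so any replica can serve the next request or be replaced mid-flight."""
    region = os.environ.get("APP_REGION", "us-east-1")  # config from env, not a local file
    external_store[user] = payload                      # state goes to the shared store
    return {"user": user, "region": region}

resp = handle_request("alice", {"cart": ["book"]})
```

Since the handler holds no local state, scaling it is just a matter of running more copies of the same immutable image.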
Cloud data services are usually built by clustering a set of commodity servers (with local disks). We use pre-integrated cloud provider data services or roll our own using open-source or commercial software.
Developers and business owners immediately get the benefits of a cloud-native approach: it allows them to develop apps faster in an agile and continuous methodology, while elastic scaling meets demand fluctuations.
Why Containers Are Not VMs
Traditional infrastructure teams and vendors don’t think like cloud users and providers. They still see the world as VMs, vNICs and vDisks (virtual infrastructure, a.k.a. “private clouds”) and try to make containers work with existing practices. This means they focus on refactoring legacy or monolithic apps to run in containers, gaining only minimal packaging-automation benefits without the agility, elasticity and CI/CD benefits. They may as well keep those apps in VMs and forget about it.
At DockerCon, last week in Austin, I saw numerous vendors trying to make containers work like VMware or OpenStack. The most extreme example was vendors pitching SAN or hyper-converged block storage as the solution to the “container problem.” My conversations with three of the major storage vendors at the show went roughly like this:
Me: What does your product do for containers?
Them: We orchestrate the creation of block storage volumes for containers, provision file systems, snapshots, deduplication…
Me: But why do I need this if Docker already provides an (immutable and deduplicated) image file system, and persistent data is optional and expected to use file or database abstractions (not vDisks)?
Them: Oh, that’s why our sophisticated automation allocates and provisions block space, creates a file system on the disk volume (with fixed capacity) and attaches it to the container.
Me: But a disk volume cannot be shared among microservices (and sharing is essential if you are elastic), and I want capacity to be elastic, not fixed. Why would I need all the mess of provisioning disk capacity and formatting file systems if I use a shared file system? Can’t I just mount a share into the container…?
Them: Yes, the right solution is a clustered/shared file system or object store, but that’s too hard for us to develop; we don’t have it. Maybe in the future. We do provide value for persistent workloads like NoSQL databases.
Me: Right. Can’t I just use Cassandra, MongoDB or Elasticsearch on their own…? They all handle distribution and replication at the application level, they have built-in versioning (snapshots), and in any case we can’t snapshot them consistently at the storage layer because the data is distributed. I also noticed they offer their own compression and actually work better with ephemeral (local) storage.
Them: Oh, we didn’t know that. When we talk to IT guys (not developers) we hear they want to run legacy apps and databases in containers, just like they run VMs!
…This highlights the confusion surrounding containers and modern cloud technologies – there are still huge information gaps between developers and infrastructure teams.
Take the Extra Step to Focus on Cloud-Native
Ok, I get it. How do we build cloud-native? Just use your head and don’t get wrapped up in the hype. There are two ways of going about this:
If you’re looking for the fastest time to market and aren’t worried about performance, lock-in or long-term budgets, or if your company is relatively small, just use a public cloud with its native services. Pick a container orchestration system, preferably an open one like Kubernetes. Use elastic cloud-native object storage, databases and messaging systems, and focus on your stateless apps. You can even use “serverless” to avoid orchestration and CI/CD tools.
Or – if your company is large enough, or if you care about performance and data locality, build it like the cloud providers do, not like legacy apps. Build the baseline cloud-native services: object storage, logging, monitoring, identity management, configuration management, messaging, orchestration, etc. Use open-source tools or more robust commercial products. Keep in mind that for efficiency and robustness, data services like object storage, databases or clustered file systems are better off on dedicated servers with higher disk or flash capacity, just as it’s done in the public cloud.
Once you’ve created public cloud-like services, add your stateless applications and application lifecycle management tools, which hook into the infrastructure services through service bindings (i.e. URLs and credentials).
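A service binding can be as simple as a URL and credentials injected into the container’s environment. The Python sketch below shows an app resolving its object-storage binding that way; the variable names and values are assumptions for illustration, not any specific platform’s convention.

```python
import os

def resolve_binding(service):
    """Look up a service binding injected as environment variables,
    e.g. OBJECTSTORE_URL / OBJECTSTORE_TOKEN set by the orchestrator."""
    prefix = service.upper()
    url = os.environ.get(f"{prefix}_URL")
    token = os.environ.get(f"{prefix}_TOKEN")
    if not url or not token:
        raise RuntimeError(f"missing binding for {service}")
    return {"url": url, "token": token}

# Simulate what the orchestrator would inject at deploy time (illustrative values).
os.environ["OBJECTSTORE_URL"] = "https://objects.example.internal"
os.environ["OBJECTSTORE_TOKEN"] = "s3cr3t"
binding = resolve_binding("objectstore")
```

Because the app only sees a URL and credentials, the same image runs unchanged against a public cloud service or your own self-hosted equivalent.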
To build really fast cloud-native platforms, see how iguazio does it (in my KubeCon session) and watch my DockerCon interview on why micro-services are not mini-VMs and how analytics plays in that space.
Also, check out the Cloud Native Computing Foundation (CNCF) for more resources.