Updated on August 19, 2023.
Under the soothing shade of an apple tree, basking in the sun’s relentless warmth and overlooking the iconic landscapes of southern France, I recently found myself lost in contemplation. As my eyes traced the patterns of the drifting Clouds across the horizon, a question arose in my mind: how is it that such an abundance of data and technology can gracefully float together in such an orderly and fluffy manner? “Ah, it must be the work of DevOps,” I mumbled. “An orchestra of meticulously crafted infrastructure configurations, orchestrating a symphony of functionalities.” But then, like a sudden realization dawning upon me, it hit me (figuratively speaking): What if it’s not the shape Clouds take that matters, but rather their unique drifting pattern?
After all, while DevOps has undoubtedly paved the way for efficient and resilient Cloud architectures, it's worth considering whether its heavy processes might sometimes slow the adoption of new functionalities and hinder the onboarding of new members to the Cloud journey. Perhaps the true magic lies not just in crafting the most intricate spells but in choosing the right ones at the right time.
What then? Should we revert to configuring resources from the AWS Management Console, leveraging the untapped potential of ClickOps and emancipating ourselves from the burdensome nature of YAML files and infrastructure code?
Now, before you raise an eyebrow, assume that yet another sunstroke has taken its toll on my sanity, and launch a barrage of well-architected criticisms at me, allow me to set the stage. Yes, we're all familiar with DevOps as a practice and culture. Its tools and its emphasis on automation and code-driven workflows have shaped our digital landscapes for the better part of the last decade. DevOps has empowered us to orchestrate complex infrastructures, deploy applications at scale, and achieve operational excellence across the vast global network that Clouds provide. Much of this is still true. Yet, in our relentless pursuit of efficiency and reliability, have we overlooked something crucial?
In my own Cloud migration adventures, I, too, have devoted countless hours to the antisocial practice of coding and automating every nook and cranny of my AWS deployments. From verbosely describing immutable and seemingly forgettable VPC configurations to micro-managing the deployment of every ever-evolving, business-critical workflow: I was an ardent advocate of infrastructure code, believing that this rigorous approach was the catalyst for my organization's migration success, our YAML files the testament to our commitment to the Cloud, and proof of operational excellence.
However, I soon found myself reaching a tipping point. Instead of forging ahead and building the future, as I used to, I was forced to focus on the demanding task of maintaining the past (albeit not the "old" past, but the new one we had only just finished building): time and resources became scarce again. Just then, a realization dawned upon me: surely a cloud-native mindset should not be measured by the number of successfully migrated workloads but by the number of practitioners within the organization. And unfortunately, by enforcing such a strict infrastructure code policy, I had inadvertently constructed a labyrinth between my stakeholders and the Cloud itself. A maze where only a select few possess the map while others stand stranded at the entrance, yearning to explore but lacking the means to do so.
The burdensome nature of infrastructure code poses a significant challenge, particularly for smaller organizations: navigating its intricacies demands the expertise of skilled engineers, of which most companies can afford only a few.
All in all, from once being open-minded evangelists, our local Cloud Center of Excellence (CCoE) had gradually regressed to its original incarnation: the all-too-familiar, whimsical "I.T." service where every value-driving inquiry languishes for an indefinite amount of time, forever lacking adequate time or resources. Our initial vision of breaking down barriers and fostering collaboration between the Cloud and its stakeholders, saving us much-needed time in the process, had become muddled in the complexities of maintaining workloads, whether "server-full" or "server-less". The very essence of innovation and agility had been suffocated within the rigid confines we had inadvertently constructed.
Okay, I am probably being a little too over-dramatic.
Yet I would definitely question the essence of DevOps applied to Cloud Computing: it does seem a little nonsensical when we think about it. With the increasing adoption of serverless technologies and managed services, the traditional notion of "Ops" is becoming a thing of the past (and so is the "Dev" part, some would argue), blurring the line between where "Dev-and-Ops" ends and "Business" begins.
True. There surely was a time in my career when meticulously maintaining Ansible deployment scripts was imperative for ensuring our business resilience and success: a single point of failure in maintaining our databases, or a single misconfiguration of our Kubernetes cluster, could have had truly catastrophic consequences. However, in today's world, where managed services offer built-in, unbeatable capabilities for fault tolerance, data protection, and disaster recovery, the question arises: is maintaining such a high level of caution still as relevant as it once was?
In light of this, I propose the (re)introduction of "ClickOps": a new paradigm, never once heard of before /s, that challenges the status quo and invites us engineers to rethink our Cloud journey.
ClickOps advocates for empowering business stakeholders to directly manage their workloads through the (relative) user-friendliness of the AWS Management Console. Instead of relying solely on code-driven processes, ClickOps embraces the power of intuitive (-ish) interfaces, providing a clear pathway for non-technical individuals to bring their visions to the Cloud. It would live in harmony behind the well-defined guardrails and well-architected framework that "CloudOps" builds and maintains within organizations, ensuring a balance between the control and governance offered by code-driven workflows and the freedom and accessibility of configuring applications through the Console.
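To make the "guardrails" idea a little more concrete, here is a minimal sketch of what a CloudOps team might keep in code while stakeholders click freely inside it: an AWS Organizations Service Control Policy that denies actions outside approved regions and protects the audit trail, no matter what gets configured in the Console. The region list and statement names are illustrative assumptions, not a recommendation.

```python
import json

# Hypothetical guardrail maintained in code by "CloudOps": a Service
# Control Policy (SCP) that (1) denies most actions outside a set of
# approved regions and (2) blocks tampering with CloudTrail logging.
# Global services such as IAM and Organizations are exempted via NotAction.
GUARDRAIL_SCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    # Illustrative region allow-list.
                    "aws:RequestedRegion": ["eu-west-1", "eu-west-3"]
                }
            },
        },
        {
            "Sid": "ProtectAuditTrail",
            "Effect": "Deny",
            "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
            "Resource": "*",
        },
    ],
}


def policy_document() -> str:
    """Serialize the guardrail so it can be attached to an OU,
    e.g. with `aws organizations create-policy --content file://scp.json`."""
    return json.dumps(GUARDRAIL_SCP, indent=2)


if __name__ == "__main__":
    print(policy_document())
```

With a handful of such policies attached at the organizational-unit level, a stakeholder experimenting in the Console simply cannot wander outside the approved boundaries, which is what makes the "freedom within guardrails" trade-off tenable.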
But would we really be able to strike such a balance? Could we ensure compliance with architectural principles, security measures, and operational best practices while still encouraging stakeholders to play around and harness the full potential of the Cloud?
In an upcoming series of articles, I will explore the principles and practical implementation of my vision for ClickOps “2.0” and provide real-world examples of how organizations can embrace this paradigm shift, highlighting the benefits it brings and the challenges that may arise.
Stay tuned for the forthcoming articles if you find yourself intrigued by such questions on putting human intuition back at the center of our Cloud operations. You can also continue the journey by reading the next article in the series here, where I delve into the role of the AWS Management Console in this transformative shift towards user-centric Cloud architecture.
Going further?
In the process of crafting this blog post, I stumbled across several thought-provoking articles from some of the industry "big boys". These articles, much like mine, question the fundamentals of DevOps practices in the Cloud. If you're hungry for more insights and perspectives, I highly encourage you to explore them. Happy reading!