The Role of Middleware Admins in a DevOps World

    By: Eric Mader on Oct 16, 2017

A few years ago, a large public utility in the Midwest was running several important web services on WebLogic 11g. These web services were a critical component of the company's billing and financial services, and were deployed to a very traditional IT infrastructure: each service ran on a WebLogic cluster installed across three bare-metal Linux server blades inside an on-premises data center.

Bill Baker, the founder of the Business Intelligence team within Microsoft's SQL Server group, coined the "cattle vs. pets" analogy, and it aptly describes the company's approach to these servers. They treated each blade like a pet. When one was "sick," they moved heaven and earth to "heal" it. On several occasions, managed servers on individual blades crashed, requiring several hours of digging through server logs and analyzing massive stack traces. New deployments and WebLogic patching occurred during scheduled outages that often lasted unacceptably long - sometimes as much as 12 hours of complete downtime. New versions of the web services were deployed rarely, on quarterly release cycles that required significant human intervention and were difficult to roll back if an issue was discovered.

    In a DevOps world, these servers would be treated more like cattle. Yes, all of the servers within a WebLogic cluster would still need to be online in order to maintain redundancy and throughput. However, if an individual node went down or began failing health checks, it could have been automatically (and metaphorically) shot and replaced with an identical configuration. New releases and rollbacks would have been automated, required minutes instead of hours, and offered no noticeable downtime to end users and client services.

    DevOps, as the name implies, is the merging of both Development and Operations practices into a unified and cyclical development pipeline. Both new application code and WebLogic infrastructure are submitted at one end, tested side-by-side over a series of stages, then published as production-ready at the opposite end.

     

[Image: 1.png]

     

There are two key benefits to this methodology: Continuous Integration (CI), the regular merging of new application code into a central repository, which automatically triggers builds and automated testing; and Continuous Delivery (CD), the automated release of new application code, which enables dramatically shorter and more frequent release cycles.
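In practice, a CI stage often reduces to a short scripted sequence. As a rough sketch - the myapp image name, the test script, and the registry host below are invented placeholders, and $BUILD_NUMBER stands in for a CI-provided build identifier - a single commit might trigger:

> docker build -t myapp:$BUILD_NUMBER .
> docker run -d --name myapp-ci -p 7001:7001 myapp:$BUILD_NUMBER
> ./run-smoke-tests.sh http://localhost:7001/
> docker rm -f myapp-ci
> docker tag myapp:$BUILD_NUMBER registry.example.com/myapp:$BUILD_NUMBER
> docker push registry.example.com/myapp:$BUILD_NUMBER

If the tests fail, the pipeline stops and the image is never published; if they pass, the image moves on to the delivery stages untouched.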

A common misconception is that DevOps is all about offloading the building, configuration, and tuning of WebLogic environments to Application Developers. In reality, WebLogic Administrators fill a new and vital role as DevOps Operation Developers, working in sync with Application Developers to build, test, and release WebLogic environments as if they were just another feature of the application code.

A powerful tool that allows Operation Developers to easily work with WebLogic environments in the pipeline is containerization, namely Docker. Oracle maintains a Container Registry (https://container-registry.oracle.com/), which contains baseline Docker images built and supported by Oracle for WebLogic 12.2:

     

[Image: 2.png]

     

Without any modifications, these Docker images are fairly basic. Running one with minimal arguments results in a container with a baseline domain and a single AdminServer. The true power of these images lies in using them as base images in Dockerfiles, which allow Operation Developers to templatize the components of a WebLogic environment and perform traditional administrative functions as part of the image build.
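For instance, after accepting the license agreement on the registry website and authenticating with docker login, pulling and starting the stock image takes only a few commands. (The container name wls-base is arbitrary, and the exact mechanism for supplying admin credentials varies by image version - consult the image's documentation on the registry.)

> docker login container-registry.oracle.com
> docker pull container-registry.oracle.com/middleware/weblogic:latest
> docker run -d --name wls-base -p 7001:7001 container-registry.oracle.com/middleware/weblogic:latest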

    For example, the following very simple Dockerfile extends the latest WebLogic 12.2 Docker image, runs YUM updates, and auto-deploys a sample application:

FROM container-registry.oracle.com/middleware/weblogic:latest

# Run YUM updates
RUN yum -y update

# Autodeploy application WAR file
COPY VTSample.war /u01/oracle/user_projects/domains/base_domain/autodeploy/

     

The docker build command creates a new image from this Dockerfile. In the following example, a new image named vtapp and tagged as version 0.1 is built from the Dockerfile in the current directory (the trailing dot specifies the build context):

     

    > docker build -t vtapp:0.1 .

     

The image can then be run on any platform running Docker, including a local workstation. The -d flag runs the container in the background, and -p 7001:7001 forwards the container's WebLogic listen port to the host:

     

    > docker run -d -p 7001:7001 vtapp:0.1

     

Using the forwarded port, the application is immediately available in a web browser:

     

[Image: 3.png]
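The same check can be scripted for automated testing. In the sketch below, the /VTSample context path is an assumption based on the WAR file name, since autodeployed applications default to the base name of the archive:

> curl http://localhost:7001/VTSample/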

     

The Dockerfile essentially becomes part of the application itself, allowing the WebLogic environment to progress down the CI/CD pipeline alongside the code. This has many advantages over a traditional development methodology. For instance, Application Developers can use the image when working on code on their local workstations, quickly spinning up and destroying copies of the actual infrastructure that will be used during testing and production. Likewise, because each pipeline build copies the latest application artifact into the image, the application is always tested on the same WebLogic environment that will eventually be used in Production. Issues caused by the WebLogic stack can be identified in QA and UAT, before ever reaching an environment used by actual end users and services.
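As a simple illustration of that workstation workflow (the container name vtapp-dev is arbitrary), a developer can stand up a disposable copy of the production-like environment in seconds and throw it away when finished:

> docker run -d --name vtapp-dev -p 7001:7001 vtapp:0.1
> docker rm -f vtapp-dev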

The major benefit of both an Operation Developer and a containerized WebLogic environment, however, comes when deploying the application to Production. Containerization platforms such as Kubernetes and Amazon Elastic Container Service (ECS) can automatically roll out new containers, with no noticeable downtime for the application. If a container doesn't start properly or fails health checks, it can be automatically destroyed, restarted, or even rolled back to a previously working version.
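As a minimal sketch of what this looks like on Kubernetes - assuming the vtapp image has been pushed to a reachable registry and runs under a Deployment (and container) named vtapp - a rolling update, a status check, and a rollback are each a single command:

> kubectl set image deployment/vtapp vtapp=registry.example.com/vtapp:0.2
> kubectl rollout status deployment/vtapp
> kubectl rollout undo deployment/vtapp

Kubernetes replaces containers incrementally during the update, so healthy old instances keep serving traffic until their replacements pass health checks.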

    A robust DevOps methodology does require a new role: the Pipeline Tools Architect, who is responsible for the deployment, maintenance, performance, and reliability of the underlying containerization platform. Pipeline Tools Architects don't necessarily care about what's running inside the containers, but do care that they stay running and are not hindered by the outages and performance issues of the supporting Infrastructure stack. This role allows Operation Developers to focus more on building robust WebLogic environments, and less on the platform on which they run.

Overall, Middleware Administrators should be excited about - not afraid of - DevOps. Gone are the days of being handed a WAR file, digging through log files when it doesn't deploy, and trying to explain that the application code is the culprit. Filling the role of Operation Developer will take some getting used to, but the methodology represents a huge opportunity to be involved with the entire development process. The overall environment build is much more templated, much easier to deploy, and allows WebLogic servers to be treated like cattle - cowboy hat not required.
