Nowadays, industries are moving toward microservice and cloud-native architectures that make use of containers to scale and deploy applications. But containers come with their own challenges and add complexity to the infrastructure to maintain, especially in large, dynamic environments (if you are not familiar with containers and Kubernetes, don’t worry: catch up with our latest post, Containers and Workload Automation 101).
IBM Z Workload Scheduler collects a wealth of information about its workload execution and database definitions in different formats. You can obtain this information from messages issued on the SYSTEM log and/or on the IZWS product log, or from AUDIT function reports.
Starting from this information, you can perform analyses to better tune the workload, prevent problems, and more.
All the questions you never dared ask about containers, Docker and Kubernetes
You have heard a lot about containers in recent years, but you still do not know how to leverage them with Workload Automation? Then this is the right place to start.
We collected the most frequent questions our users ask when we talk about containers.
A new easy way to Create, Modify, Replace and Backup Variable Tables
The Workload Automation Programming Language (WAPL) interface allows you to easily manage Variable Tables through batch jobs. In fact, it supplies a Batch Loader-like processor through which you can CREATE and MANIPULATE variable tables inside the IBM Workload Scheduler database.
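As a rough sketch of the idea, a batch job passes WAPL statements to the processor in its input stream. Note that the job card, program name, parameters, and statement keywords below are illustrative placeholders, not taken from the WAPL reference; consult the product documentation for the exact syntax:

    //WAPLJOB  JOB (ACCT),'CREATE VARTAB',CLASS=A
    //* Illustrative only: PGM name, PARM values and the statement
    //* keywords are hypothetical -- check the WAPL reference.
    //STEP1    EXEC PGM=WAPLPROC,PARM='SUBSYS(TWSC)'
    //SYSIN    DD *
      VARTABLE ACTION(CREATE) NAME(PAYTAB)
      VARIABLE TABLE(PAYTAB) NAME(RUNDATE) VALUE(&ODAY)
    /*

Because the statements live in a batch job, the same deck can be versioned, re-run to rebuild a table, or adapted to back up definitions before replacing them.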
Agility and Management of Large Enterprise Workloads through IBM Z Workload Scheduler. Fast, Intelligent, Dynamic!
Drives application modernization with container-based deployments and support for batch resources and data analytics
IBM Z Workload Scheduler is the IBM Z offering that allows you to manage and automate batch execution across your enterprise, including applications running on mainframe and distributed platforms as well as in the cloud and in containers. The Dynamic Workload Console is a user-friendly web UI available with the product (greatly improved in v9.5) that provides end-to-end visibility and control over the batch plan and its execution. Embedded predictive scheduling and what-if simulation are key features of the product that keep SLA deadlines under control despite unpredictable workloads and events.
It is with great pleasure that I introduce you to HCL HERO, an intelligent HCL Software Solution:
HERO, a Healthcheck and Runbook Optimizer, enables WA Administrators to easily monitor the health of their servers and perform informed recovery actions with specialized Runbooks. HERO frees up administrator time, reduces manual labor, reduces server downtime, and improves IT operational efficiency across the enterprise.
Enterprises depend on Workload Automation to manage business critical workloads, reduce operating costs and deploy new services faster.
Worried about how to summarize the stats of your historical jobs? Want to know the hassle-free solution for it?
Then Let’s Get Started!
If you are looking for statistics that cover success and error rates; minimum, maximum, and average durations; and late and long-duration runs, then your choice could be the “Job Run Statistics (JRS) Report”.
When a customer approaches scheduling workloads on iSeries environments for the first time, a set of questions usually arises. This blog is a collection of these questions and their answers, based on WA 94FP3.
How much does it cost? This is the key question asked by a buyer when evaluating a software purchase.
The answer is not always easy, because the cost of a software solution is not just the price to be paid; it is something more complex: it is the TCO.
As you probably know, TCO stands for “Total Cost of Ownership”, and it is a highly important parameter for evaluating the Return on Investment (ROI), which ultimately determines whether the software solution is worth the expense.
Over the years, many features have been developed that involve increasingly complex analyses and actions in Current Plan modifications driven by ad hoc requests.
At the same time, customers’ daily workloads have grown in size and complexity, becoming more dependent on ad hoc actions (ETT processing, automatic recovery actions, etc.).
Storage consumption also increased, so we delivered via SPE (930 and 950) the possibility to use a Data Space dedicated to MCP actions.