What Is Traditional Job Scheduling? And How Did Enterprise Job Scheduling Evolve?

Discover the first evolutionary step beyond the manual execution of jobs and tasks, and learn how enterprise job scheduling has evolved.


Job scheduling has been around for decades. Defined most simply, it is the orderly, reliable sequencing of batch program execution; put another way, a job scheduler is a tool for automating IT processes. Historically, a scheduler has been an IT Ops data center tool rather than something application-centric, established to meet the need to run jobs at the operating system level. Job scheduling uses clocks and calendars to determine the timing of various processes critical to the business. For example, the Finance department establishes a regular time when the books will close, which in turn determines when IT Ops will need to run month-end financial reports.

Job scheduling packages control the unattended processing of batch jobs on a single system. Job schedulers are generally platform-specific: they are configured to submit batch jobs for execution according to a pre-defined schedule or after a dependent event occurs, and in most cases this is done on a platform-by-platform basis. In fact, many operating systems have a native job scheduler built in. Microsoft Windows features Windows Task Scheduler, while Linux and UNIX platforms have cron. These job schedulers emerged as tools for executing tasks during the "batch window," that is, running batch jobs after hours when business processes are finished. Job schedulers take care of activities like database maintenance and prevent the need for operator intervention during the job schedule.
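As a concrete illustration, a cron schedule is expressed as five time fields followed by a command. The entries below are a minimal hypothetical sketch of batch-window scheduling; the script paths and times are illustrative assumptions, not taken from any real system:

```
# Crontab fields: minute hour day-of-month month day-of-week command

# Run nightly database maintenance at 1:30 AM, after business hours
30 1 * * * /opt/batch/db_maintenance.sh

# Run month-end financial reports at 11:00 PM near month end.
# Cron has no "last day of month" syntax, so operators often schedule
# days 28-31 and add a guard check inside the script itself.
0 23 28-31 * * /opt/batch/month_end_reports.sh
```

Note how everything is driven purely by the clock: cron has no notion of whether the data these jobs depend on has actually arrived.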

Job scheduling was one small evolutionary step beyond the fully manual execution of jobs and tasks known as batch processing. Realizing the limitations of batch processing, IT operators eventually began looking to operating system tools to help automate labor-intensive manual job management. In the UNIX world, for example, that tool was cron, driven by crontab files. Because cron fulfilled the urgent need for a job scheduling tool, it was widely used despite significant drawbacks such as:

  • No business calendars
  • No dependency checking between tasks
  • No centralized management functions to control or monitor the overall workload
  • No audit trail to verify jobs had run
  • No automated restart/recovery of scheduled tasks
  • No recovery from machine failure
  • No flexibility in scheduling rules
  • No ability to allow cross-platform dependencies
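The lack of dependency checking, in particular, forced operators to chain related jobs by hand inside wrapper scripts and inspect exit codes themselves. The sketch below illustrates that workaround; the two job functions are stand-in `echo` commands, not real batch programs:

```shell
#!/bin/sh
# Manual dependency chaining: cron cannot express "job B depends on job A",
# so the operator wraps both jobs in one script and checks exit codes.
# extract_data and load_data are hypothetical stand-ins for real batch jobs.

extract_data() { echo "extract complete"; }   # stand-in for an extract job
load_data()    { echo "load complete"; }      # stand-in for a load job

if extract_data; then
    load_data
else
    echo "extract failed; skipping load" >&2
    exit 1
fi
```

Every such wrapper is extra code to write, test, and maintain, which is exactly the hidden cost the list above describes.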

Despite these limitations, job scheduling increased in both use and popularity. On the mainframe, a combination of internal operating system functionality and third-party scheduling tools was somewhat more sophisticated, but still fell short of the more advanced automation challenges that were emerging. Over time, data centers began to shift from a mainframe-centric world to a distributed systems environment using multiple servers. Likewise, businesses increasingly moved off singularly powerful legacy systems and onto distributed servers built on UNIX, Linux, or even Windows. This required purchasing more servers; no standalone server matched the power of a mainframe, but working in combination they could offer equivalent power at a fraction of the cost.

Challenges of Job Scheduling

In concept, the demands of distributed job scheduling were similar to those of the mainframe-driven model of time-based sequencing and dependencies. However, the move to distributed computing created new difficulties. For example, the lack of centralized management control became an issue when jobs ran on many different servers rather than on a single central computer. As mentioned above, traditional job scheduling software generally runs jobs on only one machine. In a distributed environment, this introduces a number of problems for organizations, including:

  • Silos: job schedulers generally don't talk to or work with each other across platforms. The resulting lack of synchronization between schedulers on different operating systems meant that critically related jobs could fail to run, or run out of sequence.
  • Complexity: IT personnel need more time to perform their duties. Scheduling and maintenance become more complicated as several schedulers on different operating systems must now be manually configured and maintained, increasing complexity and the potential for errors.
  • Manual intervention: job schedulers frequently require manual intervention to correct problems between related scheduled processes on different machines, such as when a file is created on one machine and FTP'd to a second machine to be processed.
  • Extra programming: job scheduling frequently requires additional scripting or programming to fill the gaps that occur when coordinating processes between machines and operating systems.
  • Missing links: traditional job schedulers do not come with integrated or managed file transfer capabilities, limiting or altogether halting the data supply chain until manual workarounds are configured.
  • Rigidity: job scheduling works best when each sequence of jobs begins and ends on a single platform. Moving jobs across platforms is therefore quite difficult, causing new challenges when business processes inevitably change or evolve.
  • System incompatibility: job schedulers often lack compatibility with systems management solutions, leading to frequent critical update issues, errors, and additional network maintenance.
  • Visibility: although job schedulers are typically governed by service level agreements (SLAs), running multiple schedulers on multiple machines or platforms greatly limits the ability to accurately evaluate run times against those agreements.
  • Time-based: job schedulers typically run batch processing at a set time, an approach known as time-based scheduling. Emerging automation use cases, however, require a dynamic automation approach based on real-time events.
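The time-based versus event-based distinction can be sketched in a few lines of shell: instead of firing at a fixed clock time, an event-based trigger reacts when something happens, such as a file arriving from an upstream system. The trigger path and job command below are hypothetical assumptions for illustration only:

```shell
#!/bin/sh
# Contrast with cron: a time-based scheduler would run process_orders at,
# say, 02:00 whether or not the input data had arrived. An event-based
# approach runs the job when the trigger appears. TRIGGER and the
# process_orders body are illustrative stand-ins.

TRIGGER=/tmp/orders_ready.flag

process_orders() {
    echo "processing orders"
}

# Check once for the trigger event; real event-driven tools watch
# continuously (e.g. via inotify) rather than checking on a clock.
poll_once() {
    if [ -f "$TRIGGER" ]; then
        process_orders
        rm -f "$TRIGGER"   # consume the event so it fires only once
        return 0
    fi
    return 1
}

# Demo: simulate the upstream system signaling that data is ready.
touch "$TRIGGER"
poll_once
```

In a time-based model, data arriving late means the job either fails or processes stale input; the event-based model above simply runs whenever the data is actually ready.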

Conclusion

Job scheduling was a necessary and revolutionary step forward in IT automation, allowing IT operations to eliminate many of the manual processes filling their schedules and opening a world of possibilities for digital enterprises. Given the complexity of modern business environments, however, traditional job scheduling alone is not a set-and-forget automation solution. To achieve truly dynamic, real-time automation, organizations need enterprise job scheduling software that can coordinate IT workflows and jobs across platforms and environments, orchestrating previously siloed systems into one harmonious force. If your organization is interested in taking your traditional job scheduling to the next level, contact us today to learn how.

Start Your Automation Initiative Now

Schedule a Live Demo with a Stonebranch Solution Expert

Further Reading

Static vs. Dynamic IT Automation, and How They Work Together


Dynamic IT automation whitepaper

Dynamic IT Automation: Why "dynamic" is different and how it enables real-time automation.


What is DevOps, Why Does it Exist, and How Does it Help?


Cloud Orchestration and Automation Explained

When combined with Workload Automation (WLA), Cloud Orchestration is a powerful approach to breaking down platform and application silos.