The Hybrid Cloud Conundrum

Hybrid cloud environments pose a major obstacle for organizations. Discover how to provide seamless experiences for both users and administrators.


Many organizations are making a marked move toward a hybrid technology platform. In practice this means a 'mixed' platform: certain workloads run on physical servers, others on virtualized servers owned by the organization, and still others on private and public cloud systems. This diverse landscape creates significant problems for organizations that go down this route, because bringing such a mixed platform together at the data level requires many point-to-point, hard-coded integrations. Organizations that need to respond quickly to changes in their markets therefore face a challenge: every hard-coded fragment must be rewritten, tested, and maintained, which takes time and can be costly.

Within a well-orchestrated hybrid IT environment, IT Ops teams automate workloads that operate across mixed platforms in a way that provides a seamless experience. In these scenarios, workloads are portable: data needs to be flexible enough to move wherever it is needed at any given point in time. Stateless computing, microservices, and containers all offer solutions at the basic compute level, but data transfers can still be a sticking point.

For example, consider a mainframe workload that needs to transfer data from its own environment to a cloud platform. Sounds easy enough and, sure, it could be coded so that the data moves directly to Amazon S3 or Azure Blob storage…but what if greater flexibility is required? Wouldn't it be nice to be able to say that, whereas AWS S3 made sense at a financial or performance level yesterday, Azure Blob makes greater sense today, and therefore a simple change of target is all that is needed? It is always good to have a Plan B should Plan A not work out, not to mention that it would be useful to be able to change the storage target at will, no matter what.
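To make that concrete, here is a minimal sketch of a configurable transfer target, assuming the boto3 and azure-storage-blob Python SDKs; the bucket and container names and the TRANSFER_TARGET environment variable are illustrative, not any particular product's API:

```python
import os

import boto3
from azure.storage.blob import BlobServiceClient

def upload_to_s3(local_path: str, bucket: str, key: str) -> None:
    """Send a file to an Amazon S3 bucket."""
    boto3.client("s3").upload_file(local_path, bucket, key)

def upload_to_azure_blob(local_path: str, container: str, blob_name: str) -> None:
    """Send a file to Azure Blob storage."""
    service = BlobServiceClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    )
    with open(local_path, "rb") as data:
        service.get_blob_client(container=container, blob=blob_name).upload_blob(
            data, overwrite=True
        )

def transfer(local_path: str, destination: str) -> None:
    """Dispatch on configuration: changing the target is a config edit, not a rewrite."""
    target = os.environ.get("TRANSFER_TARGET", "s3")  # e.g. flip to "azure" today
    if target == "s3":
        upload_to_s3(local_path, bucket="example-bucket", key=destination)
    else:
        upload_to_azure_blob(local_path, container="example-container", blob_name=destination)
```

Because the target is read from configuration, yesterday's S3 decision can become today's Azure Blob decision without touching the workload itself.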

The target storage system is not the only problem, either. What if the compute workload requires aggregating input data from different environments (Windows, Linux, Unix, and so on), or requires a complex set of events across a data environment to be monitored and acted upon? Hard coding such workflow requirements is highly counterproductive: it forces an organization to keep using outdated, constraining 'rules' while its business changes and demands ever greater flexibility.
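As a small illustration of why cross-environment aggregation resists hard coding, consider line endings alone. This sketch, with hypothetical paths, normalizes inputs landed from Windows, Linux, and Unix hosts before a workload consumes them:

```python
from pathlib import Path

# Hypothetical landing areas fed by different source environments.
SOURCES = [
    Path("incoming/windows/orders.csv"),  # typically CRLF line endings
    Path("incoming/linux/orders.csv"),    # LF line endings
    Path("incoming/unix/orders.csv"),     # LF line endings
]

def aggregate(sources: list[Path], merged: Path) -> None:
    """Normalize and concatenate inputs into one dataset for the workload."""
    with merged.open("w", encoding="utf-8", newline="\n") as out:
        for src in sources:
            # errors="replace" stops one bad byte from aborting the whole run
            text = src.read_text(encoding="utf-8", errors="replace")
            out.write(text.replace("\r\n", "\n"))  # normalize CRLF to LF

aggregate(SOURCES, Path("incoming/merged_orders.csv"))
```

Every new source environment adds another special case like this; an engine that handles them generically keeps such rules out of the workload's own code.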

What is really required is a powerful underlying IT automation engine that ensures data can be transferred from a source to a target in a fully secured and monitored manner, whenever required, while also allowing the source or target to be changed as needed. The engine should be as platform-agnostic as possible, able to integrate with every platform the organization uses, either natively or via open APIs, and it must support both real-time and batch transfers.
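One way to picture such an engine is a declarative job definition with pluggable platform adapters. This is a hypothetical sketch; the job schema and adapter registry are illustrative, not any real product's interface:

```python
from typing import Callable, Dict

# Adapter registry: the engine resolves platform names to transfer functions,
# so swapping targets is a one-line change to the job definition.
ADAPTERS: Dict[str, Callable[[dict, bytes], None]] = {
    "s3": lambda cfg, data: print(f"PUT s3://{cfg['container']} ({len(data)} bytes)"),
    "azure-blob": lambda cfg, data: print(f"PUT azure://{cfg['container']} ({len(data)} bytes)"),
}

JOB = {
    "name": "nightly-settlement-extract",
    "source_file": "settle_daily.dat",
    "target": {"platform": "azure-blob", "container": "settlements"},  # was "s3" yesterday
    "mode": "batch",  # a real engine would also accept "real-time"
}

def run(job: dict) -> None:
    """Read the source once; dispatch the write by configuration."""
    with open(job["source_file"], "rb") as f:
        data = f.read()
    target = job["target"]
    ADAPTERS[target["platform"]](target, data)
```

The point is that source, target, and mode are data the engine reads at run time, not code that must be rewritten when the business changes its mind.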

Grant that engine the capacity to monitor events and react accordingly, along with open APIs through which other systems (such as job schedulers and management systems) can also benefit from it, and the result is a solid environment that lets an organization evolve at a rate that makes sense to the business, rather than one prescribed by its IT department.
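Event monitoring can be as simple in concept as watching for a file to arrive and reacting. The following sketch polls a hypothetical landing directory; a production engine would use OS-level notifications and handle many more event types:

```python
import time
from pathlib import Path

LANDING = Path("/data/landing")  # hypothetical watched directory
seen: set[Path] = set()

def on_file_arrived(path: Path) -> None:
    """React to the event, e.g. by dispatching the configured transfer job."""
    print(f"event: {path.name} arrived; dispatching transfer")

while True:
    for path in LANDING.glob("*.csv"):
        if path not in seen:
            seen.add(path)
            on_file_arrived(path)
    time.sleep(5)  # simple polling; real engines subscribe to notifications
```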

Add in the capability to deal with failover and network issues in an intelligent manner, and that seamless experience becomes a reality. 
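In code terms, 'intelligent' failure handling usually means retrying with backoff before failing over to a secondary target. A minimal sketch, with the transfer callables left abstract:

```python
import time

def transfer_with_failover(send_primary, send_secondary, data, retries=3):
    """Try the primary target with exponential backoff, then fall back."""
    delay = 1.0
    for _ in range(retries):
        try:
            return send_primary(data)
        except ConnectionError:
            time.sleep(delay)  # transient network issue: wait and retry
            delay *= 2         # exponential backoff between attempts
    return send_secondary(data)  # primary exhausted: switch to Plan B
```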

Finally, such an engine sounds good on paper, but it can become a burden if everything you want it to do has to be heavily coded. Look for systems that come with a library of existing functions, especially those that also offer a range of third-party libraries. Better still, look for ones with community support through the likes of GitHub, enabling you to find solutions to your specific needs with little or no coding at all.

Such engines are not just a dream; they exist today, and they can make the management and automation of a hybrid cloud platform far easier, freeing up both administrator and user time to focus on what the organization really needs.

The future for IT platforms is a true hybrid compute platform, with a range of different underlying systems and workloads. Among other best practices, be sure to hide much of this underlying complexity behind a solid abstraction layer. For data, that means putting a fully flexible, platform-agnostic data transfer engine in place.

With its affordability, ease of use, and broad platform coverage, Stonebranch is an excellent choice to consider when planning hybrid cloud file transfer for your organization. Contact us today to learn more about how we can make your cloud file transfer seamless, simple, and secure.

Start Your Automation Initiative Now

Schedule a Live Demo with a Stonebranch Solution Expert

Further Reading

Read the blog: Automation Trends 2024: Top Takeaways from the Annual Global State of IT Automation Report

The results are in! IT Ops, DevOps, DataOps, CloudOps, PlatformOps, and ITSM pros worldwide share their thoughts on automation and orchestration in the 2024…

Read the analyst report: Stonebranch 2024 Global State of IT Automation Report

The results are in! New research reveals 2024's IT automation and orchestration benchmarks for IT Ops, DevOps, CloudOps, and DataOps teams.

Read the blog post: 6 Top Trends in Infrastructure and Operations (I&O) for 2024

2024's top trends in infrastructure and operations (I&O), including genAI, IT orchestration, MLOps, and self-service automation.

Watch the webinar: Orchestration, Observability, and Control Over the Hybrid Data Pipeline

Hybrid data pipelines involve multiple schedulers and real-time data issues — see how an orchestration layer can enhance observability and control.