More Than MFT: Unify Automation and Secure File Transfers Across Hybrid IT
Watch how orchestrated MFT turns file transfers into intelligent, automated workflows that connect your entire hybrid IT environment.
Hello, and welcome to today's webinar, More Than MFT: Unify Automation and Secure File Transfers Across Hybrid IT. In this webinar, you will see how the Stonebranch Universal Automation Center unifies secure, event-driven data transfers with full orchestration capabilities across mainframe, on-premises, and cloud platforms. I'm Lauren Tanzini from Stonebranch, and I will be your moderator today. Before we begin, let's cover a few things. This event is meant to be interactive between you and the speakers. If you have any questions, click on the Q&A tab on the dashboard and submit them there; we will answer them at the end of the session, and if we can't get to your question, we will follow up afterwards. Additionally, in the right-hand corner of your screen you will find handouts related to this session, including the new press release for UDMG that came out this morning, as well as additional MFT resources. With that said, it's my pleasure to introduce our Stonebranch speakers for today's session: Gwen Clay, chief product officer, and Robert Oslin, principal product manager. Let's get started. Gwen, take it away.

Good morning or good afternoon, wherever in the world you're visiting from today. It's always great if you drop a line in the Q&A about where you're connecting from. Let's dig right in. Today we're here to talk about our managed file transfer capabilities, which are key, integral components of our Universal Automation Center solution. At the heart of UAC is our controller, the big brain and central point of control. We also have a self-service capability, which lets people ease into the solution. Down below, in the Connect layer, we have agents: tentacles into the variety of servers, platforms, cloud infrastructure, and apps that make up the modern enterprise. And at the top, where we'll focus today, are our managed file transfer capabilities. It started over twenty years ago with Universal Data Mover, which is used in some of the largest and most demanding enterprises that exist today to transfer everything from small files to incredibly large datasets, from the mainframe all the way to delivering files within containers. It has a very powerful protocol capable of delivering at high speed and high volume, along with a clean, intuitive scripting language for pre- and post-processing of your files. Today, though, we'll focus on the Universal Data Mover Gateway, or UDMG, as we'll be referring to it throughout this presentation. This is a newer addition to our family, and what we'll be talking about today is UDMG 3.2. Over the past eighteen months, we've done an enormous amount of work re-envisioning what an MFT solution can be, building on a new technology platform with a solid focus on security and encryption, the user experience, and a core methodology that reflects the changing nature of modern IT landscapes. And this is where we've landed in terms of what the full spectrum of a modern IT landscape actually looks like.
It starts with on-premises, which represents the traditional platforms that have been in use for many years and may still run the most mission-critical systems and applications in your environment. That can evolve into a private cloud infrastructure, or even a sovereign cloud infrastructure. On top of that, there's the whole topic of hybrid cloud. The majority of you don't have just one cloud anymore; if you do, you're among the select few. The majority of our customers have two, three, possibly more clouds making up their landscape, and because of that they need to communicate and exchange data and files between clouds effortlessly. That's where intercloud file transfer comes in: the ability to stream files, share data, and synchronize files between different public clouds. And sitting at the heart of all three of those infrastructures is the whole business-to-business aspect. The ability to transfer and share files with your trading partners is critical. Businesses don't exist in a bubble, and today files seem as relevant as ever. If you move forward, Robert: there was always a lot of talk that files were going to go away, that new technologies like web services or enterprise service buses would replace file exchange. A lot of that didn't fit well within the security paradigms of large, regulated businesses. File volumes have continued to grow, file sizes have continued to grow, and exchanging data via files remains an enduring, secure way of moving data throughout your organization and with your trading partners. And this space has really evolved. When we started out in the late 90s and 2000s, these were typically standalone FTP servers or very early-generation MFT products, without much automation. It was back in the days when servers were pets rather than cattle: they actually had names, and you'd name your servers after the moons of Jupiter. It wasn't yet a world of ten thousand Linux virtual machines running in your enterprise. Then in the 2010s, solutions matured and you had consolidated, centralized, multi-protocol MFT that handled not only the more advanced FTP protocols but also the EDI-style protocols like AS2 and some of the more bespoke, industry-specific protocols. Automation became more important and event-driven, but it tended to be bolted on, and the scale of connections kept growing. Somewhere between 2010 and 2015, much of the managed file transfer world seemed to stall, and for a lot of reasons, I think. There was a lot of consolidation in this space, leaving vendors with two, three, four MFT solutions, where one was the move-forward product and the others were cash cows. And the architecture of many of these products simply didn't advance to where we are today in a hybrid IT, hybrid cloud environment. Today, B2B MFT is more of an orchestrated capability.
Also, as you move forward, you have to look at how you integrate your MFT solutions. A lot of those solutions were built in the days of Windows Server and COM and don't have a full REST API stack, while the variety and volume of targets has exploded. Let's take another click down. When we talk about integration and orchestration, traditional MFT was relatively limited in what it could manage. With orchestrated MFT, you have deep integration with your automation solution and the API-first design I was speaking about. From an architecture and security standpoint, you used to have to work around the way those solutions were designed: in certain cases you carried risk within the DMZ, and they acted more like a destination. With modern orchestrated MFT, you have far more flexibility in how you configure your architecture, security standards have advanced to mitigate those risks, and it acts as a true gateway. And from an economics perspective, while many of these legacy solutions continue to function and provide critical capabilities, we continually hear from customers: yes, we have the solution, but it's becoming more and more limited in what it can do versus what we want to do in the future. One of the reasons we moved into this space with our orchestrated MFT solution is the same reason we did with UAC over fifteen years ago: we saw a mission-critical area of IT that was ready for reinvention and redesign around the modern requirements of today's enterprises. With that, I'll hand it over to Robert. Robert, I think you have a poll question for us.

Yeah, thanks, Gwen. That was a brilliant introduction to the evolution of MFT. We do have a poll question; you can see it here, but it may help to clarify what we mean before asking it. Everybody knows what it means to not have an MFT, and if you're not sure, just choose "unsure." An MFT solution that isn't integrated with any of your back-end workload automation really isn't an MFT solution to begin with; it's just file transfer. It's the difference between partially integrated and fully integrated that I'd like to elaborate on a little, and we'll get into more detail in the following slides. Partially integrated doesn't mean you don't have some automation capabilities; it means they're typically standalone and not deeply integrated with the rest of the organization's workload automation. You may still accomplish transactions or modifications on files received by your MFT, but there isn't a complete lifecycle of the transaction from beginning to end, where the ingested B2B data can be monitored and governed through both the ingest and the workflow that follows. With a fully integrated solution, your workload automation and MFT are essentially one, deeply and tightly coupled, so you have that complete end-to-end lifecycle. That's the difference.
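To make the "bolt-on" pattern concrete, here is a toy sketch (ours, not from any product) of the cron-style folder polling that partially integrated MFT automation usually amounts to; the paths and the command are hypothetical:

```python
# Toy sketch of "bolt-on" MFT automation: a cron-style script polls a drop
# folder and hands files to a shell command. Everything here is illustrative;
# no Stonebranch API is used or implied.
import subprocess
import time
from pathlib import Path

DROP_ZONE = Path("/data/mft/incoming")  # hypothetical drop folder
SEEN: set[str] = set()

while True:
    for f in DROP_ZONE.glob("*.csv"):
        if f.name in SEEN:
            continue
        SEEN.add(f.name)
        # Fire-and-forget: once the script hands the file off, the MFT has
        # no visibility into whether downstream processing succeeded.
        subprocess.run(["/opt/etl/process.sh", str(f)], check=False)
    time.sleep(60)  # polling interval = built-in latency
```

The polling interval is built-in latency, and the handoff is blind: the script never learns whether downstream processing succeeded. Orchestrated MFT removes both problems by raising an event the moment a file lands and keeping the whole lifecycle inside one workflow engine.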
So if you were choosing between partially and fully, I hope that helps you make the right choice in the poll.

Thanks for that explanation, Robert. I'm going to go ahead and close the poll. It looks like 32% are fully integrated MFT and 28% are partially integrated MFT.

Well, that's interesting. Looking at those results and contrasting them with a study we had a third party conduct for us across a large number of IT professionals, which we'll be sharing later in the year: we conducted our own similar study and the results are similar. The "no MFT" number is quite a bit different, but it was roughly a fifty-fifty split between those whose MFT was deeply integrated with their workload automation and those whose wasn't, which seems to match what we're seeing here. I find that interesting. Let's elaborate a little more, because I think it helps paint the picture. We're starting very high level, and we'll drill down more deeply before we get to the actual demo, where we demonstrate how that transaction lifecycle between your MFT and workload automation can work. Here's a visual I think helps. On the left you have traditional MFT, where you may have some automation built into the tool itself; often an MFT has some basic automation, maybe launching a script once files are received from partners. But it's not integrated with the rest of the organization's workflows; it's standalone. So it's difficult to accomplish anything complex. If there's complexity involved, some other process usually has to come in, move the file off the MFT, and do something with it, because the MFT simply can't handle that complexity, and at that point the MFT loses its connection to what's going on. Contrast that with an orchestrated MFT solution, where workload and workflow automation is at the heart, the DNA of everything, and managed file transfer is an integral component of it. The ingest of B2B data is one part of a broader workload automation that may be tremendously complex; by complex, I mean in what can be accomplished in terms of integrating with different systems and in massaging and modifying the data. That's what a true, fully integrated, orchestrated B2B MFT looks like. Let's go one step deeper, down to the transaction level. In days past, MFT solutions either had to be placed deep within the organization's network, which posed a challenge for receiving data from partners, since it had to pass through the DMZ, the insecure zone, into the secure network zone. In the early days, some MFTs bypassed that altogether by poking holes through the network firewall, which introduces some pretty bad security risks. Then vendors adopted a different approach: put the MFT inside the DMZ to act as a drop zone, receive data there, and have either another MFT solution inside the organization or some scripts come in and pull the data out.
So maybe on a routine, scheduled basis, something would go grab the files sitting in the DMZ. That has its own security risk, primarily that data was residing in the DMZ, even if only temporarily, and there was no real timeliness to the transactions: there's a delay between files arriving at the organization and those files being post-processed. In both scenarios, once data was received it was typically stored locally on disk, and then some scripts or baked-in automation within the MFT would move those files off, and you'd cross your fingers and hope the back-end workflows and workload automation could take care of the rest. It's a decoupled handoff. In a more modern, orchestrated MFT, you have a mechanism for ensuring real-time delivery of files into the secure zone, and you also have a means of protecting that data as it passes through the insecure zones of your network, like the DMZ. In our case, we have a companion product to UDMG called Secure Proxy, which mitigates the risks typically present when moving data through the DMZ; I'll demonstrate that in more detail on the next slide. Suffice to say there are no inbound holes through your network firewall and no need to store data temporarily in a DMZ drop zone. Once data is received by UDMG, though, that's when the real power, the real differentiator of orchestrated versus traditional, kicks in: a very powerful workflow automation system takes over, performs any manipulation of the received data, integrates with any back-end system you can imagine, and completes the transaction lifecycle, returning data or receipts to the partner bidirectionally, with full monitoring, governance, and auditability of the transaction from the moment it crossed your firewall to the moment the data was processed and returned to your partner. Going a level deeper still, here's a network diagram. I won't go over every single thing; we could spend an hour on this process alone. But at a high level, we have a partner originating some transfers (and by the way, this could be bidirectional). The transfers could be scripted, they could use dedicated SFTP clients or applications, or they could be interactive, for example using a web transfer client, which we'll look at later in the demo. Once files are pushed through your external firewall, our Secure Proxy takes over: it can do a TCP direct pass-through over a secure tunnel, or it can perform what's called a session break, a differentiating feature that interrupts the session and re-authenticates users on the back end against UDMG, all through a secure tunnel established from within the secure network zone.
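Secure Proxy's internals are Stonebranch's own and aren't shown in the webinar. Purely to illustrate the outbound-only pattern described above, here is a toy single-connection relay in which the secure-zone host dials out to the DMZ, so the inner firewall needs no inbound rule:

```python
# Toy illustration of the "no inbound holes" pattern: the secure-zone host
# connects OUT to a relay in the DMZ, and partner traffic is pumped over
# that pre-established channel. A teaching sketch, not Secure Proxy.
import socket
import threading

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass  # peer closed; fine for a toy
    dst.close()

def dmz_relay(partner_port: int = 2222, tunnel_port: int = 9000) -> None:
    # 1. Wait for the secure-zone host to dial OUT to us (the DMZ relay).
    tunnel_srv = socket.create_server(("0.0.0.0", tunnel_port))
    inner, _ = tunnel_srv.accept()
    # 2. Only then accept a partner connection from the internet.
    partner_srv = socket.create_server(("0.0.0.0", partner_port))
    partner, _ = partner_srv.accept()
    # 3. Pump bytes both ways over the pre-established channel;
    #    nothing is ever written to disk in the DMZ.
    threading.Thread(target=pump, args=(partner, inner), daemon=True).start()
    pump(inner, partner)

if __name__ == "__main__":
    dmz_relay()
```

A real session break goes further than this byte pump: as described above, it terminates the partner's session at the proxy and re-authenticates the user against UDMG over the established tunnel.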
So there's never a point where holes are punched through the network firewall, nor a situation where you need to store data in the DMZ. On the back end, once files are received in UDMG, that's where we integrate with Universal Controller, and we'll actually take a look at that, since sometimes a picture is worth a thousand words. We'll talk about some of the features and capabilities of the product as well. Hopefully this network diagram helps. All right, last but not least: UDMG is a feature-rich application built on a modern stack, so we can add new functionality quickly. It's a very flexible solution. All of the most commonly used standard protocols, including legacy protocols like AS2, are supported, no matter how you need to deploy: if you're a Microsoft shop and need to deploy on Windows, or if you're one of the cool kids and want to deploy on Linux. If you need to scale the solution, we support full active-active; if you just need DR, we have active-passive mode. Deploy on-premises, in a hybrid cloud environment, in a private cloud, or even as SaaS; we provide that as well. Flexibility is the key, and our API-first architecture supports it: if you need to interact with the product programmatically, you can do so through an API with one hundred percent coverage (see the sketch below). Whatever database you use on the back end, and however you want to provision your internal users or external partner accounts, we've got that covered too. Again, the key is flexibility. That's the most important thing we see, because every client we talk to has slightly different needs, and we want to cover those needs adequately. And last but not least, security is always front and center; everything we do has a secure-by-design mindset. There's all the typical stuff you'd expect, like password policies, but also nice-to-haves like the ability to define robust IP-filtering allow and block lists, so instead of working with another team to build firewall rules, you can create and manage the rules within the product itself. There's integration with hardware security modules for certificates as well. A lot of security is baked in; we could spend a long time on it. Okay, enough slides. I think we have one more poll question and then we'll jump into the demo: how many MFT tools does your organization currently use? Four or more, three, two, one, none, or unsure?

It looks like 24% have two, and 36% say unsure.

Interesting. Those are numbers we could speculate about for some time. It's not uncommon to have multiple MFT solutions, but that speaks to the partially integrated challenge with governance: how do you monitor across multiple MFTs and really orchestrate across those systems? So here I'm going to jump into the demo. At a high level, I first want to show the basic configuration of UDMG's administrative component: managing partner accounts and how data moves in and out of the MFT.
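Since the demo that follows is all point-and-click, here is a hedged sketch of what the "one hundred percent API coverage" claim implies: anything done in the UI should also be scriptable. The host, routes, fields, and auth scheme below are hypothetical stand-ins, not documented UDMG paths:

```python
# Hedged sketch of driving an API-first MFT programmatically. The server,
# token, /api/... routes, and response fields are hypothetical stand-ins.
import requests

BASE = "https://udmg.example.com"                    # hypothetical server
HEADERS = {"Authorization": "Bearer <access-token>"}  # hypothetical scheme

# Create a partner account, then list its completed transfers.
acct = requests.post(
    f"{BASE}/api/accounts",                          # hypothetical route
    json={"login": "acme-corp", "pipeline": "pl-upload"},
    headers=HEADERS,
    timeout=30,
)
acct.raise_for_status()

transfers = requests.get(
    f"{BASE}/api/transfers",                         # hypothetical route
    params={"account": "acme-corp", "status": "DONE"},
    headers=HEADERS,
    timeout=30,
)
transfers.raise_for_status()
for t in transfers.json():
    print(t["file"], t["finished_at"])               # hypothetical fields
```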
From there, we'll get into the integration with a UAC workflow, so we can see the end-to-end, orchestrated MFT lifecycle I was referencing earlier, because that's really what this is all about. The first thing we have here is the login screen. You'll notice this is a web administrative interface, built on React, but beyond my username and password I'm also asked for a domain. Domains are a construct, like a namespace: a nice security feature for organizations that need to segregate configuration by business unit, or that run a multi-tenancy setup. It allows for physical and logical isolation of configurations. In my case I have a single domain, so I'll type it in and connect, but your organization may have more than one. Now I'm presented with an authentication screen. This could also prompt for a second factor if configured, or it could be skipped entirely in favor of single sign-on, since we support fully federated access through different single sign-on sources. I'll show that in a second; let's take a look at that setting. We can configure all sorts of mechanisms for ingesting both users, meaning those who control the administrative side of UDMG, and accounts, which are our partners. These can be provisioned from federated sources like LDAP or Active Directory, as well as single sign-on sources. Let me demonstrate quickly: we can choose from the SAML sources listed here, or OIDC/OAuth 2.0 sources. The ones listed have all been tested, but anything supporting these protocols should work, so if you need to ingest users with Okta, for example, we can do that. We support full just-in-time provisioning of both accounts and users, along with authentication and authorization: we can map users and partner accounts to settings within the product that determine which role a user is attached to, or which secure transfer pipelines an account is associated with (a generic provisioning sketch follows below); we'll talk about pipelines in a minute. All of that can be automated. Here we're using a manual provisioning process, which we support as well if you want to add users or accounts by hand. We also have centralized credentials management: create a credential once and reference it across the product many times, whether it's a remote partner credential used to connect to a partner, a host key, or a PGP key. Create once, reuse many times. But the most important part of the configuration, what's really required for creating those B2B transactions, is covered by endpoints and pipelines. Think of an endpoint as a potential source or a potential destination, depending on how it's glued together later in the pipeline, as we'll demonstrate. We have a large number of these built in, and more are coming: a remote SFTP server, which represents a partner's SFTP server, or a local SFTP server, which represents UDMG listening for files over the SFTP protocol. We support FTP the same way, bidirectionally, and AS2 for receiving locally or sending.
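How UDMG maps identities internally isn't shown on screen. As a generic illustration of just-in-time provisioning, the shape is roughly this: take verified claims from a SAML assertion or an OIDC ID token and derive a local user or partner account, with its role and pipeline, on first login. The claim, role, and pipeline names here are made up:

```python
# Illustrative just-in-time provisioning: map verified SSO claims to a local
# role and pipeline. Claim names and role names are hypothetical.
from dataclasses import dataclass

@dataclass
class ProvisionedAccount:
    login: str
    role: str
    pipeline: str | None

# Hypothetical mapping from an IdP group claim to local settings.
GROUP_MAP = {
    "mft-admins":   ("administrator", None),
    "acme-partner": ("partner", "pl-upload"),
}

def provision(claims: dict) -> ProvisionedAccount:
    """Derive a local account from verified SSO token claims on first login."""
    role, pipeline = next(
        GROUP_MAP[g] for g in claims.get("groups", []) if g in GROUP_MAP
    )
    return ProvisionedAccount(login=claims["email"], role=role, pipeline=pipeline)

# Claims would come from a signature-verified SAML assertion or OIDC ID token:
acct = provision({"email": "bob@acme.example", "groups": ["acme-partner"]})
print(acct)
```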
Back to endpoints: soon we'll be supporting cloud endpoints as well. That's forthcoming in the next release, so you'll be able to pull data from the cloud or push data into the cloud. And then we have a local file system endpoint, which represents the UDMG file system itself; it can be a destination or a source. To drive this home, let's look at one of these, this demo SFTP endpoint. If we edit it, you'll notice the listener IP address and port; you can specify the type of authentication expected or required of the remote partner, and you can even override the default key-exchange algorithms, ciphers, and HMACs. So there are a few settings there. Okay. Now, once you've established your endpoints (again, they're reusable; you create them once), the crux of how you use them is the creation of a pipeline. A pipeline takes two endpoints and marries them together. For example, here we have this demo pipeline, "upload": a local SFTP server, where files are received into the system, and a local file system residing within UDMG, where they're deposited. You can construct an any-to-any pipeline to move data from a source to a destination. For example, and this is forthcoming, you'd be able to receive a file on a local SFTP endpoint and push it directly to a remote cloud connection. Let's jump into one of the pipelines for a deeper look before we run the actual workflow. This particular pipeline's source is the web transfer client, an HTTPS server built into UDMG. Once a partner connects to that HTTPS service and uploads files, the files are deposited into a local file system repository, another endpoint we defined earlier. We define the virtual path, which is what the partner sees when they connect, and we establish the actual physical path using a set of variables. That lets us create a single pipeline and reuse it across multiple partners, whereas with traditional MFT you might have to create those directories manually for each partner in a one-to-one setup. Here we have a reusable template, if you will. The permissions for what partners can do once they connect are defined here as well: upload files, download them, delete files, make directories, rename them, and so on. For the web transfer client we even have a file-sharing capability for person-to-person transfers, which we'll demonstrate if we have time. Last but not least, and this is probably the most important part, the one we've been alluding to throughout the slides: the integration with Stonebranch automation happens here in the pipeline too. When a pipeline is triggered by a file arriving from a partner, we can trigger that integration with one of UAC's events, the Universal Event Monitor, and pass in a set of metadata, so that UAC can take that file and do post-processing on the data. So without further ado, I'm going to jump over to UAC so we can see what that construct looks like, and then we'll fire off an upload into this web transfer client endpoint to see it actually take place.
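Before switching over, here is the pipeline concept condensed. The configuration format isn't on screen long enough to copy, so treat this as a conceptual payload rather than UDMG syntax: a pipeline pairs a source endpoint with a destination endpoint, parameterizes the physical path per partner, and names the automation event to raise on arrival:

```python
# Conceptual pipeline definition: one reusable template that pairs two
# endpoints and triggers a UAC event. Field names are illustrative only.
pipeline = {
    "name": "pl-upload",
    "source": "local-sftp-server",          # endpoint: UDMG listening on SFTP
    "destination": "local-filesystem",      # endpoint: UDMG's own storage
    "virtual_path": "/upload",              # what the partner sees
    "physical_path": "/data/{account}/in",  # variables make it per-partner
    "permissions": ["upload", "download", "delete", "mkdir", "rename"],
    "on_received": {
        "type": "universal_event",          # hand off to UAC post-processing
        "event": "uac.file.received",
        "metadata": ["account", "filename", "size", "checksum"],
    },
}
```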
Logging in: this is the UAC workflow editor. I'll zoom in a little so we can see the individual components better. The very first thing defined in this workflow is a decrypt task. We expect our partners to upload PGP-encrypted files, and once those files are received we decrypt them with the PGP key provided by our partner plus our private key. Then we scan the file for viruses, integrating with a third-party AV scanner. If the file is infected, we quarantine it, and if we're going to quarantine it, we should also send an email notification to that partner letting them know there was a problem, so I've created a notification task for that (we'll edit it later). If there wasn't a problem with the file, it proceeds: we move it to an internal destination where we perform some operations on it, merging it with additional information we have on the back end. We're also demonstrating here the ability to monitor a folder for data, since we might be receiving data from yet another process that needs to be merged with the partner's uploaded data. The next step is an approval process: a manual, human-in-the-loop step to make sure the data looks good before it proceeds. It doesn't have to be there; it's optional, and it could also integrate with Stonebranch's universal portal for a manual sign-off through the portal. Once approved, we ingest the data into Snowflake, SQL, and an S3 bucket. Once those complete, we create a report file that closes the loop: we send it back to the partner letting them know we received their data, with a summary of the transaction information, and we send an email. That's the process; now let's trigger it. This next page is the front end of our web transfer client, which is integrated with UDMG. This is what your partners, suppliers, vendors, or whomever you give access to would see, and it allows interactive file transfers to take place: person-to-business transactions, if you will. It also supports person-to-person sharing, which we'll look at later. It's completely brandable, so you could have your own logo, your own look and feel, your own warnings or terms of service. Single sign-on is supported here too; we'll just log in with a normal account. This is a full-featured web transfer client, essentially a file manager in the browser where you can upload, download, and do all kinds of things. I have the two folders that were part of those pipelines: the download folder, where my receipt will be returned, and the upload folder, which expects my PGP file and triggers that workflow on the back end. So let's upload the file. Let me find it first... there we go. Okay, sorry it took me a minute to find the file. I've uploaded my PGP file.
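While that upload triggers the pipeline, here is a standalone approximation of the workflow's first two steps, PGP decryption and virus scanning. In the demo these run as UAC tasks, not a script; this sketch assumes the python-gnupg package and a local ClamAV install, and the paths are made up:

```python
# Hedged sketch of the workflow's first two steps: PGP-decrypt an uploaded
# file, then virus-scan it, quarantining on infection. Assumes python-gnupg
# (pip install python-gnupg) and ClamAV's clamscan on PATH; paths invented.
import shutil
import subprocess
import gnupg

gpg = gnupg.GPG(gnupghome="/etc/mft/gnupg")   # keyring with our private key

def decrypt_and_scan(encrypted: str, plain: str, passphrase: str) -> bool:
    """Return True if the file decrypted cleanly and is virus-free."""
    with open(encrypted, "rb") as fh:
        result = gpg.decrypt_file(fh, passphrase=passphrase, output=plain)
    if not result.ok:
        raise RuntimeError(f"decrypt failed: {result.status}")

    # clamscan exits 0 if clean, 1 if infected, 2 on error.
    scan = subprocess.run(["clamscan", "--no-summary", plain])
    if scan.returncode == 1:
        shutil.move(plain, "/data/quarantine/")  # infected: isolate the file
        return False                             # caller then emails partner
    scan.check_returncode()                      # surface scanner errors
    return True
```

Note the ordering mirrors the workflow: decrypt first, because (as comes up in the Q&A later) most AV scanners cannot inspect a still-encrypted PGP file.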
Now I'll jump back over to UAC and look at the activity monitor. Here we can see an active workflow; let me zoom in real quick. Okay, it's moving pretty fast. It successfully decrypted the file, it scanned it, and apparently there were no viruses. That's good. Now it's blocked, awaiting approval, so I'm going to release this block with an approve command, and that should move the file along. Okay: now we've ingested it into these different systems; it's still ingesting into Snowflake and running some billing calculations. It looks like we've made the report file available to the partner and sent the notification. To confirm, I'll jump over to my email client, and there we go: my email just arrived, alerting me that my file was processed successfully. And if I go back into the web transfer client, this is where we close the transaction lifecycle: in my download folder, we can see the file that was processed. So far we've launched the workflow attached to the pipeline, processed the data uploaded by the partner, and now they've received their receipt. I can take another step here: I can download this report, I can rename it, I can delete it (I've got permission to delete). And last but not least, we can do something called sharing. Sharing is a person-to-person function that lets me create a secure hyperlink to the file so somebody who isn't part of the system can pick it up, and again, only if I'm authorized to share. Here we'll make this report available: I can set the link to expire within a certain time frame, I can set a download limit (in this case we'll choose one), I can password-protect it if desired, and finally I create the link. The link could be distributed to a third party using standard methods like email or secure messaging. Now I'll pretend I'm a third party who clicked on that link. As you can see, I can view the file that was shared with me, I can download it once I enter the password, and there, I've grabbed the file. Notice that it now says there are no downloads remaining, so if someone were to take that link and send it to somebody else, the link would no longer work. We've completed the lifecycle and done a person-to-person share. And if we jump back into UDMG and look at the transfers tab (oops, I got logged out), I can see the files that were transferred. We have complete monitoring and visibility of everything that was processed, by whom and when, with the ability to deep-dive into the transactions themselves. Files shared through secure links are also tracked independently, so the administrator can see who is sharing what with whom and can even revoke links that are deemed unsafe. That's the auditing aspect, which we could spend a lot more time on, but we don't have time in this demo. And that completes the product overview demo. All right, let's get to the questions.
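The share feature just demonstrated (a long random link, an expiry, a download limit, an optional password) maps onto a simple data model. A hedged sketch of the general technique, not UDMG's implementation; a real system would store a password hash rather than the password, and would let an administrator revoke the link:

```python
# Toy model of a P2P share link with the protections shown in the demo:
# an unguessable token, an expiry, a download counter, and an optional
# password. Illustrative only, not UDMG's implementation.
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class ShareLink:
    path: str
    expires_at: datetime
    downloads_left: int = 1
    password: str | None = None          # real systems store a hash
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def redeem(self, password: str | None = None) -> str:
        if datetime.now(timezone.utc) > self.expires_at:
            raise PermissionError("link expired")
        if self.downloads_left <= 0:
            raise PermissionError("no downloads remaining")
        if self.password is not None and password != self.password:
            raise PermissionError("wrong password")
        self.downloads_left -= 1         # one fewer pickup allowed
        return self.path                 # caller streams the file

link = ShareLink("/data/acme/out/report.pdf",
                 expires_at=datetime.now(timezone.utc) + timedelta(days=7),
                 downloads_left=1, password="s3cret")
print(link.token)  # the long, hard-to-guess component of the URL
```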
The first question we have: what types of storage are supported? S3, NFS, or block? Right now it's the traditional file system storage for Linux and Windows, but we'll be supporting the various cloud provider storages very soon: S3 and remote file systems, blob storage, all of that is coming. At the moment it's limited to the local file system and your NAS-type storage.

Is Keycloak supported as an IdP? That's a great question. I actually haven't heard of that particular vendor, but as long as it supports SAML or OIDC properly according to the protocol specs, it should work; if it doesn't, let us know and we'll look into it. We've tested with what we considered the major vendors, the ones requested by most of our prospects and clients, and we follow the protocol standards, so it should work. There are a lot of vendors out there, but if it supports OIDC or SAML, it should work.

Okay: how are the authentication keys managed? Great question. It depends on which authentication keys we're talking about. For the partners' own credentials, we use secure hashing techniques, so they're not reversible. For keys that need to be retrievable, we use envelope encryption: a data-encryption key (DEK) encrypts everything stored in our database, and a master key-encryption key (KEK), which is user-configured, wraps it (there's a sketch of the technique below). Unlike some MFTs that ship with a baked-in key that can be reversed out of the product, we let that master key be configured and managed by our clients. It's a more secure, more modern approach to protecting credentials that need to be reused. We should have a white paper on this, but either way our help documentation covers how we do it securely. Great question, by the way.

Does the decryption and virus check happen within the DMZ or inside the secure zone? We didn't cover this in the demo, but our Secure Proxy has the ability to perform streaming ICAP, a protocol where, as a file is being received and buffered, we hand it off to the AV scanner and scan for viruses in a streaming fashion. It's a differentiating feature that the majority of MFTs do not provide, and it's all done in the DMZ. You can also do it in the secure zone if you prefer: ICAP is built into UDMG as well, so if you'd rather receive the file completely on disk first, you can scan it after the fact. We offer both, streaming in Secure Proxy or within UDMG once the file is received. Granted, you do need to decrypt the file first if it's PGP-encrypted; most AV scanners won't handle a PGP file or a zip file, so some post-processing is required before scanning. It's a two-step process.

Great. Can an anonymous user upload files to the web transfer client website? No. We don't yet have the concept of an anonymous drop zone; authentication is required, which keeps it very secure. There may be a use case for initiating an inbound share ("here's a link to upload a file to me"), and that's something we're looking at for a future release. Right now it's one-way: authorized partners can share a file with somebody else, but they can't receive a file back anonymously. There has to be a provisioned account.
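Picking up the key-management answer above, here is envelope encryption in miniature using the Python cryptography package: a fresh data-encryption key (DEK) encrypts each secret, and a customer-managed key-encryption key (KEK) wraps the DEK, so only the wrapped DEK and the ciphertext are ever stored. This shows the general technique, not UDMG's internals:

```python
# Envelope encryption in miniature with the `cryptography` package
# (pip install cryptography). Shows the DEK/KEK technique itself,
# not UDMG's actual key handling.
from cryptography.fernet import Fernet

kek = Fernet(Fernet.generate_key())     # in practice: customer-configured KEK

def seal(secret: bytes) -> tuple[bytes, bytes]:
    dek_key = Fernet.generate_key()     # fresh data-encryption key per secret
    ciphertext = Fernet(dek_key).encrypt(secret)
    wrapped_dek = kek.encrypt(dek_key)  # the DEK is never stored in the clear
    return wrapped_dek, ciphertext

def unseal(wrapped_dek: bytes, ciphertext: bytes) -> bytes:
    dek_key = kek.decrypt(wrapped_dek)  # unwrap with the master KEK
    return Fernet(dek_key).decrypt(ciphertext)

wrapped, ct = seal(b"partner-sftp-password")
assert unseal(wrapped, ct) == b"partner-sftp-password"
```

Because only the KEK can unwrap the DEKs, rotating or withholding that customer-managed key protects everything at rest, which is the contrast drawn above with products that ship a baked-in key.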
Is the UDMG architecture active-passive or active-active? It's either. We support whichever you need, both for the companion proxy and for UDMG itself; they can run in either mode. It really depends on your organization's needs: if you're looking to scale out horizontally, active-active is probably the right approach; for more of a DR setup, or if SLAs matter more than anything, active-passive is appropriate.

How do the credentials or keys get stored within UDMG? I think that's a repeat of the earlier question: strong encryption using the envelope-encryption technique, with a key-encryption key and data-encryption keys (the KEK and DEK), which is covered in depth in our help documentation. We're also happy to walk customers through how it works and how secure it is, especially compared to the many MFTs out there that don't store keys very securely.

Okay: if a set of files received on the MFT needed altering by a Windows application before being sent out to a third party, would the Windows application have to be installed on the MFT server? Let me see if I understand: files received on the MFT need altering by a Windows application before we send them out. No; it depends on your workflow. Currently, our workflow process kicks off workload automation once the data is finalized, meaning received into the target, what we call the destination folder. In our next version we'll have a two-step workflow process: you'll be able to execute workflows on the data while it's staged, that is, received into the temporary folder before it's moved to its final destination. So you could take a two-step approach: perform some operation on the file locally within the UDMG system itself, then do post-processing elsewhere. It depends where you want that processing to occur. You could pull the file off the system, or you could run a command locally (delete the file, zip it, whatever you want to do, depending on what's installed on that system). Ultimately it's up to how the administrator wants to configure it. And there will actually be a third use case, on-error. We'll have three different triggers for a pipeline execution: on-staged, on-completed, and on-error, because sometimes you want to run a cleanup task if there's a problem with post-processing in an earlier step, or if a failed transfer didn't complete successfully and you just want to clean up and send a message.
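The three pipeline triggers previewed there map naturally onto callbacks. The hook names below mirror the description (on-staged, on-completed, on-error) but are illustrative, not the product's actual configuration keys:

```python
# Hedged sketch of the three pipeline hooks described for the next release:
# on_staged (file landed in the temp area), on_completed (file moved to its
# final destination), on_error (transfer or post-processing failed).
from typing import Callable

HOOKS: dict[str, Callable[[dict], None]] = {
    "on_staged":    lambda t: print("pre-process while staged:", t["file"]),
    "on_completed": lambda t: print("kick off UAC workflow for:", t["file"]),
    "on_error":     lambda t: print("clean up and notify partner:", t["file"]),
}

def fire(hook: str, transfer: dict) -> None:
    """Dispatch a lifecycle event to its configured handler."""
    HOOKS[hook](transfer)

fire("on_completed", {"file": "invoice.csv", "account": "acme-corp"})
```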
For third-party uploads, can you limit the types of files that can be uploaded, for example preventing .exe or .com? That's a great question; it feels like this Q&A is reading from our roadmap. Natively, it's coming in the next version, with what we call banned file types baked into UDMG. For now, it needs to be handled through our UAC integration: from a post-processing perspective, when we kick off that publish event, the workflow checks whether files match the criteria and then removes or isolates them (a sketch follows after the remaining questions). So today it's done through post-processing, and that's the beauty of the integration with UAC: anything not yet supported natively in the product can easily be done through UAC. For instance, we don't support native PGP decryption in UDMG right now; when a file is received, UDMG doesn't decrypt it, but UAC does. Any capability still pending native integration can be accomplished with the automation workflow that sits right next to the product, which is why the best answer we can give is that we can say yes to you right now.

Is UDMG containerized, and can it deploy on a Kubernetes cluster? Great, sounds like another question from our roadmap. Yes, that's coming. We have developed it and tested it; we just haven't formally QA'd it in that setup, so it's not officially supported at this very moment, but the ability to run it in containers is on the near horizon. Great questions, though.

All right, this is the last question we have time for. Is there any size limit on the web transfer client for uploading a file? None that I know of. We've tested into the hundreds of gigabytes, and I think we have one partner talking about uploading a petabyte-size file, but I can't say empirically that we've tested the upper boundary. Maybe we can leave that as a to-do item: how big can it get? It may just be a matter of disk space; I don't have an empirical answer. Yes: it'll be disk space and also your network integrity. The larger the file transfer, on any network, the higher the risk of a challenge in completing it. But if you're talking about standard files in the gigabytes, we've got that covered. I always ask that question: what do you need, what's your use case? There may be theoretical limits on things, but what's the practical limit, what are you trying to accomplish? Maybe there is a use case for uploading, I don't know, the NASA space shuttle schematic diagrams or something.
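Circling back to the banned-file-types question: until native blocking ships, the interim approach described (a UAC post-processing step) could look something like this hedged sketch, with an example denylist and made-up paths:

```python
# Hedged sketch of the interim post-processing policy check for banned file
# types: inspect each received file's extension and quarantine anything on
# the denylist. Denylist and paths are illustrative.
import shutil
from pathlib import Path

BANNED = {".exe", ".com", ".bat", ".dll"}     # example denylist
QUARANTINE = Path("/data/quarantine")

def enforce_policy(received: Path) -> bool:
    """Return True if the file may proceed to the rest of the workflow."""
    if received.suffix.lower() in BANNED:
        shutil.move(str(received), QUARANTINE / received.name)
        return False                          # isolated, never processed
    return True
```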
The person who asked about size limits has a follow-up: in the transfer client, when sharing a file, is the share limited to an individual or a group? On sharing, it's a secure hyperlink, and technically that link can be sent to anyone. If you allow a certain number of downloads from the link, it could be published out to multiple parties, and anybody who receives it can pick up the file; it's like a shared public link. You have to know the link, and it's a standard GUID-style, very long link, so it's hard to guess, but technically anyone holding the link can pick up the file. There are no restrictions today other than possession of the link, which is why we recommend setting a maximum number of downloads and perhaps attaching a password. And I hope I'm answering the question correctly, but you can also protect it that way: set an expiration, a number of downloads, and a password, which adds protection in case you send the link to somebody and they forward it to somebody else. You don't want that third or fourth party to get the file. You're balancing person-to-person convenience and security, and you can turn the feature off entirely as well. But that's a great question.

Great. We do have a couple of other questions that we'll follow up on after the session. I want to say thank you to Gwen and Robert for the deep dive into Stonebranch managed file transfer and UDMG, and thank you to all of you for joining today. Later today you will receive an email containing a link to a recording of this session, to rewatch or share with your colleagues. We're also excited to announce that the 2026 Stonebranch Global State of IT Automation Report will be released very soon, so keep an eye on your email to stay up to date with current IT automation trends. Thank you all for joining, and we hope to see you at our next session. Bye, everyone.
It’s time to evolve beyond standalone managed file transfer (MFT) solutions. In this on-demand webinar, discover how orchestrated MFT—powered by a service orchestration and automation platform (SOAP)—transforms traditional file transfers into intelligent, automated workflows that connect your entire hybrid IT environment.
Watch as we demonstrate how the Stonebranch Universal Automation Center (UAC) unifies secure, event-driven data transfers with powerful orchestration capabilities across mainframe, on-premises, and cloud platforms.
Through a detailed walkthrough, you’ll see how to enable real-time data movement that is fully auditable, compliant, and seamlessly integrated with your enterprise workflows. Whether you’re exploring alternatives to legacy MFT tools, aiming to reduce manual handoffs, or building greater resilience into your data pipelines, this session shows how to move data anywhere and orchestrate everywhere—from a single modern platform.