New
Free
Amazon Bedrock
The Amazon Bedrock integration brings enterprise-grade foundation large language models directly into UAC workflows, enabling intelligent automation that can process, generate, and transform content. By connecting AWS Bedrock's AI capabilities to your automation infrastructure, you can build solutions that combine UAC's robust orchestration framework with advanced natural language processing and reasoning.

This integration opens up a wide range of automation possibilities: from analyzing logs and generating human-readable summaries of complex system states, to intelligently routing and classifying data, transforming unstructured information into structured formats, or even making context-aware decisions within workflows.

The integration provides native support for AWS Bedrock foundation models, offering flexibility in how AI models are deployed and managed. Prompts can be enriched with file inputs or UAC Variables, and outputs can be stored for reuse across tasks and workflows, allowing you to chain AI operations together or distribute intelligence throughout the automation environment.

Key Features
- Sending prompts to foundation models through the AWS Bedrock Converse API, which offers a consistent interface across multiple Bedrock-supported models.
- Handling of prompt-based inputs and model-generated responses.
- Support for text responses and JSON outputs with schema enforcement via Tool Use.
- Storing the LLM response as a local file.
- Enriching prompts with data from files and UAC Variables.
- Configuring response behavior using Advanced Options (temperature, top-p, and max tokens).
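As a point of reference, the sketch below shows the kind of Converse API call the integration wraps: sending a prompt, tuning temperature, top-p, and max tokens, and saving the response to a local file. The model ID, region, prompt, and inference settings are illustrative placeholders, not values defined by the integration.

```python
# Minimal sketch of a prompt sent to a Bedrock foundation model via the
# Converse API using boto3. Model ID, region, and settings are examples only.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[
        {"role": "user",
         "content": [{"text": "Summarize the attached job log in three sentences."}]}
    ],
    inferenceConfig={
        "temperature": 0.2,   # lower values make the output more deterministic
        "topP": 0.9,
        "maxTokens": 512,
    },
)

# The Converse API returns the assistant message as a list of content blocks.
answer = response["output"]["message"]["content"][0]["text"]

# Persist the response as a local file for downstream tasks, as the integration can.
with open("bedrock_response.txt", "w", encoding="utf-8") as f:
    f.write(answer)
```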
New
Free
Azure OpenAI
The Azure OpenAI Integration brings the power of enterprise-grade large language models into your UAC workflows, enabling intelligent automation that can understand, generate, and transform content. By connecting your automation workflows to Azure-hosted AI models, you can build solutions that combine the reliability and orchestration capabilities of UAC with the reasoning and language capabilities of modern LLMs.

This integration opens up a wide range of automation possibilities: from analyzing logs and generating human-readable summaries of complex system states, to intelligently routing and classifying data, transforming unstructured information into structured formats, or even making context-aware decisions within your workflows.

The integration works with both Azure OpenAI Service and Azure AI Foundry platforms, giving you flexibility in how you deploy and manage your AI models. Prompts can be enriched with files or UAC Variables, and outputs can be persisted for reuse across multiple tasks and workflows, enabling you to chain AI operations together or share insights across your automation ecosystem.

Key Features
- Built on the OpenAI V1 API standard for long-term compatibility and seamless model upgrades.
- Works with both the Chat Completions API and the newer Responses API for comprehensive LLM interaction patterns.
- Platform-flexible design supporting models from Azure OpenAI Service and Azure AI Foundry.
- Enrich prompts with data from files and UAC Variables, and save outputs for use throughout your automation ecosystem.
- Configurable data retention controls via the "Allow Conversation Data Retention" parameter, with support for zero data retention when used with Modified Abuse Monitoring.
- Structured output support with JSON schemas for predictable, machine-consumable responses on models that support this capability.

What's New in 1.1.0

Enhancements
- Added: Improved support for upcoming agent releases.
- Added: Importable configuration examples.
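To illustrate the interaction pattern, here is a minimal sketch of a Chat Completions call against an Azure OpenAI deployment using the OpenAI V1 Python SDK, including a JSON-schema structured output. The endpoint, API version, deployment name, and schema are assumed placeholders, and structured output is only available on models that support it.

```python
# Minimal sketch: Chat Completions against an Azure OpenAI deployment with a
# JSON schema response. Endpoint, API version, deployment, and schema are
# placeholders, not values defined by the integration.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # assumed endpoint
    api_key="<api-key>",
    api_version="2024-10-21",
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # name of the Azure deployment, not the base model
    messages=[
        {"role": "system", "content": "Classify the incident ticket."},
        {"role": "user", "content": "Job FINANCE_LOAD failed with ORA-01555 at 02:13."},
    ],
    # Structured output: enforce a machine-consumable response shape.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "ticket_classification",
            "schema": {
                "type": "object",
                "properties": {
                    "category": {"type": "string"},
                    "severity": {"type": "string"},
                },
                "required": ["category", "severity"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    },
)

result = json.loads(completion.choices[0].message.content)
print(result["category"], result["severity"])
```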
Free
Video
Azure Synapse
Azure Synapse is a cloud-based analytics service that combines big data and data warehousing capabilities, enabling organizations to ingest, prepare, manage, and analyze large volumes of data for business insights.

This Universal Task provides the capability to run, monitor, and restart Azure Synapse Pipelines from Universal Automation Center.

Key Features
This Universal Task provides the following key features:
- Run a Pipeline.
- Run a Pipeline with parameters.
- List all Pipelines in a Workspace.
- Monitor the started Synapse Pipeline.
- Cancel a Pipeline Run.
- Cancel a Pipeline Run recursively.
- Rerun a Pipeline from a specified activity or from the beginning.
- Service Principal-based authentication to Azure Synapse.
- Azure Managed Identity-based authentication to Azure Synapse.
- Certificate-based TLS connection.

What's New

Enhancements
- Managed Identity has been added as an additional authentication method. It eliminates password and secret management, since Azure automatically handles credential issuance, rotation, and security. This enhancement also works for a Universal Agent running in a Kubernetes Pod that has been configured for managed identity.

Fixes
- Whenever the authentication token expires, a new authentication token is automatically generated.
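For context, the sketch below shows roughly what the run-and-monitor portion of this Universal Task automates, using the Azure SDK for Python. The workspace endpoint, pipeline name, and parameters are hypothetical; DefaultAzureCredential is used here because it covers both service principal and managed identity authentication.

```python
# Minimal sketch of starting and polling a Synapse pipeline run with the
# Azure SDK for Python. Workspace URL, pipeline name, and parameters are
# placeholders, not values defined by the Universal Task.
import time
from azure.identity import DefaultAzureCredential
from azure.synapse.artifacts import ArtifactsClient

credential = DefaultAzureCredential()  # service principal or managed identity
client = ArtifactsClient(
    credential=credential,
    endpoint="https://my-workspace.dev.azuresynapse.net",  # assumed workspace URL
)

# Start the pipeline, optionally passing parameters.
run = client.pipeline.create_pipeline_run(
    "CopySalesDataPipeline",                 # hypothetical pipeline name
    parameters={"loadDate": "2024-06-01"},
)

# Poll the run until it reaches a terminal state.
while True:
    status = client.pipeline_run.get_pipeline_run(run.run_id).status
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)

print(f"Pipeline run {run.run_id} finished with status: {status}")
```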
New
Free
Google Vertex AI
The Google Vertex AI Integration brings the power of enterprise-grade large language models into your UAC workflows, enabling intelligent automation that can understand, generate, and transform content. By connecting your automation workflows to Google-hosted AI models, you can build solutions that combine the reliability and orchestration capabilities of UAC with the reasoning and language capabilities of modern LLMs.

This integration opens up a wide range of automation possibilities: from analyzing logs and generating human-readable summaries of complex system states, to intelligently routing and classifying data, transforming unstructured information into structured formats, or even making context-aware decisions within your workflows. Prompts can be enriched with files or UAC Variables, and outputs can be persisted for reuse across multiple tasks and workflows, enabling you to chain AI operations together or share insights across your automation ecosystem.

Key Features
- Works with the Chat Completions API for comprehensive LLM interaction patterns.
- Platform-flexible design supporting MaaS models from different providers.
- Enrich prompts with data from files and UAC Variables, and save outputs for use throughout your automation ecosystem.
- Structured output support with JSON schemas for predictable, machine-consumable responses on models that support this capability.
- Provides Advanced Options to tune the behavior of the selected model, such as increasing the determinism of the response or penalizing repetition.
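As a rough illustration of the Chat Completions interaction pattern the integration builds on, the sketch below calls Vertex AI's OpenAI-compatible endpoint. The project ID, region, and model name are placeholders, and authentication assumes Application Default Credentials are available in the environment.

```python
# Minimal sketch of a Chat Completions request against Vertex AI's
# OpenAI-compatible endpoint. Project, region, and model are examples only.
import google.auth
import google.auth.transport.requests
from openai import OpenAI

# Obtain a short-lived access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

project_id = "my-gcp-project"   # assumed project
location = "us-central1"        # assumed region

client = OpenAI(
    base_url=(
        f"https://{location}-aiplatform.googleapis.com/v1/"
        f"projects/{project_id}/locations/{location}/endpoints/openapi"
    ),
    api_key=credentials.token,
)

completion = client.chat.completions.create(
    model="google/gemini-1.5-flash",  # example MaaS model identifier
    messages=[
        {"role": "user",
         "content": "Classify this log line as INFO, WARN, or ERROR: 'disk usage at 92%'"}
    ],
    temperature=0.1,  # Advanced Options equivalent: more deterministic output
)

print(completion.choices[0].message.content)
```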