Getting started with the Orchestration Pipelines editor

The Pipelines editor is a graphical canvas where you drag and drop nodes and connect them into a pipeline that automates machine learning model operations.

You can open the Pipelines editor by creating a new pipeline asset or by editing an existing one. To create a new asset from the Assets tab of your project, click New asset > Automate model lifecycle. To edit an existing asset, click the pipeline asset name on the Assets tab.

The canvas opens with a set of tools that you use to create a pipeline. The canvas includes the following components:

[Figure: Pipeline canvas components]

  • The node palette provides nodes that represent actions for manipulating assets and controlling the flow of a pipeline. For example, you can add nodes that create assets such as data files, AutoAI experiments, or deployment spaces. You can make node actions conditional, for example by feeding data into a notebook only if the files import successfully. You can also use nodes to run and update assets. As you build your pipeline, you connect the nodes and then configure operations on them. To organize the canvas visually, you can resize most nodes by dragging their corners; some nodes, such as loop nodes, cannot be resized. The resulting pipeline is a dynamic flow that addresses specific stages of the machine learning lifecycle.
  • The toolbar includes shortcuts to options related to running, editing, and viewing the pipeline.
  • The parameters pane provides context-sensitive options for configuring the elements of your pipeline.

The toolbar

[Figure: Pipeline toolbar]

Use the Pipeline editor toolbar to:

  • Run the pipeline as a trial run or a scheduled job
  • View the history of pipeline runs
  • Cut, copy, or paste canvas objects
  • View the edit history of your pipeline and undo or redo actions
  • Save the pipeline
    • When you save, you can see all the users who are viewing the flow
  • Delete a selected node
  • Drop a comment onto the canvas. Comments accept basic styling (see the example at the end of this section):
    • Basic HTML and CSS syntax, such as the <div>, <span>, <p>, and <font> tags, and properties such as size, font-style, font-color, text-align, and background-color.
    • Basic Markdown syntax, such as ** (bold), _ (italic), and # (header).
  • Drop an annotation on the canvas. Annotations support styling through the built-in styling toolbar and do not accept HTML coding.
Note: The styling of comments is preserved when you migrate a pipeline, for example when you move a flow to a new project or recreate it in a DataStage environment.

  • Configure global objects, such as pipeline parameters or user variables
  • Manage default settings
  • Arrange nodes vertically
  • View last saved timestamp
  • Zoom in or out
  • Fit the pipeline to the view
  • Show or hide global messages

Hover over an icon on the toolbar to view the shortcut text.
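For example, a canvas comment styled with the listed HTML and CSS syntax might look like the following. This is a minimal illustration rather than an excerpt from the product; the stage name and text are placeholders, and it assumes the listed tags and properties render as standard HTML and CSS:

    <p style="text-align: center; background-color: #eeeeee">
      <span style="font-style: italic">Stage 1: Prepare the data</span>
    </p>

An equivalent comment styled with basic Markdown syntax:

    # Stage 1: Prepare the data
    Run the **Create data asset** node _before_ the notebook job.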

The node palette

The node palette provides the objects that you need to create an end-to-end pipeline. Click a node category in the palette to see the related node types.

  • Copy: Copy an asset or file, or import or export assets. Node types: Copy assets, Export assets, Import assets.
  • Create: Create assets or containers for assets. Node types: Create AutoAI experiment, Create AutoAI time series experiment, Create batch deployment, Create data asset, Create deployment space, Create online deployment.
  • Wait: Specify node-level conditions for advancing the pipeline run. Node types: Wait for all results, Wait for any result, Wait for file.
  • Control: Control the flow of the pipeline, including loops, user variables, error handling, and termination. Node types: Loop in parallel, Loop in sequence, Set user variables, Terminate pipeline.
  • Update: Update the configuration settings for a space, asset, or job. Node types: Update AutoAI experiment, Update batch deployment, Update deployment space, Update online deployment.
  • Delete: Remove a specified asset, job, or space. Node types: Delete AutoAI experiment, Delete batch deployment, Delete deployment space, Delete online deployment.
  • Run: Run an existing or ad hoc job. Node types: Run AutoAI experiment, Run Bash script, Run batch deployment, Run Data Refinery job, Run DataStage job, Run notebook job, Run pipeline job, Run Pipelines component job, Run SPSS Modeler job.

The parameters pane

Double-click a node to edit its configuration options. Depending on its type, a node defines various input and output options, and some nodes let you add inputs or outputs dynamically. You can define the source of an input value in various ways. For example, you can specify that the value of the ML asset input for a batch deployment node must come from the output of a Run notebook node.

For more information on parameters, see Configuring pipeline components.

Parent topic: Pipelines
