Wednesday, December 12, 2012

Quick review for Tridion 2013 Workflow


With the next SDL Tridion release we receive several improvements in the way we envision and develop workflows. In this post I want to do a quick review and provide some guidance and samples on how to set up and configure a workflow in SDL Tridion 2013.

What is new?

There are several cool new features that make SDL Tridion 2013 workflow a stronger and more capable tool for developing content approval processes and, why not, BPM-like processes. I will talk about that later in this post.
  • Multiple items in a single process
Unlike previous versions, where the Subjects collection always contained a single item, in this version we can start a workflow process for multiple items by simply passing a collection of TCM URIs. This allows us to check out a set of items and handle them in a single process.
There is no user interface for this feature, so a process instance for multiple items can only be started through an API such as the Core Service.
  • Bundled items in a single process
It is also cool to have the possibility of grouping items into a single business unit called a bundle. A bundle is a special type of virtual folder where content editors group items.
Using bundles has more advantages than just sending a collection of items, because the system provides a GUI to manage bundles in both the CME and Experience Manager. Additionally, there are bundle-related improvements to security and process definitions, such as Bundle Management permissions and bundle-specific activity definition options.
  • No items process
Personal opinion: I love this new feature. SDL Tridion 2013 introduces the concept of a Task, which is nothing more than a process with no items directly involved. I can define a task with several activities to cover maintenance tasks, migration tasks, or BPM-like tasks.
  • Native Core Services integration
The Core Service becomes the main API for workflow development. The new API brings a set of predefined Core Service variables that are available during workflow processing.
  • Process Instance State Management
If you are a Tridion developer you may be familiar with templating development and the concept of packages. SDL Tridion 2013 comes with a similar form of state management called process variables, which let us manipulate data that is available across a process instance.
  • Improved process suspend and resume
In previous releases it was hard to suspend an activity for a given amount of time. This version comes with an improved thread-suspension mechanism, which means that suspending an activity no longer directly reduces the number of workflow threads available in the system. Additionally, we now have a time-based resume mechanism.
  • Undo transactions
This functionality is not specific to workflows, but it is a commonly used feature. Imagine that one of your tasks should roll back a previous Publish/Unpublish transaction; note that rolling back is not always the same as unpublishing. In the new version it is possible to roll back a previous transaction.

Implementing a 2013 Workflow

  • Process definition creation
A process definition is normally created using the 2013 Visio plug-in. The new plug-in has several improvements, such as bundle-based settings and C#-based automatic activities.


Note the new Constraints section and the new Script Type. In this release we can develop automatic activities directly in C# without needing to register a .NET class as a COM component.


  • Automatic Activities development
The Core Service is the preferred API for developing automatic activities in SDL Tridion 2013. It comes with a set of predefined variables that give us access to the most important objects in a workflow process. The most important ones are listed below.
    • SessionAwareCoreServiceClient
This variable holds a session aware core service instance using the netTcp endpoint. This one maps to an ISessionAwareCoreService object.
    • CurrentActivityInstance
This variable holds an ActivityInstanceData object for the current activity.
    • ProcessInstance
This variable holds a ProcessInstanceData object for the current process instance.
    • ResumeBookmark
This variable holds a String object containing the bookmark to resume a suspended activity.

The following sample shows what a workflow C# script looks like.

<%@ Assembly Name="System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"%>
<%@ Assembly Name="WorkflowTest, Version=1.0.0.0, Culture=neutral, PublicKeyToken=204ab1ccd7d1736e"%>

<%@ Import Namespace="System.ServiceModel"%>
<%@ Import Namespace="WorkflowTest"%>

WorkflowManager workflow = new WorkflowManager(SessionAwareCoreServiceClient);
workflow.PublishActivityHandler(CurrentActivityInstance, ProcessInstance, PublicationTargets.Dev, ResumeBookmark);

In the script above, notice that the syntax is similar to C# fragments: we can use the predefined variables and also reference custom assemblies that are registered in the GAC.




  • Custom Assembly Development

  • Publish Activity

/// <summary>
/// Publishes a Bundle to a specified Publication Target
/// </summary>
/// <param name="activityInstance"></param>
/// <param name="processInstance"></param>
/// <param name="target"></param>
/// <param name="resumeBookmark">Holds the ResumeBookmark predefined variable</param>
public void PublishActivityHandler(ActivityInstanceData activityInstance, ProcessInstanceData processInstance, PublicationTargets target, string resumeBookmark) {
    if (string.IsNullOrEmpty(resumeBookmark)) {
        // Bundles are stored as Virtual Folders - Retrieve the bundle in the current Activity
        VirtualFolderData bundle = GetBundleForActivity(activityInstance);

        PublishInstructionData publishInstruction = new PublishInstructionData();
        publishInstruction.ResolveInstruction = new ResolveInstructionData();
        publishInstruction.RenderInstruction = new RenderInstructionData();
        publishInstruction.ResolveInstruction.IncludeWorkflow = true;

        // Retrieving the Publication Target
        string publicationTargetTitle = Enum.GetName(typeof(PublicationTargets), target);
        string publicationTargetId = GetPublicationTargetId(publicationTargetTitle);

        // Publish the bundle to the retrieved Publication Target
        string[] itemsToPublish = new string[] { bundle.Id };
        string[] targets = new string[] { publicationTargetId };
        PublishTransactionData[] publishTransactions = channel.Publish(itemsToPublish, publishInstruction, targets, PublishPriority.Normal, readOptions);

        // Store the Publish Transaction Id in the Process Instance Variables
        string publishTransactionKey = publicationTargetTitle + "PublishTransaction";
        if (processInstance.Variables.ContainsKey(publishTransactionKey)) {
            processInstance.Variables[publishTransactionKey] = publishTransactions[0].Id;
        }
        else {
            processInstance.Variables.Add(publishTransactionKey, publishTransactions[0].Id);
        }

        if (target == PublicationTargets.Live) {
            channel.SuspendActivity(activityInstance.Id, "Content Published to Live", DateTime.Now.Add(TimeSpan.FromMinutes(3)), "PublishLive", readOptions);
        }
        else {
            // Finish the Activity
            ActivityFinishData finishData = new ActivityFinishData() {
                Message = "Content published"
            };
            channel.FinishActivity(activityInstance.Id, finishData, readOptions);
        }
    }
    else if (resumeBookmark.Equals("PublishLive")) {
        // Finish the Activity
        ActivityFinishData finishData = new ActivityFinishData() {
            Message = "Content published"
        };
        channel.FinishActivity(activityInstance.Id, finishData, readOptions);
    }
}


  • Unpublish Activity

/// <summary>
/// Expires a Bundle from a specified Publication Target
/// </summary>
/// <param name="activityInstance"></param>
/// <param name="processInstance"></param>
/// <param name="target"></param>
/// <param name="resumeBookmark">Holds the ResumeBookmark predefined variable</param>
public void ExpireContentHandler(ActivityInstanceData activityInstance, ProcessInstanceData processInstance, PublicationTargets target, string resumeBookmark) {
    if (string.IsNullOrEmpty(resumeBookmark)) {
        // Bundles are stored as Virtual Folders
        VirtualFolderData bundle = GetBundleForActivity(activityInstance);

        UnPublishInstructionData unPublishInstruction = new UnPublishInstructionData();
        unPublishInstruction.ResolveInstruction = new ResolveInstructionData();

        // Retrieving the Publication Target
        string publicationTargetTitle = Enum.GetName(typeof(PublicationTargets), target);
        string publicationTargetId = GetPublicationTargetId(publicationTargetTitle);

        // Unpublish the bundle from the retrieved Publication Target
        string[] itemsToPublish = new string[] { bundle.Id };
        string[] targets = new string[] { publicationTargetId };
        PublishTransactionData[] publishTransactions = channel.UnPublish(itemsToPublish, unPublishInstruction, targets, PublishPriority.Normal, readOptions);

        // Store the Unpublish Transaction Id in the Process Instance Variables
        string unPublishTransactionKey = publicationTargetTitle + "UnpublishTransaction";
        if (processInstance.Variables.ContainsKey(unPublishTransactionKey)) {
            processInstance.Variables[unPublishTransactionKey] = publishTransactions[0].Id;
        }
        else {
            processInstance.Variables.Add(unPublishTransactionKey, publishTransactions[0].Id);
        }

        // Suspend the activity if the content was expired from Live
        if (target == PublicationTargets.Live) {
            // 'clutch' is a TimeSpan field defining the expiration gate window
            channel.SuspendActivity(activityInstance.Id, "Expiration Gate (Clutch)", DateTime.Now.Add(clutch), "ExpireLive", readOptions);
        }
    }
    else {
        // Flag that the clutch expired; this determines whether the workflow simply finishes or undoes the transaction and finishes
        if (resumeBookmark == "ExpireLive") {
            processInstance.Variables.Add("ClutchExpired", Boolean.TrueString);
        }

        // Finish the Activity
        ActivityFinishData finishData = new ActivityFinishData() {
            Message = "Content unpublished"
        };
        channel.FinishActivity(activityInstance.Id, finishData, readOptions);
    }
}

  • Reject Activity
/// <summary>
/// Rejects a Bundle, undoes the Publish Transaction and assigns the Activity to its last performer
/// </summary>
/// <param name="processInstance"></param>
/// <param name="activityInstance"></param>
/// <param name="target"></param>
public void RejectPublishActivity(ActivityInstanceData activityInstance, ProcessInstanceData processInstance, PublicationTargets target) {
    // Get the last performer for previous Activity
    TrusteeData lastPerformer = GetLastPerformerForActivity(processInstance, "Create Or Edit Bundle");

    // Bundles are stored as Virtual Folders - Retrieve the bundle in the current Activity
    VirtualFolderData bundle = GetBundleForActivity(activityInstance);

    // Retrieving the Publication Target and Publish Transaction
    string publicationTargetTitle = Enum.GetName(typeof(PublicationTargets), target);
    string publishTransactionKey = publicationTargetTitle + "PublishTransaction";

    if (processInstance.Variables.ContainsKey(publishTransactionKey)) {
        string publishTransactionId = processInstance.Variables[publishTransactionKey];

        // Undo Publish Transaction
        channel.UndoPublishTransaction(publishTransactionId, QueueMessagePriority.Normal, readOptions);
    }

    // Finish the Activity
    ActivityFinishData finishData = new ActivityFinishData() {
        Message = "The bundle " + bundle.Title + " has been rejected and reassigned to " + lastPerformer.Title,
        NextAssignee = new LinkToTrusteeData() { IdRef = lastPerformer.Id }
    };
    channel.FinishActivity(activityInstance.Id, finishData, readOptions);
}

Monday, July 2, 2012

Ambient Data Framework in a Nutshell

In this post I want to briefly cover an important part of Tridion that is ignored in several implementations but is always present, because it is a key part of most Content Delivery related products like UGC, Smart Target, etc.

What is ADF (Ambient Data Framework)?

When we refer to ADF we should think about state management. In general terms, ADF works as a repository of information related to a specific session or a specific request, which can be accessed or updated during a web operation.

How does ADF work?

ADF uses a modular design based on cartridges, which are executed in a sequence determined by the input/output claim dependencies configured for their claim processors and by registration order. Cartridges manipulate state in the form of claims, and claims can contain almost any form of data. The only restriction is that if your claim data is consumed from .NET, the data must be serializable from Java to .NET.

Being technology agnostic, ADF cannot access technology-specific objects like HttpServletRequest/HttpServletResponse or HttpContext, which is why we need a Java filter or a .NET HTTP module to start it. With this in mind, depending on the technology we are using, we register the ADF entry point either as a Java filter or as a .NET HTTP module; after that, everything is the same regardless of the technology.

What are the ADF pieces?

Cartridge
A set of Claim Processors grouped sequentially. ADF can contain zero, one or more cartridges.

ClaimProcessor 
A processing entity that receives the current claim store and manipulates it. It has three important methods that can be implemented: onSessionStart, executed when a new session starts; onRequestStart, executed at the beginning of each request; and onRequestEnd, executed at the end of each request.

ClaimStore
A set of data in the form of claims. It is implemented as a Map object in Java.

Claim
Data being stored, updated or removed from ADF.

Default Web Claims
ADF comes with a set of default claims called Web Claims, exposed through the WebClaims class. You can access those claims in the following way.

Server Variables: Map variables = (Map)claims.get(WebClaims.SERVER_VARIABLES);
Session Attributes: Map variables = (Map)claims.get(WebClaims.SESSION_ATTRIBUTES);
Request Url: Map variables = (Map)claims.get(WebClaims.REQUEST_FULL_URL);
Request Headers: Map variables = (Map)claims.get(WebClaims.REQUEST_HEADERS);

Those are the most important ones. Note that these claims are populated automatically by the ADF initiator (the Java filter or HTTP module).
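Conceptually, a claim store is just a map keyed by claim URI. The following self-contained sketch illustrates the put/contains/get semantics described above; the class and method names are hypothetical, standing in for the real com.tridion.ambientdata.claimstore.ClaimStore.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for ADF's claim store: a map keyed by claim URI.
// This sketch only illustrates the semantics; it is not the Tridion API.
public class ClaimStoreSketch {
    private final Map<URI, Object> claims = new HashMap<URI, Object>();

    public void put(URI uri, Object value) {
        claims.put(uri, value);
    }

    public boolean contains(URI uri) {
        return claims.containsKey(uri);
    }

    public Object get(URI uri) {
        return claims.get(uri);
    }

    public static void main(String[] args) {
        ClaimStoreSketch store = new ClaimStoreSketch();
        URI myClaim = URI.create("taf:extensions:claim:myclaim");

        // A claim processor would typically add data during onSessionStart...
        store.put(myClaim, "myclaimdata");

        // ...and application code reads it back later in the request.
        if (store.contains(myClaim)) {
            System.out.println(store.get(myClaim)); // prints "myclaimdata"
        }
    }
}
```

The real claim store adds scoping (SESSION vs REQUEST) and read-only protection on top of this basic map behavior.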

 

How is ADF configured?

The main configuration file is cd_ambient_conf.xml, which should be located on your classpath: generally the classes folder in Java, or the bin/config folder in .NET.

cd_ambient_conf.xml example

<?xml version="1.0" encoding="UTF-8"?>
<Configuration Version="6.1"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:noNamespaceSchemaLocation="schemas/cd_ambient_conf.xsd">
    <Cartridges>
        <Cartridge File="my_custom_cartridge1_conf.xml"/>
        <Cartridge File="my_custom_cartridge2_conf.xml"/>
    </Cartridges>
</Configuration>

This is a basic configuration file that simply registers two cartridges to be executed in sequential order. Each cartridge is configured in its own configuration file.

my_custom_cartridge1_conf.xml

<?xml version="1.0" encoding="UTF-8"?>
<CartridgeDefinition Version="6.1" Uri="taf:extensions:cartridge:mycartridge" Description="My Cartridge"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="schemas/cd_ambient_cartridge_conf.xsd">
    <ClaimDefinitions>
        <ClaimDefinition
                    Uri="taf:extensions:claim:myclaim"
                    Subject="taf:extensions:claim"
                    Scope="SESSION"
                    Description="My Claim" />
    </ClaimDefinitions>
    <ClaimProcessorDefinitions>
        <ClaimProcessorDefinition
                            Uri="taf:extensions:processor:myclaimprocessor"
                            Scope="SESSION" 
                            ImplementationClass="com.tridion.ambientdata.extensions.myclaimprocessor"
                            Description="My claim processor.">
            <RequestStart>
                <InputClaims />
                <OutputClaims>
                    <ClaimDefinition Uri="taf:extensions:claim:myclaim" />
                </OutputClaims>
            </RequestStart>
        </ClaimProcessorDefinition>
    </ClaimProcessorDefinitions>
</CartridgeDefinition>

This cartridge configuration file defines a SESSION-scoped claim called "myclaim" and a SESSION-scoped claim processor called "myclaimprocessor", implemented in the "com.tridion.ambientdata.extensions.myclaimprocessor" class. The claim is declared as an output claim of the processor, so it will contain the result of the processor's operations.

Java specific configuration

In Java, the ADF initiator is configured as a servlet filter in the web.xml file.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
    id="WebSite">
    <display-name>WebSite</display-name>
    <description>Web Site</description>

    <filter>
        <filter-name>Ambient Data Framework</filter-name>
        <filter-class>com.tridion.ambientdata.web.AmbientDataServletFilter</filter-class>
    </filter>
   
    <filter-mapping>
        <filter-name>Ambient Data Framework</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
</web-app>

.Net specific configuration

In .NET, the ADF initiator is configured as an HTTP module in the Web.config file.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.web>
        <compilation debug="true" targetFramework="4.0" />
    </system.web>
    <system.webServer>
      <modules>
        <add name="Tridion.ContentDelivery.AmbientData.HttpModule" type="Tridion.ContentDelivery.AmbientData.HttpModule" />
      </modules>
    </system.webServer>
</configuration>

Input/Output claims verification

ADF verifies the presence of input claims that are defined as required in the cartridge configuration file. For instance, suppose you define a claim processor like this:

<RequestStart>
    <InputClaims>
        <ClaimDefinition Uri="taf:extensions:myinputclaim" />
    </InputClaims>
    <OutputClaims />
</RequestStart>

ADF will verify the presence of a claim called "myinputclaim" in the current claim store; if it is not present, it will throw an exception.
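In other words, the verification ADF performs before invoking a claim processor amounts to checking that every declared input claim is already in the store. A minimal sketch of that check, with hypothetical names (this is not the real ADF implementation):

```java
import java.net.URI;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of ADF's input-claim verification: every claim a
// processor declares as input must already be present in the claim store,
// otherwise processing aborts with an exception.
public class InputClaimCheck {
    public static void verify(List<URI> requiredInputClaims, Map<URI, Object> claimStore) {
        for (URI claim : requiredInputClaims) {
            if (!claimStore.containsKey(claim)) {
                throw new IllegalStateException("Missing required input claim: " + claim);
            }
        }
    }
}
```

With the configuration above, a request in which no earlier processor produced "taf:extensions:myinputclaim" would fail this check, which is why cartridge ordering matters.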

How is a claim processor implemented?

A claim processor is implemented as a Java class that extends AbstractClaimProcessor. Claim processor implementors can override three methods: onRequestStart, onRequestEnd, and onSessionStart. As you may notice, onRequestStart and onRequestEnd apply to REQUEST-scoped claim processors, while onSessionStart applies to SESSION-scoped ones.

Claim Processor sample:

package com.tridion.ambientdata.extensions;

import java.net.URI;

import com.tridion.ambientdata.AmbientDataException;
import com.tridion.ambientdata.claimstore.ClaimStore;
import com.tridion.ambientdata.processing.AbstractClaimProcessor;

public class myclaimprocessor extends AbstractClaimProcessor {
    @Override
    public void onRequestStart(ClaimStore claims) throws AmbientDataException {
    }
   
    @Override
    public void onRequestEnd(ClaimStore claims) throws AmbientDataException {
    }

    @Override
    public void onSessionStart(ClaimStore claims) throws AmbientDataException {
        try {
            claims.put(new URI("taf:extensions:claim:myclaim"), "myclaimdata");
        } catch (Exception ex) {
            throw new AmbientDataException(ex);
        }
    }
}

How is a claim used outside of ADF?

The idea behind ADF is to share and use common state data regardless of the technology we are using (.NET or Java), so once we have finished developing our claim processors and configured our cartridges, we are ready to use the data in our own applications.

Java usage sample

ClaimStore claims = AmbientDataContext.getCurrentClaimStore();
if (claims.contains(new URI("taf:extensions:claim:myclaim"))) {
    String claimdata = claims.get(new URI("taf:extensions:claim:myclaim")).toString();
    // Logic here.
}

.Net usage sample

ClaimStore claims = AmbientDataContext.CurrentClaimStore;
if (claims.Contains(new Uri("taf:extensions:claim:myclaim"))) {
    string claimdata = claims.Get<string>("taf:extensions:claim:myclaim");
    // Logic here.
}