
Friday, 27 April 2018

Plugins in the sandbox, and why you don't get System.Security.Permissions.SecurityPermission

A relatively common error with plugins is "Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed". This is the general message that you get when you have a plugin registered in the sandbox, and it is trying to do something that is not permitted. You may get variations on this error, depending on exactly what the code is trying to do, for example:
  • FileIOPermission - access the file system
  • InteropPermission - most likely from using an external assembly (either directly, or one ILMerged into your plugin assembly)
  • System.Net.WebPermission - some form of network access, e.g. trying to access a network resource by IP address (the sandbox only allows access by DNS name)
  • SqlClientPermission - accessing SQL Server
The list goes on and on. Rather than trying to list everything you can't do, it's a lot simpler to list what you can, which is broadly:
  • Execute code that doesn't try to access any local resources (file system, event log, threading etc)
  • Call the CRM IOrganizationService using the context passed to the plugin
  • Access remote web resources as long as you:
    • Use http or https
    • Use a domain name, not an IP address
    • Do not use the .Net classes for authentication
All of which is pretty restrictive, but is understandable given the sandbox is designed to protect the CRM server. To me, the most annoying one is the last, which makes it pretty much impossible to call other Microsoft web services directly, such as SharePoint or Reporting Services.
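To make the web-access restrictions concrete, here is a minimal sketch of a sandbox-friendly external call from a plugin. The endpoint URL is hypothetical; the key points are using https (or http), a DNS name rather than an IP address, and no .Net authentication classes:

```csharp
using System;
using System.IO;
using System.Net;
using Microsoft.Xrm.Sdk;

public class SandboxSafePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // Allowed in the sandbox: an outbound call to a DNS-named endpoint over http/https
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/api/data");

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string body = reader.ReadToEnd();
        }

        // Not allowed: request.Credentials = new NetworkCredential(...) - .Net authentication classes
        // Not allowed: WebRequest.Create("https://192.168.1.1/...") - IP addresses are blocked
        // Not allowed: File.ReadAllText(...), EventLog.WriteEntry(...) - local resources
    }
}
```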

So, what to do about it? If you have CRM OnPremise, the simple (and only) solution is to register the assembly outside the sandbox, so that it can run in FullTrust - i.e. do whatever it wants (though still subject to the permissions of the CRM service account or asynchronous service account that it runs under).

And if you've got CRM Online, then the normal solution is to offload the processing to an environment that you have more control over. The most common option is to offload the processing to Azure, using the Azure Service Bus or Azure Event Hub. The alternative, new to CRM 9, is to send the data to a WebHook, which can be hosted wherever you like.

Thursday, 29 March 2018

Concurrent or Consistent - or both

A lesser-known feature that CRM 2016 brought to us is support for optimistic concurrency in the web service API. This may not be as exciting as some features, but as it's something I find exciting, I thought I'd write about it.

Am I an optimist
So, what is it about? Concurrency control is used to ensure data remains consistent when multiple users are making concurrent modifications to the same data. The two main models are pessimistic concurrency and optimistic concurrency. The difference between the two can be illustrated by considering two users (Albert and Brenda) who are trying to update the same field (X) on the same record (Y). In each case the update is actually two steps (reading the existing record, then updating it), and Albert and Brenda try to do the steps in the following time sequence:
  1. Albert reads X from record Y (let's say the value is 30)
  2. Brenda reads record Y (while it's still 30)
  3. Albert updates record Y (Albert wants to add 20, so he updates X to 50)
  4. Brenda updates record Y (she wants to subtract 10, so subtracts 10 from the value (30) she read in step 2, so she updates X to 20) 
If we had no concurrency control, we would have started with 30, added 20, subtracted 10, and found that apparently 30 + 20 - 10 = 20. Arguably we have a concurrency model, which is called 'chaos', because we end up with inconsistent data.
To avoid chaos, we can use pessimistic concurrency control. With this, the sequence is:
    1. Albert reads X from record Y (when the value is 30), and the system locks record Y
    2. Brenda tries to read record Y, but Albert's lock blocks her read, so she sits waiting for a response
    3. Albert adds 20 to his value (30), and updates X to 50, then the system releases the lock on Y
    4. Brenda now gets her response, which is that X is now 50
    5. Brenda subtracts 10 from her value (50), and updates X to 40
    So, 30 + 20 - 10 = 40, and we have consistent data. So we're all happy now, and I can finish this post.
    Or maybe not. Brenda had to wait between steps 2 and 4. Maybe Albert is quick, but then again, maybe he isn't, or he's been distracted, or gone for a coffee. For this to be robust, locks would have to be placed whenever a record is read, and only released when the system knows that Albert is not still about to come back from his extended coffee break. In low-latency client-server systems this can be managed reasonably well (and we can use different locks to distinguish between 'I'm just reading' and 'I'm reading and intending to update'), but with a web front-end like CRM, we have no such control. We've gained consistency, but at a huge cost in concurrency. This is pessimistic concurrency.
    Now for optimistic concurrency, which goes like this:
    1. Albert reads X from record Y (when the value is 30), and also reads a system-generated record version number (let's say it's version 1)
    2. Brenda reads record Y (while it's still 30), and the system-generated record version number (which is still version 1, as the record's not changed yet)
    3. Albert adds 20 to his value (30), and updates X to 50. The update is only permitted because Albert's version number (1) matches the current version number (1). The system updates the version number to 2
    4. Brenda subtracts 10 from her value (30), and tries to update X to 20. This update is not permitted, as Brenda's version number (1) does not match the current version number (2). So, Brenda will get an error
    5. Brenda now tries again, reading the current value (50) and version number (2), then subtracting 10, and this time the update is allowed
    The concurrency gain is that Albert, Brenda and the rest of the alphabetical users can read and update with no blocks, except when there is a conflict. The drawback is that the system will need to do something (even if it is just give an error message), when there is a conflict.
    What are the options
    Given this post is about a feature that was introduced in CRM 2016, what do you think happened before (and still happens now, unless you explicitly use optimistic concurrency)? If it's not optimistic concurrency, then it's either pessimistic or chaos. And it's not pessimistic locking: if Microsoft defaulted to this, then CRM would grind to a locked halt whenever users tried to concurrently access records.

    Maybe I want to be a pessimist
    As chaos sounds bad, maybe you don't believe that CRM would grind to a locked halt, or you're happy that users don't need concurrent access, or you've been asked to prevent concurrent access to records (see note 1). So, can we apply pessimistic locking? The short answer is 'no', and most longer answers also end up at 'no'. Microsoft give us almost no control over locking within CRM (see note 2 for completeness), and definitely no means to hold locks beyond any one call. If you want to prolong the answer as much as you can, you might conceive a mechanism whereby users only get user-level update access to records, and have to assign the record to themselves before they can update it, but this doesn't actually work either, as a user may still be making the update based on a value they read earlier. And you can't make it user-level read access, as the user then wouldn't be able to see a record owned by someone else in order to assign it to themselves.

    OK, I'll be an optimist
    So, how do we use optimistic concurrency? First of all, not every entity is enabled for optimistic concurrency, but most are. This is controlled by the IsOptimisticConcurrencyEnabled property of the entity, and by default it is true for all out-of-box entities enabled for offline sync, and for all custom entities. You can check this property by querying the entity metadata (but not in the EntityMetadata.xlsx document in the SDK, despite what the SDK documentation says).

    Then, to use optimistic concurrency you need to do at least 2 things, and preferably 3:
    1. In the Entity instance that you are sending to the Update, ensure the RowVersion property is set to the RowVersion that you received when you read this record 
    2. In the UpdateRequest, set the ConcurrencyBehavior to IfRowVersionMatches
    3. Handle any exceptions. If there is a row version conflict (as per my optimistic scenario above), then you get a ConcurrencyVersionMismatch exception. 
    For a code example, see the SDK
    I've described this for an Update request, and you can also use it for a Delete request, and I hope you'll understand why it doesn't apply to a Create request.
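The steps above can be sketched in code. This is a minimal example, assuming 'service' is an IOrganizationService and 'account' is an Entity retrieved earlier (so its RowVersion property is populated):

```csharp
using System.ServiceModel;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

Entity update = new Entity("account") { Id = account.Id };
update["name"] = "New name";
update.RowVersion = account.RowVersion;   // step 1: carry the version you read

UpdateRequest request = new UpdateRequest
{
    Target = update,
    ConcurrencyBehavior = ConcurrencyBehavior.IfRowVersionMatches   // step 2
};

try
{
    service.Execute(request);
}
catch (FaultException<OrganizationServiceFault> ex)   // step 3
{
    // Check ex.Detail.ErrorCode for the ConcurrencyVersionMismatch code;
    // on this error you would typically re-read the record (to get the
    // current value and RowVersion) and retry, or surface the conflict
}
```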

    One word of warning: I believe that some entities fail when using optimistic concurrency - this seems to affect the metadata-related entities (e.g. webresource or savedquery). I suspect this is because the metadata-related internals use a different internal concurrency mechanism (at the SQL level) from most other entities.

    How much does it matter
    I've left this till last, otherwise you may not have read the rest of the post, as it often doesn't matter. Consistency issues are most relevant if there's a long time between a read and the corresponding update. The classic example is offline usage (hence why it's enabled for out-of-box entities enabled for offline sync). I also see it as relevant for some bulk operations; for example we do a lot of bulk operations with SSIS, and for performance reasons, there's often a noticeable time gap between reads and writes in an SSIS data flow.

    Notes

    1. During CRM implementations, if asked 'Can we do X in CRM?', I very rarely just say no, and I'm more likely to say no for reasons other than purely technical ones. However, when I've been asked to prevent concurrent access to records, this is a rare case where I go for the short answer of 'no'
    2. We can get a little bit of control over locking within a synchronous plugin, as this runs within the CRM transaction. This is the basis of the most robust CRM-based autonumber implementations. However, the lock can't be held outside of the platform operation
    3. My examples have concentrated on updating a single field, but any talk of locking or row versions is at a record level. If Albert and Brenda were changing different fields, then we may not have a consistency issue to address. However, for practical reasons, any system applies locks and row versioning at a record, and not field, level. Also, even if the updates are to different fields, it is possible that the change they make is dependent on other fields that may have changed, so with optimistic concurrency we do get a ConcurrencyVersionMismatch if any fields have changed


    Friday, 27 June 2014

    Plugin pre-stages - some subtleties

    The CRM SDK describes the main differences in plugin stages here. However, there are some additional differences between the pre-validation and pre-operation stages that are not documented.

    Compound Operations
    The CRM SDK includes some compound operations that affect more than one entity. One example is the QualifyLead message, which can update (or create) the lead, contact, account and opportunity entities. With compound operations, the pre-validation event fires only once, on the original message (QualifyLead in this case) whereas the pre-operation event fires for each operation.
    You do not get the pre-validation event for the individual operations. A key consequence of this is that if, for example, you register a plugin on pre-validation of Create for the account entity, it will not fire if an account is created via QualifyLead. However, a plugin on the pre-operation of Create for the account entity will fire if an account is created via QualifyLead.
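One way to see this behaviour in practice is to inspect the parent context chain from a pre-operation plugin. This is a sketch, not production code; it assumes a plugin registered on the pre-operation stage of Create for account:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class AccountCreatePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Walk up the parent context chain; if the account Create was part of
        // a compound QualifyLead operation, QualifyLead appears as an ancestor
        bool fromQualifyLead = false;
        for (var parent = context.ParentContext; parent != null; parent = parent.ParentContext)
        {
            if (parent.MessageName == "QualifyLead")
            {
                fromQualifyLead = true;
                break;
            }
        }

        // fromQualifyLead now indicates whether this Create came via QualifyLead.
        // A pre-validation plugin on account Create would never reach this point
        // for QualifyLead, as it doesn't fire for the individual operations.
    }
}
```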

    Activities and Activity Parties
    I've posted about this before, however it's worth including it in this context. When you create an activity, there will be an operation for the main activity entity, and separate operations to create activityparty records for any attribute of type partylist (e.g. the sender or recipient). The data for the activityparty appears to be evaluated within the overall validation - i.e. before the pre-operation stage. The key consequence is that any changes made to the Target InputParameter that would affect an activityparty will only be picked up if made in the pre-validation stage for the activity entity.

    Friday, 26 April 2013

    The given key was not present in the dictionary - what it means, and how to avoid it

    A common error posted on the CRM Development forum is ‘the given key was not present in the dictionary’. This is a relatively easy error to diagnose and fix, provided you know what it means. It also helps to identify the line of code at which the error occurs, which is most easily determined by debugging.

    The error refers to a ‘dictionary’, and a ‘key’. The ‘dictionary’ is a type of collection object (i.e. it can contain many values), and the ‘key’ is the means by which you specify which value you want. The following two lines of code both show an example:
    Entity e = (Entity)context.InputParameters["Target"];
    string name = (string)e.Attributes["name"];  // Note that this is equivalent to: string name = (string)e["name"];

    In the first line, InputParameters is the dictionary, and "Target" is the key. In the second line, Attributes is the dictionary, and "name" is the key. The error ‘The given key is not present in the dictionary’ simply means that the dictionary does not have a value that corresponds to the key. So, this error would occur in the first line if InputParameters does not contain a key called "Target", or in the second line if there were no "name" in Attributes.

    The way to avoid these errors is simple; test if the key exists before trying to use it. Different collection classes can provide different ways to perform this test, but the collection classes in the CRM SDK assemblies all inherit from an abstract DataCollection class that exposes a Contains method, so you can use a consistent approach across these collection classes.
    if (context.InputParameters.Contains("Target"))
    {
        Entity e = (Entity)context.InputParameters["Target"];
        if (e.Attributes.Contains("name"))
        {
            string name = (string)e.Attributes["name"];
        }
    }


    There are a few common situations in the use of CRM collection classes where a key might not be present when you expect it:
    • Within a plugin, the values in context.InputParameters and context.OutputParameters depend on the message and the stage that you register the plugin on. For example, "Target" is present in InputParameters for the Create and Update messages, but not the SetState message. Also, OutputParameters only exist in a Post stage, and not in a Pre stage. There is no single source of documentation that provides the complete set of InputParameters and OutputParameters by message and stage, though this post provides a list of the most common ones for CRM 4, and most of these still apply in CRM 2011
    • The Attributes collection of an Entity will only contain values for attributes that have a value. You may get the Entity from a Retrieve or RetrieveMultiple having specified a ColumnSet with the attribute you want, but this attribute will not be present in the Attributes collection if there were no data in that attribute for that record
    • Within a plugin, the Attributes collection of an Entity that you obtain from the "Target" InputParameter will only contain attributes that were modified in the corresponding Create or Update method. Using the example above, if this were in a plugin registered on the Update message, the "name" attribute would only be present if the "name" attribute was changed as part of the Update; the "Target" InputParameter will not contain all the attributes for the entity
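As an aside, the Entity class also exposes a GetAttributeValue<T> method, which returns the default value for T (null for reference types) when the attribute is absent, rather than throwing. This can be a convenient alternative to an explicit Contains check on the Attributes collection:

```csharp
using Microsoft.Xrm.Sdk;

// Assumes 'context' is the IPluginExecutionContext, as in the example above
if (context.InputParameters.Contains("Target"))
{
    Entity e = (Entity)context.InputParameters["Target"];

    // Returns null (not an exception) if "name" is not in the Attributes collection
    string name = e.GetAttributeValue<string>("name");
}
```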

    Thursday, 19 April 2012

    Using wsdlbasedproxies with Claims authentication

    The CRM SDK has a little-known, but very useful set of projects called wsdlbasedproxies, which show how to connect to the CRM web services without using the .Net 4.0 assemblies.

    When testing the project for claims, I found that it needs some code additions. The code as supplied (in SDK v 5.0.9) sets the credentials.Windows property, but this fails with the error "The username is not provided. Specify username in ClientCredentials".

    Fortunately, this can be easily fixed by setting the credentials.UserName property instead. To do this, I made the following code replacements:

    Replace:
    credentials.Windows.ClientCredential = new NetworkCredential(UserName, UserPassword, UserDomain);
    with
    credentials.UserName.UserName = UserName;
    credentials.UserName.Password = UserPassword;


    And replace:
    client.ClientCredentials.Windows.ClientCredential = credentials.Windows.ClientCredential;
    with
    client.ClientCredentials.UserName.UserName = credentials.UserName.UserName;
    client.ClientCredentials.UserName.Password = credentials.UserName.Password;

    Wednesday, 13 July 2011

    PartyList attributes and the plugin event pipeline

    This should be a quick post about a subtlety with the plugin event pipeline. I recently wrote a plugin that could modify the data that's submitted when updating an activity record. This should have been a straightforward plugin on the Pre event that modified the Target InputParameter, and it all worked fine, except for partylist fields (such as the resources field on the serviceappointment entity, or optionalattendees on the appointment entity). Essentially, any changes I made to these fields in the plugin were ignored.

    Fortunately, there is a solution, and it depends on the stage that you register the plugin on. If you register on stage=20 (i.e. within the transaction), your changes are ignored. However, change the registration to stage=10 (before the transaction), then it does work. There's no documentation on this, but I expect it is due to how the partylist data is saved. This data is written to the activityparty table in the database, and I expect that the SQL for this is already fixed at the start of the transaction, and hence is unaffected by changes in the plugin code.
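For illustration, this is the shape of plugin that works when registered on stage 10 (pre-validation) but whose partylist change is ignored on stage 20. It's a sketch only, and someContactId is a hypothetical Guid you would obtain elsewhere:

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class AppointmentPreValidationPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (!context.InputParameters.Contains("Target"))
            return;

        Entity target = (Entity)context.InputParameters["Target"];

        // Build a replacement partylist value for optionalattendees.
        // someContactId is hypothetical - in real code you'd look it up
        Guid someContactId = Guid.NewGuid();
        Entity party = new Entity("activityparty");
        party["partyid"] = new EntityReference("contact", someContactId);

        EntityCollection attendees = new EntityCollection();
        attendees.Entities.Add(party);

        // Picked up on stage 10; ignored if this plugin runs on stage 20
        target["optionalattendees"] = attendees;
    }
}
```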

    Wednesday, 25 May 2011

    Unexpected error with ConditionOperator.In and typed arrays

    I just met a bizarre error when using the CRM xrm assembly with ConditionOperator.In in a query. In this case the query was to find all notes related to a list of CRM entities, and the error was "Condition for attribute 'annotation.objectid': expected argument(s) of type 'System.Guid' but received 'System.Guid[]'". I was using almost identical code to some code that did work, but there was a subtle difference in the overloads of some of the xrm methods.

    Consider the following code, which works:

    QueryExpression q = new QueryExpression("annotation");
    Guid g1 = Guid.NewGuid();
    Guid g2 = Guid.NewGuid();
    q.Criteria.AddCondition(new ConditionExpression("objectid", ConditionOperator.In, new Guid[] { g1, g2 }));

    However, change the last line to the following, and it fails with the error above:

    q.Criteria.AddCondition("objectid", ConditionOperator.In, new Guid[] { g1, g2 });

    On the face of it, you'd expect identical behaviour, but the problem appears to be due to the parameter overloads on the different methods. The constructor for ConditionExpression has 5 overloads, and for the array of Guids the compiler picks the one taking System.Collections.ICollection. However, the AddCondition method only offers one type for the third parameter (params object[]). The result is that the code fails because the parameter is interpreted as object[] { new Guid[] { g1, g2 } }.

    Interestingly, other code can also work, e.g.

    q.Criteria.AddCondition("objectid", ConditionOperator.In, new object[] { g1, g2 });
    q.Criteria.AddCondition("objectid", ConditionOperator.In, g1, g2);
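The underlying overload behaviour can be shown without any CRM assemblies at all. A Guid[] is not implicitly convertible to object[] (array covariance only applies to reference types), so the compiler wraps the whole array as a single element of the params array:

```csharp
using System;

class ParamsDemo
{
    // Mirrors the shape of AddCondition's third parameter
    static int CountValues(params object[] values)
    {
        return values.Length;
    }

    static void Main()
    {
        Guid g1 = Guid.NewGuid();
        Guid g2 = Guid.NewGuid();

        // Guid[] is not object[], so the whole array becomes one element: prints 1
        Console.WriteLine(CountValues(new Guid[] { g1, g2 }));

        // object[] matches params object[] directly: prints 2
        Console.WriteLine(CountValues(new object[] { g1, g2 }));

        // Individual arguments are collected into the params array: prints 2
        Console.WriteLine(CountValues(g1, g2));
    }
}
```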

    Saturday, 9 April 2011

    Options for upgrading ASP .Net extensions for CRM 2011

    I like April. One reason is that it's the start of my MVP renewal cycle. After April 1st I will either have been renewed, or not, and I feel less circumspect about making critical comments about decisions Microsoft have made. So, ASP .Net extensions with CRM 2011. Since the first release of CRM (CRM 1.0 or CRM 1.2 depending on your country), a major extension point for On-Premise CRM implementations was to develop ASP .Net extensions and deploy them within the CRM web site. Having a supported way to place these extensions within the CRM web site was important for several reasons:
    1. To allow relative Urls in IFrames, ISV.Config and SiteMap. The main reasons for that are to cope with IFD environments where different domain names are provided for internal and external access, and to avoid configuration issues when exporting/importing between environments
    2. To allow single-sign on and impersonation, so the code could act on behalf of the CRM user, without needing to re-enter their credentials
    3. To maintain the Site Origin (aka same site of origin). This is an important consideration as Internet Explorer security will only permit code interaction between pages if it considers that they are part of the same site. For example, the ability to pass data to a dialog using window.dialogArguments, or the ability to access objects on a calling window via window.opener both depend on the pages being in the same site
    However, CRM 2011 puts significant restrictions on putting your ASP .Net extensions within the CRM web site. Essentially, the only supported option is to retain existing extensions that use the CRM 4 endpoint. So, what happens to the points above if you can't put your ASP .Net extensions in the CRM web site:
    1. You'll have to use absolute Urls. This makes deployment between environments (e.g. from development to live) harder, as the Urls would have to be changed. In small-scale environments this would be just a manual task, but in larger environments you may decide to build a process to automate this. Overall, I see this as a major annoyance, but not a major problem
    2. Microsoft have put work into making single sign-on work across web sites. This depends on setting up Secure Token Services, which incurs some administrative and deployment overhead. There's an additional deployment overhead of setting up a new web site for the extensions, and configuring access to it. I've not tested this fully, but assuming it works as promised, this should resolve the single sign-on and impersonation issue
    3. This is the big problem area. I don't think same site of origin can be maintained with the ASP .Net pages outside of the CRM web site, which effectively removes support for a common type of extension that was possible in all previous versions (Note, IE 8 has some settings that affect how Site Origin is applied, and this might help, but IE 7 is a supported browser for CRM 2011, so this cannot be a universal solution)

    So, what can/should you do with your ASP .Net extensions that you wrote for CRM 4.0, or were intending to write. I see 3 main options:

    1. Don't upgrade your code. ASP .Net code that uses the CRM 4 web service endpoint will still work in the ISV folder, and Microsoft have not said it is unsupported. This is the simplest option, but it means you need to maintain CRM 4 code, and you'll still have to address the issue when CRM 6 comes out. You could cross your fingers and hope that, in CRM 6, Microsoft reintroduce support for ASP .Net extensions within the CRM web site
    2. Upgrade the code to use the CRM 2011 endpoint, and deploy it in a separate web-site from CRM. As stated above, this causes extra deployment overhead (which I consider is a significant overhead, which is often under-estimated), and you won't be able to use same site of origin, so you have to expect some limitations
    3. Rewrite the code as web resources (Silverlight, or HTML with javascript). Microsoft have introduced quite a lot of integration points in CRM 2011 that makes this a powerful option. This has a major advantage that the resources are necessarily hosted within the CRM web site, and can be deployed as part of a CRM solution. However, these are client-side technologies and some extensions would need extra work to build (e.g. extensions that access a database on another server). My biggest problem though is the development effort required to rewrite code in substantially different technologies, and these are technologies that are not as mature as ASP .Net

    Of these, I don't like any of the options. Numbers 2 and 3 would be necessary if you need to support CRM Online, but for On-Premise implementations there are some difficult decisions to make. Don't get me wrong, I appreciate that there are a lot of good things for developers in CRM 2011; web resources and the single sign-on across web-sites are very powerful and very welcome, and the only options for CRM Online, it's just a shame to lose some On-Premise options.

    But to finish on a more positive note, another reason I like April is that this is a great month for ski touring.

    Wednesday, 16 February 2011

    Using CRM 4.0 assemblies on a CRM 2011 Server

    CRM 2011 Server includes a publisher policy that causes any assembly built against the CRM 4 sdk assemblies to load the CRM 5 sdk assemblies instead. There are certain circumstances where this can cause errors loading the assembly; see the end of this post for possible error messages.

    One workaround is to not run the application on a CRM 2011 Server, but there is an alternative, which is to explicitly tell your application not to use this publisher policy file. This is done by adding the following to the app.config file:

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="Microsoft.Crm.Sdk" publicKeyToken="31bf3856ad364e35" culture="neutral" />
            <publisherPolicy apply="no" />
          </dependentAssembly>
        </assemblyBinding>
      </runtime>
    </configuration>

    This raises one more issue: in some circumstances your assembly may not be the main .exe, but a .dll loaded by another process, in which case you'll have to modify/create the .config file for that .exe. This is done by creating a file named [exename].exe.config in the same directory as the .exe (here's an example). I have a nagging concern that I may have to do this with SSIS packages that use a custom component that uses the SDK assemblies, which could get interesting, as different executables are used in design, debug and runtime. If I do have this issue with SSIS, then I'll post a more detailed workaround (if I find it).

    My hope is that this is a temporary problem that will be fixed, as the readme in the 5.0.1 version of the SDK refers to an 'incorrect Publisher Policy'. This readme also gives an explanation of the issue.

    One possible error
    System.IO.FileLoadException: Could not load file or assembly 'Microsoft.Crm.Sdk, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
    File name: 'Microsoft.Crm.Sdk, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' ---> System.IO.FileLoadException: Could not load file or assembly 'Microsoft.Crm.Sdk, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
    File name: 'Microsoft.Crm.Sdk, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35'

    Another possible error
    System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Crm.Sdk, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
    System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Crm.Sdk, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.

    Friday, 11 February 2011

    Plugin Deployment Options

    The CRM 4 SDK gives some information about the storage options when registering plugins but there are a few more considerations. I got prompted to elaborate on this in a forum post, and I think it's worth documenting this here as well:

    The 3 storage options are: Database, Disk and GAC. The main differences between these are:

    • Database: The assembly dll is stored in the database, rather than the file system. The major advantages are that the assembly need only be deployed once if you have multiple CRM servers, and that no additional action is required to restore / redeploy the assembly either during disaster recovery, or if redeploying to an alternate server. This is the preferred option in a production environment
    • Disk: The assembly dll is placed in the \server\bin\assembly directory on each server. You have to ensure the dll is placed in the correct place on all CRM servers, so the deployment overhead is a little greater. I normally use this option in development environments as you can redeploy newer versions solely by file transfer, rather than reregistering. Also, if debugging, the assembly .pdb file needs to be placed in the same location; with this option it's easy to ensure the dll and pdb are from the same build
    • GAC: The assembly is placed in the Global Assembly Cache on each CRM server, and again you will have to do this. The GAC does allow multiple versions of an assembly, but CRM doesn't, so you don't really gain anything by using the GAC. I don't think I've ever used this option

    There is one further consideration. If your plugin assembly has other dependent assemblies, then you can place these dependent assemblies in the GAC, whichever of the above options you take. However, if you use the Disk option, then the dependent assemblies can instead be deployed into the \server\bin\assembly directory

    Wednesday, 17 November 2010

    How to use impersonation in an ASP .Net page using IFD in CRM 4.0

    This is a common requirement, and I've never found what I consider to be a suitable explanation of what needs to be done. This post is not intended to be exhaustive, but it is intended to cover the essentials in one place.

    The fundamental requirement is to create a custom ASP .Net page that is accessible both internally (via AD authentication) and over the Internet (via IFD authentication), and where the code will access CRM data under the context of the user accessing the page. To do this, you need to deploy and configure your code as follows:


    1. Deploy the ASP .Net page within the CRM Web Site (the only supported place is within the ISV directory). If you don't do this, then IFD authentication will not apply to your page
    2. Run the ASP .Net page within the CrmAppPool, and do not create an IIS application for it. If you don't do this, then you won't be able to identify the authenticated user
    3. Ensure that the CRM HttpModules MapOrg and CrmAuthentication are enabled. This will happen by default by inheritance of the settings from the root web.config file in the CRM web site, but I'm mentioning it here as there are some circumstances (when you don't need IFD) in which it is appropriate to disable these HttpModules. Again, if the HttpModules aren't enabled, then you won't be able to identify the authenticated user
    4. As your code is in a virtual directory (rather than a separate IIS application), ASP .Net will look for your assemblies in the [webroot]\bin folder, so that is where you should put them (or in the GAC). The initial release documentation for CRM 4.0 stated that it was unsupported to put files in [webroot]\bin folder of the CRM web site, but this restriction has been lifted

    You also need to follow certain coding patterns within your code. An example of these can be found here. Note that 'Crm web services' refers to both the CrmService and the MetadataService:

    1. Ensure you can identify the organisation name. The example code shows how to parse this from the Request.Url property, though I prefer to pass this on the querystring (which the example also supports)
    2. Use the CrmImpersonator class. All access to the Crm web services needs to be wrapped within the using (new CrmImpersonator()) block. If you don't do this you will probably get 401 errors, often when accessing the page internally via AD authentication (see later for a brief explanation)
    3. Use the ExtractCrmAuthenticationToken static method. This is necessary to get the context of the calling user (which is stored in the CallerId property)
    4. Use CredentialCache.DefaultCredentials to pass AD credentials to the Crm web services. If you don't do this, then you will probably get 401 errors as you'd be trying to access the web service anonymously (IIS would throw these 401 errors)
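    Putting the four patterns above together, a minimal sketch of the server-side code might look like the following. This is based on the SDK sample referenced above; the service Url, the querystring parameter name and the use of Page_Load are my assumptions, not requirements:

    ```csharp
    // Sketch only - the Url and querystring parameter name are assumptions
    protected void Page_Load(object sender, EventArgs e)
    {
        string orgName = Request.QueryString["orgname"];

        // Pattern 2: revert any IIS / ASP .Net impersonation to the CrmAppPool identity
        using (new CrmImpersonator())
        {
            // Pattern 3: get the calling user's context from the HttpContext
            CrmAuthenticationToken token =
                CrmAuthenticationToken.ExtractCrmAuthenticationToken(Context, orgName);
            token.OrganizationName = orgName;

            // Pattern 4: call the Crm web services as the execution account,
            // with CRM impersonating the caller via the token's CallerId
            CrmService service = new CrmService();
            service.Url = "http://localhost/mscrmservices/2007/crmservice.asmx";
            service.CrmAuthenticationTokenValue = token;
            service.Credentials = CredentialCache.DefaultCredentials;

            WhoAmIRequest request = new WhoAmIRequest();
            WhoAmIResponse response = (WhoAmIResponse)service.Execute(request);
        }
    }
    ```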

    That should be all that you need on the server side. The final piece of the puzzle is to ensure that you provide the correct Url when accessing the page, which again needs a little consideration:

    When accessing the page from an internal address, the Url should be of the form:
    http://[server]/[orgname]/ISV/MyFolder/MyPage.aspx

    When accessing the page from an external address, the Url should be of the form:
    http://[orgname].[serverFQDN]/ISV/MyFolder/MyPage.aspx

    This is relatively easy to achieve when opening the page from within CRM (i.e. in an IFrame, via an ISV.config button or in client script). In each case you can use the PrependOrgName global function in client script - e.g.

    var u = PrependOrgName('/ISV/MyFolder/MyPage.aspx');

    This function will determine correctly whether to add the organisation name to the Url. Note also that I've provided a relative Url, which will ensure the first part of the Url is always correct. As this uses a JavaScript function, you will always need to use a small piece of code to access the page, and cannot rely on statically providing the Url in the source of an IFrame, or in the Url attribute of an ISV.Config button. Any relative Urls in SiteMap should automatically get the organisation name applied correctly. Remember to also pass the organisation name on the querystring if the server code expects this (you can get the organisation name from the ORG_UNIQUE_NAME global variable)

    Earlier I promised an explanation of what the server code does. This is not as complete an explanation as it could be, but the basics are:

    1. The HttpModules identify the correct CRM organisation (MapOrg) from the Url provided, and place information about the authenticated calling user in the HttpContext (CrmAuthentication)
    2. The ExtractCrmAuthenticationToken method reads the user context from the HttpContext, and puts the user's systemuserid in the CallerId property of the CrmAuthenticationToken
    3. Because the CallerId is set, the call to CRM is necessarily using CRM impersonation. For this to be permitted, the execution account (see Dave Berry's blog for a definition) must be a member of the AD group PrivUserGroup. The execution account is the AD account that is returned by CredentialCache.DefaultCredentials. This is where things get a little interesting
    4. If the request comes via the external network and IFD authentication is used, CRM handles the authentication outside of IIS and no IIS / ASP .Net impersonation occurs. Therefore CredentialCache.DefaultCredentials will return the AD identity of the process, which is the identity of the CrmAppPool, which necessarily is a member of PrivUserGroup
    5. However, if the request comes via the internal network, AD authentication is used and IIS / ASP .Net impersonation does occur (through the setting in web.config). This impersonation will change the execution context of the thread, and CredentialCache.DefaultCredentials would then return the AD context of the caller. This is fine in a pure AD authentication scenario, but the use of the ExtractCrmAuthenticationToken method means that CRM impersonation is necessarily expected; this will only work if the execution account is a member of PrivUserGroup, and CRM users should not be members of PrivUserGroup. This is where the CrmImpersonator class comes in: its constructor reverts the thread's execution context to that of the process (i.e. it undoes the IIS / ASP .Net impersonation), so that CredentialCache.DefaultCredentials will now return the identity of the CrmAppPool, and the CRM platform will permit CRM impersonation

    To finish off, here are a few other points to note:

    • IFD impersonation only applies when accessing the CRM platform. If you use IFD authentication, there is no way of impersonating the caller when accessing non-CRM resources (e.g. SQL databases, SharePoint, the file system); it cannot be done, so save yourself the effort and don't even try (though, for completeness, SQL impersonation is possible using EXECUTE AS, but that's it)
    • If you want to use impersonation, do not use the CrmDiscoveryService. The CrmDiscoveryService can only be used with IFD if you know the user's username and password, and you won't know these unless you prompt the user, which kind of defeats the point of impersonation

    Friday, 29 October 2010

    Web.config settings - e.g. ViewState, Session

    A common issue raised in the CRM Development forum is of custom web pages that work correctly in a development environment, but then fail to work when deployed into the CRM web site. The most common reason for this is that the CRM web.config overrides some of the default ASP.Net configuration settings.

    The relevant entries in the CRM web.config (in the root of the CRM web site) are:

    <pages buffer="true" enableSessionState="false" enableViewState="false" validateRequest="false"/>
    <sessionState mode="Off"/>

    These have 2 main consequences:
    1. Session State is disabled. This issue is relatively easy to diagnose, as you tend to get a clear error message if trying to use session state when it is disabled
    2. ViewState is disabled. This can be a more subtle effect, as developers often rely on viewState without necessarily being aware of it. ViewState is what allows ASP.Net web controls to maintain property values across web requests; if it is disabled then it leads to symptoms such as values not being retained, or list boxes losing their contents

    The solution for viewState is straightforward. You can re-enable viewState for your application either in the web.config in your application directory, or at the page level within the <%@ Page %> directive. These look like the following:

    web.config:
    <pages enableViewState="true" />

    Page directive:
    <%@ Page EnableViewState="true" %>

    Session state is a bit more complex, as this is configured at the web application level. Personally, I've never seen any reason to use session state within custom code in the CRM web site; CRM doesn't use this, and I find it best to mimic CRM behaviour wherever possible.

    And one final point about best practice: as this post demonstrates, it is best not to rely on the default ASP .Net configuration settings; rather, I find it best to always explicitly enable or disable settings in the local web.config

    Thursday, 28 October 2010

    SDK assemblies: Versions and Processor Architecture

    The download of the Dynamics CRM SDK includes the sdk assemblies (microsoft.crm.sdk.dll, microsoft.crm.sdktypeproxy.dll etc). There are 4 sets of these assemblies, one in the bin directory, another in bin\64bit, and two more in the respective online subdirectories.


    Given the directory naming, I'd always assumed that the assemblies in the bin directory were built for 32bit (x86) systems only, and those in bin\64bit were built for 64bit systems. I found this a little annoying, as I generally prefer to build assemblies as MSIL (AnyCPU) to avoid the need for different 32 and 64 bit deployment packages.

    However, it turns out that the assemblies in the bin\64bit directory are actually built as MSIL (AnyCPU), rather than specifically as 64 bit assemblies (apparently this was an earlier deployment mix-up which it's now too late to correct). This gives me what I want, so I now always use the assemblies that, bizarrely, come in the bin\64bit directory of the SDK.

    Wednesday, 12 May 2010

    Advanced Developer Extensions - an update

    Just a quick note to say that I've updated my post on the Advanced Developer Extensions. Shan McArthur of AdxStudio (who developed these extensions) was kind enough to provide extra information about these extensions, which I've now incorporated into the original post

    Tuesday, 11 May 2010

    CRM SDK 4.0.12 and the Advanced Developer Extensions

    The CRM 4.0.12 SDK has recently been released. Normally an SDK update is not particularly significant, but in this case it includes some major enhancements, which come under the banner of 'Advanced Developer Extensions'. David Yack has already posted a quick how-to on the CRM Team Blog; rather than duplicate that, this post is intended to cover the scope and limitations of the new extensions as I see them, and how they differ from the original CRM programming model.

    What are the new extensions ?
    It seems to make sense to split the new features in two: the 'Advanced Developer Extensions' (which has been helpfully shortened to Microsoft xRM), and the Portal Accelerator (aka Portal Developer). The Portal Accelerator uses Microsoft xRM, but I think it is otherwise best treated separately. So, for this post I'll concentrate on Microsoft xRM.

    Architecture of Microsoft xRM
    Although Microsoft xRM appears to provide a whole new programming model, it can be considered as essentially a (rather big) wrapper around the existing SDK assemblies (microsoft.crm.sdk and microsoft.crm.sdktypeproxy). So, although your code would not directly use the classes in the SDK assemblies, the communication with CRM is still ultimately done via the CRM web services, and is subject to the same limitations (e.g. limitations of the FetchXml query syntax). Another consequence of this is that you do not need to change any of your network configuration to use these extensions.

    Changes to the Programming Model
    This is where it gets interesting; the new extensions provide a whole new programming model, which can affect pretty well all of the code you use to communicate with CRM. The major changes as I see it are:

    1. You can use strongly-typed classes for the CRM entities. Although you can do this with the existing SOAP CRM web services, up till now you needed to use the DynamicEntity class with the SDK assemblies
    2. Native .Net types are used for attributes - e.g. int? rather than CrmNumber. Note that nullable types (e.g. int?, rather than int) are used
    3. The connection information is covered within a DataContext class, which effectively replaces the use of the CrmService instance
    4. Data modifications can be cached locally, then submitted as a batch, using the DataContext.SaveChanges method
    5. The extensions provide additional methods to update lookup values. These can work off either the lookup attribute name (e.g. customerid) or the relationship name (e.g. contact_customer_accounts). The extensions also provide methods to retrieve related records, which avoids the need to use QueryExpression or QueryByAttribute
    6. You can use LINQ queries to retrieve data from CRM, rather than building QueryExpression instances
    7. The DataContext and LINQ technologies allow direct data-binding with .Net user interface controls

    Use of Strongly-Typed classes
    As with the SOAP web services, you can use strongly-typed classes for system and custom entities. Superficially the process for setting this up differs from the SOAP web services, although the underlying idea is pretty similar. With these extensions, you use a supplied tool (CrmSvcUtil.exe) to connect to a CRM server. This tool generates the class definitions for all CRM entities into one output code file, which you add into your .Net project. Ultimately, this process is very similar to what happens behind the scenes when you create a Web Reference to the SOAP web services. The main internal difference is that the generated classes in these extensions map down to the DynamicEntity class, but this is hidden from you.

    You can still use a generic class with these extensions rather than strongly-typed classes. With the extensions it is ICrmEntity, rather than DynamicEntity.

    Native .Net Types
    Native .Net Types are used instead of Crm-specific types. The extensions use the nullable versions of the types (e.g. int?, rather than int) so that you can still identify null values (which was one of the main original reasons for the Crm-specific types). For picklist and status attributes the extensions provide an additional Label attribute (e.g. customertypecodeLabel) with the appropriate text, whereas for lookup attributes you can get the related entity via a property that has the relationship name (e.g. price_level_accounts).
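    As a small illustration of the type mappings described above - the entity and attribute names are just examples, and the classes are assumed to come from the CrmSvcUtil-generated code:

    ```csharp
    // Sketch only - "contact" and its attribute properties are assumed
    // to be generated by CrmSvcUtil.exe
    var crm = new XrmDataContext("Crm");
    contact c = crm.contacts.First();

    int? typeCode = c.customertypecode;          // int?, not CrmNumber - null if unset
    string typeLabel = c.customertypecodeLabel;  // picklist text via the extra Label property
    DateTime? created = c.createdon;             // DateTime?, not CrmDateTime
    ```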

    DataContext class
    This replaces the need for the CrmService instance, and handles the selection of authentication type, passing credentials and management of the CrmAuthentication token. All connection-related information (server URL and port, organisation name, authentication type and credentials) can be specified in one connection string. The extension code includes logic to read connection string information from the application config file. Overall, this should greatly simplify deployment across different environments and authentication types.

    The DataContext exposes an IOrganizationService instance, which looks to combine the underlying ICrmService and IMetadataService instances. This allows use of any of the more specific CrmService messages (e.g. InstantiateTemplate), or MetadataService messages.

    Interestingly the constructor for the DataContext can take an IContextService or IWorkflowContext instance, but not an IPluginExecutionContext instance. This implies that the extensions can work within a custom workflow activity, but are of limited use within plugins. See below for more on this.

    Batch Updates
    It looks to me as though any changes submitted via the DataContext will be submitted as a batch via the SaveChanges method. This won't provide any transactional support, as this is not possible within the Crm Web Services. Overall, I think this approach is due to the design patterns used in the Microsoft data-related classes, and I'm pretty neutral as to whether this offers more benefits or drawbacks. If you do develop with these extensions, I'd bear the following in mind:

    • I could imagine developers forgetting to include the SaveChanges method, and I can't see (though I've not tested this) a way that the extension code would throw an error if pending changes were discarded
    • There seems to be no means to control the behaviour if an individual data operation fails, and the documentation doesn't currently describe the default behaviour here. To get such control you need to call SaveChanges for each individual data modification
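    A sketch of the batch pattern follows. The AddTocontacts method name follows the LINQ-to-SQL naming convention that the generated code appears to use, but treat the names here as assumptions:

    ```csharp
    // Sketch only - "contact" is an assumed generated class, and "Crm" an
    // assumed connection string name
    var crm = new XrmDataContext("Crm");

    var c1 = new contact { firstname = "Alice", lastname = "Example" };
    var c2 = new contact { firstname = "Bob", lastname = "Example" };

    crm.AddTocontacts(c1);   // queued locally - nothing sent to CRM yet
    crm.AddTocontacts(c2);

    crm.SaveChanges();       // both creates submitted here, non-transactionally
    ```

    If you need per-record control over failure behaviour, the safer pattern is to call SaveChanges after each individual modification.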

    Handling lookups and relationships
    I've not looked at this in great detail, but I see the greatest benefit is the simplicity of retrieving related records with methods such as GetRelatedEntities to get child and many-many entities, and GetRelatedEntity to get a parent entity.

    Use of LINQ queries
    Again, not an area I've spent much time with. In the first instance I see this as most useful for developers that are already familiar with the LINQ query syntax, but I generally encourage the use of standard technologies (such as LINQ) in preference to application-specific technologies (such as QueryExpression). My expectation is that you should expect developer productivity gains with use of LINQ instead of QueryExpression, but I've not spent enough time on this to get good metrics. It should be emphasised that the LINQ query ultimately maps down to a QueryExpression, and hence you are as limited in the scope of queries as you currently are.

    Data Binding
    This really comes as a direct benefit of the use of standard .Net data classes, and LINQ. You can directly bind UI components like the ASP.Net GridView control to the results of a LINQ query. Oddly, there doesn't seem to be an example of this in the main crmsdk4.chm help file, but the following code extract from the 'advanced_developer_extensions_-_walkthrough_webapp.docx' document in the SDK shows how easy and powerful this can be:

    var crm = new XrmDataContext("Crm");
    ContactsGrid.DataSource = crm.contacts.Where
    (c => c.emailaddress1.EndsWith("@example.com"));
    ContactsGrid.DataBind();

    Limitations of the Advanced Developer Extensions
    There's a lot of very good stuff in the areas mentioned above, which could have significant benefits on developer productivity. However, I don't think you can immediately replace all existing .Net code (even if you wanted to), as there are some areas that I don't think these extensions reach (yet?). Note that the following list is based on my investigations so far, and may include features that do exist, but which I've missed in my brief analysis so far.

    1. Plugins. As mentioned above, the constructor for the XrmDataContext class has no overload that takes an IPluginExecutionContext. I also can't see a way to manipulate a DynamicEntity (e.g. from InputParameters) within these extensions, so I don't think they are viable for use in plugin code
    2. IFD (aka SPLA) impersonation. AD impersonation is supported, but I can't see the equivalent of the CrmImpersonator class. I need to do some further tests on this to see if IFD impersonation can work with these extensions; if not it would restrict the deployment scenarios for web extensions

    It's also worth pointing out that the major benefits of the extension code relate to standard data operations (Create, Retrieve, Update, Delete). You can still use the additional messages (e.g. InstantiateTemplate, Merge) of the CrmService and MetadataService through these extensions, but you'll need to use the standard SDK types

    My understanding is that these extensions originated as a basis for developing portal-style applications against CRM, so in that context it is not at all surprising that there are areas where the code can't reach. It'll be interesting to see how this changes, both with CRM 4 and CRM 5.

    What Next ?
    What next, indeed. The timing of this release is interesting, coming relatively late in the life of CRM 4 as the most recent version of CRM. It's still a little too early to know what will happen with CRM 5, but it would be logical to expect that these extensions will continue to work with future versions of CRM. The more interesting question is whether we will continue to have more than one programming model for CRM 5 (e.g. a native interface, and extensions such as these which form a wrapper around the native interface), or whether these 2 models will start to merge together.

    Friday, 13 November 2009

    Socket Exhaustion when accessing CRM web services

    When submitting a large number of requests to the CrmService web service, you may occasionally get the error 'Only one usage of each socket address (protocol/network address/port) is normally permitted'. The reason for this is 'socket exhaustion', which is excellently described here. You are at risk of this happening when over 4000 web requests are submitted within 4 minutes - I find this most commonly on data imports, but have recently encountered it during heavy user activity on a live CRM system.

    There are 2 solutions:
    1. As suggested in the link above, add registry values to give a much wider range of socket addresses. Note that you need to restart the server for these registry changes to take effect
    2. Set the UnsafeAuthenticatedConnectionSharing and PreAuthenticate properties of your CrmService (or MetadataService) proxy to true. This will allow your web requests to reuse the same socket
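    The second option is a one-line change per proxy instance - something like the following, where the Url is just a placeholder:

    ```csharp
    // Sketch only - Url and token setup shown for context
    CrmService service = new CrmService();
    service.Url = "http://crmserver/mscrmservices/2007/crmservice.asmx";
    service.CrmAuthenticationTokenValue = token;
    service.Credentials = CredentialCache.DefaultCredentials;

    // Keep the authenticated connection alive and reuse it across requests
    service.UnsafeAuthenticatedConnectionSharing = true;
    service.PreAuthenticate = true;
    ```

    Note that authenticated connection sharing is only appropriate when all requests are made under the same Windows identity; if the identity varies, the ConnectionGroupName property can be used to partition the shared connections.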

    Thursday, 30 July 2009

    Some undocumented CRM attribute types

    I was working on some custom plugin registration tools recently, and came across some undocumented attribute types. Several customisation entities (e.g. PluginAssembly, PluginType, SavedQuery) have an attribute called CustomizationLevel which has a value of 0 for system data, and 1 if it is custom or customised (according to the SDK).

    The SDK documentation states this attribute is of type CrmNumber, so when writing a QueryExpression that selects only records with a CustomizationLevel = 1, it would seem reasonable to use something like:

    qe.Criteria.AddCondition(new ConditionExpression("customizationlevel", ConditionOperator.Equal, 1));

    However, this gives the error “Condition for attribute 'customizationlevel': expected argument(s) of type 'System.Byte' but received 'System.Int32' " (code 0x80040203 - Invalid Argument). Digging deeper I found that the field for the CustomizationLevel is stored in SQL as a tinyint (i.e. a single-byte integer), and that the AttributeTypes table in CRM has a corresponding AttributeType of tinyint.

    So, despite the attribute being identified as a CrmNumber, any condition expressions need to pass values as a single-byte integer, not the documented four-byte integer. This is easily done by using the following:

    qe.Criteria.AddCondition(new ConditionExpression("customizationlevel", ConditionOperator.Equal, (Byte) 1));

    Being somewhat nosey, I thought to see what other attribute types there were. These can be easily found with the following SQL query:

    Select * from AttributeTypes

    In addition to tinyint, 2 similar types caught my attention – smallint and bigint (SQL Server data type names are not that imaginative). Following on from this, the following SQL query lists attributes of these types, which may cause similar problems to those above:

    select e.name as Entity, a.name as Attribute, at.description as [Type]
    from attribute a join entity e on a.entityid = e.entityid
    join attributetypes at on a.attributetypeid = at.attributetypeid
    where at.description in ('tinyint', 'smallint', 'bigint')
    order by at.description, e.name, a.name

    This yields the following results. The bigint attributes aren’t a concern, as these 2 attributes aren’t available via the CRM platform, but you could encounter some of the smallint types on UserSettings, in which case I expect you’d have to cast values to Int16.

    Entity | Attribute | Type
    AsyncOperation | sequence | bigint
    Subscription | completedsyncversionnumber | bigint
    Organization | tokenexpiry | smallint
    UserSettings | advancedfindstartupmode | smallint
    UserSettings | timezonecode | smallint
    UserSettings | timezonedaylightday | smallint
    UserSettings | timezonedaylightdayofweek | smallint
    UserSettings | timezonedaylighthour | smallint
    UserSettings | timezonedaylightminute | smallint
    UserSettings | timezonedaylightmonth | smallint
    UserSettings | timezonedaylightsecond | smallint
    UserSettings | timezonedaylightyear | smallint
    UserSettings | timezonestandardday | smallint
    UserSettings | timezonestandarddayofweek | smallint
    UserSettings | timezonestandardhour | smallint
    UserSettings | timezonestandardminute | smallint
    UserSettings | timezonestandardmonth | smallint
    UserSettings | timezonestandardsecond | smallint
    UserSettings | timezonestandardyear | smallint
    Organization | fiscalyeardisplaycode | tinyint
    Organization | tagmaxaggressivecycles | tinyint
    Organization | trackingtokeniddigits | tinyint
    OrganizationUI | customizationlevel | tinyint
    PluginAssembly | customizationlevel | tinyint
    PluginType | customizationlevel | tinyint
    SavedQuery | customizationlevel | tinyint
    SdkMessage | customizationlevel | tinyint
    SdkMessageFilter | customizationlevel | tinyint
    SdkMessagePair | customizationlevel | tinyint
    SdkMessageProcessingStep | customizationlevel | tinyint
    SdkMessageProcessingStepImage | customizationlevel | tinyint
    SdkMessageProcessingStepSecureConfig | customizationlevel | tinyint
    SdkMessageRequest | customizationlevel | tinyint
    SdkMessageRequestField | customizationlevel | tinyint
    SdkMessageRequestInput | customizationlevel | tinyint
    SdkMessageResponse | customizationlevel | tinyint
    SdkMessageResponseField | customizationlevel | tinyint

    Sunday, 31 August 2008

    Hiding System Views in CRM 4.0

    In CRM 3.0 it was relatively easy to hide one of the built-in system views - all you had to do was share the view with an empty team, which converted the view to a userquery entity. However, CRM 4.0 does not support this.


    There is an alternative route, but it requires plug-in coding. Rather than have to write code for each deployment, I created a standard plug-in that can be used to hide any views, based on some XML configuration. The source code and compiled code are available on the MSDN Code Gallery, along with a sample configuration file.

    Friday, 29 August 2008

    Plug-ins - differences between Target and Image Entity

    In a plug-in there are potentially several ways to access entity data relevant to the plug-in action. For example, on the create message you can access the data on the new entity instance in one of the following ways:
    1. Via the Target InputParameter
    2. Via an Image Entity registered on the step
    3. Via a Retrieve request in the plug-in code

    These do not always work in the same way, as follows:

    Availability of the data by stage

    The general rules are:

    1. InputParameter is available in all stages. It can be modified in the pre-stage, but changing it in the post-stage will have no effect
    2. A PostImage Entity is available in the post-stage, and a PreImage Entity in the pre-stage only
    3. If using a Retrieve in the plug-in, then the data returned depends on the stage. In the pre-stage, you will see the data before the modification, whereas in the post-stage you see the data after the modification
    4. Some Image Entities are not relevant for some messages - e.g. there is no PreImage for a Create message, and no PostImage for a Delete message

    Data in the Name attribute

    If the message is updating CRM (e.g. a Create or Update message) then the InputParameter only contains the minimum information that needs to be saved to CRM. A consequence of this is that the name attribute of any of the following data types is null:

    • Lookup
    • Owner
    • Customer
    • Picklist
    • Boolean

    So, if your code needs to access the name, then you cannot rely on the InputParameter, and have to use either the Image Entity or a Retrieve to get the data.

    My preference is to use an Image Entity, mostly as this reduces the code I have to write. The CRM SDK also suggests that this is more efficient, though I've not done any thorough performance testing to confirm this.
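    As an illustration of the Target/Image difference, here is a sketch of a post-stage Update plugin that needs the name of a customer lookup. The image alias "PostImage" and the customerid attribute are my assumptions - they depend on how the step and image are registered:

    ```csharp
    // Sketch only - assumes a post-stage Update step with a post-image
    // registered under the alias "PostImage"
    public class CustomerNamePlugin : IPlugin
    {
        public void Execute(IPluginExecutionContext context)
        {
            // The Target only holds what is being saved - lookup names will be null
            DynamicEntity target =
                (DynamicEntity)context.InputParameters.Properties["Target"];

            // The registered image holds the full record, including name values
            if (context.PostEntityImages.Properties.Contains("PostImage"))
            {
                DynamicEntity image =
                    (DynamicEntity)context.PostEntityImages.Properties["PostImage"];

                Customer customer = (Customer)image["customerid"]; // assumed attribute
                string customerName = customer.name;               // populated here
            }
        }
    }
    ```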

    Friday, 20 June 2008

    Plugin Parameters

    Although the CRM 4.0 SDK is generally pretty comprehensive, I find it doesn't contain as much information as I'd like about the information passed to plugins for each of the messages.

    The following table lists the main parameters passed to plugins on the most common messages. If the message you want isn't listed here, post a comment and I'll update the table.

    Message | Parameter | Direction | Type | Comments
    Assign | Assignee | Input | SecurityPrincipal
    Assign | Target | Input | Moniker
    CancelSalesOrder | OrderClose | Input | DynamicEntity
    Close | *ActivityClose | Input | DynamicEntity
    Close | Status | Input | Int32
    Create | id | Output | Guid | Only available on the Post Stage
    Create | Target | Input | DynamicEntity
    Delete | Target | Input | Moniker
    Execute | FetchXml | Input | String
    Execute | FetchXmlResult | Output | String
    GrantAccess | PrincipalAccess | Input | PrincipalAccess
    GrantAccess | Target | Input | Moniker
    Handle | SourceQueueId | Input | Guid
    Handle | Target | Input | DynamicEntity
    Lose | *ActivityClose | Input | DynamicEntity
    Lose | Status | Input | Int32
    Retrieve | BusinessEntity | Output | DynamicEntity
    Retrieve | ColumnSet | Input | ColumnSetBase
    Retrieve | Target | Input | Moniker
    RetrieveExchangeRate | ExchangeRate | Output | Decimal
    RetrieveExchangeRate | TransactionCurrencyId | Input | Guid
    RetrieveMultiple | BusinessEntityCollection | Output | BusinessEntityCollection
    RetrieveMultiple | Query | Input | QueryExpression
    RetrieveMultiple | ReturnDynamicEntities | Input | Boolean
    RetrievePrincipalAccess | AccessRights | Output | AccessRights
    RetrievePrincipalAccess | Principal | Input | SecurityPrincipal
    RetrievePrincipalAccess | Target | Input | Moniker
    RevokeAccess | Revokee | Input | PrincipalAccess
    RevokeAccess | Target | Input | Moniker
    Route | EndpointId | Input | Guid
    Route | RouteType | Input | RouteType
    Route | SourceQueueId | Input | Guid
    Route | Target | Input | Moniker
    Send | EmailId | Input | Guid
    Send | IssueSend | Input | Boolean
    Send | Subject | Output | String | This is the subject after the tracking token has been added
    Send | TrackingToken | Input | String
    SetStateDynamicEntity | EntityMoniker | Input | Moniker
    SetStateDynamicEntity | State | Input | String
    SetStateDynamicEntity | Status | Input | Int32
    Update | Target | Input | DynamicEntity | To get the Primary Key, find the KeyProperty within the DynamicEntity
    Win | *ActivityClose | Input | DynamicEntity
    Win | QuoteClose | Input | DynamicEntity
    Win | Status | Input | Int32


    Notes:
    *ActivityClose. For the Win, Lose and Close messages, one of the parameters is an activity type whose name depends on the primary entity - e.g. the Win message could have a QuoteClose or OpportunityClose entity passed to it

    To gather this information I used the plugin tools described on the MSCRM Team blog. The source code for these tools can be found here:
    Bulk Registration Tool
    Plugin Logger

    Other Links:
    Plugin Development
    Plugin Messages