Thursday, 20 December 2007

CRM 4.0 - Installation with no Internet Access

The current set of downloadable files for CRM 4.0 does not include some pre-requisite software (including the .Net 3.0 framework and the Visual C++ redistributable). If this software is not already installed and the setup program can't find it locally, the CRM setup program can download the files from the Internet, which is fine if you have Internet access from the CRM server...

If you don't have Internet access you can pre-install these components, with the exception of the Visual C++ redistributable: it cannot be reliably detected, so the CRM setup program will always try to install it.

Therefore, without Internet access, you have to have a local copy of the Visual C++ redistributable. Philip Richardson's blog describes where to put the files, so I won't repeat it here, but there are a couple of extra things that it might help to know:
  1. CRM setup checks the version of the local copy of the redistributable, and if it's not what it expects, it will again try to connect to the Internet. You can identify this scenario from the setup log; the message is "Error when checking signature", though oddly it's marked as an Info message
  2. CRM setup only checks for local files when it starts. So if, like me, you get to the screen where it expects to download the files, realise it won't be able to, copy the files locally and then go Back and Forward, it won't help. Instead, exit setup completely and start again

As to Internet access from the CRM server, in most circumstances this can be achieved from a VPC image with the use of more than one network adapter. See my earlier post.

CRM 4.0 - Client script changes from CRM 3.0

This post is intended to be a dumping ground that I'll update as and when I, or my colleagues, find issues that may affect the upgrade of CRM 3.0 implementations with client script.

RemoteCommand - removal of some options
In CRM 3.0 it was possible to make unsupported use of the client-side RemoteCommand function to make server-side calls, e.g. the following got the current CRM user id

var cmd = new RemoteCommand("SystemUser", "WhoAmI", "/MSCRMServices/");
var res = cmd.Execute();
if (res.Success)
    return res.ReturnValue.UserId;

What this did was make a request to the SystemUser.asmx (1st parameter) file in the MSCRMServices (3rd parameter) directory. Almost all of these files have been removed in CRM 4.0, so code like that above will fail.

I can't see any obvious equivalent calls to make in CRM 4.0, but as the use of RemoteCommand is unsupported, I wouldn't expect there to be. My preferred option has always been to write a server-side wrapper around calls to the CrmService web service, and to call the wrapper from client script using the XmlHttp ActiveX object, thus keeping within supported functionality.
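To illustrate that approach, here's a minimal sketch of such a wrapper as an ASP .Net web service. The class name, method name and URL are placeholders of my own; it assumes a web reference to the CRM 3.0 CrmService, and ASP .Net impersonation enabled so the call runs as the current CRM user:

using System.Net;
using System.Web.Services;
// also assumes a 'using' for the namespace the CrmService web reference was generated into

public class CrmWrapper : WebService
{
    // Server-side wrapper that client script can call with XmlHttp,
    // instead of using the unsupported RemoteCommand function
    [WebMethod]
    public string GetCurrentUserId()
    {
        CrmService svc = new CrmService();
        svc.Url = "http://crmserver/MSCRMServices/2006/CrmService.asmx"; // placeholder server name
        svc.Credentials = CredentialCache.DefaultCredentials;

        WhoAmIRequest request = new WhoAmIRequest();
        WhoAmIResponse response = (WhoAmIResponse)svc.Execute(request);
        return response.UserId.ToString();
    }
}

The client script then just issues an XmlHttp request to this service and reads the response, which keeps everything within supported functionality.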

CRM 4.0

As the RTM code for CRM 4.0 is now available (http://philiprichardson.org/blog/post/CRM-40-Downloads.aspx), I figure it's time to start recording some of the changes from CRM 3.0, and associated upgrade issues.

As such I'm going to create a few posts that I'll add to as we find out more about CRM 4.0; the first ones will be on client script changes, server code and upgrade issues.

Stored procedures in SSIS; the need for nocount

I've been doing a lot of SSIS work lately, and met an odd problem when using stored procedures in an OLEDB source. The general idea was to have a stored procedure that did some data modification, then finished with a SELECT statement to output logging information.

When previewing the OLEDB source everything was fine, but running the package would give the following error in the pre-execute phase:
'A rowset based on the SQL command was not returned by the OLE DB provider'

It took a while to work out, but eventually the problem came down to the way that SQL Server returns information about the number of rows affected. The resolution is to put the following SQL statement at the start of the stored procedure definition to prevent output of the row counts:

SET NOCOUNT ON

After that it was all fine.

Friday, 14 December 2007

Automating Local Data Query Creation

I've been invited to post on the Microsoft Dynamics CRM Team Blog again. This time I posted about how to automate the creation of the local data groups that specify what data each user of the Laptop client takes offline.

Rather than repeating the text, you can read the article here

I've posted sample code for the article on CodePlex

Monday, 3 December 2007

Providing different default views for different CRM users

Or, how to modify CRM grid pages via client script.

CRM provides 2 standard ways to add custom client script into the CRM application: via form events, and through menu items and buttons in ISV.Config. Neither of these allows you to write code to control the default views for CRM entities.

However, there is a way round this. The approach is to create a host HTML (or ASP .Net) page that has no interface components itself, but contains the CRM page in an IFrame. This will appear exactly the same as the standard CRM page, but allows you to write client script in your host page that controls the CRM page, and hence change which view is displayed.

Note that this approach has to be considered as unsupported, as it involves programmatic control over a picklist outside of a CRM form.

Building the host page
There are 2 aspects to the host page: providing the HTML to host the IFrame without adding extraneous borders or padding, and adding client script that runs once the IFrame has loaded.

The following HTML shows how to host the IFrame:

<body style="margin:0" onload="Init();">

<iframe onreadystatechange="ors();" id="ifr" src="about:blank" width="100%" height="100%" frameborder="0">
</iframe>
</body>

In this example I’m creating a generic page that can host most CRM grid pages, and I’m setting the src property of the IFrame programmatically:

function Init()
{
var etc = getQS('etc');
if (etc != null)
{
document.all.ifr.src = '/_root/homepage.aspx?etc=' + etc;
}
}

This src is the standard way to display most entity grids, with etc as the object type code of the entity.

I’m using the onreadystatechange event to determine whether the IFrame has loaded:

function ors()
{
if (event.srcElement.readyState == 'complete')
{
var sView = getQS('view');
if (sView != null)
SetView(sView);
document.all.ifr.style.visibility = 'visible';
}
else
{
document.all.ifr.style.visibility = 'hidden';
}
}


This checks the readyState property of the IFrame, which will equal ‘complete’ once the IFrame contents have loaded, and again after the contents have been modified.

The following code shows how to modify which view is selected:

function SetView(sView)
{
var ifDoc = document.frames['ifr'].document.all; // access IFrame contents
var oSel = ifDoc['SavedQuerySelector']; // picklist control to select view
if (oSel != null)
{
var v = GetSelectValue(oSel, sView);
if (v)
{
oSel.DefaultValue = v;
oSel.DataValue = v;
oSel.FireOnChange(); // need to fire this event to apply changes
}
}
}
function GetSelectValue(oSel, sText) // helper function to select an item in a picklist
{
for (var i=0;i<oSel.options.length;i++)
{
if (oSel.options[i].text == sText) return oSel.options[i].value;
}
}

The code uses another helper function to access parameters passed on the query string. This function is oversimplified in that it doesn’t cope with all possible encoding issues, but is sufficient for this example:

function getQS(name)
{
var ret = null;
if (window.location.search != null && window.location.search.length > 1)
{
var aQS = window.location.search.substring(1).split('&');
if (aQS != null)
for (var i=0;i<aQS.length;i++)
if (aQS[i].indexOf(name + '=') == 0)
ret = aQS[i].substring(name.length + 1).replace('%20', ' ');
}
return ret;
}

Putting this all together, we have a page that will display an entity grid, and change the default view, based on 2 query string parameters. For example

defaultViewChanger.htm?etc=1&view=Active%20Accounts

will display the account grid, and set the view to the Active Accounts view (%20 is the encoding of a space, which is not permitted in a URL).

Setting different default views for different users
We now have a page that can programmatically change the default view. There are 2 ways this can be used to provide different default views to different users: a programmatic way that identifies the current user (and probably team or role membership) and hence determines the default view, or via permissions in SiteMap.

I’m not intending to cover the programmatic route in detail here; my preference is to convert the page to an ASP .Net page that identifies the current user and their role, and populates the parameter to SetView in server code; an alternative approach in client code can be found here.
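For completeness, here is a rough sketch of that server-side route, using the standard web reference proxy classes. The role name and helper method are purely illustrative, and it assumes an already-configured CrmService instance (svc); the page would call this and build the IFrame src (or the parameter to SetView) from the returned view name:

private string GetDefaultViewForCurrentUser(CrmService svc)
{
    // Who is the current user?
    WhoAmIResponse who = (WhoAmIResponse)svc.Execute(new WhoAmIRequest());

    // Retrieve the names of the security roles the user belongs to
    QueryExpression query = new QueryExpression();
    query.EntityName = "role";
    ColumnSet cols = new ColumnSet();
    cols.Attributes = new string[] { "name" };
    query.ColumnSet = cols;

    LinkEntity link = new LinkEntity();
    link.LinkFromEntityName = "role";
    link.LinkFromAttributeName = "roleid";
    link.LinkToEntityName = "systemuserroles";
    link.LinkToAttributeName = "roleid";

    ConditionExpression cond = new ConditionExpression();
    cond.AttributeName = "systemuserid";
    cond.Operator = ConditionOperator.Equal;
    cond.Values = new object[] { who.UserId };

    FilterExpression filter = new FilterExpression();
    filter.Conditions = new ConditionExpression[] { cond };
    link.LinkCriteria = filter;
    query.LinkEntities = new LinkEntity[] { link };

    BusinessEntityCollection roles = svc.RetrieveMultiple(query);
    foreach (role r in roles.BusinessEntities)
    {
        if (r.name == "roleA")      // illustrative role name
            return "My Active Accounts";
    }
    return "Active Accounts";       // default for everyone else
}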

Another approach is to make use of the Privilege element in SiteMap to display different navigation links to different sets of users based on their permissions, as described in more detail here. Let’s assume we have 2 groups of users who want different default views of the account entity: group A (members of security role ‘roleA’) want to see My Active Accounts, and group B (members of ‘roleB’) want to see Active Accounts.

We can create a dummy entity in CRM called exc_acctsecurity, grant roleA Write rights on the exc_acctsecurity entity, and grant roleB Assign rights. Then we can modify SiteMap, replacing the SubAreas for the account entity with the following:

<SubArea Id="nav_accthostA" Title="Accounts" Url="http://server/defaultViewChanger.htm?etc=1&amp;view=My%20Active%20Accounts" Icon="/_imgs/ico_18_1.gif">
<Privilege Entity="exc_acctsecurity" Privilege="Write" />
</SubArea>
<SubArea Id="nav_accthostB" Title="Accounts" Url="http://server/defaultViewChanger.htm?etc=1&amp;view=Active%20Accounts" Icon="/_imgs/ico_18_1.gif">
<Privilege Entity="exc_acctsecurity" Privilege="Assign" />
</SubArea>

Note the use of XML encoding (&amp; instead of &).

One drawback of this approach is that anybody in the System Administrator role will see both SubAreas.

A complete example of the HTML and SiteMap can be found on CodePlex.

Viewing all files in CRM related to an account

I've recently published a second CRM-related Reporting Services project on CodePlex. This one illustrates how to get a list of all files related to a CRM account. This includes files attached directly to the account or via related records (e.g. contacts, opportunities, cases), and covers both file and email attachments.

Most of the work is done in the SQL, which I've written as a set of SQL views for ease of reading, rather than embedding all the SQL in the report.

The project also illustrates how to link to CRM records from within a report; the report contains reporting services actions to display the CRM entity forms, and also actions on each file to link to the CRM file download dialog.

This report is written so as to take parameters from an ISV.Config button, as described here.

Friday, 23 November 2007

External access from a Virtual Environment

We do almost all our CRM development and testing within virtual images. This confers many benefits, but raises issues if you need to connect the image to the outside world. One example is when registering software; another is when developing callouts / plugins that communicate with external web services.

The major issue arises with a self-contained image that is a domain controller in its own domain, as well as a SQL server and CRM server. Active Directory depends on DNS, so the image has to be its own DNS server, which forces it to have a fixed IP address, which we place on a separate private IP subnet (e.g. 192.168.x.0). This essentially prevents the IP address and adapter from being used externally. With just the one adapter and IP address, you can’t change the IP address to a dynamic one (e.g. from DHCP), because within the image that breaks DNS, which breaks AD, which breaks CRM.

So, the solution: add a separate network adapter to the image, using an external IP address. With Microsoft Virtual PC or Virtual Server, the steps to take are:

  • Turn off the image
  • Add a 2nd network adapter to the image, and associate it with the physical adapter in the host machine
  • Start the image, and set the IP address on the 2nd adapter to something that will work on the external network; either dynamically assigned via DHCP, or to a reserved static address

This resolves the IP addressing issue, and if you only need to connect to an external resource by IP address then that’s as much as you need. However, you may still have a DNS problem. I’ve not investigated all scenarios, but I have met cases where, even if the 2nd adapter is configured with an external DNS server, the image still uses its own DNS server for all DNS requests. In this scenario the simplest solution is to configure a forwarder for the virtual DNS server, so that it forwards all DNS requests that it cannot resolve to your known external DNS server. This can be configured with the DNS management snap-in.

Wednesday, 21 November 2007

Using permissions to personalise SiteMap

CRM allows modification of the CRM application navigation structure via SiteMap. This is normally considered to provide one customised navigation structure for all CRM users, with the only difference between what one user sees and what a colleague sees being what they have permission for.

For most entries in SiteMap, whether they are displayed or not depends on the user’s permission on the corresponding entity. For example, if a user does not have Read privileges at any level on the invoice entity, they will not see invoices in SiteMap. However, SiteMap allows more explicit control over what privileges are required to display any link; this is done via the Privilege element.

The following extract from the default SiteMap illustrates how this is used:

<SubArea Id="nav_managekb" ResourceId="Homepage_KBManager" Icon="/_imgs/ico_18_126.gif" Url="/cs/home_managekb.aspx" Client="Web"> <Privilege Entity="kbarticle" Privilege="Read,Write,Create" /></SubArea>
This specifies that the Knowledge Base link in the Services Area should only be displayed to users who have Read, Write and Create privileges on the kbarticle entity.

Privilege elements are interpreted based on these rules:
  • In one Privilege element, the user must have all privileges listed in the Privilege attribute
  • The level of privilege doesn’t matter; user level is sufficient
  • Multiple Privilege elements can be specified for one SubArea; in this case the user needs to fulfil the permissions for every element

There is no requirement for the Privilege element(s) to be related to the entity (or external link) that is displayed.

Using the Privilege element
Given this, it is perfectly reasonable to create multiple SubArea elements in SiteMap that would show similar (but not identical) information, but only display one of them to any given user, based on their permissions. For example, you may have created custom portal pages, but the customer service staff should see a different page from that seen by marketing staff. Showing a different page for each set of users could be achieved with the following entries in SiteMap:

<SubArea Id="nav_portalService" Title="Company Portal" Url="http://server/supportportal.htm" Icon="/_imgs/ico_18_home.gif"> <Privilege Entity=”incident” Privilege=”Create” /></SubArea><SubArea Id="nav_portalService" Title="Company Portal" Url="http://server/marketingportal.htm" Icon="/_imgs/ico_18_home.gif"> <Privilege Entity=”campaign” Privilege=”Create” /></SubArea>

This works providing that support users can create cases (schema name incident) but not campaigns, while the opposite applies to marketing users.

This approach can work well assuming suitable privileges can be found to distinguish between different groups of users, but what if there aren't any suitable permissions?

Creating dummy entities for permissions
The Privilege element can refer to custom entities, and there is nothing to stop us creating one or more entities solely for the purpose of personalising SiteMap. For example, we could do the following:

  • Create an entity called dummy_security
  • Grant one role, ‘Support Role’, Write privilege on this entity
  • Grant another role, ‘Marketing Role’, Assign privilege on the entity
  • Ensure users are added to the appropriate role
  • Change the above snippet from SiteMap to the following:

<SubArea Id="nav_portalService" Title="Company Portal" Url="http://server/supportportal.htm" Icon="/_imgs/ico_18_home.gif"> <Privilege Entity=”dummy_security” Privilege=”Write” /></SubArea><SubArea Id="nav_portalService" Title="Company Portal" Url="http://server/marketingportal.htm" Icon="/_imgs/ico_18_home.gif"> <Privilege Entity=” dummy_security” Privilege=”Assign” /></SubArea>

This would give users the correct link, without relying on existing privileges that may change based on changing job roles, or changing business requirements.

The one thing to watch for is to make sure you don’t grant Read or Create privilege on this new entity; this will ensure it will not appear to users anywhere in the normal CRM application. Note that members of the System Administrator role will necessarily see all the SubAreas, because they will automatically have full privileges on all entities.

Thursday, 15 November 2007

Redeploying callouts by recycling the application pool, rather than IISRESET

When redeploying a callout on CRM 3.0, it is considered necessary to run IISRESET to unload the callout assembly from memory, and hence allow you to copy a new version of the dll. This can have several issues:
  1. After an IISRESET, it takes long enough to reload / rerender CRM data to be annoying when frequently testing and redeploying versions of a callout assembly
  2. If other users are using CRM, there will be an outage during the IISRESET
  3. IISRESET affects all other web applications on the server

To minimise the impact of 1) above, Michael Höhne's local debugging mechanism is an excellent solution in a lot of environments.

However, there are still occasions when the callout has to be redeployed on to the CRM server. There's not much that can be done about 2), although the following suggestion helps a bit.

We can do something about 3). A callout is loaded into the process for the CRM application pool (CrmAppPool), and it is only this application pool that needs to be recycled, rather than the whole of IIS. An application pool can be recycled via the IIS management console, but there is an easy way to script this, as follows:

  1. Open a command prompt
  2. Change directory to %Windir%\system32
  3. Run the following command:

cscript iisapp.vbs /a CrmAppPool /r

This will just recycle the application pool, after which you can copy the new callout assembly.
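If you'd rather script this from .Net instead (for instance as part of an automated deployment step), a small sketch using System.DirectoryServices against the IIS 6 metabase does the same thing; it assumes the default application pool name of CrmAppPool:

using System;
using System.DirectoryServices;

class RecycleCrmAppPool
{
    static void Main()
    {
        // Bind to the application pool in the IIS metabase and ask it to recycle.
        // Only CrmAppPool is affected; the rest of IIS carries on running.
        using (DirectoryEntry pool = new DirectoryEntry("IIS://localhost/W3SVC/AppPools/CrmAppPool"))
        {
            pool.Invoke("Recycle", null);
        }
    }
}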

Syntax issues using SQL Execute As and Revert statements

SQL Server 2005 provides a means to switch user context via the EXECUTE AS statement. This can be very useful when querying CRM filtered views, as they use the SQL user context to determine what data to return, and if queried using the CRM service account they return no data. An example of how this can be used in CRM callouts can be found here. I've also used a similar approach when running SSIS packages programmatically from a web application, as ASP .Net impersonation doesn't work properly in SSIS packages.


However, when testing this further, and checking the behaviour of REVERT, I found this only works correctly if you separate your SQL statements with semi-colons. Take the following 2 examples:


Execute as user='crmdom\admin'
select * from filteredaccount
revert


declare @sql nvarchar(2000)
Execute as user='crmdom\admin'
set @sql = 'select * from filteredaccount'
exec (@sql)
revert


Of these 2, the former appears to work, but the REVERT statement doesn't get processed, and the latter gives an 'incorrect syntax' error. But, if you add a semi-colon after the statement before REVERT, then they work fine:


Execute as user='crmdom\admin'
select * from filteredaccount;
revert


declare @sql nvarchar(2000)
Execute as user='crmdom\admin'
set @sql = 'select * from filteredaccount'
exec (@sql);
revert



Now, that is not what I expected of SQL syntax. I always thought semi-colons were an optional statement separator, but apparently not - though I expect this is a bug rather than by design.


Note that this behaviour happens on build 2047 of SQL 2005, whereas it works without semi-colons on build 1399; either way, it doesn't stop me using semi-colons as a matter of course with impersonation.

Monday, 12 November 2007

Changes to client-side code for CRM 4.0

Following on from looking at server-side code changes for CRM 4.0, here's what I know so far about client-side code changes. Again, this is based on the CTP3 code so may soon become outdated. Another caveat is that it is based on investigation of the CRM client files, and not on the SDK documentation (which at time of posting is incomplete); as a result, some of what I cover may well be undocumented and hence unsupported.

Calling CRM web services directly
This has never been a favourite approach of mine, as the creation of the SOAP data in client code seems too fragile; I prefer to create server-side web pages or services to abstract the CRM web service. However, I know many people who make direct SOAP requests, and there is another consideration - if customising a deployment on Windows Live you will not be permitted to deploy server-side web code, so you will have to make direct requests.

If you do make direct requests, you will need to pass the new CrmAuthenticationToken within the SOAP header. Rather than generating this yourself, there is a new global method for this:

GenerateAuthenticationHeader();

CRM 4.0 now provides a proper web service interface for offline use. This can be accessed from client-side code via the http://localhost:2525 location.

New Global Variables
CRM 4.0 provides the following documented global variables related to the multi-language and multi-tenant features:
  • USER_LANGUAGE_CODE - Code (LCID) of the current user's language
  • ORG_LANGUAGE_CODE - Code (LCID) of the base language
  • ORG_UNIQUE_NAME - Current organisation name

New Object Properties and Methods
Note, everything in this section may be undocumented and hence unsupported. As you may know, the crmForm and field objects are implemented as HTML components (.htc files) in the /_forms/controls directory. A comparison between the CRM 3.0 and CRM 4.0 versions shows the following changes:

All CRM 3.0 properties and methods seem intact, even the undocumented ones. I've not tested this exhaustively, but all current code works fine.

The crmForm object has the following new members:

  • fireSaveEvent
  • HideField - I hope this will become supported, as it will avoid direct DHTML manipulation
  • VerifyFieldIsSet
  • BypassValidation - a property to tell CRM not to validate fields

The Lookup control has more members, mostly to do with the new auto-resolve functionality:

  • AutoResolve and ResolveEmailAddress boolean properties
  • AreValuesDifferent method allows comparison between the lookup contents and a supplied array of values

The Picklist control exposes the underlying properties of the select HTML element, SelectedIndex and SelectedOption.

Across these, and the undocumented members from CRM 3.0, what I see as most significant is the extent to which they will be supported in CRM 4.0. We shall see.

Friday, 9 November 2007

Changes to code to use CRM 4.0 web services

I've recently been rewriting some of our products for CRM 4.0, and here's a summary of the server-side code changes required when converting code from using the CRM 3.0 web services to the CRM 4.0 web services. Note that this is based on the CTP3 release of CRM 4.0, so may not be accurate for release code.

Use of the CRM 3.0 web service
CRM 4.0 retains the CRM 3.0 web service, along with an implementation of the CRM 3.0 microsoft.crm.platform.callout.base.dll. This allows you, as far as I can tell, to retain code written for CRM 3.0 and have it work with CRM 4.0. So far I've found 2 limitations to this:
  1. If you use the CRM 3.0 Web Service, you will access the default organisation, and will not be able to use multiple organisations. You will also have to use AD authentication
  2. A few of the CRM 3.0 Web Service methods have been deprecated and won't work; the main group of these is the methods used to manage workflow

If you look in the directory structure of CRM 4.0 you'll see the following:

mscrmservices\2006\CrmService.asmx (used for CRM 3.0 web service access)
mscrmservices\2007\CrmService.asmx (used for CRM 4.0 web service access)

However, this post is mostly about the changes you need to make if you are targeting the CRM 4.0 web service.

The CrmDiscoveryService service
The first major difference with the CRM 4.0 web services is that there is a third web service, the CrmDiscoveryService. This is primarily used to enumerate the organisations in use and provide the connection information for them. This service will be necessary in a multi-tenanted environment; whether you need to use it in a single-organisation on-premise implementation is open for debate (and deserves a separate post).

Setting up the CrmService service
The first key difference is the use of the CrmAuthenticationToken property. This became necessary to accommodate multi-tenancy and the new authentication type. It also replaces the CallerIdValue property. With the Enterprise edition of CRM 4.0 you have to set the OrganizationName; I don't know yet if it will be required in the Professional edition.

So, combining the 2 services above, you have 2 ways to set up your CrmService instance. Both assume the variable Org contains the organisation name. First, without using the CrmDiscoveryService:

CrmService svc = new CrmService();
svc.Url = Server + "/MSCRMServices/2007/CrmService.asmx";
svc.Credentials = System.Net.CredentialCache.DefaultCredentials;
CrmAuthenticationToken token = new CrmAuthenticationToken();
token.AuthenticationType = 0;
token.OrganizationName = Org;
svc.CrmAuthenticationTokenValue = token;

Alternatively, you can use the CrmDiscoveryService to retrieve the data required for the CrmService instance:

CrmDiscoveryService disco = new CrmDiscoveryService();
disco.Url = Server + "/MSCRMServices/2007/AD/CrmDiscoveryService.asmx";
disco.Credentials = CredentialCache.DefaultCredentials;
RetrieveOrganizationsRequest orgRequest = new RetrieveOrganizationsRequest();
RetrieveOrganizationsResponse orgResponse = (RetrieveOrganizationsResponse)disco.Execute(orgRequest);
foreach (OrganizationDetail orgdetail in orgResponse.OrganizationDetails)
{
 if (orgdetail.OrganizationName == Org)
 {
  CrmAuthenticationToken token = new CrmAuthenticationToken();
  token.AuthenticationType = 0;
  token.OrganizationName = Org;
  CrmService crmservice = new CrmService();
  crmservice.Credentials = CredentialCache.DefaultCredentials;
  crmservice.CrmAuthenticationTokenValue = token;
  crmservice.Url = orgdetail.CrmServiceUrl;
 }
}

This code uses AD authentication. To use another authentication type, change the url to the CrmDiscoveryService, and change the AuthenticationType property of the CrmAuthenticationToken.
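For what it's worth, the values I've seen for AuthenticationType in the CTP3 bits are 0 for Active Directory, 1 for Passport (Windows Live) and 2 for SPLA / IFD (forms) authentication; treat these, and the URL below, as provisional until the SDK documentation is finalised. A sketch of the SPLA variant:

// Provisional sketch, based on CTP3: SPLA / forms authentication instead of AD
CrmDiscoveryService disco = new CrmDiscoveryService();
disco.Url = Server + "/MSCRMServices/2007/SPLA/CrmDiscoveryService.asmx";

CrmAuthenticationToken token = new CrmAuthenticationToken();
token.AuthenticationType = 2;        // 0 = AD, 1 = Passport, 2 = SPLA / IFD
token.OrganizationName = Org;
// With SPLA the token also needs its CrmTicket property set, using a ticket
// obtained from the discovery service rather than Windows credentials (not shown here)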

Using the CrmService service
The good news is, almost everything stays the same as before, and there is no need to rewrite code, other than to pass in an Organisation name. However, if you serialise and deserialise entities you'll have to pay attention to the Xml namespaces, as some classes retain the CRM 3.0 Xml namespaces, and some have a new Xml namespace. For example, from the generated proxy code:

[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://schemas.microsoft.com/crm/2006/WebServices")]
public partial class DynamicEntity : BusinessEntity

[System.Xml.Serialization.XmlTypeAttribute(Namespace="http://schemas.microsoft.com/crm/2007/WebServices")]
public partial class account : BusinessEntity

Fortunately for me, DynamicEntity is the only class that I regularly serialise, and its namespace hasn't changed.

Using the MetadataService service
Here, your code has to change, as this has moved over to the Request model with an Execute method. So, for example, the following CRM 3.0 code:

EntityMetadata em = mdSvc.RetrieveEntityMetadata(EntityName.account.ToString(), EntityFlags.IncludeAttributes);

will need to be rewritten as:

RetrieveEntityRequest req = new RetrieveEntityRequest();
req.LogicalName = EntityName.account.ToString();
req.RetrieveAsIfPublished = true;
req.EntityItems = EntityItems.IncludeAttributes;
RetrieveEntityResponse resp = (RetrieveEntityResponse) mdSvc.Execute(req);
EntityMetadata em = resp.EntityMetadata;

This is longer, but does illustrate one new feature: the ability to choose between the published and the unpublished information.

Another big difference relates to the way labels are used, which comes from the multi-language features. Again, some CRM 3.0 code:

PicklistAttributeMetadata pam = (PicklistAttributeMetadata) mdSvc.RetrieveAttributeMetadata(EntityName.account.ToString(), "industrycode");
foreach(Option o in pam.Options)
 if (o.OptionValue == 1)
  Console.WriteLine(o.Description);

Which becomes:

RetrieveAttributeRequest req = new RetrieveAttributeRequest();
req.EntityLogicalName = EntityName.account.ToString();
req.LogicalName = "industrycode";
req.RetrieveAsIfPublished = true;
RetrieveAttributeResponse resp = (RetrieveAttributeResponse) mdSvc.Execute(req);
PicklistAttributeMetadata pam = (PicklistAttributeMetadata) resp.AttributeMetadata;
foreach (Option o in pam.Options)
 if (o.Value.Value == 1)
  Console.WriteLine(o.Label.UserLocLabel.Label);

The Option.Value property is now of type CrmNumber, and there is a Label property that allows access to all localised labels.

The MetadataService now also supports modification of the metadata, but as that's new functionality I'm not going to cover it here.

Using Microsoft assemblies
With CRM 3.0, the only Microsoft CRM .Net assembly you were permitted to reference was microsoft.crm.platform.callout.base.dll. With CRM 4.0, there are 2 assemblies you can reference:

  • Microsoft.Crm.Sdk.dll
  • Microsoft.Crm.SdkTypeProxy.dll

At the moment I've not investigated them fully, as they are not well documented, and are subject to change prior to release. The main new feature though is the implementation of a PropertyBag, a collection class that allows, among other things, easy manipulation of the properties of a DynamicEntity:

Microsoft.Crm.Sdk.DynamicEntity de = new Microsoft.Crm.Sdk.DynamicEntity();
de.Name = EntityName.account.ToString();
de.Properties.Add(new StringProperty("name", "My Co"));
if (de.Properties.Contains("name"))
 Console.WriteLine(de["name"].Value);

This uses a bit of helper code from the Sdk for a partial class for StringProperty with a constructor that takes 2 parameters.
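For reference, the helper amounts to something like the following sketch. It assumes StringProperty is a proxy class generated into your own assembly (a web reference), since a partial class can only extend a type defined in the same assembly:

// Partial class extending the generated StringProperty proxy with a
// convenience constructor, so a property can be created in one line
public partial class StringProperty
{
    // keep a parameterless constructor so XML serialization still works
    public StringProperty()
    {
    }

    public StringProperty(string name, string value)
    {
        this.Name = name;
        this.Value = value;
    }
}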

Other changes
One minor annoyance is that the CrmService and MetadataService proxies both contain classes with the same name, e.g. AttributeType. This means you cannot be as indiscriminate with your use of using or Imports statements as I like to be.
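The usual workaround is a namespace alias rather than importing both namespaces wholesale; a quick sketch (the web reference namespaces here are made up - use whatever yours were generated into):

// Alias the two generated proxy namespaces so that duplicated type names
// such as AttributeType can be disambiguated without fully qualifying everything
using CrmSdk = MyApp.CrmServiceProxy;
using MetaSdk = MyApp.MetadataServiceProxy;

class Example
{
    // Both proxies define a type called AttributeType; the aliases keep
    // references short without an ambiguous using of both namespaces
    private CrmSdk.AttributeType fromCrmService;
    private MetaSdk.AttributeType fromMetadataService;
}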

Retrieving Billable time and activity time for cases

When a case is resolved in CRM, the user can specify the amount of Billable Time spent on the case, and they can also see the total activity time spent on the case.


Both sets of information can be useful, and are necessarily stored in the CRM database. However, how you retrieve these figures is not well documented.

Where the data is stored
Data about the case is stored in 3 main entities (note that I'm using the schema name for the entities, and in the schema a case is called an incident):

incident: This is the main case entity, which stores the information you see on the main form, but does not include any of the duration information listed above. The statecode field identifies whether a case is open, resolved or cancelled.

incidentresolution: This is strictly a special type of activity, and stores the user entered data in the Case Resolution dialog. The 'Resolution Type' is stored in the statuscode field, and the user-entered 'Billable Time' is stored in the timespent field.

activitypointer: All activities related to a case have an activitypointer record. The duration of each closed activity is stored in the actualdurationminutes field.

There are a couple of additional complexities to the data storage:
  1. It is possible for a case to have been resolved, then reactivated, and resolved again. This results in 2 incidentresolution records, so you need to identify the correct one. I always go by the most recent
  2. The incidentresolution entity is a type of activity, hence has an associated activitypointer record. Fortunately, this record has no value for the actualdurationminutes, so doesn't affect any calculations of total activity time. However, you do have to be aware of this if you want a count of the number of activities

Retrieving the data
Case data is normally retrieved via reports, and hence SQL queries. The following query shows how to extract the correct Billable Time and Total Activity Time:

select i.incidentid, i.title, ir.actualend
, max(timespent) as BillableTime
, isnull(sum(a.actualdurationminutes), 0) as TotalActivityTime
from filteredactivitypointer a
join filteredincident i
on a.regardingobjectid = i.incidentid
left outer join filteredincidentresolution ir
on i.incidentid = ir.incidentid and ir.statecode = 1 -- Resolved incident resolutions
and ir.actualend = (select max(actualend) from filteredincidentresolution ir2 where ir2.incidentid = ir.incidentid and ir2.statecode = 1) -- Most recent ir activity
where i.statecode = 1 -- Resolved cases
group by i.incidentid, i.title, ir.actualend

A few comments on the query:

  1. I'm only returning resolved cases in this example, but I've used an outer join to incident resolution to make it easy to change the i.statecode filter to include unresolved cases
  2. I'm using isnull on the actualdurationminutes field to return 0 if there is no activity time
  3. If you add any other fields from the case, or related entities (e.g. account), you'll need to add them to the group by clause as well as the select clause, or join to them in a separate query

Displaying the data in CRM
As the billable time and activity time are not stored in the incident entity, they cannot be displayed directly on the case form or in any views. To display this information on the case form, my preferred approach is to create a report using the above SQL query, and display it in an IFrame on the case form, using the techniques described in Writing Reports to display in IFrames.

In CRM 3.0 there is no useful way to display this data in a view. In CRM 4.0 this should change, though it may not be as easy as you'd like; CRM 4.0 allows display of related data in a view, but it looks like this is only data from parent entities (e.g. the account to which a case belongs), not the summarised data from child entities that we're discussing here. However, the new PlugIn model allows intercepting of the view data, and should permit such display of child data with a bit of code. This is something I'll be posting on in the future, but it will have to wait till the CRM 4.0 SDK documentation is finalised.


Thursday, 8 November 2007

Workflow problems on CRM 3.0 with .Net framework 2.0

If you have .Net framework 2.0 or higher installed on a CRM 3.0 server, it can adversely affect the Workflow engine. This can give one or more of the following problems:
  1. Workflow rules do not run (errors can be found in the event log, pointing you at workflow monitor, but nothing useful can be found in workflow monitor)
  2. Bulk import operations never go beyond pending, even though the workflow service is running
  3. When starting Workflow Manager on the CRM server you get an error message 'Workflow Manager cannot start...configuration file is corrupted'
  4. You cannot attach to the CrmWorkflowService to debug workflow assemblies with Visual Studio .Net 2003

This can be because the Crm Workflow Service is running with the 'wrong' version of the .Net Framework. To correct this, create a text file with the following (case-sensitive) text:


<configuration>
<startup>
<supportedRuntime version="v1.1.4322"/>
</startup>
</configuration>

Save the file as CrmWorkflowService.exe.config in the \Microsoft CRM\Server\bin directory. Then restart the CRM Workflow Service.

Using reports in IFrames and from ISV.Config

This blog is not the first one I've posted on; I've previously been invited to post on the Microsoft CRM Team blog, where I posted the article Writing Reports to display in IFrames and from ISV.Config.

It seems unnecessary to repeat the post here, so I won't. However I will soon be posting some additional reports that build on this technique.

It has to start somewhere - an introduction

It seems rude not to start with an introduction, so here goes. I've been working with MS CRM as my main technical focus for over 3 years, and became a Microsoft CRM MVP in April 2007. My main online involvement with the 'CRM community' thus far has been in the CRM newsgroups and forums, mostly helping with individual issues. This is the next step...

There are several reasons for starting this blog; the main one is to give me more space to cover fuller solutions and ideas than the forums allow. These will be biased towards MS CRM customisation and development, but will doubtless stray into related technologies - SQL, Reporting Services, SharePoint, SSIS and .Net development - as well as infrastructure issues as I find them.