Friday, 5 April 2019

Using the Lookup function to create reports on multiple DataSets

With Dynamics 365 Online, the thing I miss most compared to OnPremise is the relative limitation on what you can do with reports, as FetchXml is not as powerful as SQL. One option is to use the Data Export Service to get the data into a SQL database and run SQL queries against that, but there is still a lot that can be done with FetchXml and Reporting Services. This post shows how to combine data from multiple datasets using the Lookup function.

My first rule of thumb when working out whether a given SQL query can be implemented as a FetchXml query is: 'can the SQL query be written so that there is just one SELECT statement ?' If so, you've got a good chance of being able to rewrite it as FetchXml, but if not, you won't be able to do this in FetchXml. This test is useful, as it immediately eliminates union queries, sub-queries and table expressions, which you can't do with FetchXml.

So, in many cases you can't get the result that you want with one query, but Reporting Services allows you to define multiple datasets, and hence multiple queries, in one report, and the Lookup function allows you to connect the data across the datasets.

For this post, I'll use an example I came across recently when the requirement was to get a count of records created per user, broken down by entity type. The simplified output I was looking for was:

User              Leads    Opportunities
David Jennaway    20       10
My Friend         15       8

This will need to get data from the systemuser, lead and opportunity entities. It is possible to join these in one query, but not in a way that is useful, as you end up multiplying the opportunity and lead records.

Instead, we can create separate queries. Here I'm doing one for each entity, systemuser, lead and opportunity. The systemuser query will end up as the main source for the table, with lookups to the other queries to get the respective record counts. The 3 datasets and queries are:

dsUser:
<fetch>
    <entity name='systemuser' >
       <attribute name='systemuserid' />
       <attribute name='fullname' />
    </entity>
</fetch>

dsLead:
<fetch aggregate='true'>
    <entity name='lead' >
       <attribute name='createdby' groupby='true' alias='createdby' />
       <attribute name='leadid' alias='lead_count' aggregate='countcolumn' />
    </entity>
</fetch>

dsOpportunity:
<fetch aggregate='true'>
    <entity name='opportunity' >
       <attribute name='createdby' groupby='true' alias='createdby' />
       <attribute name='opportunityid' alias='opportunity_count' aggregate='countcolumn' />
    </entity>
</fetch>

dsLead and dsOpportunity are both simple aggregate queries to get the respective record counts for each entity by user.

To create the report, I add a table based on the dsUser dataset, with the Fullname in the first column. Then, for the count of leads, I can use the following Lookup expression:

=Lookup(Fields!systemuserid.Value, Fields!createdbyValue.Value, Fields!lead_count.Value, "dsLead")

Taking each of the parameters in turn:
  • Fields!systemuserid.Value - this is the Guid for the systemuserid in the dsUser dataset. This value will be compared against...
  • Fields!createdbyValue.Value - this is the Guid of the createdby in the dsLead dataset. Note that I use createdbyValue to get the Guid, as for lookup attributes the createdby field itself contains the name
  • Fields!lead_count.Value - this is the field in the dsLead dataset that I want to display
  • "dsLead" - this is the name of the dataset that the Lookup works on
We can then do the same for the expression for the opportunity count:

=Lookup(Fields!systemuserid.Value, Fields!createdbyValue.Value, Fields!opportunity_count.Value, "dsOpportunity")

And that's it to get the basic report. As a nicety, I can add a filter for the row visibility, so that it hides rows where there is no count across any of the datasets. The Lookup function returns Nothing if no record is found, so we can use the IsNothing function.

=IsNothing(Lookup(Fields!systemuserid.Value, Fields!createdbyValue.Value, Fields!opportunity_count.Value, "dsOpportunity")) AndAlso IsNothing(Lookup(Fields!systemuserid.Value, Fields!createdbyValue.Value, Fields!lead_count.Value, "dsLead"))

We can keep adding extra datasets to count other entities, using the same approach. I don't have the patience to work out if there's a practical limit to the number of datasets we can use in one report. 

A couple of points to note:
  • You can't use the Lookup function in a calculated field, which is slightly annoying, as I think it would be neater if this were possible. I expect this is due to how Reporting Services first processes the datasets, and then renders the results
  • When testing in Visual Studio, you get prompted for credentials (or to use cached credentials) for each dataset in turn. I don't think you can do anything about this. Interestingly, it looks like Visual Studio caches the credentials per dataset, and they can differ even if they use the same datasource. I once managed to have different datasets querying different CRM organisations, even though they were using the same datasource
I'm intending to post the full report up on GitHub in the next few days, once I've got that working properly





Sunday, 24 March 2019

Solution Layers

In Dynamics 365 Online, you may have recently noticed a new button within an unmanaged solution, 'Solution Layers'. I first saw this appear around 17th March 2019, and it's a welcome tool to help understand more about what happens with multiple solutions in a system.



I've been playing around with this a bit in the last week, and there are a few key concepts to understand from the beginning.
  • The button is currently only visible within an unmanaged solution or the default solution. This seems to be because managed solutions are not directly editable, though I think this is a bit of an oversight, because...
  • When you click the button for a solution component, it will display any managed solutions that contain that component, along with an 'Active' solution. From what I can tell so far, I'm treating the 'Active' solution as being the same as the default solution, but there may be subtle differences
  • It makes sense that this only displays managed solutions, as these are the only solutions that are individually layered. In contrast, unmanaged solutions are all combined into the one unmanaged layer
So, it's a little confusing to start with, in that you can only access it from the unmanaged layer, but it is displaying information about the managed layers. But, leaving that aside, what does it tell us ? It provides information at 2 levels; first the list of layers, then the detailed properties that were set within each layer.
The Layers
When you click on the 'Solution Layers' button, it will list all the managed solutions that contain this component, and the order in which they apply. 

Order no. 1 is the first solution that brought this component into this organisation, with the remaining solutions in incremental order. The order listed is the order in which any changes are applied; I think this would normally be the order in which the solutions were installed, though I suspect that Microsoft may change the order of some of their solutions, irrespective of the installation sequence. So, we start with order = 1, then any changes from each other solution are applied in turn, so that a change from a higher-ordered solution overrides a change to the same property in a lower-ordered solution.

So, the information we get here is what solutions contain a component, and the order of the solutions. The interesting thing here is that some layers appear above the Active layer; from what I can tell so far, only solutions from Microsoft appear above the Active layer.
Properties within a Layer
If you click on a layer, it will then show the component properties within this layer. The 'Changed Properties' tab shows the component properties within this layer, and what value they were set to in this layer.

So, in this case we see that the msdynce_SalesPatch changes 3 properties of the Opportunity entity, for example the isauditenabled property. This indicates that the 'Include Entity Metadata' option had been selected when the entity had been added to the solution in the source system, which is one of the useful pieces of information that we can now get from the Solution Layer.

The 'All Properties' tab shows all the effective properties at this layer - i.e. taking all properties from layers at a lower order number, and applying the properties from this layer, but these can be overridden by layers with a higher number.


Note that, in the example above for the Opportunity entity, several solutions were listed, but many don't have any changed properties. I think this is mostly because an entity will be included in a solution because one of its subcomponents (e.g. a view) has changed. To see this, you'd have to open Solution Layers on the subcomponent.

Different component types have different properties. Unfortunately the information given so far is, unsurprisingly, only the whole property. So, for example for a form, we just see the formxml and formjson, and this doesn't give us a representation of how forms are merged across solutions. However, I'm intending to dig further into whether the 'All Properties' tab can give an indication of how the formxml changes through the layers - if I find anything interesting then that could be another blog post

Thursday, 15 November 2018

Error importing solutions - "The 'options' attribute is invalid"

I hope that this post will be short-lived, and that few people need it, but I had an issue today in Dynamics 365 v9.1 (version 1710 (9.1.0.638) for completeness), where a solution file failed to import. The error occurred on initial parsing of the solution file. The error was 'This solution package cannot be imported because it contains invalid XML', and the technical details were:

Schema Validation Failed

Schema validation of the customizations.xml file within the compressed solution package file failed. To manually validate and edit the file, you can download the schema file here and use an XML editor that supports schema validation to get more details.

The 'options' attribute is invalid - The value '' is invalid according to its datatype 'String' - The Pattern constraint failed

This was followed by a snippet of a view XML definition, which gave a hint to the problem. It looks like Dynamics 365 has a new attribute, 'options', within the fetchXml schema. This is currently set to an empty string, but the solution importer fails to recognise it, hence the error. The affected part of the xml is:

            <fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="true" options="" >

Fortunately, there's a relatively simple workaround, which is to remove this attribute from the solution file.
  1. Extract the solution xml
  2. Open customizations.xml
  3. Do a Find & Replace to remove all the following text: options=""
  4. Then recreate the .zip and it should import
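If you need to do this more than once, the manual steps above can be scripted. Here's a minimal sketch in C# (untested against a real solution file, and the file names are illustrative) that copies the solution zip, strips the empty options attribute from customizations.xml, and writes out a fixed copy:

using System.IO;
using System.IO.Compression;

class FixSolutionOptions
{
    static void Main(string[] args)
    {
        string sourceZip = args[0];                                   // e.g. MySolution_1_0_0_0.zip - illustrative
        string targetZip = sourceZip.Replace(".zip", "_fixed.zip");   // write to a copy, leaving the original alone

        File.Copy(sourceZip, targetZip, overwrite: true);

        using (var archive = ZipFile.Open(targetZip, ZipArchiveMode.Update))
        {
            var entry = archive.GetEntry("customizations.xml");
            string xml;
            using (var reader = new StreamReader(entry.Open()))
            {
                xml = reader.ReadToEnd();
            }

            // Step 3: remove the empty options attribute that the importer rejects
            xml = xml.Replace(" options=\"\"", "");

            // Replace the entry with the cleaned-up xml
            entry.Delete();
            var newEntry = archive.CreateEntry("customizations.xml");
            using (var writer = new StreamWriter(newEntry.Open()))
            {
                writer.Write(xml);
            }
        }
    }
}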
This affects any solution that contains a view (I haven't tested to see if it also applies to charts or reports), and isn't due to a version mismatch between organisations, as I could replicate it by exporting and importing into the same organisation

Friday, 27 April 2018

Plugins in the sandbox, and why you don't get System.Security.Permissions.SecurityPermission

A relatively common error with plugins is "Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed". This is the general message that you get when you have a plugin registered in the sandbox, and it is trying to do something that is not permitted. You may get variations on this error, depending on exactly what the code is trying to do, for example:
  • FileIOPermission - access the file system
  • InteropPermission - most likely using an external assembly (either directly, or one that has been ILMerged into your plugin assembly)
  • System.Net.WebPermission - some form of network access, e.g. trying to access a network resource by IP address (the sandbox only allows access by DNS name)
  • SqlClientPermission - accessing SQL Server
The list can go on, and on. Rather than trying to list everything you can't do, it's a lot simpler to list what you can, which is broadly:
  • Execute code that doesn't try to access any local resources (file system, event log, threading etc)
  • Call the CRM IOrganizationService using the context passed to the plugin
  • Access remote web resources as long as you:
    • Use http or https
    • Use a domain name, not an IP address
    • Do not use the .Net classes for authentication
All of which is pretty restrictive, but is understandable given the sandbox is designed to protect the CRM server. To me, the most annoying one is the last, which makes it pretty much impossible to call other Microsoft web services directly, such as SharePoint or Reporting Services.
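For illustration, here's a minimal sketch of the sort of web call a sandboxed plugin can still make under the rules above: https, a DNS name, and any authentication passed in the request itself rather than via the .Net credential classes. The endpoint URL and header are purely illustrative:

using System;
using System.Net;
using Microsoft.Xrm.Sdk;

public class CallExternalServicePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        using (var client = new WebClient())
        {
            // https and a DNS name, so this is permitted in the sandbox.
            // Any authentication has to go in the request itself (e.g. an API key header),
            // rather than using the .Net credential classes.
            client.Headers["x-api-key"] = "my-key";                                         // illustrative header
            string response = client.DownloadString("https://example.com/api/status");     // illustrative URL
            tracing.Trace("Response length: {0}", response.Length);
        }
    }
}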

So, what to do about it ? If you have CRM OnPremise, the simple and only solution is to register the assembly outside the sandbox, so that it can run in FullTrust - i.e. do whatever it wants (though still subject to the permissions of the CRM service account or asynchronous service account that it runs under).

And if you've got CRM Online, then the normal solution is to offload the processing to an environment that you have more control over. The most common option is to offload the processing to Azure, using the Azure Service Bus or Azure Event Hub. The alternative, new to CRM 9, is to send the data to a WebHook, which can be hosted wherever you like.

Saturday, 31 March 2018

What's in a name - CRM, Dynamics 365, CDS

Now that I've restarted posting on this blog, I'm struggling to name the technologies consistently. It used to just be CRM (or Microsoft CRM, or Dynamics CRM, or Microsoft Dynamics CRM), but now it's Dynamics 365, or Dynamics 365 for Customer Engagement. And from the platform perspective, it's Common Data Services (CDS).
To an extent, we're necessarily at the whim of Microsoft branding, which can change, but I feel we're close to an overall set of terms that can be consistently applied. As I see it, there are 3 distinct things that can be named:

The overall suite of technologies
This has been Dynamics, Dynamics 365, or Microsoft Business Applications. Of these, Dynamics 365 is definitely the leader, though there has been recent use of Microsoft Business Applications, so we may find this becomes more popular. To me, the main difference is that Microsoft Business Applications can include technologies such as PowerApps and Flow, which started out under the Office 365 brand


The applications that Microsoft deliver
We started with the separate Dynamics products (CRM, AX, GP, NAV etc). Several (but not all) were then included within Dynamics 365, along with some new applications (e.g. Talent). From the original CRM application and implementations, we can refer to the individual Applications, which are Sales, Customer Service, Marketing, Field Service and Project Service Automation. Here the roadmap is a useful reference. These applications can be usefully referred to individually, but we need to be able to refer to them collectively, and distinguish them from the other Dynamics 365 applications (Finance and Operations, Retail, Talent, Business Central) that are not based on the CRM technology. Rather than using the term 'CRM', Microsoft are pushing the term 'Microsoft Dynamics 365 Customer Engagement'. I do mostly understand the Microsoft approach, but it is a lot longer than 'CRM', so I'm going to struggle to move off CRM. For more on this, see Jukka's post

The platform - i.e. what underpins the applications
'Platform' itself can mean different things to different people, which we won't resolve here, but I'm talking about the technologies that started in CRM, and not just the Azure platform. Here we started with CRM, then the term xRM was introduced, but now (as of March 2018) I think that we should be referring to CDS (Common Data Services). The fact that Common Data Services for Applications and CRM are now the same platform is a huge step, and from now on, I think the platform that started out as CRM is better termed CDS. There are a few details to sort out still; there are 'Common Data Services for Applications' and 'Common Data Services for Analytics', and I reckon only the former truly relates to the original CRM platform, but I'm not certain on that yet

Overall, I think the picture will soon be reasonably clear, with a few caveats. For the foreseeable future, I expect I'll still preface most presentations by saying that I'll use the terms 'CRM' and 'Dynamics 365' interchangeably, unless there is a reason to differentiate between them, in which case I'll try and explain the difference. Similarly, I'll probably be using 'CRM' and 'xRM' and 'CDS' interchangeably for a while

Common Data Services Architecture in CDS 2.0

I struggled to think of a good title for this post, and I hope to change it to something more inspirational, as this is a very significant topic.
Microsoft have made several recent announcements in March 2018, but for me the most significant is the PowerApps Spring Update. This may seem strange for me, a CRM MVP, to say, given how much there was on CRM in the Business Applications Spring ’18 Release Notes, but I think it makes sense once you realise that the PowerApps Update describes the new and future Common Data Services (CDS) architecture, and that in this architecture, much of CDS is the CRM platform (aka xRM).
Rather than CDS being a separate layer or component that then communicates with the CRM platform, CDS and CRM are a shared platform.
Strictly, it's not quite as simple as the last sentence makes out, especially as CDS now splits into Common Data Service for Applications and Common Data Service for Analytics (I'm hoping we'll soon get good acronyms to distinguish these), but for now it's worth emphasising that, if using Common Data Service for Applications, you are directly using the same platform components that CRM uses. This has several major implications (all of which are good to my mind):

  1. CDS for Apps can fully use the CRM platform features, such as workflow, business process flows, calculated fields. This immediately makes CDS a hugely powerful platform, but also means there are no decisions to take on which platform to use, or differences to take into account, because they are the same platform
  2. There are no extra integration steps. Commissioning a CDS environment will give you a CRM organisation, and equally, commissioning a CRM organisation will give you a CDS environment. This is not a duplication of data or platforms, because again, they are the same platform
There's a lot to play with, and explore, but for now this seems a major step forward for the platform, and I feel I'll be writing a lot more about CDS (though I'm still not sure when I'll stop referring to CRM when describing the platform).
The one area that still needs to be confirmed, and which could have a major impact on adoption, is licensing, but I hope we'll get clarity on this soon.

Thursday, 29 March 2018

Concurrent or Consistent - or both

A lesser-known feature that CRM 2016 brought to us is support for optimistic concurrency in the web service API. This may not be as exciting as some features, but as it's something I find exciting, I thought I'd write about it.

Am I an optimist
So, what is it about ?  Concurrency control is used to ensure data remains consistent when multiple users are making concurrent modifications to the same data. The two main models are pessimistic concurrency and optimistic concurrency. The difference between the 2 can be illustrated by considering two users (Albert and Brenda) who are trying to update the same field (X) on the same record (Y). In each case the update is actually 2 steps (reading the existing record, then updating it), and Albert and Brenda do the steps in the following time sequence:
  1. Albert reads X from record Y (let's say the value is 30)
  2. Brenda reads record Y (while it's still 30)
  3. Albert updates record Y (Albert wants to add 20, so he updates X to 50)
  4. Brenda updates record Y (she wants to subtract 10, so subtracts 10 from the value (30) she read in step 2, so she updates X to 20) 
If we had no concurrency control, we would have started with 30, added 20, subtracted 10, and found that apparently 30 + 20 - 10 = 20. Arguably we have a concurrency model, which is called 'chaos', because we end up with inconsistent data.
To avoid chaos, we can use pessimistic concurrency control. With this, the sequence is:
  1. Albert reads X from record Y (when the value is 30), and the system locks record Y
  2. Brenda tries to read record Y, but Albert's lock blocks her read, so she sits waiting for a response
  3. Albert adds 20 to his value (30), and updates X to 50, then the system releases the lock on Y
  4. Brenda now gets her response, which is that X is now 50
  5. Brenda subtracts 10 from her value (50), and updates X to 40
So, 30 + 20 - 10 = 40, and we have consistent data. So we're all happy now, and I can finish this post.
Or maybe not. Brenda had to wait between steps 2 and 4. Maybe Albert is quick, but then again, maybe he isn't, or he's been distracted, or gone for a coffee. For this to be robust, locks would have to be placed whenever a record is read, and only released when the system knows that Albert is not still about to come back from his extended coffee break. In low-latency client-server systems this can be managed reasonably well (and we can use different locks to distinguish between 'I'm just reading' and 'I'm reading and intending to update'), but with a web front-end like CRM, we have no such control. We've gained consistency, but at a huge cost in concurrency. This is pessimistic concurrency.
Now for optimistic concurrency, which goes like this:
  1. Albert reads X from record Y (when the value is 30), and also reads a system-generated record version number (let's say it's version 1)
  2. Brenda reads record Y (while it's still 30), and the system-generated record version number (which is still version 1, as the record's not changed yet)
  3. Albert adds 20 to his value (30), and updates X to 50. The update is only permitted because Albert's version number (1) matches the current version number (1). The system updates the version number to 2
  4. Brenda subtracts 10 from her value (30), and tries to update X to 20. This update is not permitted, as Brenda's version number (1) does not match the current version number (2). So, Brenda will get an error
  5. Brenda now tries again, reading the current value (50) and version number (2), then subtracting 10 and updating X to 40, and this time the update is allowed
The concurrency gain is that Albert, Brenda and the rest of the alphabetical users can read and update with no blocks, except when there is a conflict. The drawback is that the system will need to do something (even if it is just giving an error message) when there is a conflict.
What are the options
Given this post is about a feature that was introduced in CRM 2016, what do you think happened before (and still happens now, because you have to explicitly use optimistic concurrency) ? If it's not optimistic concurrency, then it's either pessimistic or chaos. And it's not pessimistic locking, as if Microsoft defaulted to this, then CRM would grind to a locked halt if users often tried to concurrently access records.

Maybe I want to be a pessimist
As chaos sounds bad, maybe you don't believe that CRM would grind to a locked halt, or you're happy that users don't need concurrent access, or you've been asked to prevent concurrent access to records (see note 1). So, can we apply pessimistic locking ? The short answer is 'no', and most longer answers also end up at 'no'. Microsoft give us almost no control over locking within CRM (see note 2 for completeness), and definitely no means to hold locks beyond any one call. If you want to prolong the answer as much as you can, you might conceive a mechanism whereby users only get user-level update access to records, and have to assign a record to themselves before they can update it, but this doesn't actually work either, as a user may still be making the update based on a value they read earlier. And you can't make it user-level read access, as the user then wouldn't be able to see a record owned by someone else in order to assign it to themselves.

OK, I'll be an optimist
So, how do we use optimistic concurrency ? First of all, not every entity is enabled for optimistic concurrency, but most are. This is controlled by the IsOptimisticConcurrencyEnabled property of the entity, and by default it is true for all out-of-box entities enabled for offline sync, and for all custom entities. You can check this property by querying the entity metadata (but not in the EntityMetadata.xlsx document in the SDK, despite the SDK documentation)

Then, to use optimistic concurrency you need to do at least 2 things, and preferably 3:
  1. In the Entity instance that you are sending to the Update, ensure the RowVersion property is set to the RowVersion that you received when you read this record
  2. In the UpdateRequest, set the ConcurrencyBehavior to IfRowVersionMatches
  3. Handle any exceptions. If there is a row version conflict (as per my optimistic scenario above), then you get a ConcurrencyVersionMismatch exception
For a code example, see the SDK.
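In condensed form, those 3 steps might look something like this - a minimal sketch, assuming an IOrganizationService instance and the usual SDK usings (Microsoft.Xrm.Sdk, Microsoft.Xrm.Sdk.Messages, Microsoft.Xrm.Sdk.Query, System.ServiceModel); the entity and attribute are just for illustration:

static void UpdateWithOptimisticConcurrency(IOrganizationService service, Guid accountId)
{
    // Read the record - the RowVersion is returned along with the requested columns
    Entity account = service.Retrieve("account", accountId, new ColumnSet("creditlimit"));

    var update = new Entity("account")
    {
        Id = account.Id,
        RowVersion = account.RowVersion        // step 1: send back the row version we read
    };
    update["creditlimit"] = new Money(50000);

    var request = new UpdateRequest
    {
        Target = update,
        ConcurrencyBehavior = ConcurrencyBehavior.IfRowVersionMatches   // step 2
    };

    try
    {
        service.Execute(request);
    }
    catch (FaultException<OrganizationServiceFault>)                     // step 3
    {
        // A ConcurrencyVersionMismatch fault means the record changed since we read it;
        // re-read the record and retry, or surface the conflict to the user
        throw;
    }
}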
I've described this for an Update request; you can also use it for a Delete request, and I hope you'll understand why it doesn't apply to a Create request.

One word of warning; I believe that some entities fail when using optimistic concurrency - this seems to be the entities that are metadata-related (e.g. webresource or savedquery). I suspect this is because the metadata-related internals use different internal (SQL-level) concurrency handling from most other entities.

How much does it matter
I've left this till last, otherwise you may not have read the rest of the post, as it often doesn't matter. Consistency issues are most relevant if there's a long time between a read and the corresponding update. The classic example is offline usage (hence why it's enabled for out-of-box entities enabled for offline sync). I also see it as relevant for some bulk operations; for example we do a lot of bulk operations with SSIS, and for performance reasons, there's often a noticeable time gap between reads and writes in an SSIS data flow.

Notes

  1. During CRM implementations, if asked 'Can we do X in CRM ?', I very rarely just say no, and when I do say no it's more likely to be for reasons other than purely technical ones. However, when I've been asked to prevent concurrent access to records, this is a rare case where I go for the short answer of 'no'
  2. We can get a little bit of control over locking within a synchronous plugin, as this runs within the CRM transaction. This is the basis of the most robust CRM-based autonumber implementations. However, the lock can't be held outside of the platform operation
  3. My examples have concentrated on updating a single field, but any talk of locking or row versions is at a record level. If Albert and Brenda were changing different fields, then we may not have a consistency issue to address. However, for practical reasons, any system applies locks and row versioning at a record, and not field, level. Also, even if the updates are to different fields, it is possible that the change they make is dependent on other fields that may have changed, so with optimistic concurrency we do get a ConcurrencyVersionMismatch if any fields have changed


Friday, 27 June 2014

Plugin pre-stages - some subtleties

The CRM SDK describes the main differences in plugin stages here. However, there are some additional differences between the pre-validation and pre-operation stages that are not documented.

Compound Operations
The CRM SDK includes some compound operations that affect more than one entity. One example is the QualifyLead message, which can update (or create) the lead, contact, account and opportunity entities. With compound operations, the pre-validation event fires only once, on the original message (QualifyLead in this case), whereas the pre-operation event fires for each operation.
You do not get the pre-validation event for the individual operations. A key consequence of this is that if, for example, you register a plugin on pre-validation of Create for the account entity, it will not fire if an account is created via QualifyLead. However, a plugin on the pre-operation of Create for the account entity will fire if an account is created via QualifyLead.

Activities and Activity Parties
I've posted about this before, however it's worth including it in this context. When you create an activity, there will be an operation for the main activity entity, and separate operations to create activityparty records for any attribute of type partylist (e.g. the sender or recipient). The data for the activityparty appears to be evaluated within the overall validation - i.e. before the pre-operation stage. The key consequence is that any changes made to the Target InputParameter that would affect an activityparty will only be picked up if made in the pre-validation stage for the activity entity.
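As an illustration, here's a minimal sketch of a plugin that adds an extra recipient to an email by modifying the 'to' partylist in the Target; for the change to be picked up it would need to be registered on pre-validation of Create for the email entity, rather than pre-operation. The contact id is purely illustrative:

using System;
using Microsoft.Xrm.Sdk;

public class AddRecipientPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
        {
            var email = (Entity)context.InputParameters["Target"];

            // Append an extra activityparty to the 'to' partylist attribute
            var recipients = email.GetAttributeValue<EntityCollection>("to") ?? new EntityCollection();

            var party = new Entity("activityparty");
            party["partyid"] = new EntityReference("contact", new Guid("00000000-0000-0000-0000-000000000001"));   // illustrative id

            recipients.Entities.Add(party);
            email["to"] = recipients;   // only honoured if this runs in the pre-validation stage
        }
    }
}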

Monday, 7 April 2014

Controlling Duplicate Detection

The CRM SDK messages CreateRequest and UpdateRequest support a configuration parameter "SuppressDuplicateDetection" that provides control over whether duplicate detection rules will be applied - see http://msdn.microsoft.com/en-us/library/hh210213(v=crm.6).aspx. However, this parameter is not available through other programmatic means (such as the REST endpoint) to create or update records.

To work around this, I created a plugin that sets the "SuppressDuplicateDetection" parameter based on the value of a boolean attribute that can be included in the Entity instance that is created or updated.
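Broadly, such a plugin might look like the following minimal sketch. The attribute name new_checkforduplicates is illustrative, and the plugin is assumed to be registered on a pre-event stage of Create and Update; the full source I posted (linked below) is the working version:

using System;
using Microsoft.Xrm.Sdk;

public class SetSuppressDuplicateDetectionPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (context.InputParameters.Contains("Target") && context.InputParameters["Target"] is Entity)
        {
            var target = (Entity)context.InputParameters["Target"];

            if (target.Contains("new_checkforduplicates"))   // illustrative attribute name
            {
                bool checkForDuplicates = target.GetAttributeValue<bool>("new_checkforduplicates");

                // SuppressDuplicateDetection = true means duplicate detection rules are NOT applied
                context.InputParameters["SuppressDuplicateDetection"] = !checkForDuplicates;

                // Remove the helper attribute so it isn't written to the record itself
                target.Attributes.Remove("new_checkforduplicates");
            }
        }
    }
}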

I've posted the source code to the MSDN Code Gallery here

I created this because I had a need to apply duplicate detection rules to entities created via the REST endpoint in CRM 2011.

It may be that this plugin could also be used as a way to revert the CRM 2013 behaviour back to that of CRM 2011, to allow duplicate detection rules to fire on CRM forms. However, I've yet to test this fully; if anybody wants to test it, feel free to do so and make comments on this post. Otherwise, I'll probably update this post if I find anything useful with the CRM 2013 interface.

Friday, 13 December 2013

Crm 2013 – Script errors after upgrading an ex-Crm 4.0 organisation


After a recent upgrade to Crm 2013 of an organisation that had been a Crm 4.0 organisation, there were client script errors when navigating to the Case or Queue entities. The underlying cause was some SiteMap entries that referenced Crm 4.0 urls; these were being redirected to new urls, but seemed to be missing some elements on the QueryString.
The SiteMap entries with issues were:

    <SubArea Id="nav_cases" Entity="incident" DescriptionResourceId="Cases_SubArea_Description" Url="/CS/home_cases.aspx" />
    <SubArea Id="nav_queues" Entity="queue" Url="/workplace/home_workplace.aspx" DescriptionResourceId="Queues_SubArea_Description">
      <Privilege Entity="activitypointer" Privilege="Read" />
    </SubArea>

The fix is to replace them with the following (which come from a default SiteMap in a new Crm 2013 organisation, though I’ve stripped out the GetStarted attributes):

    <SubArea Id="nav_cases" DescriptionResourceId="Cases_SubArea_Description" Entity="incident" />
    <SubArea Id="nav_queues" ResourceId="Homepage_Queues" DescriptionResourceId="Queues_SubArea_Description" Icon="/_imgs/ico_18_2020.gif" Url="/_root/homepage.aspx?etc=2029" >
     <Privilege Entity="queue" Privilege="Read" />
    </SubArea>

These are the only entries I’ve found so far with problems. I think the entry for Queues is a one-off, but the entry for Cases is notable in that the original (Crm 4.0) SiteMap entry included a Url attribute, whereas entries for most other entities did not include the Url attribute. So, it’s possible that other entries that include both the Entity and Url attributes could have the same issue.
Although annoying at the time, I don’t see this as a major issue, as reviewing the SiteMap will be one of the standard tasks we do for any upgrade to Crm 2013. This is due to the change in navigation layout, which means the overall navigation structure deserves a rethink to make best use of the new layout. When doing this, we find it is best to start with a new clean SiteMap and edit this to a customer-specific structure for Crm 2013, rather than trying to edit an existing structure. It’s also worth noting that a few of the default permissions have changed (spot the difference above for the privilege to see the Queues SubArea), and it’s worth paying attention to these at upgrade time for future consistency.
    Although annoying at the time, I don’t see this as a major issue, as reviewing the SiteMap will be one of the standard tasks we do for any upgrades to Crm 2013. This is due to change in navigation layout, which means the overall navigation structure deserves a rethink to make best use of the new layout. When doing this, we find it is best to start with a new clean SiteMap and edit this to a customer-specific structure for Crm 2013, rather than trying to edit an existing structure. It’s also worth noting that a few of the default permissions have changed (spot the difference above for the privilege to see the Queues SubArea), and it’s worth paying attention to these at upgrade time for future consistency.