
Sunday, 24 March 2019

Solution Layers

In Dynamics 365 Online, you may have recently noticed a new button within an unmanaged solution, 'Solution Layers'. I first saw this appear around 17th Mar 2019, and it's a welcome tool to help understand more about what happens with multiple solutions in a system.



I've been playing around with this a bit in the last week, and there are a few key concepts to understand from the beginning.
  • The button is currently only visible within an unmanaged solution or the default solution. This seems to be because managed solutions are not directly editable, though I think this is a bit of an oversight, because...
  • When you click the button for a solution component, it will display any managed solutions that contain that component, along with an 'Active' solution. From what I can tell so far, I'm treating the 'Active' solution as being the same as the default solution, but there may be subtle differences
  • It makes sense that this only displays managed solutions, as these are the only solutions that are individually layered. In contrast, unmanaged solutions are all combined into the one unmanaged layer
So, it's a little confusing to start with, in that you can only access it from the unmanaged layer, but it is displaying information about the managed layers. But, leaving that aside, what does it tell us? It provides information at two levels: first the list of layers, then the detailed properties that were set within each layer.
The Layers
When you click on the 'Solution Layers' button, it will list all the managed solutions that contain this component, and the order in which they apply. 

Order no.1 is the first solution in which this component was imported into this organisation, with the remaining solutions in incremental order. The order listed is the order in which any changes will be applied; I think this would normally be the order in which the solutions are installed, though I suspect that Microsoft may change the order of some of their solutions, irrespective of the installation sequence. So, we start with order = 1, then any changes from each other solution are applied in turn, so that a change from a higher Ordered solution would override a change to the same property in a lower Ordered solution.

So, the information we get here is what solutions contain a component, and the order of the solutions. The interesting thing here is that some layers appear above the Active layer; from what I can tell so far, only solutions from Microsoft appear above the Active layer.
Properties within a Layer
If you click on a layer, it will then show the component properties within that layer. The 'Changed Properties' tab shows which properties were set in this layer, and the values they were set to.

So, in this case we see that the msdynce_SalesPatch changes 3 properties of the Opportunity entity, for example the isauditenabled property. This indicates that the 'Include Entity Metadata' option had been selected when the entity had been added to the solution in the source system, which is one of the useful pieces of information that we can now get from the Solution Layer.

The 'All Properties' tab shows all the effective properties at this layer - i.e. the result of taking all properties from layers with a lower order number and applying the properties from this layer; these can in turn be overridden by layers with a higher order number.


Note that, in the example above for the Opportunity entity, several solutions were listed, but many don't have any changed properties. I think this is mostly because an entity will often be included in a solution because one of its subcomponents (e.g. a view) has changed. To see this, you'd have to open Solution Layers on the subcomponent.

Different component types have different properties. Unfortunately the information given so far is, unsurprisingly, only the whole property value. So, for example, for a form we just see the formxml and formjson, which doesn't give us a representation of how forms are merged across solutions. However, I'm intending to dig further into whether the 'All Properties' tab can give an indication of how the formxml changes through the layers - if I find anything interesting then that could be another blog post.

Saturday, 31 March 2018

Common Data Services Architecture in CDS 2.0

I struggled to think of a good title for this post, and I hope to change it to something more inspirational, as this is a very significant topic.
Microsoft have made several recent announcements in March 2018, but for me the most significant is the PowerApps Spring Update. This may seem strange for me, a CRM MVP, to say, given how much there was on CRM in the Business Applications Spring ’18 Release Notes, but I think it makes sense once you realise that the PowerApps Update describes the new and future Common Data Services (CDS) architecture, and that in this architecture, much of CDS is the CRM platform (aka xRM).
Rather than CDS being a separate layer or component that then communicates with the CRM platform, CDS and CRM are a shared platform.
Strictly, it's not quite as simple as the last sentence makes out, especially as CDS now splits into Common Data Service for Applications and Common Data Service for Analytics (I'm hoping we'll soon get good acronyms to distinguish these), but for now it's worth emphasising that, if using Common Data Service for Applications, you are directly using the same platform components that CRM uses. This has several major implications (all of which are good to my mind):

  1. CDS for Apps can fully use the CRM platform features, such as workflows, business process flows, and calculated fields. This immediately makes CDS a hugely powerful platform, but also means there are no decisions to take on which platform to use, or differences to take into account, because they are the same platform
  2. There are no extra integration steps. Commissioning a CDS environment will give you a CRM organisation, and equally, commissioning a CRM organisation will give you a CDS environment. This is not a duplication of data or platforms, because again, they are the same platform
There's a lot to play with, and explore, but for now this seems a major step forward for the platform, and I feel I'll be writing a lot more about CDS (though I'm still not sure when I'll stop referring to CRM when describing the platform).
The one area that still needs to be confirmed, and which could have a major impact on adoption, is licensing, but I hope we'll get clarity on this soon.

Thursday, 29 March 2018

Concurrent or Consistent - or both

A lesser-known feature that CRM 2016 brought to us is support for optimistic concurrency in the web service API. This may not be as exciting as some features, but as it's something I find exciting, I thought I'd write about it.

Am I an optimist
So, what is it about? Concurrency control is used to ensure data remains consistent when multiple users are making concurrent modifications to the same data. The two main models are pessimistic concurrency and optimistic concurrency. The difference between the two can be illustrated by considering two users (Albert and Brenda), who are trying to update the same field (X) on the same record (Y). In each case the update is actually two steps (reading the existing record, then updating it), and Albert and Brenda try to do the steps in the following time sequence:
  1. Albert reads X from record Y (let's say the value is 30)
  2. Brenda reads record Y (while it's still 30)
  3. Albert updates record Y (Albert wants to add 20, so he updates X to 50)
  4. Brenda updates record Y (she wants to subtract 10, so subtracts 10 from the value (30) she read in step 2, so she updates X to 20) 
If we had no concurrency control, we would have started with 30, added 20, subtracted 10, and found that apparently 30 + 20 - 10 = 20. Arguably we have a concurrency model, which is called 'chaos', because we end up with inconsistent data.
To avoid chaos, we can use pessimistic concurrency control. With this, the sequence is:
    1. Albert reads X from record Y (when the value is 30), and the system locks record Y
    2. Brenda tries to read record Y, but Albert's lock blocks her read, so she sits waiting for a response
    3. Albert adds 20 to his value (30), and updates X to 50, then the system releases the lock on Y
    4. Brenda now gets her response, which is that X is now 50
    5. Brenda subtracts 10 from her value (50), and updates X to 40
    So, 30 + 20 - 10 = 40, and we have consistent data. So we're all happy now, and I can finish this post.
    Or maybe not. Brenda had to wait between steps 2 and 4. Maybe Albert is quick, but then again, maybe he isn't, or he's been distracted, or gone for a coffee. For this to be robust, locks would have to be placed whenever a record is read, and only released when the system knows that Albert is not still about to come back from his extended coffee break. In low latency client-server systems this can be managed reasonably well (and we can use different locks to distinguish between 'I'm just reading' and 'I'm reading and intending to update'), but with a web front-end like CRM, we have no such control. We've gained consistency, but at a huge cost in concurrency. This is pessimistic concurrency.
    Now for optimistic concurrency, which goes like this:
    1. Albert reads X (30) from record Y, and also reads a system-generated record version number (let's say it's version 1)
    2. Brenda reads record Y (while it's still 30), and the system-generated record version number (which is still version 1, as the record's not changed yet)
    3. Albert adds 20 to his value (30), and updates X to 50. The update is only permitted because Albert's version number (1) matches the current version number (1). The system updates the version number to 2
    4. Brenda subtracts 10 from her value (30), and tries to update X to 20. This update is not permitted as Brenda's version number (1) does not match the current version number (2). So, Brenda will get an error
    5. Brenda now tries again, reading the current value (50) and version number (2), then subtracting 10, and this time the update is allowed
    The concurrency gain is that Albert, Brenda and the rest of the alphabetical users can read and update with no blocks, except when there is a conflict. The drawback is that the system will need to do something (even if it is just give an error message), when there is a conflict.
    What are the options
    Given this post is about a feature that was introduced in CRM 2016, what do you think happened before (and still happens now, because you have to explicitly opt in to optimistic concurrency)? If it's not optimistic concurrency, then it's either pessimistic or chaos. And it's not pessimistic locking: if Microsoft defaulted to this, then CRM would grind to a locked halt if users often tried to concurrently access records.

    Maybe I want to be a pessimist
    As chaos sounds bad, maybe you don't believe that CRM would grind to a locked halt, or you're happy that users don't need concurrent access, or you've been asked to prevent concurrent access to records (see note 1). So, can we apply pessimistic locking? The short answer is 'no', and most longer answers also end up at 'no'. Microsoft give us almost no control over locking within CRM (see note 2 for completeness), and definitely no means to hold locks beyond any one call. If you want to prolong the answer as much as you can, you might conceive a mechanism whereby users only get user-level update access to records, and have to assign the record to themselves before they can update it, but this doesn't actually work either, as a user may still be making the update based on a value they read earlier. And you can't make it user-level read access, as the user then wouldn't be able to see a record owned by someone else in order to assign it to themselves.

    OK, I'll be an optimist
    So, how do we use optimistic concurrency? First of all, not every entity is enabled for optimistic concurrency, but most are. This is controlled by the IsOptimisticConcurrencyEnabled property of the entity, which by default is true for all out-of-box entities enabled for offline sync, and for all custom entities. You can check this property by querying the entity metadata (but not in the EntityMetadata.xlsx document in the SDK, despite the SDK documentation)
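
    As a minimal sketch of that metadata query (assuming an already-authenticated IOrganizationService named 'service'; the entity name is illustrative):

    using System;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Messages;
    using Microsoft.Xrm.Sdk.Metadata;

    // Retrieve entity-level metadata only - no attribute or relationship detail needed
    RetrieveEntityRequest request = new RetrieveEntityRequest
    {
        LogicalName = "account",              // illustrative entity
        EntityFilters = EntityFilters.Entity
    };
    RetrieveEntityResponse response = (RetrieveEntityResponse)service.Execute(request);

    // True if the entity supports optimistic concurrency
    bool? isEnabled = response.EntityMetadata.IsOptimisticConcurrencyEnabled;
    Console.WriteLine("IsOptimisticConcurrencyEnabled: {0}", isEnabled);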

    Then, to use optimistic concurrency you need to do at least 2 things, and preferably 3:
    1. In the Entity instance that you are sending to the Update, ensure the RowVersion property is set to the RowVersion that you received when you read this record 
    2. In the UpdateRequest, set the ConcurrencyBehavior to IfRowVersionMatches
    3. Handle any exceptions. If there is a row version conflict (as per my optimistic scenario above), then you get a ConcurrencyVersionMismatch exception. 
    For a code example, see the SDK
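
    As a minimal sketch of these steps (assuming an authenticated IOrganizationService named 'service' and the Guid of an existing account record, both illustrative; the SDK sample remains the authoritative version):

    using System;
    using System.ServiceModel;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Messages;
    using Microsoft.Xrm.Sdk.Query;

    Entity account = service.Retrieve("account", accountId, new ColumnSet("creditlimit"));
    // account.RowVersion was populated by the Retrieve (step 1)

    account["creditlimit"] = new Money(300000);

    UpdateRequest update = new UpdateRequest
    {
        Target = account,  // the RowVersion travels with the entity we read
        ConcurrencyBehavior = ConcurrencyBehavior.IfRowVersionMatches  // step 2
    };

    try
    {
        service.Execute(update);
    }
    catch (FaultException<OrganizationServiceFault> ex)
    {
        // step 3: a ConcurrencyVersionMismatch fault means the record changed
        // since we read it - re-read the record and retry, or surface the error
        Console.WriteLine(ex.Detail.Message);
    }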
    I've described this for an Update request, and you can also use it for a Delete request, and I hope you'll understand why it doesn't apply to a Create request.

    One word of warning: I believe that some entities fail when using optimistic concurrency - these seem to be the entities that are metadata-related (e.g. webresource or savedquery). I suspect this is because the metadata-related internals use a different internal (SQL-level) concurrency model from most other entities.

    How much does it matter
    I've left this till last, otherwise you may not have read the rest of the post, as it often doesn't matter. Consistency issues are most relevant if there's a long time between a read and the corresponding update. The classic example is offline usage (hence why it's enabled for out-of-box entities enabled for offline sync). I also see it as relevant for some bulk operations; for example we do a lot of bulk operations with SSIS, and for performance reasons, there's often a noticeable time gap between reads and writes in an SSIS data flow.

    Notes

    1. During CRM implementations, if asked 'Can we do X in CRM?', I very rarely just say no, and I'm more likely to say no for reasons other than purely technical ones. However, when I've been asked to prevent concurrent access to records, then this is a rare case when I go for the short answer of 'no'
    2. We can get a little bit of control over locking within a synchronous plugin, as this runs within the CRM transaction. This is the basis of the most robust CRM-based autonumber implementations. However, the lock can't be held outside of the platform operation
    3. My examples have concentrated on updating a single field, but any talk of locking or row versions is at a record level. If Albert and Brenda were changing different fields, then we may not have a consistency issue to address. However, for practical reasons, any system applies locks and row versioning at a record, not field, level. Also, even if the updates are to different fields, it is possible that the change they make is dependent on other fields that may have changed, so for optimistic concurrency we do get a ConcurrencyVersionMismatch if any field has changed


    Friday, 6 December 2013

    Crm 2013 – No more ExtensionBase tables


    So, Dynamics Crm 2013 is here, and there’s lots to say about the new UI and the new features. But many others are talking about these, so I thought I’d start with what may seem to be an obscure technical change; it’s one that I welcome, though, and it makes a significant contribution to the stability and performance of Crm 2013.

    With Crm 3.0, Microsoft changed the underlying table structure so that any customisable entity was split across 2 tables; a base table that contained all system attributes, and an extensionbase table for custom attributes. For example, there was an accountbase and an accountextensionbase table. Each table used the entity’s key as the primary key, and the extensionbase table also had a foreign key constraint from its primary key field to the primary key in the base table. Each entity had a SQL view that joined the data from these tables to make them appear as one table to the platform. As I understand it, the main reason for this design was to allow for more custom attributes, as SQL Server had a row-size limit of 8060 bytes, and some of the system attributes were already using ~6000 bytes.

    The same table design was retained in Crm 4.0 and Crm 2011. However, Crm 2011 introduced a significant change to the plugin execution pipeline, which allowed custom plugins to execute within the original SQL transaction. This was a very welcome change that provided greater extensibility. However it did mean that the duration of SQL transactions could be extended, which means that SQL locks may be held for longer, which means potentially more locking contention between transactions. In very occasional circumstances, a combination of certain plugin patterns, the design of the base and extensionbase tables, and heavy concurrent use, could give rise to deadlocks (see below for an example).

    Given this, I’m very glad that the product team retained the facility to have plugins execute within the original transaction (then again, it would be hard to remove this facility from us). It wouldn’t be realistic to ask customers to reduce concurrent usage of CRM, so the only way to reduce the potential deadlock issue was to address the design of the base and extensionbase tables. From my investigations (sorry, but I actually quite like investigating SQL locking behaviour), a substantial improvement could have been made by retaining the table design but modifying the SQL view; a greater improvement, though, comes from combining the tables into one. An added advantage of this change is that the performance of most data update operations is also improved.
    Deadlock example

    Here are two SQL statements generated by CRM:
    select
    'new_entity0'.new_entityId as 'new_entityid'
    , 'new_entity0'.OwningBusinessUnit as 'owningbusinessunit'
    , 'new_entity0'.OwnerId as 'ownerid'
    , 'new_entity0'.OwnerIdType as 'owneridtype'
    from new_entity as 'new_entity0'
    where ('new_entity0'.new_entityId = @new_entityId0)  

    And

    update [new_entityExtensionBase]
    set [new_attribute]=@attribute0
    where ([new_entityId] = @new_entityId1)
     
    These were deadlocked, with the SELECT statement being the deadlock victim. The locks that caused the deadlock were:
    • The SELECT statement had a shared lock on the new_entityExtensionBase table, and was requesting a shared lock on new_entityBase table
    • The UPDATE statement had an update lock on the new_entityBase table, and was requesting an update lock on new_entityExtensionBase table
    The likely reason for this locking behaviour was that:
    • Although the SELECT statement was requesting fields from the new_entityBase table, it had obtained a lock on the new_entityExtensionBase table to perform the join in the new_entity view
    • The UPDATE statement that updates a custom attribute (new_attribute) on the new_entity entity would have been the second statement of 2 in the transaction. The first statement would modify system fields (e.g. modifiedon) in the new_entityBase table, and hence place an exclusive lock on a row in the new_entityBase table, and the second statement is the one above, which is attempting to update the new_entityExtensionBase table
    Both operations needed to access both tables, and if you’re very unlucky, then the two operations, working on the same record, may overlap in time, and cause a deadlock.

    The new design in Crm 2013 solves this in three ways:
    1. With just the one entity table, the SELECT statement only needs one lock, and does not need to obtain one lock, then request another
    2. Only one UPDATE statement is required in the transaction, so locks are only required on the one table and they can be requested together, as they would be part of just one statement
    3. Both operations will complete more quickly, reducing the time for which the locks are held
    Of these 3 improvements, either no. 1 or 2 would have been sufficient to prevent deadlocks in this example, but it is gratifying that both improvements have been made. The third improvement would not necessarily prevent deadlocks, but will reduce their probability by reducing overall lock contention, and will also provide a performance improvement.

    Wednesday, 17 November 2010

    How to use impersonation in an ASP .Net page using IFD in CRM 4.0

    This is a common requirement, and I've never found what I consider to be a suitable explanation of what needs to be done. This post is not intended to be exhaustive, but is intended to cover the essentials in one place.

    The fundamental requirement is to create a custom ASP .Net page that is accessible both internally (via AD authentication) and over the Internet (via IFD authentication), and where the code will access CRM data under the context of the user accessing the page. To do this, you need to deploy and configure your code as follows:


    1. Deploy the ASP .Net page within the CRM Web Site (the only supported place is within the ISV directory). If you don't do this, then IFD authentication will not apply to your page
    2. Run the ASP .Net page within the CrmAppPool, and do not create an IIS application for it. If you don't do this, then you won't be able to identify the authenticated user
    3. Ensure that the CRM HttpModules MapOrg and CrmAuthentication are enabled. This will happen by default by inheritance of the settings from the root web.config file in the CRM web site, but I'm mentioning it here as there are some circumstances (when you don't need IFD) in which it is appropriate to disable these HttpModules. Again, if the HttpModules aren't enabled, then you won't be able to identify the authenticated user
    4. As your code is in a virtual directory (rather than a separate IIS application), ASP .Net will look for your assemblies in the [webroot]\bin folder, so that is where you should put them (or in the GAC). The initial release documentation for CRM 4.0 stated that it was unsupported to put files in [webroot]\bin folder of the CRM web site, but this restriction has been lifted

    You also need to follow certain coding patterns within your code. An example of these can be found here, and a minimal sketch follows the list below. Note that 'Crm web services' refers to both the CrmService and the MetadataService:

    1. Ensure you can identify the organisation name. The example code shows how to parse this from the Request.Url property, though I prefer to pass this on the querystring (which the example also supports)
    2. Use the CrmImpersonator class. All access to the Crm web services needs to be wrapped within the using (new CrmImpersonator()) block. If you don't do this you will probably get 401 errors, often when accessing the page internally via AD authentication (see later for a brief explanation)
    3. Use the ExtractCrmAuthenticationToken static method. This is necessary to get the context of the calling user (which is stored in the CallerId property)
    4. Use CredentialCache.DefaultCredentials to pass AD credentials to the Crm web services. If you don't do this, then you will probably get 401 errors as you'd be trying to access the web service anonymously (IIS would throw these 401 errors)
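
    Bringing those patterns together, here is a minimal sketch based on the CRM 4.0 SDK pattern (the page class, web service Url and 'orgname' querystring parameter are illustrative, and error handling is omitted):

    using System;
    using System.Net;
    using Microsoft.Crm.Sdk;          // CrmAuthenticationToken, CrmImpersonator
    using Microsoft.Crm.SdkTypeProxy; // CrmService

    public partial class MyPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // 1. Identify the organisation - here from the querystring (illustrative)
            string orgName = Request.QueryString["orgname"];

            // 2. Wrap all Crm web service access in CrmImpersonator
            using (new CrmImpersonator())
            {
                // 3. Get the calling user's context (this sets CallerId)
                CrmAuthenticationToken token =
                    CrmAuthenticationToken.ExtractCrmAuthenticationToken(Context, orgName);
                token.OrganizationName = orgName;

                CrmService service = new CrmService();
                service.Url = "http://localhost/mscrmservices/2007/crmservice.asmx"; // adjust to your deployment
                service.CrmAuthenticationTokenValue = token;

                // 4. Pass AD credentials rather than calling anonymously
                service.Credentials = CredentialCache.DefaultCredentials;

                // ... use 'service' here; calls run in the context of the calling user ...
            }
        }
    }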

    That should be all that you need on the server side. The final piece of the puzzle is to ensure that you provide the correct Url when accessing the page, which again needs a little consideration:

    When accessing the page from an internal address, the Url should be of the form:
    http://[server]/[orgname]/ISV/MyFolder/MyPage.aspx

    When accessing the page from an external address, the Url should be of the form:
    http://[orgname].[serverFQDN]/ISV/MyFolder/MyPage.aspx

    This is relatively easy to achieve when opening the page from within CRM (i.e. in an IFrame, via an ISV.config button or in client script). In each case you can use the PrependOrgName global function in client script - e.g.

    var u = PrependOrgName('/ISV/MyFolder/MyPage.aspx');

    This function will determine correctly whether to add the organisation name to the Url. Note also that I've provided a relative Url, which will ensure the first part of the Url is always correct. As this uses a JavaScript function, you will always need to use a small piece of code to access the page, and cannot rely on statically providing the Url in the source of an IFrame, or in the Url attribute of an ISV.Config button. Any relative Urls in SiteMap should automatically get the organisation name applied correctly. Remember to also pass the organisation name on the querystring if the server code expects this (you can get the organisation name from the ORG_UNIQUE_NAME global variable)

    Earlier I promised an explanation of what the server code does. This is not as complete an explanation as it could be, but the basics are:

    1. The HttpModules identify the correct CRM organisation (MapOrg) from the Url provided, and place information about the authenticated calling user in the HttpContext (CrmAuthentication)
    2. The ExtractCrmAuthenticationToken method reads the user context from the HttpContext, and puts the user's systemuserid in the CallerId property of the CrmAuthenticationToken
    3. Because the CallerId is set, the call to CRM is necessarily using CRM impersonation. For this to be permitted, the execution account (see Dave Berry's blog for a definition) must be a member of the AD group PrivUserGroup. The execution account is the AD account that is returned by CredentialCache.DefaultCredentials. This is where things get a little interesting
    4. If the request comes via the external network and IFD authentication is used, CRM handles the authentication outside of IIS and no IIS / ASP .Net impersonation occurs. Therefore CredentialCache.DefaultCredentials will return the AD identity of the process, which is the identity of the CrmAppPool, which necessarily is a member of PrivUserGroup
    5. However, if the request comes via the internal network, AD authentication is used and IIS / ASP .Net impersonation does occur (through the setting in web.config). This impersonation will change the execution context of the thread, and CredentialCache.DefaultCredentials would then return the AD context of the caller. This is fine in a pure AD authentication scenario, but the use of the ExtractCrmAuthenticationToken method means that CRM impersonation is necessarily expected; this will only work if the execution account is a member of PrivUserGroup, and CRM users should not be members of PrivUserGroup. This is where the CrmImpersonator class comes in: its constructor reverts the thread's execution context to that of the process (i.e. it undoes the IIS / ASP .Net impersonation), so that CredentialCache.DefaultCredentials will now return the identity of the CrmAppPool, and the CRM platform will permit CRM impersonation
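
    To illustrate that last point, here is a conceptual sketch of the revert trick - this is not the actual CrmImpersonator source, just an illustration of reverting thread-level impersonation to the process identity:

    using System;
    using System.Security.Principal;

    // Conceptual illustration only - not the real CrmImpersonator implementation.
    // Impersonating a zero token reverts any thread-level (IIS / ASP .Net)
    // impersonation, so CredentialCache.DefaultCredentials then returns the
    // process (CrmAppPool) identity
    public sealed class RevertToProcessIdentity : IDisposable
    {
        private readonly WindowsImpersonationContext context;

        public RevertToProcessIdentity()
        {
            context = WindowsIdentity.Impersonate(IntPtr.Zero);
        }

        public void Dispose()
        {
            context.Undo(); // restore the caller's impersonated context
        }
    }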

    To finish off, here are a few other points to note:

    • IFD impersonation only applies when accessing the CRM platform. If you use IFD authentication, there is no way of impersonating the caller when accessing non-CRM resources (e.g. SQL databases, SharePoint, the file system); it cannot be done, so save yourself the effort and don't even try (though, for completeness, SQL impersonation is possible using EXECUTE AS, but that's it)
    • If you want to use impersonation, do not use the CrmDiscoveryService. The CrmDiscoveryService can only be used with IFD if you know the user's username and password, and you won't know these unless you prompt the user, which kind of defeats the point of impersonation

    Thursday, 11 November 2010

    When is the Default Organisation not the Default Organisation

    In CRM 4.0, an organisation can be designated as the Default Organisation in the Deployment Manager tool:



    However, this does not always do what some people expect. CRM has 2 distinct concepts of a 'Default Organisation', and only one of them can be set using Deployment Manager. The 'Default Organisation' that you set in Deployment Manager is a system-wide setting whose primary use is to define which organisation database is used by code that accesses CRM 4.0 through the CRM 3.0 web services (CRM 3.0 did not support multiple organisations, so its web services had no way to specify the organisation to connect to).

    The other type of 'Default Organisation' applies to users. Each user has a Default Organisation, which is the organisation that they are connected to if browsing to a Url that does not contain the organisation name. For example, if a user browses to http://crm/Excitation/loader.aspx, then the user will necessarily be taken to the Excitation organisation, but if they browse to http://crm/loader.aspx, they will be taken to their default organisation, which has no relationship to the 'Default Organisation' that is set in Deployment Manager. Each user's default organisation will be the first organisation in which their CRM user record was created.

    One issue that can arise is if a user connects using a Url that does not contain the organisation name, and either their default organisation has been disabled, or their user account in their default organisation has been disabled. In this scenario, the user would receive either the error 'The specified organization is disabled' or 'The specified user is either disabled or is not a member of any business unit'. The simplest solution would be to specify an appropriate organisation name in the Url; however if this is not possible, the rest of this post describes an unsupported alternative.

    Unfortunately, none of the MSCRM tools will display a user's default organisation, nor is there a supported way to change this (though there is an unsupported way - see below). All the information is stored in the MSCRM_Config database, in the SystemUser, Organization and SystemUserOrganizations tables. The following output shows some of the relevant data from these tables:


    SystemUser table


    Organization table


    SystemUserOrganizations table

    The DefaultOrganizationId field in SystemUser defines each user's default organisation, and this can be joined to the Id field in the Organization table.

    The fields that define the user are a little more complex: The Id field in SystemUser is unique to each CRM user. You can join the SystemUser table to the SystemUserOrganizations table using the SystemUser.Id and the SystemUserOrganizations.UserId fields. The CrmUserId field in SystemUserOrganizations can be joined to the systemuserid field in the systemuserbase table in each CRM organisation database. Note that the same user will have a different systemuserid in each organisation database. The following query illustrates these joins, taking the user's name from one of the organisation databases (it's not stored in MSCRM_Config):

    select o.UniqueName, u.FullName
    from Organization o
    join SystemUser su on o.Id = su.DefaultOrganizationId
    join SystemUserOrganizations suo on su.Id = suo.UserId
    join Excitation_MSCRM..systemuser u on suo.CrmUserId = u.systemuserid


    So, that's how it fits together. As this is all SQL data, it is not difficult to modify this, but be aware that to do so is completely unsupported, and could break your CRM implementation. If you were to make any changes, make sure you backup the relevant databases (especially MSCRM_Config) before you do so.

    If you did want to change a user's default organisation, please heed the warning in the preceding paragraph and backup the MSCRM_Config database. The following SQL update statement will change the default organisation of a given user, based on their systemuserid in one organisation database. The reason for writing the query this way is to ensure that a user's default organisation can only be set to an organisation that they exist in, and this query should only ever modify one record. If it modifies 0 records, then check the @systemuserid value, and if it modifies more than one record then your MSCRM_Config database is probably corrupt, and you should reinstall CRM and reimport your organisation databases (I was serious about my warnings).

    declare @systemuserid uniqueidentifier
    set @systemuserid = '25E1DC1D-BEC2-449B-AAD8-4A6309122AE1' -- replace this
    update SystemUser
    set DefaultOrganizationId = suo.OrganizationId
    from SystemUserOrganizations suo
    where suo.UserId = SystemUser.Id
    and suo.CrmUserId = @systemuserid

    One final point: CRM caches some of the data in MSCRM_Config, so you'd need to recycle the CRM application pool to be sure any changes have taken effect.

    Tuesday, 9 November 2010

    The CRM 4.0 Reporting Services Connector - how it works

    The Dynamics CRM Connector for Reporting Services is a very useful component that avoids the need to configure Active Directory delegation (aka double-hop authentication) when multiple servers are used for the CRM, Reporting Services and SQL Server roles in a CRM implementation. In general, it is easy to install and use, but I'm always interested in how these things work.

    How the Connector is installed and invoked
    The connector is installed as a Data Processing Extension with SSRS. These extensions are registered within the rsreportserver.config file on the Reporting Server, as per the following snippet:

    <Data>
      <Extension Name="SQL" Type="Microsoft.ReportingServices.DataExtensions.SqlConnectionWrapper,Microsoft.ReportingServices.DataExtensions" />
      <Extension Name="OLEDB" Type="Microsoft.ReportingServices.DataExtensions.OleDbConnectionWrapper,Microsoft.ReportingServices.DataExtensions" />
      <Extension Name="MSCRM" Type="Microsoft.Crm.Reporting.DataConnector.SrsExtConnection,Microsoft.Crm.Reporting.DataConnector" />
    </Data>

    All CRM reports with SSRS are configured to use a specific SSRS Data Source. When the Connector is installed, the Data Source is changed to use the MSCRM Data Processing Extension, instead of the default SQL Server extension. See images below:


    Report properties showing the MSCRM Data Source


    MSCRM Data Source using the SQL Server extension before the connector is installed


    MSCRM Data Source using the CRM extension after the connector is installed.

    There are 3 differences between these configurations:

    1. The Connection Type, which specifies the extension
    2. The Connection String is absent with the CRM connector. This is because the connector reads some of the database information from registry values that were created during its installation, and some from data passed to it when the report is run (see below)
    3. The Credentials. With the SQL Server connector, standard Windows Integrated security is used - i.e. the user's AD credentials are used to connect to SQL Server. With the CRM connector, separate 'credentials' are passed to SSRS (again, see below)

    What happens when a report is run
    If you try to run a CRM report with the CRM connector installed, the connector will require some 'credentials', as per point no.3 above. This image shows what happens if you try to run a report from Report Manager:


    Running a CRM report from Report Manager when the CRM connector is installed

    These 'credentials' are not what they seem; rather they are a cunning way for the CRM platform to pass information about the current user to the CRM connector. The CRM connector expects the current user's systemuserid (a Guid) to be passed into the Log In Name box, and the organizationid (another Guid) to be passed into the Password box. These are not your login name and password.

    As the report uses a data source that uses the CRM connector, the RS Report Server code calls the CRM connector code (the SrsExtConnection class in the Microsoft.Crm.Reporting.DataConnector assembly, as per the rsreportserver.config data above). The code will then:
    1. Check that it is permitted to impersonate a CRM user. This checks that the identity the code is running under (which is the identity of the ReportServer application pool, or the Reporting Services service, depending on the version of Reporting Services) belongs to the AD group PrivReportingGroup
    2. Connect to the MSCRM_Config database to determine the correct MSCRM organization database, based on the organizationid that was passed in the 'credentials'
    3. Connect to the relevant MSCRM organization database. Note that this is done (as was the previous step) using integrated security under the AD identity as per step 1 above
    4. Use the SQL statement SET Context_Info to pass the calling CRM user's systemuserid into the Context_Info
    5. Execute the SQL statement(s) within the report definition. The definitions of all filtered views use the fn_FindUserGuid function to read the systemuserid from the Context_Info
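
    To illustrate steps 4 and 5, here is a hedged sketch of the same mechanism in plain ADO .Net (the connection string, view name and ids are illustrative - this is not the connector's actual code):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    static void QueryAsCrmUser(string connectionString, Guid callingUserId)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Step 4: place the calling user's systemuserid into Context_Info
            using (SqlCommand setContext = connection.CreateCommand())
            {
                setContext.CommandText = "SET CONTEXT_INFO @userId";
                setContext.Parameters.Add("@userId", SqlDbType.VarBinary, 128).Value =
                    callingUserId.ToByteArray();
                setContext.ExecuteNonQuery();
            }

            // Step 5: filtered views call fn_FindUserGuid, which reads Context_Info,
            // so this query only returns records the calling user is allowed to see
            using (SqlCommand query = connection.CreateCommand())
            {
                query.CommandText = "SELECT name FROM FilteredAccount";
                using (SqlDataReader reader = query.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }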

    What can you do with this information
    One use is for troubleshooting. Checking the rsreportserver.config is a quick way to see if the connector is installed, and checking the configuration of the MSCRM Data Source will tell you if the connector is in use. Changing the MSCRM Data Source is a quick way to turn the connector on or off for test purposes.

    You can also run the reports directly, rather than from CRM. Again, when troubleshooting I find it useful to run a report directly from the Report Manager web interface. To do this with the connector, you need to enter the systemuserid and organizationid when prompted (see image above). These values can be read from the filteredsystemuser and filteredorganization views respectively in the MSCRM database.

    A further option is to run the reports via other means, such as Url Access, as described here (that article was written for CRM 3, see here for an update for CRM 4). To do this with the connector installed, you will also have to pass the systemuserid and organizationid on the query string. This is done using the following syntax:

    &dsu:CRM=<systemuserid>&dsp:CRM=<organizationid>
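
    For example, a full Url might look like this (the server and report names are hypothetical placeholders, and the two Guids are the values read from the filtered views mentioned above):

    http://<reportserver>/ReportServer?/Excitation_MSCRM/<reportname>&rs:Command=Render&dsu:CRM=<systemuserid>&dsp:CRM=<organizationid>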