
[15 Feb 2012 | 0 Comments]

 

Introduction

Microsoft Dynamics CRM 2011 has made deploying customisations much easier than previous versions. However, there are some technical limitations you face when using Microsoft CRM Online, such as not being able to install anything on the server (for example, putting DLLs in the GAC). One of the great features of the framework is the Early Bound Entity code generator. It is useful to put the generated code in a separate project so it can be re-used across other projects, such as the Plugin and Workflow projects. However, to deploy this solution structure to Online you need all the code compiled into a single assembly. This post describes how I solved this problem, allowing me to have a common project with business logic shared between the assemblies that get deployed.

The project used in this post

I run a small software company called See The Link. The project I am using as an example is a Dynamics CRM Online solution we built for one of my clients. <plug>I’m always looking out for new business - if you want to hire me please get in touch!</plug>

Solution Setup

This is the solution setup I have. It is based on the Dynamics CRM 2011 Developer Toolkit template for Visual Studio 2010. As you can see, I have a project for plugins, SeeTheLink.Visalogic.Crm.Plugins, and another project for workflows, SeeTheLink.Visalogic.Crm.Workflow (I realise that custom workflows cannot be deployed to Online yet, but that is coming in Q2 this year). The early bound entities have been put in the SeeTheLink.Visalogic.Crm.XrmEntities project. The last important project is CrmPackage, which is created by the toolkit template. I don’t understand all the voodoo that this project type does, but it is responsible for deploying the assets to the target Dynamics solution. This is where we need to tap in to make it merge the assemblies before it deploys them.

[Screenshot: the solution structure in Visual Studio]

Step 1 – Get ILMerge and the msbuild files

There are some MSBuild files you need. They can be found here:

C:\Program Files (x86)\MSBuild\Microsoft\CRM\

The files are called:

Microsoft.CrmDeveloperTools.CrmClient.targets
Microsoft.CrmDeveloperTools.CrmClient.dll

We need to take copies of these files so we can edit the targets file per solution; otherwise the change would apply to every solution built on the computer. Copy them to a folder just below the solution folder. I called mine “Tools”.

[Screenshot: the Tools folder next to the solution folder]

Put ILMerge.exe in there as well. This is the magic tool that does the merging for us. In the above screenshot I have my solution in the DynamicsSolution folder.

Step 2 – Edit the Microsoft.CrmDeveloperTools.CrmClient.targets file

Edit the file and find the <Target Name="BeforeDeploy"> tag. This is where we put the ILMerge commands:

<Target Name="BeforeDeploy">
<Exec Command="&quot;$(SolutionDir)Tools\ILMERGE.EXE&quot; /targetplatform:v4,C:\Windows\Microsoft.NET\Framework\v4.0.30319 /keyfile:$(KeyFile) /out:$(PluginMergeOutputName) $(PluginMergeDlls)" />
<Exec Command="&quot;$(SolutionDir)Tools\ILMERGE.EXE&quot; /targetplatform:v4,C:\Windows\Microsoft.NET\Framework\v4.0.30319 /keyfile:$(KeyFile) /out:$(WorkflowMergeOutputName) $(WorkflowMergeDlls)" />
</Target>

Here you might notice that I’m using a few variables:

$(KeyFile)
$(PluginMergeOutputName)
$(PluginMergeDlls)
$(WorkflowMergeOutputName)
$(WorkflowMergeDlls)

We will define these in the CrmPackage project file next.

Step 3 – Edit the CrmPackage project file

Unload the project file

[Screenshot: the Unload Project context menu]

Edit the file

[Screenshot: the Edit Project File context menu]

Find the following line in the file:

<Import Project="$(MSBuildExtensionsPath32)\Microsoft\CRM\Microsoft.CrmDeveloperTools.CrmClient.targets" />

Change it to:

<Import Project="$(SolutionDir)Tools\Microsoft.CrmDeveloperTools.CrmClient.targets" />

This should be the path where we copied the files to in Step 1 and edited them in Step 2.

We also need to define the variables we used in Step 2.

Add these in a property group:

<PropertyGroup>
    <KeyFile>"$(SolutionDir)Plugins\SeeTheLink.snk"</KeyFile>
    <PluginMergeDlls>"$(OutputPath)SeeTheLink.Visalogic.Crm.Plugins.dll" "$(OutputPath)SeeTheLink.Visalogic.Crm.XrmEntities.dll"</PluginMergeDlls>
    <PluginMergeOutputName>"$(OutputPath)SeeTheLink.Visalogic.Crm.Plugins.Merged.dll"</PluginMergeOutputName>
    <WorkflowMergeDlls>"$(OutputPath)SeeTheLink.Visalogic.Crm.Workflow.dll" "$(OutputPath)SeeTheLink.Visalogic.Crm.XrmEntities.dll"</WorkflowMergeDlls>
    <WorkflowMergeOutputName>"$(OutputPath)SeeTheLink.Visalogic.Crm.Workflow.Merged.dll"</WorkflowMergeOutputName>
  </PropertyGroup>
This is what the end of the file should look like:
[Screenshot: the end of the edited CrmPackage project file]

The ‘PluginMergeOutputName’ is what the output of the merge will be called. It is therefore important to update the RegisterFile.crmregister file. We will do that after we have reloaded the project:

[Screenshot: the Reload Project context menu]

Step 4 – Edit the RegisterFile.crmregister file

In the CrmPackage project there is a file called RegisterFile.crmregister. This contains all the references the deploy project needs to register the DLLs. We want to register the merged file, so we just have to rename the reference:

[Screenshot: the renamed assembly reference in RegisterFile.crmregister]

Step 5 – Deploy!

When you deploy the package you will see the new steps in the build output:

[Screenshot: the deployment build output showing the ILMerge steps]

A couple of gotchas

If you are registering the same plugins, just in a DLL with a different name, you will need to unregister any DLLs already deployed.

You will also need to add

[assembly: Microsoft.Xrm.Sdk.Client.ProxyTypesAssemblyAttribute()]

to the AssemblyInfo.cs file of the plugin and workflow assemblies so the early bound entities work.

In summary

This approach allows common code and other files to be merged into a single assembly that can then be deployed. It means more advanced features can be included in Online installations, where it is not possible to install third-party DLLs on the server.

charity, dev4good, coding »

[2 Jan 2012 | 0 Comments]

I have been to a number of charity coding events over the last few years. Craig Hogan has just asked people for their ideas on how he might run the second Dev4Good event this year, so here are the top six things I like to get out of these events, with my ideas on how to implement them.

Why I enjoy these events

  1. Using my imagination on how to solve a problem. I truly love doing this: being given a problem and trying to solve it using the various tools and services that are out there and that I know how to use. Sometimes what I think is easy and straightforward is complete magic for non-technical people.
  2. Creating a working solution. Actually creating something that adds value to a charity’s efforts and is actually used. As these events are short, the scope of the solution has to be small for it to be successful.
  3. Meeting interesting people – the type of person who goes to these events and donates their time is a good person. The fact that they are there and you are there makes it pretty likely you will be ‘like-minded’, with similar interests and a lot to talk about.
  4. Winning a competition – I’m quite a competitive person and have always enjoyed working towards winning. This is something that PayPal’s Charity Hack event is very good at: they have different categories and a judging panel, which makes it a real competition. They do have great first prizes in the categories, which is a bonus, BUT I would be very happy just having my name up in lights on the event’s website for bragging rights!
  5. Learning something new – it is not often that you get to mingle with a wide variety of other coders who are willing to share and show you how they work; equally, I get a lot of satisfaction from showing others new techniques and tools as well. Getting a good opportunity to cross-pollinate this knowledge is fantastic.
  6. Getting the T-shirt – getting swag is always a bonus at these events, but usually I have already got all the tools in my tool bag to do my job; I can go out and buy what I need. However, there is one thing that you cannot buy – the event’s T-shirt. Long after the event has finished, and is possibly forgotten about, I often pull on an event T-shirt and it brings back all the memories in an instant. I love it.

What I would do for my event

Have a large collection of problems or goals from charities (Points 1, 2 & 4)

To achieve items 1, 2 and 4 I would try to collect the problems that charities have. Not just a few problems: lots and lots of them, no matter how big or how small.

One of the recurring issues I have heard from event organisers is the difficulty of getting charities to engage with the event. To solve this I would partner with organisations that deal with lots of charities, like JustGiving.com, and ask them to question the charities about ‘what is your pain?’. It might be that they need to send bulk e-mail, want better analytics on their website, or need to raise more awareness of their brand – who knows. Because the only effort asked of the charities is to describe ‘their pain’, it should take them only a few minutes and a few sentences to submit an idea. There is no expectation on their part that they will get anything, and if they do they will be really happy.

Now we have a large pot of problems, we can put them up on a “Problem Board” at the event and just let people pick the ones they would like to do. No direction about technology or implementation will be given (other than competition categories if they want to win a prize), leaving the developers to use their imagination (Point 1 solved). They might even create some working solutions (Point 2 solved), and then you can judge who created solutions that fit the different categories to pick some winners (Point 4 solved).

Learning something new and getting the T-shirt (Points 3, 5 & 6)

If possible, have a social event the night before coding starts: hand out the T-shirts, put up the “Problem Board” and let people meet and discuss how they might do things over a few beers and some food in the evening. This should really help people get to know each other in a relaxed environment, which is where the best thinking happens. It would also be nice to have a way to identify what skills other people have; name badges with skillset and Twitter / LinkedIn details would be useful for this.

I would also add a period of debrief and reflection after the event. It can be a full-on experience, with little sleep and lots of effort and emotion going in. People love to talk about how it went for them afterwards. I know my wife will never properly appreciate what I’ve been doing, so having an opportunity to talk to the people who went through it with me is really great. GiveCamp had a hog roast and a couple of hours of downtime before the presentations, which was perfect and really enjoyable.

In Summary

Every event I’ve been to has its own character and charm. Letting me use my imagination to add value and meet new people makes it very enjoyable for me. I have really enjoyed every event I’ve been to, and you never know, perhaps some of these ideas might be adopted by Craig for this year’s Dev4Good!

C#, Amazon Web Service AWS, SES, Bulk Email »

[4 Dec 2011 | 0 Comments]

I recently did a talk at DevEvening about how I managed to use Amazon’s Simple Email Service (SES) with my (fairly) new website The Gig Market, and I thought I would share a bit more information about it for anyone who is interested.

The Problem

As the website is hosted on Windows Azure there is no hosted mail server that I can use to send out emails. The answer is to use a third-party service to do this for you. I had been using a company for three years to do this for my other websites and had been very happy with them… until I breached the send limit one month and they just stopped sending my emails. This really annoyed me, so I set about finding a better service. What I found was AWS’s SES, which only costs $0.10 per 1,000 emails. The problem was that it is a web service, not an SMTP server, so I couldn’t just change the SMTP server in my app’s settings.

The Research

What I wanted to do was tweak my code to use the SES API. To limit the impact I only wanted to change the:

SmtpClient smtp = new SmtpClient();
smtp.Send(mailMessage);

code to something that could just send the mailMessage object to Amazon SES. The problem is that out of the box the .NET MailMessage object can only be used with the .NET SmtpClient. Doh.

I found a couple of other people with the same idea:

http://www.codeproject.com/KB/IP/smtpclientext.aspx

This post by Allan Eagle was a great start for me, as it showed me how to get at the guts of the MailMessage object by using reflection. Now I only had to send it to SES, right?

http://neildeadman.wordpress.com/2011/02/01/amazon-simple-email-service-example-in-c-sendrawemail/

This post by Neil Deadman was a great article and I thought I had found my solution, BUT there was a big, big problem: I needed to send BCC recipients and set the priority of the email, and it turned out that I could not achieve this using that solution. So what could I do?

The Solution

I ended up hitting the Amazon docs and discovered that I would have to use the raw message format, which meant writing the email out by hand in MIME format. I thought this would be hard, but it turned out to be pretty easy; I knocked up an extension method on the MailMessage class to do it for me.

public static string ToAmazonSesRawFormat(this MailMessage message)
{
    // Requires System.Linq, System.Net.Mail and System.Text.
    var result = new StringBuilder();
    result.AppendLine("MIME-Version: 1.0");
    result.AppendLine(string.Format("From: {0}", message.From));

    // Each recipient header is a comma-separated list of addresses.
    if (message.To.Count > 0)
    {
        result.AppendLine("To: " + string.Join(",", message.To.Select(a => a.ToString()).ToArray()));
    }

    if (message.CC.Count > 0)
    {
        result.AppendLine("Cc: " + string.Join(",", message.CC.Select(a => a.ToString()).ToArray()));
    }

    if (message.Bcc.Count > 0)
    {
        result.AppendLine("Bcc: " + string.Join(",", message.Bcc.Select(a => a.ToString()).ToArray()));
    }

    result.AppendLine("Subject: " + message.Subject);
    result.AppendLine(string.Format("Content-Type: {0}", message.IsBodyHtml ? "text/html;" : "text/plain;"));
    result.AppendLine("Content-Transfer-Encoding: quoted-printable");
    result.AppendLine(string.Format("X-Priority: {0}", (int)message.Priority));

    // A blank line separates the MIME headers from the body.
    result.AppendLine(string.Empty);

    if (message.IsBodyHtml)
    {
        var encoder = new QuotedPrintableEncoder();
        result.AppendLine(encoder.EncodeFromString(message.Body, Encoding.ASCII));
    }
    else
    {
        // Plain-text bodies are assumed to already be 7-bit safe.
        result.AppendLine(message.Body);
    }

    return result.ToString();
}
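One thing to note: QuotedPrintableEncoder is not a framework class, and its source isn’t included in this post. As a rough illustration only, a minimal sketch of what such an encoder needs to do (hex-escape unsafe bytes and insert soft line breaks; it ignores the full spec’s trailing-whitespace rules) might look something like this:

// Hypothetical stand-in for the QuotedPrintableEncoder used above;
// requires System.Text.
public class QuotedPrintableEncoder
{
    public string EncodeFromString(string input, Encoding encoding)
    {
        var result = new StringBuilder();
        int lineLength = 0;

        foreach (byte b in encoding.GetBytes(input))
        {
            // Printable ASCII (except '=') and spaces pass through unchanged;
            // everything else becomes an =XX hex escape.
            bool literal = (b >= 33 && b <= 126 && b != (byte)'=') || b == (byte)' ';
            string token = literal ? ((char)b).ToString() : string.Format("={0:X2}", b);

            // Insert a soft line break to keep encoded lines within 76 characters.
            if (lineLength + token.Length > 75)
            {
                result.Append("=\r\n");
                lineLength = 0;
            }

            result.Append(token);
            lineLength += token.Length;
        }

        return result.ToString();
    }
}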

Now all I had to do was send the message via the SES raw message API:

AmazonSes.SendEmail.Instance.SendRawMessage(mailMessage);
 
The SendRawMessage function is part of the wrapper class I wrote for the API:
 
public void SendRawMessage(MailMessage mailMessage)
{
    using (var memoryStream = new MemoryStream())
    {
        // SES expects the raw MIME message as a stream of bytes.
        var encoding = new UTF8Encoding();
        var byteArray = encoding.GetBytes(mailMessage.ToAmazonSesRawFormat());

        memoryStream.Write(byteArray, 0, byteArray.Length);
        memoryStream.Position = 0;

        var message = new RawMessage(memoryStream);
        var sendRawMessageRequest = new SendRawEmailRequest(message);
        Client.SendRawEmail(sendRawMessageRequest);
    }
}
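For completeness, the Client and Instance members used above aren’t shown in the post. A rough sketch of the surrounding wrapper class, assuming the Amazon.SimpleEmail namespace from the AWS SDK for .NET and with placeholder credentials, might look like this:

using System.Net.Mail;
using Amazon.SimpleEmail;
using Amazon.SimpleEmail.Model;

namespace AmazonSes
{
    public class SendEmail
    {
        // Singleton so callers can write AmazonSes.SendEmail.Instance.SendRawMessage(...).
        private static readonly SendEmail instance = new SendEmail();
        public static SendEmail Instance
        {
            get { return instance; }
        }

        // AmazonSimpleEmailServiceClient is the AWS SDK for .NET client;
        // the credentials here are placeholders, not real configuration.
        private readonly AmazonSimpleEmailServiceClient Client =
            new AmazonSimpleEmailServiceClient("ACCESS_KEY_ID", "SECRET_ACCESS_KEY");

        public void SendRawMessage(MailMessage mailMessage)
        {
            // ... as shown above ...
        }
    }
}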

Git »

[2 Jul 2011 | 0 Comments]

So I have been very busy this year and have not had time to blog anything. Now I find myself at the www.dev4good.net event, needing to show people on my team how to get the tools required to use Git effectively installed and running on their computers.

This post is just a short set of instructions on how to do this and could be of use to others!

When I was trying to do this, the following guide was invaluable:

http://www.paulrohde.com/github-101-on-windows/

Tools to download

http://git-scm.com/download <- install this first

http://code.google.com/p/tortoisegit/ <- install this second :)

»

[25 Feb 2011 | 0 Comments]

I was kindly given the opportunity to give a talk to the DevEvening crowd about my experiences with using PayPal’s Adaptive Payment services.

These Adaptive Payment services offer some very interesting and useful ways to collect and distribute money on the Internet, which is very important if you want to make a living with an online business!

My experience with the API was implementing a Chained Payment system on my Training Course Booker website, to collect and distribute money to the training providers that use it. There are lots of other uses; to find out more please visit PayPal’s developer website x.com and / or have a look at my slide deck below.

[Video: me demoing Training Course Booker at Le Web]

C#, MS CRM, MS CRM4, Dynamics Crm »

[24 Jan 2011 | 0 Comments]

On my current Microsoft Dynamics CRM project we have done a lot of customisation, both creating custom pages and manipulating the existing CRM pages via the OnLoad event. This post describes a method of ensuring the load order of external JavaScript files, with the added benefit of increased performance compared with loading multiple external files the more conventional way.

The existing approach

One of the big problems we faced was making sure that the JavaScript files load in a specific order. This is because we have some common functions in a file that is re-used across multiple pages. That file needs to be loaded before the main page JavaScript file, which might call one of the common functions.

There are a number of good blog posts about this topic, so I won’t go over this ground again. Here are a few articles I’ve found:

http://danielcai.blogspot.com/2010/02/another-talk-about-referencing-external.html
http://www.henrycordes.nl/post/2008/05/27/External-js-file-and-CRM.aspx

What is different with this new approach

The solution I propose here was the brainchild of @njwatkins, who came up with the idea of implementing a generic .NET HTTP handler as part of our custom web pages project. The handler reads in the various external JavaScript files in the correct order, compresses them, and streams them back to the browser as a single JavaScript file.

Unfortunately I cannot post the source code of the handler we use on this project, but a Google around topics like GZip, StreamReaders and generic handlers should be enough to get most developers to a solution.
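To give a flavour, here is a minimal sketch of such a handler. The class name, file list, paths and cache policy are assumptions for illustration, not our production code:

using System;
using System.IO;
using System.IO.Compression;
using System.Web;

public class ContactScriptHandler : IHttpHandler
{
    // Files are emitted in this order, so common.js always loads first.
    private static readonly string[] ScriptFiles = { "common.js", "contact.js" };

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/javascript";

        // Compress the response when the browser advertises gzip support.
        string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? string.Empty;
        if (acceptEncoding.Contains("gzip"))
        {
            context.Response.Filter = new GZipStream(context.Response.Filter, CompressionMode.Compress);
            context.Response.AppendHeader("Content-Encoding", "gzip");
        }

        // Control the cache duration centrally, one of the benefits mentioned below.
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.Now.AddHours(1));

        // Concatenate the files in order and write them out as one script.
        foreach (string file in ScriptFiles)
        {
            string path = context.Server.MapPath("~/ISV/Northwind/ExternalJavaScript/" + file);
            context.Response.Write(File.ReadAllText(path));
            context.Response.Write(";\n");
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}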

So this is the script that goes in the OnLoad of the CRM entity:

function loadJavaScript(file, onComplete) {
    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = file;
    if (onComplete) {
        // onreadystatechange is IE-specific, which is fine for the CRM 4 client.
        script.onreadystatechange = function () {
            if (this.readyState == 'complete' || this.readyState == 'loaded') {
                onComplete();
            }
        };
    }
    document.getElementsByTagName('head')[0].appendChild(script);
}

loadJavaScript('/ISV/Northwind/ExternalJavaScript/contact.ashx', function() { if (EntityFormOnLoad) { EntityFormOnLoad(); } });

The contact.ashx file is the handler that does all the work; it is where you define which JavaScript files to include, and it does the merging and compressing. It means you only need to load one external JavaScript reference, and you can guarantee the order in which the external files are loaded.

Another benefit of this approach is that you can control things like cache duration, which can be a problem when changing and deploying new external files to clients. We achieved a significant saving on JavaScript load time, from 900ms down to 200ms, and we have ideas on how to improve it further, but this is as far as we have got today!

»

[25 Nov 2010 | 0 Comments]

If you follow my blog you will hopefully be well aware of my involvement with the Windows Phone 7 development community in the UK. I was fortunate enough to be invited to the launch party for the phone in London, as well as to present demos of apps I’m developing to London’s Windows Phone User Group.

‘The short’ of this post is: if you want to comply with Microsoft’s styling guidelines, watch this video about some templates they have developed and made available for free download for Windows Phone development. Please read to the end of the article for ‘the long’ of my thoughts!

 

One of the building blocks of Microsoft’s vision for Windows Phone 7 (WP7) is a consistent feel to the applications that are developed and put into Marketplace. This is why they developed Metro, as well as spending a lot of time and energy creating detailed design guidelines for developers to follow. The problem is that the majority of developers are more interested in writing code than reading design documents, and more interested in getting async callbacks to work than making sure the pixel spacing is correct between their labels and textboxes. Unfortunately for techies, spacing and a nice-looking UI are as important to your app as the technical solution (developers are just a cog). To help us coding slaves, Microsoft has kindly developed some Blend templates that implement the UI design guidelines; these can be copied and pasted into your solution for free.

Windows Phone 7, WP7, Microsoft »

[26 Oct 2010 | 1 Comments]

So this is not breaking news, but I was lucky enough to be invited to the Developer Launch party held by Microsoft in central London. This was the real party, for those who are really involved in making the phone happen and want to build apps for it, not the party for those hangers-on who were fobbed off with this party ;-)

I had a great time, met a load of interesting people doing interesting things with Windows Phone 7, and I managed to get a few sound bites on video from some of them, which I have edited together for your viewing enjoyment!

 

[Video: My Windows Phone 7 Developer launch party experience!]

 

Many thanks to @will_coleman and his team for making everything happen and I am really looking forward to the future!

»

[13 Oct 2010 | 0 Comments]

I’ve just discovered something that I really, really don’t like about Microsoft CRM 4: custom workflow activities that are used in a CRM workflow are linked to it forever and ever by version numbers.

I was getting this error message:

[Screenshot: the “custom workflow step is not valid” error]

After some investigation it turned out to be because I had changed the version number of the assembly that contained the custom workflow activities, which meant that I could no longer publish the workflows that use them. When I try, I get the error message above.

The answer is to manually set the version number of the DLL back to what it was when the workflows were originally published, which is just silly.
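One way to reduce the chance of tripping over this (my suggestion, not from the article) is to pin the assembly version in AssemblyInfo.cs and let the file version carry build-to-build changes, since only the assembly version forms part of the assembly’s identity:

using System.Reflection;

// AssemblyVersion forms part of the assembly's identity, so keep it fixed;
// a wildcard such as "1.0.*" generates a new build number on every compile.
[assembly: AssemblyVersion("1.0.0.0")]

// AssemblyFileVersion is not part of the identity and can change freely.
[assembly: AssemblyFileVersion("1.0.2.0")]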

For a much more detailed look at this problem I found this fantastic article by Eric Bewley.

Software Development »

[7 Oct 2010 | 0 Comments]

We have recently upgraded the development environments on my project to match the production environment, which runs on 64-bit servers. There was a big gotcha: the delay-signed DLLs in the dev environment were throwing a “Strong name validation failed” error when they were accessed.

There are well-documented solutions that use the following command to skip strong-name verification for all assemblies:

SN.exe -Vr *,*

However, this did not work on the 64-bit computer, because SN.exe is a 32-bit program and puts the entry in the 32-bit part of the registry (under Wow6432Node). So I moved the registry key this tool creates into the 64-bit part of the registry, and then everything worked:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\StrongName\Verification\*,*]

Hope that helps some people out there ;)