C#, Software Development, Projects, Games »

[28 Aug 2014 | 202 Comments]

I like playing Counter-Strike: Source (CS:S). I like playing against my friends at LAN parties (yes, we still physically gather with our PCs at some location, network them up and play against each other; we call it Lanfix). We then like to know how we did. We used to use Psychostats to generate a visual dashboard of what happened in game, but it was no longer maintained and did not support the log format now coming out of CS:S.

@naiboss put out a request on the Lanfix mailing list to see if someone could tweak the Psychostats code to work with the new log files, so I thought I would have a look.

Instead of tweaking the Psychostats code I wrote a new application. After I had finished writing the app I read NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot Persistence, a good book about NoSQL databases by Martin Fowler. At the end of the book he quickly covers other types of databases, one of them being Event Sourcing databases, and it occurred to me that this is exactly what the CS:S log files were. The log files were an ordered record of the events that had happened in the game. My app essentially replayed these events to build up the state of the application. A fuller description of Event Sourcing can be found here. I didn’t know it at the time, but I had built an app using an Event Sourcing database.

This post is a brief look at the solution I came up with after I decided it would be easier to write an app from scratch rather than try to update the Psychostats codebase. The working version of the final application is up and running if you want to see what it does before reading how it does it. I finish the post with a summary of what I learnt.

Why I didn’t tweak Psychostats

I looked at the Psychostats code. It is written in PHP, which is not my strong point and not something I want to become one. It required a MySQL database, which always cost some money and caused us problems with hosting and upgrades. On top of that, there was a process of uploading the log files, which then had to be interpreted and put into the correct database tables before the PHP pages could run queries to display the stats.

The interpretation engine was full of regex. I forget regex syntax very quickly and find it hard to read. I didn’t relish trying to work out which regex I had to tweak to get the new style logs to load into the database properly. I also did not want to do a lot of re-work in a language that I personally don’t rate.

New Design Constraints

As I was starting from scratch it was a good opportunity to really think about the design. I came up with the following requirements:

  • Must be able to run without a database. That was always a pain to manage in the old system
  • Must be testable with automated unit tests
  • Must be easy to understand and maintain the code when the log files change. This really meant not using regex.
  • Must allow old logs to be interpreted side by side with new logs as and when the log format changes

Step 1 – Look at the raw log file

The most obvious place to start was the raw log file that had to be interpreted. It was interesting to see what I had to work with: every line had a timestamp, and the lines ran in time order, with each ‘CS:S server event’ appended to the log file. Maps loading, players buying weapons, players shooting other players, in-game chat: there were lots of things going on. The goal was to make sense of these things and display them in a nice dashboard.

Step 2 – Interpret the raw log file

I decided the first thing I needed to do was write something that parsed the file and populated an object I could then use in my application. I allowed the data I found in the log files to drive the design of my domain objects.

I started by writing code and unit tests which picked out specific lines from the log file. After I got this working I wrote a ‘Line Processor’ for each line type, responsible for understanding what the data meant once the line had been identified. The processor extracted the data from the line and populated the appropriate domain objects.
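The real app was written in C#, but the line-processor idea can be sketched in a few lines of Python. The sample log line below only approximates the CS:S format, and names like `KillLineProcessor` are my illustrations, not the actual implementation. Note there is no regex, in line with the design constraints:

```python
# Illustrative sketch of the 'Line Processor' idea (the real app was C#).
class Player:
    def __init__(self, name):
        self.name = name
        self.kills = 0

class KillLineProcessor:
    """Identifies and handles lines describing one player killing another."""

    def can_process(self, line):
        return " killed " in line

    def process(self, line, players):
        # Quoted fields alternate with the text between them, so a simple
        # split on '"' picks out the attacker without any regex.
        attacker_part = line.split('"')[1]   # e.g. Alice<2><STEAM_0:1:1><CT>
        name = attacker_part.split("<")[0]   # e.g. Alice
        player = players.setdefault(name, Player(name))
        player.kills += 1

players = {}
processor = KillLineProcessor()
line = 'L 08/28/2014 - 19:30:02: "Alice<2><STEAM_0:1:1><CT>" killed "Bob<3><STEAM_0:1:2><T>" with "ak47"'
if processor.can_process(line):
    processor.process(line, players)
print(players["Alice"].kills)  # 1
```

Each line type gets its own processor, so when the log format changes only the affected processor needs updating.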

Step 3 – Create a UI

The aim of the exercise is to create a nice dashboard for people to view the stats after a session. For this I used Sencha’s ExtJS framework. It has a very rich set of controls and excellent documentation. I delegated the layout to @naiboss and @krofunk. The UI development is what drove the requirements for the data that had to be delivered from the domain objects.

It was an interesting exercise because there were a number of ‘Server Events’ that were missing, so the data had to be derived from what we had. An example of this was the ‘Successful’ bombing count: how do you attribute that to a player? I solved this problem by keeping track of the last person to plant a bomb; when a round finished because of a bombing, I attributed it to the person who last planted the bomb. This required a context to be created for the Line Processors to work in so they could be aware of previous output.
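A minimal Python sketch of that shared-context idea follows; the class names, the `Target_Bombed` reason string and the field names are assumptions for illustration, not the real C# implementation:

```python
# Sketch of a shared context that lets one processor derive data from
# events seen by an earlier processor (names are illustrative).
class MatchContext:
    """Carries state between line processors across the replayed events."""
    def __init__(self):
        self.last_bomb_planter = None
        self.successful_bombings = {}

class BombPlantProcessor:
    def process(self, player_name, context):
        # Remember who planted, so a later round-end can be attributed.
        context.last_bomb_planter = player_name

class RoundEndProcessor:
    def process(self, reason, context):
        # The log never says who 'won the round by bombing', so we derive
        # it from the last planter recorded in the context.
        if reason == "Target_Bombed" and context.last_bomb_planter:
            counts = context.successful_bombings
            planter = context.last_bomb_planter
            counts[planter] = counts.get(planter, 0) + 1

ctx = MatchContext()
BombPlantProcessor().process("Alice", ctx)
RoundEndProcessor().process("Target_Bombed", ctx)
print(ctx.successful_bombings)  # {'Alice': 1}
```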

Step 4 – Create a scoring system

People always like a bit of competition, so @naiboss devised a scoring system whereby people are given points for various things, like ‘a kill’ or ‘a bombing’. The scores are weighted by weapon, and bonus points are given for awards such as ‘most headshots’.

The scoring system is developed against an interface allowing it to be swapped out for other scoring systems if required.
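Something like the following Python sketch shows the shape of that swappable interface; the class names and weapon weights are invented for illustration (the real app was C#):

```python
# Sketch of a scoring system behind an interface, so alternative scoring
# rules can be swapped in (names and weights are illustrative).
from abc import ABC, abstractmethod

class ScoringSystem(ABC):
    @abstractmethod
    def score_kill(self, weapon):
        """Points awarded for a kill with the given weapon."""

class DefaultScoring(ScoringSystem):
    # Illustrative weights only: riskier weapons score more.
    WEAPON_WEIGHTS = {"knife": 5, "awp": 1, "ak47": 2}

    def score_kill(self, weapon):
        return self.WEAPON_WEIGHTS.get(weapon, 1)

def total_kill_score(scorer: ScoringSystem, weapons):
    # The rest of the app only depends on the ScoringSystem interface.
    return sum(scorer.score_kill(w) for w in weapons)

print(total_kill_score(DefaultScoring(), ["knife", "ak47"]))  # 7
```

Because the dashboard code depends only on the interface, a new scoring scheme is just another implementation.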

Step 5 – Mash it up - Steam Integration

As the UI and scoring system were progressing we were thinking of how we could make it easy for people to manage their profiles. The answer we came up with was to leverage their Steam profiles. The log provides the SteamId of the players so it was possible to query Steam for more information. Queries are made to get the logos and player names.
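Steam’s Web API exposes player profile data through its GetPlayerSummaries call. A rough Python sketch of building such a request might look like this; check the current Steam Web API documentation for the exact version and response fields, and note the key is a placeholder:

```python
# Sketch of building a Steam Web API profile request (illustrative;
# the real app was C#). The API key here is a placeholder.
from urllib.parse import urlencode

def player_summaries_url(api_key, steam_ids):
    base = "https://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/"
    query = urlencode({"key": api_key, "steamids": ",".join(steam_ids)})
    return f"{base}?{query}"

# Fetching this URL returns JSON including each player's persona name
# and avatar image URLs, which is what the dashboard needs.
url = player_summaries_url("YOUR_KEY", ["76561197960435530"])
```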

Step 6 – File management – Dropbox integration

We were now thinking of how to allow the log files to be uploaded. As there was no database and the whole system runs on the fly from the log files, I decided to look at integrating with Dropbox. A user could register their Dropbox credentials and then sync their Dropbox with the server to upload their log files.

This then raised another interesting possibility: how to allow sessions to be looked at in isolation or together. By using the ubiquitous analogy of file and folder management, I updated the file parser to process all the log files in a folder and its sub-folders. So when we had a LAN party with CS:S in the morning and afternoon, we could create a ‘LanParty’ folder with AM and PM folders in it. By clicking on the LanParty folder you get the stats for all the logs on the day, but by clicking on either the AM or PM folder you can see what happened in those individual sessions.

This just worked, and I dread to think how hard it would have been to implement with something like the Psychostats database.

Closing Thoughts

It was a really interesting little project and is not quite finished yet. The Dropbox integration, which is essential to allow other people to use the system, does not work properly yet.

I learnt a lot as I implemented design patterns that I read about in the excellent book ‘Dependency Injection in .NET’. I have a good suite of unit tests over the file parser, so I will be able to easily and safely change the parser to handle updated file formats.

Books, Software Development, coding »

[1 Feb 2014 | 198 Comments]

I am currently pulling together various materials I have found useful over the years to create some short reference materials for software developers I am managing. This is just a quick reference list of the books that I have found very useful in shaping my opinions on the practical implementation of software development.

Clean Code – A handbook of agile software craftsmanship

This book gives detailed guidance on how to write software that is easy to read and maintain in the future by following S.O.L.I.D coding principles. It challenges many long held beliefs about how software should be written with well-reasoned arguments.

Refactoring – Improving the design of existing code

This is ‘the’ book to refer to when you want to change the design of existing code whilst not changing its behaviour. By following these methods it is possible to keep the existing functionality whilst reducing the size of the codebase, making the code more maintainable and extensible in the future.

xUnit Test Patterns – Refactoring Test Code

This book provides tried and tested strategies and patterns for structuring automated unit tests. It explains in detail what has to be considered when writing tests to ensure that they do not become more of a burden for a software project than an asset.

Dependency Injection in .Net

This book provides an in-depth and very clear explanation of what Dependency Injection is and how it should be used to realise the benefits it provides. Whilst the excellent examples are in .NET, the content in this book applies equally to any object-oriented programming language.

The Art of Unit Testing

This book provides practical examples of how to write Unit Tests in a maintainable way. This book combined with Dependency Injection in .Net provides excellent working examples of how to structure code in a testable way.

Working Effectively With Legacy Code – Michael Feathers

This book provides tried and tested strategies for maintaining legacy code that does not currently have automated test coverage. It describes the various problems that you will encounter when trying to maintain or change existing code and what has to be considered when doing so.

»

[6 Apr 2013 | 70 Comments]

The Problem

One of the things that I have to do over and over on Dynamics CRM projects is load existing data in. Using the APIs is usually OK for this, but one project had a large number of records, in the region of 30 million, which needed importing, and the APIs just weren’t cutting the mustard. I did come up with a supported approach to load lots of records in quickly, which I’ll detail here.

The headline number for speed was that I increased performance from about 50 records/s to about 300 records/s on an all-in-one environment running in a virtual machine on my laptop (quad-core i7, 8GB RAM, SSD disk). I had left the project before they ran it in anger, so I never found out how fast it really went!

I apologise to all the CRM devs out there who like to just copy and paste code or download a tool to solve their problems, as I don’t have that for you. All I have is my experience of the problem and memories of how I solved it. I will set this out in this post and leave it up to you to implement your own solution.

*Update on 7th April – I wrote this post in response to a tweet by Mike Read (http://twitter.com/xRMMike) asking for ideas on bulk loading records. Mike just let me know his implementation worked at about 450 records/s on a small dev rig. Great work Mike, I would love to know what it does in a production environment!

<shamelessPlug>Of course I am a freelance developer so you could hire me for a few days to do it for you!!! http://www.seethelink.co.uk </shamelessPlug>

The Main Stumbling Block – IIS

There were a number of problems I encountered when dealing with this sized data set but the main one was:

The Dynamics CRM APIs are hosted in IIS/WCF. This causes technical problems because WCF and IIS are set up to stop DoS attacks, so when you start hitting them really hard they just turn themselves off before starting again.

When using the APIs, IIS was also doing all the processing and was not very easy to scale out. I had some limited success creating a web garden, but it just wasn’t fast enough.

I tried all sorts of things: multi-threaded loader apps, async web requests, a number of client machines firing in requests. I just couldn’t get past the bottleneck of IIS.

The Solution

In a (fairly big) nutshell the solution is this:

  • Create a tool to prep the data. I chose to do a WinForms app, but it could be a console app I suppose.
  • Chunk the records you want to insert into batches. From memory I did 1000 records at a time.
  • Create a representation of the data in some kind of text-based format. I used an XML schema for this. If I were to do it again I would probably use JSON or JSV format, as it is less verbose and would allow more data to be represented, thus increasing performance.
  • Serialise your batches of 1000 records into your text-based format.
  • Zip up the text-based format with some kind of compression tool (there are lots of zip libraries out there).
  • Turn the zip into a Base64 text representation.
  • Create an ‘EntityBulkLoadReceiver’ entity (or something like that).
  • Create the biggest text field you can on the ‘EntityBulkLoadReceiver’ entity. I think I called this field ‘Payload’.
  • Create other text fields on it to record the various metrics you are interested in.
  • Create an OnCreate plugin for the ‘EntityBulkLoadReceiver’ entity.
  • In this plugin, take the text out of the Payload field, decode it from Base64 and uncompress it. Loop through each of the records in the batch, turn them into the appropriate Dynamics CRM Entity and insert/update/delete them using the IOrganizationService in the plugin context.
  • Clear out the ‘Payload’ field on the TargetEntity, otherwise you will bloat your database.
  • Put any metrics you are interested in recording for that batch into the extra fields you created on the ‘EntityBulkLoadReceiver’ entity. (I think I stored exceptions, successful creation count and failure count on mine.)
  • Register the plugin as ‘Async’ (this is the really, really important bit).
  • Loop through every batch of 1000 records you have, putting your compressed, Base64 text representation of them into the ‘Payload’ field of a new ‘EntityBulkLoadReceiver’ entity.
  • Fire each new ‘EntityBulkLoadReceiver’ entity, with its payload of 1000 records, at the standard Dynamics CRM API.
  • Watch the Async service consume all the processing power and RAM on your server.
  • Watch hundreds of records per second get created in the database.
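The payload round trip at the heart of these steps (serialise a batch, compress, Base64-encode, then reverse it on the plugin side) can be sketched in a few lines. This Python sketch is only illustrative of the mechanism; the real system used C# and an XML schema:

```python
# Illustrative round trip of the 'Payload' field contents: serialise,
# compress and Base64-encode on the client; decode, uncompress and
# deserialise in the plugin. The real system was C# / Dynamics CRM.
import base64
import gzip
import json

def build_payload(records):
    raw = json.dumps(records).encode("utf-8")   # text-based representation
    return base64.b64encode(gzip.compress(raw)).decode("ascii")

def read_payload(payload):
    raw = gzip.decompress(base64.b64decode(payload))
    return json.loads(raw)

batch = [{"name": f"account {i}"} for i in range(1000)]   # one batch of 1000
payload = build_payload(batch)              # goes into the 'Payload' field
assert read_payload(payload) == batch       # plugin side recovers the batch
print(len(payload) < len(json.dumps(batch)))  # True: compressed is smaller
```

The compression is what lets 1000 records travel as a single, modest web request instead of 1000 separate ones.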

Why this works

  • By batching and compressing your data you can deliver 1000 records per request, rather than making 1000 web requests. This gets past IIS/WCF trying to be helpful and stopping you because it thinks it is a DoS attack.
  • Using the IOrganizationService inside the OnCreate plugin context means you are already inside the plugin pipeline. I’m not 100% sure technically how it works, but it appears that the organisation service turns the new inserts into SQL calls rather than calling the web service again. This makes it very fast, probably just SQL statements, to insert each payload.
  • Putting the plugin on the Async service takes the load off the IIS thread. The Async service is already configured to use all the processors on a machine, and you can even scale it out and add a few extra Async servers if you have a big one-off load.

Summary

I hope this is useful to those of you who face the same challenges as I did. The recently released Dynamics SDK does provide a new message to batch records together to improve bulk load performance which might be worth looking at. Unfortunately I haven’t had the time to do that yet.

If you have any questions please contact me through twitter: http://twitter.com/davehawes

programming, coding »

[5 Feb 2013 | 216 Comments]

One of the things that I have found when doing work at home recently is that I don’t get into a ‘flow’, which means I don’t seem to get as much work done as I think I should. I put the main reason for this down to my work area not being particularly conducive to coding: I now have 3 kids, there is a lot of their stuff around, and they are always interrupting me. So with this in mind, about 6 months ago I started to design how I would convert my shed (which is pretty big, 4.5m x 6.8m) into a place where I could get into a good ‘flow’ for work.

I’ve put a little photo gallery of the work at the bottom of the blog post.

The Goal

To create an area where it is possible to create excellent software

Requirements

Temperature

When I ran my start-up a few years ago from home, I did it out of a room in my house that had been converted from a garage by the previous owners. It was converted badly, with no central heating. I found it was hard to regulate the temperature: it got really hot in the summer and very cold in the winter. This was a real issue for productivity and was one of the things identified as a bad experience when I recently caught up with some of the guys involved.

Lighting

As part of the overall environment for writing code, I was very interested in finding out if there was some kind of lighting configuration that is good for coding.

Space versatility

I am not 100% sure what this space is going to be used for. Perhaps me just writing code, perhaps having a team of people in there designing and making software, perhaps holding some training courses; I just don’t know. With this in mind I want to keep all options open and make it easy to re-configure the space accordingly.

Others

  • Collaboration
  • Colouring
  • Connectivity

Limitations

The shed has no utilities attached to it - mains water, gas or electricity.

Design Decisions

Utilities

  • Connect electricity. This was the minimum utility provision I needed to enable the implementation of the other requirements I had.

Temperature

  • Insulation – put 50mm Celotex insulation on all the walls and the ceiling of the shed. Insulation is the most effective part of temperature control.
  • Under-floor heating – this is planned to be the primary heating source. It is quiet and easy to control, it works well even when it is very cold outside and, most importantly, it is discreet, which supports the space versatility requirement.
  • Air-conditioning unit with inverter – this can keep the room cool in the hot summer, and with the inverter it can also produce hot air when it is cold, although when it gets really cold it is not very effective.

Lighting

  • Indirect light for writing code. During my research on a good coding environment, I found that to reduce eye strain you must not have things that draw your eye away from the screen, such as lights in the ceiling.
  • No windows – this eliminated a source of light that might cause eye strain, improves the insulation and security properties of the building, and improves the space versatility of the room.
  • Daylight bulbs – lighting that mimics daylight helps with concentration and energy levels. I will be using daylight bulbs for my indirect light.
  • Standard bulbs – while the indirect light is good for coding, it might not be good for all possible uses. So I have also installed track lighting, into which I can clip up to 16 GU10 bulbs anywhere in the room, maximising the space versatility.

Space versatility

  • As already covered, there are no visible radiators on the walls, nothing in the floor and no windows, so there are no limitations created by the walls or floor.
  • Power and network points are located on the cross beams of the ceiling. This makes it easy to plug in anywhere in the room.

Collaboration

  • I love whiteboards and find them the best way to knock around ideas. With this in mind I have dedicated one long wall to be painted with dry-wipe paint, which will make it a 6.8m x 2m whiteboard!

Colouring

  • I’ve gone for a green colour on the walls, as this is supposed to help you relax and aid concentration.
  • The floor is a white wood floor; as there is only artificial light, I wanted to keep the room as light as possible.
  • The ceiling is white with a special paint which is supposed to help reflect light.

Connectivity

  • There is CAT6 cable throughout the room for people to connect to.
  • To connect to the house broadband I will start with some Ethernet Powerline adapters, but I might have to get a proper external CAT6 cable installed if this is not fast or reliable enough. External direct-burial CAT6 is quite expensive, so I’m just postponing that expense at the moment.

Does it work?

I’ve been in there a few times now and it is a very nice space to work. The whiteboard wall is awesome and is my favourite feature. I’ve ordered some desks that should arrive in a couple of weeks, so I’m working off some pasting tables at the moment! Everything else seems to just work as I hoped: the temperature is nice, the lighting is good and I always feel I can concentrate on work. So far so good :).

Credits

I must credit Rich Bartlett for the high quality work he has put into making this happen. His attention to detail and building skills have made my vision come true. If you need a good all-round tradesman you can contact him via twitter: @richiethebass (https://twitter.com/richiethebass)

Photos

There are various photos showing what the shed looked like before work started and the renovation work Rich Bartlett (the guy in the nice woolly hat, as it was about –5 outside!) did along the way.

 

»

[13 Nov 2012 | 335 Comments]

Cristiano Betta has written a post about a subject I have also given a lot of thought about over the years.

http://cristianobetta.com/blog/2012/11/12/hacks-products-a-discussion-on-responsibility/

Who should be responsible for products after a hack event? He has asked for a discussion on the subject, so here are my thoughts. I basically come to the conclusion that the event organisers should set the expectation that no viable product will be created, but that these events are a great opportunity to learn how technology could be used to achieve a certain goal. Here are my reasons.

People don't understand that software is difficult and expensive

Anyone who has tried to make a software product knows it is very hard and expensive. It is not just having software that is important; you need support, sales, marketing, testing, training and bug fixing. I would say that having the software is probably only about 30% of a software business. People should understand this before getting excited when they see a nice-looking webpage.

Strong products can come out of these events; Charity Hack has seen a couple of successes (https://www.givey.com/ and http://www.positivebid.com/ – also see this article about the Positive Bid story: http://www.women2.com/starting-a-startup-at-charity-hack-in-london/). What these 2 services have in common is the substantial amount of money, time and other resources invested after the event to make them happen. Both of these companies took about a year to release something into the wild after the event.

One approach to this problem which I really liked was at last year’s http://www.givecamp.org.uk/, where one of the projects was to create an ongoing open source product: http://www.givecrm.org.uk/. From the start it was designed as an open source project which will be worked on and added to at future hack events; essentially running it like a proper development project.

Configuration over development

Another thing I have observed is the massive value charities can get from a techie setting up something like Eventbrite for them, configuring an existing product, building a nicer website or integrating a YouTube channel to provide more interesting content. We techies forget how much we know that others don’t. We can provide value by just doing the things we think are obvious and easy.

Hack Event = Idea melting pot

The other event that I’ve attended is www.dev4good.net. This is similar to Charity Hack and is a very free-flowing event where you can drop in and out of the various teams to help push along ideas and add value where you can. For me this is what hack events are all about: trying stuff out. Techies can try out new technologies under the guidance of others who know what they are doing, charities can get a feel for what can be technically achieved, and there is no expectation of ongoing support after the event for anything developed.

Summary

Event organisers should make it absolutely clear that there is little to no chance that a viable product will come out of 48 hours of work. Ideas can be tested and validated or dismissed, and people can learn new tech, but if anyone is expecting to bring something to market they need to have a pile of cash and a business plan in place to make it happen.

»

[23 Oct 2012 | 257 Comments]

I have been surprised by the strength of my repulsion on learning that successful companies I regularly spend money with are not paying any UK Corporation Tax on profits made in the UK. Creative accounting practices move the UK profits to their offices in countries with lower tax rates, which is legal but, in my opinion, wrong and short-sighted. So I thought I would jot down a few of my thoughts on why I think this.

Why are they in business in the UK

The reason these companies have services and products in the UK is because they can make money here at a relatively low risk. We have a strong and stable country, good infrastructure, an advanced economy with people and other companies looking to spend money on stuff.

This has not happened by accident. The country has spent an enormous amount of money on creating this economic eco-system over a long period of time. I’m not saying it has all been spent well, but these vast projects are things which have been deemed to be for the good of the country as a whole. It would not make sense for any private company to spend money on these projects, as they would not make money. So how do we get funding for these projects ‘for the greater good’? Taxes: the money goes to the government of the day to spend on the things that the country voted them into power for.

My assertion is that over the decades UK governments have done a good enough job of developing an environment for businesses to make money, and in return businesses should pay a reasonable amount of tax back to the government so it can maintain the existing system and improve it where it can. If this does not happen, things will start to go wrong and everyone loses.

Why it is short sighted not to contribute

We need something bigger than international corporations to keep them in check. In this era of Globalisation and technology more power and influence is being concentrated in a relatively small number of global companies and we need systems in place to ensure that some kind of balance is kept in the market to keep a certain level of honesty.

In the UK we have things like the Competition Commission to investigate the UK markets and make sure there are enough players to keep each other honest. That is paid for through taxes and is very important in ensuring that it is possible for new companies to set up businesses in the UK. I cannot see companies like Amazon and Starbucks paying for this off their own backs (turkeys voting for Christmas springs to mind), but they must recognise that in the long run this type of service has made the UK an attractive place to do business and set up in, which they have ultimately benefited from.

We also have other publicly funded bodies which keep this delicate economic eco-system in balance. Take consumer rights, for example: they allow individuals to feel comfortable that they can spend money with any business in the UK and not get ripped off without a robust way to get recourse. There was a shocking example just yesterday of Amazon deleting someone's account, including all the Kindle books they had purchased, without any reasonable explanation. The lady in question is based in Norway, so I don’t know what options she has available, but in the UK we have consumer rights and it would be relatively easy for the consumer to have their voice heard. Perhaps claiming her money back through the Small Claims Court or raising it with the Office of Fair Trading (both publicly funded) would be the first steps.

Having publicly funded bodies that try to maintain the right balance and create an economic eco-system where business can be done is extremely important, and it needs to be funded. It is very short-sighted of any business not to recognise the long-term benefits of this. This is why I feel it is very important for companies to pay a reasonable amount of tax in the countries where they operate and make money, to enable the maintenance and development of the very environment they make money in.

What I have done about it

I have done what I practically can and voted with my wallet, trying not to use parasite companies that take profits from the host economy but do not contribute to it. Now I get my coffee at Costa if possible and purchase Dyson vacuum cleaners and fans. I have cancelled my pre-ordered Kindle Fire HD and set my Amazon Prime account not to renew, and I don’t use Amazon, Starbucks, eBay or PayPal if possible. I doubt it will change the world, and I do find myself using these services from time to time when there is no other sensible option. It would be useful to know which companies did pay a reasonable amount of UK Corporation Tax, to make my decisions easier!

Oh, I’ve also written this blog post! So thank you for reading :)

Books, Employment, Software Development »

[12 Oct 2012 | 307 Comments]

 

Why I’m writing this post

Over the last few years I have taken on a mentor/coaching type role from time to time for some up-and-coming software developers, and I have found myself repeating my thoughts on what they should focus their energies on to progress their careers each time. People who know me know that I believe in DRY (Don’t Repeat Yourself), so I have decided to write this advice down for future reference; perhaps some people I haven’t met yet will find it useful!

Do not rely on other people to look after you, you need to look after number 1

I think my most important bit of advice is for software developers to look after themselves. It can be very easy for enthusiastic developers to be worn down and lose any love for software development through the daily grind of work, especially when the effort (often extraordinary effort) they put in is not recognised or fully appreciated. We primarily owe a duty of care to ourselves and our careers. The harsh reality is you cannot expect anyone else to do this for you. How do you do this?

Get a balanced approach to work and life, it’s ok to say ‘No’

The thing that I have found wears down enthusiastic developers more than anything is when they are too accommodating and put themselves under pressure by always saying yes (or ‘I’ll try’, which is always understood to be a ‘yes’). I used to do this, and one of the pivotal moments in my life was when I discovered the power of the word ‘no’.

As unbelievable as it might sound to some developers, it is OK to say ‘no’ to a development request on a project. You need to be reasonable and allow people to turn their request into a ‘yes’. I like to have an easy-to-understand process in place for people to give me requests. My personal choice is usually something based on a Scrum-type approach: a simple backlog of items which people can add tasks to with a priority. I take a couple of weeks’ worth of top-priority tasks at a time, work on them, deliver those new features, then rinse and repeat.

Except for the obvious P1 issues, if people understand the process they seem happy to follow it, allowing you to say ‘No, I can’t do that today, but add it to the backlog and make the case that it is more important than the other items and you can have it in x weeks’. It’s amazing how many ‘urgent’ issues dissolve into nothing after a few days.

For a good read about balance and software development, see this post by Nathan Gloyn.

Invest in your skills and set yearly goals

I like the saying ‘You cannot score without a goal’. The idea here is to spend time objectively and strategically thinking about what would be good for your career, and to define some tasks to complete that will help you get there. Without doing this it is easy to end up in a skills drift, suddenly realising 5 years have passed and wondering ‘How did I end up here?’.

Setting yearly goals was the bane of my life when I had to do it in a formal structure at a consultancy, but as a freelancer with no corporation to hide behind, all I have to offer people to get work are my skills. I have to be, and be seen to be, at the top of my game in my field. I do this by taking some control of my own destiny and improving or gaining relevant skills by setting goals. DOING THIS IS REALLY IMPORTANT.

The practical application of this is to pick fun things to do: ‘I’ll learn a new programming language this year’, ‘I’ll complete a reading list of these <insert list here /> titles’, ‘I’ll attend x number of user group sessions’ (even better, give some presentations), ‘I’ll answer 10 Stack Overflow questions’, ‘I’ll write 6 blog posts’, ‘I’ll write a phone app with a Cloud backend’. These are the sorts of things I task myself with; they are cheap, interesting and packed full of learning value. These tasks should also help keep you interested in software development in general.

If you can get a budget to take a course and get a bit of paper with the word ‘Certified’ on it then go ahead, but be aware that they are often not seen as a good indicator of your skills. See this post from Martin Fowler about the Certification Competence Correlation.

Update your CV every year

This follows on from the previous point: make sure you update your CV every year. I found that when I had nothing to add to it after a year, it was obvious my skills were stagnating and I was entering a Skills Drift. When this happens, try to change this by talking to your manager about more training or something new to do. This might not work, though. The saying I read in Jeff Atwood’s book Effective Programming: More Than Writing Code, ‘Try to change your company, otherwise change your company’, applies here.

Write Clean Code

Any code you write will be read many, many, many times in its lifetime. So please try and make your code easy to read and understand. This does not mean just adding comments, as these are generally a sign that the code is bad and not readable on its own (Martin Fowler calls comments ‘deodorant for bad (smelly) code’).

Clean Code should be S.O.L.I.D so it is highly cohesive, loosely coupled, well named and easy to test (with tests please), making it easier and safer to change sometime in the future.

My favourite resource and inspiration for explaining clean code is ‘Uncle Bob’. His style is quite zany but the message is very important and based on decades of experience in writing code. http://www.cleancoders.com/

Software development should be seen as a craft, and there is a growing movement to make this a more mainstream idea and improve the quality of software development. http://manifesto.softwarecraftsmanship.org/ has a lot of interesting information on the subject.

Admit when you don’t know

One thing that screams ‘inexperienced programmer’ is someone who never says ‘I don’t know’, or worse, does things and makes statements based on obviously flawed assumptions of how things work, seeing it as a failure to concede they were wrong. I recognise a good experienced developer, or someone who is open to new or better ways of doing things, as someone who is quick to identify the gaps in their knowledge and question what they think they know. When they are not sure (lacking the evidence or experience to back up their argument) and are challenged, they will say ‘I’m not sure, but I’ll find out’.

If I know you only have a few years’ experience, I have a rough idea of how much you probably know and expect to hear ‘How do you do that?’, ‘Is there a better way to do this?’ or ‘I don’t know how to do that’. While I expect this, please don’t make me repeat the same advice or explanations over and over; it makes me feel you don’t value what I say. Make sure you learn from the things that people tell you.

Software development is like anything you learn: it takes many, many years of practice and learning to become an expert in it. Research puts a time of about 10 years on becoming an expert in anything. I’ve been doing this for over 10 years and am still learning and improving my skills every year, so perhaps I’m a slow learner! There are no short cuts: put in the hours, get experience, be open to learning and you will get better.

Work smarter not harder

This follows on from ‘Admit when you don’t know’. If you find a task repetitive, long winded or error prone, then it is probably worth the time to investigate whether there is a more effective way to solve the problem. You might find that the answer is a revision to how you are currently solving the problem, or perhaps there is a whole other technology or process that you might be able to use to make life easier.

Code generators, web frameworks, off the shelf products, open source projects, different languages and new software tools are all examples of things that, with some time invested in them, can pay back many times over.

It is hard to investigate and evaluate all these things on your own. To get an introduction it is often better to hear from others who can summarise their field of expertise; if you find it interesting or relevant you can then spend more time learning it in detail. See if you can go to some industry conferences, go to user groups, follow experts in your field on Twitter, and ask how other people are doing things on Stack Overflow: this is all part of your continuing self development. If you are lucky you might be working with someone who has done the research, but you should still seek out new and improved ways of doing things. Once you gain new knowledge, see if you can share it with your peers in the industry.

Reading List

One of my favourite sources of quality information is books. Blogs and forums are useful for acute problems you need solved immediately, but I find they usually only treat the symptom of my lack of knowledge. Online media is great, but there is a lot of shit on the Internet and sometimes it is hard to know when you have stepped in it.

To treat the cause of your lack of knowledge you need to invest time in learning the missing bits of the jigsaw from a quality source that has been designed to tell the whole story. For me this is where books come in. They are significant pieces of work, contributed to by many experts in the field, with an army of people editing, reviewing and proof reading to make sure they are of a high standard. Books can describe the history and reasoning behind the designs and approaches of what you want to learn about. Below are some books which I would recommend to anyone wanting to do software development (there are more, but this is what is on my Kindle at the time of writing!).

Extra curricular activities

There is one thing that makes someone stand out as a developer who enjoys what they do and wants to learn, and that is what they do outside of the 9 to 5 job. So I encourage every developer to do something; this ties back to the earlier section about investing in your skills.

Create something

To really learn you need to do, so create something in your spare time to try out new ideas or technologies you’ve heard of. It’s the best way to get a proper understanding of what things can do. Good types of projects are websites or mobile phone apps as they are easy to show people.

Sometimes I have found that my own projects are what keep me interested in programming when the work that pays the bills is dull and uninspiring.

Get involved in an Open Source Project or two

If you want to get a springboard into learning a technology then get involved in an appropriate open source project. This is easier than ever with services like GitHub and CodePlex. The other great thing with doing this is that a lot of tool vendors, like JetBrains, will allow you to use their top of the range products for free on these projects.

This allows you to see how other people write code, structure their ideas and solve technical problems, as well as getting feedback on how you write code. Don’t forget that there is no ‘correct’ approach, and these projects will take an opinionated approach on how to tackle a problem. You might agree or disagree with the different approaches, but make sure you keep your mind open to learning about the strengths and weaknesses of the different solutions.

Blog about things you have learnt

Get a blog going as somewhere to record things you have learnt, done or thought about. You might not think that you have anything of interest to say on a blog; if that is true then nobody will read it, but more than likely someone out there will appreciate the nuggets of information you throw out. I’ve even found the answer to my own problem via Google on my own blog, years after I had originally solved it.

The thing I find about writing posts is that it forces you to really think about the subject, which helps me understand it more. It also shows that you know stuff and is a good indication you are someone who likes to share and help others.

Attend community events

There are a lot of user groups and hack days going on all round the country. It is easy and cheap to attend these and you get to meet lots of like minded people. You never know who you might meet and learn from. The developers I’ve met at these events are of all abilities and backgrounds, from corporate IT workers to start-ups to people working on some of the biggest brand name websites in the UK.

In Summary

Being a developer can be one of the most rewarding and fun professions to be in. The industry is evolving so quickly it can sometimes feel a bit overwhelming but exciting at the same time.

There is a significant risk that you miss out on this excitement because the daily grind of uninspiring projects using outdated technologies wears you down. You can easily become a battery hen 9 to 5 developer who loses the love they once had for the work. I have met a lot of these guys and it is a crying shame for our industry. As the country’s economy evolves I can only see the demand for quality software developers increasing.

So my parting words are these:

Invest in yourself and give a little back to the industry and you will have a good chance of riding this growing technology wave to having a fulfilling career in software development.

Employment, Software Developement, Contracting »

[5 Oct 2012 | 29 Comments]

Why I think I can give some advice

Four years ago I created my own start-up, which did the very opposite of making me a millionaire; my plan B at that point was to become an I.T. contractor, which I did 3 years ago. I’m not the most experienced contractor, but I have learnt a lot of lessons along the way. A couple of years ago a friend asked for some advice about becoming a contractor, and another friend asked me again this morning. So I’ve dug out the email I sent a couple of years ago and have put it here.

The Pros

  • The money is good
  • You can pick and choose contracts which includes locations (I compare this to when I worked at a consultancy who could send you anywhere round the country for months on end)
  • Every contract you meet new people, learn new things (business stuff as well as technical) and have the possibility of taking your career in a different direction
  • You can stay out of corporate politics
  • As a temporary member of staff you generally don’t get given long term responsibility (although some people like having this responsibility so it might be a Con)

The Cons

  • The work is not guaranteed and the future is unknown
  • You generally only have one week’s notice (I was given that once when the client suddenly realised they had no money left, so it does happen)
  • You can be treated as a bit of an outsider sometimes.
  • No structured career path.
  • No perks and you never have that sense of ‘belonging’ which you can get as an employee in a company.

My Advice

In a nutshell: if you know a particular product / technology very well, you can convince people of this, my Pro list sounds good and the Con list doesn’t sound too bad, then you probably have a good chance of enjoying being a contractor. So here is the rest of my advice:

Setting yourself up

  1. You have to believe enough in your abilities to quit a safe job for an unknown future.
  2. Set-up a Limited company
  3. Get an accountant to do the books / returns. This probably costs in the region of £800 - £1000 a year but it is money well spent
  4. Get VAT registered on the Flat Rate Scheme. You charge your clients 20% and pay HMRC 15%, but you cannot claim VAT back on any of your purchases, so if you buy loads of stuff you might lose out here; ask the accountant you got in step 3!
  5. Get a good number of ongoing clients. There is a tax rule (IR35) which means that if you effectively work solely for one company the tax situation is different. If you build a customer base of a dozen or so companies you regularly do business with, this can help the situation. Speak to your accountant for professional advice on the situation (make them earn their £800 - £1000).
  6. Get an online accounts package. There are a good number out there, they can make it easy to raise and track invoices plus integrate with HMRC’s returns system, plus it’s online so it is available wherever you are. (£15 - £20 / pm ish) http://www.kashflow.com/ or http://www.freeagent.com are good examples.
  7. Build up a war chest. You need to save enough money to see you through any lean times. If you can save enough to keep you going for a year that is good. It also enables you to bargain harder for rates if you know you are not desperate for a new contract to pay the bills.
  8. Stay functional. When times get lean, projects still need the people who do the work and will cut out the management, making the doers manage the project as well. Don’t get dragged solely into management and lose those functional skills.
  9. Be wary of recruitment agents. Take anything they say with a pinch of salt because they don’t work for you. ‘If you don’t pay for the product, YOU ARE THE PRODUCT’. At the end of the day the client companies hiring pay the agent’s mortgage. This means the agent will do what it takes to keep the client company happy, not you. There are some excellent agents out there; finding them and building a good relationship with them is the key.

You are the product, so sell and market yourself

Now that you realise you are a product you need to package and market yourself properly:

  1. Get a good CV. I used this book http://www.careerconsultants.co.uk/career/books-perfectcv.asp, spent 2 days re-writing my CV and spend hours doing re-work before I apply for new roles. CV writing is time very well spent.
  2. Be an excellent developer, but make sure you are a specialist in a popular product. This increases the chances of getting a contract and increases the rate you can get. As a Microsoft developer I’ve chosen to be a Dynamics Crm specialist; Sharepoint and BizTalk are in demand at the moment as well.
  3. Blog / write articles, attend user groups, answer Stack Overflow questions, give back to the technical community you live in. This builds reputation and sets you apart from the other candidates who are going for the same contract. When you need a new contract these people you have helped by sharing your knowledge could help you back.
  4. Make a good name for yourself, be excellent at what you do and make sure the right people know this. If you are a contractor and you speak at events, make sure the audience know this as they might want to have you on their project for a few weeks / months. It is all about personal brand and inbound marketing.
  5. Networking and Inbound marketing is important. LinkedIn is excellent, make sure you keep your skills and availability updated. Updating your status to looking for work is sometimes all it takes to get a new contract.

 

In Summary, You need to get a hard nose

It might sound obvious but you are now on your own.

There is no sick leave, holiday pay or pension. If you are not working you are not earning. You have to pay the employee’s and employer’s tax / NI on what you do earn; you have to pay for accountants, your own training, your own hardware and software licenses; the list goes on and on. This is why contractors’ rates look good to people who don’t understand the cost of being a contractor. Everything you do needs to be earning money. If you are asked to work a bit later on a project, make sure you get paid for it somehow. If your contract hasn’t been extended with a month to go, find a new contract, as ‘if you are not working you are not earning’. Been asked to be ‘exclusive’ by an agent? Tell them to get stuffed unless they are willing to pay you if you are out of contract and they haven’t found anything for you. Been told that the contract is yours but haven’t signed anything? Keep looking until the ink is dry on the bottom of the paper.

I found my hard nose by losing lots of money on my start-up, and I feel that is a big reason why I have done ok so far as a contractor. I ignored my own advice on one contract and put in a lot of effort over and above what I was contracted to do; the thanks I got was my one week’s notice just before Christmas when the project ran out of money. It was a sharp reminder of why I should stick to my own rules!

»

[10 Aug 2012 | 0 Comments]

It is a common requirement to be able to automatically generate documents in an application. This post shows how I’ve used standard .Net client reporting libraries to allow me to create Pdf and Word documents in my applications.

The code for this post can be found on my GitHub profile: https://github.com/davehawes/RDLC-Pdf-Generator

High Level Design

The idea is not very complicated. You need to design the data that goes into the report, design the layout of the report (placing the data in the appropriate places), and then merge the two together to create the document.

Designing The Data

By using an ADO.NET DataSet it is easy to define the data you want to put into a report. It supports strong types and is easy to design in Visual Studio. This can be seen as a Data Transfer Object (DTO) for getting data from your application into the report.

image

Designing The Report

Microsoft .Net has a file type called an RDLC (Report Definition Language, Client-side). This is the same file definition that is used in Microsoft Sql Server Reporting Services but is designed to run on a client machine instead of the server. You can create a new one of these files by clicking File –> Add New Item:

image

Once you have the empty report you can link it to the DataSet you have designed:

image

*Important* – Make sure you give the DataSet the same name as the DataTable you want it to use inside the DataSet. In this example it is ‘MasterData’. Failure to do this will mean the code that merges the data with the report cannot link the two things up and will throw an error.

image

If you cannot see the ‘Report Data’ window it is because Microsoft like you to work hard for your money and have cleverly hidden it unless you know the special combination of things to click. First select the report on the workspace, then click ‘View –> Report Data’, which is right at the bottom of the menu options.

image

This then lists all the DataSets in your project and you can select the one that you want to use.

You can then drag and drop the columns onto the report to make it look pretty:

image

 

Merging the two together

So we have done all the designing and now we need to create the Pdf. The first step is to get an instance of your DataSet full of records to be put in the report.

private static Reports.Sample1.Dataset CreateSample1Dataset()
{
     var dataset = new Reports.Sample1.Dataset();
     var masterDataRow = dataset.MasterData.NewMasterDataRow();

     masterDataRow.Id = 1;
     masterDataRow.Name = "Master Data Row 1";
     masterDataRow.Description = "Description about data row 1";

     dataset.MasterData.AddMasterDataRow(masterDataRow);
     return dataset;
}
I have then created a Facade class to hold the data and also the file path to the rdlc file. Create one of these objects and pass in the populated dataset and the file path to the rdlc file you want to use:
var reportDefinition = new ReportFromFileDefinitionFacade(CreateSample1Dataset(), "Reports\\Sample1\\Template.rdlc");
(If you are using my project notice that I’ve set the rdlc’s Build Action to ‘Content’ and the Copy to Output Directory to ‘Copy if newer’. You can put in the file path to where ever you have deployed your rdlc files.)
Once you have this Facade class populated you can just pass it into my generator class to produce a pdf:
byte[] pdfFile = Reports.Generator.CreatePdf(reportDefinition);

That’s it. You can then do what you want with the byte[].
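For anyone curious what the generator is doing under the bonnet, it is conceptually a thin wrapper around the LocalReport class from the .Net client reporting libraries. Here is a rough sketch of that idea; the class name and shape below are my illustration, not necessarily how the GitHub project is actually structured:

```
using System.Data;
using Microsoft.Reporting.WinForms;

public static class SketchGenerator
{
    // Sketch: render an rdlc file plus a populated DataSet to a pdf byte[].
    public static byte[] CreatePdf(string rdlcPath, DataSet dataset)
    {
        using (var report = new LocalReport())
        {
            report.ReportPath = rdlcPath;

            // The data source name must match the DataSet name inside the rdlc,
            // which is why that name must equal the DataTable name (e.g. 'MasterData').
            foreach (DataTable table in dataset.Tables)
            {
                report.DataSources.Add(new ReportDataSource(table.TableName, table));
            }

            // "PDF" is one of the built-in render formats.
            return report.Render("PDF");
        }
    }
}
```

The report viewer libraries also support a Word render format (‘Word’, or ‘WORDOPENXML’ in later versions), which is how the same approach produces Word documents.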

Master Detail Reports (using sub reports)

A common requirement is to have a sub report in your main report. This can be achieved as well but you have to be careful with your naming conventions.

Design the Data

Create another DataTable in the DataSet to hold the data for the sub report. Notice that I have not created a relationship between the two DataTables, as this is done in the rdlc file later on:

image

Next create a new rdlc file that will be your sub report:

image

Notice that I’ve named this Template_Detail, this is the same name I gave to the DataTable in the dataset.

In the main report drop a subreport control onto the layout and set the sub report properties:

image

Be consistent with your naming here; I’ve used ‘Template_Detail’ in both places in this example. In the Parameters section, add a new parameter to create the relationship between the master and detail DataTables. Make this the same name as the foreign key field in the Template_Detail DataTable, in this example ‘MasterDataId’:

image

You then need to add this parameter to the sub report. Make sure the name is the same and the type is the same as it is defined in the DataTable:

image

That is it. Now all you need to do is populate the DataSet as before, but this time with detail records, and pass it in with the path to the master report. You have to make sure that the sub reports are deployed in the same folder as the master report so that they get resolved correctly.
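To make that concrete, here is a sketch of populating the DataSet with a detail row linked to its master. The typed-row method and column names below are assumptions based on the MasterData example earlier (the code generator produces `New…Row`/`Add…Row` members per DataTable); check the names against your own generated DataSet:

```
var dataset = new Reports.MasterDetailSample.Dataset();

var masterRow = dataset.MasterData.NewMasterDataRow();
masterRow.Id = 1;
masterRow.Name = "Master Data Row 1";
masterRow.Description = "Description about data row 1";
dataset.MasterData.AddMasterDataRow(masterRow);

// The detail row is linked by the MasterDataId foreign key column,
// which the report parameter of the same name matches up at render time.
var detailRow = dataset.Template_Detail.NewTemplate_DetailRow();
detailRow.MasterDataId = masterRow.Id;
detailRow.Name = "Detail Row 1";
dataset.Template_Detail.AddTemplate_DetailRow(detailRow);

// Same facade and generator usage as the simple example.
var reportDefinition = new ReportFromFileDefinitionFacade(
    dataset, "Reports\\MasterDetailSample\\Template.rdlc");
byte[] pdfFile = Reports.Generator.CreatePdf(reportDefinition);
```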

Passing the rdlc file as a byte[]

It is not always possible to access the file system to read the rdlc file and you might want to get it from somewhere else, such as a database, as a byte[]. You can do this by using the ReportFromStreamDefinitionFacade class I’ve created.

You have to set the reports up in exactly the same way, but instead of passing in a file path you pass in a byte[] that contains the rdlc file. If you are doing this with sub reports then you need to add each sub report to the SubReports list on the parent report, passing in the name that matches the DataTable name that it needs to get its data from:

var report = new ReportFromStreamDefinitionFacade(dataset, File.ReadAllBytes("Reports\\MasterDetailSample\\Template.rdlc"));

report.SubReports.Add(new SubReportFromStreamDefinitionFacade(File.ReadAllBytes("Reports\\MasterDetailSample\\Template_Detail.rdlc"), "Template_Detail"));
That’s it! I hope people find this utility useful.

»

[15 Feb 2012 | 0 Comments]

 

Introduction

Microsoft Dynamics 2011 has made deployment of customisations much easier than previous versions. However, there are some technical limitations when using Microsoft Crm Online, such as not being able to install things on the server (for example, putting dll’s in the GAC). One of the great features of the framework is the Early Bound Entity code generator. It is useful to put the generated entities in a separate project so they can be re-used across other projects, such as the Plugin and Workflow projects. However, to deploy this solution structure to Online you need all the code compiled into a single assembly. This post describes how I solved this problem, allowing me to have a common project with business logic shared between the assemblies that get deployed.

The project used in this post

I run a small software company called See The Link. The project I am using as an example is a Dynamics Crm Online solution we did for one of my clients. <plug>I’m always looking out for new business - if you want to hire me please get in touch!</plug>

Solution Setup

This is the solution setup I have. It is based on the Dynamics 2011 toolkit template for Visual Studio 2010. As you can see I have a project for plugins, SeeTheLink.Visalogic.Crm.Plugins, and another project for Workflows, SeeTheLink.Visalogic.Crm.Workflow (I realise that custom workflows cannot be deployed Online yet, but this is coming in Q2 this year). The early bound entities have been put in the ‘SeeTheLink.Visalogic.Crm.XrmEntities’ project. The last important project is the CrmPackage; this project is created by the Dynamics template. I don’t understand all the voodoo that this project type does, but it is responsible for deploying the assets to the target Dynamics solution. This is what we need to tap into to make it merge the assemblies before it deploys them.

image

Step 1 – Get ILMerge and the msbuild files

There are some msbuild files you need. They can be found here:

C:\Program Files (x86)\MSBuild\Microsoft\CRM\

The files are called:

Microsoft.CrmDeveloperTools.CrmClient.targets
Microsoft.CrmDeveloperTools.CrmClient.dll

We need to take copies of these files so we can edit the targets per solution; otherwise the change would apply to all solutions built on the computer. Copy them to a folder just below the solution; I called mine ‘Tools’.

image

Also put ILMerge.exe in there. This is the magic tool that does the merging for us. In the screenshot above I have my solution in the DynamicsSolution folder.

Step 2 – Edit the Microsoft.CrmDeveloperTools.CrmClient.targets file

Edit the file and find the <Target Name="BeforeDeploy"> tag. Here we put in the ILMerge commands:

<Target Name="BeforeDeploy">
<Exec Command="&quot;$(SolutionDir)Tools\ILMERGE.EXE&quot; /targetplatform:v4,C:\Windows\Microsoft.NET\Framework\v4.0.30319 /keyfile:$(KeyFile) /out:$(PluginMergeOutputName) $(PluginMergeDlls)" />
<Exec Command="&quot;$(SolutionDir)Tools\ILMERGE.EXE&quot; /targetplatform:v4,C:\Windows\Microsoft.NET\Framework\v4.0.30319 /keyfile:$(KeyFile) /out:$(WorkflowMergeOutputName) $(WorkflowMergeDlls)" />
</Target>

Here you might notice that I’m using a few variables:

$(KeyFile)
$(PluginMergeOutputName)
$(PluginMergeDlls)
$(WorkflowMergeOutputName)
$(WorkflowMergeDlls)

We will define these in the CrmPackage project file next.

Step 3 – Edit the CrmPackage project file

Unload the project file

image

Edit the file

image

Find the following line in the file:

<Import Project="$(MSBuildExtensionsPath32)\Microsoft\CRM\Microsoft.CrmDeveloperTools.CrmClient.targets" />

Change it to:

<Import Project="$(SolutionDir)Tools\Microsoft.CrmDeveloperTools.CrmClient.targets" />

This should be the path where we copied the files to in Step1 and edited in Step 2.

We also need to define the variables we used in step2.

Add these in a property group:

<PropertyGroup>
    <KeyFile>"$(SolutionDir)Plugins\SeeTheLink.snk"</KeyFile>
    <PluginMergeDlls>"$(OutputPath)SeeTheLink.Visalogic.Crm.Plugins.dll" "$(OutputPath)SeeTheLink.Visalogic.Crm.XrmEntities.dll"</PluginMergeDlls>
    <PluginMergeOutputName>"$(OutputPath)SeeTheLink.Visalogic.Crm.Plugins.Merged.dll"</PluginMergeOutputName>
    <WorkflowMergeDlls>"$(OutputPath)SeeTheLink.Visalogic.Crm.Workflow.dll" "$(OutputPath)SeeTheLink.Visalogic.Crm.XrmEntities.dll"</WorkflowMergeDlls>
    <WorkflowMergeOutputName>"$(OutputPath)SeeTheLink.Visalogic.Crm.Workflow.Merged.dll"</WorkflowMergeOutputName>
  </PropertyGroup>
This is what the end of the file should look like:
image

The ‘PluginMergeOutputName’ is what the output of the merge will be called. It is therefore important to update the RegisterFile.crmregister file. We will do that after we have reloaded the project:

image

Step 4 – Edit the RegisterFile.crmregister file

In the CrmPackage project there is a file called RegisterFile.crmregister. This contains all the references to the plugins that are required for the deploy project to register the dlls. We want to register the merged file so we have to just rename the reference:

image

Step 5 – Deploy!

When you deploy the package you will see the new steps in the build output:

image

A couple of gotchas

If you are registering the same plugins, just in a dll with a different name, then you will need to un-register any dlls already deployed.

You will also need to add

[assembly: Microsoft.Xrm.Sdk.Client.ProxyTypesAssemblyAttribute()]

to the AssemblyInfo.cs file of the plugin and workflow assemblies so the early bound entities work.

In summary

This approach allows common code and other files to be merged into a single assembly that can then be deployed, enabling more advanced features to be included in Online installations where it is not possible to install 3rd party dlls on the server.