Archive for the ‘Programming’ Category

Public Virtual Auto-Properties and CodeRush

Saturday, February 11th, 2012

Since starting to move towards NHibernate in most of my projects I have been a little frustrated by the need to make properties virtual. This is by no means a criticism of NHibernate, just of the tooling, specifically Visual Studio 2010 and CodeRush. Whilst CodeRush has templates for almost everything, the virtual keyword doesn’t seem to get a look in; fortunately it’s not hard to add support.

Thanks to Rory Becker’s article on Creating Virtual Methods I had everything I needed to create the template. Auto-properties are actually significantly less complex than methods as they are composed of little more than the type, property name and the { get; set; } block.

Templates are easy enough to create from the DevExpress Options menu (DevExpress Menu -> Options); you can jump to the Templates page quickly with the search box in the top left-hand corner. I prefer to organize my custom templates into their own folder, so I can easily import and export them between machines via Dropbox.

1) Right-click your preferred folder, select “New Template”, enter “u?Type?” and press Enter/OK.

2) Set the Context – I copied the context of my template from a?Type? and removed the InStruct directive, as virtual is not valid in structures. The Use: field should read something like this:

((InClass or InInterface) and OnEmptyLine) but not (InComment or InMethod or InPreprocessorDirective or InProperty or InString or VS2002 or VS2003 or VS2005)

It’s not critical to get this exactly right: the template will probably work where you want it; if the context above is wrong, it may also work where you don’t want it.

3) Enter the expansion

public virtual «?Get(Type)» «Caret»«Field(PropertyName)»«BlockAnchor» { get; set; }

This bit is essential to get right:

  • “public virtual” is literal text
  • the «?Get(Type)» StringProvider is a macro that takes the ?Type? part of your template and emits the type of the property
  • «Caret» is, unsurprisingly, where the caret will be positioned after expansion
  • «Field(PropertyName)» gives you a placeholder to enter the name of your property
  • «BlockAnchor» highlights the text between the Caret and BlockAnchor, making the PropertyName field instantly replaceable
  • { get; set; } is again literal text

With that you are all set and can save and test out your new template in the code editor. Simply move to an empty line inside a class and type

us<space>

This should then expand to

public virtual string PropertyName { get; set; }

Orchard Placement ID Strings Must Match

Tuesday, May 31st, 2011

I spent far too long trying to fix an error while developing an Orchard Module. To explain further, Orchard CMS derives a lot of its flexibility by representing content in highly abstracted terms called shapes. These shapes are dispatched by the module that wishes to output content and then rendered at the end; in between, the shapes can be manipulated by themes or modules to change how the information is displayed. It takes a while to get your head around it, but it’s a great system… until you make a mistake!

Modules define where shapes are placed through a file called placement.info; your module dispatches shapes through its Driver (a class derived from ContentPartDriver):

public class DefinitionListPartDriver : ContentPartDriver<DefinitionListPart> {

    ...

    protected override DriverResult Display(
        DefinitionListPart part,
        string displayType,
        dynamic shapeHelper) {

        return ContentShape("Parts_DefinititionList", 
            () => shapeHelper.Parts_DefinitionList(
                        ContentPart: part, 
                        DefinitionList: part.Entries));
    }

    ...

}

You may have seen the mistake already, but just in case you’re not sure, here is my placement.info file:

<Placement>
    <Place Parts_DefinitionList_Edit="Content:11"/>
    <Place Parts_DefinitionList="Content:11"/>
</Placement>

Yep, when I dispatched the shape the ID string had an extra “it” in “Definition”… oh well, remember to check that first next time.
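For completeness, the fix is simply to make the ID string in the driver match placement.info exactly:

return ContentShape("Parts_DefinitionList",
    () => shapeHelper.Parts_DefinitionList(
                ContentPart: part,
                DefinitionList: part.Entries));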

Responsive Design for Programmers

Wednesday, March 30th, 2011

Responsive Design is a term that seems to have risen up around me over the course of the last year. Up to this point I have been largely unaware of what it meant, other than that it had a good deal to do with designing markup and CSS for the mobile platform. Today I thought I would spend a couple of hours researching the topic; I have included some of the best articles and posts I could find.

A List Apart’s Responsive Design Article

Key points I took away from this article are: the plethora of screen resolutions and orientations on the web is an advantage, not a disadvantage; mobile continues to rise in both business and the social web; and media queries are a neat implementation for modern browsers (IE9, Chrome, Safari 3+, Firefox 3.5+) with a practicable JavaScript backup for non-compliant browsers.
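As a quick illustration (my own minimal sketch rather than anything from the article), a media query lets the same markup pick up different styles depending on the viewport:

/* Hypothetical example: collapse a two-column layout on narrow screens */
@media screen and (max-width: 480px) {
    .sidebar { display: none; }
    .content { width: 100%; }
}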

CloudFour’s CSS Media Query for Mobile is Fool’s Gold Blog Post

This is quite a pragmatic stance on using Media Queries for the mobile platform, written as a direct response to the aforementioned article from A List Apart. Key points for me were that performance, specifically speed, is very important for mobile. The common use case for a mobile website is to find something quickly; waiting for large images and style sheets works against that. There are now many low-powered smartphones without CPU power to spare, so resizing images in-browser does not work well. There is a good response on Quirksmode.

Smashing Magazine’s Guidelines for Responsive Web Design Article

Kayla Knight goes into depth on the techniques and technologies that front-end web developers have at their disposal. There is a lot to take in, and some great examples of responsive web design out in the wild. Further examples and discussion can be found on Adactio.

Windows Team Blog’s Targeting mobile optimized CSS at Windows Phone 7 Post

This was the article that kicked off my research spree this afternoon. I was seriously disappointed by Windows Phone 7’s 19/400 score on the HTML5 test, so I didn’t hold out much hope for CSS3 and Media Query support, and this article didn’t do much to reassure me. Microsoft advises going down the route of conditional HTML comments to import the correct style sheet for the platform you are targeting.

Conclusion

Not being a “Front End” developer, I may never have to put any of this into practice; however, now that I know more than when I started, I can follow along when the design team talks about responsive design. If you know of any other resources or articles, please post them in the comments for me to read. If I find them useful I will update this post.

Listing Table Sizes

Friday, December 24th, 2010

Databases are a pain in the neck to look after: poorly designed models and processes that don’t remove temporary data can cause a database to grow in size. A database that is allowed to grow beyond its requirements becomes a burden on the nightly backup, takes longer to restore in a recovery scenario and slows down the development process by preventing developers from testing things out on “live” data.

More often than not I have found that the problem lies with log or analytics tables: this information is liberally logged (which it should be) and then totally ignored, without a thought for trimming the data on a regular basis.

SQL Server Management Studio provides a way of looking at the storage usage of an individual table via the Properties context menu item on that table.

SSMS Storage Properties

In large databases this can be laborious, so I found a script that collects this information and presents it as a table. I have adapted it a little so that I can see the total size of each table and sort by each column to drill down to the problem tables.

SET NOCOUNT ON
-- sp_spaceused reports each size as a string such as '1234 KB', so store everything as text for now
CREATE TABLE #spaceused (
  name nvarchar(120),
  ROWS CHAR(11),
  reserved VARCHAR(18),
  DATA VARCHAR(18),
  index_size VARCHAR(18),
  unused VARCHAR(18)
)
 
-- Cursor over every user table in the database
DECLARE TablesFromSysObjects CURSOR FOR
  SELECT name
  FROM sysobjects WHERE TYPE='U'
  ORDER BY name ASC
 
OPEN TablesFromSysObjects
DECLARE @TABLE VARCHAR(128)
 
FETCH NEXT FROM TablesFromSysObjects INTO @TABLE
 
WHILE @@FETCH_STATUS = 0
BEGIN
  INSERT INTO #spaceused EXEC sp_spaceused @TABLE
  FETCH NEXT FROM TablesFromSysObjects INTO @TABLE
END
 
CLOSE TablesFromSysObjects
DEALLOCATE TablesFromSysObjects 
 
-- Strip the trailing ' KB' from each value and CAST to INT so the sizes (all in KB) can be sorted numerically
SELECT	name AS TableName,
		ROWS AS ROWS,
		CAST(LEFT(reserved, LEN(reserved) - 3) AS INT) AS Reserved,
		CAST(LEFT(DATA, LEN(DATA) - 3) AS INT) AS DATA,
		CAST(LEFT(index_size, LEN(index_size) - 3) AS INT) AS IndexSize,
		CAST(LEFT(unused, LEN(unused) - 3) AS INT) AS Unused,
		(CAST(LEFT(reserved, LEN(reserved) - 3) AS INT) + CAST(LEFT(DATA, LEN(DATA) - 3) AS INT) + CAST(LEFT(index_size, LEN(index_size) - 3) AS INT) + CAST(LEFT(unused, LEN(unused) - 3) AS INT)) AS Total
FROM #spaceused
ORDER BY Total DESC
DROP TABLE #spaceused

Ordnance Survey OpenData (Part 3 – Cleaning Up)

Friday, December 17th, 2010

If you look through the schema of the table we imported in Part 2, you will see a number of unused fields, and some of the data appears to be missing.

Cleaning up the Schema

  1. You can go right ahead and remove the fields that start with “Unused”; as far as I can tell only the full version of Code-Point uses these fields.
  2. Make all of the fields non-nullable; this will prevent us from doing something silly at a later date, and will stop Object Relational Mappers such as Entity Framework from creating nullable data types.
  3. Many of the fields do not contain data itself but codes that describe other data, so let’s append “Code” to those field names for the time being (see the T-SQL sketch after this list).
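Below is a minimal T-SQL sketch of the kind of clean-up described above; the column names come from the schema scanner in Part 1, and the exact columns and types you settle on may well differ:

-- Drop the unused columns (repeat for Unused1 through Unused8 and Unused10)
ALTER TABLE [CodePointOpenCombined] DROP COLUMN [Unused1];

-- Make a column non-nullable; the data type must be restated and no NULLs may remain
ALTER TABLE [CodePointOpenCombined] ALTER COLUMN [Postcode] VARCHAR(7) NOT NULL;

-- Rename a code column so that its contents are obvious
EXEC sp_rename 'CodePointOpenCombined.AdminCounty', 'AdminCountyCode', 'COLUMN';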

Cleaning up the Data

The Quality column in Code-Point Open describes the source and reliability of the data; it ranges from the most accurate (10) through to no data (90). When building a system around this data you need to decide what data is important to your use case. The following query will give you an idea of the quality of the dataset as a whole; I have annotated the results based upon the OS Code-Point documentation.

SELECT Quality, COUNT(*) AS COUNT
FROM [OSOpenData].[dbo].[CodePointOpenCombined]
GROUP BY Quality
ORDER BY Quality

Quality  Count    Description
10       1683975  Within the building of the matched address closest to the postcode mean, determined automatically by Ordnance Survey.
20       73       As above, but determined by visual inspection by GROS (General Register Office for Scotland).
30       1086     Approximate to within 50 m of true position.
40       52       The mean of the positions of addresses previously matched in ADDRESS-POINT but which have subsequently been deleted or recoded.
50       4395     Estimated position based on surrounding postcode coordinates, usually to 100 m resolution, but 10 m in Scotland.
60       93       Postcode sector mean (direct copy from ADDRESS-POINT).
90       6361     No coordinates available.

For my purposes I want to use the coordinate data stored in the Eastings and Northings columns, which makes postcodes with no data useless to me. I can remove these with the following SQL script:

DELETE FROM [CodePointOpenCombined]
WHERE [Quality] = 90

Ordnance Survey OpenData (Part 2 – Importing The Data)

Friday, December 10th, 2010

All of the Code-Point Open data is split across many separate files; SSIS is capable of extracting data from multiple files, however for the purposes of this article I am going to stick to the Import Export Wizard.

To combine all of the files into one (big) file a quick switch to the command prompt is required:

type data\*.csv > .\CodePointOpenCombined.csv

Because none of the data files have headers this works fine; if they did have headers, some work would be needed to strip those out (a quick sketch of which follows).
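Something like this PowerShell would do it (a sketch only, not needed for Code-Point Open): skip the first line of each file and append the rest to the combined file.

# Combine the files while dropping the header row from each one
Get-ChildItem data\*.csv | ForEach-Object {
    Get-Content $_.FullName | Select-Object -Skip 1 | Add-Content CodePointOpenCombined.csv
}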

Create a new database in SQL Server then follow these steps:

  1. Right Click the Database select “Tasks” – “Import Data”.
  2. In the Data Source step change the drop down to “Flat File Source”.
  3. Select the combined file we created above (you may have to change the filter).
  4. Check the Columns page; if quotation marks (") appear in some of the columns, change the Text qualifier field on the General page to ".
  5. On the Advanced page click Suggest Types.
  6. Set the number of rows to 1000 (the maximum), then click OK.
  7. Go through each column and update the name and DataType to match those we discovered in the previous post.
  8. Check the correct database and table are selected on the next two steps.
  9. Click Next then Next again, then check over the data type mappings.
  10. Click Next, ensure Run immediately is checked, then click Finish.
  11. All being well, all of the data will be imported successfully.

If there are problems importing the data you can go back and make changes to the configuration, typically the issue is incorrect data types (too small) or incorrect text delimiters.

You may be asking why we went to all that trouble, and time, only to let the Import Data Wizard suggest the data types. The reason I wrote the script is that the wizard is limited to checking the first 1,000 rows; even if you set the value to 2,000,000 it will reset to 1,000 after you move your focus away.

The result is that if your data is naturally sorted on a specific column, as some of the Ordnance Survey data appears to be, the suggested types can be too small and the import will fail. Running the schema scanner allows you to scan through all of the data so that you can adjust the suggested data types to match the maximum values.

Ordnance Survey OpenData (Part 1 – Schema Scanner)

Friday, December 3rd, 2010

In April 2010 the Ordnance Survey released certain parts of their data under a special licence which allows for commercial use without cost. Covering all of the types of data made available is outside the scope of this post, although I hope that the techniques described could be applied to any data set, not just Ordnance Survey data.

In this post I am going to look at Code-Point Open, a list of all UK postcodes with their corresponding spatial positions. Unlike many other OS OpenData downloads, the ZIP file does not contain the User Guide or the schema data; these can be found on the website, and I spent a good 10 minutes searching for them.

The term for what we are doing in this post is Extract-Transform-Load (ETL), a process in which we take data in one format and convert it for use in another format. Generally ETL is used to take a flat file format and load it into a relational database, although technically any format or database could be used. SQL Server offers two built-in mechanisms to perform ETL: the “Import Export Wizard” and SQL Server Integration Services (SSIS). The “Import Export Wizard” actually creates an SSIS package in the background and is available in all versions of SQL Server, whereas SSIS is not available in SQL Express.

Before we create a table in a SQL Server database we need to know something about the data we are importing; the documentation for Code-Point Open tells us the data contains the following fields:

Postcode, Quality, Unused1, Unused2, Unused3, Unused4, Unused5, Unused6, Unused7, Unused8, Eastings, Northings, CountryCode, RegionalHealthAuthority, HealthAuthority, AdminCounty, AdminDistrict, AdminWard, Unused10

A number of the fields are not used; these fields and the dummy data held within them will be weeded out at a later date. We know the field names, but we don’t know the format of the data they contain: numeric, strings, decimals, telephone numbers? I created a PowerShell script which scans through all of these files to work out what type each field is and the range of data held within it; be warned, it will take a few hours to run!

# Schema Scanner v1.0
# ©2010 Richard Slater
 
# Create an empty hash table
$columns = @{}
 
# Loop through every file that matches this pattern
foreach ($file in Get-ChildItem -Path "D:\OSOpenData\Code-Point Open\data\*.csv")
{
	Write-Host "Processing $file"
 
	# PowerShell Import-Csv cmdlet is pretty powerful, but if there is no header row you must feed it in
	$PostCodeData = Import-Csv $file -Header "Postcode","Quality","Unused1","Unused2","Unused3","Unused4","Unused5","Unused6","Unused7","Unused8","Eastings","Northings","CountryCode","RegionalHealthAuthority","HealthAuthority","AdminCounty","AdminDistrict","AdminWard","Unused10"
 
	# Go through each row in the file
	foreach($row in $PostCodeData)
    {
		# Go through each column in the row
		foreach ($attr in (Get-Member -InputObject $PostCodeData[0] -MemberType NoteProperty))
		{
			$key = $attr.Name
 
			# Ignore unused columns
			if ($key.StartsWith("Unused"))
				{ continue }
 
			# Construct an object to store the metadata, storing it in the hash table to retrieve on the next loop
			$column = New-Object PSObject
			if (!$columns.ContainsKey($key))
			{
				$column | Add-Member -Type NoteProperty -Name StringLength -Value 0
				$column | Add-Member -Type NoteProperty -Name MaxValue -Value ([System.Int32]::MinValue)
				$column | Add-Member -Type NoteProperty -Name MinValue -Value ([System.Int32]::MaxValue)
				$columns.Add($key, $column)
			}
			else
				{ $column = $columns.Get_Item($key) }
 
			$isInt = $false
			$value = 0;
 
			# Work out if this is an integer type
			if ([System.Int32]::TryParse($row.($key), [ref] $value))
            	{ $isInt = $true }
 
			if (!$isInt)
            {
				# it is not an integer how many characters is the string
            	if (($row.($key)).Length -gt $column.StringLength)
                	{ $column.StringLength = ($row.($key)).Length }
 
				continue
            }
 
			# it is an integer start working out the maximum and minimum values
			if ( $value -gt $column.MaxValue ) { $column.MaxValue = $value }
			if ( $value -lt $column.MinValue ) { $column.MinValue = $value }
 
			$columns.Set_Item($key, $column)
		}
	}
}
 
# Print a report of all of the fields
foreach ($field in $columns.Keys)
{
	$stringLength = $columns[$field].StringLength
	$numericMax = $columns[$field].MaxValue
	$numericMin = $columns[$field].MinValue
 
	if ($stringLength -gt 0)
	{
		Write-Host "$field (String) : Length =" $columns[$field].StringLength
	}
	elseif (($numericMax -gt ([System.Int32]::MinValue)) -and ($numericMin -lt ([System.Int32]::MaxValue)))
	{
		Write-Host "$field (Numeric) : MaxValue =" $numericMax ", MinValue =" $numericMin
	}
	else
	{
		Write-Host "$field (Empty)"
	}
}

The output from the script should give you enough information to construct a nice tight schema to import the data:

AdminWard (String) : Length = 2
AdminDistrict (String) : Length = 2
AdminCounty (Numeric) : MinValue = 0 , MaxValue = 47
Quality (Numeric) :  MinValue = 10 , MaxValue = 90
RegionalHealthAuthority (String) : Length = 3
Postcode (String) : Length = 7
Eastings (Numeric) : MinValue = 0 , MaxValue = 655448
Northings (Numeric) : MinValue = 0 , MaxValue = 1213660
CountryCode (Numeric) : MinValue = 64 , MaxValue = 220
HealthAuthority (String) : Length = 3

In a future post I am going to take it to the next stage: create a table and complete the import with the Import Export Wizard. I would also like to improve the performance of the schema scanner by converting the code into C#.


Encapsulating Alpha Fade in Unity3d

Friday, November 12th, 2010

Several days into my recent Unity3d project I realised that there was a bulk of code designed solely to make an object invisible by fading it out of the scene. The code was not complex, although because it was all in one class it appeared complex.

After doing some research into the best way to go about making this change, I realised it was both convenient and logical to extract the code into a separate script and attach that to the object I wished to apply the effect to.

This meant that instead of nested if-statements for state management in a script attached to the Main Camera I was able to make declarative statements:

GameObject.Find("TargetObject").GetComponent<SmoothAlpha>().MakeVisible();

I have named the script SmoothAlpha only by my own convention; there is no actual smoothing or damping of the alpha value, it is simply a linear reduction of the material’s alpha value.
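To give a rough idea of what the script does, here is a minimal sketch (not my original script; the MakeInvisible method and fadeSpeed field are just for illustration, and it assumes the material’s shader supports transparency):

using UnityEngine;

public class SmoothAlpha : MonoBehaviour
{
    public float fadeSpeed = 1.0f;     // alpha change per second
    private float targetAlpha = 1.0f;  // the alpha value we are heading towards

    public void MakeVisible()   { targetAlpha = 1.0f; }
    public void MakeInvisible() { targetAlpha = 0.0f; }

    void Update()
    {
        // Move the material's alpha a fixed amount per second towards the target
        Color colour = GetComponent<Renderer>().material.color;
        colour.a = Mathf.MoveTowards(colour.a, targetAlpha, fadeSpeed * Time.deltaTime);
        GetComponent<Renderer>().material.color = colour;
    }
}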

There are many improvements that could be made to the script, some of which I may well do over the coming weeks:

  • Should include a delegate callback to signal when the fade is complete.
  • Should include methods to instantly make an object (in)visible.
  • Should support changing the alpha of child GameObjects in unison with the parent.

I have included the full script below the cut.

Unity3d plus three weeks

Saturday, November 6th, 2010

It is about three weeks since I started learning Unity3d, and today Makemedia delivered the product we were building with it. I have thoroughly enjoyed the process and the experience; there is still much to learn, however I am now much more confident experimenting with Unity3d to see what I can come up with.

As I am not going out to see the fireworks tonight, I put together a 3d scene demonstrating some of Unity’s basic particle effects in the form of a personal fireworks display.

Move around in the scene using W, A, S, D or the arrow keys; you can look around by moving the mouse.

Some of the rockets go a bit crazy from time to time and fire off into the distance at low speed, which is quite peculiar when it is towards the camera.

Source Code

Unity 3D

Saturday, October 16th, 2010

I am starting a project next week using Unity3D. I have known about this for a while and have poked around a bit to try to figure out how to do various things. I have been really impressed by how fast you can get something done with what is basically a free product.

There are some superb videos on Unity3d Student which have been invaluable. I think I have got my head around the controls, UI, paradigms and scripting, although not all in one scene. As evidence of my work I have made three balls bounce around the screen, something akin to a carpenter making a door stop perhaps, but I am sure I will learn.