Codeine .Net
# Sunday, 30 January 2011

I've been working on a fairly large project for the last few months that required SQL replication, where the server hosting the publication didn't run on the default port. I don't know what SQL Server 2008's issue is with replication on anything besides the default port, but my Windows service that handled the syncing would always error out. (I'm doing a pull subscription with SQL Express 2008, so there is no SQL Agent to handle the syncing for me.) My workaround was to create a SQL Server alias during installation of my application. You need to reference Microsoft.SqlServer.SqlWmiManagement.dll to do this. (It's possible you may need to reference another DLL as well, but I already had several SQL Server DLLs referenced for the actual replication code I had written.) This DLL comes with the SDK that gets installed along with SQL Server 2008; look for it in the folder that was created in your Program Files. The namespace you will be working with is Microsoft.SqlServer.Management.Smo.Wmi. Below is some example code on how to do it in C#.

using Microsoft.SqlServer.Management.Smo.Wmi;

public void CreateAlias()
{
    ManagedComputer mc = new ManagedComputer();

    ServerAlias alias = new ServerAlias();
    alias.Parent = mc;
    alias.ServerName = "ServerAddress";     // address of the server being aliased
    alias.Name = "AliasName";               // name for your alias
    alias.ConnectionString = "PortNumber";  // port number, as a string
    alias.ProtocolName = "tcp";
    alias.Create();
}

public bool DoesAliasExist()
{
    ManagedComputer mc = new ManagedComputer();
    bool result = false;

    foreach (ServerAlias serverAlias in mc.ServerAliases)
    {
        if (serverAlias.Name.ToUpper() == "ALIASNAME") // your alias name, upper-cased
        {
            result = true;
        }
    }

    return result;
}

 

I tried using a completely different name for the alias, but couldn't get replication to work until I gave the alias the same name as the server; then things started working nicely. That seemed a bit strange to me, as you would think I should be able to name the alias whatever I wanted. Hopefully someone else will find this helpful and it will save them some searching.
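Putting the two methods together, an installer step might call them like the sketch below. EnsureAlias and its parameters are names I made up for illustration; only ManagedComputer and ServerAlias come from the SMO API.

```csharp
using System;
using Microsoft.SqlServer.Management.Smo.Wmi;

public static class AliasInstaller
{
    // Hypothetical installer helper: create the alias only if it isn't already there.
    // Note: in my case the alias name had to match the server name for replication to work.
    public static void EnsureAlias(string aliasName, string serverAddress, string port)
    {
        ManagedComputer mc = new ManagedComputer();

        foreach (ServerAlias existing in mc.ServerAliases)
        {
            if (string.Equals(existing.Name, aliasName, StringComparison.OrdinalIgnoreCase))
                return; // alias already exists, nothing to do
        }

        ServerAlias alias = new ServerAlias();
        alias.Parent = mc;
        alias.Name = aliasName;
        alias.ServerName = serverAddress;
        alias.ConnectionString = port; // port number as a string
        alias.ProtocolName = "tcp";
        alias.Create();
    }
}
```

This keeps the install idempotent, so re-running the installer doesn't fail trying to create a duplicate alias.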

Sunday, 30 January 2011 19:28:30 (Central Standard Time, UTC-06:00) - Comments [0]
Development | SQL Replication
# Thursday, 12 February 2009

Well, after a bit of work and some help from this blog post, I was able to get the Microsoft Surface SDK working on my 64-bit Vista machine. The documentation for the SDK says that it needs to run on 32-bit Vista, but Steven Robbins seems to have figured out the steps to get around that. Now that I have it up and running I hope to put up a few blog posts on doing Surface development, and hopefully I won't run into any issues using 64-bit. If anyone would like to donate the money for an actual Surface unit I would be grateful, but until then the Surface simulator appears to be a decent environment that even lets you hook up multiple mice to act as different finger touches. The sample application below comes with the SDK and lets you select a picture to create a puzzle out of and then put the puzzle together. I can't wait to get my hands dirty and create a Surface application of my own.

Thursday, 12 February 2009 20:20:29 (Central Standard Time, UTC-06:00) - Comments [0]
Development | Microsoft Surface
# Sunday, 07 December 2008

With the release of Silverlight 2.0 also came the Deep Zoom functionality. Deep Zoom optimizes the viewing of large images by partitioning an image into tiles at multiple levels of resolution. This allows the browser to download a lower-resolution version first for faster loading, and then grab higher-resolution tiles only if the user zooms in. When the user does zoom in, only the tiles representing that section of the higher-resolution image are downloaded to the browser, again saving the viewer download time and enhancing the user experience.

In order to begin working with Deep Zoom, the first thing we need to do is prepare an image by creating tiles out of it at different resolutions. This would be a bit cumbersome to do by hand, so luckily Microsoft has provided us with the Deep Zoom Composer, which you can download for free. It lets us create Deep Zoom images and even create a collage of images in Deep Zoom. We will be doing the latter, using the many photos I took while traveling through Europe for the month of September.

The first thing we need to do is create a new project in Deep Zoom Composer. Do this by running the Deep Zoom Composer and selecting New Project. After doing this you will see three tabs along the top: Import, Compose, and Export. If you are not already on the Import tab, select it; then we need to add the images that we want to work with. In my case I will be adding all my pictures from Rome. Click the Add Image… button on the right and select the images that you want to add. Remember that you can select multiple images at once to import them faster.

 

Once the images are imported we switch over to the Compose tab. This tab lets you lay out how the final image will look in your Silverlight application. Select your imported images from the side and place them on the canvas as you would like them to appear. You can zoom in and out on the canvas and place your photos at different depths, even embedding them within each other. For instance, I have an image of the Coliseum from a distance with a sign on the side of it. I also have a closer image of that sign as another picture. The Deep Zoom Composer allows me to embed the more detailed image of the sign into the image of the Coliseum.

 

 

 

In the composer you also have the ability to associate tags with the images, and I have tagged all of mine. Through the composer you can only apply one string as a tag, so I have used this field to enter multiple "tags" separated by commas. In a future tutorial I will demonstrate how to create a tag filter to filter down the images shown in the MultiScaleImage.
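Since the composer only stores one string per image, getting individual tags back out is just a matter of splitting on the separator. Here is a minimal sketch of the comma convention described above (the helper name is my own):

```csharp
using System;
using System.Linq;

public static class TagHelper
{
    // Splits a composer tag string like "rome, coliseum, travel" into clean tags.
    public static string[] SplitTags(string tagString)
    {
        if (string.IsNullOrEmpty(tagString))
            return new string[0];

        return tagString.Split(',')
                        .Select(t => t.Trim())
                        .Where(t => t.Length > 0)
                        .ToArray();
    }
}
```
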

 

 

Once we have exported the images from Deep Zoom Composer, it is time to get our hands dirty in Visual Studio 2008. In order to work with Silverlight 2.0 applications you are going to need three things: Visual Studio 2008 SP1, the Silverlight SDK, and the Silverlight Tools for Visual Studio 2008. Fire up VS2008, go to File->New->Project, and under Visual C# (or, if you prefer, VB) select Silverlight. Then select Silverlight Application on the right, name your solution, and select where you want to save it.

 

 

Next a window will appear that will ask you if you want to add a new ASP.NET web project to the solution or generate a test page at build time. Every Silverlight application needs to be hosted inside a web application so we will let Visual Studio add a project to our solution for this purpose.

 

 

If you now look in the Solution Explorer you will see that we have an empty Silverlight application called DeepZoomExample2 and an ASP.NET web project called DeepZoomExample2.Web. The web project has two pages that were created for us: an .aspx page and an HTML page, each of which hosts our Silverlight application.

 

Next we need to add our exported images to the web project. Add a new folder under the ClientBin folder in the web project and call it GeneratedImages. The ClientBin folder is where the compiled .xap file from your Silverlight project will live, so this is the easiest place to put our exported files. Now go into your exported folder from the Deep Zoom Composer, open the generated images folder, and copy the dzc_output_files folder and the dzc_output_images folder, along with the three files dzc_output.xml, Metadata.xml, and SparseImageSceneGraph.xml, into the new GeneratedImages folder that you created in your web project. If you used a lot of images this may take some time to copy. Your Solution Explorer should now look like the following image.

 

 

Now it's time to start writing some code. First we add a MultiScaleImage to the Page.xaml file. The Source property points at the dzc_output.xml file that we copied into our GeneratedImages folder.

 

<Grid x:Name="LayoutRoot" Background="White">
    <MultiScaleImage x:Name="msi" ViewportWidth="1.0" Source="/GeneratedImages/dzc_output.xml" />
</Grid>

 

This is all we need to do to get the basic images on the screen, but that isn't very exciting, so let's wire up some zooming using the mouse button. In the Page.xaml.cs file we need to add a few variables to store some state:

 

//Indicates whether the user has just clicked the left button or has clicked and dragged
private bool mouseIsDragging = false;
//Indicates if the left mouse button is down
private bool mouseButtonPressed = false;
//Starting point of the drag
private Point dragOffset;
//Current viewport position of the MSI
private Point currentPosition;
//Last position of the mouse
private Point lastMousePos = new Point();

public double ZoomFactor { get; set; }

 

We then have three mouse events that we need to handle: MouseLeftButtonDown, MouseLeftButtonUp, and MouseMove. After that we need to handle the actual zoom.

 

/// <summary>
/// Handles the left mouse button down event
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void LeftMouseButtonDownHandler(object sender, MouseEventArgs e)
{
    //Indicate that the button is down
    mouseButtonPressed = true;
    //Reset dragging
    mouseIsDragging = false;
    //Set the starting point of the drag in case the user starts dragging
    dragOffset = e.GetPosition(this);
    currentPosition = msi.ViewportOrigin;
}

 

In the left mouse button down event, we first set the indicator that the button is pressed. We then reset the dragging indicator, get the current position of the mouse, and finally get the current position of the MultiScaleImage.

 

/// <summary>
/// Handles the left mouse button up event
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void LeftMouseButtonUpHandler(object sender, MouseEventArgs e)
{
    //Change flag to mouse button no longer pressed
    mouseButtonPressed = false;

    //If the user wasn't dragging then we do zooming
    if (!mouseIsDragging)
    {
        //Check if shift was pressed. If so we zoom out, otherwise we zoom in
        bool shiftDown = (Keyboard.Modifiers & ModifierKeys.Shift) == ModifierKeys.Shift;

        ZoomFactor = 2.0;
        if (shiftDown) ZoomFactor = 0.5;
        Zoom(ZoomFactor, this.lastMousePos);
    }
}

 

When the left mouse button is released we clear the flag that indicates the button is down. If the user was not dragging, we check whether Shift was pressed: if not, we zoom in by a factor of 2; if it was, we zoom out by a factor of 0.5. We then call our Zoom function (which we have yet to create) with the zoom factor and the last point the mouse was at.

 

/// <summary>
/// Handles the moving of the mouse
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void MouseMoveHandler(object sender, MouseEventArgs e)
{
    this.lastMousePos = e.GetPosition(this.msi);
}

 

With the mouse move function all we need to do is capture the position the mouse is at. Finally, in the Zoom function below, which is called from LeftMouseButtonUpHandler, we do the actual zooming.

public void Zoom(double zoom, Point pointToZoom)
{
    Point logicalPoint = this.msi.ElementToLogicalPoint(pointToZoom);
    this.msi.ZoomAboutLogicalPoint(zoom, logicalPoint.X, logicalPoint.Y);
}

As you can see above, we translate the mouse pointer to a point relative to our MultiScaleImage and then use the ZoomAboutLogicalPoint method to handle the zooming for us. The last thing we need to do is wire up the events to the MultiScaleImage control in the Page constructor, which will look like the following:

public Page()
{
    InitializeComponent();

    //Wire up events
    this.msi.MouseLeftButtonDown += LeftMouseButtonDownHandler;
    this.msi.MouseLeftButtonUp += LeftMouseButtonUpHandler;
    this.msi.MouseMove += MouseMoveHandler;
}

 

 

If you now run the application you will get the layout of all images on your screen as seen in the first picture below, and if I click on the picture of St. Peter's Square in the center of the image the MultiScaleImage control will zoom in on that particular image as seen in the second picture below. Holding down the shift key while clicking will zoom the MultiScaleImage back out.

 

 

In order to enable panning we need to augment a few functions that we have already created. Obviously we need to modify the MouseMoveHandler. Initially all this event was doing was updating the mouse position so that when we zoomed in or out we knew where the mouse was pointing. Now we are going to have it check whether the mouse button is down to indicate that dragging is occurring. Also, if the mouse is dragging, we need to update the viewport origin so that the MultiScaleImage pans with the mouse. Our new MouseMoveHandler appears below.

/// <summary>
/// Handles the moving of the mouse
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void MouseMoveHandler(object sender, MouseEventArgs e)
{
    if (mouseButtonPressed)
        mouseIsDragging = true;
    this.lastMousePos = e.GetPosition(this.msi);

    //Update the view of the MultiScaleImage if the user is dragging
    if (mouseIsDragging)
    {
        Point newOrigin = new Point();

        newOrigin.X = currentPosition.X -
            (((e.GetPosition(msi).X - dragOffset.X) / msi.ActualWidth) * msi.ViewportWidth);
        newOrigin.Y = currentPosition.Y -
            (((e.GetPosition(msi).Y - dragOffset.Y) / msi.ActualWidth) * msi.ViewportWidth);
        msi.ViewportOrigin = newOrigin;
    }
}

In our left mouse button down handler we already set up the dragging functionality by initializing the mouseIsDragging variable to false and setting dragOffset to the point the mouse was at when the drag started. In the left mouse button up handler we need to reset the dragging indicator that may have been set to true in the mouse move handler. The new handler appears below:

/// <summary>
/// Handles the left mouse button up event
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void LeftMouseButtonUpHandler(object sender, MouseEventArgs e)
{
    //Change flag to mouse button no longer pressed
    mouseButtonPressed = false;

    //If the user wasn't dragging then we do zooming
    if (!mouseIsDragging)
    {
        //Check if shift was pressed. If so we zoom out, otherwise we zoom in
        bool shiftDown = (Keyboard.Modifiers & ModifierKeys.Shift) == ModifierKeys.Shift;

        ZoomFactor = 2.0;
        if (shiftDown) ZoomFactor = 0.5;
        Zoom(ZoomFactor, this.lastMousePos);
    }
    mouseIsDragging = false;
}

 

We now need to add one more handler. If the mouse goes outside the MultiScaleImage we want to stop the panning, so we need to add a MouseLeave event handler. In the handler I am also going to reset the mouse down variable when the user leaves the MultiScaleImage. This way if they leave the image, lift the mouse button, and re-enter the image, it won't continue to pan.

 

/// <summary>
/// Handles the mouse leave event
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void MouseLeaveHandler(object sender, MouseEventArgs e)
{
    mouseIsDragging = false;
    mouseButtonPressed = false;
}

 

We now need to assign the event, so in the constructor where we wired up the other events we need to also wire up the MouseLeave event of the MultiScaleImage:

public Page()
{
    InitializeComponent();

    //Wire up events
    this.msi.MouseLeftButtonDown += LeftMouseButtonDownHandler;
    this.msi.MouseLeftButtonUp += LeftMouseButtonUpHandler;
    this.msi.MouseMove += MouseMoveHandler;
    this.msi.MouseLeave += MouseLeaveHandler;
}

 

There you go. Adding panning was that easy. Now if you fire up the site you can click and drag the image around and still zoom in and out using click and shift-click.

The next thing we need to do is add zooming with the scroll wheel. We are going to do this using the MouseWheelHelper class provided by Peter Blois, so grab this class and add it to your Silverlight application. With Silverlight 2.0 the application can reach into the DOM using the Silverlight DOM bridge and listen for browser events; this class abstracts that away from us.

The first thing we need to do is create an event handler for the mouse wheel. Inside this handler we mark the event as handled so the system knows it has been taken care of, and we set the zoom factor by checking the Delta value passed to us from the MouseWheelHelper class. Finally we call the Zoom method, passing the factor we want to zoom by and the point we are zooming at, just as if the user had clicked the mouse button to zoom.

 

 

/// <summary>
/// Handles the mouse wheel events
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void MouseWheelHandler(object sender, MouseWheelEventArgs e)
{
    //Notify that we handled the event
    e.Handled = true;
    ZoomFactor = e.Delta > 0 ? 1.2 : 0.8;
    Zoom(ZoomFactor, this.lastMousePos);
}

 

Lastly we wire up this event in our constructor, just below where we wired up the rest of the events. The newly modified constructor appears as follows:

 

public Page()
{
    InitializeComponent();

    //Wire up events
    this.msi.MouseLeftButtonDown += LeftMouseButtonDownHandler;
    this.msi.MouseLeftButtonUp += LeftMouseButtonUpHandler;
    this.msi.MouseMove += MouseMoveHandler;
    this.msi.MouseLeave += MouseLeaveHandler;
    new MouseWheelHelper(this).Moved += MouseWheelHandler;
}

 

 

If you now build and run the application you'll see that you can zoom with the scroll wheel of the mouse. Hopefully this has demonstrated that working with the Deep Zoom functionality and the Deep Zoom Composer is fairly simple and can greatly enhance the user experience of a web site. The even better news is that Deep Zoom Composer will now generate all this functionality for you, wrapped into a Silverlight 2.0 application, when you export your Deep Zoom image. This particular sample was pieced together from what I learned from Kirupa Chinnathambi's blog; he is currently a member of the Expression Blend team at Microsoft.

 

 

 

 

Sunday, 07 December 2008 11:21:38 (Central Standard Time, UTC-06:00) - Comments [0]
Deep Zoom | Deep Zoom Composer | Development | Silverlight
# Monday, 07 April 2008

I'm really getting tired of waiting for computers to do things: boot, shut down, open Visual Studio, build. I feel like I spend more time waiting for the computer to do something than actually getting work done, and this is very frustrating. I like my laptop and unfortunately it's not dual core, but I've decided that for the amount of mobility I need it can still handle the job. What I do want to do is build a new desktop development box that I can use when working in the office, since this is where I do most of my work, outside of the day job anyway. I've gotten a bit out of touch with the hardware world lately, so I was hoping to solicit some feedback in a few areas. Here is a list of the general specs I've laid out.

 

Quad Core Processor – I realize that this is still a bit vague since some argue that a dual core with a faster speed is better than a slower quad core, but I hope to get the fastest quad core processor for a reasonable price when I make the purchase.

Motherboard – This is where I've gotten a bit rusty. Does anyone have any suggestions on brands and specs that I want to look for?

Vista Ultimate 64 bit – Yeah, Yeah don't do it, Vista sucks. I'm going to do it anyways.

4GB of RAM – Anything important here besides speed and response time?

16-30GB Solid State Hard Drive – This one I think is the sweet spot. The last moving part in the computer is the hard drive, and common sense says that means it's the real bottleneck. I hope to get a small solid state drive that will hold the OS, VS2008, and maybe Office, depending on price vs. size. My one issue here is I haven't seen any 3.5 inch solid state drives.

Large Secondary Hard Drive – A regular SATA hard drive to handle everything else that isn't used all the time. I'm thinking a WD Raptor for this.

Video Card(s) – Yet again I'm going to need to do some research here or get some feedback. What specs do I need to look for here?

2 X 22 inch Widescreen Monitors – I haven't decided on anything specifics here, I just know I want 2 that are exactly the same so they are at the same level.

Case – A really, really quiet case. This is going to involve research too.

 

Any feedback that anyone has would be great.

 

Monday, 07 April 2008 21:01:36 (Central Daylight Time, UTC-05:00) - Comments [3]
Development | Hardware
# Sunday, 16 March 2008

    Well, it's official: The Gifford has turned me into a code coverage addict. After he helped me through writing my first unit test using NUnit, the concept slowly started sinking in and I'm now addicted to code coverage. Writing NUnit tests is so easy that I don't understand why everyone doesn't do it, which is causing me to get a bit frustrated with people as it gives me the impression that they just don't care. I can't count the number of times I have gotten rather pissed because someone has changed my code and broken functionality, and in some cases I've even broken my own code when I've had to go in and make changes after not looking at it for some time. Unit testing can minimize these hair-pulling moments by giving you more confidence that what was supposed to be happening before you touched the code still happens after you touched it. If someone is muddling around in your code and breaks something, you'll know immediately when they check in and the test doesn't pass on the build server. It all comes down to code quality, and I have begun to notice recently that code quality everywhere sucks.

    If you're not using unit testing of some sort then I think it just boils down to pure ignorance. Either you see the value of unit tests but don't know how to write them and are too lazy to exert the effort to learn how, or you are just too ignorant to see the value of them. I fell into the first group. I've always seen how they could be helpful, but never took the time to learn how to do them. In the last couple of weeks I've been really thinking about code quality with the product that I am building with a couple of partners. We decided to build this product because we had gotten stuck working with another product that we thought had many problems and was a pain to work with. I realized that we were starting to walk down the same path, rushing to get a beta out, sacrificing design and architecture for deadlines. There are plenty of poor applications on the market and I don't want to be contributing to that. I want to produce a quality product or nothing at all, because in the long run if we produce a poor quality product then we get stuck supporting that poor quality and our customers are stuck working with it. While assessing quality I kept coming back to our lack of unit testing and the fact that things kept breaking that worked fine the day before. This motivated me to overhaul the build server so that it runs NUnit tests automatically and then uses NCover to provide statistics on the code that is covered by unit tests and the code that is not. Then, to take it one step further, I built a screensaver that runs on the monitor of the build server and displays the overall percentage of code coverage for the project in bright flashing red while it is under the coverage goal. (Currently FloFactory is at 9.5% code coverage with a coverage goal of 50%.) I look forward to the day that text turns green to indicate that we have met our code coverage goal (and then I'll raise the goal).

    I'm sure plenty of people would argue about the value of unit testing, pointing out various shortcomings which I am sure are all valid in certain scenarios, but if you're following a tiered structure for your application, keeping your logic out of the UI, and keeping your data access separate, then you should have an entire service layer that can be unit tested. Does having unit tests mean your code is perfect? Of course not. It's not a silver bullet for quality, but it is one tool that provides a bit more certainty about an application.

    Where I am really struggling is motivating others to improve their code quality, to understand the importance of testing, and to write even one unit test. I realize that unit testing can't cover your entire application, but it's a start, and if you start thinking about how you structure your application you'll start seeing better ways to design things so that you can properly test them, using such things as mock objects and inversion of control containers. The Gifford told me once that he goes by the statement "You don't have to write unit tests, but don't break mine." While I like this statement, I don't think it motivates anyone to write a single unit test, so I'm adding onto it by trying to make the statistics more visible. Hopefully if someone checks in code that lowers the overall coverage percentage, which they can see very clearly on the build server screensaver, they may feel more motivated to raise that number.

    As for myself, I'm going to keep digging into ways to increase quality so that the final product is stable and worth using. One last thing to point out is that if you plan to get started unit testing and are going to use NUnit then I would suggest picking up a copy of Resharper. It allows you to run your NUnit tests extremely easily from Visual Studio taking some of the headaches out of writing them.
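To give a sense of how low the barrier really is, here is about the smallest NUnit test you can write. The Discounter class and its numbers are invented for illustration; only the [TestFixture], [Test], and Assert pieces come from NUnit.

```csharp
using NUnit.Framework;

// A trivial piece of service-layer logic to put under test.
public class Discounter
{
    public decimal Apply(decimal price, decimal percent)
    {
        return price - (price * percent / 100m);
    }
}

[TestFixture]
public class DiscounterTests
{
    [Test]
    public void Apply_TenPercent_ReducesPrice()
    {
        Discounter d = new Discounter();
        Assert.AreEqual(90m, d.Apply(100m, 10m));
    }
}
```

With ReSharper (or the NUnit runner) this runs in seconds, and once it's on the build server no one can silently break Apply.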

Sunday, 16 March 2008 18:39:55 (Central Standard Time, UTC-06:00) - Comments [0]
Development | NUnit | Testing | Unit Testing
# Sunday, 20 January 2008

    So far I have gone over how to create a basic workflow activity that evaluates a regular expression against a string, and I have also covered creating the validation class for the activity. The next thing I want to cover is some basics of enhancing the designer experience of a workflow activity, allowing you to override the default visual display in the designer for a custom activity. Based on the default designer class, our RegEx activity appears as follows:

    

    The first thing that needs to be done is to add a new class to the project called RegExDesigner.cs.

    

 

    Next we need to make our new class inherit from the ActivityDesigner class. To do this we will also need to import the namespace System.Workflow.ComponentModel.Design, which means your code should now appear as follows:

using System.Workflow.ComponentModel.Design;

namespace FloFactory.Activities.Util.Logic
{
    public class RegExDesigner : ActivityDesigner
    {
    }
}

 

 

    There are a lot of functions that you can override, but the main one we are going to work with is the Initialize function, which will allow us to override the appearance of our activity in the designer. The first thing I am going to do is add a nice image to the activity. Assuming you already have an image that you want to use as a resource in your project, you add the image to the activity's design view by doing the following:

 

protected override void Initialize(System.Workflow.ComponentModel.Activity activity)
{
    base.Initialize(activity);
    Bitmap WorkFlowImage = Properties.Resources.workflow;
    Image = WorkFlowImage;
}

 

    The other thing I would like to do is change the text that appears in the activity. This will be the default text that shows up until the user of our activity sets the name property. This is easily done by setting the Text property in the Initialize function. After doing this the code should appear as follows:

 

protected override void Initialize(System.Workflow.ComponentModel.Activity activity)
{
    base.Initialize(activity);
    Bitmap WorkFlowImage = Properties.Resources.workflow;
    Image = WorkFlowImage;

    //Set the text that appears
    Text = "FloFactory RegEx";
}

 

 

    Now we need to associate the designer class with our activity. We do this by going to the top of the RegEx.cs class and adding the following attribute:

[Designer(typeof(RegExDesigner), typeof(IDesigner))]
public partial class RegEx : BaseActivity
{
    …
}

 

 

 

    After adding this code to the Initialize function and adding the attribute, if we now look at the RegEx activity we will see that our image has replaced the default image that appeared when the original designer class was applied. Also, the default text that we defined appears in the center of the activity.

 

 

    As you can see we have easily modified the image and the text that appears in the designer for our RegEx activity. In a future post I will cover modifying the appearance of the activity by changing its size and colors so as to create a more custom look and feel for the activity.

 

    

 

 

 

Sunday, 20 January 2008 20:38:13 (Central Standard Time, UTC-06:00) - Comments [0]
.Net 3.0 | Development | Windows Workflow
# Tuesday, 01 January 2008

    Well, lately I have been using ReSharper from JetBrains, which is an amazing Visual Studio add-in that adds an extensive number of new features and shortcuts. It has greatly increased my productivity, and if you're not using it then you should at least download the 30-day trial and test it out. (Greg, this means you. Tell Matt I said he should buy it for you.) I especially like the file structure explorer that allows me to organize functions and properties in a class by dragging them around instead of needing to copy and paste them. This is very helpful when organizing your classes into regions. Also, if you've ever had a giant solution where it begins to be difficult to find files, you'll appreciate the file search that allows you to start typing a filename and get a filtered list of files. It definitely can be a bit overwhelming with all the shortcuts it adds to Visual Studio, so in order to make looking up the shortcuts easier I created a basic VS2005 add-in that displays the ReSharper cheat sheet on the screen. After installing the Recheater add-in you can display the cheat sheet by hitting CTRL-ALT-SHIFT-H and close it by hitting the ESC key. To change the shortcut key, simply modify the config file located under My Documents\Visual Studio 2005\Addins\RecheaterAddin. It's a pretty basic application, but feel free to download Recheater and post a comment if you find any bugs or want to suggest an enhancement.

Tuesday, 01 January 2008 11:09:57 (Central Standard Time, UTC-06:00) - Comments [0]
Development | Resharper | Visual Studio
# Friday, 28 December 2007

    In my last posting I walked through how to create a custom RegEx activity in Windows Workflow, and I thought a good next step would be to give a short example of how you would add validation to this custom activity. In this particular instance we want to make the InputString property and the RegExpression property required in order for the workflow designer to consider the workflow being built valid. The first thing that needs to be done is to add a class to the project called RegExValidator.cs.

 

    The next thing to do in the new class is to add using System.Workflow.ComponentModel.Compiler to the top of the file and make the class inherit from ActivityValidator. The one function we need to override is Validate, which takes in a ValidationManager and an object. The object is what we will be using, as it is the instance of our activity that we will be validating. The function returns a ValidationErrorCollection, which is simply a collection of error messages that we send back after validating the activity. In the override for this function we check that both the InputString and the RegExpression properties are not null or empty. If either is, we add a corresponding error message to the collection. Because each error is associated with a property, we provide the property name as the last parameter in the constructor of the ValidationError object so that the designer can associate the error message with the appropriate property.

 

public class RegExValidator : ActivityValidator
{
    public override ValidationErrorCollection Validate(ValidationManager manager, object obj)
    {
        RegEx myRegEx = obj as RegEx;
        ValidationErrorCollection myErrors = new ValidationErrorCollection();

        if (myRegEx != null && myRegEx.Parent != null)
        {
            if (String.IsNullOrEmpty(myRegEx.InputString))
            {
                myErrors.Add(new ValidationError("InputString is required.", 101, false, "InputString"));
            }

            if (String.IsNullOrEmpty(myRegEx.RegExpression))
            {
                myErrors.Add(new ValidationError("RegExpression is required.", 102, false, "RegExpression"));
            }
        }

        return myErrors;
    }
}

    That is all we need to do for the validator class. The last thing we need to do is associate our validator with the activity. To do this we add the same using statement to our RegEx.cs activity (using System.Workflow.ComponentModel.Compiler) and we add an attribute to the top of the class.

    [ActivityValidator(typeof(RegExValidator))]
    public partial class RegEx : BaseActivity
    {
        …
    }

    That's it. If we now use the original sample workflow I created and blank out the InputString and RegExpression properties we will get several indications that there are errors. First in the designer it will be indicated directly on the activity.

    In the properties window it will also indicate each individual property that has an error.

    Finally the build will error and the error messages will be provided in the error window.

 

    That's all I have to say about validation. Obviously the validation can be much more complicated than just checking if the property has been provided, but this example hopefully provides a basic idea of how to get things going. Feel free to post a comment if you found this helpful and would like for me to dedicate more postings to the wonderful world of workflow. Now I'm off to go find a life since it appears that I am blogging on a Friday night. Unfortunately that's not something you can Google for!

Friday, 28 December 2007 20:54:49 (Central Standard Time, UTC-06:00) - Comments [0]
.Net 3.0 | Development | Windows Workflow
# Wednesday, 26 December 2007

    While having lunch today at Old Chicago with Nick we began discussing Windows Workflow a bit and I started to realize that even though I have been pounding away on a project for six months that is highly dependent on WF I have yet to write a single blog post on the topic. I found the need today to work on an activity that simply applies a regular expression against a string and returns the value that it finds. Since this is a fairly basic activity I thought I would go ahead and blog about the process of creating it.

First off you need to create a new activity:

 

Visual Studio 2008 will create the skeleton of an activity for us that inherits from SequenceActivity. For my purposes I need it to inherit from Activity, so I will go ahead and change this. The next thing we need to do is define a dependency property for our input string. To do this we do the following:

#region "Dependency Properties"

public static DependencyProperty InputStringProperty = DependencyProperty.Register("InputString", typeof(String), typeof(RegEx));

#endregion

What we are doing here is registering a dependency property called InputStringProperty that is bound to the property InputString (which we will create in a second); InputString is of type String, and its owner (our activity) is of type RegEx. Now we need to define the InputString property itself.

 

#region "Properties"

public string InputString
{
    get { return ((String)(base.GetValue(RegEx.InputStringProperty))); }
    set { base.SetValue(RegEx.InputStringProperty, value); }
}

#endregion

The layout of this property is fairly similar to what you should be used to, but instead of getting and setting a private member variable you get and set the dependency property. Piece of cake, right? The next thing we are going to do is define our output parameter the same way. The only thing we will do differently with this property is to put a ReadOnly attribute on it. This prevents the user from binding this property to something else, since the user should only be binding the properties of other activities to this property.

 

public static DependencyProperty OutputStringProperty = DependencyProperty.Register("OutputString", typeof(String), typeof(RegEx));

[ReadOnly(true)]
public string OutputString
{
    get { return ((String)(base.GetValue(RegEx.OutputStringProperty))); }
    set { base.SetValue(RegEx.OutputStringProperty, value); }
}

 

 

Though we are not complete yet, if you build this activity and drop it onto a test workflow you will see the following in your Visual Studio Properties window: a property called InputString that is bindable and a property called OutputString that is read-only.

 

 

The next thing we need to do is declare a property that will hold the regular expression for the user of our activity to input. This will not be a dependency property but a standard property, as it will not need binding support.

private string _regexpression;

public string RegExpression
{
    get { return _regexpression; }
    set { _regexpression = value; }
}

 

If you now drop this activity onto a test workflow you will see the following on the properties window.

 

 

We're almost done; all that is left is to implement the Execute method of our activity, which will actually evaluate our regular expression against the input string. First off, make sure you put using System.Text.RegularExpressions at the top of the class, as we will obviously be using the .Net regex features. Then we just have the following function left:

 

protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
{
    Regex myRegEx = new Regex(RegExpression, RegexOptions.IgnoreCase);
    OutputString = myRegEx.Match(InputString).Value;
    return ActivityExecutionStatus.Closed;
}

 

The Execute function is the meat of the activity where our processing actually happens. We first create a new Regex object from our RegExpression property, in this case passing the option to ignore case. We then evaluate the regular expression and place the resulting value into our OutputString property. Finally we return a Closed status, and that's it. Just to test it out I put together a basic workflow that consists of my activity with the InputString property set to "my email address is testperson@test.com and of course it isn't a real email" and the RegExpression property set to "\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*", which should pull out the email address. I then used a basic code activity to output the email address to the console.

 

 

There you have it: a workflow activity that evaluates a regular expression against a string. If an email address had not been found then a blank line would have been written to the console, but in our case we got testperson@test.com. Obviously in a more practical scenario you would not type the string directly into the InputString property; instead you would bind that property to a string property on an activity earlier in the workflow, but you get the idea. That's it for now, folks. If you enjoyed this posting or found it helpful and would like to see me post more on WF, feel free to post a comment.
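The regular-expression step can also be tried outside of the workflow runtime. Below is a small console sketch (the class and method names are mine, not part of the activity) that applies the same pattern the test workflow used, just as the Execute method does:

```csharp
using System;
using System.Text.RegularExpressions;

// Standalone sketch of what the activity's Execute method does:
// apply the pattern to the input and take the first match's value.
public static class RegExSketch
{
    public static string ExtractEmail(string input)
    {
        // Same email pattern used in the test workflow above.
        Regex myRegEx = new Regex(@"\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*",
                                  RegexOptions.IgnoreCase);
        // Match returns Match.Empty when nothing is found, so Value is "".
        return myRegEx.Match(input).Value;
    }

    public static void Main()
    {
        Console.WriteLine(ExtractEmail(
            "my email address is testperson@test.com and of course it isn't a real email"));
        // writes: testperson@test.com
    }
}
```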

Wednesday, 26 December 2007 21:39:10 (Central Standard Time, UTC-06:00) - Comments [3]
.Net 3.0 | Development | Windows Workflow
# Sunday, 25 November 2007

The other day I ran into the need to access a nested property on an object where the name and path of the property were stored in a string variable. For example, I had a mail object and wanted to access "From.DisplayName", which was stored in a string. If I were accessing just the From property I could have used something like this:

using System.Reflection;

PropertyInfo[] pi = myObject.GetType().GetProperties();

foreach (PropertyInfo item in pi)
{
    if (item.Name == "From")
    {
        object o = item.GetValue(myObject, null);
    }
}

 

In this example myObject is the mail object and we first get the type of it, and then get an array of the properties for that type. Finally we find the From property and get the value for that property on our object.

Unfortunately this doesn't work for nested properties, and I could not find a built-in way to do it, so I built a recursive function to handle getting the property and returning it. I then packaged it into a handy .Net 3.5 extension method. Let me know if you know of a built-in way to get a nested property; otherwise feel free to use the extension method I put together, though it probably needs a bit of refactoring before you put it into production.

using System;
using System.Reflection;

namespace Common.ExtensionMethods
{
    public static class TypeExtensionMethods
    {
        public static object GetNestedProperty(this Type t, Object o, String Property)
        {
            object myObject = null;
            PropertyInfo[] pi = t.GetProperties();

            if (Property.IndexOf(".") != -1)
            {
                foreach (PropertyInfo item in pi)
                {
                    if (item.Name == Property.Substring(0, Property.IndexOf(".")))
                    {
                        object tmpObj = item.GetValue(o, null);
                        myObject = tmpObj.GetType().GetNestedProperty(tmpObj, Property.Substring(Property.IndexOf(".") + 1));
                    }
                }
            }
            else
            {
                foreach (PropertyInfo item in pi)
                {
                    if (item.Name == Property)
                    {
                        myObject = item.GetValue(o, null);
                    }
                }
            }

            return myObject;
        }
    }
}
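To see the extension method in action, here is a small self-contained sketch. The Mail and Sender types, the Demo class, and the property values are made up for illustration, and the extension method body is a compact copy of the recursive walk above:

```csharp
using System;
using System.Reflection;
using Common.ExtensionMethods;

namespace Common.ExtensionMethods
{
    public static class TypeExtensionMethods
    {
        // Compact copy of the recursive walk above: split on the first "."
        // and recurse into the child object until a simple name remains.
        public static object GetNestedProperty(this Type t, object o, string property)
        {
            int dot = property.IndexOf(".");
            foreach (PropertyInfo item in t.GetProperties())
            {
                if (dot != -1 && item.Name == property.Substring(0, dot))
                {
                    object child = item.GetValue(o, null);
                    return child.GetType().GetNestedProperty(child, property.Substring(dot + 1));
                }
                if (dot == -1 && item.Name == property)
                {
                    return item.GetValue(o, null);
                }
            }
            return null;
        }
    }
}

// Hypothetical stand-ins for the mail object and its From address.
public class Sender { public string DisplayName { get; set; } }
public class Mail { public Sender From { get; set; } }

public static class Demo
{
    public static void Main()
    {
        Mail mail = new Mail { From = new Sender { DisplayName = "Test Person" } };
        object name = mail.GetType().GetNestedProperty(mail, "From.DisplayName");
        Console.WriteLine(name); // writes: Test Person
    }
}
```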

Sunday, 25 November 2007 10:03:26 (Central Standard Time, UTC-06:00) - Comments [1]
Development | Extension Methods
# Sunday, 28 October 2007

    I originally thought of this idea while at Professional Edge. We had a client that wanted to build a portal and resell the functionality it provided to other companies. The main functionality and content for the site would be the same for each customer, but styles, images, and logos would change for each company, allowing the company to portray the website as their own. Professional Edge closed before the project was signed off on, so I never got the chance to tackle a solution, but every once in a while the concept pops back into my head. I am sure this is a fairly common scenario, and most likely what I have done here has been done plenty of times before, but I thought I would attempt to put together a proof of concept. My initial plan was to dynamically generate the App_Themes data based on information from the database. Using this method the developer would build the site with the look and feel defined via css tags and skin ids. The end corporate user would then have a user interface to define the custom styles and images that apply to each css tag and skin id. If the user did not define a tag, the website would simply use a default theme defined by the developer.

    To do all this I have utilized the functionality of virtual path providers. A virtual path provider allows you to essentially take over when ASP.NET is looking for the file to serve to the user. In my particular case it is for .css and .skin files, so that whenever a request comes in for one of these files it is instead dynamically built from data in the database. Originally I had planned to associate a particular theme name with each portal's customizations. For example, on the pre-init the theme would be dynamically set based on some type of user information: either a login account, or possibly the web address used to access the site; basically any information that establishes the unique portal the visitor is attempting to reach. Unfortunately this plan won't work. Apparently ASP.NET requires the actual physical files for a theme to exist or else it throws an error. This was a bit of a disappointment, as I had hoped to parse the theme name in the virtual path provider to know what dynamic data to load. Since the path is the only data passed into the virtual path provider, it is the easiest and cleanest way to know what data to load. If the file being requested were ThemeABC.css I could load the styles for portal ABC, and if it were ThemeXYZ.css I could load the styles for portal XYZ.

    Even though this method didn't work, I was able to develop another method that does. I have created the file structure for the Default theme with dummy files, including Default.css and Default.skin. Every portal will essentially be using this theme as far as ASP.NET is concerned, but when a request comes in for Default.css or Default.skin I pass back a dynamically generated file. The decision about which styles and ids to load is based on a cookie that is set in the pre-init. This cookie is then read in the Open function of the virtual file class for the virtual path provider, and based on what the cookie says a decision is made as to which styles are handed back to the client.

    In the attached sample code there are two different themes that can be accessed, blue and red. Setting the cookie is handled in Default.aspx's OnPreInit. Since this is a proof of concept the actual string is hard coded, but it could just as easily be read from the address the request came in on, a user login, or a user setting. Then in the virtual path provider I handle any requests that come in for APP_THEME/DEFAULT/. Finally, in the virtual file class I read the theme name from the cookie and hand back the proper file. Since this is a proof of concept I have simply hard coded a few values to hand back, but you could easily query the database and return a much more dynamically generated file.
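As a rough sketch of the moving parts, a cut-down provider and virtual file might look like the following. The class names, the "Theme" cookie name, and the hard-coded styles are my assumptions for illustration, not the ones in the attached sample, and in practice the styles would come from the database:

```csharp
using System;
using System.Collections;
using System.IO;
using System.Text;
using System.Web;
using System.Web.Caching;
using System.Web.Hosting;

// Serves dynamically generated content whenever ASP.NET asks for a file
// under the Default theme; everything else falls through to the previous
// provider. Register it in Application_Start with
// HostingEnvironment.RegisterVirtualPathProvider(new ThemeVirtualPathProvider()).
public class ThemeVirtualPathProvider : VirtualPathProvider
{
    public override bool FileExists(string virtualPath)
    {
        return IsThemeFile(virtualPath) || Previous.FileExists(virtualPath);
    }

    public override VirtualFile GetFile(string virtualPath)
    {
        return IsThemeFile(virtualPath)
            ? (VirtualFile)new ThemeVirtualFile(virtualPath)
            : Previous.GetFile(virtualPath);
    }

    public override CacheDependency GetCacheDependency(string virtualPath,
        IEnumerable virtualPathDependencies, DateTime utcStart)
    {
        // No physical file backs our virtual files, so no cache dependency.
        return IsThemeFile(virtualPath)
            ? null
            : Previous.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
    }

    private static bool IsThemeFile(string virtualPath)
    {
        return virtualPath.ToUpper().Contains("APP_THEMES/DEFAULT/");
    }
}

public class ThemeVirtualFile : VirtualFile
{
    public ThemeVirtualFile(string virtualPath) : base(virtualPath) { }

    public override Stream Open()
    {
        // Read the theme name from the cookie set in OnPreInit and build
        // the stylesheet for that portal (hard coded here for the sketch).
        HttpCookie cookie = HttpContext.Current.Request.Cookies["Theme"];
        string color = (cookie != null && cookie.Value == "Red") ? "Red" : "Blue";
        string css = "body { background-color: " + color + "; }";
        return new MemoryStream(Encoding.UTF8.GetBytes(css));
    }
}
```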

    At this point hopefully you can see the numerous possibilities this allows for. If all the ok buttons on the entire site are defined with a skin id of btnOk, then you could allow the user to upload the image they want for their ok buttons, and instantly all the ok buttons on the site reflect what the user has defined. If you consistently use your css tags within your site, then the user could easily define custom colors, fonts, etc. for each one. This would allow the user to very easily customize the site to the look and feel that represents their company without having a developer make any changes.

    Feel free to check out the sample code at the link below and make any comments or suggestions. Since it is just a proof of concept it is a little rough and not that elegant, but it demonstrates the point.

Download the source code to the proof of concept. (VS2008 required)

    

    

Sunday, 28 October 2007 19:03:34 (Central Standard Time, UTC-06:00) - Comments [0]
ASP.NET | Development

About the author/Disclaimer

Disclaimer
The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

© Copyright 2017
David A. Osborn
DasBlog theme 'Business' created by Christoph De Baene (delarou)