November 07, 2004


I wrote earlier about having written a Bloglines OPML (Outline Processor Markup Language) extractor in Python. It was a fun little project, but someone took the topic more seriously and wrote a generic library to access the Bloglines API.

Go check out PyBloglines over at

The part of the code that fascinates me most is their use of expat, the fast XML parser for Python. I've never used it, but the syntax is so easy it only took me a few seconds to see what they did and realize how superior it was to what I did. Check out  this code from PyBloglines:

import xml.parsers.expat

class OpmlParser:

    def __init__(self):
        self.parser = xml.parsers.expat.ParserCreate()
        self.parser.StartElementHandler = self.start_element
        self.parser.EndElementHandler = self.end_element

    def parse(self, opml):
        self.feedlist = []
        # hand the document to expat; the handlers below fill feedlist
        self.parser.Parse(opml)
        return self.feedlist

    def start_element(self, name, attrs):
        if name == "outline":
            if attrs.has_key('title') and attrs.has_key('xmlUrl'):
                # Subscription is defined elsewhere in PyBloglines
                sub = Subscription()
                sub.title = attrs["title"]
                sub.htmlUrl = attrs["htmlUrl"]
                sub.type = attrs["type"]
                sub.xmlUrl = attrs["xmlUrl"]
                sub.bloglinesSubId = int(attrs["BloglinesSubId"])
                sub.bloglinesIgnore = int(attrs["BloglinesIgnore"])
                sub.bloglinesUnread = int(attrs["BloglinesUnread"])
                self.feedlist.append(sub)

    def end_element(self, name):
        pass
When done, a list of feeds is returned quite handily.
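For comparison, here is a self-contained sketch of the same expat pattern in modern Python 3 syntax. The bare-bones Subscription class and the sample OPML string are my own stand-ins, not PyBloglines code:

```python
import xml.parsers.expat

class Subscription:
    """Hypothetical stand-in for PyBloglines' Subscription class."""
    pass

class OpmlParser:
    def __init__(self):
        self.parser = xml.parsers.expat.ParserCreate()
        self.parser.StartElementHandler = self.start_element

    def parse(self, opml):
        self.feedlist = []
        self.parser.Parse(opml, True)  # True: this is the final (only) chunk
        return self.feedlist

    def start_element(self, name, attrs):
        # expat calls this for every element; keep only feed subscriptions
        if name == "outline" and "title" in attrs and "xmlUrl" in attrs:
            sub = Subscription()
            sub.title = attrs["title"]
            sub.xmlUrl = attrs["xmlUrl"]
            self.feedlist.append(sub)

opml = '<opml><body><outline title="A Blog" xmlUrl="http://example.com/rss"/></body></opml>'
print([s.title for s in OpmlParser().parse(opml)])  # → ['A Blog']
```

Note that `attrs.has_key(...)` in the original is the old Python 2 idiom; `"title" in attrs` is the modern equivalent.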

Posted by Nick Codignotto at 08:02 AM | Comments (0) | TrackBack

October 04, 2004

Reflections on Programming

A friend of mine started his own blog a few months ago, and I kept meaning to point it out. John's blog concentrates on "design techniques, project management, and the little things that make the day-to-day slog through coding fun, rather than a chore." He just moved from bloglines over to blogger.


As a point of note, I've switched over to bloglines for RSS aggregation. I still use NewsGator, but mainly to read my company's internal blogs. Bloglines has some cool features including blog recommendations, desktop alert clients, straightforward and fast page layout, and much more. The Feed and User directories are always interesting.


Posted by Nick Codignotto at 11:05 PM | TrackBack

August 13, 2004

SimpleBits, SimpleQuiz

I started reading the SimpleBits blog (Syndication links) to learn more about web design. The author has been running a rather interesting discussion and has modeled it as a quiz called SimpleQuiz. The latest entry is XVII.

In this quiz, he proposes a number of formats for markup and his readers discuss the pros and cons of each approach. The entries are easy to digest and I'm sure you'll learn a thing or two about HTML.

If you want to skip the original blog-entry format in which the questions were proposed and go right to the original problem statement and the conclusion, go here.


Posted by Nick Codignotto at 12:13 PM | TrackBack

August 08, 2004

Short but complete programs

A short but complete guide on how to write short but complete programs when asking for help in newsgroups...

By the way, Matt Reynolds' site (where I got that link) has consistently answered most of my questions over the past few weeks, through the existing Q&A recorded in its forums. Definitely worth a visit.

Matt also has a blog.


Posted by Nick Codignotto at 08:06 AM | TrackBack

July 16, 2004

On the way to partial classes, I nearly pulled out all of my hair...

I have a project written in Visual Studio .NET 2003 at home. Around midnight, I get the bright idea to take my two forms and convert them to the newer style of form, which uses partial classes to separate the designer-inserted code from the user code of a form class.


The first step was to remove Main() from my form. When you create a new project in Visual Studio .NET 2005, you get a Program.cs file which contains the global main. It looks like this:

#region Using directives

using System;
using System.Collections.Generic;
using System.Windows.Forms;

#endregion

namespace WindowsApplication1
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        static void Main()
        {
            Application.Run(new Form1());
        }
    }
}
The ApplicationModel class above is my experiment, not default code. The second step is to divide your form class into two sections. So, I generated a new project and used it as a reference. I noticed a few things:

  • The designer class doesn't show up in the IDE's file list. I saw in the .csproj file (a simple XML file) that the designer file is simply a dependent file of the main form file that you own.
  • The designer file has the definition for Dispose().
  • It is home to all of your control member variables.
  • It doesn't specify a base class (this is done by your main form class, though no error is generated if you re-specify it... this has some interesting implications).
  • Finally, the designer file declares the components IContainer variable.

So, in about 5 minutes I cut-and-pasted myself into a corner and was left with a form with no controls on it and multiple compilation errors.

I resolved the compilation errors and guess what? I still have a form with no controls on it.

I decide to re-do my entire form, using my old form as a reference. Note that it's about 12:30am and I'm getting tired.

Now I have a form with controls on it, but the designer code is inserted into my regular class file and not the designer file.


I decide to go to bed, leaving my code in ruin. This is almost as bad as going to bed mad at your spouse. I really wanted to make up with the code, but it left me on the couch and a fitful sleep it was.


Posted by Nick Codignotto at 08:24 AM | Comments (0)

July 15, 2004

if comparisons in properties

Carrying over a habit from C++, when I compare a variable against a constant, or against "value" in a property "set" method, I put “value” first, like so:

if (value == _myProp)

This habit prevents the following catastrophic typo:

if (_myProp = value)

For most types, this statement won't compile in C#. The assignment expression takes on the type of _myProp, while the if clause expects only a bool, so the line above would fail to compile if _myProp were an int. Seeing this for the first time, I was almost ready to switch back to the more attractive form:

if (_myProp == value)

Then I thought: what if _myProp is a bool? As you can expect, the line compiles just fine. Therefore, I'm back to the "safe" form of the if statement:

if (value == _myProp)


Posted by Nick Codignotto at 08:23 AM | Comments (0)

July 12, 2004

Mono 2.0

I see on the Mono web site that they plan on delivering Mono 2.0 around Q2 2005. Not coincidentally, they plan on providing updates to System.Xml, ASP.NET, and Windows Forms to match the .NET 2.0 API.

I'd love to experiment with writing assemblies in Windows and deploying them on MacOS X. Is that possible? I suppose as long as the referenced libraries are available (like Windows Forms), this should work right out of the box.

I've pretty much decided that my mapper program is going to be written in C# 2.0 running under the .NET Framework 2.0. This ties me to the "Whidbey" timeframe, but that's ok. I can be in beta for a very long time and get lots of feedback.

Getting back to Mono, I also see that Code Access Security (CAS) is missing. This basically means that all applications running on Mono require/have full trust capabilities.

If you want to read more about the Mono roadmap, go to the Mono Roadmap page, which has lots of interesting info.

Posted by Nick Codignotto at 12:00 AM | Comments (0) | TrackBack

July 10, 2004

Scrollable control fixed in 2.0 Framework

In my first .NET Annoyances post, I talked about how the ScrollableControl class was broken. There was no way to be notified when a scroll occurred. Well, this seems to have been remedied in the .NET Framework 2.0.

All I had to do was to write a delegate to handle their new Scroll event. The new Scroll event addresses my earlier concerns about knowing the type of scroll. They address this with a new ScrollEventArgs class, which has a Type property of type ScrollEventType.

Anyway, life is good now.

Posted by Nick Codignotto at 11:47 PM | Comments (0)

Microsoft Visual C# 2005 Express

I downloaded Microsoft's new Visual C# 2005 Express beta today. Here's a [very] brief overview of my experience.


You can get the distribution here:

My hope is that the beta won't mess with my existing installation of Visual Studio 2003. So far, the experience has been a pleasant one.

The installation has minimal steps, but it had some quirks. After installing the .NET Framework 2.0, I had to reboot. When my system restarted, the ISO image wasn't mounted, so the installation halted with an "Abort, Retry, Cancel"-like message. I mounted the volume and chose Retry, but that didn't work. I restarted the installation and it picked up right where it left off.

My version installed these components:

  • Microsoft .NET Framework 2.0 Beta
  • Visual C# 2005 Express edition Beta
  • Microsoft MSDN Express Library 2005 Beta
  • SQL Server 2005 Express Edition Beta

I'm writing a little Battlemap program in C# using VS 2003 and I'm going to try and code it up in the beta for a little while. If I send the build to anyone, they will have to install the .NET Framework 2.0 Beta 1 distribution.

Project Migration

Project migration from VS 2003 was a snap. In fact, a Wizard came up and asked me if I wanted to backup my project in case the upgrade didn't go well. I had no problems.


It's fast! Invoking the IDE is fast and editing is butter smooth.

I use a plugin called C# Refactory from XtremeSimplicity for refactoring in Visual Studio 2003. However, the refactoring support in the beta seems more solid. I guess that feeling is mainly based on the fact that the refactoring dialogs are invoked much more quickly.

There are numerous other editing enhancements that I'm sure to like, but I won't go into them here.


What can I say? The beta's form designer is light years ahead of where it was in VS 2003. The toolbox is organized better and gives you a better idea of where your components and controls are coming from.


Finally, MSDN help is integrated both locally and on the Internet. If a help item isn't found locally, it searches the Internet. It took way too long for this to happen.


The download is small (267MB) and it seems to work side by side with my regular development tools. I'll find it hard to go back to VS 2003.


Posted by Nick Codignotto at 06:06 PM | Comments (0) | TrackBack

July 06, 2004

.NET Annoyances #1

The Annoyance

I have a control that derives from System.Windows.Forms.ScrollableControl. Great. I write my control, which is a kind of square grid that extends beyond the Window’s bounds. Great. I scroll my control and see that the revealed areas are not repainted. Curious. I think to myself, oh, simple, I’ll just override the scroll event and invalidate my Window. I mean, a ScrollableControl has to have a bloody Scroll event, right?


There is no scroll event for a Scrollable Control. I thought I would inaugurate my new column with a gem like this. Providing a Scrollable Control that doesn’t have a Scroll event is like providing a custom control that can’t be customized. Or, to put it in everyday terms, it’s like going into a candy store with your son and leaving without buying him so much as a lollipop.

The Solution

I wanted my lollipop. As usual, Code Project came to the rescue. However, once I saw the solution, I kicked myself, since I should have thought of it myself. The key to the solution is overriding WndProc and simply handling the WM_HSCROLL, WM_VSCROLL, and SBM_SETSCROLLINFO messages. Here is the class that I found on Code Project, written by Martin Randall:

    public class ScrollableControlEx : System.Windows.Forms.ScrollableControl
    {
        private sealed class NativeMethods
        {
            public const int SBM_SETSCROLLINFO = 0x00E9;
            public const int WM_HSCROLL = 0x115;
            public const int WM_VSCROLL = 0x114;
        }

        public Point ScrollPosition
        {
            get { return new Point(-AutoScrollPosition.X, -AutoScrollPosition.Y); }
            set { AutoScrollPosition = value; }
        }

        public event EventHandler Scroll;

        protected virtual void OnScroll()
        {
            if (Scroll != null)
            {
                Scroll(this, EventArgs.Empty);
            }
        }

        protected override void WndProc(ref Message m)
        {
            base.WndProc(ref m);

            // Raise the Scroll event when a scroll-related message arrives
            if (m.Msg == NativeMethods.WM_HSCROLL ||
                m.Msg == NativeMethods.WM_VSCROLL ||
                m.Msg == NativeMethods.SBM_SETSCROLLINFO)
            {
                OnScroll();
            }
        }
    }

If your drawing is particularly expensive, you probably want to invalidate only the portion of the Window that gets exposed as a result of the scroll. The above code doesn’t do that since the Scroll event doesn’t contain the needed information. This information is contained in the low and high words of the WParam property of the Message class shown above. In addition, you’ll need the page size found in the SCROLLINFO structure (see the Win32 SDK).
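Extracting those words is plain bit masking. A quick sketch of the idea (in Python for brevity; the SB_THUMBTRACK constant mirrors the Win32 SB_* scroll codes):

```python
def split_words(wparam):
    """Split a 32-bit WPARAM into (low word, high word).

    For WM_HSCROLL/WM_VSCROLL, the low word is the scroll request code
    (SB_LINEUP, SB_THUMBTRACK, ...) and the high word is the thumb
    position (meaningful for the thumb-tracking codes).
    """
    lo = wparam & 0xFFFF
    hi = (wparam >> 16) & 0xFFFF
    return lo, hi

SB_THUMBTRACK = 5

# SB_THUMBTRACK with the thumb at position 120:
print(split_words((120 << 16) | SB_THUMBTRACK))  # → (5, 120)
```

In the C# handler, `m.WParam.ToInt32()` would supply the value to split.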


A more complete implementation of this extended class would provide this information to the Scroll event so you should be able to invalidate the proper portions of the display.


In my case, invalidating the Window was fast enough, so I didn’t bother. If this changes, I’ll write up the solution and post it.



Posted by Nick Codignotto at 11:04 PM | Comments (0) | TrackBack

June 09, 2004

James Avery invented the Internet

Now that I have your attention I’d like to talk about a pretty good article I just read in the new issue of MSDN Magazine. It’s called Ten Must-Have Tools Every Developer Should Download Now and it’s written by James Avery. I’m a regular reader of James’ .Avery blog. He’s the guy that does the “.NET Nightly” thing, where he spits out quick links to various tools and articles that interest him at the time. A few other bloggers that I read do this too. Sam Gentile does his own “New and Notable” thing and Mike Gunderloy has his Daily Grind.

His latest article does have a few annoying points, though. In three places (maybe more) he seems to believe that .NET invented something when it actually didn’t.

First, when talking about a cool regular expression tool called Regulator, he says, and I quote, “There is renewed interest in regular expressions because of the excellent support for them in .NET Framework.” Say what? REs have been around for a long time, and you either need them or you don’t. If you do, there were and still are plenty of good libraries out there. In the C++ world, we’ve always had Boost RegEx. Perhaps he’s talking about the Visual Basic programmer’s perspective? I’m not sure how VB programmers coped with regular expressions in the past. Yeh, perhaps he’s talking about them.

Second, he introduces us to a cool reflection tool called Lutz Roeder’s .NET Reflector. However, he goes and says, “The .NET Framework introduced the world to the concept of reflection which can be used to examine any .NET-based code, whether it is a single class or an entire assembly.” Are you joking? Java has had the Reflection API for years. They were the innovators there.
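He's right that reflection wasn't new: the concept of inspecting types and members at runtime exists in many languages. A trivial illustration in Python (the Feed class is just a made-up example):

```python
import inspect

class Feed:
    """A made-up example class to reflect over."""
    def __init__(self, url):
        self.url = url
    def refresh(self):
        pass

# Discover the methods of a type at runtime, without knowing them in advance
names = [n for n, _ in inspect.getmembers(Feed, inspect.isfunction)]
print(names)  # → ['__init__', 'refresh']
```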

James also talks about the .NET build tool NAnt and the .NET-based unit testing tool NUnit, and doesn’t even mention the original Java-based projects these bad boys were based on, namely Ant and JUnit respectively. Quite expectedly, he does mention Microsoft’s upcoming MSBuild technology (which competes on features with NAnt), which will ship with Visual Studio .NET 2005.

In any case, the tools he talks about are very cool indeed.

Posted by Nick Codignotto at 08:43 AM | Comments (5)

May 17, 2004

Extreme Programming vs. Interaction Design

Jon Udell writes about personas and points us back to a blog entry he wrote a while back about a well-covered topic (though he recognized this in the end by providing a link to a Google search on his entry’s title).

My favorite quote from the article was:

It’s extreme design versus extreme programming. I don’t buy either one completely. Call me an extreme anti-extremist. Call me a bundle of contradictions. There’s more than one way to do it. I’ll never sign up for a methodology, no matter whose. But I’ll learn what I can from all of them.

He points out an interesting contradiction between Extreme Programming and Interaction Design (“Extreme” design). The former school promotes iterative discovery. The latter promotes a complete up-front thought process about the problem domain, aimed at producing a complete specification.

The Google link points to an article on this very topic, in which Kent Beck (of XP fame) and Alan Cooper (of Interaction Design fame) defend their philosophies. The article is called Extreme Programming vs. Interaction Design.

In the article, Cooper explains how XP is a developer’s self-defense against problems in the organization. Rather than trying to solve the problem through the tenets of XP, Cooper would rather fix the organization. Cooper says,

I think XP has some really deep, deep tacit assumptions going on, and I think the deepest tacit assumption is that we have a significant organizational problem, but we can’t fix the organization.

Beck holds fast that all of the interaction analysis doesn’t have to hold up the production process. He contends that once the customer sees what has been created, they can tell you what’s right, what’s wrong, and what needs to happen.

Cooper’s main point is his assertion that the customer cannot give the correct answer. Cooper says,

This is one of the fundamental assumptions, I think, that underlies XP—that the requirements will be shifting—and XP is a tool for getting a grip on those shifting requirements and tracking them much more closely. Interaction design, on the other hand, is not a tool for getting a grip on shifting requirements; it is a tool for damping those shifting requirements by seeing through the inability to articulate problems and solutions on management’s side, and articulating them for the developers. It’s a much, much more efficient way, from a design point of view, from a business point of view, and from a development point of view.


I can’t help but raise the glove of Alan Cooper. Even though the two tried to kiss and make up in the end, they got back into a fight, and Kent Beck just seemed desperate to validate the code-right-away philosophy. Alan Cooper’s let’s-just-think-about-the-problem approach seemed much more rational to me.

I say that with one caveat, though. IMHO, the waterfall approach is fundamentally flawed, and my agreement with most of what Alan Cooper says does not in any way imply that I approve of long, discrete phases of development. It’s damaging for a developer to work on a recent bug in code he wrote months ago. There is information and understanding achieved only once development begins.

When Kent Beck and Alan Cooper were both charged with finding a way their philosophies could interoperate, it was Alan Cooper’s description that seemed to make more sense to me. Read the article and tell me if you disagree.

Posted by Nick Codignotto at 03:15 PM | Comments (1) | TrackBack

April 20, 2004

VS 2005 Visualizers for managed code only

I just found out that the cool visualizer feature of VS 2005 is for managed code only. This feature is extremely cool and I was hoping that it would have been available to unmanaged C++ projects. Duncan Mackenzie wrote up an entry called Scott Nonnenberg on Debugger Visualizers in VS 2005, where you can find a lot more information on debugging.

The visualizer feature allows you to write any Winform code you want in order to debug a variable. You basically select your watch variable, for instance, and select a visualizer like “Image”, “XML”, “spreadsheet”, etc.

Posted by Nick Codignotto at 07:48 AM

April 15, 2004

Change now, change later...

I’m going to refer once more to the article I mentioned in my last blog entry, the one where Bill Venners speaks with Luke Hohmann on Marchitects (ugh).

The article has a nice quote on the difference between an XP approach and a pragmatic approach:

Both the XP crowd and the Pragmatic Programmer crowd are making a bet. Either bet is a future option: the option to not spend the money now, or the option to spend the money now so that the cost of future change presumably is less. If the Pragmatic guys make the bet to spend some money now, and they’re right, we’re all happy. If they’re wrong, we may have some crud to take out or clean up. If the XP guys make the bet to not spend the money now, and they’re right, then we haven’t incurred the cost, and we’re happy. If they’re wrong, if it turns out you need to make the change, then you’ve got to go back and refactor and retrofit.

Personally, the pragmatic approach seems more natural to me. I’m a big fan of many XP concepts, but Do The Simplest Thing That Could Possibly Work just seems stupid to me in many situations.

I suppose if I could map out a project and all of the decisions I’d have to make while working on it, a good baseline for all of my decisions might be “Do The Simplest Thing That Could Possibly Work”. I kind of dig this baseline since it helps you start out in a mode where you aren’t in danger of over-designing a project. However, it’s just a baseline!

However, I’d treat this map like a graphic equalizer and push up the sophistication of the pieces that I felt were in danger of change. I’d do this in advance of any requirement. I think that philosophy violates XP a bit, but I think the violation is warranted, and I kind of fault XP for promoting it so strongly.

I mean, if I look at a piece of poop, I know it’s going to smell like crap. I don’t need to discover this by getting too close. Likewise, if I were writing a feature that depended on a few business rules that I felt could change, I’d design the rules tests in a way that new rules can easily be added.

Obviously if I’m inexperienced, especially in the problem domain, such risk-taking could bite me. I suppose you have to take that philosophy with a grain of salt. The rules behind some features may simply never change and if you don’t know this then you should probably “Do The Simplest Thing That Could Possibly Work”.

Posted by Nick Codignotto at 08:27 AM | Comments (0) | TrackBack

April 05, 2004

Indigo on Windows XP

Juval Löwy, in his blog post on why Windows Forms should be embraced despite the Longhorn buzz, confirms a rumor I’d been hearing lately: that “…we know Indigo will be available on Windows XP too.”


Posted by Nick Codignotto at 05:24 PM | Comments (0) | TrackBack

March 31, 2004

When not to use SOA

Jason Bloomberg talks about Why SOA may not be right for you.

Specifically, he states that it might not be a good fit in a heterogeneous IT environment. I’m not sure I understand that (notwithstanding the fact that I doubt any of these exist).

As I take each step in my journey to understand all of this stuff, I find myself at an interesting crossroads. Is SOA a fit for the issues I face each day or should I create a more specific solution? My gut tells me that SOA, at face value, is a manifestation of some pretty sound principles like abstraction, separating implementations, and de-emphasizing objects (and instead emphasizing getting some work done).

Plus, it seems like a lot of good technology is being developed to make this stuff a dream to work with.

So, on my to-do list is to learn about how SOA scales and what technologies exist that help out with that. If a single server implements my service, how do I distribute that to some idle worker machine?

Posted by Nick Codignotto at 04:13 PM | Comments (0) | TrackBack

March 29, 2004

Dialog Editor

Check out the dialog editor in Whidbey:


Notice a few things:

  • The spacings are marked with dotted lines. These appear when your control snaps to the correct position, say to the edge of a dialog or to another control, both vertically and horizontally.
  • The red baseline is to line up text in controls that have different heights! How cool is that?!

On that last point, I wonder how your own custom control can play nice in Whidbey and integrate with this feature. Perhaps you implement an interface or provide a baseline property that allows the environment to adapt.

Posted by Nick Codignotto at 09:06 PM | Comments (0) | TrackBack

March 25, 2004

VSLive! - Wireless access... good

Wireless access is good here. I’m getting 5.5Mbps now, but I was getting up to 16Mbps at times (I have a Wireless-G PCMCIA card). This is good news for blogging in the middle of sessions…

Posted by Nick Codignotto at 09:31 PM | Comments (0) | TrackBack

VSLive! - Richard Hale Shaw on better C# class design

So, I’m sitting here listening to Richard Hale Shaw talk about better class design, CLS compliance, and FxCop, among other things.

I guess I’m writing a brief overview of the talk, complete with my own comments on the content. Keep in mind that this is a stream-of-consciousness post, so expect sudden stops, spelling mistakes, typos, and lots of unfinished business…

CLS compliance is all about what you expose outside of your assembly, not the private interface.

- Create consistent usage: delegates 1st param sender, 2nd param e

use specific suffixes for certain types, such as “Attribute”, “Collection”, “Exception”. etc. MyException, etc.

some prefixes and suffixes to avoid, like Delegate, Enum, Before, After. I'm not sure I agree with this one.

_btnMine, _cboMembers, leading underscore and common type prefixes. Underscores show up first when you use intellisense. Possibly a plus. The acronyms help group the items within the Intellisense UI.

Base classes have some advantages in that you can version them and free the deriver from the problems that would otherwise follow. This is interesting and requires more investigation.

The reflection stuff is cool… and easy… lots of cool ways to exploit it.

Use static classes instead of creating a regular class with all static methods. This is new for C# 2.0. This is perhaps an alternative for cases where you might want to use sealed. Again, I need to re-read this stuff since I’m not sure when I’d use one or the other.

Richard went into quite a bit of detail when he described the differences between Value types and Reference types. He presented lots of his own guidelines and guidelines that he’s gathered. For instance, ValueTypes are useful when you have many of them, their lifetime is short, and they are relatively small, say 16 bytes or less. Boxing and unboxing is an issue with ValueTypes. His rule of thumb is to write most types as reference types, prototype, and look for hotspots. Only change to a ValueType when there is a performance bottleneck.

Brute-force casts are evil (though necessary for ValueTypes); everyone should prefer the as and is keywords. Static casts bypass the compiler and force cast exceptions at runtime, which is really bad.

Checked exceptions: not a good idea. They exist in Java, but they are enormously cumbersome and carry poor performance. The as keyword will simply return null, like C++’s dynamic_cast (when RTTI is enabled). You can check for it (efficiently) and move on…

Here’s an interesting one. Avoid properties that return arrays. They look like indexers, but aren’t indexers. The returned array is a copy and has no built-in semantics. I’ve never written an indexer property, though I know the basic idea, so some experimentation is in order. Do you see a common theme here?

Delegates vs. Event objects. Hmmm. I missed some of the details on his point here, when to avoid delegates. I’ll have to talk to him afterwards to resolve this.

Looks like Richard Hale Shaw went over, so things are closing down quite abruptly. I’ll be thinking about a lot of this stuff and I hope to post followup posts.

Comments on any of this are welcome.

Posted by Nick Codignotto at 08:59 PM | Comments (0) | TrackBack

March 24, 2004

VSLive! - VS 2005 - Windows Forms

Wow, incredible stuff. I attended a Windows Forms demo this morning, where many of the new features of Visual Studio 2005 (“Whidbey”) were demonstrated to spectacular effect.

Here are some quick notes I jotted down:

Dialog Editor

Holy crap. The dialog editor can snap controls to alignment. I knew this from the PDC bits. However, what’s new is the control auto-spacing, matching Windows design guidelines. If that weren’t enough, they provide guidelines for different control types so the fonts all align, even though the frames of the controls may differ. Are you kidding me?

You can edit common string properties without going to the properties menu. I could swear I saw this before, perhaps in Sheridan’s VBAssist from the old days? Anyway, cool feature.

Refactoring support pervades the designer, the code editor, the property editor, and the solution explorer. Changing class names ripples through everything.

For VB, the “My application” project designer was cool. Most of the settings you might make in an installer were part of the project properties, enabling seamless distribution later on, with ClickOnce, for instance.

Toolstrip control is cool.

SmartTags are cool. You can perform common actions on various controls that support it, which seems like most or all of the out of box components.

Partial Types allow you to put user and designer code in separate files. This has some profound benefits, allowing you to take advantage of some of their code generation features without worrying about blowing away your customizations.


I was blown away by the “Calculate Permissions” feature, where VS iterates over your code and suggests permissions that will be needed.

They even integrated this notion of required permissions into Intellisense, to the point where you are warned when you are about to call a method that requires permissions you aren’t set up for. Oh yeh, you can set up a debug sandbox, where you can simulate the permissions you plan on deploying with.

“PC in the wild”, allowing the user to acknowledge possible problems that can be detected at install time (like permissions needed, but not initially granted due to deployment config).

More to come…

Posted by Nick Codignotto at 07:21 PM | Comments (0) | TrackBack

March 18, 2004


James Avery blogs about his new Wiki dedicated to Service-Oriented Architectures.

The Wiki is called SOAWiki, which isn’t a real WikiWord in the purist sense, but you get the point.


Posted by Nick Codignotto at 11:24 AM | Comments (0) | TrackBack

March 13, 2004

Avoid the GAC

Chris Sells tells us to Avoid the GAC.

In my previous DeepSize post, my instructions included using gacutil, which, of course, installs DeepSize into the GAC. While my little application will never get updated by another party and possibly installed over a version you may have already installed (wishful thinking), I suppose I should reconsider my choice to use the GAC for DeepSize.

After doing some research, it looks like I have to put the explorer extension in the GAC. I tried various configurations, such as placing DeepSize.dll in a directory on my SYSTEM path and calling regasm. I understand that COM doesn't deal with the path, since an explicit reference to the location of a COM DLL is hard-wired into the registry when regsvr32.exe is invoked on the library. However, none of those scenarios worked.

I Googled for a few more articles on the topic of Explorer extensions and one link in particular corroborated the original Code Project article's instructions on placing the Explorer extension in the GAC. See Extending Explorer with Band Objects using .NET and Windows Forms (also on Code Project).

Furthermore, I'm beginning to understand the role of RegAsm.exe as opposed to regsvr32.exe. You can't register a .NET DLL using regsvr32.exe, since a .NET DLL doesn't export the old COM registration functions. RegAsm.exe works through COM Interop: it exposes .NET objects as COM objects, but not through the old-fashioned means of DllRegisterServer() and DllUnregisterServer().

I definitely dig having the Chris Sells piece in my mind. However, I'm also glad I've gained a little personal experience on when to do and not to do something.

Posted by Nick Codignotto at 07:41 AM | Comments (0) | TrackBack

March 11, 2004

ADO .NET Links

I’m mostly bookmarking here since I’m reading a lot into ADO.NET lately…

ADO.NET Articles on Know .NET:

Some migration tips from J2EE to ADO.NET:

From ADO to ADO.NET: A Gradual Approach (This is where I’m coming from…)

Some interesting thoughts on performance of ADO.NET/Oracle

Wintellect ADO.NET FAQ


A decent introduction to ADO.NET?

Code Project strikes again with an article on Using ADO.NET in a managed C++ application. I must admit, though, that I’m looking for the same thing for an unmanaged C++ application hosting the CLR.

And again, ADO.NET – Get the notification events from Managed Providers

Got any other good ADO.NET links?

Posted by Nick Codignotto at 04:22 AM | Comments (0) | TrackBack

Visual Studio Debugging Tips

Min Kwan Park put together a nice document on how to troubleshoot your VS7 debug sessions.

Over the years, I’ve collected a few links that have been useful to me. Here are a few:

Got any of your own links?

Posted by Nick Codignotto at 03:58 AM

March 10, 2004


Sometime last summer, I saw a cool article on the Code Project called Explorer column handler shell extension in C#.

While I had some desire to see MD5 checksums of certain production files, I was more excited by the idea this article gave me. What if I modified the code to provide a “deep size” column in Explorer that would show the total size of all files in a directory? To me, this is a very useful thing.

So, I went and wrote the thing and it officially became my first C# program… or first C# Class, since I just modified the existing assembly.

I have a pre-built version here, along with my source (which is a modified version of the Code Project original).

Download file

I didn’t find any kind of copyright in the source files, so I didn’t include anything in mine. I hope I didn’t miss anything there and I offer apologies in advance to the original author if I did.

To install it, I like to copy it to my Global Assembly Cache. Just execute:

gacutil /i DeepSize.dll

Then, you need to register the assembly as a COM server via:

RegAsm.exe DeepSize.dll (register the version in your GAC).

You may need to restart Explorer by logging out, then back in. Once you do, right-click on any Explorer file ListView (aside from My Computer and My Network Places) and select “More…”. “DeepSize” and “MD5 Hash” should be new choices.

I exclude certain directories from consideration, since it’s well known that they are big. These are directories “ending” in:

  • “WIN98”
  • “WINME”
  • “I386”
  • “WINNT”

When I included those, performance suffered and the other directories I was interested in had to wait until those monsters were calculated.
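
The core of DeepSize is just a recursive size total with those exclusions applied. A rough sketch of the same idea in Python (names and exclusion logic are mine, not the actual C# source):

```python
import os

# Directory-name suffixes to skip, per the exclusion list above.
EXCLUDED_SUFFIXES = ("WIN98", "WINME", "I386", "WINNT")

def deep_size(path):
    """Total the size of all files under path, recursively,
    skipping directories whose names end in a known-huge suffix."""
    total = 0
    for entry in os.scandir(path):
        if entry.is_dir(follow_symlinks=False):
            if not entry.name.upper().endswith(EXCLUDED_SUFFIXES):
                total += deep_size(entry.path)
        elif entry.is_file(follow_symlinks=False):
            total += entry.stat().st_size
    return total
```

The same walk could feed an MD5 column just as easily; only the per-file action changes.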

Feedback welcome. If you want the rest of the Visual Studio files, let me know and I’ll provide them.


Posted by Nick Codignotto at 11:55 PM

Copy and Paste Files from the Command Line

I’m one of those programmers that tries to use the keyboard and mouse equally. I do this mainly because I hate to clutter my desktop, Quick Launch bar and Start menu with a zillion icons.

I tend to write a lot of batch files. I have a directory called C:\utils, which holds interesting binaries I’ve collected over the years. I also have a C:\utils\bat directory with about 150 batch files I’ve written. Some of the common batch files are:

n — Starts my default text editor
eb — Edits a batch file in C:\utils\bat, without having to specify the path
set_env — Using Python and Win32all, sets an environment variable permanently, not just in my shell
desktop — Changes to my desktop folder at the command prompt: C:\Documents and Settings\ncody\desktop

I’m also a big fan of Cygwin, since I need to ssh and scp files to my web server almost daily. To copy files from my desktop to Cygwin, I generally type “explorer .” while in my Cygwin home directory and drag the file from my desktop to the new explorer instance. I close explorer and now have the file available.

I also have a bunch of batch files that switch me to important directories. All of these batch files use the most excellent pushd and popd commands, so I can move from directory to directory and always have the ability to go back up my current directory stack.

One welcome addition to this workflow is a new utility, written in C#, that allows you to copy files from one directory to another. You can read about it here.

All you need to do is say:

ezclip copy myfile.txt
cd some_other_directory
ezclip paste

Done. I’ve already been using this a ton; it’s a lot easier than navigating my complex directory tree in Explorer.

UPDATE: I noticed that wildcards, directories, and sub-directory traversal are not supported. It seems to me that this wouldn’t be hard to implement after looking at the source.

UPDATE #2: With a minor tweak, I improved the code. Unfortunately, you can no longer use the multi-file syntax (file1 file2 file3) with my method, but you can use wildcards. To do this, I simply changed this line:

string[] files = new string[args.Length - 1];
Array.Copy(args, 1, files, 0, files.Length);

To this:

string[] files = Directory.GetFiles(@".", args[1]);

To make this code production quality, of course, I shouldn’t have to specify “.” as the current directory. If args[1] specifies a directory, I should extract it and use that. Also, it wouldn’t be hard to accept multiple files as Gus Perez did originally, allowing each argument to be a single file or a filespec (myfile.txt *.log *.xml).
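
Accepting both wildcards and directory components isn’t much code. Here is a hedged sketch of that expansion step in Python (the real utility is C#; Directory.GetFiles would play the role that glob plays here):

```python
import glob
import os

def expand_filespecs(specs):
    """Expand each argument (a plain file, a wildcard pattern,
    or a pattern with a directory component) into a file list."""
    files = []
    for spec in specs:
        # glob handles literal names, wildcards, and directory
        # prefixes uniformly; keep only regular files.
        files.extend(m for m in glob.glob(spec) if os.path.isfile(m))
    return files
```

Each argument is expanded independently, so the original multi-file syntax and the wildcard syntax coexist.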

Posted by Nick Codignotto at 11:03 PM

March 09, 2004

Waiting for GaXAML

(No, not a blog entry about Samuel Beckett)

Rather, Chris Sells pointed out that Chris Anderson pointed out that Mike Harsh tells us about an interesting WinForms markup written by Joe Stegman.

Using Windows Forms Markup Language (WFML), you can use markup today!

That little tidbit opened up my browser to the entire site. What a kick ass site that is.

Posted by Nick Codignotto at 08:59 PM

March 04, 2004

More on .NET Code Access Security

I’ve been doing a lot of reading on .NET Code Access Security (CAS) lately, the new security model for .NET. CAS represents a stark contrast to traditional Windows permissions, which are based on the permissions of the user.

Most developers log into their machines as Administrators since that’s the only way to get anything done! For instance, it’s the only way you can install software, the only way you can adjust system-wide settings and, most unfortunately, the only way most Windows programs work properly! My kids have their own computer, an older machine I’ve been keeping healthy over the years. I’ll be damned if I can find a single kid’s game that works properly out of the box while running as a non-privileged user.

If you are a skilled Windows user, you may be aware that you can run a program as an Administrator even when your regular user account is non-privileged. The steps are documented in the online documentation for Windows XP Professional. Personally, I don’t have the guts to try running my day as a non-privileged user. I tried it a few times and I nearly pulled out my hair.

Despite the limitations imposed on non-privileged users, they still have full access to their profile and they can write to most areas of their hard drive (excluding areas owned by other users and the system areas.) Thus, under the Windows security model, any program that you launch as a non-privileged user can still do things that you or your system administrator may not want it to do.

A good example of a situation where a program should have a subset of a user’s permissions is a financial application that displays sensitive information to the user. The administrators may want to prevent the application from printing, writing to files, allowing itself to be screen captured, use the clipboard, etc.

Obviously a programmer can omit these features, but will the administrator believe them? Probably, but that’s another problem entirely ;-)

Microsoft seems to have nailed a far better approach to this problem. Under .NET CAS, the administrator can define rules of use on the enterprise level, the machine level, or the user level.

The rules are extensive! Here is a taste of the types of permissions you can grant or deny to an application:

  • Writing to a file, a directory, or a network share
  • Opening top-level windows, dialogs
  • Access to domain resources
  • Ability to establish network connections
  • Web access
  • Reflection
  • Printing
  • Use of the clipboard
  • and much more!

Applications are defined by membership in something called a code group. You can define a code group in a few ways: by a directory, by the presence of a hash, by the presence of a strong name, etc.

When an application starts, the security system looks at the zone the application was launched from, the directory the application started in, the presence of a hash, the presence of a strong name, etc. The security system gathers all of this “evidence” and determines the appropriate code group for the application. A set of permissions (called, you guessed it, a permission set) is associated with each code group. The permission set thus defines everything that the application is allowed to do.
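
The evidence-to-permissions pipeline is easier to picture as a toy model. The Python sketch below is purely illustrative: the group names, evidence keys, and permission names are all made up, and the real mechanism lives inside the .NET security system, nowhere near code like this.

```python
# Toy model: evidence -> matching code groups -> union of permission sets.
CODE_GROUPS = [
    # (code group name, membership test over the evidence, permission set)
    ("LocalIntranet", lambda ev: ev.get("zone") == "intranet",
     {"FileIO", "Printing", "UI", "Clipboard"}),
    ("Internet", lambda ev: ev.get("zone") == "internet",
     {"UI"}),
    ("StrongNamed", lambda ev: "strong_name" in ev,
     {"Reflection"}),
]

def resolve_permissions(evidence):
    """Gather the permission sets of every code group whose
    membership condition the collected evidence satisfies."""
    granted = set()
    for name, matches, permissions in CODE_GROUPS:
        if matches(evidence):
            granted |= permissions
    return granted
```

An app launched from the Internet zone would end up with only UI permission in this model, while a strong-named intranet app would get the union of both matching groups’ sets.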

If you have the .NET Framework installed, you can launch the .NET Configuration Utility (Control Panel | Administrative Tools | Microsoft .NET Framework 1.x Configuration) and look at the different types of evidence available. You can also look at the rich set of permissions that can be used.

The Code Project has a good article titled Code Access Security from the perspective of the Developer and Administrator.

This article kind of inspired this post. It explains what Code Access Security is and how developers and administrators should look at CAS.

I recommend that read in addition to the excellent Webcast by software legend Juval Löwy (as previously noted in this blog).

Finally, I watched an MSDN TV episode on CAS starring Matt Lyons.

Posted by Nick Codignotto at 10:12 PM | Comments (0) | TrackBack

March 03, 2004

.NET Code Access Security

I just got through Juval Löwy’s most excellent WebCast on .NET Code Access Security.

What can I say, it’s enlightening, it’s profound, it’s a WebCast that everyone interested in .NET should tune into.

The WebCast is long, about 90 minutes, and I had to listen to it in two sessions. I have a lot of notes that I plan on posting here.

The great thing about the presentation is that it didn’t merely go through the motions, it was more than a drag through the CAS API. It was a comprehensive guide to what CAS is and how it can be used. Juval gave some insightful examples that just made it all clear.

Posted by Nick Codignotto at 11:29 PM | Comments (0) | TrackBack

February 17, 2004

Avalon, Illustrator, Visual Studio ideas...

Chris Sells pointed out how Nathan Dunlap and Jonathan Russ created Splitter Bars in Avalon. Chris went on to describe how important XAML is going to be in getting designers and coders working more efficiently.

This hits a chord for me and represents something that has always excited me about Avalon. You can design something in Illustrator (or have it designed), and import it into your code.

I can see a great opportunity for an Adobe Illustrator plugin that allows a fine-grain of control over the export process (XAML markup, class attributes, etc.)

Or, better yet, an Illustrator plugin for Visual Studio! Amateur illustrators who are really developers (like me) would appreciate that side of the coin. A key feature of this integration would be, of course, the ability to let the visual evolve as the code evolves, as a separate entity that can be checked in and out of source control, etc.

The day is coming, indeed.

Posted by Nick Codignotto at 11:33 PM

February 04, 2004

More on Google Web API

I am now using a built-in feature of Movable Type to access the Google Web APIs. My last post seems kind of naive, but it's all good. I've had some problems getting it to work right, however, so stay tuned...

Posted by Nick Codignotto at 02:39 PM | Comments (0) | TrackBack

Google Web API's

I just stumbled across the Google Web APIs. Looks to be some very cool stuff. I hope to play with them soon.

The service is free and once you sign up, you’re assigned a license key that gets you 1000 queries a day.

It looks like you can issue a general search, duh, and get the results back in XML so you can traverse and process the results any way you’d like.

Second, you can get a binary cached representation of any URL, so long as Google’s crawlers visited the page.

Third, you can get spelling corrections, just like Google does, when you make a query. This is useful if you’ve mistyped a word or words in your query and want to know if an alternate spelling will yield the search results you were really looking for.

Finally, you get a lot of options for all of these query types. Most or all of these options are available through Google’s standard browser interface, but perhaps you can envision a better interface or an interface that conveniently compares results when various filters are applied.


Posted by Nick Codignotto at 02:30 PM

January 07, 2004

More on web services

On a note related to my earlier rant on free web services, I just found one by accident. Jason Nadal over at created a natural language translation web service. He announces “WinTranslate 1.0 with Source Code released” This software allows you to perform translations on your desktop. The source demonstrates how to create your own solutions. Way cool.

I have to start wondering how many hidden web services are lurking around the Internet and how to find them. Any suggestions?

Thanks Jason.

Posted by Nick Codignotto at 02:14 PM

December 17, 2003

Musings on the Semantic Web

People who regularly develop web services will not find much of interest in this post since, in all likelihood, they have already “seen the light”. I wanted to explore a few concepts here after reading an article on publicly available Web Services at Microsoft.

Tim Berners-Lee (you may recognize him as the inventor of the Web) once said that, “The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation.”

Before I knew what “Publicly Available Web Services” meant, I thought he was simply referring to XML markup, where the current generation of “dumb” web pages (based on plain HTML) would be enriched with context. The potential I saw back then was simply additional context for existing pages. Say a news site marked all of its headline copy (“President’s dog is found”) with special markup such as a “headline” XML element. Local features, entertainment news bites, movie reviews, etc., would all be similarly marked. The wishful part of me was hoping ads would be marked up, but we all know this will never happen. Once the markup was in place, one would be able to write a spider to troll over a site and extract the types of information they wanted.
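
To make that concrete, here is roughly what such a spider’s extraction step could look like, using Python’s expat parser and the hypothetical “headline” element from the example (the markup vocabulary is invented for illustration):

```python
import xml.parsers.expat

def extract_headlines(xml_text):
    """Collect the character data of every <headline> element
    from a page marked up with the hypothetical vocabulary."""
    headlines = []
    in_headline = False

    def start(name, attrs):
        nonlocal in_headline
        if name == "headline":
            in_headline = True

    def end(name):
        nonlocal in_headline
        if name == "headline":
            in_headline = False

    def chars(data):
        # Only keep text that falls inside a <headline> element.
        if in_headline:
            headlines.append(data)

    parser = xml.parsers.expat.ParserCreate()
    parser.StartElementHandler = start
    parser.EndElementHandler = end
    parser.CharacterDataHandler = chars
    parser.Parse(xml_text, True)
    return headlines
```

Swap the element name and you have a movie-review extractor, a local-features extractor, and so on.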

XML markup, as I originally conceived the “semantic web”, seems somewhat trite in the light of full-blown Web Services. Still, XML markup has its advantages. For instance, a developer may use the markup in ways not originally conceived by its designers. Now, more on that…

I can’t remember the first time I heard the phrase “Web Service”, but it was surely before the turn of the millennium. A whole industry was erected on top of the Web Service idea while I was sleeping in ActiveX land: tools like WebLogic, J2EE, WebSphere, etc. (I’ve never actually used these technologies, so please don’t be too critical of my ignorance if you comment on this post!) Companies started using the paradigm to further the reach of their empires, for it’s a very empowering technology.

My point is not so much the details behind these technologies, which ones are better, nor even what they are capable of. My point is that these technologies seem to be used primarily for proprietary development. Whether a company is facilitating Intranet development or dividing their public-storefront into multiple tiers, the majority of Web Services developed seem to be largely unavailable to the general public (i.e. the average application developer with a compiler and a dream).

Lots of signs are pointing to a change in this area. The MSDN article mentioned above kicked off this post so let me explore for a bit what it means to me. The MSDN article talks about four publicly available web services on their site. One service allows you to write a standalone program to get satellite images, another allows you to write your own MapQuest clone, another allows you to manage alerts, etc. The article goes on to say that Amazon and Google have started exposing some services to the public as well. That’s the trend I’m hopeful for and for which I see a huge potential.

As a developer, I’m used to an entire market of off-the-shelf components that I can use to build a user interface, access a new database technology, or even access Web Services. However, real power will be realized when a true public Web Service market is available for Internet-enabled applications. Is the state of the industry beyond infancy? I did a search on Google for “publicly available web services” and didn’t find much. Am I missing something?

In my head, the industry will have moved beyond infancy when there are as many public Web Service providers as there are tool and library vendors. Here are some ideas:

  • An application that leverages public recipe services, nutritional services, shopping services, and restaurant services to create diet plans and dining out recommendations that keep people in a healthy lifestyle.
  • An application that can leverage farms of computers for calculations that the application defines. You buy time on a massive matrix of machines and use them as you wish. The application can gather data on the Internet, possibly from other Web Services and use that data to provide a solution. There’s lots of proprietary activity in the grid computing sector, but I personally haven’t seen anything as generic and publicly available as I’m talking about here.
  • An application that combines “live” celestial data collected from the Hubble Space Telescope, Chandra, GLAST or whatever, to create an educational space exploration program for students. Currently, this data is purchased in database form, packaged on CD-ROM, and shipped out as a snapshot. If a new celestial event makes the news, the current generation of programs has no real way to present the new data alongside the user’s snapshot data.
  • An application that combines mortgage rate information, home availability, broker fees, neighborhood statistics, and lifestyle data on a regional or national level. Lots of small proprietary solutions exist in this area, but there is no opportunity for a third party to create a best-of-breed solution with user-defined algorithms.
  • Same thing for investments. A Financial Consultant might want to have a developer write her an application that combines publicly available financial data in ways that are unique to her style and not currently available anywhere else.
  • Etc.

Many services will never appear unless there is a business model to support them. I can’t see that being too huge a problem since tool and library vendors currently have a pay-for-use market that works. I’d be interested in hearing about any roadblocks in this area.

Posted by Nick Codignotto at 08:21 AM | Comments (1) | TrackBack

December 09, 2003

Learning C#

I was asked today to recommend a good book on C#. Harking back to my Windows and COM/ActiveX days, I always heed the words of Jeffrey Richter and Charles Petzold. These two pioneers authored such seminal works as Advanced Windows and Programming Windows, respectively. Recently, they’ve authored new works on .NET and C#. Jeffrey Richter wrote Applied .NET Framework Programming and Charles Petzold wrote Programming Windows with C#.

Neither of these sources is really meant to teach you C#. One suggestion would be to start at the source: Microsoft has the C# Language Reference online.

Another suggestion is to look at O’Reilly Associates. O’Reilly has proved an excellent source of quality books for many years. I haven’t read any of their books, but I’m seriously considering Programming C#, 3rd Edition by Jesse Liberty and C# in a Nutshell, 2nd Edition by Peter Drayton, Ben Albahari, Ted Neward.

O’Reilly has an entire section dedicated to .NET. They also sponsor an interesting C# Learning Lab. Other books on C# include C# Cookbook by Stephen Teilhet and Jay Hilyard, C# Essentials 2nd Edition by Ben Albahari, Peter Drayton, and Brad Merrill.

O’Reilly has an excellent resource for evaluating and reading books on line in their Safari Bookshelf . I have an account and I recommend looking into this great service.

I’d love to hear about some personal recommendations. Aside from the Microsoft Developer Network Web Site and Applied .NET Framework Programming, I haven’t read any of these books. I’m just as curious as anyone on where to start reading next…

Posted by Nick Codignotto at 08:00 AM | Comments (0)

December 08, 2003

C# Patterns

There has been a lot of discussion about applying Design Patterns to the C# language. I found many useful links on the C2 Wiki page entitled CsharpPatterns.

The point of this article is to explore how C# language features can be leveraged when implementing patterns. I’ll start with a few examples in C++ and Java before I describe some of the advances available in C#.

The Singleton Pattern, for instance, is commonly implemented in C++ as:

class Singleton {
public:
     static Singleton& getInstance() {
          static Singleton singleton;
          return singleton;
     }
private:
     Singleton() {}
};

The approach is very explicit in its aim to create the first instance of the singleton upon its first use (a call to getInstance()). This approach could have serious problems in a multithreaded environment, as two pieces of code on two different threads might be able to create two instances of the singleton. A simple workaround would be to instantiate your singletons before your threads start up…

In Java, many applications choose to use the double-checked locking mechanism as illustrated by the following example:

// Broken multithreaded version
// "Double-Checked Locking" idiom
class Foo {
   private Helper helper = null;
   public Helper getHelper() {
      if (helper == null) {
         synchronized(this) {
            if (helper == null)
               helper = new Helper();
         }
      }
      return helper;
   }
   // other functions and members...
}

You can plainly see that Java enables synchronization with the synchronized keyword. However, the article that the code was copied from explains how this approach is flawed. Read the article for the full discussion.

Consider this almost trivial implementation of the Singleton Pattern in C#:

// .NET Singleton
sealed class Singleton {
     private Singleton() {}
     public static readonly Singleton Instance = new Singleton();
}

This approach is also possible in Java, but there is a subtle difference between the two implementations. In Java, the singleton gets instantiated when the class is loaded. In C#, the singleton gets instantiated the first time the Instance member is accessed (much like a function-local static in C++): lazy instantiation.

In addition, the .NET Framework guarantees the thread-safe initialization of static members. This is a common task that’s finally automated and enforced by powers other than our own diligence. I welcome that.

The sealed class modifier is used to prevent Singleton subclassing, something the GoF describe as dubious and prone to error.
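
For comparison, a lazy, thread-safe singleton in Python takes a little more ceremony than the C# version, since there is no runtime guarantee about static initialization to lean on. A sketch using double-checked locking, which is safe here because the construction and the assignment happen under a real lock:

```python
import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # First check avoids taking the lock on the common path;
        # the second check, under the lock, prevents two threads
        # from both constructing an instance.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance
```

Every call to Singleton() then returns the same object, constructed on first use rather than at import time.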


Exploring the Singleton Design Pattern by Mark Townsend
The C2 Wiki

Posted by Nick Codignotto at 09:00 AM | Comments (0)

December 02, 2003

Feel the code

I decided to do all of my work using command-line tools and the TextPad editor. I came into programming using command-line tools and I suppose old habits die hard. Granted, the Visual Studio .NET IDE is terrific. I plan on using it once I’ve felt enough code, but naked code seems the best way to learn how everything works.

The first step was to get syntax highlighting for C# source working in TextPad. Although I’ve been aiming at naked coding without the IDE, a little lipstick goes a long way. TextPad has syntax highlighters for most languages. You can get a C# Syntax File on the TextPad web site. Once you download the file, you simply need to copy the file to a Sample subdirectory off your TextPad installation directory.

Launching the command-line tools from within TextPad was next on my list of required amenities. The main impetus for my effort in this area was the ability to double-click on a compiler error and be brought to the offending line of source. Yeh, my efforts are beginning to sound like I really miss the IDE. I could compile in a command-prompt, but I’m not a barbarian for pete’s sake.

I learned quite a bit about assemblies and the way they’re loaded by the C# compiler. My initial programs were simple, so the default core assemblies were all I needed to compile and go. However, as I started requiring the various managed DirectX components, I needed to find these on my hard drive. I did find them, but they were buried deep within my C:\Windows directory. I found the ones I needed and copied them to C:\SharedAssemblies. This made my command line simple. I added the /lib:C:\SharedAssemblies option to the C# compiler and was able to refer to the managed DirectX components via a simpler syntax, /r:DirectX.Drawing (or whatever), without re-specifying the entire path.

I suppose I could have utilized the GAC (Global Assembly Cache) but since the default install didn’t already install these there, I got scared and punted on the issue for now.

Posted by Nick Codignotto at 06:28 PM | Comments (1)

November 27, 2003

Image Loading Goodness

Happy Thanksgiving!

I took a crack at a few things this morning. First, I took some old C# code I wrote that traverses a directory structure and gets information on each file. Second, I used the System.Drawing library to get image information. Finally, I output all of the results.

The directory traversal turns out to be trivial. I implemented this as a simple recursive function as follows:

static int traverseFiles(FileAction action, DirectoryInfo dir)
{
     int numFiles = 0;
     FileInfo[] files = dir.GetFiles();
     foreach (FileInfo f in files)
          if (action.doAction(f))
               numFiles++;
     DirectoryInfo[] subdirs = dir.GetDirectories();
     foreach (DirectoryInfo d in subdirs)
          numFiles += traverseFiles(action, d);
     return numFiles;
}

The FileAction argument is my simplistic shot at making the loop generic. My default implementation tries to load the file into a Bitmap object. If this fails with a System.ArgumentException, doAction() returns false. Here is the implementation of doAction():

public bool doAction(FileInfo file)
{
     try
     {
          ImageInfo image = new ImageInfo(file.FullName);
          Console.WriteLine(file.FullName + " " + image.Width + "x" + image.Height);
          return true;
     }
     catch (System.ArgumentException)
     {
          return false;
     }
}

The ImageInfo class was lifted from AspHeute (AspToday, a German site I think). Thanks Christoph Wille. There is one error in their example, where the Width() wrapper returns the internal Bitmap() Height property.

The code for the sample I, uncreatively, named FindFiles.cs is available for your compiling pleasure.

Compiling can be done at the command prompt as follows:

csc FindFiles.cs

Posted by Nick Codignotto at 08:50 AM | Comments (0)